KR102005878B1 - Managing real-time handwriting recognition - Google Patents

Managing real-time handwriting recognition

Info

Publication number
KR102005878B1
Authority
KR
South Korea
Prior art keywords
strokes
handwriting
recognition
user
embodiments
Prior art date
Application number
KR1020187024261A
Other languages
Korean (ko)
Other versions
KR20180097790A (en)
Inventor
메이-쿤 지아
자네스 쥐. 돌핑
라이언 에스. 딕슨
칼 엠. 그로에세
카란 미스라
제롬 알. 벨레가르다
우엘리 마이어
Original Assignee
Apple Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US61/832,921 priority Critical
Priority to US61/832,908 priority
Priority to US201361832908P priority
Priority to US201361832942P priority
Priority to US201361832921P priority
Priority to US201361832934P priority
Priority to US61/832,942 priority
Priority to US61/832,934 priority
Priority to US14/290,935 priority
Priority to US14/290,945 priority
Priority to US14/290,935 priority patent/US9898187B2/en
Priority to US14/290,945 priority patent/US9465985B2/en
Application filed by Apple Inc.
Priority to US14/292,138 priority patent/US20140361983A1/en
Priority to US14/292,138 priority
Priority to US14/291,865 priority
Priority to US14/291,722 priority
Priority to PCT/US2014/040417 priority patent/WO2014200736A1/en
Priority to US14/291,865 priority patent/US9495620B2/en
Priority to US14/291,722 priority patent/US20140363082A1/en
Publication of KR20180097790A publication Critical patent/KR20180097790A/en
Application granted granted Critical
Publication of KR102005878B1 publication Critical patent/KR102005878B1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06K RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K9/00 Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K9/00852 Recognising whole cursive words
    • G06K9/00859 Recognising whole cursive words using word shape
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for entering handwritten data, e.g. gestures, text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06K RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K2209/00 Indexing scheme relating to methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K2209/01 Character recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06K RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K2209/00 Indexing scheme relating to methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K2209/01 Character recognition
    • G06K2209/011 Character recognition of Kanji, Hiragana or Katakana characters
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06K RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K2209/00 Indexing scheme relating to methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K2209/01 Character recognition
    • G06K2209/013 Character recognition of non-latin characters other than Kanji, Hiragana or Katakana characters

Abstract

Methods, systems, and computer-readable media related to techniques for providing handwriting input functionality on a user device are provided. A handwriting recognition module is trained to have a repertoire comprising multiple non-overlapping scripts and to recognize tens of thousands of characters using a single handwriting recognition model. The handwriting input module provides real-time, stroke-order-independent and stroke-direction-independent handwriting recognition for multi-character handwriting input. In particular, real-time, stroke-order-independent and stroke-direction-independent handwriting recognition is provided for multi-character or sentence-level Chinese handwriting recognition. User interfaces for providing the handwriting input functionality are also disclosed.

Description

{MANAGING REAL-TIME HANDWRITING RECOGNITION}

The present invention relates to providing handwriting input functionality on a computing device, and more particularly, to providing real-time, multi-script, stroke-order-independent handwriting recognition and input functionality on a computing device.

Handwriting input is an important alternative input method for computing devices with touch-sensitive surfaces (e.g., touch-sensitive display screens or touch pads). Numerous users, especially users in some Asian and Arabic-speaking countries, are accustomed to writing in cursive script and feel more comfortable writing by hand than typing on a keyboard.

In the case of certain logographic writing systems such as Chinese (Hanzi) and Japanese (Kanji), alternative phonetic input methods (e.g., Pinyin or Kana) are available, but such phonetic input methods are not adequate when the user does not know how to spell a logographic character phonetically and uses an inaccurate phonetic spelling for the logographic character. Thus, the ability to use handwriting input on a computing device becomes important for users who cannot pronounce the words of the relevant logographic writing system sufficiently well, or at all.

While handwriting input has gained some popularity in some regions of the world, improvements still need to be made. In particular, human handwriting is highly variable (e.g., in terms of stroke order, size, writing style, etc.), and high-quality handwriting recognition software is complex and requires extensive training. Thus, it has been a challenge to provide efficient real-time handwriting recognition on mobile devices with limited memory and computing resources.

Also, in today's multicultural world, users in many countries need to use multiple languages and often write in more than one script (for example, writing a message in Chinese that mentions a movie title in English). However, manually switching the recognition system to the desired script or language during writing is cumbersome and inefficient. In addition, the utility of conventional multi-script handwriting recognition techniques is severely limited, since extending a device's recognition capabilities to handle multiple scripts simultaneously greatly increases the demand on computing resources and the complexity of the recognition system.

In addition, conventional handwriting recognition techniques rely heavily on language-specific or script-specific properties to achieve recognition accuracy. Such properties are not easily transferable to other languages or scripts. Thus, adding handwriting capabilities for new languages or scripts is a difficult task that is not undertaken lightly by the vendors of devices and software. As a result, users of numerous languages have been left without an important alternative input method for their electronic devices.

Conventional user interfaces for providing handwriting input include an area for accepting handwriting input from a user and an area for displaying handwriting recognition results. For portable devices with a small form factor, significant improvements to the user interface are still needed to improve efficiency, accuracy, and the overall user experience.

This specification describes a technique for providing multi-script handwriting recognition using a universal recognizer. The universal recognizer is trained using a large multi-script corpus of writing samples for characters of different languages and scripts. The training of the universal recognizer is language-independent, script-independent, stroke-order-independent, and stroke-direction-independent. Thus, the same recognizer can recognize handwriting input in mixed languages and mixed scripts without requiring manual switching between input languages during use. In addition, the universal recognizer is light enough to be deployed as a standalone module on mobile devices, enabling handwriting input in the different languages and scripts used in different regions of the world.

In addition, because the universal recognizer is trained on stroke-order-independent and stroke-direction-independent, spatially-derived features and does not require any temporal or sequence information at the stroke level, the universal recognizer provides a number of additional features and advantages over conventional time-based recognition methods (e.g., recognition methods based on a Hidden Markov Model (HMM)). For example, the user is allowed to enter the strokes of one or more characters, phrases, and sentences in any order and still obtain the same recognition results. Thus, non-sequential multi-character input, and non-sequential corrections (e.g., additions or rewrites) of previously entered characters, are now possible.
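
By way of illustration only (this sketch is not part of the patent), the following Python fragment shows one way such stroke-order- and stroke-direction-independent input images could be produced; the function name strokes_to_input_image, the 48x48 bitmap size, and the point-sampling approach are assumptions made for the example.

    from typing import List, Tuple
    import numpy as np

    Stroke = List[Tuple[float, float]]  # sampled (x, y) touch points of one stroke

    def strokes_to_input_image(strokes: List[Stroke], size: int = 48) -> np.ndarray:
        """Rasterize strokes into a normalized bitmap; only spatial occupancy is kept."""
        xs = [x for s in strokes for x, _ in s]
        ys = [y for s in strokes for _, y in s]
        min_x, min_y = min(xs), min(ys)
        scale = max(max(xs) - min_x, max(ys) - min_y) or 1.0
        image = np.zeros((size, size), dtype=np.float32)
        for stroke in strokes:              # the iteration order of strokes does not matter
            for x, y in stroke:             # the direction within a stroke does not matter
                col = int((x - min_x) / scale * (size - 1))
                row = int((y - min_y) / scale * (size - 1))
                image[row, col] = 1.0
        return image

Because only spatial occupancy survives, drawing the same strokes in a different order or direction yields an identical image, which is what allows a downstream recognizer trained on such images to ignore stroke order and direction.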

In addition, the universal recognizer is used for real-time handwriting recognition, where temporal information for each stroke is available and is optionally used to disambiguate or segment the handwriting input before character recognition is performed by the universal recognizer. The real-time, stroke-order-independent recognition described herein differs from conventional offline recognition methods (e.g., optical character recognition (OCR)) and can provide better performance than conventional offline recognition methods. In addition, the universal recognizer described herein can handle the high variability of individual writing habits (e.g., variations in speed, tempo, stroke order, stroke direction, stroke continuity, etc.) without explicitly embedding the distinguishing features of the different variations in the recognition system, thereby reducing the overall complexity of the recognition system.

As described herein, in some embodiments, temporally-derived stroke distribution information is selectively reintroduced into the universal recognizer to disambiguate between similar-looking recognition outputs for the same input image and to increase recognition accuracy. Because the temporally-derived features and the spatially-derived features are obtained through separate training processes and are combined in the handwriting recognition model only after the separate training is completed, the reintroduction of the temporally-derived features does not impair the stroke-order and stroke-direction independence of the universal recognizer. In addition, the temporally-derived stroke distribution information is carefully designed to capture distinguishing temporal characteristics of similar-looking characters without relying on explicit knowledge of the differences in stroke order of the similar-looking characters.

User interfaces for providing handwriting input functionality are also described herein.

In some embodiments, a method for providing multi-script handwriting recognition includes: training a multi-script handwriting recognition model on spatially-derived features of a multi-script training corpus, the multi-script training corpus comprising respective handwriting samples corresponding to characters of at least three non-overlapping scripts; and providing real-time handwriting recognition of a user's handwriting input using the multi-script handwriting recognition model trained on the spatially-derived features of the multi-script training corpus.

In some embodiments, a method for providing multi-script handwriting recognition includes: receiving a multi-script handwriting recognition model, the multi-script recognition model having been trained on spatially-derived features of a multi-script training corpus, the multi-script training corpus comprising respective handwriting samples corresponding to characters of at least three non-overlapping scripts; receiving a handwriting input from a user, the handwriting input including one or more handwriting strokes provided on a touch-sensitive surface coupled to a user device; and, in response to receiving the handwriting input, providing one or more handwriting recognition results to the user in real time based on the multi-script handwriting recognition model trained on the spatially-derived features of the multi-script training corpus.

In some embodiments, a method for providing real-time handwriting recognition includes: receiving a plurality of handwriting strokes from a user, the plurality of handwriting strokes corresponding to a handwritten character; generating an input image based on the plurality of handwriting strokes; performing real-time recognition of the handwritten character by providing the input image to a handwriting recognition model, the handwriting recognition model providing stroke-order-independent handwriting recognition; and displaying, in real time upon receiving the plurality of handwriting strokes, an identical first output character regardless of the respective order in which the plurality of handwriting strokes were received from the user.

In some embodiments, the method includes: receiving a second plurality of handwriting strokes from the user, the second plurality of handwriting strokes corresponding to a second handwritten character; generating a second input image based on the second plurality of handwriting strokes; performing real-time recognition of the second handwritten character by providing the second input image to the handwriting recognition model; and displaying, in real time upon receiving the second plurality of handwriting strokes, a second output character corresponding to the second plurality of handwriting strokes, wherein the first output character and the second output character are displayed simultaneously in a spatial sequence regardless of the respective order in which the first plurality of handwriting strokes and the second plurality of handwriting strokes were provided by the user.

In some embodiments, the second plurality of handwriting strokes spatially follows the first plurality of handwriting strokes along a default writing direction of a handwriting input interface of the user device, the second output character follows the first output character in the spatial sequence along the default writing direction, and the method includes: receiving a third handwriting stroke from the user to revise the handwritten character, the third handwriting stroke being received temporally after the first plurality of handwriting strokes and the second plurality of handwriting strokes; in response to receiving the third handwriting stroke, assigning the third handwriting stroke to the same recognition unit as the first plurality of handwriting strokes based on the relative proximity of the third handwriting stroke to the first plurality of handwriting strokes; generating a revised input image based on the first plurality of handwriting strokes and the third handwriting stroke; performing real-time recognition of the revised handwritten character by providing the revised input image to the handwriting recognition model; and, in response to receiving the third handwriting stroke, displaying a third output character corresponding to the revised input image, the third output character replacing the first output character and being displayed simultaneously with the second output character in the spatial sequence along the default writing direction.
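
As a hypothetical illustration of the proximity-based assignment described above (not the patent's actual algorithm), a late-arriving stroke could be attached to the existing recognition unit whose bounding box lies closest to it; the helper names below are invented for this sketch, and it assumes at least one recognition unit already exists.

    def assign_stroke(new_stroke, recognition_units):
        """Attach new_stroke (a list of (x, y) points) to the spatially closest
        recognition unit (each unit is a list of strokes)."""
        def bbox(points):
            xs, ys = zip(*points)
            return min(xs), min(ys), max(xs), max(ys)

        def gap(a, b):  # distance between two bounding boxes (0 if they overlap)
            dx = max(a[0] - b[2], b[0] - a[2], 0.0)
            dy = max(a[1] - b[3], b[1] - a[3], 0.0)
            return (dx ** 2 + dy ** 2) ** 0.5

        sb = bbox(new_stroke)
        nearest = min(recognition_units,
                      key=lambda unit: gap(sb, bbox([p for s in unit for p in s])))
        nearest.append(new_stroke)   # the unit's input image is then regenerated
        return nearest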

In some embodiments, the method includes: receiving a delete input from the user while the third output character and the second output character are simultaneously displayed as a recognition result in a candidate display area of the handwriting input interface; and, in response to the delete input, deleting the second output character from the recognition result while maintaining the third output character in the recognition result.

In some embodiments, the method includes: rendering each of the first plurality of handwriting strokes, the second plurality of handwriting strokes, and the third handwriting stroke in real time in a handwriting input area of the handwriting input interface as the handwriting stroke is provided by the user; and, in response to receiving the delete input, deleting the respective renderings of the second plurality of handwriting strokes from the handwriting input area while maintaining the respective renderings of the first plurality of handwriting strokes and the third handwriting stroke in the handwriting input area.

In some embodiments, a method for providing real-time handwriting recognition includes: receiving a handwriting input from a user, the handwriting input including one or more handwriting strokes provided in a handwriting input area of a handwriting input interface; identifying a plurality of output characters for the handwriting input based on a handwriting recognition model; dividing the plurality of output characters into two or more categories based on a predetermined classification criterion; displaying respective output characters in a first category of the two or more categories in an initial view of a candidate display area of the handwriting input interface, the initial view of the candidate display area being provided concurrently with an affordance for invoking an extended view of the candidate display area; receiving a user input selecting the affordance for invoking the extended view; and, in response to the user input, displaying, in the extended view of the candidate display area, the respective output characters in the first category and respective output characters in at least a second category of the two or more categories that were not previously displayed in the initial view of the candidate display area.
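
A minimal sketch of this categorized presentation, assuming a simple classification criterion (membership in the user's locale character set) that stands in for whatever predetermined criterion an implementation might use; the function and variable names, and the four-candidate initial view, are hypothetical.

    def build_candidate_views(candidates, user_locale_charset):
        """candidates: list of (character, score) pairs from the recognizer."""
        common, rare = [], []
        for char, score in sorted(candidates, key=lambda c: -c[1]):
            (common if char in user_locale_charset else rare).append(char)
        initial_view = common[:4]         # shown next to the "expand" affordance
        expanded_view = common + rare     # revealed when the affordance is selected
        return initial_view, expanded_view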

In some embodiments, a method for providing real-time handwriting recognition includes: receiving a handwriting input from a user, the handwriting input including a plurality of handwriting strokes provided in a handwriting input area of a handwriting input interface; recognizing a plurality of output characters from the handwriting input based on a handwriting recognition model, the output characters including at least a first emoji character and at least a first character from a script of a natural human language; and displaying, in a candidate display area of the handwriting input interface, a recognition result including the first emoji character and the first character from the script of the natural human language.

In some embodiments, a method for providing handwriting recognition includes: receiving a handwriting input from a user, the handwriting input including a plurality of handwriting strokes provided on a touch-sensitive surface coupled to a device; rendering the plurality of handwriting strokes in real time in a handwriting input area of a handwriting input interface; receiving one of a pinch gesture input and an expand gesture input over the plurality of handwriting strokes; upon receiving the pinch gesture input, generating a first recognition result based on the plurality of handwriting strokes by treating the plurality of handwriting strokes as a single recognition unit; upon receiving the expand gesture input, generating a second recognition result based on the plurality of handwriting strokes by treating the plurality of handwriting strokes as two separate recognition units pulled apart by the expand gesture input; and, upon generating the first recognition result or the second recognition result, displaying the generated recognition result in a candidate display area of the handwriting input interface.
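
The following hypothetical sketch illustrates how a pinch or expand gesture could change the segmentation of the accumulated strokes into recognition units; the gesture representation and the split heuristic are assumptions made for the example, not the patent's implementation.

    def segment_by_gesture(strokes, gesture):
        """strokes: list of strokes (each a list of (x, y) points).
        gesture: {'kind': 'pinch'} or {'kind': 'expand', 'split_x': x between the contacts}."""
        if gesture["kind"] == "pinch":
            units = [strokes]                               # treat all strokes as one recognition unit
        else:                                               # 'expand': split at the gap between contacts
            left = [s for s in strokes if max(x for x, _ in s) <= gesture["split_x"]]
            right = [s for s in strokes if s not in left]
            units = [left, right]                           # two separate recognition units
        return units

Each resulting unit would then be rasterized (for example, with a routine like strokes_to_input_image above) and recognized separately, producing the first or second recognition result described in the preceding paragraph.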

In some embodiments, a method for providing handwriting recognition includes: receiving a handwriting input from a user, the handwriting input including a plurality of handwriting strokes provided in a handwriting input area of a handwriting input interface; identifying a plurality of recognition units from the plurality of handwriting strokes, each recognition unit comprising a respective subset of the plurality of handwriting strokes; generating a multi-character recognition result including respective characters recognized from the plurality of recognition units; displaying the multi-character recognition result in a candidate display area of the handwriting input interface; receiving a delete input from the user while the multi-character recognition result is displayed in the candidate display area; and, in response to receiving the delete input, removing an end character from the multi-character recognition result displayed in the candidate display area.
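
A minimal, hypothetical sketch of this character-by-character deletion: the delete input removes only the end character and the strokes of its recognition unit, rather than the entire handwriting input; all names are invented for the example.

    def on_delete_input(recognition_units, candidate_text):
        """Remove the end character of the multi-character result and its strokes."""
        if recognition_units and candidate_text:
            recognition_units.pop()               # strokes belonging to the end character
            candidate_text = candidate_text[:-1]  # drop the end character from the candidate
        return recognition_units, candidate_text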

In some embodiments, a method for providing real-time handwriting recognition includes: determining an orientation of a device; providing a handwriting input interface on the device in a horizontal input mode in accordance with the device being in a first orientation, wherein a respective line of handwriting input entered in the horizontal input mode is divided into one or more respective recognition units along a horizontal writing direction; and providing the handwriting input interface on the device in a vertical input mode in accordance with the device being in a second orientation, wherein a respective line of handwriting input entered in the vertical input mode is divided into one or more respective recognition units along a vertical writing direction.
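
For illustration, a hedged sketch of orientation-dependent segmentation; the mapping of landscape orientation to the horizontal input mode, the fixed gap threshold, and the function names are assumptions made for the example.

    def segment_line(strokes, orientation):
        """Split a line of ink into recognition units along the writing direction
        implied by the device orientation."""
        axis = 0 if orientation == "landscape" else 1        # x-axis vs y-axis
        ordered = sorted(strokes, key=lambda s: min(p[axis] for p in s))
        units, gap_threshold = [], 20.0                       # hypothetical gap, in points
        for stroke in ordered:
            start = min(p[axis] for p in stroke)
            if units and start - max(p[axis] for s in units[-1] for p in s) < gap_threshold:
                units[-1].append(stroke)                      # close enough: same recognition unit
            else:
                units.append([stroke])                        # start a new recognition unit
        return units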

In some embodiments, a method for providing real-time handwriting recognition includes: receiving a handwriting input from a user, the handwriting input including a plurality of handwriting strokes provided on a touch-sensitive surface coupled to a device; rendering the plurality of handwriting strokes in a handwriting input area of a handwriting input interface; dividing the plurality of handwriting strokes into two or more recognition units, each recognition unit comprising a respective subset of the plurality of handwriting strokes; receiving an edit request from the user; visually distinguishing the two or more recognition units in the handwriting input area in response to the edit request; and providing means for individually deleting each of the two or more recognition units from the handwriting input area.

In some embodiments, a method for providing real-time handwriting recognition includes: receiving a first handwriting input from a user, the first handwriting input including a plurality of handwriting strokes, the plurality of handwriting strokes forming a plurality of recognition units distributed along a respective writing direction associated with a handwriting input area; rendering each of the plurality of handwriting strokes in the handwriting input area as the handwriting stroke is provided by the user; starting a respective fading process for each of the plurality of recognition units after the recognition unit is fully rendered, wherein, during the respective fading process, the rendering of the recognition unit in the first handwriting input fades progressively; receiving a second handwriting input from the user over a region of the handwriting input area occupied by a faded recognition unit of the plurality of recognition units; and, in response to receiving the second handwriting input: rendering the second handwriting input in the handwriting input area; and clearing all of the faded recognition units from the handwriting input area.
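
A hypothetical sketch of the fading and implicit-confirmation behavior; the data structures, the bounding-box overlap test, and the use of a single top candidate are simplifications invented for the example.

    import time

    def bbox(stroke):
        xs, ys = zip(*stroke)
        return min(xs), min(ys), max(xs), max(ys)

    def overlaps(stroke, unit):
        sx0, sy0, sx1, sy1 = bbox(stroke)
        ux0, uy0, ux1, uy1 = bbox([p for s in unit["strokes"] for p in s])
        return sx0 <= ux1 and ux0 <= sx1 and sy0 <= uy1 and uy0 <= sy1

    def on_unit_rendered(unit):
        unit["fading_since"] = time.monotonic()   # start this unit's fading process

    def on_new_stroke(stroke, units, top_candidate, committed_text):
        hit = any(u.get("fading_since") and overlaps(stroke, u) for u in units)
        if hit:
            committed_text += top_candidate       # implicit confirmation of the shown result
            units.clear()                         # clear the faded ink from the input area
        units.append({"strokes": [stroke]})       # render and track the new input
        return committed_text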

In some embodiments, a method for providing handwriting recognition includes: separately training a set of spatially-derived features and a set of temporally-derived features of a handwriting recognition model, the set of spatially-derived features being trained on a corpus of images, wherein each image is an image of a handwriting sample of a respective character of an output character set, and the set of temporally-derived features being trained on a corpus of stroke-distribution profiles, wherein each stroke-distribution profile characterizes the spatial distribution of a plurality of strokes in a handwriting sample for a respective character of the output character set; combining the set of spatially-derived features and the set of temporally-derived features in the handwriting recognition model; and providing real-time handwriting recognition of a user's handwriting input using the handwriting recognition model.
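
The separate-then-combine training idea can be sketched as follows. This is only an illustration under stated assumptions: scikit-learn logistic-regression classifiers stand in for the actual feature extractors, the score combination is a simple sum, and both corpora are assumed to cover the same output character set so that the class orderings of the two classifiers match.

    import numpy as np
    from sklearn.linear_model import LogisticRegression  # stand-in for the real networks

    def train_handwriting_model(images, image_labels, profiles, profile_labels):
        # images:   (n, d1) flattened character images     -> spatially-derived features
        # profiles: (m, d2) stroke-distribution profiles   -> temporally-derived features
        spatial = LogisticRegression(max_iter=1000).fit(images, image_labels)
        temporal = LogisticRegression(max_iter=1000).fit(profiles, profile_labels)

        def recognize(image, profile):
            # The two branches are trained separately and combined only here,
            # so the temporal branch never sees stroke order or direction.
            scores = (spatial.predict_proba([image])[0] +
                      temporal.predict_proba([profile])[0])
            return spatial.classes_[int(np.argmax(scores))]

        return recognize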

The details of one or more embodiments of the subject matter described herein are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages of the subject matter will become apparent from the description, the drawings, and the claims.

FIG. 1 is a block diagram illustrating a portable multifunction device having a touch-sensitive display in accordance with some embodiments.
FIG. 2 illustrates a portable multifunction device having a touch-sensitive display in accordance with some embodiments.
FIG. 3 is a block diagram of an exemplary multifunction device having a display and a touch-sensitive surface in accordance with some embodiments.
FIG. 4 illustrates an exemplary user interface for a multifunction device having a touch-sensitive surface that is separate from the display in accordance with some embodiments.
FIG. 5 is a block diagram of an operating environment of a handwriting input system in accordance with some embodiments.
FIG. 6 is a block diagram of a multi-script handwriting recognition model in accordance with some embodiments.
FIG. 7 is a flow chart of an exemplary process for training a multi-script handwriting recognition model in accordance with some embodiments.
FIGS. 8A and 8B illustrate exemplary user interfaces showing real-time multi-script handwriting recognition and input on a portable multifunction device in accordance with some embodiments.
FIGS. 9A and 9B are flow charts of an exemplary process for providing real-time multi-script handwriting recognition and input on a portable multifunction device.
FIGS. 10A-10C are flow charts of an exemplary process for providing real-time, stroke-order-independent handwriting recognition and input on a portable multifunction device in accordance with some embodiments.
FIGS. 11A-11K illustrate exemplary user interfaces for selectively displaying recognition results of one category in a normal view of a candidate display area and selectively displaying recognition results of different categories in an extended view of the candidate display area, in accordance with some embodiments.
FIGS. 12A and 12B are flow charts of an exemplary process for selectively displaying recognition results of one category in a normal view of a candidate display area and selectively displaying recognition results of different categories in an extended view of the candidate display area, in accordance with some embodiments.
FIGS. 13A-13E illustrate exemplary user interfaces for entering emoji characters via handwriting input in accordance with some embodiments.
FIG. 14 is a flow chart of an exemplary process for entering emoji characters via handwriting input in accordance with some embodiments.
FIGS. 15A-15K illustrate exemplary user interfaces for communicating to a handwriting input module, using a pinch or expand gesture, the manner in which a currently accumulated handwriting input is to be divided into one or more recognition units, in accordance with some embodiments.
FIGS. 16A and 16B are flow charts of an exemplary process for communicating to a handwriting input module, using a pinch or expand gesture, the manner in which a currently accumulated handwriting input is to be divided into one or more recognition units, in accordance with some embodiments.
FIGS. 17A-17H illustrate exemplary user interfaces for providing character-by-character deletion of a user's handwriting input in accordance with some embodiments.
FIGS. 18A and 18B are flow charts of an exemplary process for providing character-by-character deletion of a user's handwriting input in accordance with some embodiments.
FIGS. 19A-19F illustrate exemplary user interfaces for switching between a vertical writing mode and a horizontal writing mode in accordance with some embodiments.
FIGS. 20A-20C are flow charts of an exemplary process for switching between a vertical writing mode and a horizontal writing mode in accordance with some embodiments.
FIGS. 21A-21H illustrate exemplary user interfaces for providing means for displaying and selectively deleting respective recognition units identified in a user's handwriting input, in accordance with some embodiments.
FIGS. 22A and 22B are flow charts of an exemplary process for providing means for displaying and selectively deleting respective recognition units identified in a user's handwriting input, in accordance with some embodiments.
FIGS. 23A-23L illustrate exemplary user interfaces for using a new handwriting input provided over an existing handwriting input in a handwriting input area as an implicit confirmation input for entering a recognition result displayed for the existing handwriting input, in accordance with some embodiments.
FIGS. 24A and 24B are flow charts of an exemplary process for using a new handwriting input provided over an existing handwriting input in a handwriting input area as an implicit confirmation input for entering a recognition result displayed for the existing handwriting input, in accordance with some embodiments.
FIGS. 25A and 25B are flow charts of an exemplary process for incorporating temporally-derived stroke distribution information into a handwriting recognition model based on spatially-derived features without compromising the stroke-order and stroke-direction independence of the handwriting recognition model, in accordance with some embodiments.
FIG. 26 is a block diagram illustrating the separate training and subsequent integration of spatially-derived features and temporally-derived features of an exemplary handwriting recognition system in accordance with some embodiments.
FIG. 27 is a block diagram illustrating an exemplary method for computing a stroke distribution profile of a character.
Like numbers refer to corresponding parts throughout the drawings.

Many electronic devices have graphical user interfaces with soft keyboards for character input. On some electronic devices, the user may also install or enable a handwriting input interface that allows the user to enter characters via handwriting on a touch-sensitive surface or touch-sensitive display screen associated with the device. Conventional handwriting recognition input methods and user interfaces have a number of problems and disadvantages. For example:

In general, conventional handwriting input functionality is enabled on a per-language or per-script basis. Each additional input language requires the installation of a separate handwriting recognition model that occupies separate storage space and memory. Little synergy is obtained from combining handwriting recognition models for different languages, and mixed-language or mixed-script handwriting recognition has conventionally been very time-consuming due to the complex disambiguation process involved.

In addition, conventional handwriting recognition systems rely heavily on language-specific or script-specific characteristics for character recognition. Recognition of mixed-language handwriting input has had insufficient accuracy, and the available combinations of recognized languages are very limited. Most systems have required the user to manually specify the desired language-specific handwriting recognizer before providing handwriting input in each non-default language or script.

Many existing real-time handwriting recognition models require temporal or sequence information at the stroke level, and therefore cope poorly with the high variability in the ways characters can be written (e.g., high variability in the shape, length, tempo, segmentation, sequence, and direction of strokes due to different writing styles and personal habits). Some systems also require users to comply with strict spatial and temporal criteria when providing handwriting input (e.g., through built-in assumptions about the size, sequence, and time frame of each character input). Deviations from these criteria have resulted in inaccurate recognition results that are difficult to correct.

Currently, most real-time handwriting input interfaces allow the user to enter only a few characters at a time. The input of long phrases or sentences must be broken down into short segments that are entered separately. Such segmented input not only increases the cognitive burden on the user to maintain the flow of composition, but also makes it difficult for the user to correct or revise previously entered characters or phrases.

The embodiments described below address these and related problems.

FIGS. 1-4 below provide a description of exemplary devices. FIGS. 5, 6, 26, and 27 illustrate exemplary handwriting recognition and input systems. FIGS. 8A and 8B, 11A-11K, 13A-13E, 15A-15K, 17A-17H, 19A-19F, 21A-21H, and 23A-23L illustrate exemplary user interfaces for handwriting recognition and input. FIGS. 7, 9A and 9B, 10A-10C, 12A and 12B, 14, 16A and 16B, 18A and 18B, 20A-20C, 22A and 22B, 24A and 24B, and 25A and 25B are flow charts illustrating methods of enabling handwriting recognition and input on user devices, including training handwriting recognition models, providing real-time handwriting recognition results, providing means for entering and revising handwriting input, and entering the recognition results as text input. The user interfaces in FIGS. 8A and 8B, 11A-11K, 13A-13E, 15A-15K, 17A-17H, 19A-19F, 21A-21H, and 23A-23L are used to illustrate the processes in FIGS. 7, 9A and 9B, 10A-10C, 12A and 12B, 14, 16A and 16B, 18A and 18B, 20A-20C, 22A and 22B, 24A and 24B, and 25A and 25B.

Exemplary devices

Reference will now be made in detail to the embodiments, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be apparent to those skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, components, circuits, and networks have not been described in detail so as not to unnecessarily obscure aspects of the embodiments.

It is also to be understood that although the terms "first," "second," etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, without departing from the scope of the present invention, a first contact may be referred to as a second contact, and similarly, a second contact may be referred to as a first contact. Although both the first contact and the second contact are contacts, they are not the same contact.

The terminology used herein to describe the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the description of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It is also to be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It is further understood that the terms "include," "including," "comprise," and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

As used herein, the term "if" may be construed to mean "when" or "upon" or "in response to determining" or "in response to detecting," depending on the context. Similarly, the phrase "if it is determined" or "if [a stated condition or event] is detected" may be construed to mean "upon determining" or "in response to determining" or "upon detecting [the stated condition or event]" or "in response to detecting [the stated condition or event]," depending on the context.

Embodiments of electronic devices, user interfaces for such devices, and associated processes for using such devices are described. In some embodiments, the device is a portable communication device, such as a mobile phone, that also contains other functions, such as PDA and/or music player functions. Exemplary embodiments of portable multifunction devices include the iPhone (registered trademark), iPod Touch (registered trademark), and iPad (registered trademark) devices from Apple Inc. of Cupertino, California. Other portable electronic devices, such as laptops or tablet computers with touch-sensitive surfaces (e.g., touch screen displays and/or touch pads), may also be used. It should also be understood that, in some embodiments, the device is not a portable communication device, but is a desktop computer with a touch-sensitive surface (e.g., a touch screen display and/or a touchpad).

In the following discussion, electronic devices including a display and a touch sensitive surface are described. However, it should be understood that the electronic device may include one or more other physical user-interface devices, such as a physical keyboard, a mouse and / or a joystick.

The device typically supports a variety of applications, such as one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disk authoring application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, a photo management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application, and/or a digital video player application.

Various applications that may be executed on the device may use at least one generic physical user-interface device, such as a touch sensitive surface. The one or more functions of the touch sensitive surface as well as the corresponding information displayed on the device may be adjusted and / or may be changed from one application to the next and / or within the individual application. In this way, the general physical architecture of the device (such as the touch sensitive surface) can support a variety of applications with intuitive and clear user interfaces to the user.

Attention is now directed to embodiments of portable devices having touch-sensitive displays. FIG. 1 is a block diagram illustrating a portable multifunction device 100 having a touch-sensitive display 112 in accordance with some embodiments. The touch-sensitive display 112 is sometimes called a "touch screen" for convenience, and may also be known as or called a touch-sensitive display system. The device 100 may include a memory 102 (which may include one or more computer-readable storage media), a memory controller 122, one or more processing units (CPUs) 120, a peripherals interface 118, RF circuitry 108, audio circuitry 110, a speaker 111, a microphone 113, an input/output (I/O) subsystem 106, other input or control devices 116, and an external port 124. The device 100 may include one or more optical sensors 164. These components may communicate over one or more communication buses or signal lines 103.

It should be understood that device 100 is only one example of a portable multifunction device, and that device 100 may have more or fewer components than shown, may combine two or more components, or may have a different configuration or arrangement of the components. The various components shown in FIG. 1 may be implemented in hardware, software, or a combination of both hardware and software, including one or more signal processing and/or application-specific integrated circuits.

Memory 102 may include a high speed random access memory and may also include non-volatile memory such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state memory devices. Access to the memory 102 by other components of the device 100, such as the CPU 120 and the peripheral interface 118, may be controlled by the memory controller 122.

The peripheral interface 118 may be used to couple the input and output peripherals of the device to the CPU 120 and the memory 102. One or more of the processors 120 drives or executes various software programs and / or sets of instructions stored in the memory 102 to perform various functions for the device 100 and process the data.

In some embodiments, peripheral interface 118, CPU 120, and memory controller 122 may be implemented on a single chip, such as chip 104. In some other embodiments, they may be implemented on separate chips.

A radio frequency (RF) circuit 108 receives and transmits RF signals, also referred to as electromagnetic signals. The RF circuit 108 converts electrical signals to and from electromagnetic signals and communicates with the communication networks and other communication devices via electromagnetic signals.

The audio circuitry 110, the speaker 111 and the microphone 113 provide an audio interface between the user and the device 100. The audio circuit 110 receives audio data from the peripheral interface 118, converts the audio data into an electrical signal, and transmits the electrical signal to the speaker 111. The speaker 111 converts the electric signal into a sound wave that can be heard by a person. The audio circuit 110 also receives electrical signals converted from sound waves by the microphone 113. Audio circuitry 110 converts electrical signals to audio data and transmits audio data to peripheral interface 118 for processing. The audio data may be retrieved from and / or transmitted to memory 102 and / or RF circuitry 108 by peripheral interface 118. In some embodiments, the audio circuitry 110 also includes a headset jack (e.g., 212 of FIG. 2).

The I/O subsystem 106 couples input/output peripherals on the device 100, such as the touch screen 112 and other input control devices 116, to the peripheral interface 118. The I/O subsystem 106 may include a display controller 156 and one or more input controllers 160 for other input or control devices. The one or more input controllers 160 receive/send electrical signals from/to other input or control devices 116. The other input control devices 116 may include physical buttons (e.g., push buttons, rocker buttons, etc.), dials, slider switches, joysticks, click wheels, and so forth. In some alternative embodiments, the input controller(s) 160 may be coupled to (or not coupled to) any of the following: a keyboard, an infrared port, a USB port, and a pointer device such as a mouse. One or more buttons (e.g., 208 in FIG. 2) may include an up/down button for volume control of the speaker 111 and/or the microphone 113. One or more buttons may include a push button (e.g., 206 in FIG. 2).

The touch sensitive display 112 provides an input interface and an output interface between the device and the user. The display controller 156 receives and / or transmits electrical signals to / from the touch screen 112. The touch screen 112 displays a visual output to the user. The visual output may include graphics, text, icons, video, and any combination thereof (collectively referred to as "graphics"). In some embodiments, some or all of the visual output may correspond to user-interface objects.

The touch screen 112 has a touch-sensitive surface, sensor, or set of sensors that accept input from the user based on haptic and/or tactile contact. The touch screen 112 and the display controller 156 (along with any associated modules and/or sets of instructions in the memory 102) detect contact (and any movement or breaking of the contact) on the touch screen 112 and convert the detected contact into interaction with user-interface objects (e.g., one or more soft keys, icons, web pages, or images) that are displayed on the touch screen 112. In an exemplary embodiment, a point of contact between the touch screen 112 and the user corresponds to a finger of the user.

The touch screen 112 may utilize LCD (liquid crystal display) technology, LPD (light emitting polymer display) technology, or LED (light emitting diode) technology, although other display technologies may be used in other embodiments. The touch screen 112 and the display controller 156 may detect contact and any movement or breaking thereof using any of a plurality of touch sensing technologies now known or later developed, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with the touch screen 112. In an exemplary embodiment, projected mutual capacitance sensing technology is used, such as that found in the iPhone (registered trademark), iPod Touch (registered trademark), and iPad (registered trademark) from Apple Inc. of Cupertino, California.

The touch screen 112 may have a video resolution greater than 100 dpi. In some embodiments, the touch screen has a video resolution of approximately 160 dpi. The user may contact the touch screen 112 using any suitable object or accessory, such as a stylus, a finger, and so forth. In some embodiments, the user interface is designed to work primarily with finger-based contacts and gestures, which may be less precise than stylus-based input due to the larger contact area of a finger on the touch screen. In some embodiments, the device translates the approximate finger-based input into a precise pointer/cursor position or command for performing the actions desired by the user. Handwriting input may be provided on the touch screen 112 through the location and movement of a finger-based or stylus-based contact. In some embodiments, the touch screen 112 renders the finger-based or stylus-based input as instant visual feedback of the current handwriting input, and provides a visual effect that mimics writing on a piece of paper with a writing instrument (e.g., a pen).

In some embodiments, in addition to the touch screen, the device 100 may include a touch pad (not shown) for activating or deactivating certain functions. In some embodiments, the touchpad is a touch sensitive area of a device that does not display a visual output, unlike a touch screen. The touch pad may be a touch sensitive surface different from the touch screen 112 or an extension of the touch sensitive surface formed by the touch screen.

The device 100 also includes a power system 162 for powering various components. Power system 162 may include one or more power supplies (e.g., a battery, an alternating current (AC)), a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator (LEDs), and any other components associated with the generation, management, and distribution of power within portable devices.

The device 100 may also include one or more optical sensors 164. FIG. 1 illustrates an optical sensor coupled to an optical sensor controller 158 in an I / O subsystem 106. The light sensor 164 may comprise a charge-coupled device (CCD) or complementary metal-oxide semiconductor (CMOS) phototransistors. The light sensor 164 receives light from the ambient environment projected through one or more lenses and converts the light into data representative of the image. The optical sensor 164, along with an imaging module 143 (also referred to as a camera module), can capture still images or video.

The device 100 may also include one or more proximity sensors 166. FIG. 1 illustrates a proximity sensor 166 coupled to the peripheral interface 118. Alternatively, the proximity sensor 166 may be coupled to the input controller 160 in the I/O subsystem 106. In some embodiments, the proximity sensor turns off and disables the touch screen 112 when the multifunction device is placed near the user's ear (e.g., when the user is making a phone call).

The device 100 may also include one or more accelerometers 168. FIG. 1 shows an accelerometer 168 coupled to the peripheral interface 118. Alternatively, the accelerometer 168 may be coupled to the input controller 160 in the I/O subsystem 106. In some embodiments, information is displayed on the touch screen display in a portrait view or a landscape view based on an analysis of data received from the one or more accelerometers. The device 100 may include, in addition to the accelerometer(s) 168, a magnetometer (not shown) and a GPS (or GLONASS or other global navigation system) receiver (not shown) for obtaining information about the location and orientation (e.g., portrait or landscape) of the device.

In some embodiments, the software components stored in the memory 102 include an operating system 126, a communication module (or set of instructions) 128, a contact/motion module (or set of instructions) 130, a graphics module (or set of instructions) 132, a text input module (or set of instructions) 134, a GPS positioning module (or set of instructions) 135, and applications (or sets of instructions) 136. Also, in some embodiments, the memory 102 stores a handwriting input module 157, as shown in FIGS. 1 and 3. The handwriting input module 157 includes a handwriting recognition model and provides handwriting recognition and input functionality to the user of the device 100 (or the device 300). Further details of the handwriting input module 157 are provided in connection with FIGS. 5-27 and the accompanying descriptions.

The operating system 126 (e.g., Darwin, RTXC, Linux, UNIX, OS X, WINDOWS, or an embedded operating system such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and facilitates communication between the various hardware and software components.

The communication module 128 facilitates communication with other devices via one or more external ports 124 and also includes various software components for handling data received by the RF circuitry 108 and/or the external port 124. The external port 124 (e.g., Universal Serial Bus (USB), FIREWIRE, etc.) is adapted for coupling to other devices either directly or indirectly over a network.

The contact/motion module 130 may detect contact with the touch screen 112 (together with the display controller 156) and with other touch-sensitive devices (e.g., a touchpad or a physical click wheel). The contact/motion module 130 includes various software components for performing operations related to the detection of contact, such as determining whether contact has occurred (e.g., detecting a finger-down event), determining whether there is movement of the contact and tracking the movement across the touch-sensitive surface (e.g., detecting one or more finger-dragging events), and determining whether the contact has ceased (e.g., detecting a finger-up event or a break in contact). The contact/motion module 130 receives contact data from the touch-sensitive surface. Determining movement of the point of contact, which is represented by a series of contact data, may include determining the speed (magnitude), velocity (magnitude and direction), and/or acceleration (change in magnitude and/or direction) of the point of contact. These operations may be applied to single contacts (e.g., one-finger contacts) or to multiple simultaneous contacts (e.g., "multi-touch"/multiple-finger contacts). In some embodiments, the contact/motion module 130 and the display controller 156 detect contact on a touchpad.

The contact/motion module 130 may detect a gesture input by a user. Different gestures on the touch-sensitive surface have different contact patterns. Thus, a gesture may be detected by detecting a particular contact pattern. For example, detecting a finger tap gesture includes detecting a finger-down event followed by detecting a finger-up (lift-off) event at the same position (or substantially the same position) as the finger-down event (e.g., at the position of an icon). As another example, detecting a finger swipe gesture on the touch-sensitive surface includes detecting a finger-down event, followed by detecting one or more finger-dragging events, and subsequently followed by detecting a finger-up (lift-off) event.

The contact/motion module 130 also registers contacts and movements made within the handwriting input area of the handwriting input interface displayed on the touch-sensitive display screen 112 (or within the area of the touchpad 355 corresponding to the handwriting input area displayed on the display 340 in FIG. 3). In some embodiments, the initial finger-down event, the final finger-up event, and the locations, motion path, and intensities associated with the contact at any time between them are recorded as a handwriting stroke. Based on such information, the handwriting strokes can be rendered on the display as feedback to the user input. In addition, one or more input images may be generated based on the handwriting strokes registered by the contact/motion module 130.
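
As a hypothetical illustration of the per-stroke data described above (the class and field names are invented for this sketch, not taken from the patent):

    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class HandwritingStroke:
        points: List[Tuple[float, float]] = field(default_factory=list)  # contact locations
        timestamps: List[float] = field(default_factory=list)            # time of each sample
        intensities: List[float] = field(default_factory=list)           # contact intensity

        def add_sample(self, x: float, y: float, t: float, intensity: float = 1.0) -> None:
            """Record one sample between the finger-down and finger-up events."""
            self.points.append((x, y))
            self.timestamps.append(t)
            self.intensities.append(intensity)

A list of such strokes can be rendered for visual feedback and rasterized into the input images consumed by the recognizer.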

The graphics module 132 includes various well-known software components for rendering and displaying graphics on the touch screen 112 or other display, including components for changing the intensity of the graphics that are displayed. As used herein, the term "graphics" includes any object that can be displayed to the user, including, without limitation, text, web pages, icons (e.g., user-interface objects including soft keys), digital images, videos, animations, and the like.

In some embodiments, graphics module 132 stores data representing graphics to be used. A corresponding code may be assigned to each graphic. Graphics module 132 receives one or more codes from an application or the like that specify graphics to be displayed along with coordinate data and other graphical property data as needed, and then generates screen image data and outputs it to display controller 156 .

The text input module 134, which may be a component of the graphics module 132, provides soft keyboards for entering text in various applications (e.g., contacts 137, email 140, IM 141, browser 147, and/or any other application that needs text input). In some embodiments, the handwriting input module 157 is selectively invoked through the user interface of the text input module 134, e.g., via a keyboard selection affordance. In some embodiments, the same or a similar keyboard selection affordance is also provided in the handwriting input interface for invoking the text input module 134.

The GPS module 135 determines the location of the device and provides this information for use in various applications (e.g., to the phone 138 for use in location-based dialing, to the camera 143 as photo/video metadata, and to applications that provide location-based services, such as weather widgets, local yellow page widgets, and map/navigation widgets).

Applications 136 may include the following modules (or sets of instructions), or a subset or superset thereof: a contact module 137 (sometimes called an address book or contact list); a telephone module 138; a video conferencing module 139; an email client module 140; an instant messaging (IM) module 141; a workout support module 142; a camera module 143 for still and/or video images; an image management module 144; a browser module 147; a calendar module 148; widget modules 149, which may include one or more of: a weather widget 149-1, a stocks widget 149-2, a calculator widget 149-3, an alarm clock widget 149-4, a dictionary widget 149-5, other widgets obtained by the user, and user-generated widgets 149-6; a widget generator module 150 for creating the user-generated widgets 149-6; a search module 151; a video and music player module 152, which may be composed of a video player module and a music player module; a memo module 153; a map module 154; and/or an online video module 155.

Examples of other applications 136 that may be stored in the memory 102 include other word processing applications, other image editing applications, drawing applications, presentation applications, JAVA-enabled applications, encryption, digital rights management, voice recognition, and voice replication.

Along with the touch screen 112, the display controller 156, the contact module 130, the graphics module 132, the handwriting input module 157, and the text input module 134, the contact module 137 may be used to manage an address book or contact list (e.g., stored in the application internal state 192 of the contact module 137 in the memory 102 or the memory 370), including: adding name(s) to the address book; deleting name(s) from the address book; associating telephone number(s), e-mail address(es), physical address(es), or other information with a name; associating an image with a name; categorizing and sorting names; and providing telephone numbers or e-mail addresses to initiate and/or facilitate communications by telephone 138, video conference 139, e-mail 140, or IM 141.

Along with the RF circuitry 108, the audio circuitry 110, the speaker 111, the microphone 113, the touch screen 112, the display controller 156, the contact module 130, the graphics module 132, the handwriting input module 157, and the text input module 134, the phone module 138 may be used to enter a sequence of characters corresponding to a telephone number, access one or more telephone numbers in the address book 137, modify a telephone number that has been entered, dial a respective telephone number, conduct a conversation, and disconnect or hang up when the conversation is completed. As described above, the wireless communication may use any of a plurality of communication standards, protocols, and technologies.

Along with the RF circuitry 108, the audio circuitry 110, the speaker 111, the microphone 113, the touch screen 112, the display controller 156, the optical sensor 164, the optical sensor controller 158, the contact module 130, the graphics module 132, the handwriting input module 157, the text input module 134, the contact list 137, and the phone module 138, the video conference module 139 includes executable instructions to initiate, conduct, and terminate a video conference between the user and one or more other participants in accordance with user instructions.

Along with the RF circuitry 108, the touch screen 112, the display controller 156, the contact module 130, the graphics module 132, the handwriting input module 157, and the text input module 134, the e-mail client module 140 includes executable instructions to create, send, receive, and manage e-mail in response to user instructions. In conjunction with the image management module 144, the e-mail client module 140 makes it very easy to create and send e-mails with still or video images taken with the camera module 143.

Along with the RF circuitry 108, the touch screen 112, the display controller 156, the contact module 130, the graphics module 132, the handwriting input module 157, and the text input module 134, the instant messaging module 141 includes executable instructions to enter a sequence of characters corresponding to an instant message, modify previously entered characters, transmit a respective instant message (e.g., using a Short Message Service (SMS) or Multimedia Message Service (MMS) protocol for telephony-based instant messages, or using XMPP, SIMPLE, or IMPS for Internet-based instant messages), receive instant messages, and view received instant messages. In some embodiments, transmitted and/or received instant messages may include graphics, photos, audio files, video files, and/or other attachments as supported by MMS and/or an Enhanced Messaging Service (EMS). As used herein, "instant messaging" refers to both telephony-based messages (e.g., messages sent using SMS or MMS) and Internet-based messages (e.g., messages sent using XMPP, SIMPLE, or IMPS).

Along with the RF circuitry 108, the touch screen 112, the display controller 156, the contact module 130, the graphics module 132, the handwriting input module 157, the text input module 134, the GPS module 135, the map module 154, and the music player module 146, the motion support module 142 includes executable instructions to create workouts (e.g., with time, distance, and/or calorie burning goals); communicate with motion sensors (sports devices); receive motion sensor data; calibrate sensors used to monitor a workout; select and play music for a workout; and display, store, and transmit motion data.

Along with the touch screen 112, the display controller 156, the optical sensor(s) 164, the optical sensor controller 158, the contact module 130, the graphics module 132, and the image management module 144, the camera module 143 includes executable instructions to capture still images or video (including a video stream) and store them in the memory 102, modify the characteristics of a still image or video, or delete a still image or video from the memory 102.

Along with the touch screen 112, the display controller 156, the contact module 130, the graphics module 132, the handwriting input module 157, the text input module 134, and the camera module 143, the image management module 144 includes executable instructions to arrange, modify (e.g., edit), or otherwise manipulate, label, delete, present (e.g., in a digital slide show or album), and store still and/or video images.

Along with the RF circuitry 108, the touch screen 112, the display system controller 156, the contact module 130, the graphics module 132, the handwriting input module 157, and the text input module 134, the browser module 147 includes executable instructions to browse the Internet in accordance with user instructions, including searching, linking to, receiving, and displaying web pages or portions thereof, as well as attachments and other files linked to web pages.

Along with the RF circuitry 108, the touch screen 112, the display system controller 156, the contact module 130, the graphics module 132, the handwriting input module 157, the text input module 134, the e-mail client module 140, and the browser module 147, the calendar module 148 includes executable instructions to create, display, modify, and store calendars and data associated with calendars (e.g., calendar entries, to-do lists, etc.) in accordance with user instructions.

Along with the RF circuitry 108, the touch screen 112, the display system controller 156, the contact module 130, the graphics module 132, the handwriting input module 157, the text input module 134, and the browser module 147, the widget modules 149 are mini-applications that may be downloaded and used by a user (e.g., the weather widget 149-1, the stock widget 149-2, the calculator widget 149-3, the alarm clock widget 149-4, and the dictionary widget 149-5) or created by the user (e.g., the user-created widget 149-6). In some embodiments, a widget includes a Hypertext Markup Language (HTML) file, a Cascading Style Sheets (CSS) file, and a JavaScript file. In some embodiments, a widget includes an Extensible Markup Language (XML) file and a JavaScript file (e.g., Yahoo! widgets).

Along with the RF circuitry 108, the touch screen 112, the display system controller 156, the contact module 130, the graphics module 132, the handwriting input module 157, the text input module 134, and the browser module 147, the widget creator module 150 may be used by a user to create widgets (e.g., turning a user-specified portion of a web page into a widget).

Along with the touch screen 112, the display system controller 156, the contact module 130, the graphics module 132, the handwriting input module 157, and the text input module 134, the search module 151 includes executable instructions to search for text, music, sound, image, video, and/or other files in the memory 102 that match one or more search criteria (e.g., one or more user-specified search terms) in accordance with user instructions.

Along with the touch screen 112, the display system controller 156, the contact module 130, the graphics module 132, the audio circuitry 110, the speaker 111, the RF circuitry 108, and the browser module 147, the video and music player module 152 includes executable instructions that allow the user to download and play back recorded music and other sound files stored in one or more file formats, such as MP3 or AAC files, and executable instructions to display, present, or otherwise play back videos (e.g., on the touch screen 112 or on an external display connected via the external port 124). In some embodiments, the device 100 may include the functionality of an MP3 player, such as an iPod (trademark of Apple Inc.).

Along with the touch screen 112, the display controller 156, the contact module 130, the graphics module 132, the handwriting input module 157, and the text input module 134, the memo module 153 includes executable instructions to create and manage notes, to-do lists, and the like in accordance with user instructions.

Along with the RF circuitry 108, the touch screen 112, the display system controller 156, the contact module 130, the graphics module 132, the handwriting input module 157, the text input module 134, the GPS module 135, and the browser module 147, the map module 154 may be used to receive, display, modify, and store maps and data associated with maps (e.g., driving directions; data on stores and other points of interest at or near a particular location; and other location-based data) in accordance with user instructions.

Along with the touch screen 112, the display system controller 156, the contact module 130, the graphics module 132, the audio circuitry 110, the speaker 111, the RF circuitry 108, the handwriting input module 157, the text input module 134, the e-mail client module 140, and the browser module 147, the online video module 155 includes instructions that allow the user to access, browse, receive (e.g., by streaming and/or download), play back (e.g., on the touch screen or on an external display connected via the external port 124), send an e-mail with a link to a particular online video, and otherwise manage online videos in one or more file formats, such as H.264. In some embodiments, the instant messaging module 141, rather than the e-mail client module 140, is used to send a link to a particular online video.

Each of the modules and applications identified above corresponds to a set of executable instructions for performing one or more of the functions described above and the methods described in this application (e.g., the computer-implemented methods and other information processing methods described herein). These modules (i.e., sets of instructions) need not be implemented as separate software programs, procedures, or modules, and thus various subsets of these modules may be combined or otherwise rearranged in various embodiments. In some embodiments, the memory 102 may store a subset of the modules and data structures identified above. Furthermore, the memory 102 may store additional modules and data structures not described above.

In some embodiments, the device 100 is a device on which the operation of a predefined set of functions is performed exclusively through a touch screen and/or a touchpad. By using a touch screen and/or touchpad as the primary input control device for operation of the device 100, the number of physical input control devices (such as push buttons, dials, and the like) on the device 100 may be reduced.

Figure 2 illustrates a portable multifunction device 100 having a touch screen 112 in accordance with some embodiments. The touch screen may display one or more graphics within a user interface (UI). In this embodiment, as well as in other embodiments described below, a user may select one or more of the graphics by making a gesture on the graphics, for example, with one or more fingers 202 (not drawn to scale in the figure) or one or more styluses 203 (not drawn to scale in the figure). In some embodiments, selection of one or more graphics occurs when the user breaks contact with the one or more graphics. In some embodiments, the gesture may include one or more taps, one or more swipes (from left to right, from right to left, upward and/or downward), and/or a rolling of a finger (from right to left, from left to right, upward and/or downward) that has made contact with the device 100. In some embodiments, inadvertent contact with a graphic may not select the graphic. For example, a swipe gesture that sweeps over an application icon may not select the corresponding application when the gesture corresponding to selection is a tap.

The device 100 may also include one or more physical buttons, such as a "home" or menu button 204. As described previously, the menu button 204 may be used to navigate to any application 136 in a set of applications that may be executed on the device 100. Alternatively, in some embodiments, the menu button is implemented as a soft key in a GUI displayed on the touch screen 112.

In one embodiment, the device 100 includes a touch screen 112, a menu button 204, a push button 206 for powering the device on/off and locking the device, volume adjustment button(s) 208, a subscriber identity module (SIM) card slot 210, a headset jack 212, and a docking/charging external port 124. The push button 206 may be used to turn the power on/off on the device by depressing the button and holding the button in the depressed state for a predefined time interval; to lock the device by depressing the button and releasing the button before the predefined time interval has elapsed; and/or to unlock the device or initiate an unlock process. In an alternative embodiment, the device 100 may also accept verbal input through the microphone 113 for activation or deactivation of some functions.

Figure 3 is a block diagram of an exemplary multifunction device having a display and a touch-sensitive surface in accordance with some embodiments. The device 300 need not be portable. In some embodiments, the device 300 may be a laptop computer, a desktop computer, a tablet computer, a multimedia player device, a navigation device, an educational device (such as a child's learning toy), a gaming system, or a control device (e.g., a home or industrial controller). The device 300 typically includes one or more processing units (CPUs) 310, one or more network or other communications interfaces 360, a memory 370, and one or more communication buses 320 for interconnecting these components. The communication buses 320 may include circuitry (sometimes called a chipset) that interconnects system components and controls communications between them. The device 300 includes an input/output (I/O) interface 330 comprising a display 340, which is typically a touch screen display. The I/O interface 330 may also include a keyboard and/or mouse (or other pointing device) 350 and a touchpad 355. The memory 370 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; and may include non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. The memory 370 may optionally include one or more storage devices remotely located from the CPU(s) 310. In some embodiments, the memory 370 stores programs, modules, and data structures analogous to the programs, modules, and data structures stored in the memory 102 of the portable multifunction device 100 (FIG. 1), or a subset thereof. Furthermore, the memory 370 may store additional programs, modules, and data structures not present in the memory 102 of the portable multifunction device 100. For example, the memory 370 of the device 300 may store a drawing module 380, a presentation module 382, a word processing module 384, a website creation module 386, a disk authoring module 388, and/or a spreadsheet module 390, while the memory 102 of the portable multifunction device 100 (FIG. 1) may not store these modules.

Each of the elements identified above in Figure 3 may be stored in one or more of the previously mentioned memory devices. Each of the identified modules corresponds to a set of instructions for performing a function described above. The identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures, or modules, and thus various subsets of these modules may be combined or otherwise rearranged in various embodiments. In some embodiments, the memory 370 may store a subset of the modules and data structures identified above. Furthermore, the memory 370 may store additional modules and data structures not described above.

Figure 4 illustrates an exemplary user interface on a device (e.g., the device 300 of FIG. 3) with a touch-sensitive surface 451 (e.g., a tablet or touch pad 355, FIG. 3) that is separate from the display 450. Although many of the following examples will be given with reference to inputs on the touch screen display 112 (where the touch-sensitive surface and the display are combined), in some embodiments the device detects inputs on a touch-sensitive surface that is separate from the display. In some embodiments, the touch-sensitive surface (e.g., 451 in FIG. 4) has a primary axis (e.g., 452 in FIG. 4) that corresponds to a primary axis (e.g., 453 in FIG. 4) on the display (e.g., 450). In accordance with these embodiments, the device detects contacts (e.g., 460 and 462 in FIG. 4) with the touch-sensitive surface 451 at locations that correspond to respective locations on the display (e.g., in FIG. 4, 460 corresponds to 468 and 462 corresponds to 470). In this way, user inputs (e.g., the contacts 460 and 462 and their movements) detected by the device on the touch-sensitive surface (e.g., 451 in FIG. 4) are used by the device to manipulate the user interface on the display (e.g., 450 in FIG. 4) of the multifunction device when the touch-sensitive surface is separate from the display. It should be appreciated that similar methods may be used for the other user interfaces described herein.

Attention is now drawn to embodiments of handwriting input methods and user interfaces ("UI") that may be implemented on a multifunction device (e.g., device 100).

Figure 5 is a block diagram of an exemplary handwriting input module 157 that interacts with an I/O interface module 500 (e.g., the I/O interface 330 of FIG. 3 or the I/O subsystem 106 of FIG. 1) to provide handwriting input capabilities on a device in accordance with some embodiments. As shown in FIG. 5, the handwriting input module 157 includes an input processing module 502, a handwriting recognition module 504, and a result generation module 506. In some embodiments, the input processing module 502 includes a partitioning module 508 and a normalization module 510. In some embodiments, the result generation module 506 includes a radical clustering module 512 and one or more language models 514.

In some embodiments, the input processing module 502 communicates with the I/O interface module 500 (e.g., the I/O interface 330 of FIG. 3 or the I/O subsystem 106 of FIG. 1) and receives handwriting inputs from a user. The handwriting is input via any suitable means, such as the touch-sensitive display system 112 of FIG. 1 and/or the touch pad 355 of FIG. 3. The handwriting inputs include data representing each stroke provided by the user within a predetermined handwriting input area of the handwriting input UI. In some embodiments, the data representing each stroke of the handwriting input includes the start and end locations, the intensity profile, and the motion path of a sustained contact (e.g., a contact between the user's finger or stylus and the touch-sensitive surface of the device) within the handwriting input area. In some embodiments, the I/O interface module 500 delivers the sequences of handwriting strokes 516, with the associated temporal and spatial information, to the input processing module 502 in real time. At the same time, the I/O interface module also provides a real-time rendering 518 of the handwriting strokes within the handwriting input area of the handwriting input user interface as visual feedback to the user's input.

In some embodiments, as the data representing each handwriting stroke is received by the input processing module 502, temporal and sequence information associated with the plurality of consecutive strokes is also recorded. For example, the data optionally preserves, with respective stroke sequence numbers, the shapes, sizes, and spacing of the individual strokes and their relative spatial locations along the writing direction of the entire handwriting input. In some embodiments, the input processing module 502 sends to the I/O interface module 500 an instruction 518 to render the input strokes on the display of the device (e.g., the display 340 of FIG. 3 or the touch-sensitive display 112 of FIG. 1). In some embodiments, the rendering of the received strokes is optionally animated to provide a visual effect that mimics the actual progress of writing on a writing surface (e.g., a piece of paper) with a writing instrument (e.g., a pen). In some embodiments, the user is optionally allowed to specify the pen-tip style, color, texture, etc. of the rendered strokes.

In some embodiments, the input processing module 502 processes the strokes currently accumulated in the handwriting input area and assigns the strokes to one or more recognition units. In some embodiments, each recognition unit corresponds to a character that is to be recognized by the handwriting recognition model 504. In some embodiments, each recognition unit corresponds to an output character or a radical that is to be recognized by the handwriting recognition model 504. A radical is a recurring component found in multiple composite characters. A composite Chinese character may include two or more radicals arranged in accordance with a common layout (e.g., a left-right layout, a top-bottom layout, and the like). In one example, a single Chinese character (the character shown in Figure 112018083390499-pat00005) is constructed from two radicals, namely, a left-hand side radical ("mouth") and a right-hand side radical.

In some embodiments, the input processing module 502 relies on the partitioning module 508 to assign or divide the currently accumulated handwriting strokes into one or more recognition units. For example, for the handwritten character shown in Figure 112018083390499-pat00006, the partitioning module 508 optionally assigns the strokes clustered on the left of the handwriting input (i.e., the handwritten left radical, "mouth") to one recognition unit and the strokes clustered on the right of the handwriting input to another recognition unit. The partitioning module 508 also optionally assigns all of the strokes (i.e., the entire character, as shown in Figure 112018083390499-pat00007) to a single recognition unit.

In some embodiments, the partitioning module 508 divides the currently accumulated handwriting input (e.g., one or more handwriting strokes) into groups of recognition units in a number of different ways to create a partitioning grid 520. For example, assume that a total of nine strokes have been accumulated so far in the handwriting input area. According to a first partition chain of the partitioning grid 520, strokes 1, 2, and 3 are grouped into a first recognition unit 522, and strokes 4, 5, and 6 are grouped into a second recognition unit 524. According to a second partition chain of the partitioning grid 520, all of the strokes 1 through 9 are grouped into one recognition unit 526.

In some embodiments, each segmentation chain is given a segmentation score that measures the likelihood that the particular segmentation chain is the correct segmentation of the current handwriting input. In some embodiments, the factors optionally used to compute the segmentation score of each segmentation chain include: the absolute and/or relative sizes of the strokes; the absolute and/or relative spans of the strokes in various directions (e.g., the x and y directions); the absolute and/or relative distances between adjacent strokes; the absolute and/or relative positions of the strokes; variations in the saturation level of the strokes; the order or sequence in which the strokes were entered; the duration of each stroke; the average speed (or tempo) at which each stroke was entered and variations therein; the intensity profile of each stroke along the stroke length; and the like. In some embodiments, one or more functions or transforms are optionally applied to one or more of these factors to generate the segmentation scores of the different segmentation chains in the partitioning grid 520.
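As a concrete illustration of how a few of these factors might be combined, the following Python sketch scores a candidate segmentation chain using only two of the factors named above (the relative spans of the recognition units and the gaps between adjacent units); the weighting and the choice of factors are assumptions made for illustration, not the scoring actually used by the partitioning module 508.

```python
import math
from typing import List, Sequence, Tuple

Stroke = List[Tuple[float, float]]  # one stroke as a list of (x, y) points

def bbox(strokes: Sequence[Stroke]) -> Tuple[float, float, float, float]:
    xs = [x for s in strokes for x, _ in s]
    ys = [y for s in strokes for _, y in s]
    return min(xs), min(ys), max(xs), max(ys)

def segmentation_score(chain: Sequence[Sequence[Stroke]]) -> float:
    """Score one segmentation chain (a grouping of the accumulated strokes into
    recognition units); higher is better.  Uses two illustrative factors:
    comparable spans of the units, and clear gaps between adjacent units."""
    boxes = [bbox(group) for group in chain]
    widths = [x1 - x0 for x0, _, x1, _ in boxes]
    mean_w = sum(widths) / len(widths) + 1e-6
    score = 0.0
    for w in widths:                                  # relative-span factor
        score -= abs(w - mean_w) / mean_w
    for (_, _, right, _), (left, _, _, _) in zip(boxes, boxes[1:]):
        score += math.tanh((left - right) / mean_w)   # distance-to-adjacent-unit factor
    return score

# Each chain in the partitioning grid would receive such a score; the chains are
# kept and ranked rather than committing to a single segmentation early.
```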

In some embodiments, after the partitioning module 508 divides the current handwriting input 516 received from the user, the partitioning module 508 passes the partitioning grid 520 to the normalization module 510. In some embodiments, the normalization module 510 generates an input image (e.g., input images 528) for each recognition unit (e.g., the recognition units 522, 524, 526) specified in the partitioning grid 520. In some embodiments, the normalization module performs the necessary or desired normalization (e.g., stretching, cropping, down-sampling, or up-sampling) on the input image so that it can be provided as input to the handwriting recognition model 504. In some embodiments, each input image 528 includes the strokes assigned to one respective recognition unit and corresponds to one character or radical that is to be recognized by the handwriting recognition module 504.
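The sketch below illustrates, under assumptions of our own (a 48x48 output image and simple line rasterization), how the strokes assigned to a single recognition unit could be scaled, cropped, and rendered into a fixed-size input image of the kind described above; the actual normalization parameters used by the normalization module 510 are not specified at this level of detail.

```python
import numpy as np

def render_recognition_unit(strokes, out_size=48, margin=4):
    """Rasterize the strokes of one recognition unit into a fixed-size, purely
    spatial input image.  `strokes` is a list of strokes, each a list of (x, y)
    points; only pixel locations survive, not stroke order or direction."""
    pts = np.array([p for s in strokes for p in s], dtype=np.float64)
    x0, y0 = pts.min(axis=0)
    x1, y1 = pts.max(axis=0)
    scale = (out_size - 2 * margin) / max(x1 - x0, y1 - y0, 1e-6)  # stretch or shrink
    img = np.zeros((out_size, out_size), dtype=np.float32)
    for stroke in strokes:
        for (xa, ya), (xb, yb) in zip(stroke, stroke[1:]):
            for t in np.linspace(0.0, 1.0, 32):        # densely sample each segment
                x = margin + (xa + t * (xb - xa) - x0) * scale
                y = margin + (ya + t * (yb - ya) - y0) * scale
                img[int(round(y)), int(round(x))] = 1.0
    return img
```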

In some embodiments, the input images generated by the input processing module 502 do not include any temporal information associated with the individual strokes; only spatial information (e.g., information conveyed by the locations and density of the pixels in the input image) is preserved in the input image. A handwriting recognition model trained entirely on the spatial information of the training writing samples is capable of performing handwriting recognition based on spatial information alone. As a result, the handwriting recognition model is independent of stroke order and stroke direction, without having to exhaustively enumerate, during training, all possible permutations of stroke orders and stroke directions for all of the characters in its vocabulary (i.e., all of its output classes). Indeed, in some embodiments, the handwriting recognition module 504 does not distinguish pixels belonging to one stroke from pixels belonging to another stroke in the input image.

In some embodiments, to improve recognition accuracy without compromising the stroke-order and stroke-direction independence of the recognition model, temporally-derived stroke distribution information is reintroduced into the otherwise purely spatial handwriting recognition model, as described in greater detail below (e.g., with respect to Figures 25A-27).

In some embodiments, the input image generated by the input processing module 502 for one recognition unit does not overlap with the input image of any other recognition unit in the same partition chain. In some embodiments, the input images generated for different recognition units may overlap slightly. In some embodiments, some overlap between input images is permitted so that handwriting input containing cursively written and/or run-on characters (e.g., where one stroke joins two adjacent characters) can be recognized.

In some embodiments, some normalization is performed prior to segmentation. In some embodiments, the functions of the partitioning module 508 and the normalization module 510 may be performed by the same module or by two or more different modules.

In some embodiments, the input image 528 of each recognition unit is provided as input to the handwriting recognition model 504, and the handwriting recognition model 504 classifies the recognition unit against the repertoire of the handwriting recognition model 504 (i.e., the list of all characters and radicals recognizable by the handwriting recognition module 504). As will be described in greater detail below, the handwriting recognition model 504 has been trained to recognize a large number of characters in multiple scripts (e.g., at least three non-overlapping scripts encoded by the Unicode standard).

Examples of non-overlapping scripts include the Latin script, Chinese characters, the Arabic script, the Persian script, the Cyrillic script, and artificial scripts such as emoji characters. In some embodiments, the handwriting recognition model 504 generates one or more output characters for each input image (i.e., for each recognition unit) and assigns a respective recognition score to each output character based on the confidence level associated with that recognition.

In some embodiments, the handwriting recognition model 504 generates a candidate grid 530 in accordance with the partitioning grid 520, where each arc of a partition chain in the partitioning grid 520 (e.g., an arc corresponding to an individual recognition unit 522, 524, 526) is expanded into one or more candidate arcs in the candidate grid 530 (e.g., arcs 532, 534, 536, 538, 540 corresponding to individual output characters). Each candidate chain in the candidate grid 530 is scored according to the segmentation score of the partition chain underlying the candidate chain and the recognition scores associated with the output characters in the character chain.
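A minimal sketch of this expansion is shown below: each recognition unit's input image is scored against a set of candidate characters, every combination of per-unit candidates forms a candidate chain, and the chain score combines the segmentation score with the per-character recognition scores. The log-score combination and the `recognize` callable are illustrative assumptions standing in for the handwriting recognition model 504.

```python
import itertools
import math
from typing import Callable, Dict, List, Sequence, Tuple

def expand_partition_chain(
    unit_images: Sequence[object],
    segmentation_score: float,
    recognize: Callable[[object], Dict[str, float]],
    top_k: int = 3,
) -> List[Tuple[str, float]]:
    """Expand one partition chain into scored candidate character chains.
    `recognize(image)` returns {candidate character: recognition score in (0, 1]}
    for a single recognition unit."""
    per_unit = []
    for image in unit_images:
        ranked = sorted(recognize(image).items(), key=lambda kv: kv[1], reverse=True)
        per_unit.append(ranked[:top_k])            # keep the top_k candidate arcs per unit
    chains = []
    for combo in itertools.product(*per_unit):     # one candidate chain per combination
        text = "".join(ch for ch, _ in combo)
        score = segmentation_score + sum(math.log(s) for _, s in combo)
        chains.append((text, score))
    return sorted(chains, key=lambda c: c[1], reverse=True)
```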

In some embodiments, after the handwriting recognition model 504 generates output characters from the input images 528 of the recognition units, the candidate grid 530 is passed to the result generation module 506 to generate one or more recognition results for the currently accumulated handwriting input 516.

In some embodiments, the result generation module 506 uses the radical clustering module 512 to combine one or more radicals in a candidate chain into a composite character. In some embodiments, the result generation module 506 uses the one or more language models 514 to determine whether a character chain in the candidate grid 530 is a likely sequence in the particular language represented by the language models. In some embodiments, the result generation module 506 generates a revised candidate grid 542 by combining two or more arcs of the candidate grid 530 or removing certain arcs.

In some embodiments, the result generation module 506 generates an integrated recognition score for each character sequence (e.g., character sequences 544, 546) remaining in the revised candidate grid 542, based on the recognition scores of the output characters in the character sequence as modified (e.g., boosted or reduced) by the radical clustering module 512 and the language models 514. In some embodiments, the result generation module 506 ranks the different character sequences remaining in the revised candidate grid 542 based on their integrated recognition scores.
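For illustration, the following sketch re-scores candidate chains with a character-level bigram language model and re-ranks them; the bigram model, the linear weighting, and the fallback log probability are assumptions, and the form of the language models 514 is not specified at this level of detail.

```python
from typing import Dict, List, Tuple

def rescore_with_language_model(
    candidate_chains: List[Tuple[str, float]],
    bigram_logprob: Dict[Tuple[str, str], float],
    lm_weight: float = 1.0,
    unseen: float = -8.0,
) -> List[Tuple[str, float]]:
    """Adjust each candidate chain's integrated score with a character-level
    bigram language model and return the chains ranked by the adjusted score."""
    rescored = []
    for text, score in candidate_chains:
        lm_score = sum(bigram_logprob.get((prev, cur), unseen)
                       for prev, cur in zip(text, text[1:]))
        rescored.append((text, score + lm_weight * lm_score))
    return sorted(rescored, key=lambda c: c[1], reverse=True)

# The top-ranked chains after re-scoring are the ranked recognition results 548
# that are sent to the I/O interface module 500 for display.
```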

In some embodiments, the result generation module 506 sends the top-ranked character sequences to the I/O interface module 500 as ranked recognition results 548 for display to the user. In some embodiments, the I/O interface module 500 displays the received recognition results 548 (e.g., the character sequences shown in Figure 112018083390499-pat00008 and Figure 112018083390499-pat00009) in the candidate display area of the handwriting input interface. In some embodiments, the I/O interface module displays a plurality of recognition results (e.g., the sequences shown in Figure 112018083390499-pat00010 and Figure 112018083390499-pat00011) and allows the user to select one of the recognition results as text input for the associated application. In some embodiments, in response to an indication of user confirmation of the recognition results or another predetermined input, the I/O interface module automatically enters the top-ranked recognition result (e.g., the sequence shown in Figure 112018083390499-pat00012) as text input. Effective automatic entry of the top-ranked result can improve the efficiency of the input interface and provide a better user experience.

In some embodiments, the result generation module 506 changes the integrated recognition scores of the candidate chains using other factors. For example, in some embodiments, result generation module 506 optionally maintains a log of the most frequently used characters for a particular user or a large number of users. If certain candidate characters or character sequences are found in the list of the most frequently used characters or character sequences, the result generation module 506 optionally raises the integrated recognition scores of the particular candidate characters or character sequences.

In some embodiments, the handwriting input module 157 provides real-time updates of the recognition results displayed to the user. For example, in some embodiments, for each additional stroke entered by the user, the input processing module 502 optionally re-segments the currently accumulated handwriting input and revises the partitioning grid and the input images provided to the handwriting recognition model 504. In turn, the handwriting recognition model 504 optionally revises the candidate grid provided to the result generation module 506. As a result, the result generation module 506 optionally updates the recognition results presented to the user. As used herein, real-time handwriting recognition refers to handwriting recognition in which the handwriting recognition results are presented to the user immediately or within a short period of time (e.g., within a few tens of milliseconds to a few seconds). Real-time handwriting recognition differs from offline recognition (e.g., in OCR applications), in which recognition is performed at a later time, in a single after-the-fact session, from a recorded image stored for later retrieval; in real-time recognition, the recognition is initiated immediately and performed substantially concurrently with the receipt of the handwriting input. In addition, because offline character recognition is performed without any temporal information regarding individual strokes and stroke sequences, segmentation is performed without the benefit of such information. Disambiguation between candidate characters that look similar is likewise done without the benefit of such temporal information.
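The update cycle described above can be summarized by the following sketch, in which each newly registered stroke triggers re-segmentation, re-recognition, and a refresh of the displayed results; the four callables stand in for the input processing module 502, the handwriting recognition model 504, the result generation module 506, and the candidate display, and their names are hypothetical.

```python
class RealTimeRecognizer:
    """Illustrative real-time update loop for the handwriting input module."""

    def __init__(self, segment, recognize, generate_results, display):
        self.strokes = []
        self.segment = segment                    # strokes -> partitioning grid
        self.recognize = recognize                # partitioning grid -> candidate grid
        self.generate_results = generate_results  # candidate grid -> ranked results
        self.display = display                    # ranked results -> candidate display area

    def on_stroke(self, stroke):
        """Called for every additional stroke entered by the user."""
        self.strokes.append(stroke)
        grid = self.segment(self.strokes)             # re-segment all accumulated strokes
        candidates = self.recognize(grid)             # re-recognize the revised units
        results = self.generate_results(candidates)   # re-rank with the language models
        self.display(results)                         # refresh within tens of ms to seconds

    def clear(self):
        """Called after a result is selected and the handwriting input area is cleared."""
        self.strokes = []
```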

In some embodiments, the handwriting recognition model 504 is implemented as a convolutional neural network (CNN). Figure 6 illustrates an exemplary convolutional neural network 602 trained on a multi-script training corpus 604 that includes writing samples for characters in a number of non-overlapping scripts.

As shown in FIG. 6, the convolutional neural network 602 includes an input plane 606 and an output plane 608. Between the input plane 606 and the output plane 608 are a plurality of convolution layers 610 (e.g., a first convolution layer 610a, zero or more intermediate convolution layers (not shown), and a last convolution layer 610n). Each convolution layer 610 is followed by a respective sub-sampling layer 612 (e.g., a first sub-sampling layer 612a, zero or more intermediate sub-sampling layers (not shown), and a last sub-sampling layer 612n). A hidden layer 614 follows the convolution layers and sub-sampling layers and immediately precedes the output plane 608; the hidden layer 614 is the last layer before the output plane 608. In some embodiments, to improve the efficiency of the computation, a kernel layer 616 (e.g., a first kernel layer 616a, zero or more intermediate kernel layers (not shown), and a last kernel layer 616n) is inserted before each convolution layer 610.

As shown in FIG. 6, the input plane 606 receives an input image 614 of a handwriting recognition unit (e.g., a handwritten character or radical), and the output plane 608 provides, for the input image 614, the likelihood of the input image being a particular character in the output character set that the neural network is configured to recognize. Collectively, the output classes of the neural network (or the output character set of the neural network) are also referred to as the repertoire or vocabulary of the handwriting recognition model. The convolutional neural network described herein can be trained to have a repertoire of tens of thousands of characters.

As the input image 614 is processed through the different layers of the neural network, different spatial features embedded in the input image 614 are extracted by the convolution layers 610. Each convolution layer 610 comprises a set of feature maps and acts as a filter for picking out specific features in the input image 614 that distinguish between images corresponding to different characters. The sub-sampling layers 612 ensure that features are captured from the input image 614 at increasingly larger scales. In some embodiments, the sub-sampling layers 612 are implemented using a max-pooling technique. The max-pooling layers create position invariance over larger local regions and down-sample the output image of the preceding convolution layer by a factor of Kx and Ky along each direction, where Kx and Ky are the dimensions of the max-pooling rectangle. Max-pooling leads to faster convergence by selecting superior invariant features, which improves generalization performance. In some embodiments, sub-sampling is accomplished using other techniques.

In some embodiments, a fully-connected layer, namely the hidden layer 614, is provided after the last set of convolution layer 610n and sub-sampling layer 612n and before the output plane 608. The fully-connected hidden layer 614 is a multi-layer perceptron that fully connects the nodes in the last sub-sampling layer 612n to the nodes in the output plane 608. The hidden layer 614 takes the output images received from the preceding layer and, through logistic regression, maps them to one of the output characters in the output plane 608.
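A minimal sketch of the described topology, written with PyTorch, is shown below; the number of layers, filter counts, kernel sizes, and the 48x48 input size are illustrative assumptions, and only the overall structure (convolution layers each followed by a max-pooling sub-sampling layer, a fully-connected hidden layer, and an output plane with one class per character or radical) follows the description above.

```python
import torch
import torch.nn as nn

class MultiScriptHandwritingCNN(nn.Module):
    """Convolution + sub-sampling layers, a fully-connected hidden layer, and an
    output plane with one class per character or radical in the repertoire."""

    def __init__(self, num_output_classes: int = 30000, input_size: int = 48):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2),   # first convolution layer
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2),                  # sub-sampling, Kx = Ky = 2
            nn.Conv2d(16, 32, kernel_size=5, padding=2),  # last convolution layer
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2),                  # sub-sampling, Kx = Ky = 2
        )
        flattened = 32 * (input_size // 4) * (input_size // 4)
        self.hidden = nn.Sequential(                      # fully-connected hidden layer
            nn.Flatten(),
            nn.Linear(flattened, 1024),
            nn.ReLU(),
        )
        self.output_plane = nn.Linear(1024, num_output_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: batch of normalized, purely spatial input images, shape (N, 1, 48, 48)
        return self.output_plane(self.hidden(self.features(x)))
```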

During training of the convolutional neural network 602, the respective weights associated with the features in the convolution layers 610 and the weights associated with the parameters in the hidden layer 614 are adjusted such that the misclassification error is minimized for the writing samples with known output classes in the training corpus 604. Once the convolutional neural network 602 has been trained and an optimal set of parameters and associated weights has been established for the different layers of the network, the convolutional neural network 602 can be used to recognize new writing samples 618 that are not part of the training corpus 604, e.g., input images generated based on real-time handwriting input received from a user.
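For illustration, a conventional supervised training loop of the kind implied above might look as follows; the optimizer, learning rate, and loss function are standard choices assumed for the sketch and are not taken from the disclosure.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def train_on_corpus(model: nn.Module, corpus: DataLoader, epochs: int = 10,
                    lr: float = 1e-3, device: str = "cpu") -> None:
    """Adjust the weights of the convolution and hidden layers so that the
    misclassification error over the labeled writing samples is minimized.
    `corpus` yields (input image batch, output class index batch) pairs."""
    model.to(device).train()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()            # penalizes misclassified output classes
    for _ in range(epochs):
        for images, labels in corpus:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()                    # propagate the classification error
            optimizer.step()                   # update the weights in all layers
```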

As described herein, the convolutional neural network of the handwriting input interface is trained using a multi-script training corpus to enable multi-script or mixed-script handwriting recognition. In some embodiments, the convolutional neural network is trained to recognize a large repertoire of more than 30,000 to more than 60,000 characters (e.g., all of the characters encoded by the Unicode standard). Most state-of-the-art handwriting recognition systems are based on stroke-order dependent Hidden Markov Models (HMMs). In addition, most existing handwriting recognition models are language-specific and cover only a small repertoire, from a few dozen characters (e.g., the English alphabet, the Greek alphabet, all ten digits, and the like) up to a set of the most commonly used Chinese characters. As such, the universal recognizer described herein can handle tens of times more characters than most conventional systems.

Some conventional handwriting systems include several individually trained handwriting recognition models, each tailored to a particular language or a small set of characters. A writing sample is propagated through the different recognition models until a classification is made. For example, a handwriting sample may be provided to a series of cascaded language-specific or script-specific character recognition models; if the handwriting sample cannot be conclusively classified by a first recognition model, it is provided to the next recognition model, which then attempts to classify the handwriting sample within its own repertoire. This approach to classification is time-consuming, and the memory requirements increase rapidly with each additional recognition model that needs to be used.

Other conventional models require the user to specify a preferred language and use the selected handwriting recognition model to classify the current input. Such implementations are cumbersome to use and consume considerable memory, and they cannot be used to recognize mixed-language input. It is impractical to require users to switch language preferences while entering mixed-language or mixed-script input.

The multi-script or universal recognizer described herein addresses at least some of the above problems with conventional recognition systems. Figure 7 is a flowchart of an exemplary process 700 for training a handwriting recognition module (e.g., a convolutional neural network) on a large multi-script training corpus, so that the handwriting recognition module can subsequently be used to provide real-time, multi-script handwriting recognition for a user's handwriting input.

In some embodiments, the training of the handwriting recognition model is performed on a server device, after which the trained handwriting recognition model is provided to a user device. The handwriting recognition model optionally performs real-time handwriting recognition locally on the user device without requiring further support from the server. In some embodiments, both the training and the recognition are performed on the same device (e.g., the server). For example, the server device may receive the user's handwriting input from a user device, perform the handwriting recognition, and transmit the recognition results to the user device in real time.

In an exemplary process 700, at a device having one or more processors and memory, the device trains (702) a multi-script handwriting recognition model based on spatially-derived features (e.g., stroke-order independent features) of a multi-script training corpus. In some embodiments, the spatially-derived features of the multi-script training corpus are stroke-order independent and stroke-direction independent (704). In some embodiments, the training of the multi-script handwriting recognition model is independent of the temporal information associated with individual strokes in the handwriting samples (706). Specifically, the images of the handwriting samples are normalized to a predetermined size, and the images do not contain any information about the order in which the individual strokes were entered to form the image. In addition, the images do not contain any information about the direction in which the individual strokes were entered to form the image. In fact, during training, features are extracted from the handwriting images regardless of how the images were formed temporally by the individual strokes. Thus, during recognition, temporal information related to the individual strokes is not required. As a result, the recognition produces robustly consistent results despite delayed or out-of-order strokes in the handwriting input and arbitrary stroke directions.

In some embodiments, the multi-script training corpus includes handwriting samples corresponding to the characters of at least three non-overlapping scripts. As shown in FIG. 6, the multi-script training corpus includes handwriting samples collected from many users. Each handwriting sample corresponds to one character of a respective script represented in the handwriting recognition model. To adequately train the handwriting recognition model, the training corpus includes a large number of writing samples for each character of the scripts represented in the handwriting recognition model.

In some embodiments, the at least three non-overlapping scripts include Chinese characters, emoji characters, and the Latin script (708). In some embodiments, the multi-script handwriting recognition model has at least 30,000 output classes representing at least 30,000 characters spanning the at least three non-overlapping scripts (710).

In some embodiments, the multi-script training corpus includes writing samples corresponding to each of the Chinese characters encoded in the Unicode standard (e.g., substantially all or all of the CJK (Chinese-Japanese-Korean) unified ideographs). The Unicode standard defines a total of approximately 74,000 CJK unified ideographs. The basic block of CJK unified ideographs (4E00-9FFF) contains 20,941 basic Chinese characters that are used not only in Chinese, but also in Japanese, Korean, and Vietnamese. In some embodiments, the multi-script training corpus includes writing samples for all of the characters in the basic block of CJK unified ideographs. In some embodiments, the multi-script training corpus further includes writing samples for CJK radicals that can be used to structurally compose one or more composite Chinese characters. In some embodiments, the multi-script training corpus further includes writing samples for less frequently used Chinese characters, such as characters encoded in one or more of the CJK unified ideograph extensions.
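Because the basic block of CJK unified ideographs occupies the contiguous code-point range U+4E00 through U+9FFF, membership in that block can be checked directly; the short sketch below illustrates this (e.g., for filtering a candidate character list when assembling a training corpus restricted to the basic block).

```python
def in_cjk_basic_block(ch: str) -> bool:
    """True if `ch` lies in the basic block of CJK unified ideographs (U+4E00-U+9FFF)."""
    return 0x4E00 <= ord(ch) <= 0x9FFF

chars = ["中", "A", "語", "♥"]
basic = [c for c in chars if in_cjk_basic_block(c)]   # ["中", "語"]
```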

In some embodiments, the multi-script training corpus further includes individual writing samples for each character of the Latin script encoded by the Unicode standard. The characters in the basic Latin script include the various lowercase and uppercase letters, as well as the various basic symbols and numerals commonly found on a standard Latin keyboard. In some embodiments, the multi-script training corpus further includes the characters of the extended Latin script (e.g., the various accented forms of the basic Latin letters).

In some embodiments, the multi-script training corpus includes writing samples corresponding to each character of an artificial script that is not associated with any natural human language. For example, in some embodiments, a set of emoji characters is optionally defined as an emoji script, and writing samples corresponding to each emoji character are included in the multi-script training corpus. For example, a hand-drawn heart-shaped symbol is a handwriting sample in the training corpus for the emoji character "♥". Similarly, a hand-drawn smiley face (e.g., two dots above an upturned arc) is a handwriting sample in the training corpus for the smiley emoji character (e.g., the character shown in Figure 112018083390499-pat00013). Other emoji characters include emoji characters showing different emotions (e.g., happiness, sadness, anger, embarrassment, shock, laughter, crying, frustration, etc.), different objects and characters (e.g., a dog, a rabbit, a heart, a fruit, eyes, lips, a gift, a flower, a candle, the moon, a star, etc.), and different actions (e.g., shaking hands, kissing, running, dancing, jumping, sleeping, eating, voting, etc.). In some embodiments, the strokes in a handwriting sample corresponding to an emoji character are simplified and/or stylized versions of the actual lines forming the corresponding emoji character. In some embodiments, each device or application may use a different design for the same emoji character. For example, even if the handwriting inputs received from two users are substantially the same, the smiley emoji character presented to a female user may be different from the smiley emoji character presented to a male user.

In some embodiments, the multi-script training corpus also includes writing samples for the characters in one or more other scripts, such as the Greek script (e.g., including Greek letters and symbols), the Cyrillic script, the Hebrew script, and so on. In some embodiments, the at least three non-overlapping scripts included in the multi-script training corpus include Chinese characters, emoji characters, and the Latin script. Chinese characters, emoji characters, and the Latin script are naturally non-overlapping scripts. Many other scripts overlap one another for at least some characters. For example, some characters in the Latin script (e.g., A, Z) are found in many other scripts (e.g., the Greek and Cyrillic scripts). In some embodiments, the multi-script training corpus includes Chinese characters, the Arabic script, and the Latin script. In some embodiments, the multi-script training corpus includes different combinations of overlapping and/or non-overlapping scripts. In some embodiments, the multi-script training corpus includes writing samples for all of the characters encoded by the Unicode standard.

As shown in Figure 7, in some embodiments, to train the multi-script handwriting recognition model, the device provides (712) the handwriting samples of the multi-script training corpus to a single convolutional neural network having a single input plane and a single output plane. The device uses the convolutional neural network to determine (714) spatially-derived features (e.g., stroke-order independent features) of the handwriting samples, and respective weights for the spatially-derived features, for distinguishing the characters of the at least three non-overlapping scripts represented in the multi-script training corpus. This multi-script handwriting recognition model differs from conventional multi-script handwriting recognition models in that a single handwriting recognition model having a single input plane and a single output plane is trained using all of the samples in the multi-script training corpus. The single convolutional neural network is trained to distinguish all of the characters represented in the corpus, without relying on individual sub-networks that each handle a small subset of the training corpus (e.g., sub-networks each trained to recognize the characters of a particular script or the characters used in a particular language). In addition, the single convolutional neural network is trained to distinguish a large number of characters of multiple non-overlapping scripts, rather than the characters of a few overlapping scripts, such as the Latin script and the Greek script (which have the overlapping characters A, B, E, Z, etc.).

In some embodiments, the device provides (716) real-time handwriting recognition of a user's handwriting input using the multi-script handwriting recognition model trained on the spatially-derived features of the multi-script training corpus. In some embodiments, providing real-time handwriting recognition of the user's handwriting input includes continuously revising the recognition output for the user's handwriting input as the user continues to provide additional or revised handwriting input. In some embodiments, providing real-time handwriting recognition of the user's handwriting input further includes providing (718) the multi-script handwriting recognition model to a user device, where the user device receives the handwriting input from the user and performs handwriting recognition on the handwriting input locally based on the multi-script handwriting recognition model.

In some embodiments, the device provides the same multi-script handwriting recognition model to a plurality of devices whose respective input languages do not overlap, and the multi-script handwriting recognition model is used on each of the plurality of devices for handwriting recognition of the different languages associated with that device. For example, when the multi-script handwriting recognition model is trained to recognize characters in many different scripts and languages, the same handwriting recognition model can be used worldwide to provide handwriting input for any of these input languages. A first device for a user who wishes to enter only English and Hebrew can provide a handwriting input function using the same handwriting recognition model as a second device for another user who wishes to enter only Chinese and emoji characters. Instead of requiring the user of the first device to separately install an English handwriting input keyboard (e.g., implemented with an English-specific handwriting recognition model) and a separate Hebrew handwriting input keyboard (e.g., implemented with a Hebrew-specific handwriting recognition model), the same universal multi-script handwriting recognition model can be installed once on the first device and used to provide handwriting input for both English and Hebrew, as well as mixed input of the two languages. Likewise, instead of requiring the second user to install a Chinese handwriting input keyboard (e.g., implemented with a Chinese-specific handwriting recognition model) and a separate emoji handwriting input keyboard (e.g., implemented with an emoji-specific handwriting recognition model), the same universal multi-script handwriting recognition model can be installed once on the second device and used to provide handwriting input for both Chinese and emoji characters, as well as mixed input of the two scripts. Handling a large repertoire spanning multiple scripts (e.g., many or all of the characters encoded in nearly 100 different scripts) with the same multi-script handwriting model relieves a significant burden on users and on device suppliers, and improves the usability of the recognizer.

Training the multi-script handwriting recognition model on a large multi-script training corpus differs from conventional HMM-based handwriting recognition systems in that it does not rely on temporal information associated with the individual strokes of characters. In addition, the resource and memory requirements of the multi-script recognition system do not grow linearly with the number of symbols and languages covered by the multi-script recognition system. For example, in a conventional handwriting system, increasing the number of languages means adding another independently trained model, and the memory requirement is at least doubled to accommodate the increased capability of the handwriting recognition system. In contrast, when the multi-script model is trained on a multi-script training corpus, increasing the language coverage merely requires re-training the handwriting recognition model with additional handwriting samples and increasing the size of the output plane, and the corresponding increase in memory is very modest. Suppose the multi-script training corpus includes handwriting samples corresponding to n different languages and the multi-script handwriting recognition model occupies a memory of size m. To increase the language coverage to N languages (N > n), the device re-trains the multi-script handwriting recognition model based on the spatially-derived features of a second multi-script training corpus, where the second multi-script training corpus includes second handwriting samples corresponding to the N different languages, and where the re-trained model occupies a memory of size M. The ratio M/m remains substantially constant within the range of 1 to 2, while N/n varies from 1 to 100. Once the multi-script handwriting recognition model has been re-trained, the device can provide real-time handwriting recognition of the user's handwriting input using the re-trained multi-script handwriting recognition model.

Figures 8A and 8B illustrate exemplary user interfaces for providing real-time multi-script handwriting recognition and input on a portable user device (e.g., the device 100). In FIGS. 8A and 8B, the handwriting input interface 802 is displayed on the touch-sensitive display screen (e.g., the touch screen 112) of the user device. The handwriting input interface 802 includes a handwriting input area 804, a candidate display area 806, and a text input area 808. In some embodiments, the handwriting input interface 802 further includes a plurality of control elements, each of which may be invoked to cause the handwriting input interface to perform a predetermined function. As shown in FIG. 8A, a delete button, a space button, a carriage return or enter button, and a keyboard switching button are included in the handwriting input interface. Other control elements are possible and may optionally be provided in the handwriting input interface to suit each different application utilizing the handwriting input interface 802. The layout of the different components of the handwriting input interface 802 is merely exemplary and may vary for different devices and different applications.

In some embodiments, the handwriting input area 804 is a touch-sensitive area for receiving handwriting input from a user. Sustained contacts in the handwriting input area 804 on the touch screen, and their associated motion paths, are registered as handwriting strokes. In some embodiments, the handwriting strokes registered by the device are visually rendered within the handwriting input area 804 at the same locations traced by the sustained contacts. As shown in FIG. 8A, the user has written in the handwriting input area 804 several handwritten Chinese characters (e.g., the characters shown in Figure 112018083390499-pat00014), handwritten English characters (e.g., "Happy"), and a hand-drawn emoji character (e.g., a smiley character), distributed over a plurality of lines (e.g., two lines).

In some embodiments, the candidate display area 806 displays one or more recognition results (e.g., 810 and 812) for the handwriting input currently accumulated in the handwriting input area 804. Generally, the top-ranked recognition result (e.g., 810) is displayed in the first position of the candidate display area. As shown in FIG. 8A, because the handwriting recognition model described herein can recognize characters of a number of non-overlapping scripts including Chinese characters, the Latin script, and emoji characters, the recognition result (e.g., 810) provided by the recognition model correctly includes the Chinese characters, English characters, and emoji character represented by the handwriting input. The user is not required to stop in the middle of writing the input in order to select or switch the recognition language.

In some embodiments, the text input area 808 is an area that displays the text input provided to a respective application employing the handwriting input interface. In FIG. 8A, the text input area 808 is used by a memo application, and the text currently shown in the text input area 808 (e.g., "America" followed by the characters shown in Figure 112018083390499-pat00015) is text input already provided to the memo application. In some embodiments, a cursor 813 indicates the current text input position in the text input area 808.

In some embodiments, a user can select a particular recognition result displayed in the candidate display area 806 using either an explicit selection input (e.g., a tap gesture on one of the displayed recognition results) or an implicit confirmation input (e.g., a double-tap gesture in the handwriting input area or a press of the "Enter" button). As shown in FIG. 8B, the user has explicitly selected the top-ranked recognition result 810 using a tap gesture (as indicated by the contact 814 over the recognition result 810 in FIG. 8A). In response to the selection input, the text of the recognition result 810 is inserted into the text input area 808 at the insertion point indicated by the cursor 813. As shown in FIG. 8B, once the text of the selected recognition result 810 has been entered into the text input area 808, both the handwriting input area 804 and the candidate display area 806 are cleared. The handwriting input area 804 is then ready to accept new handwriting input, and the candidate display area 806 can be used to display recognition results for the new handwriting input. In some embodiments, an implicit confirmation input allows the user to enter the top recognition result into the text input area 808 without requiring the user to stop and select it explicitly. A well-designed implicit confirmation input improves the speed of text input and reduces the cognitive burden placed on the user during text composition.

In some embodiments (not shown in FIGS. 8A and 8B), the top recognition result for the current handwriting input is optionally displayed provisionally in the text input area 808. The provisional text input shown in the text input area 808 is visually distinguished from other text input in the text input area, for example, by a provisional input box surrounding the provisional text input. The text shown in the provisional input box has not yet been committed or provided to the associated application (e.g., a memo application), and it is automatically updated, for example, in response to a user revision of the current handwriting input.

FIGS. 9A and 9B are flowcharts of an exemplary process 900 for providing multi-script handwriting recognition on a user device. In some embodiments, in process 900, the user device receives (902) a multi-script handwriting recognition model, where the multi-script recognition model has been trained on the spatially-derived features (e.g., stroke-order and stroke-direction independent features) of a multi-script training corpus, and the multi-script training corpus includes handwriting samples corresponding to the characters of at least three non-overlapping scripts. In some embodiments, the multi-script handwriting recognition model is a single convolutional neural network (906) having a single input plane and a single output plane, and includes individual weights for the spatially-derived features used to distinguish the characters of the at least three non-overlapping scripts represented in the multi-script training corpus. In some embodiments, the multi-script handwriting recognition model is configured (908) to recognize characters based on individual input images of one or more recognition units identified in the handwriting input, and the individual spatially-derived features are independent of the order of the individual strokes in the handwriting input, the stroke direction, and the continuity of the strokes.
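
The following minimal PyTorch sketch illustrates the general shape of such a network: a single input plane (one rasterized image per recognition unit) and a single output plane spanning the whole multi-script repertoire. The layer sizes, image size, and class count are assumptions for illustration, not the architecture disclosed here.

    import torch
    import torch.nn as nn

    class MultiScriptRecognizer(nn.Module):
        # One input plane (a rasterized recognition-unit image) and one output plane
        # whose classes jointly cover characters from all scripts in the repertoire.
        def __init__(self, num_classes=30000, image_size=48):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 32, kernel_size=3, padding=1),  # spatially-derived features only
                nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(64 * (image_size // 4) ** 2, num_classes)

        def forward(self, x):
            # x: (batch, 1, image_size, image_size); no stroke order or direction is encoded.
            return self.classifier(self.features(x).flatten(1))

    logits = MultiScriptRecognizer()(torch.zeros(1, 1, 48, 48))  # logits over the full repertoire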

In some embodiments, the user device receives (908) handwriting input from a user, and the handwriting input includes one or more handwriting strokes provided on a touch-sensitive surface coupled to the user device. For example, the handwriting input includes individual data about the location and movement of the contact between a finger or stylus and the touch-sensitive surface coupled to the user device. In response to receiving the handwriting input, the user device provides (910) one or more handwriting recognition results to the user in real time, based on the multi-script handwriting recognition model trained on the spatially-derived features of the multi-script training corpus (912).

In some embodiments, when providing real-time handwriting recognition results to the user, the user device divides (914) the user's handwriting input into one or more recognition units, each recognition unit comprising one or more of the handwriting strokes provided by the user. In some embodiments, the user device divides the user's handwriting input according to the shape, location, and size of the individual strokes created by the contact between the user's finger or stylus and the touch-sensitive surface of the user device. In some embodiments, the division of the handwriting input further takes into account the relative order and relative position of the individual strokes created by the contact between the user's finger or stylus and the touch-sensitive surface of the user device. In some embodiments, the user's handwriting input is in cursive form, and each continuous stroke in the handwriting input may correspond to a number of strokes of the recognized character in printed form. In some embodiments, the user's handwriting input may include a continuous stroke spanning a plurality of recognized characters in printed form. In some embodiments, segmentation of the handwriting input creates one or more input images, each corresponding to an individual recognition unit. In some embodiments, some of the input images optionally include some overlapping pixels. In some embodiments, the input images do not contain any overlapping pixels. In some embodiments, the user device generates a segmentation lattice, and each segmentation chain of the segmentation lattice represents an individual way of segmenting the current handwriting input. In some embodiments, each arc in a segmentation chain corresponds to an individual group of strokes in the current handwriting input.
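
As a toy illustration of purely spatial segmentation (a hypothetical helper in Python, assuming a left-to-right writing direction, strokes given as lists of (x, y) points, and an arbitrary gap threshold), strokes might be grouped into recognition units like this:

    def bounding_box(stroke):
        xs = [x for x, _ in stroke]
        ys = [y for _, y in stroke]
        return min(xs), min(ys), max(xs), max(ys)

    def segment_into_recognition_units(strokes, gap_threshold=20):
        # Group strokes whose horizontal extents overlap or nearly touch; the temporal
        # order in which the strokes were written plays no role in the grouping.
        ordered = sorted(strokes, key=lambda s: bounding_box(s)[0])  # sort by left edge
        units = []
        for stroke in ordered:
            left, _, right, _ = bounding_box(stroke)
            if units and left - units[-1]["right"] <= gap_threshold:
                units[-1]["strokes"].append(stroke)
                units[-1]["right"] = max(units[-1]["right"], right)
            else:
                units.append({"strokes": [stroke], "right": right})
        return [unit["strokes"] for unit in units]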

In process 900, the user device provides (914) an individual image of each of the one or more recognition units as input to the multi-script recognition model. For at least one of the one or more recognition units, the user device obtains (916) from the multi-script handwriting recognition model an output including at least a first output character from a first script and at least a second output character from a second script different from the first script. For example, the same input image may cause the multi-script recognition model to output two or more similar-looking output characters from different scripts as recognition results for the same input image. For instance, handwritten input for the letter "a" in Latin script and the letter "α" in Greek script often looks similar. Likewise, handwritten input for the letter "J" in Latin script and a similar-looking Chinese character often looks similar, and handwritten input for a particular emoji character may resemble handwritten input for the CJK character meaning "west." Providing output characters from more than one script is useful in such cases because the visual appearance of the handwriting input would also be difficult for a human reader to decipher unambiguously. In some embodiments, the first script is the CJK basic character block and the second script is Latin script, as encoded by the Unicode standard. In some embodiments, the first script is the CJK basic character block and the second script is the set of emoji characters. In some embodiments, the first script is Latin script and the second script is the set of emoji characters.

In some embodiments, the user device displays (918) both the first output character and the second output character in the candidate display area of the handwriting input interface of the user device. In some embodiments, the user device selectively displays (920) one of the first output character and the second output character based on whether the first script or the second script is a script used by a soft keyboard currently installed on the user device. For example, suppose the handwriting recognition model identifies a Chinese character and a visually similar Greek letter (e.g., "λ") as output characters for the current handwriting input; the user device then determines whether a Chinese soft keyboard input method or a Greek input keyboard is installed on the device. When the user device determines that only the Chinese soft keyboard is installed, the user device selectively displays the Chinese character as the recognition result to the user and does not display the Greek letter.
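
A minimal sketch of this filtering step (hypothetical Python, with an assumed candidate-to-script mapping) might look as follows:

    # Hypothetical mapping from candidate characters to their scripts (illustrative only).
    SCRIPT_OF = {"入": "chinese", "λ": "greek"}

    def filter_by_installed_keyboards(candidates, installed_keyboard_scripts):
        # Keep a candidate only if its script matches an installed soft keyboard;
        # if no candidate would remain, fall back to showing all of them.
        kept = [c for c in candidates if SCRIPT_OF.get(c) in installed_keyboard_scripts]
        return kept or candidates

    print(filter_by_installed_keyboards(["入", "λ"], {"chinese"}))  # only the Chinese candidate is shown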

In some embodiments, the user device provides real-time handwriting recognition and input. In some embodiments, before the user makes an explicit or implicit selection of a recognition result displayed to the user, the user device continuously revises (922) the one or more recognition results in response to subsequent additions to or revisions of the handwriting input by the user. In some embodiments, in response to each revision of the one or more recognition results, the user device displays (924) the individually revised recognition results to the user in the candidate display area of the handwriting input user interface.

In some embodiments, the multi-script handwriting recognition model is trained (926) to recognize all the characters of at least three non-overlapping scripts, including Chinese characters, emoji characters, and Latin script encoded according to the Unicode standard. In some embodiments, the at least three non-overlapping scripts include Chinese characters, Arabic script, and Latin script. In some embodiments, the multi-script handwriting recognition model has at least 30,000 output classes representing at least 30,000 characters spanning the at least three non-overlapping scripts (928).

In some embodiments, the user device allows the user to enter multi-script handwriting input, such as a phrase containing characters from more than one script. For example, the user can write continuously, without stopping mid-writing to manually switch the recognition language, and still receive handwriting recognition results that include characters from more than one script. For example, a user may enter the multi-script sentence "Hello means [the equivalent Chinese characters] in Chinese." as a single handwriting input: the Chinese characters can be written without switching the input language from English to Chinese, and the English words "in Chinese." can be written without first switching the input language back from Chinese to English.

As described herein, a multi-script handwriting recognition model is used to provide real-time handwriting recognition of a user's input. In some embodiments, real-time handwriting recognition is used to provide real-time multi-script handwriting input functionality on the user's device. FIGS. 10A-10C are flowcharts of an exemplary process 1000 for providing real-time handwriting recognition and input on a user device. Specifically, the real-time handwriting recognition is independent of stroke order at the character level, at the phrase level, and at the sentence level.

In some embodiments, stroke-order independent handwriting recognition at the character level requires that the handwriting recognition model provide the same recognition result for a particular handwritten character regardless of the sequence in which the individual strokes of that character are provided by the user. For example, the individual strokes of a Chinese character are conventionally written in a specific order. Although native speakers of Chinese are often trained in school to write each character in a specific stroke order, many users later adopt personalized styles and stroke sequences that deviate from the conventional stroke order. In addition, cursive writing styles are highly individualized: many strokes of the printed form of a Chinese character are merged into a single stylized stroke, often twisted, and sometimes even connected to the next character. A stroke-order independent recognition model is trained on images of writing samples that carry no temporal information associated with the individual strokes; consequently, recognition is independent of stroke-order information. For example, in the case of the Chinese character "十" (ten), the handwriting recognition model will produce the same recognition result "十" regardless of whether the user wrote the horizontal stroke or the vertical stroke first.

As shown in FIG. 10A, in process 1000, the user device receives (1002) a plurality of handwriting strokes from the user, where the plurality of handwriting strokes corresponds to a handwritten character. For example, the handwriting input for the character "十" typically includes a substantially horizontal handwriting stroke intersecting a substantially vertical handwriting stroke.

In some embodiments, the user device generates (1004) an input image based on the plurality of handwriting strokes. In some embodiments, the user device performs real-time handwriting recognition of the handwritten character by providing the input image to a handwriting recognition model, where the handwriting recognition model provides handwriting recognition that is independent of stroke order. The user device then displays (1008), in real time as the plurality of handwriting strokes is received, an identical first output character (e.g., the character "十" in printed form), regardless of the order in which the plurality of handwriting strokes (e.g., the horizontal stroke and the vertical stroke) was provided.
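
A simplified sketch of this rasterization step (Python with NumPy; point sampling and normalization are assumptions for illustration) shows why the resulting input image is independent of stroke order and direction:

    import numpy as np

    def strokes_to_input_image(strokes, size=48):
        # Rasterize the strokes into a flat binary image. Only the (x, y) positions of
        # the sampled points are drawn, so the same image results for any stroke order,
        # any stroke direction, and any subdivision of a stroke into sub-strokes.
        points = [p for stroke in strokes for p in stroke]
        xs = [x for x, _ in points]
        ys = [y for _, y in points]
        scale = (size - 1) / max(max(xs) - min(xs), max(ys) - min(ys), 1)
        image = np.zeros((size, size), dtype=np.uint8)
        for x, y in points:
            image[int((y - min(ys)) * scale), int((x - min(xs)) * scale)] = 1
        return image

    horizontal = [(x, 24) for x in range(10, 40)]
    vertical = [(24, y) for y in range(10, 40)]
    a = strokes_to_input_image([horizontal, vertical])        # horizontal stroke first
    b = strokes_to_input_image([vertical[::-1], horizontal])  # other order, reversed direction
    assert (a == b).all()                                      # identical input images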

Some conventional handwriting recognition systems allow for small stroke-order variations in a small number of characters, but only by explicitly including such variations in the training of the handwriting recognition system. These conventional handwriting recognition systems cannot be extended to accommodate arbitrary stroke-order variations across a large number of complex characters, such as Chinese characters, because even characters of moderate complexity already give rise to numerous stroke-order permutations. In addition, merely including more permutations of allowable stroke sequences for specific characters does not enable a conventional recognition system to handle handwriting inputs in which multiple strokes are combined into a single stroke (e.g., as in highly cursive writing) or in which one stroke is broken up into multiple sub-strokes (e.g., as in characters captured by very coarse sampling of an input stroke). Thus, a multi-script handwriting system trained on spatially-derived features as described herein provides advantages over conventional recognition systems.

In some embodiments, stroke-order independent handwriting recognition is performed without regard to the temporal information associated with the individual strokes within each handwritten character. In some embodiments, stroke-order independent handwriting recognition is performed in cooperation with temporally-derived stroke-distribution information that takes into account the spatial distribution of the individual strokes before the individual strokes are merged into the flat input image. More details on how temporally-derived stroke-distribution information is used to enhance the stroke-order independent handwriting recognition described above are provided later herein (e.g., with respect to FIGS. 25A-27). The techniques described with respect to FIGS. 25A-27 do not impair the stroke-order independence of the handwriting recognition system.

In some embodiments, the handwriting recognition model provides stroke-direction independent handwriting recognition (1010). In some embodiments, stroke-direction independent recognition means that, in response to receiving the plurality of handwriting strokes, the user device displays the same first output character regardless of the individual direction in which each of the plurality of handwriting strokes was drawn by the user. For example, when the user has written the Chinese character "十" in the handwriting input area of the user device, the handwriting recognition model produces the same recognition result regardless of whether the user drew the horizontal stroke from left to right or from right to left. Similarly, the handwriting recognition model outputs the same recognition result regardless of whether the user drew the vertical stroke in the downward direction or in the upward direction. In another example, many Chinese characters are structurally composed of two or more components. Some Chinese characters include a left component and a right component, and people customarily write the left component first, followed by the right component. In some embodiments, as long as the resulting handwriting input shows the left component to the left of the right component when the user completes the handwriting, the handwriting recognition model provides the same recognition result regardless of whether the user wrote the right component or the left component first. Similarly, some Chinese characters include a top component and a bottom component, and people conventionally write the top component first and then the bottom component. In some embodiments, the handwriting recognition model provides the same recognition result regardless of whether the user wrote the top component or the bottom component first, as long as the resulting handwriting input shows the top component above the bottom component. In other words, the handwriting recognition model does not rely on the direction in which the user provides the individual strokes of the handwritten character to determine the identity of the handwritten character.

In some embodiments, the handwriting recognition model provides handwriting recognition based on the image of the recognition unit, regardless of the number of sub-strokes provided by the user. In other words, in some embodiments, the handwriting recognition model provides stroke-count independent handwriting recognition (1014). In some embodiments, the user device displays the same first output character in response to receiving the plurality of handwriting strokes, regardless of how many handwriting strokes are used to form the continuous strokes of the character in the input image. For example, if the user enters the Chinese character "十" in the handwriting input area, the handwriting recognition model produces the same recognition result regardless of whether the user formed the character with four strokes (e.g., two short horizontal strokes and two short vertical strokes), with two strokes (e.g., an L-shaped stroke and a 7-shaped stroke, or a horizontal stroke and a vertical stroke), or with any other number of strokes (e.g., hundreds of very short strokes or dots).

In some embodiments, the handwriting recognition model not only recognizes each individual character in the same way regardless of the order, direction, and number of strokes with which it is written, but also recognizes multiple characters independently of the temporal order in which the strokes of those characters are provided across the characters.

In some embodiments, not only has the user device received a first plurality of handwriting strokes, it also receives (1016) a second plurality of handwriting strokes from the user, where the second plurality of handwriting strokes corresponds to a second handwritten character. In some embodiments, the user device generates (1018) a second input image based on the second plurality of handwriting strokes. In some embodiments, the user device provides (1020) the second input image to the handwriting recognition model to perform real-time recognition of the second handwritten character. In some embodiments, the user device displays (1022), in real time upon receiving the second plurality of handwriting strokes, a second output character corresponding to the second plurality of handwriting strokes. In some embodiments, the first output character and the second output character are displayed concurrently in a spatial sequence, regardless of the individual order in which the first plurality of handwriting strokes and the second plurality of handwriting strokes were provided by the user. For example, if the user has written two Chinese characters (e.g., "十" and "八") in the handwriting input area of the user device, the recognition result reflects the spatial layout of the currently accumulated handwriting input, not the order in which the user entered the strokes of the character "十" and the strokes of the character "八". In fact, even if the user wrote some of the strokes for the character "八" before some of the strokes for the character "十", the user device will display the recognition result "十八" in the spatial sequence of the two handwritten characters, as long as the input image shows all the strokes for the character "十" to the left of all the strokes for the character "八".

In other words, as shown in FIG. 10B, in some embodiments, the spatial sequence of the first output character and the second output character corresponds (1024) to the spatial distribution of the first plurality of handwriting strokes and the second plurality of handwriting strokes along a default writing direction (e.g., from left to right) of the handwriting input interface of the user device. In some embodiments, the second plurality of handwriting strokes is received temporally after the first plurality of handwriting strokes, yet the second output character precedes (1026) the first output character in the spatial sequence along the default writing direction (e.g., from left to right) of the handwriting input interface of the user device.
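
A toy sketch of this spatial ordering (hypothetical Python; recognition units are assumed to carry an output character and the left edge of their bounding box) is:

    def spatial_sequence(recognition_units, writing_direction="left-to-right"):
        # Concatenate per-unit output characters by their spatial position along the
        # default writing direction, not by the temporal order in which they were written.
        reverse = writing_direction == "right-to-left"
        ordered = sorted(recognition_units, key=lambda u: u["left_edge"], reverse=reverse)
        return "".join(u["character"] for u in ordered)

    units = [
        {"character": "八", "left_edge": 60},  # strokes written first by the user
        {"character": "十", "left_edge": 10},  # strokes written later
    ]
    print(spatial_sequence(units))  # -> "十八", following the layout rather than the timing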

In some embodiments, the handwriting recognition model provides stroke-order independent recognition at the sentence level. For example, if the handwritten character "十" is in a first handwritten sentence and the handwritten character "八" is in a second handwritten sentence, and the two handwritten characters are separated in the handwriting input area by one or more other handwritten characters and/or words, the handwriting recognition model will still provide a recognition result representing the two characters in the spatial sequence "十 ... 八". The recognition result and the spatial sequence of the two recognized characters remain the same regardless of the temporal order in which the two characters were provided by the user, provided that, when the user completes the handwriting input, the strokes of the two characters are arranged spatially in the sequence "十 ... 八". In some embodiments, the first handwritten character (e.g., "十") is provided by the user as part of a first handwritten sentence (e.g., "十 is a number."), the second handwritten character (e.g., "八") is provided by the user as part of a second handwritten sentence (e.g., "八 is another number."), and the first handwritten sentence and the second handwritten sentence are displayed concurrently in the handwriting input area of the user device. In some embodiments, when the user confirms that the recognition result (e.g., "十 is a number. 八 is another number.") is the accurate recognition result, the two sentences are entered into the text input area of the user device, and the handwriting input area is cleared so that the user can provide another handwriting input.

In some embodiments, because the handwriting recognition model is stroke-order independent at the character level as well as at the phrase level and the sentence level, the user can make corrections to a previously incomplete character after subsequent characters have been written. For example, if the user forgets to write a particular stroke of a character before moving on to write one or more subsequent characters in the handwriting input area, the user can still write the missed stroke later, at the correct position within that character, and receive the correct recognition result.

In conventional stroke-order dependent recognition systems (e.g., HMM-based recognition systems), once a character is written it is committed, and the user cannot change it anymore. If the user wishes to make any changes, the user must delete the character and all subsequent characters and start again from the beginning. In some conventional recognition systems, the user is required to complete a handwritten character within a short predetermined time window, and strokes entered outside of that time window are treated as belonging to a different recognition unit than the strokes provided within the time window. Such conventional systems are difficult to use and cause great frustration to the user. A stroke-order independent system does not suffer from such shortcomings: the user can complete a character in any order and within any time frame that the user deems appropriate. The user may also modify a previously written character (e.g., add one or more strokes to it) after subsequently writing one or more characters in the handwriting input interface. In some embodiments, the user may also delete previously written characters individually (e.g., using the methods described later with respect to FIGS. 21A and 21B) and rewrite them at the same location in the handwriting input interface.

As shown in FIGS. 10B and 10C, in some embodiments, the second plurality of handwriting strokes spatially follows the first plurality of handwriting strokes along the default writing direction of the handwriting input interface of the user device, and the second output character spatially follows (1028) the first output character in the spatial sequence along the default writing direction in the candidate display area. The user device then receives (1030) a third handwriting stroke from the user to revise the first handwritten character (i.e., the handwritten character formed by the first plurality of handwriting strokes), after both the first plurality of handwriting strokes and the second plurality of handwriting strokes have been received. For example, the user may have intended to enter two characters, where the first plurality of strokes was meant to form the first character but one stroke was inadvertently omitted, and the second plurality of strokes forms the second character. To complete the intended first character, the user can simply write the missing stroke (e.g., a vertical stroke) at the appropriate position among the strokes already written for the first character, even though the second character has already been written to its right. The user device assigns the newly written stroke to the first recognition unit and outputs a new output character for the first recognition unit, and the new output character replaces the previous output character in the recognition result. As shown in FIG. 10C, in response to receiving the third handwriting stroke, the user device assigns (1032) the third handwriting stroke to the same recognition unit as the first plurality of handwriting strokes, based on the relative proximity of the third handwriting stroke to the first plurality of handwriting strokes. In some embodiments, the user device generates (1034) a revised input image based on the first plurality of handwriting strokes and the third handwriting stroke. The user device provides (1036) the revised input image to the handwriting recognition model to perform real-time recognition of the revised handwritten character. In some embodiments, in response to receiving the third handwriting stroke, the user device displays (1040) a third output character corresponding to the revised input image, where the third output character replaces the first output character and is displayed concurrently with the second output character in the spatial sequence along the default writing direction.
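
A minimal sketch of assigning a late-arriving stroke to the spatially closest recognition unit (hypothetical Python; the distance measure is an assumption) is:

    def centroid(points):
        xs = [x for x, _ in points]
        ys = [y for _, y in points]
        return sum(xs) / len(xs), sum(ys) / len(ys)

    def assign_stroke_to_unit(new_stroke, recognition_units):
        # Attach the newly received stroke to the spatially closest recognition unit,
        # no matter how long ago that unit's other strokes were written.
        sx, sy = centroid(new_stroke)

        def squared_distance(unit):
            ux, uy = centroid([p for stroke in unit for p in stroke])
            return (ux - sx) ** 2 + (uy - sy) ** 2

        closest_unit = min(recognition_units, key=squared_distance)
        closest_unit.append(new_stroke)
        return closest_unit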

In some embodiments, the handwriting recognition module recognizes handwriting input written along a default writing direction from left to right. For example, the user can write characters from left to right, in one or more rows. In response to the handwriting input, the handwriting input module presents recognition results that include the characters in a spatial sequence from left to right and, as needed, in one or more rows. When the user selects a recognition result, the selected recognition result is entered into the text input area of the user device. In some embodiments, the default writing direction is from top to bottom. In some embodiments, the default writing direction is from right to left. In some embodiments, the user optionally changes the default writing direction to an alternative writing direction after a recognition result has been selected and the handwriting input area has been cleared.

In some embodiments, the handwriting input module allows the user to enter multi-character handwriting input in the handwriting input area and to delete the strokes of one recognition unit at a time from the handwriting input. In some embodiments, the handwriting input module allows deletion of one stroke at a time from the handwriting input. In some embodiments, deletion of recognition units proceeds one by one in the direction opposite to the default writing direction, regardless of the order in which the recognition units or strokes were entered to generate the current handwriting input. In some embodiments, deletion of strokes proceeds one stroke at a time, in the reverse of the order in which the strokes were entered within each recognition unit, and once all the strokes in one recognition unit have been erased, deletion proceeds to the next recognition unit in the direction opposite to the default writing direction.
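
A small sketch of this deletion order (hypothetical Python; recognition units are assumed to be kept sorted along the default writing direction, with each unit's strokes stored in the order they were entered) is:

    def delete_last(recognition_units, stroke_wise=False):
        # Delete from the recognition unit furthest along the default writing direction,
        # regardless of the order in which the units or strokes were originally entered.
        if not recognition_units:
            return recognition_units
        last_unit = recognition_units[-1]
        if stroke_wise and last_unit:
            last_unit.pop()                 # remove the most recently entered stroke of that unit
            if not last_unit:
                recognition_units.pop()     # unit fully erased: the previous unit is deleted next
        else:
            recognition_units.pop()         # remove the whole recognition unit at once
        return recognition_units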

In some embodiments, the user device receives a deletion input from the user while the third output character and the second output character are displayed concurrently as a candidate recognition result in the candidate display area of the handwriting input interface. In response to the deletion input, the user device deletes the second output character from the recognition result while retaining the third output character in the recognition result displayed in the candidate display area.

In some embodiments, as shown in FIG. 10C, the user device renders (1042), in real time in the handwriting input area, the first plurality of handwriting strokes, the second plurality of handwriting strokes, and the third handwriting stroke as each of them is provided by the user. In some embodiments, in response to receiving the deletion input from the user, the user device removes (1044) the individual rendering of the second plurality of handwriting strokes (e.g., corresponding to the second handwritten character) from the handwriting input area, while maintaining the individual renderings of the first plurality of handwriting strokes and the third handwriting stroke (e.g., corresponding to the revised first handwritten character). For example, if the user has written a two-character sequence in the handwriting input area, then, when the user provides a deletion input, the strokes of the second character (i.e., the character furthest along the default writing direction) are removed from the handwriting input area, while the strokes of the first character remain. The recognition result in the candidate display area of the user device is updated accordingly: after the deletion, it shows only the character that remains in the handwriting input area.

In some embodiments, the handwritten character is a multi-stroke character. In some embodiments, the first plurality of handwriting strokes is provided in cursive form. In some embodiments, the first plurality of handwriting strokes is provided in cursive form, and the handwritten character is a multi-stroke Chinese character. In some embodiments, the handwritten characters are written in Arabic in cursive form. In some embodiments, the handwritten characters are written in other scripts in cursive form.

In some embodiments, the user device establishes individual predetermined constraints on a set of allowable dimensions for handwritten character input and, based on the individual predetermined constraints, divides the currently accumulated handwriting strokes into a plurality of recognition units, where an individual input image is generated from each recognition unit, provided to the handwriting recognition model, and recognized as a corresponding output character.

In some embodiments, the user device receives an additional handwriting stroke from the user after dividing the currently accumulated plurality of handwriting strokes. The user device assigns the additional handwriting stroke to an individual recognition unit of the plurality of recognition units based on the spatial position of the additional handwriting stroke relative to the plurality of recognition units.

Attention is now directed to exemplary user interfaces for providing handwriting recognition and input on a user device. In some embodiments, the exemplary user interfaces are provided on the user device based on a multi-script handwriting recognition model that provides real-time, stroke-order independent handwriting recognition of the user's handwriting input. In some embodiments, the exemplary user interfaces include the exemplary handwriting input interface 802 (e.g., FIGS. 8A and 8B) comprising a handwriting input area 804, a candidate display area 806, and a text input area 808. In some embodiments, the exemplary handwriting input interface 802 also includes a plurality of control elements 1102, such as a delete button, a space bar, an enter button, a keyboard toggle button, and the like. One or more other areas and/or elements may be provided in the handwriting input interface 802 to enable the additional functions described below.

As described herein, the multi-script handwriting recognition model can have a very large repertoire of tens of thousands of characters across many different scripts and languages. As a result, for a given handwriting input, the recognition model is very likely to identify a large number of output characters that are all reasonably likely to be the character intended by the user. On a user device with a limited display area, it is advantageous to initially present only a subset of the recognition results, while keeping the other results available upon user request.

FIGS. 11A-11G illustrate exemplary user interfaces for displaying a subset of the recognition results in the normal view of the candidate display area, with the possibility of invoking an extended view of the candidate display area to display the remaining recognition results. Further, within the extended view of the candidate display area, the recognition results are divided into different categories and displayed on different tabbed pages.

FIG. 11A illustrates the exemplary handwriting input interface 802. The handwriting input interface includes the handwriting input area 804, the candidate display area 806, and the text input area 808. One or more control elements 1102 are also included in the handwriting input interface 802.

As shown in FIG. 11A, the candidate display area 806 optionally includes an area for displaying one or more recognition results, and an affordance 1104 (e.g., an expansion icon) for invoking an extended version of the candidate display area 806.

FIGS. 11A-11C illustrate that, as the user provides one or more handwriting strokes (e.g., strokes 1106, 1108, 1110), the user device identifies and displays an individual set of recognition results corresponding to the strokes currently accumulated in the handwriting input area 804. As shown in FIG. 11B, after the user enters the first stroke 1106, the user device generates three recognition results 1112, 1114, 1116 (e.g., the characters "/" and "1", and a third similar-looking character) for the single stroke. In some embodiments, a small number of candidate characters is displayed in the candidate display area 806, in the order of the recognition confidence associated with each character.

In some embodiments, the top candidate result (e.g., "/") is provisionally displayed in the text input area 808, e.g., in box 1118. The user can optionally confirm that the top candidate is the intended input by providing a simple confirmation input (e.g., by pressing the "Enter" key or by providing a double-tap gesture in the handwriting input area).

FIG. 11C shows that, as the user enters two more strokes 1108 and 1110 in the handwriting input area 804 before selecting any candidate recognition result, each new stroke is rendered in the handwriting input area 804 and the candidate results are updated to reflect the changes to the recognition unit(s) identified from the currently accumulated handwriting input. As shown in FIG. 11C, based on the three strokes, the user device has identified a single recognition unit. Based on the single recognition unit, the user device has identified and displayed a number of recognition results 1118 through 1124. In some embodiments, one or more of the recognition results (e.g., 1118 and 1122) currently displayed in the candidate display area 806 are representative candidate characters, each selected from a number of similar-looking candidate characters for the current handwriting input.

As shown in FIGS. 11C and 11D, when the user selects the affordance 1104 (e.g., using a tap gesture with a contact 1126 on the affordance 1104), the candidate display area changes from the normal view (e.g., shown in FIG. 11C) to an extended view (e.g., shown in FIG. 11D). In some embodiments, the extended view presents all the recognition results (e.g., candidate characters) identified for the current handwriting input.

In some embodiments, the normal view initially displayed in the candidate display area 806 presents only the most commonly used characters of an individual script or language, while the extended view presents all candidate characters, including characters rarely used in the script or language. The extended view of the candidate display area can be designed in different ways. FIGS. 11D-11G illustrate an exemplary design of the expanded candidate display area, in accordance with some embodiments.

As shown in FIG. 11D, in some embodiments, the expanded candidate display area 1128 includes one or more tabbed pages (e.g., pages 1130, 1132, 1134, 1136). The tabbed design shown in FIG. 11D allows the user to quickly find the desired category of characters and then look within the corresponding tabbed page for the character he or she wishes to enter.

In FIG. 11D, the first tabbed page 1130 displays all the candidate characters identified for the currently accumulated handwriting input, including commonly used characters as well as rare characters. As shown in FIG. 11D, the tabbed page 1130 includes all of the characters shown in the initial candidate display area 806 of FIG. 11C, together with a number of additional characters not shown in the initial candidate display area 806 (e.g., rare characters and characters from other scripts, such as the Greek letter "β").

In some embodiments, the characters displayed in the initial candidate display area 806 are selected from a set of commonly used characters associated with a script (e.g., all characters in the basic block of the CJK script, as encoded according to the Unicode standard). In some embodiments, the characters displayed in the expanded candidate display area 1128 additionally include a set of rare characters associated with the script (e.g., all characters in an extension block of the CJK script, as encoded according to the Unicode standard). In some embodiments, the expanded candidate display area 1128 further includes candidate characters from other scripts not commonly used by the user, such as Greek script, Arabic script, and/or emoji characters.

As shown in FIG. 11D, the expanded candidate display area 1128 includes individual tabbed pages 1130, 1132, 1134, 1136, each of which is associated with an individual category of candidate characters (e.g., all characters, rare characters, characters from Latin script, and characters from the emoji script, respectively). FIGS. 11E-11G illustrate that the user can select each of the different tabbed pages to reveal the candidate characters in the corresponding category. FIG. 11E shows only the rare characters corresponding to the current handwriting input (e.g., characters from an extension block of the CJK script). FIG. 11F shows only the Latin and Greek characters corresponding to the current handwriting input. FIG. 11G shows only the emoji characters corresponding to the current handwriting input.

In some embodiments, the expanded candidate display area 1128 allows the candidate characters within each category to be sorted according to an individual criterion other than recognition confidence (e.g., based on the number of strokes, the radicals, or other properties of the characters). The ability to sort the candidate characters in each category according to criteria other than recognition confidence scores gives the user an additional way to quickly find the desired candidate character for text input.

FIGS. 11H-11K illustrate that, in some embodiments, similar-looking candidate characters are grouped, and only a representative character from each group of similar-looking candidate characters is presented in the initial candidate display area 806. Because the multi-script recognition model described herein can produce many candidate characters that are nearly equally good matches for a given handwriting input, the recognition model cannot always eliminate one candidate in favor of another similar-looking candidate. On a device with a limited display area, simply displaying a large number of similar-looking candidates does not help the user select the correct character, because the fine distinctions between them are not easy to see, and even when the user spots the desired character, it can be difficult to select it from a very dense display using a finger or stylus.

In some embodiments, to address these problems, the user device identifies candidate characters that have a high degree of similarity to one another (e.g., according to a dictionary or index of similar-looking characters, or according to some image-based criteria) and groups them into individual groups. In some embodiments, one or more groups of similar-looking characters are identified from the set of candidate characters for a given handwriting input. In some embodiments, the user device identifies a representative candidate character among the multiple similar-looking candidate characters in the same group and displays only that representative candidate in the initial candidate display area 806. If a candidate character is not sufficiently similar to any other candidate character, it is displayed by itself. In some embodiments, as shown in FIG. 11H, each representative candidate character (e.g., candidate characters 1118 and 1122) is displayed in a visually distinct manner relative to candidate characters that do not represent a group (e.g., candidate characters 1120 and 1124). In some embodiments, the criterion for selecting the representative character of a group is based on the relative frequency of use of the candidate characters in the group. In some embodiments, other criteria may be used.

In some embodiments, once the representative character(s) are displayed to the user, the user can optionally expand the candidate display area 806 to show the similar-looking candidate characters in an expanded view. In some embodiments, selection of a particular representative character causes an expanded view of only those candidate characters in the same group as the selected representative character.
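
A toy sketch of this grouping and representative selection (hypothetical Python; the similarity sets and usage frequencies are assumed stand-ins for a dictionary of similar-looking characters and usage statistics) is:

    def group_similar_candidates(candidates, similar_sets, usage_frequency):
        # Collapse visually similar candidates into groups and pick one representative
        # per group; here the representative is simply the most frequently used member.
        seen = set()
        grouped = []
        for candidate in candidates:
            if candidate in seen:
                continue
            group = {c for c in candidates
                     if c == candidate or any({c, candidate} <= s for s in similar_sets)}
            seen |= group
            ranked = sorted(group, key=lambda c: -usage_frequency.get(c, 0))
            grouped.append((ranked[0], ranked[1:]))  # (representative, peers hidden until expanded)
        return grouped

    print(group_similar_candidates(
        ["人", "入", "八"], similar_sets=[{"人", "入"}], usage_frequency={"人": 900, "入": 200}))
    # -> [('人', ['入']), ('八', [])]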

A variety of designs are possible for providing an expanded view of similar-looking candidates. FIGS. 11H-11K illustrate one embodiment in which the expanded view for a representative candidate character is invoked by a predetermined gesture (e.g., an expansion gesture) detected over the representative candidate character (e.g., representative character 1118). The predetermined gesture (e.g., the expansion gesture) for invoking the expanded view is different from the predetermined gesture (e.g., a tap gesture) for selecting the representative character for text input.

As shown in FIGS. 11H and 11I, when the user provides an expansion gesture over the first representative character 1118 (e.g., as indicated by the two contacts 1138 and 1140 moving away from each other), the area displaying the representative character 1118 is expanded (e.g., into the expanded view 1142), and the three similar-looking characters of the group are presented in an enlarged view compared to the other candidate characters.

As shown in FIG. 11I, when presented in the enlarged view, the three similar-looking characters can be distinguished much more easily by the user. If one of the three candidate characters is the intended character input, the user can select it, for example, with a tap gesture over that character. As shown in FIGS. 11J and 11K, the user selects the second of the three characters (e.g., by the contact 1148). In response, the selected character is entered into the text input area 808 at the insertion position indicated by the cursor. Once the character has been selected, the handwriting input in the handwriting input area 804 and the recognition results in the candidate display area 806 (or in the expanded view of the candidate display area) are cleared for subsequent handwriting input.

In some embodiments, if the user does not see the desired candidate character in the expanded view 1142 of the first representative candidate character, the user can use the same gesture to expand the other representative character(s) displayed in the candidate display area 806. In some embodiments, expanding another representative character in the candidate display area 806 automatically collapses the currently presented expanded view back to its normal view. In some embodiments, the user optionally uses a contraction (e.g., pinch) gesture to restore the current expanded view to the normal view. In some embodiments, the user can scroll the candidate display area 806 (e.g., to the left or right) to reveal other candidate characters that are not currently visible in the candidate display area 806.

FIGS. 12A and 12B are flowcharts of an exemplary process 1200 in which a first subset of recognition results is presented in the initial candidate display area, while a second subset of recognition results is kept in an expanded candidate display area that remains hidden from view until it is specifically invoked by the user. In the exemplary process 1200, the device identifies, from a plurality of handwriting recognition results for a handwriting input, a subset of recognition results whose visual similarity to one another exceeds a predetermined threshold. The user device then selects a representative recognition result from the subset of recognition results and displays the selected representative recognition result in the candidate display area of the display. Process 1200 is illustrated in FIGS. 11A-11K.

As shown in FIG. 12A, in the exemplary process 1200, the user device receives (1202) handwriting input from a user. The handwriting input includes one or more handwriting strokes (e.g., 1106, 1108, 1110 in FIG. 11C) provided in the handwriting input area (e.g., 804 in FIG. 11C) of the handwriting input interface (e.g., 802 in FIG. 11C). The user device identifies (1204), based on the handwriting recognition model, a plurality of output characters for the handwriting input (e.g., the characters shown on the tabbed page 1130 of FIG. 11D). The user device divides (1206) the plurality of output characters into two or more categories based on a predetermined classification criterion. In some embodiments, the predetermined classification criterion is whether an individual character is a commonly used character or a rare character (1208).
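
A minimal sketch of such a classification criterion (hypothetical Python; the character set below is a placeholder for the dictionaries of commonly used and rare characters discussed next) is:

    # Placeholder dictionary; a real device would load this per script or language.
    COMMONLY_USED_CHARACTERS = {"十", "八", "人"}

    def categorize_output_characters(output_characters):
        # The first category goes to the initial candidate view; the rest is deferred
        # to the extended view until the user explicitly invokes it.
        common = [c for c in output_characters if c in COMMONLY_USED_CHARACTERS]
        rare = [c for c in output_characters if c not in COMMONLY_USED_CHARACTERS]
        return {"initial_view": common, "extended_view": rare}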

In some embodiments, the user device displays, in an initial view of the candidate display area (e.g., 806 as shown in FIG. 11C) of the handwriting input interface, the individual output characters in a first category of the two or more categories, where an affordance (e.g., 1104 in FIG. 11C) for invoking an extended view of the candidate display area (e.g., 1128 in FIG. 11D) is initially provided in the initial view of the candidate display area.

In some embodiments, the user device receives (1212) a user input selecting the affordance to invoke the extended view, for example, as shown in FIG. 11C. In response to the user input, the user device displays (1214), in the extended view of the candidate display area, for example as shown in FIG. 11D, the individual output characters in the first category of the two or more categories and the individual output characters in at least a second category that were not previously displayed in the initial view.

In some embodiments, the individual characters in the first category are characters found in a dictionary of commonly used characters, and the individual characters in the second category are characters found in a dictionary of rare characters. In some embodiments, the dictionary of commonly used characters and the dictionary of rare characters are dynamically adjusted or updated based on the usage history associated with the user device.

In some embodiments, the user device identifies (1216), from the plurality of output characters, a group of visually similar characters according to a predetermined similarity criterion (e.g., based on a dictionary of similar-looking characters or based on certain spatially-derived features). In some embodiments, the user device selects a representative character from the group of visually similar characters based on a predetermined selection criterion (e.g., based on the historical frequency of use). In some embodiments, the predetermined selection criterion is based on the relative frequency of use of the characters in the group. In some embodiments, the predetermined selection criterion is based on a preferred input language associated with the device. In some embodiments, the representative candidate is selected based on other factors indicating the probability that each candidate is the input intended by the user. These factors may include, for example, whether the candidate character belongs to a script of a soft keyboard currently installed on the user's device, or whether the candidate character is among the most commonly used characters of a particular language associated with the user or the user device.

In some embodiments, the user device displays (1220) the selected representative character in the initial view of the candidate display area, while the other characters in the group of visually similar characters are not displayed. In some embodiments, a visual indication is provided in the initial view of the candidate display area to indicate whether each candidate character is the representative character of a group or a general candidate character that does not belong to any group. In some embodiments, for example as shown in FIG. 11H, the user device receives (1222) from the user a predetermined expansion input (e.g., an expansion gesture) directed to the representative character displayed in the initial view of the candidate display area. In some embodiments, for example as shown in FIG. 11I, in response to receiving the predetermined expansion input, the user device simultaneously displays (1224) an enlarged view of the representative character and individual enlarged views of the one or more other characters in the group of visually similar characters.

In some embodiments, the predetermined expansion input is an expansion gesture detected over the representative character displayed in the candidate display area. In some embodiments, the predetermined expansion input is a contact that is detected over the representative character displayed in the candidate display area and that lasts longer than a predetermined threshold duration. In some embodiments, the persistent contact for expanding a group has a longer threshold duration than the tap gesture for selecting the representative character for text input.

In some embodiments, each representative character is displayed together with an individual affordance (e.g., an individual expansion button) for invoking the expanded view of its group of similar-looking candidate characters. In some embodiments, the predetermined expansion input is a selection of the individual affordance associated with the representative character.

As described herein, in some embodiments, the repertoire of the multi-script handwriting recognition model includes the emoji script. The handwriting input module can recognize emoji characters based on the user's handwriting input. In some embodiments, the handwriting recognition module presents both an emoji character identified directly from the handwriting input and the character or word in a natural human language that represents the identified emoji character. In some embodiments, the handwriting input module recognizes a character or word in a natural human language based on the user's handwriting input and presents both the recognized character or word and an emoji character corresponding to the recognized character or word. In other words, the handwriting input module provides a way to enter emoji characters without switching from the handwriting input interface to an emoji keyboard. In addition, the handwriting input module also provides a way to enter normal natural-language characters and words by drawing an emoji character by hand. FIGS. 13A-13E provide exemplary user interfaces illustrating these different ways of entering emoji characters and normal natural-language characters.

FIG. 13A shows an exemplary handwriting input interface 802 invoked within a chat application. The handwriting input interface 802 includes a handwriting input area 804, a candidate display area 806, and a text input area 808. In some embodiments, once the user is satisfied with the text composed in the text input area 808, the user can choose to send the composed text to another participant in the current chat session. The conversation history of the chat session is displayed in the dialogue panel 1302. In this example, the user has received a chat message 1304 (e.g., "Happy Birthday" followed by an emoji character), which is shown in the dialogue panel 1302.

As shown in FIG. 13B, the user has provided handwriting input 1306 for the English word "Thanks" in the handwriting input area 804. In response to the handwriting input 1306, the user device has identified a number of candidate recognition results (e.g., recognition results 1308, 1310, 1312). The top recognition result 1308 is provisionally entered into the text input area 808 in the box 1314.

As shown in FIG. 13C, after entering the handwritten word "Thanks" in the handwriting input area 806, the user then draws a stylized exclamation point (e.g., an elongated shape with a small circle below it) using additional strokes 1316. The user device recognizes that the additional strokes 1316 form a recognition unit separate from the other recognition units previously identified from the handwriting strokes 1306 accumulated in the handwriting input area 806. Based on the newly entered recognition unit (i.e., the recognition unit formed by the strokes 1316), the user device uses the handwriting recognition model to identify an emoji character (e.g., a stylized "!"). Based on this recognized emoji character, the user device presents a first recognition result 1318 (e.g., "Thanks!" with the stylized "!") in the candidate display area 806. Additionally, the user device also identifies the number "8", which is visually similar to the newly entered recognition unit. Based on this recognized number, the user device presents a second recognition result 1322 (e.g., "Thanks 8") in the candidate display area 806. Additionally, based on the identified emoji character (e.g., the stylized "!"), the user device also identifies a regular character (e.g., the regular character "!") corresponding to the emoji character. Based on this indirectly identified regular character, the user device presents a third recognition result 1320 (e.g., "Thanks!" with the regular character "!") in the candidate display area 806. At this point, the user can select any one of the candidate recognition results 1318, 1320, 1322 to enter it into the text input area 808.

As shown in FIG. 13D, the user continues to provide additional handwriting strokes 1324 in the handwriting input area 806. This time, the user has drawn a heart symbol following the stylized exclamation point. In response to the new handwriting strokes 1324, the user device recognizes that the newly provided handwriting strokes 1324 form another new recognition unit. Based on the new recognition unit, the user device identifies the heart emoji character as a candidate character for the new recognition unit and, alternatively, the number "0". Based on these new candidate characters recognized from the new recognition unit, the user device presents two updated candidate recognition results 1326 and 1330 (e.g., "Thanks! ♥" and "Thanks 80"). In some embodiments, the user device further identifies the regular character(s) or word(s) (e.g., "Love") corresponding to the identified emoji character (e.g., the heart emoji character). Based on the regular character(s) or word(s) identified for the recognized emoji characters, the user device presents a third recognition result 1328 in which the recognized emoji characters are replaced by the corresponding regular character(s) or word(s). As shown in FIG. 13D, in the recognition result 1328, the stylized exclamation emoji character is replaced by the regular exclamation point "!", and the heart emoji character is replaced by the word "Love".

As shown in FIG. 13E, the user has selected one of the candidate recognition results (e.g., the candidate result 1326 showing the mixed-script text "Thanks! ♥"), and the text of the selected recognition result is entered into the text input area 808 and subsequently transmitted to the other participant of the chat session. The message bubble 1332 shows the text of the message in the dialogue panel 1302.

FIG. 14 is a flow chart of an exemplary process 1400 in which a user enters an emoji character using handwriting input. FIGS. 13A-13E illustrate the exemplary process 1400 in accordance with some embodiments.

In the process 1400, the user device receives a handwriting input from the user (1402). The handwriting input includes a plurality of handwritten strokes provided in the handwriting input area of the handwriting input interface. In some embodiments, the user device recognizes a plurality of output characters from the handwriting input based on the handwriting recognition model (1404). In some embodiments, the output characters include at least a first emoji character (e.g., the stylized exclamation point or the "♥" emoji character in FIG. 13D) and a first character from a script of a natural human language (e.g., a letter from the word "Thanks" in FIG. 13D). In some embodiments, for example as shown in FIG. 13D, the user device displays, in the candidate display area of the handwriting input interface, a first recognition result (e.g., result 1326 in FIG. 13D) that includes both the first emoji character and the first character from the script of the natural human language (1406).

In some embodiments, based on the handwriting recognition model, the user device optionally recognizes (1408) at least a first semantic unit (e.g., the word "Thanks") from the handwriting input, where a semantic unit includes an individual letter, word, or phrase that can convey an individual semantic meaning in an individual human language. In some embodiments, the user device identifies (1410) a second emoji character (e.g., a "handshake" emoji character) associated with the first semantic unit (e.g., the word "Thanks"). In some embodiments, the user device displays, in the candidate display area of the handwriting input interface, a second recognition result that includes at least the second emoji character identified from the first semantic unit (e.g., a result in which the stylized "!" and "♥" emoji characters follow the "handshake" emoji character). In some embodiments, displaying the second recognition result includes displaying the second recognition result concurrently with a third recognition result that includes at least the first semantic unit (e.g., the recognition result "Thanks!").

In some embodiments, the user device receives a user input selecting the first recognition result displayed in the candidate display area. In some embodiments, the user device, in response to the user input, enters the text of the selected first recognition result into the text input area of the handwriting input interface, where the text includes at least the first emoji character and the first character from the script of the natural human language. In other words, the user can enter a mixed-script text input using a single handwriting input (albeit a handwriting input comprising multiple strokes) in the handwriting input area, without switching between the natural-language keyboard and the emoji-character keyboard.

In some embodiments, the handwriting recognition model has been trained on a multi-script training corpus that includes writing samples corresponding to characters of at least three non-overlapping scripts, the three non-overlapping scripts including the set of emoji characters, Latin script, and a third script of a natural human language (e.g., Chinese characters).

In some embodiments, the user device identifies (1414) a second semantic unit (e.g., the word "Love") corresponding to the first emoji character (e.g., the "♥" emoji character). In some embodiments, the user device displays, in the candidate display area, a fourth recognition result (e.g., result 1328 of FIG. 13D) that includes at least the second semantic unit (e.g., the word "Love") identified from the first emoji character (1416). In some embodiments, as shown in FIG. 13D, the user device displays the fourth recognition result (e.g., result 1328, "Thanks! Love") in the candidate display area concurrently with the first recognition result (e.g., "Thanks! ♥").

In some embodiments, the user device allows the user to enter regular text by drawing an emoji character. For example, if the user does not know how to spell the word "elephant", the user may optionally draw a stylized emoji character for "elephant" in the handwriting input area, and the user device optionally also presents the plain-text word "elephant" as one of the recognition results displayed in the candidate display area. In another example, instead of writing the Chinese character for "cat", the user can draw a stylized cat in the handwriting input area. If the user device identifies an emoji character for "cat" based on the handwriting input provided by the user, the user device optionally also presents the Chinese character for "cat" as one of the recognition results. By presenting plain text for the recognized emoji character, the user device provides an alternative way of entering complex characters or words using a few stylized strokes commonly associated with well-known emoji characters. In some embodiments, the user device maintains associations between emoji characters and their corresponding regular text (e.g., letters, words, phrases, symbols, etc.) in one or more preferred scripts or languages.
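
By way of illustration only, the following sketch shows one way recognition candidates could be augmented with the regular-text equivalents of recognized emoji characters. The mapping table, candidate format, and function names below are assumptions for illustration and are not the implementation described in the embodiments.

    # Minimal sketch: expanding candidate results so that recognized emoji
    # characters also yield plain-text alternatives. The mapping table and
    # names are illustrative assumptions.

    EMOJI_TO_TEXT = {
        "\u2764": "Love",          # heart emoji -> word "Love"
        "\U0001F418": "elephant",  # elephant emoji -> word "elephant"
    }

    def augment_with_plain_text(candidates):
        """For each candidate string, add a variant in which every recognized
        emoji token is replaced by its associated regular text."""
        augmented = list(candidates)
        for candidate in candidates:
            tokens = candidate.split(" ")
            replaced = [EMOJI_TO_TEXT.get(tok, tok) for tok in tokens]
            variant = " ".join(replaced)
            if variant != candidate and variant not in augmented:
                augmented.append(variant)
        return augmented

    # Example: a heart emoji recognized after "Thanks!" also produces "Thanks! Love".
    print(augment_with_plain_text(["Thanks! \u2764", "Thanks 80"]))

Under these assumptions, the additional plain-text candidate is simply appended to the list of recognition results shown in the candidate display area, mirroring result 1328 in FIG. 13D.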

In some embodiments, the user device recognizes the emoji character based on the visual similarity of the emoji character to an image generated from the handwriting input. In some embodiments, the handwriting recognition model used on the user device to enable recognition of emoji characters from a handwriting input has been trained on a corpus that includes both handwriting samples corresponding to characters of natural human language scripts and handwriting samples corresponding to a set of artificially designed emoji characters. In some embodiments, emoji characters associated with the same semantic concept may have different appearances when used in mixed input with text of different natural languages. For example, an emoji character for the semantic concept of "Love" may be a "heart" emoji character when presented with the plain text of one natural language (e.g., Japanese) and a "kiss" emoji character when presented with the plain text of another natural language (e.g., English or French).

As described herein, when performing recognition of a multi-character handwriting input, the handwriting input module performs segmentation of the handwriting input currently accumulated in the handwriting input area and divides the accumulated strokes into one or more recognition units. One of the parameters used to determine how to segment the handwriting input may be the distance between different clusters of strokes and the manner in which the strokes are clustered in the handwriting input area. Different people have different writing styles. Some people tend to write very loosely, with large distances between strokes or between different parts of the same character, while others tend to write very tightly, with very small distances between strokes or between different characters. Even for the same user, due to imperfect planning, handwritten characters can lose a balanced appearance and can be shifted, stretched, or squeezed in different ways. As described herein, the multi-script handwriting recognition model provides stroke-order independent recognition, so that the user can write characters, or portions of characters, non-sequentially. As a result, spatial uniformity and balance between characters in a handwriting input can be difficult to maintain.
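
As a rough illustration of such distance-based segmentation, the sketch below groups strokes into recognition units whenever the horizontal gap between neighboring stroke bounding boxes exceeds a threshold. The data layout, threshold value, and function names are assumptions for illustration only, not the segmentation procedure of the embodiments.

    # Minimal sketch of gap-based segmentation of handwritten strokes into
    # recognition units. Threshold and data layout are illustrative assumptions.

    def bounding_box(stroke):
        """A stroke is a list of (x, y) points; return (x_min, x_max)."""
        xs = [x for x, _ in stroke]
        return min(xs), max(xs)

    def segment_into_recognition_units(strokes, gap_threshold=30.0):
        """Sort strokes left to right and start a new recognition unit whenever
        the horizontal gap to the previous unit exceeds gap_threshold."""
        ordered = sorted(strokes, key=lambda s: bounding_box(s)[0])
        units = []
        for stroke in ordered:
            x_min, x_max = bounding_box(stroke)
            if units and x_min - units[-1]["right"] <= gap_threshold:
                units[-1]["strokes"].append(stroke)
                units[-1]["right"] = max(units[-1]["right"], x_max)
            else:
                units.append({"strokes": [stroke], "right": x_max})
        return [u["strokes"] for u in units]

    # Two tightly spaced strokes merge into one unit; a distant stroke forms another.
    strokes = [[(0, 0), (10, 20)], [(15, 0), (25, 20)], [(120, 0), (140, 20)]]
    print([len(u) for u in segment_into_recognition_units(strokes)])  # [2, 1]

Because such a gap threshold cannot match every writing style, the pinch and expansion gestures described below give the user a way to override the automatic segmentation.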

In some embodiments, the handwriting input interface described herein provides a means for the user to notify the handwriting input module whether two adjacent recognition units should be merged into a single recognition unit, or a single recognition unit should be divided into two separate recognition units. With this user guidance, the handwriting input module can revise its initial segmentation and produce the result intended by the user.

FIGS. 15A-15J illustrate some exemplary user interfaces and processes in which a user provides predetermined pinch and expansion gestures to modify the recognition units identified by the user device.

As shown in FIGS. 15A and 15B, the user has entered a plurality of handwritten strokes 1502 (e.g., three strokes) in the handwriting input area 806 of the handwriting input interface 802. The user device identifies a single recognition unit based on the currently accumulated handwritten strokes 1502 and presents three candidate characters 1504, 1506, 1508 (e.g., the character rendered here as "width" and two other visually similar characters, one of which is shown by the glyph image Figure 112018083390499-pat00041).

FIG. 15C shows that the user has entered several additional strokes 1510 to the right of the initial handwritten strokes 1502 in the handwriting input area 806. The user device determines (e.g., based on the dimensions and spatial distributions of the plurality of strokes 1502, 1510) that the strokes 1502 and the strokes 1510 should be treated as two separate recognition units. Based on this division into recognition units, the user device provides input images of the first and second recognition units to the handwriting recognition model and obtains two sets of candidate characters. The user device then generates a plurality of recognition results (e.g., 1512, 1514, 1516, 1518) based on different combinations of the recognized characters. Each recognition result includes a recognized character for the first recognition unit and a recognized character for the second recognition unit. As shown in FIG. 15C, each of the plurality of recognition results 1512, 1514, 1516, 1518 includes two recognized characters.

In this example, although the user actually intended the handwriting input to be recognized as a single character (e.g., the character for "cap"), he or she wrote the left part and the right part of the handwritten character too far apart. Looking at the results (e.g., 1512, 1514, 1516, 1518) presented in the candidate display area 806, the user will notice that the user device has incorrectly divided the current handwriting input into two recognition units.

Although the segmentation may be based on objective standards, it would be undesirable to require the user to delete the current handwriting input and rewrite the entire character with a smaller distance between the left and right portions.

Alternatively, as shown in FIG. 15D, the user can use a pinch gesture over the two clusters of handwritten strokes 1502 and 1510 to indicate to the handwriting input module that the two recognition units it has identified should be merged into a single recognition unit. The pinch gesture is indicated by two contacts 1520 and 1522 that move toward each other on the touch-sensitive surface.

FIG. 15E shows that, in response to the user's pinch gesture, the user device has revised the segmentation of the currently accumulated handwriting input (e.g., strokes 1502, 1510) and merged the handwritten strokes into a single recognition unit. As shown in FIG. 15E, the user device provides an input image based on the revised recognition unit to the handwriting recognition model and generates three new candidate characters 1524, 1526, 1528 (e.g., the character for "hat" and two visually similar characters, one of which is shown by the glyph image Figure 112018083390499-pat00042). In some embodiments, as shown in FIG. 15E, the user device optionally re-renders the handwriting input in the handwriting input area 806 such that the distance between the left and right clusters of strokes is reduced. In some embodiments, the user device does not change the rendering of the handwriting input shown in the handwriting input area 806 in response to the pinch gesture. In some embodiments, the user device distinguishes the pinch gesture from input strokes based on the two simultaneous contacts detected in the handwriting input area 806 (as opposed to a single contact).

As shown in FIG. 15F, the user has entered two more strokes 1530 to the right of the previously entered handwriting input (i.e., the strokes for the character "hat"). The user device determines that the newly entered strokes 1530 form a new recognition unit and recognizes candidate characters (e.g., the character rendered here as "child") for the newly identified recognition unit. The user device then combines the newly identified characters with the candidate characters for the previously recognized recognition unit and generates a number of different recognition results (e.g., results 1532, 1534).

As shown in FIG. 15G, following the handwritten strokes 1530, the user continues to write more strokes 1536 (e.g., three more strokes) to the right of the strokes 1530. Because the horizontal distance between the strokes 1530 and the strokes 1536 is very small, the user device determines that the strokes 1530 and the strokes 1536 belong to the same recognition unit and provides the input image formed by the strokes 1530 and 1536 to the handwriting recognition model. The handwriting recognition model identifies three different candidate characters for the revised recognition unit, and the user device generates two revised recognition results 1538, 1540 for the currently accumulated handwriting input.

In this example, it is assumed that the last two sets of strokes 1530 and 1536 were in fact intended as two separate characters (e.g., "child" and "±"). After the user notices that the user device has incorrectly combined the two sets of strokes 1530 and 1536 into a single recognition unit, the user can use an expansion gesture to instruct the user device to divide the two sets of strokes 1530 and 1536 into two separate recognition units. As shown in FIG. 15H, after placing two contacts 1542 and 1544 around the strokes 1530 and 1536, the user moves the two contacts apart in a generally horizontal direction (i.e., along the default writing direction).

FIG. 15I illustrates that, in response to the user's expansion gesture, the user device revises the previous segmentation of the currently accumulated handwriting input and assigns the strokes 1530 and the strokes 1536 to two consecutive recognition units. Based on the input images generated for the two separate recognition units, the user device generates one or more candidate characters for the first recognition unit based on the strokes 1530, and one or more candidate characters for the second recognition unit based on the strokes 1536. The user device then generates two new recognition results 1546, 1548 based on different combinations of the recognized characters. In some embodiments, the user device optionally modifies the rendering of the strokes 1530 and 1536 to reflect the division of the previously identified recognition unit.

As shown in FIGS. 15J and 15K, the user has selected one of the candidate recognition results displayed in the candidate display area 806 (as indicated by the contact 1550), and the selected recognition result (e.g., result 1548) has been entered into the text input area 808 of the user interface. After the selected recognition result is entered into the text input area 808, both the candidate display area 806 and the handwriting input area 804 are cleared and ready for subsequent user input.

FIGS. 16A and 16B are flow charts of an exemplary process 1600 in which a user uses a predetermined gesture (e.g., a pinch gesture and/or an expansion gesture) to notify the handwriting input module of the manner in which an existing segmentation of the current handwriting input should be merged or divided. FIGS. 15A-15K provide illustrations of the exemplary process 1600 in accordance with some embodiments.

In some embodiments, the user device receives a handwriting input from the user (1602). The handwriting input includes a plurality of handwritten strokes provided on a touch-sensitive surface coupled to the device. In some embodiments, the user device renders (1604) the plurality of handwritten strokes in real time in the handwriting input area of the handwriting input interface (e.g., handwriting input area 806 of FIGS. 15A-15K). For example, as shown in FIGS. 15D and 15H, the user device receives one of a pinch gesture input and an expansion gesture input over the plurality of handwritten strokes.

In some embodiments, for example as shown in FIGS. 15C-15E, upon receiving a pinch gesture input, the user device treats the plurality of handwritten strokes as a single recognition unit and generates a first recognition result based on the plurality of handwritten strokes (1606).

In some embodiments, for example as shown in FIGS. 15G-15I, upon receiving an expansion gesture input, the user device treats the plurality of handwritten strokes as two separate recognition units and generates a second recognition result based on the plurality of handwritten strokes (1608).

In some embodiments, for example as shown in FIGS. 15E and 15I, the user device displays the generated recognition result (e.g., the first recognition result or the second recognition result) in the candidate display area of the handwriting input interface.

In some embodiments, the pinch gesture input includes two simultaneous contacts that converge toward each other in an area occupied by the plurality of handwritten strokes on the touch-sensitive surface. In some embodiments, the expansion gesture input includes two simultaneous contacts that move apart from each other in an area occupied by the plurality of handwritten strokes on the touch-sensitive surface.
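
A minimal sketch of how such gesture inputs could be classified from two simultaneous contacts is given below; the data representation, threshold, and function names are illustrative assumptions rather than the detection logic described in the embodiments.

    # Minimal sketch: classifying a two-contact gesture as "pinch" or "expand"
    # by comparing the distance between the contacts at the start and end of
    # the gesture. Threshold and names are illustrative assumptions.

    import math

    def distance(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def classify_two_contact_gesture(track_a, track_b, min_change=20.0):
        """track_a and track_b are lists of (x, y) samples for the two contacts."""
        start = distance(track_a[0], track_b[0])
        end = distance(track_a[-1], track_b[-1])
        if end < start - min_change:
            return "pinch"    # contacts converged: merge adjacent recognition units
        if end > start + min_change:
            return "expand"   # contacts diverged: split a recognition unit
        return "none"

    # Two contacts that move toward each other are classified as a pinch.
    a = [(100, 50), (120, 50), (140, 50)]
    b = [(200, 50), (180, 50), (160, 50)]
    print(classify_two_contact_gesture(a, b))  # pinch

Because the gesture always involves two simultaneous contacts, it is readily distinguished from an ordinary single-contact handwriting stroke, as noted above.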

In some embodiments, the user device identifies two adjacent recognition units from the plurality of handwritten strokes (1614). For example, as shown in FIG. 15C, the user device displays (1616), in the candidate display area, an initial recognition result (e.g., results 1512, 1514, 1516, 1518 in FIG. 15C) that contains the individual characters recognized from the two adjacent recognition units. In some embodiments, when displaying the first recognition result (e.g., result 1524, 1526, or 1528 in FIG. 15E) in response to the pinch gesture, the user device replaces the initial recognition result with the first recognition result (1618). In some embodiments, as shown in FIG. 15D, the user device receives the pinch gesture input while the initial recognition result is displayed in the candidate display area (1620). In some embodiments, for example as shown in FIG. 15E, the user device responds to the pinch gesture input by re-rendering the plurality of handwritten strokes so as to reduce the distance between the two adjacent recognition units in the handwriting input area (1622).

In some embodiments, the user device identifies (1624) a single recognition unit from the plurality of handwritten strokes. In the candidate display area, the user device displays (1626) an initial recognition result (e.g., result 1538 or 1540 of FIG. 15G) that includes the character recognized from the single recognition unit (e.g., the character shown by the glyph image Figure 112018083390499-pat00043). In some embodiments, when displaying the second recognition result (e.g., result 1546 or 1548 in FIG. 15I) in response to the expansion gesture, the user device replaces the initial recognition result (e.g., result 1538 or 1540) with the second recognition result (e.g., result 1546 or 1548) (1628). In some embodiments, as shown in FIG. 15H, the user device receives the expansion gesture input while the initial recognition result is displayed in the candidate display area (1630). In some embodiments, as shown in FIGS. 15H and 15I, the user device, in response to the expansion gesture input, re-renders the plurality of handwritten strokes in the handwriting input area so as to increase the separation between a first subset of the handwritten strokes assigned to the first recognition unit and a second subset of the handwritten strokes assigned to the second recognition unit (1632).

In some embodiments, shortly after the user has provided a plurality of strokes and noticed that the strokes may be spread too widely for the standard segmentation process to produce an accurate segmentation, the user can optionally provide a pinch gesture to inform the user device that the plurality of strokes should be treated as a single recognition unit. The user device can distinguish the pinch gesture from regular strokes based on the two simultaneous contacts that exist during the pinch gesture. Similarly, in some embodiments, shortly after the user has provided a plurality of strokes and noticed that the strokes may be too close together for the standard segmentation process to produce an accurate segmentation, the user can optionally provide an expansion gesture to inform the user device that the plurality of strokes should be treated as two separate recognition units. The user device can distinguish the expansion gesture from regular strokes based on the two simultaneous contacts that exist during the expansion gesture.

In some embodiments, the direction of movement of the pinch or expansion gesture is optionally used to provide additional guidance on how to segment the strokes under the gesture. For example, when multi-line handwriting input is enabled for the handwriting input area, a pinch gesture with two contacts moving in a vertical direction can notify the handwriting input module that two recognition units identified on two adjacent lines (e.g., as upper and lower clusters) should be merged into a single recognition unit. Similarly, an expansion gesture with two contacts moving in a vertical direction can notify the handwriting input module that a single recognition unit should be divided into two recognition units on two adjacent lines. In some embodiments, the pinch and expansion gestures can also be used for segmentation guidance at the sub-character level, e.g., to merge or divide different portions of a composite character (e.g., its upper, lower, left, or right portions, such as the component shown by the glyph image Figure 112018083390499-pat00044). This is particularly helpful for recognizing complex compound Chinese characters, because users tend to lose accurate proportions and balance when writing complex compound characters by hand. The ability to adjust how the handwriting input is segmented after its completion, e.g., by pinch and expansion gestures, is especially helpful because it allows the user to enter the correct characters without making repeated attempts to achieve accurate proportions and balance.

As described herein, the handwriting input module allows a user to enter multi-character handwriting input, spanning multiple characters, and even multiple phrases, sentences, and/or lines, in the handwriting input area, and permits non-sequential strokes within a character of the multi-character handwriting input. In some embodiments, the handwriting input module also provides character-by-character deletion in the handwriting input area, where the order of character deletion is the reverse of the writing direction, regardless of the temporal order in which the strokes for each character were written in the handwriting input area. In some embodiments, the deletion of each recognition unit (e.g., a letter or a digit) in the handwriting input area is optionally performed on a stroke-by-stroke basis, where the strokes are deleted in the reverse of the temporal order in which they were provided within the recognition unit. FIGS. 17A-17H illustrate exemplary user interfaces for responding to a deletion input from a user and providing character-by-character deletion of a multi-character handwriting input.
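
A compact sketch of this deletion order, under assumed data structures (a recognition unit holding its strokes in the temporal order written, and units kept in spatial order along the writing direction), might look as follows; it is illustrative only.

    # Minimal sketch: character-by-character deletion in reverse spatial order,
    # with optional stroke-by-stroke deletion inside the last recognition unit.
    # Data structures and names are illustrative assumptions.

    class RecognitionUnit:
        def __init__(self, strokes):
            self.strokes = list(strokes)  # strokes kept in the temporal order written

    def delete_last_unit(units):
        """Remove the spatially last recognition unit (end of the writing direction)."""
        if units:
            units.pop()

    def delete_last_stroke(units):
        """Remove the most recently written stroke of the spatially last unit."""
        if units and units[-1].strokes:
            units[-1].strokes.pop()
            if not units[-1].strokes:
                units.pop()

    # Example: three units; one deletion removes the rightmost unit regardless of
    # the order in which its strokes were written.
    line = [RecognitionUnit(["s1", "s2"]), RecognitionUnit(["s3"]), RecognitionUnit(["s4", "s5"])]
    delete_last_unit(line)
    print(len(line))  # 2

The key design point mirrored here is that deletion walks the spatial sequence backwards, not the temporal sequence in which strokes happened to be written.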

As shown in FIG. 17A, the user has provided a plurality of handwritten strokes 1702 in the handwriting input area 804 of the handwriting input user interface 802. Based on the currently accumulated strokes 1702, the user device presents three recognition results (e.g., results 1704, 1706, 1708) in the candidate display area 806. As shown in FIG. 17B, the user has provided an additional plurality of strokes 1710 in the handwriting input area 806. The user device recognizes three new output characters and replaces the three previous recognition results 1704, 1706, 1708 with three new recognition results 1712, 1714, 1716. In some embodiments, even though the user device has identified two separate recognition units from the current handwriting input (e.g., strokes 1702 and 1710), as shown in FIG. 17B, the cluster of strokes 1710 does not correspond well to any known character in the repertoire of the handwriting recognition model. As a result, none of the candidate characters identified for the recognition unit formed by the strokes 1710 (e.g., the character shown by the glyph image Figure 112018083390499-pat00045) satisfies a predetermined confidence threshold, and the user device optionally presents, in the candidate display area 806, a partial recognition result (e.g., result 1712) that includes only the candidate character (e.g., "day") for the first recognition unit and does not include any candidate character for the second recognition unit. In some embodiments, the user device presents recognition results (e.g., results 1714 or 1716) that include candidate characters for both recognition units, regardless of whether the recognition confidence has passed the predetermined threshold. In addition, the partial recognition result allows the user to first enter the correctly recognized portion of the handwriting input and then rewrite the portion that was not correctly recognized.

FIG. 17C shows that the user has provided an additional handwritten stroke 1718 to the left of the strokes 1710. Based on the relative position and distance of the stroke 1718, the user device determines that the newly added stroke belongs to the same recognition unit as the cluster of handwritten strokes 1702. Based on the revised recognition units, a new character (e.g., the character shown by the glyph image Figure 112018083390499-pat00046) is recognized for the first recognition unit, and new recognition results 1720, 1722, 1724 are generated. Again, since none of the candidate characters identified for the strokes 1710 meets the predetermined confidence threshold, the first recognition result 1720 is a partial recognition result.

FIG. 17D illustrates that the user has now entered a plurality of new strokes 1726 between the strokes 1702 and the strokes 1710. The user device assigns the newly entered strokes 1726 to the same recognition unit as the strokes 1710. The user device now recognizes two Chinese characters (e.g., the characters shown by the glyph image Figure 112018083390499-pat00047) from the handwriting input, and the correct recognition result 1728 is displayed in the candidate display area 806.

FIG. 17E illustrates that the user has entered an initial portion of a deletion input, for example by making a light touch 1730 on the delete button 1732. While the user maintains the contact with the delete button 1732, the user can delete the current handwriting input character by character (or recognition unit by recognition unit); the deletion is not performed on all of the handwriting input at once.

In some embodiments, when the user's finger first touches the delete button 1732 on the touch-sensitive screen, the last recognition unit along the default writing direction (e.g., from left to right), e.g., the recognition unit for the character shown by the glyph image Figure 112018083390499-pat00048, is visually emphasized (e.g., highlighted with a border 1734 or an illuminated background, etc.) relative to the other recognition unit(s) that are simultaneously displayed in the handwriting input area 804.

In some embodiments, when the user device detects that the user has maintained the contact 1730 on the delete button 1732 for longer than a threshold duration, the user device removes the highlighted recognition unit (e.g., the one in box 1734) from the handwriting input area. Additionally, as shown in FIG. 17F, the user device also revises the recognition results shown in the candidate display area 806 to delete any output characters generated based on the deleted recognition unit.

FIG. 17F shows that, when the user continues to maintain the contact 1730 on the delete button 1732 after the last recognition unit in the handwriting input area 806 has been deleted, the adjacent recognition unit (e.g., the recognition unit for the character shown by the glyph image Figure 112018083390499-pat00049) becomes the next recognition unit to be deleted. As shown in FIG. 17F, this remaining recognition unit is visually emphasized (e.g., in box 1736) and is ready to be deleted. The visual highlighting of the recognition unit provides a preview of the recognition unit that will be deleted if the user continues to keep the contact on the delete button. If the user lifts the contact before the threshold duration is reached, the visual emphasis is removed from that recognition unit and the recognition unit is not deleted. As will be appreciated by those skilled in the art, the duration of the contact is reset each time a recognition unit is deleted. Additionally, in some embodiments, the contact intensity (e.g., the pressure applied by the user through the contact 1730 against the touch-sensitive screen) is optionally used to trigger removal of the currently highlighted recognition unit. FIGS. 17F and 17G show that the user has lifted the contact 1730 from the delete button 1732 before reaching the threshold duration, so that the recognition unit for the character shown by the glyph image Figure 112018083390499-pat00050 remains in the handwriting input area 806, and the recognition result (e.g., result 1738) for that recognition unit remains displayed in the candidate display area. As shown in FIGS. 17G and 17H, when the user selects the first recognition result 1738 (as indicated by the contact 1740), the text of the first recognition result 1738 is entered into the text input area 808.

FIGS. 18A and 18B are flow charts of an exemplary process 1800 in which a user device provides character-by-character deletion of a multi-character handwriting input. In some embodiments, the deletion of the handwriting input is performed before the characters recognized from the handwriting input are confirmed and entered into the text input area of the user interface. In some embodiments, the deletion of characters in the handwriting input proceeds in the reverse of the spatial order of the recognition units identified from the handwriting input, and is independent of the temporal sequence in which the recognition units were formed. FIGS. 17A-17H illustrate the exemplary process 1800 in accordance with some embodiments.

As shown in FIG. 18A, in the exemplary process 1800, the user device receives a handwriting input from the user (1802), where the handwriting input includes a plurality of handwritten strokes provided in a handwriting input area of the handwriting input interface (e.g., area 804 in FIG. 17D). The user device identifies (1804) a plurality of recognition units from the plurality of handwritten strokes, each recognition unit comprising a separate subset of the plurality of handwritten strokes. For example, as shown in FIG. 17D, the first recognition unit includes the strokes 1702 and 1718, and the second recognition unit includes the strokes 1710 and 1726. The user device generates (1806) a multi-character recognition result (e.g., result 1728 of FIG. 17D) that includes the individual characters recognized from the plurality of recognition units. In some embodiments, the user device displays the multi-character recognition result (e.g., result 1728 of FIG. 17D) in the candidate display area of the handwriting input interface. In some embodiments, while the multi-character recognition result is displayed in the candidate display area, for example as shown in FIG. 17E, the user device receives a deletion input (e.g., the touch 1730 on the delete button 1732) from the user (1810). In some embodiments, in response to receiving the deletion input, for example as shown in FIGS. 17E and 17F, the user device removes (1812), from the multi-character recognition result (e.g., result 1728, shown by the glyph image Figure 112018083390499-pat00051) displayed in the candidate display area, the last character in the spatial sequence (e.g., the character shown by the glyph image Figure 112018083390499-pat00052).

In some embodiments, for example as shown in FIGS. 17A-17D, the user device renders (1814) the plurality of handwritten strokes in real time in the handwriting input area of the handwriting input interface as the handwritten strokes are provided by the user. In some embodiments, in response to receiving the deletion input, the user device removes (1816), from the handwriting input area (e.g., handwriting input area 804 of FIG. 17E), the individual subset of the plurality of handwritten strokes corresponding to the last recognition unit in the spatial sequence (e.g., the recognition unit comprising the strokes 1726 and 1710). The last recognition unit corresponds to the last character in the multi-character recognition result (e.g., the last character of result 1728, shown by the glyph image Figure 112018083390499-pat00053).

In some embodiments, the last recognition unit does not include the temporally last handwritten stroke among the plurality of handwritten strokes provided by the user (1818). For example, even if the user provided the stroke 1718 after providing the strokes 1726 and 1710, the last recognition unit containing the strokes 1726 and 1710 would still be deleted first.

In some embodiments, for example as shown in FIG. 17E, in response to receiving an initial portion of the deletion input, the user device visually distinguishes the last recognition unit from the other recognition units identified in the handwriting input area (1820). In some embodiments, the initial portion of the deletion input is an initial contact detected on the delete button of the handwriting input interface, and the deletion input is detected (1822) when the initial contact lasts longer than a predetermined threshold amount of time.
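
The press-and-hold behavior can be sketched roughly as below; the timing threshold, class and method names, and state handling are assumptions for illustration, not the described implementation.

    # Minimal sketch: press-and-hold deletion with a highlight preview.
    # Holding the delete button past a threshold deletes the highlighted unit;
    # lifting earlier only cancels the highlight. Names and the threshold are
    # illustrative assumptions.

    HOLD_THRESHOLD_S = 0.5

    class DeleteButtonController:
        def __init__(self, units):
            self.units = units          # recognition units in spatial order
            self.press_start = None
            self.highlighted = None

        def on_press(self, timestamp):
            if self.units:
                self.press_start = timestamp
                self.highlighted = len(self.units) - 1   # preview the last unit

        def on_hold(self, timestamp):
            """Called periodically while the contact is maintained."""
            if self.press_start is None or self.highlighted is None:
                return
            if timestamp - self.press_start >= HOLD_THRESHOLD_S:
                self.units.pop(self.highlighted)          # delete previewed unit
                self.press_start = timestamp              # reset hold duration
                self.highlighted = len(self.units) - 1 if self.units else None

        def on_release(self):
            self.press_start = None
            self.highlighted = None                       # cancel preview, no delete

    ctrl = DeleteButtonController(["unit_A", "unit_B"])
    ctrl.on_press(0.0)
    ctrl.on_hold(0.6)      # exceeds the threshold, deletes "unit_B"
    ctrl.on_release()
    print(ctrl.units)      # ['unit_A']

Resetting the hold duration after each deletion, as above, matches the described behavior in which continued contact deletes recognition units one at a time rather than all at once.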

In some embodiments, the last recognition unit corresponds to a handwritten Chinese character. In some embodiments, the handwriting input is written in a cursive style. In some embodiments, the handwriting input corresponds to a plurality of Chinese characters written in a cursive style. In some embodiments, at least one of the handwritten strokes is divided between two adjacent recognition units of the plurality of recognition units. For example, from time to time, the user may use one long stroke that spans multiple characters, in which case the segmentation module of the handwriting input module optionally divides the long stroke between multiple recognition units. When deletion of the handwriting input is performed character by character (or recognition unit by recognition unit), only the segment of the long stroke that belongs to the corresponding recognition unit is deleted at a time.

In some embodiments, the deletion input is a sustained contact on a delete button provided in the handwriting input interface, and removing the individual subset of the plurality of handwritten strokes includes removing (1824), from the handwriting input area, the subset of handwritten strokes in the last recognition unit on a stroke-by-stroke basis, in the reverse of the temporal order in which the strokes were provided.

In some embodiments, for example as shown in FIGS. 17B and 17C, the user device generates (1826) a partial recognition result that includes a subset of the individual characters recognized from the plurality of recognition units, where each character in the subset meets a predetermined confidence threshold. In some embodiments, the user device displays (1828) the partial recognition result (e.g., result 1712 in FIG. 17B and result 1720 in FIG. 17C) in the candidate display area of the handwriting input interface concurrently with the multi-character recognition result (e.g., results 1714 and 1722).
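
A brief sketch of how a partial recognition result could be assembled from per-unit candidates and confidences is shown below; the threshold value, data layout, and function name are illustrative assumptions.

    # Minimal sketch: building a full recognition result and a partial one that
    # keeps only characters whose recognition confidence meets a threshold.
    # Threshold and data layout are illustrative assumptions.

    CONFIDENCE_THRESHOLD = 0.6

    def build_results(unit_candidates):
        """unit_candidates: list of (best_character, confidence) per recognition
        unit, in spatial order. Returns (full_result, partial_result)."""
        full = "".join(char for char, _ in unit_candidates)
        partial = "".join(char for char, conf in unit_candidates
                          if conf >= CONFIDENCE_THRESHOLD)
        return full, partial

    # First unit recognized confidently, second unit poorly recognized:
    full, partial = build_results([("日", 0.92), ("X", 0.31)])
    print(full)     # full result shown regardless of confidence
    print(partial)  # partial result omits the low-confidence character

Displaying both strings side by side in the candidate display area corresponds to showing, e.g., result 1712 together with results 1714 and 1716 in FIG. 17B.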

In some embodiments, the partial recognition result does not include at least the last character in the multi-character recognition result. In some embodiments, the partial recognition result does not include at least the initial character in the multi-character recognition result. In some embodiments, the partial recognition result does not include at least an intermediate character in the multi-character recognition result.

In some embodiments, the minimum unit of deletion is a recognition unit; the handwriting input is deleted one recognition unit at a time, and each deletion removes whichever recognition unit is currently the last one among the handwritten inputs still remaining in the handwriting input area.

As described herein, in some embodiments, the user device provides both a horizontal writing mode and a vertical writing mode. In some embodiments, in the horizontal writing mode, the user device allows the user to enter text in either a left-to-right writing direction or a right-to-left writing direction. In some embodiments, in the vertical writing mode, the user device allows the user to enter text in either a top-to-bottom writing direction or a bottom-to-top writing direction. In some embodiments, the user device provides various affordances (e.g., a writing mode or writing direction button) in the user interface to invoke the individual writing mode and/or writing direction for the current handwriting input. In some embodiments, the text input direction in the text input area is, by default, the same as the handwriting input direction in the handwriting input area. In some embodiments, the user device allows the user to manually set the input direction in the text input area and the writing direction in the handwriting input area. In some embodiments, the text display direction in the candidate display area is, by default, the same as the handwriting input direction in the handwriting input area. In some embodiments, the user device allows the user to manually set the text display direction in the text input area independently of the handwriting input direction in the handwriting input area. In some embodiments, the user device associates the writing mode and/or writing direction of the handwriting input interface with a corresponding device orientation, such that a change in device orientation automatically triggers a change in the writing mode and/or writing direction. In some embodiments, a change in writing direction automatically causes the top-ranked recognition result to be entered into the text input area.
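
One way to model the orientation-to-mode association and the automatic entry of the top result is sketched below; the mapping, class and method names, and default values are assumptions reflecting the behavior described above, not the actual implementation.

    # Minimal sketch: associating device orientation with a handwriting input
    # mode and committing the top recognition result when the mode changes.
    # Mapping and names are illustrative assumptions.

    ORIENTATION_TO_MODE = {
        "landscape": ("horizontal", "left_to_right"),
        "portrait": ("vertical", "top_to_bottom"),
    }

    class HandwritingInputController:
        def __init__(self):
            self.mode, self.writing_direction = ORIENTATION_TO_MODE["landscape"]
            self.text_input = []

        def on_orientation_change(self, orientation, top_recognition_result):
            new_mode, new_direction = ORIENTATION_TO_MODE[orientation]
            if new_mode != self.mode:
                # Auto-enter the current top result before switching modes.
                if top_recognition_result:
                    self.text_input.append(top_recognition_result)
                self.mode, self.writing_direction = new_mode, new_direction

    ctrl = HandwritingInputController()
    ctrl.on_orientation_change("portrait", top_recognition_result="Thanks")
    print(ctrl.mode, ctrl.text_input)  # vertical ['Thanks']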

FIGS. 19A-19F illustrate exemplary user interfaces of a user device that provides both a horizontal input mode and a vertical input mode.

FIG. 19A shows a user device in the horizontal input mode. In some embodiments, as shown in FIG. 19A, the horizontal input mode is provided when the user device is in a landscape orientation. In some embodiments, the horizontal input mode is optionally provided when the device is operated in a portrait orientation. The association between device orientation and writing mode may differ in different applications.

In the horizontal input mode, the user can provide handwritten characters in a horizontal writing direction (e.g., a default writing direction going from left to right or a default writing direction going from right to left). In the horizontal input mode, the user device performs the division of the handwriting input into one or more recognition units along the horizontal writing direction.

In some embodiments, the user device allows only a single line of input in the handwriting input area. In some embodiments, as shown in FIG. 19A, the user device allows multi-line input (e.g., input of two lines) in the handwriting input area. In FIG. 19A, the user has provided a plurality of handwritten strokes in the handwriting input area 806 in multiple rows. Based on the sequence in which the user provided the plurality of handwritten strokes, the distances between the handwritten strokes, and the relative positions of the handwritten strokes, the user device determines that the user has entered two lines of characters. After dividing the handwriting input into two separate lines, the device determines the recognition unit(s) in each line.
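
A rough sketch of splitting strokes into lines before per-line segmentation is given below; the vertical-gap heuristic, threshold, and names are illustrative assumptions only.

    # Minimal sketch: grouping strokes into writing lines by their vertical
    # centers before segmenting each line into recognition units.
    # The gap threshold and structure are illustrative assumptions.

    def vertical_center(stroke):
        ys = [y for _, y in stroke]
        return (min(ys) + max(ys)) / 2.0

    def split_into_lines(strokes, line_gap=40.0):
        """Sort strokes top to bottom and start a new line whenever the vertical
        center jumps by more than line_gap."""
        ordered = sorted(strokes, key=vertical_center)
        lines = []
        for stroke in ordered:
            center = vertical_center(stroke)
            if lines and center - lines[-1]["center"] <= line_gap:
                lines[-1]["strokes"].append(stroke)
                lines[-1]["center"] = center
            else:
                lines.append({"strokes": [stroke], "center": center})
        return [line["strokes"] for line in lines]

    # Two strokes near the top form one line; a stroke much lower forms a second line.
    strokes = [[(0, 10), (20, 30)], [(40, 12), (60, 28)], [(0, 110), (20, 130)]]
    print([len(line) for line in split_into_lines(strokes)])  # [2, 1]

Each resulting line would then be segmented into recognition units along the writing direction, as described above for single-line input.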

As shown in FIG. 19A, the user device has recognized individual characters for each recognition unit currently identified in the handwriting input 1902 and has generated a plurality of recognition results 1904 and 1906. As further shown in FIG. 19A, in some embodiments, if the output character (e.g., the character "I") for a particular recognition unit (e.g., the recognition unit formed by the initial stroke) has weak recognition confidence, the user device optionally presents a partial recognition result (e.g., result 1906) showing only the output characters having sufficient recognition confidence. In some embodiments, the user may notice from the partial recognition result 1906 that the first stroke may need to be revised, or individually erased and rewritten, to produce an accurate recognition result. In this particular example, editing of the first recognition unit is not necessary, since the recognition result 1904 already represents the result desired for the first recognition unit.

In this example, as shown in FIGS. 19A and 19B, the user has rotated the device into the portrait orientation (e.g., as shown in FIG. 19B). As shown in FIG. 19B, in response to the change in device orientation, the handwriting input interface is changed from the horizontal input mode to the vertical input mode. In the vertical input mode, the layout of the handwriting input area 804, the candidate display area 806, and the text input area 808 may differ from that shown in the horizontal input mode. The particular layouts of the horizontal and vertical input modes can be varied to suit different device configurations and application needs. In some embodiments, upon the rotation of the device and the resulting change in input mode, the user device automatically enters the top result (e.g., result 1904) into the text input area 808 as text input 1910. The orientation and position of the cursor 1912 also reflect the changes in the input mode and writing direction.

In some embodiments, a change in the input mode is optionally triggered by the user touching a specific input mode selection affordance (e.g., affordance 1908). In some embodiments, the input mode selection affordance is a graphical user interface element that also indicates the current writing mode, current writing direction, and/or current paragraph direction. In some embodiments, the input mode selection affordance can cycle through all available input modes and writing directions provided by the handwriting input interface 802. As shown in FIG. 19A, the affordance 1908 indicates that the current input mode is the horizontal input mode, the writing direction is from left to right, and the paragraph direction is from top to bottom. In FIG. 19B, the affordance 1908 indicates that the current input mode is the vertical input mode, the writing direction is from top to bottom, and the paragraph direction is from right to left. According to various embodiments, different combinations of writing direction and paragraph direction are possible.
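
The cycling behavior of such an affordance could be sketched as follows; the particular ordered list of mode combinations, and the class and method names, are assumptions based only on the combinations mentioned here.

    # Minimal sketch: an input-mode affordance that cycles through combinations
    # of input mode, writing direction, and paragraph direction on each
    # invocation. The list of combinations is an illustrative assumption.

    MODE_CYCLE = [
        ("horizontal", "left_to_right", "top_to_bottom"),
        ("vertical", "top_to_bottom", "right_to_left"),
        ("horizontal", "right_to_left", "top_to_bottom"),
    ]

    class InputModeAffordance:
        def __init__(self):
            self.index = 0

        def current(self):
            return MODE_CYCLE[self.index]

        def invoke(self):
            """Each tap advances to the next available combination."""
            self.index = (self.index + 1) % len(MODE_CYCLE)
            return self.current()

    affordance = InputModeAffordance()
    print(affordance.current())   # ('horizontal', 'left_to_right', 'top_to_bottom')
    print(affordance.invoke())    # ('vertical', 'top_to_bottom', 'right_to_left')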

As shown in FIG. 19C, the user has entered a plurality of new strokes 1914 (e.g., strokes for two Chinese characters, shown by the glyph image Figure 112018083390499-pat00054) in the handwriting input area 804 in the vertical input mode. The user device divides the handwriting input into two recognition units along the vertical direction and generates recognition results (e.g., results 1916 and 1918), each laid out in the vertical direction.

FIGS. 19C and 19D illustrate that when the user selects a displayed recognition result (e.g., result 1916), the selected recognition result is entered into the text input area 808 in the vertical direction.

FIGS. 19E and 19F illustrate that the user has entered additional lines of handwriting input 1920 in the vertical writing direction. The lines proceed from right to left, in accordance with the paragraph direction of traditional Chinese writing. In some embodiments, the candidate display area 806 also presents the recognition results (e.g., results 1922, 1924) in the same writing direction and paragraph direction as the handwriting input area. In some embodiments, different writing directions and paragraph directions are provided by default depending on the primary language associated with the user device or the languages of the soft keyboards installed on the user device (e.g., Arabic, Chinese, Japanese, English, etc.).

FIGS. 19E and 19F also show that when the user selects a recognition result (e.g., result 1922), the text of the selected recognition result is entered into the text input area 808. Thus, as shown in FIG. 19F, the current text input in the text input area 808 includes both text written in the horizontal mode with a left-to-right writing direction and text written in the vertical mode with a top-to-bottom writing direction. The paragraph direction for the horizontal text is from top to bottom, while the paragraph direction for the vertical text is from right to left.

In some embodiments, the user device allows the user to separately establish preferred writing directions and paragraph directions for each of the handwriting input area 804, the candidate display area 806, and the text input area 808. In some embodiments, the user device allows the user to establish a preferred writing direction and paragraph direction for each of the handwriting input area 804, the candidate display area 806, and the text input area 808 in association with each device orientation.

FIGS. 20A-20C are flow charts of an exemplary process 2000 for changing the text input direction and the handwriting input direction of the user interface. FIGS. 19A-19F illustrate the process 2000 in accordance with some embodiments.

In some embodiments, the user device determines the orientation of the device (2002). The device orientation and changes in the device orientation may be detected by accelerometers and/or other orientation-sensing elements in the user device. In some embodiments, in accordance with the device being in a first orientation, the user device provides a handwriting input interface on the device in the horizontal input mode (2004). Individual lines of handwriting input entered in the horizontal input mode are divided into one or more individual recognition units along the horizontal writing direction. In some embodiments, in accordance with the device being in a second orientation, the device provides the handwriting input interface on the device in the vertical input mode (2006). Individual lines of handwriting input entered in the vertical input mode are divided into one or more individual recognition units along the vertical writing direction.

In some embodiments, while operating in the horizontal input mode (2008): the device detects a change in the device orientation from the first orientation to the second orientation (2010). In some embodiments, in response to the change in device orientation, the device switches from the horizontal input mode to the vertical input mode (2012). This is illustrated, for example, in FIGS. 19A and 19B. In some embodiments, while operating in the vertical input mode (2014): the user device detects a change in the device orientation from the second orientation to the first orientation (2016). In some embodiments, in response to the change in device orientation, the user device switches from the vertical input mode to the horizontal input mode (2018). In some embodiments, the association between device orientation and input mode may be reversed from that described above.

In some embodiments, while operating in the horizontal input mode (2020): the user device receives (2022) a first multi-word handwriting input from the user. In response to the first multi-word handwriting input, the user device presents a first multi-word recognition result in the candidate display area of the handwriting input interface along the horizontal writing direction (2024). This is illustrated, for example, in FIG. 19A. In some embodiments, while operating in the vertical input mode (2026): the user device receives a second multi-word handwriting input from the user (2028). In response to the second multi-word handwriting input, the user device presents a second multi-word recognition result in the candidate display area along the vertical writing direction (2030). This is illustrated, for example, in FIGS. 19C and 19E.

In some embodiments, for example as shown in FIGS. 19A and 19B, the user device receives a first user input that selects the first multi-word recognition result (2032), where the selection is effected by a change of the input direction (e.g., a rotation of the device or a selection of the affordance 1908). For example, as shown in FIG. 19C or FIG. 19E, the user device receives a second user input that selects the second multi-word recognition result (2034). The user device concurrently displays (2036) the individual text of the first multi-word recognition result and the individual text of the second multi-word recognition result in the text input area of the handwriting input interface, where the individual text of the first multi-word recognition result is displayed along the horizontal writing direction and the individual text of the second multi-word recognition result is displayed along the vertical writing direction. This is illustrated, for example, in the text input area 808 of FIG. 19F.

In some embodiments, the handwriting input area accepts multiple lines of handwriting input in the horizontal writing direction and has a default top-to-bottom paragraph direction. In some embodiments, the horizontal writing direction is from left to right. In some embodiments, the horizontal writing direction is from right to left. In some embodiments, the handwriting input area accepts multiple lines of handwriting input in the vertical writing direction and has a default left-to-right paragraph direction. In some embodiments, the handwriting input area accepts multiple lines of handwriting input in the vertical writing direction and has a default right-to-left paragraph direction. In some embodiments, the vertical writing direction is from top to bottom. In some embodiments, the first orientation is, by default, the landscape orientation, and the second orientation is, by default, the portrait orientation. In some embodiments, the user device provides an individual affordance in the handwriting input interface for manually switching between the horizontal input mode and the vertical input mode, regardless of the device orientation. In some embodiments, the user device provides an individual affordance in the handwriting input interface for manually switching between the two alternative writing directions. In some embodiments, the user device provides an individual affordance in the handwriting input interface for manually switching between the two alternative paragraph directions. In some embodiments, the affordance is a toggle button that cycles through each available combination of input direction and paragraph direction upon one or more consecutive invocations.

In some embodiments, the user device receives a handwriting input from the user (2038). The handwriting input includes a plurality of handwritten strokes provided in the handwriting input area of the handwriting input interface. In response to the handwriting input, the user device displays (2040) one or more recognition results in the candidate display area of the handwriting input interface. While the one or more recognition results are displayed in the candidate display area, the user device detects (2042) a user input for switching from the current handwriting input mode to an alternative handwriting input mode. In response to the user input (2044): the user device switches from the current handwriting input mode to the alternative handwriting input mode (2046). In some embodiments, the user device clears the handwriting input from the handwriting input area (2048). In some embodiments, the user device automatically enters (2050), into the text input area of the handwriting input interface, the top-ranked recognition result among the one or more recognition results displayed in the candidate display area. This is illustrated, for example, in FIGS. 19A and 19B, where the current handwriting input mode is the horizontal input mode and the alternative handwriting input mode is the vertical input mode. In some embodiments, the current handwriting input mode is the vertical input mode and the alternative handwriting input mode is the horizontal input mode. In some embodiments, the current handwriting input mode and the alternative handwriting input mode are any two modes that provide different handwriting input directions or paragraph directions. In some embodiments, the user input is a rotation of the device from the current orientation to another orientation (2052). In some embodiments, the user input is an invocation of an affordance for manually switching the current handwriting input mode to the alternative handwriting input mode.

As described herein, the handwriting input module allows a user to enter handwritten strokes and/or characters in any temporal order. Accordingly, it is advantageous to allow the user to delete an individual handwritten character from a multi-character handwriting input and to rewrite the same or a different handwritten character at the same location as the deleted character, because this helps the user revise a long handwriting input without having to delete the entire handwriting input.

FIGS. 21A-21H illustrate exemplary user interfaces for visually highlighting and/or deleting recognition units identified in the plurality of handwritten strokes currently accumulated in the handwriting input area. Allowing the user to individually select, view, and delete any one of the recognition units identified from the current input is particularly useful when multi-character, and even multi-line, handwriting input is allowed by the user device. Allowing the user to delete a particular recognition unit at the beginning or in the middle of the handwriting input lets the user modify a long input without having to delete all of the recognition units located after the undesired recognition unit.

As shown in FIGS. 21A-21C, the user has provided a plurality of handwritten strokes (e.g., strokes 2102, 2104, 2106) in the handwriting input area 804 of the handwriting input user interface 802. While the user continues to provide additional strokes in the handwriting input area 804, the user device updates the recognition units identified from the currently accumulated handwriting input in the handwriting input area, and revises the recognition results in accordance with the output characters recognized from the updated recognition units. As shown in FIG. 21C, the user device has identified two recognition units from the current handwriting input and has presented three recognition results (e.g., 2108, 2110, 2112), each containing two Chinese characters.

In this example, after the user has written two handwritten characters, the user notices that the first recognition unit has been written incorrectly, and as a result the user device has not identified and presented the desired recognition result in the candidate display area.

In some embodiments, when the user provides a tap gesture (e.g., a touch followed by an immediate lift-off at the same location) on the touch-sensitive display, the user device interprets the tap gesture as an input that causes the individual recognition units to be visually highlighted. In some embodiments, another predetermined gesture (e.g., a multi-finger rub gesture over the handwriting input area) is used to cause the user device to highlight the individual recognition units in the handwriting input area 804. A tap gesture is sometimes desirable because it is relatively easy to distinguish from handwriting strokes, which typically involve movement of the contact within the handwriting input area 804 and a contact of longer duration. A multi-touch gesture is sometimes desirable because it is relatively easy to distinguish from a handwriting stroke, which usually involves a single contact in the handwriting input area 804. In some embodiments, the user device provides in the user interface an affordance 2112 that can be invoked by the user (e.g., via a contact 2114) to visually highlight the individual recognition units (e.g., with boxes 2108, 2110), as shown in FIG. 21D. In some embodiments, such an affordance is desirable when there is sufficient screen space to accommodate it. In some embodiments, the affordance can be invoked a number of consecutive times by the user, causing the user device to visually highlight the identified recognition unit(s) according to the different segmentation chains in the segmentation lattice, and to release the highlighting when the available segmentation chains have been cycled through.
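
A simple sketch of distinguishing a tap gesture from a handwriting stroke by contact duration and movement is shown below; the thresholds and names are illustrative assumptions only.

    # Minimal sketch: classifying a single contact as a "tap" or a handwriting
    # "stroke" using its duration and total movement. Thresholds are assumptions.

    import math

    def classify_contact(samples, max_tap_duration=0.2, max_tap_movement=10.0):
        """samples: list of (timestamp, x, y) for one contact, in time order."""
        duration = samples[-1][0] - samples[0][0]
        movement = sum(
            math.hypot(x2 - x1, y2 - y1)
            for (_, x1, y1), (_, x2, y2) in zip(samples, samples[1:])
        )
        if duration <= max_tap_duration and movement <= max_tap_movement:
            return "tap"      # e.g., highlight recognition units
        return "stroke"       # treat as handwriting input

    print(classify_contact([(0.00, 5, 5), (0.05, 6, 5)]))    # tap
    print(classify_contact([(0.00, 5, 5), (0.40, 80, 40)]))  # stroke

A multi-finger gesture would be distinguished instead by the number of simultaneous contacts, as noted above for the rub gesture.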

As shown in FIG. 21D, when the user provides the gesture needed to highlight the individual recognition units in the handwriting input area 804, the user device displays an individual deletion affordance (e.g., small delete buttons 2116 and 2118) on each highlighted recognition unit. FIGS. 21E and 21F illustrate that when the user touches (e.g., via a contact 2120) the deletion affordance of an individual recognition unit (e.g., the delete button 2116 for the first recognition unit in box 2118), the corresponding recognition unit (e.g., the one in box 2118) is removed from the handwriting input area 804. In this particular example, the deleted recognition unit is neither the temporally last recognition unit entered nor the spatially last recognition unit along the writing direction. In other words, the user can delete any recognition unit regardless of when and where the recognition unit was provided in the handwriting input area. FIG. 21F shows that the user device also updates the recognition results displayed in the candidate display area 806 in response to the deletion of the first recognition unit in the handwriting input area. As shown in FIG. 21F, the user device has also deleted the candidate characters corresponding to the deleted recognition unit from the recognition results. As a result, a new recognition result 2120 is shown in the candidate display area 806.

After the first recognition unit is removed from the handwriting input area 804, as shown in FIGS. 21G and 21H, the user has provided a plurality of new handwriting strokes 2122 in the area previously occupied by the deleted recognition unit. The user device has re-divided the currently accumulated handwriting input in the handwriting input area 804 into recognition units. Based on the recognition units identified from the handwriting input, the user device has again generated recognition results (e.g., results 2124, 2126) in the candidate display area 806. FIGS. 21G and 21H further illustrate that the user may select one of the recognition results (e.g., result 2124) (e.g., via contact 2128), and the text of the selected recognition result is entered into the text input area 808.

Figures 22A and 22B are flow charts for an exemplary process 2200 in which the individual recognition units identified in the current handwriting input can be visually presented and deleted individually, regardless of the time order in which the recognition units were formed. Figures 21A-21H illustrate a process 2200 in accordance with some embodiments.

In an exemplary process 2200, the user device receives (2202) handwriting input from a user. The handwriting input includes a plurality of handwriting strokes provided on a touch-sensitive surface coupled to the device. In some embodiments, the user device renders (2204) the plurality of handwriting strokes in the handwriting input area of the handwriting input interface (e.g., handwriting input area 804). In some embodiments, the user device divides (2206) the plurality of handwriting strokes into two or more recognition units, each recognition unit comprising a separate subset of the plurality of handwriting strokes.

In some embodiments, the user device receives (2208) an edit request from the user. In some embodiments, the edit request is a contact (2210) detected on a predetermined affordance (e.g., the affordance 2112 of Figure 21D) provided in the handwriting input interface. In some embodiments, the edit request is a tap gesture detected on a predetermined area in the handwriting input interface (2212). In some embodiments, the predetermined area is within the handwriting input area of the handwriting input interface. In some embodiments, the predetermined area is outside the handwriting input area of the handwriting input interface. In some embodiments, another predetermined gesture (e.g., a cross gesture, a horizontal swipe gesture, a vertical swipe gesture, a slanted swipe gesture) outside the handwriting input area may be used as an edit request. A gesture outside the handwriting input area can be easily distinguished from a handwriting stroke because it is provided outside the handwriting input area.

In some embodiments, in response to the edit request, the user device visually distinguishes (2214) the two or more recognition units in the handwriting input area, e.g., using boxes 2108 and 2110 in FIG. 21D. In some embodiments, visually distinguishing the two or more recognition units further includes (2216) highlighting individual boundaries between the two or more recognition units in the handwriting input area. In various embodiments, different ways of visually distinguishing the recognition units currently identified in the handwriting input may be used.

In some embodiments, the user device provides (2218) means for individually deleting each of the two or more recognition units from the handwriting input area. In some embodiments, the means for individually deleting each of the two or more recognition units is an individual delete button displayed over each recognition unit, as shown, for example, by the delete buttons 2116 and 2118 of Figure 21D. In some embodiments, the means for individually deleting each of the two or more recognition units is a means for detecting a predetermined deletion gesture over each recognition unit. In some embodiments, the user device does not visibly display individual delete affordances for the highlighted recognition units. Instead, in some embodiments, the user is allowed to delete an individual recognition unit by providing the predetermined deletion gesture over that recognition unit. In some embodiments, the user device does not accept additional handwriting strokes in the handwriting input area while it is displaying the recognition units in a visually emphasized manner. Instead, a predetermined gesture, or any gesture detected on a visually highlighted recognition unit, causes the user device to remove that recognition unit from the handwriting input area, and the recognition results displayed in the candidate display area are revised accordingly. In some embodiments, a tap gesture causes the user device to visually highlight the individual recognition units identified in the handwriting input area, and the user can then delete individual recognition units one by one, in the reverse writing direction, using the delete button.

In some embodiments, for example, as shown in FIG. 21E, the user device receives (2224) a deletion input for individually deleting a first recognition unit of the two or more recognition units from the handwriting input area. For example, as shown in FIG. 21F, in response to the deletion input, the user device removes (2226) the individual subset of handwriting strokes in the first recognition unit from the handwriting input area. In some embodiments, the first recognition unit is spatially the initial recognition unit among the two or more recognition units. In some embodiments, for example, as shown in Figures 21E and 21F, the first recognition unit is spatially an intermediate recognition unit among the two or more recognition units. In some embodiments, the first recognition unit is spatially the last recognition unit among the two or more recognition units.

In some embodiments, the user device generates (2228) a partition grid from the plurality of handwriting strokes, the partition grid comprising a plurality of alternative partition chains, each representing a respective set of recognition units identified from the plurality of handwriting strokes. For example, FIG. 21G shows recognition results 2124 and 2126, where recognition result 2124 is generated from one partition chain with two recognition units and recognition result 2126 is generated from another partition chain with three recognition units. In some embodiments, the user device receives (2230) two or more consecutive edit requests from the user. For example, the two or more consecutive edit requests may be several consecutive taps on the affordance 2112 of FIG. 21G. In some embodiments, in response to each of the two or more consecutive edit requests, the user device visually distinguishes (2232) a respective set of recognition units in the handwriting input area according to a different one of the plurality of alternative partition chains. For example, in response to a first tap gesture, two recognition units (e.g., for the characters "cap" and "child", respectively) are highlighted in the handwriting input area 804, and in response to a second tap gesture, three recognition units (e.g., for the characters "width", "blind", and "child") are highlighted instead. In some embodiments, in response to a third tap gesture, the visual emphasis is selectively removed from all of the recognition units, and the handwriting input area returns to the normal state, ready to accept additional strokes. In some embodiments, the user device provides (2234) means for individually deleting each of the respective set of recognition units currently displayed in the handwriting input area. In some embodiments, this means is an individual delete button for each highlighted recognition unit. In some embodiments, the means is a means for detecting a predetermined deletion gesture over each highlighted recognition unit and deleting the highlighted recognition unit under the deletion gesture.
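To make the notion of a partition grid and its alternative partition chains more concrete, the following is a minimal Python sketch (not taken from the patent; all class names and the cycling behavior are illustrative) of a data structure in which the same accumulated strokes can be grouped into different sets of recognition units, with consecutive edit requests cycling through the alternatives:

from dataclasses import dataclass, field
from typing import List

@dataclass
class RecognitionUnit:
    """A group of stroke indices treated as one character candidate."""
    stroke_ids: List[int]

@dataclass
class PartitionChain:
    """One way of grouping the accumulated strokes into recognition units."""
    units: List[RecognitionUnit]

@dataclass
class PartitionGrid:
    """All alternative partition chains for the current handwriting input."""
    chains: List[PartitionChain] = field(default_factory=list)

    def chain_for_edit_request(self, request_count: int) -> PartitionChain:
        # Cycle through alternative chains on consecutive edit requests,
        # loosely mirroring repeated taps on the highlighting affordance.
        return self.chains[request_count % len(self.chains)]

# Example: strokes 0-4 grouped either as two or as three recognition units.
grid = PartitionGrid(chains=[
    PartitionChain([RecognitionUnit([0, 1, 2]), RecognitionUnit([3, 4])]),
    PartitionChain([RecognitionUnit([0, 1]), RecognitionUnit([2]), RecognitionUnit([3, 4])]),
])
print(len(grid.chain_for_edit_request(0).units))  # 2
print(len(grid.chain_for_edit_request(1).units))  # 3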

As described herein, in some embodiments, the user device provides a continuous input mode in the handwriting input area. Since the area of the handwriting input area is limited on a portable user device, it is sometimes desirable to provide a way to cache the handwriting input provided by the user and to allow the user to reuse the screen space without committing the previously provided handwriting input. In some embodiments, the user device provides a scrolling handwriting input area, where the input area is gradually shifted by a predetermined amount (e.g., one recognition unit at a time) when the user's writing gets close enough to the end of the handwriting input area. In some embodiments, because dynamically shifting the existing recognition units in the handwriting input area may disturb the user's writing process and may interfere with accurate segmentation of the recognition units, it is sometimes advantageous to recycle previously used areas instead of shifting the recognition units. In some embodiments, when the user reuses an area occupied by handwriting input that has not yet been entered into the text input area, the top recognition result for that handwriting input is automatically entered into the text input area, which allows the user to continue to provide new handwriting input without explicitly selecting a recognition result.

In some conventional systems, the user is allowed to write over existing handwriting input that is still displayed in the handwriting input area. In such systems, temporal information is used to determine whether a new stroke is part of a previous recognition unit or part of a new recognition unit. Such time-information-dependent systems place stringent requirements on the speed and tempo at which the user provides handwriting input, which are difficult for many users to meet. Additionally, the visual rendering of the stacked handwriting input can become cluttered and difficult for the user to decipher. Thus, the writing process can be frustrating and confusing to the user, resulting in an unpleasant user experience.

As described herein, a fading process is used to indicate when the user can reuse the area occupied by a previously written recognition unit and continue writing in the handwriting input area. In some embodiments, the fading process gradually reduces the visibility of each recognition unit that has been present in the handwriting input area for a threshold amount of time, so that when new strokes are written over it, the existing text does not visually compete with the new strokes. In some embodiments, writing over a fading recognition unit automatically causes the top recognition result for that recognition unit to be entered into the text input area. This implicit and automatic confirmation of the top recognition result improves the input efficiency and speed of the handwriting input interface, and reduces the cognitive burden imposed on the user, allowing the user to maintain the flow of thought for the current text composition. In some embodiments, writing over a fading recognition unit does not result in automatic selection of the top recognition result. Instead, the fading recognition units are stored in a handwriting input stack and are combined with the new handwriting input as the current handwriting input. The user can then see recognition results generated based on all of the recognition units accumulated in the handwriting input stack before making a selection.
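As a rough illustration of how such a per-unit fading process might be organized, the following Python sketch (illustrative only; the timing constants, state names, and recovery behavior are assumptions rather than values from the patent) tracks one recognition unit's visibility as a function of time and resets its clock when a recovery input is received:

import time
from enum import Enum, auto

class FadeState(Enum):
    VISIBLE = auto()
    FADING = auto()
    FADED = auto()

class FadingRecognitionUnit:
    """Tracks the fading state of one recognition unit in the handwriting input area."""

    def __init__(self, fade_delay=2.0, fade_duration=1.5):
        self.completed_at = time.monotonic()  # when the recognition unit was completed
        self.fade_delay = fade_delay          # threshold time before fading starts
        self.fade_duration = fade_duration    # time taken to reach minimum visibility
        self.state = FadeState.VISIBLE

    def opacity(self, now=None):
        """Return the current rendering opacity in [0, 1] and update the state."""
        now = time.monotonic() if now is None else now
        elapsed = now - self.completed_at - self.fade_delay
        if elapsed <= 0:
            self.state = FadeState.VISIBLE
            return 1.0
        if elapsed >= self.fade_duration:
            self.state = FadeState.FADED
            return 0.0  # or a small non-zero minimum visibility
        self.state = FadeState.FADING
        return 1.0 - elapsed / self.fade_duration

    def restore(self):
        """A recovery input (e.g., a contact on the delete button) resets the fading clock."""
        self.completed_at = time.monotonic()
        self.state = FadeState.VISIBLE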

Figures 23A-23J illustrate exemplary user interfaces and processes in which recognition units provided in different areas of the handwriting input area are gradually faded out from their respective areas, e.g., after a predetermined amount of time, and in which, after a fade-out has occurred in a particular area, the user is permitted to provide new strokes in that area.

As shown in FIG. 23A, the user has provided a plurality of handwriting strokes 2302 (e.g., three handwriting strokes for an uppercase "I") in the handwriting input area 804. The handwriting strokes 2302 are identified by the user device as a recognition unit. In some embodiments, the handwriting input currently displayed in the handwriting input area 804 is stored in a first layer in the handwriting input stack of the user device. A plurality of recognition results generated based on the identified recognition unit are provided in the candidate display area 806.

Figure 23B illustrates that, as the user continues to write one or more strokes 2304 to the right of the strokes 2302, the handwriting strokes 2302 in the first recognition unit gradually fade out in the handwriting input area 804. In some embodiments, an animation that mimics the gradual fading or disappearance of the visual rendering of the first recognition unit is displayed. For example, the animation can create a visual effect of ink evaporating from a whiteboard. In some embodiments, the fading of the recognition unit is not uniform over the entire recognition unit. In some embodiments, the fading of the recognition unit increases with time, and eventually the recognition unit becomes completely invisible in the writing area. However, even though the recognition unit is no longer visible in the handwriting input area 804, in some embodiments, the invisible recognition unit remains at the top of the handwriting input stack, and the recognition results generated from the recognition unit continue to be displayed in the candidate display area. In some embodiments, the fading recognition unit is not completely removed from view until new handwriting input is written over it.

In some embodiments, the user device allows new handwriting input to be provided over the area occupied by the fading recognition unit as soon as the fading animation begins. In some embodiments, the user device allows new handwriting input to be provided over the area occupied by the fading recognition unit only after the fading has progressed to a predetermined level (e.g., a level at which the recognition unit is barely visible, or not visible at all, in that area).

Figure 23C illustrates that the first recognition unit (i.e., strokes 2302) has completed its fading process (e.g., the ink color has stabilized at a very faint level or has become invisible). The user device has identified additional recognition units (e.g., recognition units for the handwritten characters "a" and "m") from the additional handwriting strokes provided by the user, and has updated the recognition results presented in the candidate display area 806.

Figures 23D-23F illustrate that, over time, the user has provided a plurality of additional handwriting strokes (e.g., 2304, 2306) in the handwriting input area 804. At the same time, previously recognized recognition units gradually fade away in the handwriting input area 804. In some embodiments, the fading process for each recognition unit is initiated a predetermined amount of time after the recognition unit is identified. In some embodiments, the fading process for each recognition unit does not begin until the user begins to enter the next recognition unit after it. As shown in Figures 23B-23F, when the handwriting input is provided in a cursive style, a single stroke (e.g., stroke 2304 or stroke 2306) may span a plurality of recognition units (e.g., characters in the word "am" or "back").

Figure 23G illustrates that, even after the recognition units have started their fading processes, the user may restore the faded recognition units to a non-faded state by providing a predetermined recovery input on the delete button 2310 (e.g., a contact 2308 detected on the delete button). When the recognition units are restored, their appearance returns to the normal visibility level. In some embodiments, the recovery of the fading recognition units is done character by character in the handwriting input area 804, in the reverse writing direction. In some embodiments, the recovery of the fading recognition units is done word by word in the handwriting input area 804. As shown in FIG. 23G, the recognition units corresponding to the word "back" have been recovered from a fully faded state to a non-faded state. In some embodiments, the clock for initiating the fading process is reset for each recognition unit when the recognition unit is restored to the non-faded state.

Figure 23H shows that continued contact 2308 on the delete button 2310 causes the last recognition unit in the default writing direction (e.g., the recognition unit for the letter "k" in the word "back") to be deleted from the handwriting input area 804. As the deletion input is held, more recognition units (e.g., the recognition units for the characters "c", "a", and "b") are deleted in the reverse writing direction. In some embodiments, the deletion of recognition units is on a word-by-word basis, and all of the characters of the handwritten word "back" are deleted from the handwriting input area 804 at the same time. Figure 23H also shows that, after the recognition unit for the letter "b" has been deleted, the previously faded recognition unit for the letter "m" is also recovered while the contact 2308 remains held on the delete button 2310.

FIG. 23I shows that the recovered recognition units begin to gradually fade again when the deletion input is interrupted before deletion of the recovered recognition unit "m" in the handwritten word "am". In some embodiments, the state of each recognition unit (e.g., a state selected from the set of faded and non-faded states) is maintained and updated in the handwriting input stack.

Figures 23I and 23J illustrate that, when the user provides one or more strokes 2312 over an area occupied by a fading recognition unit (e.g., the recognition unit for the letter "I") in the handwriting input area, the text of the top recognition result (e.g., result 2314) for the handwriting input made prior to the strokes 2312 is automatically entered into the text input area 808. As shown in Figure 23J, the text "I am" no longer appears to be provisional, but instead is committed in the text input area 808. In some embodiments, once the text input is made for the fully or partially faded handwriting input, that handwriting input is removed from the handwriting input stack. The newly entered strokes (e.g., strokes 2312) become the current input in the handwriting input stack.

In some embodiments, when the strokes 2312 are provided over an area occupied by a fading recognition unit (e.g., the recognition unit for the letter "I") in the handwriting input area, the text of the top recognition result (e.g., result 2314) for the prior input is not automatically entered into the text input area 808. Instead, the current handwriting input (both faded and non-faded) in the handwriting input area 804 is cleared from view and stored in the handwriting input stack. The new strokes 2312 are added to the stored handwriting input in the handwriting input stack. The user device determines the recognition results based on the entire accumulated handwriting input in the handwriting input stack, and the recognition results are displayed in the candidate display area. In other words, even if only a portion of the currently accumulated handwriting input appears in the handwriting input area 804, the recognition results are generated based on the entire handwriting input (both visible and no longer visible) stored in the handwriting input stack.
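The following Python sketch (illustrative; the class name, stroke representation, and recognizer interface are assumptions, not details from the patent) shows one way a handwriting input stack could keep faded strokes available to the recognizer, along with the variant that auto-commits the top result when the area is reused:

class HandwritingInputStack:
    """Caches handwriting input that has faded from view but still belongs to the
    current recognition context. Names and structure are illustrative only."""

    def __init__(self, recognizer):
        self.recognizer = recognizer  # callable: list of strokes -> ranked results
        self.layers = []              # each layer is a list of strokes

    def push_strokes(self, strokes):
        self.layers.append(list(strokes))

    def all_strokes(self):
        # Recognition is based on the entire accumulated input, visible or not.
        return [stroke for layer in self.layers for stroke in layer]

    def results(self):
        return self.recognizer(self.all_strokes())

    def commit_top_result(self, text_area):
        # Variant behavior: writing over faded input auto-commits the top result
        # for the cached input and removes that input from the stack.
        top = self.results()[0]
        text_area.append(top)
        self.layers.clear()
        return top

# Example with a stand-in recognizer that just counts strokes.
stack = HandwritingInputStack(lambda strokes: [f"{len(strokes)} strokes"])
stack.push_strokes(["stroke-1", "stroke-2"])   # faded from view
stack.push_strokes(["stroke-3"])               # newly written
print(stack.results())                         # ['3 strokes']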

Figure 23K shows that, over time, the user has entered more strokes 2316 in the handwriting input area 804, which have in turn faded. Figure 23L shows that a new stroke 2318, written over the fading strokes 2312 and 2316, causes the text of the top recognition result 2320 for the fading strokes 2312 and 2316 to be entered into the text input area 808.

In some embodiments, the user optionally provides handwriting input on multiple lines. In some embodiments, when multi-line input is possible, the same fading process can be used to clear the handwriting input area for new handwriting input.

Figures 24A and 24B are flowcharts of an exemplary process 2400 for providing a fading process in the handwriting input area of a handwriting input interface. Figures 23A-23K illustrate the process 2400 in accordance with some embodiments.

In some embodiments, the device receives a first handwritten input from a user (2402). The first handwriting input includes a plurality of handwriting strokes and the plurality of handwriting strokes form a plurality of recognition units distributed along a respective writing direction associated with the handwriting input area of the handwriting input interface. In some embodiments, the user device renders each of the plurality of handwriting strokes in the handwriting input area as the handwriting stroke is provided by the user (2404).

In some embodiments, the user device initiates a separate fading process for each of a plurality of recognition units after the recognition unit is fully rendered (2406). In some embodiments, during the separate fading process, rendering of the recognition unit at the first handwriting input is faded out. This is illustrated in Figures 23A-23F in accordance with some embodiments.

In some embodiments, for example, as shown in Figs. 23I and 23J and Figs. 23K and 23L, the user device receives (2408) a second handwriting input from the user over an area occupied by a faded recognition unit of the plurality of recognition units. In some embodiments, in response to receiving the second handwriting input (2410): the user device renders (2412) the second handwriting input in the handwriting input area and clears (2414) all faded recognition units from the handwriting input area. In some embodiments, regardless of whether a recognition unit has started its fading process, all recognition units entered in the handwriting input area before the second handwriting input are cleared from the handwriting input area. This is illustrated, for example, in Figures 23I and 23J and Figures 23K and 23L.

In some embodiments, the user device generates (2416) one or more recognition results for the first handwriting input. In some embodiments, the user device displays (2418) the one or more recognition results in the candidate display area of the handwriting input interface. In some embodiments, in response to receiving the second handwriting input, the user device automatically enters (2420) the top recognition result displayed in the candidate display area into the text input area of the handwriting input interface, without user selection. This is illustrated, for example, in Figures 23I and 23J and Figures 23K and 23L.

In some embodiments, the user device stores (2422) an input stack that includes the first handwriting input and the second handwriting input. In some embodiments, the user device generates (2424) one or more multi-character recognition results, each including a respective spatial sequence of characters recognized from the concatenation of the first handwriting input and the second handwriting input. In some embodiments, the user device displays the one or more multi-character recognition results in the candidate display area of the handwriting input interface while the rendering of the second handwriting input replaces the rendering of the first handwriting input in the handwriting input area.

In some embodiments, the individual fading process for each recognition unit is initiated when a predetermined period of time has elapsed since the recognition unit was completed by the user.

In some embodiments, the fading process for each recognition unit begins when the user begins to input strokes for the next recognition unit after the recognition unit.

In some embodiments, the last state of the individual fading process for each recognition unit is in a state having a predetermined minimum visibility for the recognition unit.

In some embodiments, the last state of the individual fading process for each recognition unit is in a state with zero visibility to the recognition unit.

In some embodiments, after the last recognition unit in the first handwriting input has faded, the user device receives (2428) a predetermined recovery input from the user. In response to receiving the predetermined recovery input, the user device restores (2430) the last recognition unit from the faded state to a non-faded state. This is illustrated, for example, in Figures 23F-23H. In some embodiments, the predetermined recovery input is an initial contact detected on a delete button provided in the handwriting input interface. In some embodiments, a sustained contact detected on the delete button deletes the last recognition unit from the handwriting input area and restores the second-to-last recognition unit from the faded state to the non-faded state. This is illustrated, for example, in Figures 23G and 23H.

As described herein, the multi-script handwriting recognition model performs stroke-order independent and stroke-direction independent recognition of handwritten characters. In some embodiments, the recognition model is trained only on the space-derived features contained in the flat images of writing samples corresponding to different characters in the vocabulary of the handwriting recognition model. Since the images of the writing samples do not contain any temporal information related to the individual strokes contained in the images, the resulting recognition models are stroke-order independent and stroke-direction independent.

As exemplified above, stroke-order and stroke-direction independent handwriting recognition offers many advantages over conventional recognition systems that rely on information related to the temporal generation of characters (e.g., the temporal sequences of strokes in characters). However, in real-time handwriting recognition scenarios, temporal information relating to individual strokes is available, and it is sometimes beneficial to use this information to improve the recognition accuracy of the handwriting recognition system. The following describes a technique for integrating time-derived stroke-distribution information into the spatial feature extraction of a handwriting recognition model, where the use of the time-derived stroke-distribution information does not compromise the stroke-order and/or stroke-direction independence of the recognition model. Based on the stroke-distribution information associated with different characters, it becomes possible to disambiguate between similar-looking characters that are produced with distinctly different sets of strokes.

In some embodiments, when the handwriting input is converted to an input image (e.g., an input bitmap image) for the handwriting recognition model (e.g., a CNN), the time information associated with the individual strokes is lost. For example, the Chinese character shown in the inline image (Figure 112018083390499-pat00055) can be written with eight strokes (e.g., labeled #1 to #8 in Figure 27). The sequence and direction of the strokes for a character provide some unique features associated with the character. A naive way of capturing stroke-order and stroke-direction information without compromising the stroke-order and stroke-direction independence of the recognition system would be to explicitly enumerate all possible permutations of stroke order and stroke direction in the training samples. For characters of moderate complexity, this would result in more than a billion permutations, which is not impossible but is difficult to implement in practice. As described herein, a stroke-distribution profile is instead generated for each writing sample, capturing certain characteristics of the stroke generation (i.e., temporal information) in a way that does not depend on stroke order or stroke direction. The stroke-distribution profiles of the writing samples are subsequently used to train a set of time-derived features that are combined with the space-derived features (e.g., extracted from the input bitmap images) to improve recognition accuracy without affecting the stroke-order and stroke-direction independence of the handwriting recognition system.

As described herein, temporal information associated with a character is extracted by calculating various pixel distributions that characterize each writing stroke. Each writing stroke of a character produces a deterministic pattern (or profile) when projected along a given direction. This pattern by itself may be insufficient to unambiguously recognize the stroke, but when combined with other similar patterns, it may be adequate to capture certain characteristics inherent to that particular stroke. Integrating this kind of stroke representation with the spatial feature extraction (e.g., the feature extraction based on input images in the CNN) provides orthogonal information that is ultimately useful for disambiguating between similar-looking characters in the repertoire of the handwriting recognition model.

Figures 25A and 25B are flow charts of an exemplary process 2500 for incorporating both time-derived features and space-derived features of handwriting samples during the training of a handwriting recognition model, where the resulting recognition model remains stroke-order and stroke-direction independent. In some embodiments, the exemplary process 2500 is performed on a server device that provides the trained recognition model to a user device (e.g., portable device 100). In some embodiments, the server device includes one or more processors and memory including instructions that, when executed by the one or more processors, cause the server device to perform the process 2500.

In the exemplary process 2500, the device separately trains (2502) a set of space-derived features and a set of time-derived features of a handwriting recognition model, where the set of space-derived features is trained on a corpus of images, each of the images being an image of a handwriting sample for an individual character of an output character set, and the set of time-derived features is trained on a corpus of stroke-distribution profiles, each stroke-distribution profile numerically characterizing the spatial distribution of a plurality of strokes in a handwriting sample for an individual character of the output character set.

In some embodiments, separately training the set of space-derived features includes training (2504) a convolutional neural network having an input layer, an output layer, a plurality of convolution layers (including a first convolution layer, a last convolution layer, and zero or more intermediate convolution layers between the first convolution layer and the last convolution layer), and a hidden layer between the last convolution layer and the output layer. An exemplary convolutional network 2602 is shown in FIG. 26. The exemplary convolutional network 2602 may be implemented in substantially the same manner as the convolutional network 602 shown in FIG. 6. The convolutional network 2602 includes an input layer 2606, an output layer 2608, a plurality of convolution layers including a first convolution layer 2610a, zero or more intermediate convolution layers, and a final convolution layer 2610n, and a hidden layer 2614 between the final convolution layer and the output layer 2608. The convolutional network 2602 also includes kernel layers 2616 and sub-sampling layers 2612 according to the arrangement shown in FIG. 6. The training of the convolutional network is based on images 2616 of the writing samples in the training corpus 2604. By minimizing recognition errors for the training samples in the training corpus, the space-derived features are obtained and the individual weights associated with the different features are determined. The same features and weights, once trained, are used for recognition of new handwriting samples that do not exist in the training corpus.
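For orientation only, the following PyTorch sketch shows a network of the general shape described above: convolution layers interleaved with sub-sampling, a hidden layer, and an output layer over the character repertoire. The layer counts, kernel sizes, the 48x48 input resolution, and the output vocabulary size are illustrative assumptions, not parameters taken from the patent.

import torch
import torch.nn as nn

class HandwritingCNN(nn.Module):
    """Illustrative network: convolution layers with sub-sampling, a hidden layer,
    and an output layer over the character repertoire."""

    def __init__(self, num_characters: int = 30000):  # vocabulary size is assumed
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, padding=2),   # first convolution layer
            nn.ReLU(),
            nn.MaxPool2d(2),                              # sub-sampling layer
            nn.Conv2d(32, 64, kernel_size=5, padding=2),  # final convolution layer
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.hidden = nn.Sequential(                      # hidden layer before output
            nn.Flatten(),
            nn.Linear(64 * 12 * 12, 512),
            nn.ReLU(),
        )
        self.output = nn.Linear(512, num_characters)      # output layer

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # image: (batch, 1, 48, 48) normalized bitmap of a recognition unit
        return self.output(self.hidden(self.features(image)))

model = HandwritingCNN()
logits = model(torch.zeros(1, 1, 48, 48))  # one blank 48x48 input image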

In some embodiments, separately training the set of time-derived features further includes providing (2506) the plurality of stroke-distribution profiles to a statistical model to determine a plurality of time-derived parameters and the individual weights for the plurality of time-derived parameters. In some embodiments, a stroke-distribution profile 2620 is derived from each writing sample in the training corpus 2622, as shown in FIG. 26. The training corpus 2622 optionally includes the same writing samples as the corpus 2604, but also includes the temporal information associated with stroke generation in each writing sample. The stroke-distribution profiles 2620 are provided to a statistical modeling process 2624 that is based on statistical modeling methods (e.g., CNN, K-Nearest Neighbor, etc.); during the statistical modeling process, by minimizing the recognition error, the time-derived features are extracted and the individual weights for the different features are determined. As shown in FIG. 26, the set of individual time-derived features and the individual weights are converted to a set of feature vectors (e.g., feature vectors 2626 or feature vectors 2628) and injected into the convolutional neural network 2602. Thus, the resulting network includes space-derived parameters and time-derived parameters that are orthogonal to one another and together contribute to the recognition of characters.

In some embodiments, the device combines (2508) the set of space-derived features and the set of time-derived features in the handwriting recognition model. In some embodiments, combining the set of space-derived features and the set of time-derived features in the handwriting recognition model is performed by providing the plurality of space-derived parameters and the plurality of time-derived parameters to one of the convolution layers or to the hidden layer of the convolutional neural network. In some embodiments, the plurality of time-derived parameters and their individual weights are provided to the final convolution layer of the convolutional neural network (e.g., the last convolution layer 2610n). In some embodiments, the plurality of time-derived parameters and their individual weights are provided to the hidden layer of the convolutional neural network (e.g., hidden layer 2614 of FIG. 26) for handwriting recognition.
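Continuing the sketch above, the following shows one way the time-derived feature vector (e.g., a 50-value stroke-distribution profile) could be injected at the hidden layer by concatenating it with the flattened convolutional features. The dimensions and names remain illustrative assumptions rather than details taken from the patent.

import torch
import torch.nn as nn

class HandwritingCNNWithStrokeProfile(nn.Module):
    """Same convolutional front end as the HandwritingCNN sketch above, but the
    hidden layer also receives the stroke-distribution profile of the unit."""

    def __init__(self, num_characters=30000, profile_dim=50):
        super().__init__()
        self.features = HandwritingCNN().features        # conv + sub-sampling stack
        self.hidden = nn.Sequential(                      # hidden layer sees both kinds of features
            nn.Linear(64 * 12 * 12 + profile_dim, 512),
            nn.ReLU(),
        )
        self.output = nn.Linear(512, num_characters)

    def forward(self, image, profile):
        spatial = torch.flatten(self.features(image), start_dim=1)
        combined = torch.cat([spatial, profile], dim=1)   # inject time-derived feature vector
        return self.output(self.hidden(combined))

model = HandwritingCNNWithStrokeProfile()
logits = model(torch.zeros(1, 1, 48, 48), torch.zeros(1, 50))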

In some embodiments, the device provides (2512) real-time handwriting recognition of a user's handwriting input using the handwriting recognition model.

In some embodiments, the device generates (2514) the corpus of stroke-distribution profiles from a plurality of writing samples. In some embodiments, each of the plurality of writing samples corresponds to a character in the output character set, and individual spatial information for each constituent stroke of the writing sample, as written, is separately stored (2516). In some embodiments, to generate the corpus of stroke-distribution profiles, the device performs (2518) the following steps:

For each of the plurality of writing samples (2520): the device identifies (2522) the constituent strokes in the writing sample. For each identified stroke of the writing sample, the device calculates (2524) an individual occupancy ratio along each of a plurality of predetermined directions, the occupancy ratio being the ratio between the projection span of the stroke in the respective direction and the maximum projection span of the writing sample in that direction. For each identified stroke of the writing sample, the device also calculates (2526) an individual saturation ratio based on the ratio between the number of pixels in the stroke and the total number of pixels in the writing sample. Thereafter, the device generates (2528) a feature vector for the writing sample as the stroke-distribution profile of the writing sample, the feature vector including the individual occupancy ratios and the individual saturation ratios of at least N strokes in the writing sample, where N is a predetermined natural number. In some embodiments, N is less than the maximum number of strokes observed in any single writing sample among the plurality of writing samples.

In some embodiments, for each of the plurality of writing samples, the device arranges the individual occupancy ratios of the identified strokes in each of the predetermined directions in descending order, and only the N highest occupancy ratios and saturation ratios of the writing sample are included in the feature vector of the writing sample.

In some embodiments, the plurality of predetermined directions include the horizontal direction, the vertical direction, the positive 45 degree direction, and the negative 45 degree direction of the writing sample.
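A minimal Python sketch pulling together the steps above (occupancy ratios along the four projection directions, saturation ratios, and a top-N feature vector) is shown below. It is illustrative only; the pixel-based stroke representation, the zero-padding for samples with fewer than N strokes, and the exact scaling of the diagonal projections are assumptions rather than details specified in the patent.

import numpy as np

# Projection of a pixel (x, y) onto each of the four predetermined directions.
DIRECTIONS = {
    "x": lambda x, y: x,                # horizontal
    "y": lambda x, y: y,                # vertical
    "c": lambda x, y: (x + y) / 2.0,    # +45 degree diagonal
    "d": lambda x, y: (x - y) / 2.0,    # -45 degree diagonal
}

def stroke_distribution_profile(strokes, n_top=10):
    """strokes: list of (num_pixels, 2) arrays of (x, y) pixel coordinates,
    one array per writing stroke. Returns a flat vector of length 5 * n_top."""
    all_pixels = np.concatenate(strokes)
    total_pixels = len(all_pixels)

    # Maximum projection span of the whole character in each direction.
    max_span = {}
    for name, proj in DIRECTIONS.items():
        p = proj(all_pixels[:, 0], all_pixels[:, 1])
        max_span[name] = max(float(p.max() - p.min()), 1e-6)

    occupancy = {name: [] for name in DIRECTIONS}
    saturation = []
    for stroke in strokes:
        x, y = stroke[:, 0], stroke[:, 1]
        for name, proj in DIRECTIONS.items():
            p = proj(x, y)
            occupancy[name].append(float(p.max() - p.min()) / max_span[name])
        saturation.append(len(stroke) / total_pixels)

    def top_n(values):
        vals = sorted(values, reverse=True)[:n_top]
        return vals + [0.0] * (n_top - len(vals))   # zero-padding is an assumption

    features = []
    for name in DIRECTIONS:
        features.extend(top_n(occupancy[name]))      # occupancy ratios, descending
    features.extend(top_n(saturation))               # saturation ratios, descending
    return np.asarray(features)                      # e.g., 50 values for n_top=10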

In some embodiments, to provide real-time handwriting recognition of a user's handwriting input using the handwriting recognition model, the device receives the handwriting input from the user and, in response to receiving the handwriting input, provides handwriting recognition output to the user substantially concurrently with receiving the handwriting input.

The letter "

Figure 112018083390499-pat00056
In some embodiments, each input image of a handwritten character is optionally normalized into a square. Each individual handwritten stroke (e.g., stroke # Vertical, +45 degree diagonal, and -45 degree diagonal of the square. The spans of each stroke Si are measured in four projection directions s for each xspan (i), yspan (i ), is recorded as cspan (i), and dspan (i). in addition, the maximum span is observed over the entire image, it is also recorded. the maximum span of the characters 4 Yspan , cspan , and dspan , respectively, for the projection directions of the projection system 100. For example purposes, although four projection directions are considered here, in principle, any arbitrary set of projections may be used in various embodiments 4 < RTI ID = 0.0 > In transparent directions, the character "
Figure 112018083390499-pat00057
(E.g., represented as xspan , yspan , cspan , and dspan ) and spans (e.g., xspan ( 4 ), yspan ( 4 )) of one of the strokes (e.g., , cspan ( 4 ), and dspan (4)) are shown in Fig.

In some embodiments, once the spans have been measured for all strokes 1 through S, where S is the number of individual writing strokes associated with the input image, the individual occupancy ratio along each projection direction is calculated. For example, the individual occupancy ratio Rx(i) along the x-direction for the stroke Si is calculated as Rx(i) = xspan(i) / xspan. Similarly, the individual occupancy ratios along the other projection directions are calculated as follows: Ry(i) = yspan(i) / yspan, Rc(i) = cspan(i) / cspan, Rd(i) = dspan(i) / dspan.

In some embodiments, the occupancy ratios of all strokes in each direction are separately arranged in descending order, so that, for each projection direction, an individual ranking of all strokes in the input image is obtained in terms of their occupancy ratios in that direction. The ranking of the strokes in each projection direction reflects the relative importance of each stroke along the associated projection direction. This relative importance does not depend on the order and direction in which the strokes were generated in the writing sample. Thus, this ranking based on occupancy ratios is time-derived information that is independent of stroke order and stroke direction.

In some embodiments, each stroke is given a relative weight representing the importance of the stroke to the entire character. In some embodiments, the weights are measured by the ratio of the number of pixels in each stroke to the total number of pixels in the character. This ratio is referred to as the saturation ratio associated with each stroke.

In some embodiments, a feature vector is generated for each stroke based on the saturation ratio and occupancy ratios of that stroke. For a character with S strokes, the resulting set of feature vectors includes 5S features (four occupancy ratios and one saturation ratio per stroke). This set of features is referred to as the stroke-distribution profile of the character.

In some embodiments, only a predetermined number of the top-ranked strokes are used when constructing the stroke-distribution profile of each character. In some embodiments, the predetermined number of strokes is ten. Based on the top 10 strokes, 50 stroke-derived features may be generated for each character. In some embodiments, these features are injected into the final convolution layer of the convolutional neural network, or into a subsequent hidden layer.

In some embodiments, during real-time recognition, an input image of a recognition unit is provided to a handwriting recognition model trained with both the space-derived features and the time-derived features. The input image is processed through each layer of the handwriting recognition model shown in FIG. 26. When the processing of the input image reaches the layer requiring the stroke-distribution profile input (e.g., the last convolution layer or the hidden layer), the stroke-distribution profile of the recognition unit is injected into that layer. The processing of the input image and the stroke-distribution profile continues until an output (e.g., one or more candidate characters) is provided at the output layer 2608. In some embodiments, the stroke-distribution profiles of all recognition units are calculated and provided, together with the input images of the recognition units, as inputs to the handwriting recognition model. In some embodiments, the input image of the recognition unit is initially processed by the handwriting recognition model without the benefit of the time-derived features. The stroke-distribution profile of the recognition unit is provided to the layer of the handwriting recognition model trained with the time-derived features (e.g., the last convolution layer or the hidden layer) only when two or more similar-looking candidate characters are identified with close recognition confidence values. As the input image and the stroke-distribution profile of the recognition unit pass through the last layers of the handwriting recognition model, the two or more similar-looking candidate characters can be better distinguished due to the differences in their stroke-distribution profiles. Thus, time-derived information related to how the recognition unit is formed by the individual writing strokes is used to improve the recognition accuracy without compromising the stroke-order and stroke-direction independence of the handwriting recognition system.
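As an illustration of the two-pass variant just described, the following Python sketch (building on the earlier model and profile sketches; the confidence margin and decision logic are assumptions, not values from the patent) only computes and injects the stroke-distribution profile when the two best candidates from the purely spatial pass are too close to distinguish:

import torch

def recognize_unit(image, strokes, cnn, cnn_with_profile, margin=0.05):
    """First pass uses only the spatially trained model; the stroke-distribution
    profile is consulted only when the top candidates are too close to call."""
    probs = cnn(image).softmax(dim=1)
    top2 = probs.topk(2, dim=1)
    if (top2.values[0, 0] - top2.values[0, 1]).item() >= margin:
        return top2.indices[0, 0].item()     # spatial features alone are conclusive
    profile = torch.tensor(stroke_distribution_profile(strokes),
                           dtype=torch.float32).unsqueeze(0)
    probs2 = cnn_with_profile(image, profile).softmax(dim=1)
    return probs2.argmax(dim=1).item()       # disambiguated with time-derived features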

The foregoing description has been presented for purposes of illustration and with reference to specific embodiments. It should be understood, however, that the foregoing illustrative discussion is not intended to limit the invention to the precise forms disclosed. Many modifications and variations are possible in light of the above teachings. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, thereby enabling those skilled in the art to best utilize the invention and the various embodiments with various modifications as are suited to the particular use contemplated.

Claims (25)

  1. A method of providing handwriting recognition,
    In an electronic device having one or more processors, a touch sensitive surface, and a display,
    Displaying a user input interface on the display including a message area and a stroke input area;
    Receiving a first set of strokes on the touch sensitive surface in the stroke input area;
    Determining a first single character based on the first set of strokes;
    Displaying the first single character in the message area;
    Receiving a second set of strokes on the touch sensitive surface in the stroke input area after receiving the first set of strokes and displaying the first single character;
    Determining a modified first single character based on the first set of strokes and the second set of strokes; And
    Replacing the display of the first single character with the modified first single character
    Wherein the handwriting recognition method comprises the steps of:
  2. The method according to claim 1,
    Rendering the first set of strokes in the stroke input area; And
    Initiating a fading process for the first set of strokes, wherein, during the fading process, the rendering of the first set of strokes in the stroke input area is increasingly faded.
  3. 3. The method of claim 2,
    Wherein the fading process for the first set of strokes begins when a predetermined time period has elapsed after the first set of strokes have been completed by the user.
  4. 3. The method of claim 2,
    Wherein the fading process for the first set of strokes begins when a user begins to input strokes of the second set.
  5. 3. The method of claim 2,
    Wherein an end state of the fading process for the first set of strokes is a state having a predetermined minimum visibility for the strokes of the first set.
  6. 3. The method of claim 2,
    Wherein an end state of the fading process for the first set of strokes has a visibility of zero for the first set of strokes.
  7. The method according to claim 1,
    Wherein displaying the first single character occurs before receiving the second set of strokes.
  8. The method according to claim 1,
    Receiving a third set of strokes on the touch sensitive surface within the stroke input area after receiving the second set of strokes and after displaying the modified first single character;
    Determining a second single character based on the third set of strokes; And
    Displaying the second single character with the modified first single character
    Further comprising the steps of:
  9. The method according to claim 1,
    Wherein the first set of strokes is a single continuous stroke.
  10. The method according to claim 1,
    Wherein the second set of strokes is a single continuous stroke.
  11. The method according to claim 1,
    Wherein the first single character comprises a character not included in the modified first single character.
  12. A computer-readable storage medium storing one or more programs configured to be executed by one or more processors of an electronic device having a display and a touch sensitive surface, the one or more programs comprising instructions for performing the handwriting recognition method of any one of claims 1 to 11.
  13. As an electronic device,
    display;
    Touch sensitive surface;
    One or more processors; And
    A memory for storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for performing the handwriting recognition method of any one of claims 1 to 11.
  14. A method of providing handwriting recognition,
    In an electronic device having one or more processors, a touch sensitive surface, and a display,
    Displaying a user input interface on the display including a message area and a stroke input area;
    Receiving a first set of strokes on the touch sensitive surface in the stroke input area;
    Determining a first text based on the first set of strokes;
    Displaying the first text on the display in the message area;
    Determining one or more candidates based on the first set of strokes, the one or more candidates comprising an emoji;
    Displaying the one or more candidates in a candidate display area;
    Receiving a user input that selects a first candidate from the one or more candidates while displaying the one or more candidates;
    In response to the user input,
    Replacing the display of the first text with a display of the selected first candidate; And
    Clearing the one or more candidates from the candidate display area
    Wherein the handwriting recognition method comprises the steps of:
  15. 15. The method of claim 14,
    Wherein the first text includes a plurality of characters.
  16. 15. The method of claim 14,
    Wherein the first text is a single character.
  17. 15. The method of claim 14,
    Receiving a second set of strokes on the touch sensitive surface within the stroke input area after receiving the first set of strokes and displaying the first text; and determining revised first text based on the first set of strokes and the second set of strokes.
  18. 15. The method of claim 14,
    Wherein the first set of strokes is a single continuous stroke.
  19. 18. The method of claim 17,
    Wherein the second set of strokes is a single continuous stroke.
  20. 15. The method of claim 14,
    Rendering the first set of strokes within the stroke input area; And
    Initiating a fading process for the first set of strokes, wherein, during the fading process, the rendering of the first set of strokes within the stroke input area is increasingly faded.
  21. 21. The method of claim 20,
    Wherein the fading process for the first set of strokes begins when a predetermined time period has elapsed after the first set of strokes have been completed by the user.
  22. 21. The method of claim 20,
    Wherein an end state of the fading process for the first set of strokes is a state having a predetermined minimum visibility for the strokes of the first set.
  23. 21. The method of claim 20,
    Wherein an end state of the fading process for the first set of strokes has a visibility of zero for the first set of strokes.
  24. A computer-readable storage medium storing one or more programs configured to be executed by one or more processors of an electronic device having a display and a touch-sensitive surface, the one or more programs comprising instructions for performing the handwriting recognition method of any one of claims 14 to 23.
  25. As an electronic device,
    display;
    Touch sensitive surface;
    One or more processors; And
    A memory for storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for performing the handwriting recognition method of any one of claims 14 to 23.
KR1020187024261A 2013-06-09 2014-05-30 Managing real-time handwriting recognition KR102005878B1 (en)

Priority Applications (19)

Application Number Priority Date Filing Date Title
US201361832908P true 2013-06-09 2013-06-09
US201361832942P true 2013-06-09 2013-06-09
US201361832921P true 2013-06-09 2013-06-09
US201361832934P true 2013-06-09 2013-06-09
US61/832,921 2013-06-09
US61/832,908 2013-06-09
US61/832,942 2013-06-09
US61/832,934 2013-06-09
US14/290,935 US9898187B2 (en) 2013-06-09 2014-05-29 Managing real-time handwriting recognition
US14/290,945 US9465985B2 (en) 2013-06-09 2014-05-29 Managing real-time handwriting recognition
US14/290,935 2014-05-29
US14/290,945 2014-05-29
US14/292,138 2014-05-30
US14/291,865 2014-05-30
US14/291,722 2014-05-30
PCT/US2014/040417 WO2014200736A1 (en) 2013-06-09 2014-05-30 Managing real - time handwriting recognition
US14/291,865 US9495620B2 (en) 2013-06-09 2014-05-30 Multi-script handwriting recognition using a universal recognizer
US14/291,722 US20140363082A1 (en) 2013-06-09 2014-05-30 Integrating stroke-distribution information into spatial feature extraction for automatic handwriting recognition
US14/292,138 US20140361983A1 (en) 2013-06-09 2014-05-30 Real-time stroke-order and stroke-direction independent handwriting recognition

Publications (2)

Publication Number Publication Date
KR20180097790A KR20180097790A (en) 2018-08-31
KR102005878B1 true KR102005878B1 (en) 2019-07-31

Family

ID=52022661

Family Applications (4)

Application Number Title Priority Date Filing Date
KR1020197021958A KR102121487B1 (en) 2013-06-09 2014-05-30 Managing real-time handwriting recognition
KR1020187024261A KR102005878B1 (en) 2013-06-09 2014-05-30 Managing real-time handwriting recognition
KR1020157033627A KR101892723B1 (en) 2013-06-09 2014-05-30 Managing real-time handwriting recognition
KR1020207016098A KR20200068755A (en) 2013-06-09 2014-05-30 Managing real-time handwriting recognition

Family Applications Before (1)

Application Number Title Priority Date Filing Date
KR1020197021958A KR102121487B1 (en) 2013-06-09 2014-05-30 Managing real-time handwriting recognition

Family Applications After (2)

Application Number Title Priority Date Filing Date
KR1020157033627A KR101892723B1 (en) 2013-06-09 2014-05-30 Managing real-time handwriting recognition
KR1020207016098A KR20200068755A (en) 2013-06-09 2014-05-30 Managing real-time handwriting recognition

Country Status (5)

Country Link
JP (3) JP6154550B2 (en)
KR (4) KR102121487B1 (en)
CN (4) CN109614846A (en)
HK (1) HK1220276A1 (en)
WO (1) WO2014200736A1 (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107526449A (en) * 2016-06-20 2017-12-29 国基电子(上海)有限公司 Character input method
US10114544B2 (en) * 2015-06-06 2018-10-30 Apple Inc. Systems and methods for generating and providing intelligent time to leave reminders
CN107220655A (en) * 2016-03-22 2017-09-29 华南理工大学 A kind of hand-written, printed text sorting technique based on deep learning
DK179374B1 (en) * 2016-06-12 2018-05-28 Apple Inc Handwriting keyboard for monitors
CN106126092A (en) * 2016-06-20 2016-11-16 联想(北京)有限公司 A kind of information processing method and electronic equipment
US10325018B2 (en) 2016-10-17 2019-06-18 Google Llc Techniques for scheduling language models and character recognition models for handwriting inputs
CN106527875B (en) * 2016-10-25 2019-11-29 北京小米移动软件有限公司 Electronic recording method and device
CN107861684A (en) * 2017-11-23 2018-03-30 广州视睿电子科技有限公司 Write recognition methods, device, storage medium and computer equipment
KR102008845B1 (en) * 2017-11-30 2019-10-21 굿모니터링 주식회사 Automatic classification method of unstructured data
KR102053885B1 (en) * 2018-03-07 2019-12-09 주식회사 엘렉시 System, Method and Application for Analysis of Handwriting
CN108710882A (en) * 2018-05-11 2018-10-26 武汉科技大学 A kind of screen rendering text recognition method based on convolutional neural networks
KR101989960B1 (en) 2018-06-21 2019-06-17 가천대학교 산학협력단 Real-time handwriting recognition method using plurality of machine learning models, computer-readable medium having a program recorded therein for executing the same and real-time handwriting recognition system
CN109446780A (en) * 2018-11-01 2019-03-08 北京知道创宇信息技术有限公司 A kind of identity identifying method, device and its storage medium
CN109471587B (en) * 2018-11-13 2020-05-12 掌阅科技股份有限公司 Java virtual machine-based handwritten content display method and electronic equipment
CN110362247A (en) * 2019-07-18 2019-10-22 江苏中威科技软件系统有限公司 It is a set of to amplify the mode signed on electronic document

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100329562A1 (en) 2009-06-30 2010-12-30 Feng Drake Zhu Statistical Online Character Recognition

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3353954B2 (en) * 1993-08-13 2002-12-09 ソニー株式会社 Handwriting input display method and handwriting input display device
DE69523567T2 (en) * 1994-11-14 2002-06-27 Motorola Inc Method for dividing handwriting inputs
US5737443A (en) * 1994-11-14 1998-04-07 Motorola, Inc. Method of joining handwritten input
JP3333362B2 (en) * 1995-04-11 2002-10-15 株式会社日立製作所 Character input device
JPH10307675A (en) * 1997-05-01 1998-11-17 Hitachi Ltd Method and device for recognizing handwritten character
JP4663903B2 (en) * 2000-04-20 2011-04-06 パナソニック株式会社 Handwritten character recognition device, handwritten character recognition program, and computer-readable recording medium recording the handwritten character recognition program
JP4212270B2 (en) * 2001-12-07 2009-01-21 シャープ株式会社 Character input device, character input method, and program for inputting characters
US8479112B2 (en) * 2003-05-13 2013-07-02 Microsoft Corporation Multiple input language selection
JP2005341387A (en) * 2004-05-28 2005-12-08 Nokia Corp Real time communication system, transceiver and method for real time communication system
US7496547B2 (en) * 2005-06-02 2009-02-24 Microsoft Corporation Handwriting recognition using a comparative neural network
US7720316B2 (en) * 2006-09-05 2010-05-18 Microsoft Corporation Constraint-based correction of handwriting recognition errors
KR100859010B1 (en) * 2006-11-01 2008-09-18 노키아 코포레이션 Apparatus and method for handwriting recognition
CN101893987A (en) * 2010-06-01 2010-11-24 华南理工大学 Handwriting input method of electronic equipment
WO2012071730A1 (en) * 2010-12-02 2012-06-07 Nokia Corporation Method, apparatus, and computer program product for overlapped handwriting
CN102135838A (en) * 2011-05-05 2011-07-27 汉王科技股份有限公司 Method and system for partitioned input of handwritten character string
US20130002553A1 (en) * 2011-06-29 2013-01-03 Nokia Corporation Character entry apparatus and associated methods
JP2013089131A (en) * 2011-10-20 2013-05-13 Kyocera Corp Device, method and program
CN102566933A (en) * 2011-12-31 2012-07-11 广东步步高电子工业有限公司 Method for effectively distinguishing command gestures and characters in full-screen handwriting
JP6102374B2 (en) * 2013-03-15 2017-03-29 オムロン株式会社 Reading character correction program and character reading device

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100329562A1 (en) 2009-06-30 2010-12-30 Feng Drake Zhu Statistical Online Character Recognition

Also Published As

Publication number Publication date
JP6154550B2 (en) 2017-06-28
JP2016523406A (en) 2016-08-08
CN105247540A (en) 2016-01-13
JP6559184B2 (en) 2019-08-14
KR20160003112A (en) 2016-01-08
CN109614845A (en) 2019-04-12
HK1220276A1 (en) 2017-04-28
CN109614847A (en) 2019-04-12
KR20180097790A (en) 2018-08-31
JP2017208101A (en) 2017-11-24
KR102121487B1 (en) 2020-06-11
WO2014200736A1 (en) 2014-12-18
CN105247540B (en) 2018-10-16
CN109614846A (en) 2019-04-12
JP2019164801A (en) 2019-09-26
KR101892723B1 (en) 2018-08-29
KR20200068755A (en) 2020-06-15
KR20190090887A (en) 2019-08-02

Similar Documents

Publication Publication Date Title
DK179545B1 (en) Intelligent digital assistant in a multi-tasking environment
US10599331B2 (en) Touch input cursor manipulation
NL2017009B1 (en) Canned answers in messages
KR101967593B1 (en) Touch input cursor manipulation
US9772749B2 (en) Device, method, and graphical user interface for managing folders
EP3141987B1 (en) Zero latency digital assistant
JP6063997B2 (en) Device, method and graphical user interface for navigating a list of identifiers
JP2018088261A (en) Device, method, and graphical user interface for providing navigation and search function
US10592601B2 (en) Multilingual word prediction
AU2016202878B2 (en) Device, method, and graphical user interface for manipulating soft keyboards
US10042549B2 (en) Device, method, and graphical user interface with a dynamic gesture disambiguation threshold
US9842105B2 (en) Parsimonious continuous-space phrase representations for natural language processing
US9355472B2 (en) Device, method, and graphical user interface for adjusting the appearance of a control
JP2017199420A (en) Surfacing off-screen visible objects
AU2019200030B2 (en) Devices and methods for manipulating user interfaces with a stylus
US10067938B2 (en) Multilingual word prediction
US9703450B2 (en) Device, method, and graphical user interface for configuring restricted interaction with a user interface
EP3084580B1 (en) User interface for overlapping handwritten text input
JP6701066B2 (en) Dynamic phrase expansion of language input
US20180239512A1 (en) Context based gesture delineation for user interaction in eyes-free mode
US20170263248A1 (en) Dictation that allows editing
US20190121520A1 (en) Device, Method, and Graphical User Interface for Manipulating Framed Graphical Objects
US20180267686A1 (en) Semantic zoom animations
JP6404267B2 (en) Correction of language input
US10007426B2 (en) Device, method, and graphical user interface for performing character entry

Legal Events

Date Code Title Description
A107 Divisional application of patent
A201 Request for examination
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right
GRNT Written decision to grant