EP2044587A2 - System and method for a user interface for text editing and menu selection - Google Patents

System and method for a user interface for text editing and menu selection

Info

Publication number
EP2044587A2
Authority
EP
European Patent Office
Prior art keywords
textual
text
component
user
output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP07835973A
Other languages
German (de)
English (en)
Other versions
EP2044587A4 (fr)
Inventor
Clifford A. Kushler
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US11/533,714 (US7382358B2)
Application filed by Individual
Publication of EP2044587A2
Publication of EP2044587A4
Current legal status: Withdrawn


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233Character input methods
    • G06F3/0237Character input methods using prediction or retrieval techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482Interaction with lists of selectable items, e.g. menus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04886Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus

Definitions

  • the invention relates to graphical and gestural user interfaces for computer systems and, more specifically, to any of a variety of computer systems wherein a user performs input actions using one or more input devices, and wherein the data generated by such input actions are analyzed in order to recognize one or more textual objects intended by the user in performing those input actions.
  • the invention also relates to a graphical approach for presenting a menu of two or more choices to a user for selection by simple gestures that are quick and easy to perform, where such menu selections are performed as part of a text input method or for other control functions of the computer system.
  • Input action recognition systems
  • the textual objects identified by such input action recognition systems are output by a text presentation system (most commonly implemented as a region of a display device on which the text is displayed) such that the generated text can be further edited by the user.
  • the system can maintain a record (at least for a limited number of the most-recently generated textual objects) of one or more of the alternate textual objects also determined to correspond reasonably closely to the input action, and (at least temporarily) associate these alternate textual objects with the textual object actually generated for output.
  • the system can record certain data or information regarding the input actions and associate this data with the record of the associated alternate textual interpretations, or re-process this recorded data regarding the input actions to identify alternate textual objects at a later time.
  • the text presentation system comprises a text output region on a display.
  • in mouse-based desktop systems, this action is, perhaps universally, a single click of the mouse performed within the text output region.
  • in stylus-based touch-screen systems, this action is, again, perhaps universally, a single tap of the stylus performed within the text output region.
  • any editing action performed by the user results in the text insertion position being re-located to the location of the edited text. While this behavior made sense (and was also in a sense unavoidable) in the original mouse-and-keyboard model for text editing, in many scenarios in the input action recognition systems described above, this behavior is no longer desirable. In the "text editing" that is necessitated by these scenarios, the user is in general trying to input a particular stream of text.
  • the user looks at the text output region (or otherwise examines the text presentation system) and notices that the text that has been generated differs in some way from the text that the user intended to generate, due to the fact that one or more of the user's preceding input actions were "incorrectly recognized” by the system so that one or more textual objects other than those intended by the user have appeared somewhere earlier in the text output region.
  • the user simply wishes to correct the incorrectly recognized text (in essence, to "re-map” or "re-edit” the system's interpretation of the original input action(s) performed by the user when the textual object was generated), and to continue to input new text at the current text insertion position.
  • the problem remains that, with existing systems, it is not possible to edit any incorrectly recognized text without re-locating the text insertion position to the location of the edited text.
  • Standard desktop computing systems are almost universally equipped with a full-size desktop keyboard and a mouse (or mouse equivalent such as a trackball or a graphic tablet digitizer). As a result, the majority of users are comfortable and relatively efficient using a keyboard and mouse for text input on desktop systems.
  • For portable, handheld devices, the desktop keyboard and mouse are impractical due to their size and the need (in the case of a standard mouse) for a flat, relatively stable surface. Consequently, many of the above input action recognition text input systems were either developed specifically for use on portable, handheld devices, or are often viewed as being particularly useful when used on such devices.
  • Portable computing devices continue to get more powerful and more useful.
  • the touch-screen has proved to be a very useful, flexible and easy-to-use interface for portable devices.
  • the touch-screen interface is used on a wide variety of portable devices, including larger devices such as Tablet PCs, but it has been found to be particularly effective on smaller devices such as PDA's and mobile phones.
  • the development of such devices has largely been focused on two conflicting goals: one is making the devices smaller, and another is making them easier, faster and more convenient to use.
  • One user interface element that is commonly used in a wide variety of systems is to present a menu of choices to the user to allow the user to select a desired response among the alternatives presented.
  • This user interface element is frequently used in the above input action recognition text input systems, since it is often necessary to present the user with a list of possible alternative textual interpretations of one or more input actions, and allow the user to select the correct interpretation that reflects the user's intention in performing the input actions.
  • the most natural way to interact with an onscreen menu is to select the desired option simply by contacting the desired menu selection with a stylus or a finger. It is often desirable to minimize the amount of display area required to display the menu, so that other elements on the screen are not obscured by the menu.
  • the user is required to control the placement of the stylus such that the first contact of the screen by the stylus (or finger) occurs within the region associated with the desired selection.
  • the user is allowed to initially contact the screen within any of the set of active selection regions, then slide the stylus until it is within the region associated with the desired selection (without breaking contact with the screen) before lifting the stylus.
  • the initial contact location must be carefully controlled to achieve the desired selection, and in the second approach the final contact location must be similarly controlled.
  • each menu selection item is represented as a given two-dimensional region of the displayed menu, in either approach the user is required to control the placement of the screen contact in two dimensions in order to effect the desired menu selection.
  • the methods and systems of the present invention solve the problems described above for input action recognition text input systems by enabling the user to edit any incorrectly recognized text without re-locating the text insertion position to the location of the edited text.
  • the text editing system of the present invention tracks and records the location of the text insertion position within the text output region so that the text insertion position can be automatically restored to this location immediately following the "re-editing" of incorrectly recognized text.
  • the system defines a text editing user action (or “gesture”) that is distinct from that which is used to change the text insertion position to a new location in the text output region, and wherein this distinct text editing user action is performed to indicate that the user wishes to change a previously generated (but incorrectly recognized) textual object to one of the alternate textual objects associated with the textual object actually generated for output (hereinafter, the "Re-Edit" gesture).
  • a distinct Re-Edit gesture can be defined as a "double tap" of the stylus (two taps occurring within a threshold maximum distance of each other with no more than a threshold maximum time interval between the two consecutive taps) that occurs near a previously output textual object in the text output region.
  • when the system detects that the user has performed the Re-Edit gesture in a region associated with a previously output textual object, the textual object is replaced in the text output region by a selected one of the alternate textual objects associated with the one or more input actions which resulted in the generation of the original textual object (either a default selection or a selection performed by the user), and the text insertion position is then automatically restored to its previous location (prior to the detection of the Re-Edit gesture).
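  • As an illustration of the double-tap threshold test described above, the following Python sketch (not part of the patent; the class name and threshold values are invented for illustration) flags a Re-Edit gesture when two consecutive taps fall within assumed distance and time limits:

```python
# Illustrative sketch: detecting the double-tap Re-Edit gesture using
# distance and time thresholds (threshold values are assumptions).
import math
import time

MAX_TAP_DISTANCE = 20      # assumed threshold, in pixels
MAX_TAP_INTERVAL = 0.35    # assumed threshold, in seconds

class ReEditGestureDetector:
    def __init__(self):
        self._last_tap = None  # (x, y, timestamp) of the previous tap, if any

    def on_tap(self, x, y, timestamp=None):
        """Return True when two consecutive taps qualify as a Re-Edit gesture."""
        timestamp = time.monotonic() if timestamp is None else timestamp
        is_re_edit = False
        if self._last_tap is not None:
            lx, ly, lt = self._last_tap
            close_enough = math.hypot(x - lx, y - ly) <= MAX_TAP_DISTANCE
            quick_enough = (timestamp - lt) <= MAX_TAP_INTERVAL
            is_re_edit = close_enough and quick_enough
        # A recognized gesture consumes both taps; otherwise remember this one.
        self._last_tap = None if is_re_edit else (x, y, timestamp)
        return is_re_edit

detector = ReEditGestureDetector()
print(detector.on_tap(100, 40, timestamp=0.00))  # False: first tap
print(detector.on_tap(104, 42, timestamp=0.20))  # True: close and quick enough
```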
  • the alternate textual object that replaces the original textual object is to be selected by the user, often the most effective user interface is to present the user with a menu of alternate textual objects from which to select the desired textual object.
  • the present invention further enhances the speed and ease with which such selections can be made by enabling the user to make a selection from a menu of choices without having to control the screen contact action so that it either begins or ends within the two-dimensional menu region associated with the desired menu selection.
  • the menu choice that is activated by the user's screen contact action is determined by the segment of the menu's outermost boundary that is crossed by the path of the contact action, where each segment of the menu boundary is associated with one of the menu selections.
  • the initial contact can occur anywhere within the displayed menu, and is not restricted to having to start (or end) within a region that is (or appears to be) associated with the particular desired textual object. Even greater efficiencies can be obtained by utilizing the menu selection system of the present invention whenever a selection needs to be made between two or more possible choices.
  • FIGURE 1 shows a structural system block diagram showing the typical hardware components of a system that embodies the methods of the text editing and menu selection systems of the present invention as shown in FIGURES 2A, 2B and 2C and in FIGURES 3A, 3B and 3C;
  • FIGURE 2A shows an example of a textual object that is being Re-Edited according to an aspect of the method of the present invention
  • FIGURE 2B shows an example of two concatenated textual objects wherein one is being Re-Edited according to an aspect of the method of the present invention
  • FIGURE 2C shows an example of the result of Re-Editing one of two concatenated textual objects from FIGURE 2B according to an aspect of the method of the present invention
  • FIGURE 3A shows an example of a menu that is structured in one way that takes advantage of the method of the present invention
  • FIGURE 3B shows an example of a menu that is structured in one way that takes advantage of the method of the present invention, where two of the presented choices have a lower a priori probability of corresponding to the user's intended selection;
  • FIGURE 3C shows an example of a menu that is structured in one way that takes advantage of the method of the present invention, where the boundary segments of the menu that are associated with each of the presented choices are indicated by color coding;
  • FIGURE 4 shows an example of a contact action on a menu that is structured in one way that takes advantage of the method of the present invention
  • FIGURE 5 shows an example of a menu that is structured in one way that takes advantage of the method of the present invention, and allows the number of presented selection choices to be increased;
  • FIGURE 6 shows an example of a menu that is structured in the manner illustrated in FIGURE 5, and where similar background colors are used to show sets of menu choices that are selectable by similar selection gestures.
  • the system of the present invention enables the user to accomplish the goal of correcting text that has been incorrectly recognized by an input action recognition system without unnecessarily interfering with the user's process of entering and editing text.
  • when an intended word or phrase is incorrectly recognized (where one or more input actions have been "mapped" to text other than that intended by the user in performing the input actions), the user must correct the mis-recognition of the system. This correction does not reflect an intention by the user to change or edit the user's intended text, but rather only to "re-map" or "re-edit" the system's interpretation of the original input action(s) performed by the user when the textual object was generated.
  • the system of the present invention enables the user to correct the mis-recognized text, and then continue to input text at the original location of the text insertion position.
  • the system tracks and records the location of a text insertion position within the text output region so that, when appropriate, a text insertion position can be automatically restored to this location.
  • the system defines a text editing user action (or "gesture") that is distinct from that which is used to change the text insertion position to a new location in the text output region, and wherein this distinct text editing user action is performed to indicate that the user wishes to change a previously generated (but incorrectly recognized) textual object to one of the alternate textual objects associated with the textual object actually generated for output (hereinafter, the "Re-Edit" gesture).
  • a distinct Re-Edit gesture can be defined as a double tap of the stylus near a previously output textual object in the text output region.
  • when the system detects that the user has performed the Re-Edit gesture in the region associated with a previously output textual object, the textual object is replaced in the text output region by one of the associated alternate textual objects (as described below), and the text insertion position is then automatically restored to its previous location (prior to the detection of the Re-Edit gesture).
  • the system allows the user to select a word in the output text for re-editing by highlighting the word to be edited (or by positioning the text insertion position within or adjacent to the boundaries of the word) and then activating a designated "Re-Edit" editing function key (in this case, the system must track the previous two locations of the text insertion position in order to restore it to its position prior to the re-location of the text insertion position that was performed in order to activate the Re-Edit function).
  • the system recognizes when the Re-Edit gesture is performed in the output text region, and identifies the word in the output text region closest to where the pre-determined stylus action or gesture was performed as the target word for re-editing.
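  • The following hypothetical Python sketch shows one way the "closest word" lookup described above could work, given the character offset nearest the gesture location; the function name, offset convention, and regular-expression tokenization are illustrative assumptions rather than the patent's method:

```python
# Hypothetical sketch of locating the Re-Edit target: given the character
# offset nearest the tap, find the word whose span is closest to that offset.
import re

def find_target_word(text, tap_offset):
    """Return (word, start, end) for the word nearest tap_offset in text."""
    best = None
    for match in re.finditer(r"\S+", text):
        start, end = match.span()
        if start <= tap_offset < end:
            distance = 0
        else:
            distance = min(abs(tap_offset - start), abs(tap_offset - (end - 1)))
        if best is None or distance < best[0]:
            best = (distance, match.group(), start, end)
    return None if best is None else best[1:]

# Example: a gesture reported near offset 14 of the sample utterance
print(find_target_word("It's hard to wreck a nice beach", 14))  # ('wreck', 13, 18)
```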
  • the pre-determined editing action is a "double-tap" of a stylus on a word displayed on a touch-screen.
  • in a mouse-based system, the pre-determined editing action is (for example) to briefly hover the mouse over the word to be edited, then quickly move the mouse back and forth one time (this is simply an exemplary gesture, as many such gestures can be defined).
  • the Re-Edit gesture is defined such that a plurality of words can be selected with a single gesture.
  • recognition errors commonly affect more than one word in the generated output.
  • An example of this might be where the user utters "It's hard to recognize speech" and the output generated by the system is "It's hard to wreck a nice beach.”
  • the Re-Edit gesture may be defined, for example, as a line drawn through a sequence of words, where in the example above, a line would be drawn through the phrase "wreck a nice beach.” The system would then generate one or more alternate interpretations of the associated utterance without changing the interpretation of the portion of the utterance corresponding to "It's hard to”.
  • other Re-Edit gestures may be defined, such as a circle drawn around the sequence of words to be Re-Edited, and such alternatives are not to be considered outside the scope of the present invention.
  • the Re-Edit "gesture" may be defined as a spoken command, which in the current example could be a spoken phrase such as "Re-Edit: wreck a nice beach."
  • the system stores a list of the alternate textual objects identified as being those most closely corresponding to one or more input actions for one or more of the most recently output textual objects.
  • the system displays a textual object selection list containing the list of alternate textual objects originally identified as the most likely matching textual objects determined with respect to the original input action(s) performed by the user resulting in the output of the textual object being Re-Edited.
  • the originally output textual object is omitted from the displayed textual object selection list since the Re-Edit procedure is (in general) only performed in order to replace it.
  • the textual object to be Re-Edited is automatically replaced with the next-most-closely corresponding alternate textual object from the list of the alternate textual objects identified as being those most closely corresponding to the input action(s) from which the textual object to be Re-Edited was generated (without requiring the user to select a textual object from a displayed textual object selection list).
  • This approach can be advantageous in systems where, in the majority of cases, the user's intended textual object is the next-most-closely corresponding alternate textual object.
  • when a textual object is Re-Edited that has already been Re-Edited (and thus corresponds to the next-most-closely corresponding alternate textual object that has automatically replaced the originally output textual object), the system then displays a textual object selection list to allow the user to select an alternate textual object.
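  • A minimal Python sketch of the replacement policy just described, assuming each output word keeps the ranked alternate interpretations from its input action (the record layout and the selection-list callback are invented for illustration): the first Re-Edit silently substitutes the next-most-likely alternate, and a repeated Re-Edit falls back to a selection list:

```python
# Sketch of the Re-Edit replacement policy (data shapes are assumptions).
def re_edit(record, choose_from_list):
    """record: {'current': str, 'alternates': [str, ...], 'auto_replaced': bool}
    choose_from_list: callback that displays a selection list and returns a choice."""
    alternates = [w for w in record['alternates'] if w != record['current']]
    if not record['auto_replaced'] and alternates:
        # First Re-Edit: silently substitute the next-most-likely interpretation.
        record['current'] = alternates[0]
        record['auto_replaced'] = True
    else:
        # Already auto-replaced (or no obvious default): let the user pick.
        record['current'] = choose_from_list(alternates)
    return record['current']

record = {'current': 'great', 'alternates': ['heat', 'heart', 'great'],
          'auto_replaced': False}
print(re_edit(record, choose_from_list=lambda options: options[0]))  # 'heat'
print(re_edit(record, choose_from_list=lambda options: options[0]))  # user picks 'heart'
```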
  • FIGURE 1 shows a simplified block diagram of the hardware components of a typical device 100 in which the System and Method for a User Interface for Text Editing and Menu Selection is implemented.
  • the device 100 includes one or more input devices 120 that provide input to the CPU (processor) 110, notifying it of actions performed by a user, typically mediated by a hardware controller that interprets the raw signals received from the input device and communicates the information to the CPU 110 using a known communication protocol.
  • an input device 120 is a touch-screen that provides input to the CPU 110, notifying it of contact events when the touch-screen is touched by a user.
  • the CPU 110 communicates with a hardware controller for a display 130 to draw on the display 130.
  • a display 130 is a touch-screen display that provides graphical and textual visual feedback to a user.
  • a speaker 140 is also coupled to the processor so that any appropriate auditory signals can be passed on to the user as guidance (predominantly for error signals), and a microphone 141 is also coupled to the processor so that any spoken input can be received from the user (predominantly for input action recognition systems implementing speech recognition as a method of text input by the user).
  • the processor 110 has access to a memory 150, which may include a combination of temporary and/or permanent storage, including both read-only and writable memory (random access memory or RAM), read-only memory (ROM), writable non-volatile memory such as FLASH memory, hard drives, floppy disks, and so forth.
  • the memory 150 includes program memory 160 that contains all programs and software such as an operating system 161, an input action recognition system software 162, and any other application programs 163.
  • the program memory 160 also contains at least one of a text editing system software 164 for recording and restoring the location of a text insertion position according to the method of the present invention, and a menu selection system software 165 for graphically displaying two or more choices to a user and determining a selection by a user of one of said graphically displayed choices according to the method of the present invention.
  • the memory 150 also includes data memory 170 that includes any textual object database(s) 171 required by the input action recognition system software 162, optional storage for maintaining a record of user options and preferences 172, and any other data 173 required by any element of the device 100.
  • FIGURE 2A shows how the Re-Edit procedure can be activated by a Re-Edit function key 208 presented on the display 130 of the system 100, or by performing the pre-determined Re-Edit gesture on a previously output textual object ("great" 200 in FIGURE 2A) to correct an output textual object that does not correspond to the user's intended textual object.
  • the system identifies the textual object containing or adjacent to the current text insertion position and automatically selects it as the target of the Re-Edit procedure.
  • FIGURE 2A shows a resulting textual object selection list 202.
  • the originally intended textual object "heat” appears as a first textual object 204 in the textual object selection list 202 because it was determined to be the second-most-closely matching textual object with respect to the original input action (following the word "great” which was originally output as the textual object corresponding to the original input action).
  • the processor 110 automatically replaces the highlighted textual object "great" 200 with the originally intended textual object "heat" in a text output region 206 on the display.
  • the Re-Editing process inserts or deletes spaces according to the manner in which spaces are automatically generated by the input action recognition system software 162.
  • where an input action recognition system automatically generates spaces between textual objects, if a space is automatically generated adjacent to a textual object that should instead be concatenated directly to the neighboring text, the resulting output is incorrect.
  • a simple approach to solving this problem in an input action recognition system is to flag certain textual objects as exceptions to the usual rule of automatically generating a space between adjacent textual objects. For example, in English, the textual object "'s" (apostrophe-s) is flagged in the system's database of textual objects to indicate that a space is to be generated between it and following textual objects, but not between it and preceding textual objects.
  • the textual object "l'" (l-apostrophe) is flagged in the system's database of textual objects to indicate that a space is to be generated between it and preceding textual objects, but not between it and following textual objects.
  • the input action recognition system includes a function that can be activated to suppress the automatic generation of a space the next time that it would normally occur (so that two such textual objects can be generated in succession without the generation of an intervening space by the system).
  • the function suppresses the automatic generation of any spaces that would normally be generated until the function is activated to re-enable the automatic generation of spaces.
  • the input action recognition system includes a function that removes the most recently generated automatic space. As described below, the Re-Edit processing of the current invention accommodates these various exceptions in the automatic generation of spaces, so that Re-Edited text is generated with the proper spacing when a textual object is replaced in the Re-Editing process by a textual object that is governed by different rules regarding automatic spacing.
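  • A small Python sketch of the spacing rules discussed above; the lexicon flags and their names are assumptions standing in for the system's textual object database, whose format the patent does not specify:

```python
# Sketch of automatic-space handling with per-object exception flags.
LEXICON_FLAGS = {
    "'s": {"space_before": False, "space_after": True},   # e.g. of's, John's
    "l'": {"space_before": True,  "space_after": False},  # e.g. French elision
}
DEFAULT_FLAGS = {"space_before": True, "space_after": True}

def join_textual_objects(objects):
    """Concatenate generated textual objects, honoring automatic-space flags."""
    out = ""
    prev_flags = None
    for obj in objects:
        flags = LEXICON_FLAGS.get(obj, DEFAULT_FLAGS)
        if out and prev_flags["space_after"] and flags["space_before"]:
            out += " "
        out += obj
        prev_flags = flags
    return out

print(join_textual_objects(["the", "wizard", "of", "'s"]))  # the wizard of's
print(join_textual_objects(["the", "wizard", "of", "Oz"]))  # the wizard of Oz
print(join_textual_objects(["de", "l'", "eau"]))            # de l'eau
```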
  • the Re-Edit gesture is defined such that a substring of a single contiguous string of textual object characters can be selected with a single gesture.
  • the word "of's" 210 is composed of the words "of" 212 and "'s" 214. Note that in this example, these two textual objects were created in response to two separate input actions performed in succession. Note also that this text was generated on a user input action recognition system which, by default, automatically outputs a space between adjacent textual objects.
  • the textual object " 's " 214 is flagged in the system's database of textual objects as an exception to this default behavior so that no space is generated prior to outputting " 's " 214 so that it is concatenated to the end of the preceding textual object to create its correct possessive form.
  • the textual object " 's " 214 was not the textual object intended by the user when the corresponding input action was performed.
  • the system first determines whether the complete text string ("of's" 210 in the example of FIGURE 2B) was generated in response to a single user input action, and if so, the system responds in the manner described above based on the alternate textual objects identified for the input action.
  • the system identifies the component sub-string textual object that is closest to the performed Re-Edit gesture, and, for example, the identified component sub-string textual object is highlighted and a list of the alternate textual objects associated with the user input action from which the component substring textual object was generated is displayed to the user for selection.
  • the location 216 associated with the detected Re-Edit gesture (for example, a double-tap at location 216) is nearest to the component sub-string " 's " 214 which has been identified as the Re-Editing "target” and has been highlighted in the text output region 206.
  • the user has just finished entering a complete sentence, so that, prior to the detection of the Re-Edit gesture at location 216, the text insertion position was located at the end of the just-entered sentence at location 218, such that the user is ready to enter the next sentence.
  • the originally intended textual object "Oz" appears as the first textual object 220 in the textual object selection list 202 because it was determined to be the second-most-closely matching textual object with respect to the original input action (following the textual object " 's " 214 which was originally output as the textual object corresponding to the original input action).
  • FIGURE 2C
  • the selected replacement textual object "Oz" 220 is not flagged as such an exception in the system's database of textual objects 171, so that when the system replaces the textual object "'s" 214 with the textual object "Oz" 220, a space is generated prior to inserting the replacement text so that a space 222 appears between the words "of" 212 and "Oz" 220.
  • the text editing system software 164 tracks the location of the text insertion position in the text output region 206, and immediately following the replacement of a Re-Edited textual object with an alternate textual object, the text insertion position is automatically restored to its former location in the output text prior to the performance of the Re-Edit procedure.
  • the text insertion position is automatically restored to its original location 218 at the end of the completed sentence (its location prior to the detection of the Re-Edit gesture at location 216), so that the user can continue entering text without having to manually re-locate the text insertion position.
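  • The following Python sketch illustrates the record-and-restore behavior described above under the simplifying assumption that the output text is a plain string and the insertion position is a character offset; when the replacement differs in length from the Re-Edited text, the restored offset is shifted accordingly. The function name and offsets are illustrative only:

```python
# Sketch of restoring the insertion position after a Re-Edit replacement.
def apply_re_edit(text, caret, start, end, replacement):
    """Replace text[start:end] with replacement, then restore the caret,
    shifting it if the edit occurred before it and changed the text length."""
    new_text = text[:start] + replacement + text[end:]
    delta = len(replacement) - (end - start)
    new_caret = caret + delta if caret >= end else caret
    return new_text, new_caret

text = "The Wizard of's is a good movie."
caret = len(text)  # the user is about to type the next sentence
text, caret = apply_re_edit(text, caret, start=13, end=15, replacement=" Oz")
print(text)                 # The Wizard of Oz is a good movie.
print(caret == len(text))   # True: the caret is restored to the end of the sentence
```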
  • the identified textual object remains highlighted (selected) and the text insertion position is not restored to its former location in the output text so that other actions may be taken with respect to the still-highlighted identified textual object.
  • the textual object selection list is automatically cancelled, the text insertion position is not restored to its former location in the output text, and the text generated in response to the one or more additional input actions is sent to the text output region and, in accordance with the standard behavior of word processing programs, consequently replaces the previously output textual object by virtue of the fact that the previously output textual object is the currently highlighted (selected) text region.
  • an input action that corresponds to the generation of a control character results in sending a control character to the target application.
  • for example, when the user generates a control-B ("bold") while a previously output textual object is highlighted, the target application receives the control-B and applies bold formatting to the highlighted previously output textual object.
  • when the system detects that the user has scrolled the displayed text region such that the location of the text insertion position is no longer visible on the display screen when the Re-Edit gesture is performed, the text insertion position is not restored to its former (no longer visible) location in the output text when an alternate textual object is selected from the automatically generated textual object selection list.
  • the various possible responses of the system to the pre-determined Re-Edit gesture and subsequent actions are determined by the user by selecting from among a set of system preferences.
  • the text editing system software 164 detects when the user has re-located the text insertion cursor within the text output region, and modifies automatic system behaviors with respect to aspects of the surrounding context of the new location of the text insertion position. In one aspect, when the system in general automatically outputs spaces between generated words, and where the system detects that the text insertion position has been moved to a new context, the system disables the automatic output of a space prior to the first word output in the new context.
  • when the system detects that the text insertion position has been moved to a new context and such automatic spacing is enabled, the system examines the character to the left of the new text insertion position, and when the character to the left of the text insertion position is a "white space" character, and/or when the text insertion position is at the first character position of a text field, and/or when the text field is a password-entry field, the system automatically disables the automatic output of a space prior to the first word output in the new context.
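  • A minimal Python sketch of the context check just described; the function name and the password-field flag are illustrative assumptions:

```python
# Sketch: decide whether an automatic space should precede the first word
# output at a freshly relocated text insertion position.
def should_auto_space(text_before_caret, field_is_password=False):
    if field_is_password:
        return False                 # never auto-space in password-entry fields
    if not text_before_caret:
        return False                 # caret at the first character position
    if text_before_caret[-1].isspace():
        return False                 # already preceded by white space
    return True

print(should_auto_space(""))         # False: start of field
print(should_auto_space("Hello "))   # False: space already present
print(should_auto_space("Hello"))    # True: a space should be inserted
```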
  • a list of one or more alternative textual objects is presented to the user for selection of the textual object intended by the user.
  • These alternative textual objects correspond to alternative "mappings" by the input action recognition system of the one or more input actions originally mapped by the system to the textual object being re-edited.
  • the list of alternative textual objects is presented for selection by the user in a special graphical presentation that enables the user to indicate the desired selection with a simple and intuitive gesture that in general requires less precision than menu selection methods known in the prior art and which therefore further speeds up the re-editing process by speeding up the process of selecting the intended object.
  • this same special graphical menu presentation is utilized elsewhere in the user interface to correspondingly speed up the process of selecting from two or more alternate choices elsewhere in the system.
  • the graphical presentation and gestural selection method of the present invention is particularly effective when one or more of the possible alternative selection choices presented has a higher than average a priori probability of being the user's intended choice. This is frequently the case in many situations where the user is offered a choice.
  • FIGURE 3A shows an example of a menu 300 that is structured in one way that takes advantage of the method of the present invention.
  • the example shown in FIGURE 3A contains six selection sub-regions: five selection sub-regions 301 - 305 for Choice 1 through Choice 5, and a sixth sub-region 306 labeled with an icon designating this selection as an action to cancel the menu 300.
  • Each of the six selection sub-regions 301 - 306 is associated with a corresponding segment 311 - 316 of the perimeter boundary of the menu 300.
  • a menu selection is made by initially contacting the screen anywhere within the region enclosed by the outermost perimeter boundary of the menu and remaining in contact with the screen while tracing a path that exits the menu region through the segment of the menu perimeter boundary that is associated with the desired selection, then terminating the contact action by breaking contact with the screen (e.g. lifting the stylus) at a location outside the menu region.
  • the menu selection is made effective as soon as the contact action exits the menu region, without requiring the termination of the contact action. Requiring the termination of the contact action to make the menu selection effective potentially allows a user to correct a pending selection by re-entering the menu region without breaking contact, then exiting through a different segment of the menu perimeter boundary prior to terminating the contact action.
  • the point of control can be the cursor controlled by the movement of a mouse, which is "activated” and "de-activated” by clicking and releasing, respectively, the mouse button.
  • the segment of the perimeter boundary of the menu 300 that is associated with a menu selection sub-region is that segment of the menu's perimeter boundary that is also part of the perimeter boundary of the menu selection sub-region itself.
  • the example menu shown in FIGURE 3A shows that Choice 1 in selection sub-region 301 is associated with the menu perimeter boundary segment 311 that comprises the entire top border of the menu 300, which can be selected by contacting the screen anywhere within the menu 300, stroking upward to exit the menu region at the top, and breaking contact with the screen.
  • FIGURE 4 shows an example of a contact action 400 that begins at an initial contact location 401 and exits from the menu at exit location 402 on the perimeter boundary segment 311.
  • the result of contact action 400 is the selection of menu choice 301 ("Choice 1"), even though the initial contact location is within menu selection sub-region 303, which is associated with "Choice 3."
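  • The boundary-crossing rule can be made concrete with the following Python sketch for a rectangular menu; the segment layout (whole top edge, whole bottom edge, split sides) is an assumption loosely patterned on FIGURE 3A, and the geometry helper is not taken from the patent:

```python
# Sketch: determine the selection from the perimeter segment crossed when the
# contact path first leaves the menu rectangle (x0, y0, x1, y1).
def exit_crossing(rect, p_inside, p_outside):
    """Return (side, fraction along that side) where the path from p_inside
    toward p_outside first crosses the rectangle boundary, or None."""
    x0, y0, x1, y1 = rect
    (ax, ay), (bx, by) = p_inside, p_outside
    candidates = []
    if by != ay:
        candidates += [("top", (y0 - ay) / (by - ay)), ("bottom", (y1 - ay) / (by - ay))]
    if bx != ax:
        candidates += [("left", (x0 - ax) / (bx - ax)), ("right", (x1 - ax) / (bx - ax))]
    best = None
    for side, t in candidates:
        if not 0.0 <= t <= 1.0:
            continue
        x, y = ax + t * (bx - ax), ay + t * (by - ay)
        if x0 - 1e-9 <= x <= x1 + 1e-9 and y0 - 1e-9 <= y <= y1 + 1e-9:
            frac = (x - x0) / (x1 - x0) if side in ("top", "bottom") else (y - y0) / (y1 - y0)
            if best is None or t < best[0]:
                best = (t, side, frac)
    return None if best is None else (best[1], best[2])

# Assumed segment layout, loosely following FIGURE 3A.
SEGMENTS = [
    ("top",    0.0, 1.0, "Choice 1"),
    ("bottom", 0.0, 1.0, "Choice 5"),
    ("left",   0.0, 0.5, "Choice 2"),
    ("left",   0.5, 1.0, "Choice 3"),
    ("right",  0.0, 0.5, "Choice 4"),
    ("right",  0.5, 1.0, "Cancel"),
]

def choice_for_path(rect, path):
    """The selection is fixed by the segment crossed when the path leaves the menu."""
    for p, q in zip(path, path[1:]):
        crossing = exit_crossing(rect, p, q)
        q_inside = rect[0] <= q[0] <= rect[2] and rect[1] <= q[1] <= rect[3]
        if crossing and not q_inside:
            side, frac = crossing
            for seg_side, lo, hi, choice in SEGMENTS:
                if seg_side == side and lo <= frac <= hi:
                    return choice
    return None  # the path never left the menu

# A stroke that begins well inside the menu and exits through the top edge:
print(choice_for_path((0, 0, 300, 120), [(40, 90), (60, 50), (80, -10)]))  # Choice 1
```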
  • One advantage of the present invention is that a user can in general make menu selections much faster because the contact action can be much less precise and can therefore be performed much more quickly. Further advantage can be gained when there is an a priori expectation that certain menu selections are more likely to be selected than others. Assume, for example, that in the case of the menu shown in FIGURE 3A, it is known in advance that menu selections 301 ("Choice 1") and 305 ("Choice 5") each tend to be selected more than twice as frequently as any of selections 302, 303 or 304.
  • the example menu shown in FIGURE 3A is designed so that the entire top border segment 311 of the menu perimeter boundary is associated with selection sub-region 301 of "Choice 1" and the entire bottom border segment 315 of the menu perimeter boundary is associated with selection sub-region 305 of "Choice 5."
  • segments 311 and 315 are both nearly three times longer than each of segments 312, 313 and 314 (associated with selection sub-regions 302, 303 and 304, respectively), so that "Choice 1" and "Choice 5" are significantly easier to select, since there can be a wide margin of error in quickly tracing a path that exits the menu by crossing through the corresponding segments. It is a simple matter to design the various menu selection sub-regions so that the relative lengths of the associated perimeter boundary segments approximate, within a reasonable tolerance, the relative expected probabilities of the various menu selections.
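  • A short Python sketch of sizing boundary segments in proportion to expected selection frequencies, as suggested above; the weights in the example are made-up numbers (Choices 1 and 5 assumed twice as likely as the others):

```python
# Sketch: allocate perimeter spans whose lengths track a priori probabilities.
def allocate_segments(perimeter_length, weights):
    """Divide the perimeter into one span per choice, with span lengths
    proportional to the assumed a priori selection weights."""
    total = float(sum(weights.values()))
    spans, position = {}, 0.0
    for choice, weight in weights.items():
        length = perimeter_length * weight / total
        spans[choice] = (round(position, 2), round(position + length, 2))
        position += length
    return spans

print(allocate_segments(100.0, {"Choice 1": 2, "Choice 2": 1, "Choice 3": 1,
                                "Choice 4": 1, "Choice 5": 2}))
```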
  • Another significant advantage of the present invention is that a contact action that begins within the region enclosed by the outermost perimeter
  • FIGURE 3B shows an example of a possible menu arrangement with the same set of choices as that in FIGURE 3A.
  • FIGURE 3B shows a menu arrangement for a set of menu choices wherein "Choice 2" (322) has a much higher a priori probability of being selected than in FIGURE 3A, and both "Choice 3" and “Choice 4" have much lower a priori probabilities of being selected than in FIGURE 3A.
  • the entire right side 332 of the menu is associated with "Choice 2", while both "Choice 3" and “Choice 4" must be selected by contacting the screen directly within the associated menu selection sub-regions 323 and 324 respectively, then breaking the contact without exiting from the desired menu selection sub-region (i.e. using traditional menu selection "tapping").
  • FIGURE 5 shows a menu 500 with eight selection sub-regions 501 - 508.
  • Selection sub-regions 501 - 504 together comprise a first sub-menu region 521
  • sub-regions 505 - 508 together comprise a second sub-menu region 522.
  • the selection sub-regions 501 - 504 are associated with the sub-menu 521 perimeter boundary segments 511 - 514
  • the selection sub-regions 505 - 508 are associated with the sub-menu 522 perimeter boundary segments 515 - 518.
  • sub-menu 521 perimeter boundary segment 514 and the sub-menu 522 perimeter boundary segment 515 are in fact the same line segment (514/515), which is itself in the interior of the menu 500, rather than lying along the menu 500's outer perimeter boundary. This is not a problem, however, as the selection actions for the menu selections 504 and 505 differ in the direction in which the line segment (514/515) is crossed and the sub-menu region in which the initial contact is made.
  • a contact action that has an initial contact location in the region of the sub-menu 521 and crosses the line segment (514/515) in a downward direction is an unambiguous selection of the menu selection 504, while a contact action that has an initial contact location in the region of the sub-menu 522 and crosses the line segment (514/515) in an upward direction is an unambiguous selection of the menu selection 505.
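  • The disambiguation rule for the shared internal segment 514/515 can be summarized in a few lines of Python; the argument names and string labels are illustrative only:

```python
# Sketch: the chosen entry depends on which sub-menu was contacted first and
# on the direction in which the shared line segment 514/515 is crossed.
def shared_segment_selection(initial_sub_menu, crossing_direction):
    """initial_sub_menu: 'upper' (sub-menu 521) or 'lower' (sub-menu 522);
    crossing_direction: 'down' or 'up' across segment 514/515."""
    if initial_sub_menu == "upper" and crossing_direction == "down":
        return "selection 504"
    if initial_sub_menu == "lower" and crossing_direction == "up":
        return "selection 505"
    return None  # any other combination selects neither 504 nor 505

print(shared_segment_selection("upper", "down"))  # selection 504
print(shared_segment_selection("lower", "up"))    # selection 505
```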
  • the sub-menu containing the location of the initial contact is visually highlighted to indicate the set of menu choices (contained within that sub-menu) that may be selected by tracing a path that exits from the submenu by crossing the sub-menu's outer perimeter boundary.
  • buttons 341 - 346 are displayed with background colors of orange (341), yellow (342), green (343), blue (344), violet (345), and red (346), respectively, and the corresponding menu perimeter boundary segments are correspondingly displayed as orange (351), yellow (352), green (353), blue (354), violet (355), and red (356) segments, respectively.
  • the six menu choices 341 - 346 could be displayed with background colors of light orange, light yellow, light green, light blue, light violet, and light red, respectively (for enhanced readability of each menu item due to the lighter background coloration), and the corresponding menu perimeter boundary segments displayed as vivid orange (351), yellow (352), green (353), blue (354), violet (355), and red (356) segments, respectively.
  • a combination of the two approaches enables fewer colors to be utilized where the color and proximity of a perimeter boundary segment makes it clear which menu selection sub-region it is associated with.
  • the menu can be displayed as in the example shown in FIGURE 3C, where each menu selection sub-region appears as a more traditional rectangular shape, rather than the somewhat more complex appearing polygons as shown in FIGURE 3A.
  • shading may be used in place of color to distinguish various menu sub-regions and to associate the sub-regions with the corresponding menu perimeter boundary segments.
  • the menu structure 500 can utilize a consistent color scheme of four colors to make the required menu selection action in each case simple and intuitive.
  • menu selection sub-regions 501 and 505 can appear with a blue background, sub-regions 502 and 506 with a red background, sub-regions 503 and 507 with a yellow background, and sub-regions 504 and 508 with a green background. Then the user need only remember: blue background — tap below and stroke upward; red background — tap nearby and stroke to the right; yellow background — tap nearby and stroke to the left; and green background — tap above and stroke downward.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Document Processing Apparatus (AREA)

Abstract

The present invention provides methods and a system that enable a user of an input action recognition text input system to edit any incorrectly recognized text without having to re-locate the text insertion position to the location of the text being corrected. The described system also automatically manages the spacing between textual objects when a textual object is replaced by an object for which automatic spacing is generated differently. The described system further enables menu choices to be presented graphically in a manner that allows a choice to be selected more quickly and easily, by performing a selection gesture that requires less precision than directly contacting the menu sub-region associated with the desired choice.
EP07835973A 2006-07-03 2007-07-03 Systeme et procede d'interface utilisateur pour la modification de texte et la selection de menus Withdrawn EP2044587A4 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US80652206P 2006-07-03 2006-07-03
US11/533,714 US7382358B2 (en) 2003-01-16 2006-09-20 System and method for continuous stroke word-based text input
US91784907P 2007-05-14 2007-05-14
PCT/US2007/015403 WO2008013658A2 (fr) 2006-07-03 2007-07-03 Système et procédé d'interface utilisateur pour la modification de texte et la sélection de menus

Publications (2)

Publication Number Publication Date
EP2044587A2 true EP2044587A2 (fr) 2009-04-08
EP2044587A4 EP2044587A4 (fr) 2012-09-26

Family

ID=38981954

Family Applications (1)

Application Number Title Priority Date Filing Date
EP07835973A Withdrawn EP2044587A4 (fr) 2006-07-03 2007-07-03 Systeme et procede d'interface utilisateur pour la modification de texte et la selection de menus

Country Status (5)

Country Link
EP (1) EP2044587A4 (fr)
JP (1) JP5661279B2 (fr)
KR (1) KR101488537B1 (fr)
CN (1) CN101529494B (fr)
WO (1) WO2008013658A2 (fr)

Families Citing this family (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5438909B2 (ja) * 2008-03-14 2014-03-12 ソニーモバイルコミュニケーションズ株式会社 文字入力装置、文字入力支援方法及び文字入力支援プログラム
KR101412586B1 (ko) * 2008-07-01 2014-07-02 엘지전자 주식회사 이동단말기의 문자입력 방법
US20100235784A1 (en) * 2009-03-16 2010-09-16 Bas Ording Methods and Graphical User Interfaces for Editing on a Multifunction Device with a Touch Screen Display
KR101633332B1 (ko) 2009-09-30 2016-06-24 엘지전자 주식회사 단말기 및 그 제어 방법
JP5486977B2 (ja) * 2010-03-24 2014-05-07 株式会社日立ソリューションズ 座標入力装置及びプログラム
CN101957724A (zh) * 2010-10-05 2011-01-26 孙强国 一种拼音文字联想输入的改进方法
TWI490705B (zh) * 2010-10-07 2015-07-01 英業達股份有限公司 純文字內容的編輯操作系統及其方法
JP5609718B2 (ja) * 2011-03-10 2014-10-22 富士通株式会社 入力支援プログラム,入力支援装置および入力支援方法
EP2698692B1 (fr) 2011-04-09 2019-10-30 Shanghai Chule (Cootek) Information Technology Co., Ltd. Système et procédé permettant de mettre en oeuvre l'entrée d'un texte par glissement sur la base d'un clavier programmable à l'écran sur un équipement électronique
KR20130034747A (ko) * 2011-09-29 2013-04-08 삼성전자주식회사 휴대 단말기의 사용자 인터페이스 제공 방법 및 장치
US8667414B2 (en) 2012-03-23 2014-03-04 Google Inc. Gestural input at a virtual keyboard
US8782549B2 (en) 2012-10-05 2014-07-15 Google Inc. Incremental feature-based gesture-keyboard decoding
US9021380B2 (en) 2012-10-05 2015-04-28 Google Inc. Incremental multi-touch gesture recognition
US8850350B2 (en) 2012-10-16 2014-09-30 Google Inc. Partial gesture text entry
US8701032B1 (en) 2012-10-16 2014-04-15 Google Inc. Incremental multi-word recognition
US8843845B2 (en) 2012-10-16 2014-09-23 Google Inc. Multi-gesture text input prediction
US8819574B2 (en) 2012-10-22 2014-08-26 Google Inc. Space prediction for text input
US8806384B2 (en) 2012-11-02 2014-08-12 Google Inc. Keyboard gestures for character string replacement
CN103838458B (zh) * 2012-11-26 2017-05-10 北京三星通信技术研究有限公司 移动终端及其输入法的控制方法
US8832589B2 (en) 2013-01-15 2014-09-09 Google Inc. Touch keyboard using language and spatial models
US8887103B1 (en) 2013-04-22 2014-11-11 Google Inc. Dynamically-positioned character string suggestions for gesture typing
US9081500B2 (en) 2013-05-03 2015-07-14 Google Inc. Alternative hypothesis error correction for gesture typing
CN103399793B (zh) * 2013-07-30 2017-08-08 珠海金山办公软件有限公司 一种自动切换同类内容的方法及系统
CN103533448B (zh) * 2013-10-31 2017-12-08 乐视致新电子科技(天津)有限公司 智能电视的光标控制方法和光标控制装置
WO2015109507A1 (fr) * 2014-01-24 2015-07-30 华为终端有限公司 Procédé et dispositif électronique de saisie d'un caractère
CN107506115A (zh) * 2016-06-14 2017-12-22 阿里巴巴集团控股有限公司 一种菜单的显示处理方法、装置及系统
CN108664201B (zh) 2017-03-29 2021-12-28 北京搜狗科技发展有限公司 一种文本编辑方法、装置及电子设备
CN107203505A (zh) * 2017-05-26 2017-09-26 北京小米移动软件有限公司 文本信息编辑方法及装置
CN108984239B (zh) * 2018-05-29 2021-07-20 北京五八信息技术有限公司 选择控件的处理方法、装置、设备和存储介质
CN110197136B (zh) * 2019-05-13 2021-01-12 华中科技大学 一种基于动作边界概率的级联动作候选框生成方法与系统
US11379113B2 (en) 2019-06-01 2022-07-05 Apple Inc. Techniques for selecting text

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1998033111A1 (fr) * 1997-01-24 1998-07-30 Tegic Communications, Inc. Systeme permettant d'eliminer les ambiguites des claviers reduits
US6275612B1 (en) * 1997-06-09 2001-08-14 International Business Machines Corporation Character data input apparatus and method thereof
WO2005064587A2 (fr) * 2003-12-22 2005-07-14 America Online, Inc. Systeme de clavier virtuel avec correction automatique
US20060071915A1 (en) * 2004-10-05 2006-04-06 Rehm Peter H Portable computer and method for taking notes with sketches and typed text

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5378736A (en) 1976-12-23 1978-07-12 Toshiba Corp Return mechanism to final input position
JPS5840584A (ja) 1981-09-02 1983-03-09 株式会社東芝 文字表示装置
JPH0754512B2 (ja) 1986-12-10 1995-06-07 キヤノン株式会社 文書処理装置
US5574482A (en) * 1994-05-17 1996-11-12 Niemeier; Charles J. Method for data input on a touch-sensitive screen
JPH09293328A (ja) 1996-04-25 1997-11-11 Olympus Optical Co Ltd 音声再生装置
JPH11102361A (ja) * 1997-09-29 1999-04-13 Nec Ic Microcomput Syst Ltd 文字入力修正方法及び前記方法手順を記録した記録媒体
JP3082746B2 (ja) * 1998-05-11 2000-08-28 日本電気株式会社 音声認識システム
JP2001060192A (ja) * 1999-08-20 2001-03-06 Nippon Hoso Kyokai <Nhk> 文字データ修正装置および記憶媒体
US7098896B2 (en) 2003-01-16 2006-08-29 Forword Input Inc. System and method for continuous stroke word-based text input
JP4260777B2 (ja) * 2004-07-22 2009-04-30 パナソニック株式会社 半導体装置及びその製造方法
JP2006031725A (ja) 2005-08-10 2006-02-02 Microsoft Corp 文字処理装置

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1998033111A1 (fr) * 1997-01-24 1998-07-30 Tegic Communications, Inc. Systeme permettant d'eliminer les ambiguites des claviers reduits
US6275612B1 (en) * 1997-06-09 2001-08-14 International Business Machines Corporation Character data input apparatus and method thereof
WO2005064587A2 (fr) * 2003-12-22 2005-07-14 America Online, Inc. Systeme de clavier virtuel avec correction automatique
US20060071915A1 (en) * 2004-10-05 2006-04-06 Rehm Peter H Portable computer and method for taking notes with sketches and typed text

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MICROSOFT CORPORATION: "Word 2003, version 11", SOFTWARE, [Disk] 17 November 2003 (2003-11-17), Redmond, United States *
See also references of WO2008013658A2 *

Also Published As

Publication number Publication date
WO2008013658A2 (fr) 2008-01-31
CN101529494B (zh) 2012-01-04
EP2044587A4 (fr) 2012-09-26
WO2008013658A3 (fr) 2008-11-27
KR101488537B1 (ko) 2015-02-02
JP5661279B2 (ja) 2015-01-28
KR20090035570A (ko) 2009-04-09
JP2009543209A (ja) 2009-12-03
CN101529494A (zh) 2009-09-09

Similar Documents

Publication Publication Date Title
US7542029B2 (en) System and method for a user interface for text editing and menu selection
WO2008013658A2 (fr) Système et procédé d'interface utilisateur pour la modification de texte et la sélection de menus
JP7153810B2 (ja) 電子デバイス上の手書き入力
US20210406578A1 (en) Handwriting-based predictive population of partial virtual keyboards
US10275152B2 (en) Advanced methods and systems for text input error correction
KR102413461B1 (ko) 제스쳐들에 의한 노트 필기를 위한 장치 및 방법
JP5468665B2 (ja) 多言語環境を有するデバイスのための入力方法
EP3220252B1 (fr) Éditeur de document à base de gestes
US20110320978A1 (en) Method and apparatus for touchscreen gesture recognition overlay
US20020059350A1 (en) Insertion point bungee space tool
CN111488111A (zh) 虚拟计算机键盘
JP2019514097A (ja) 文字列に文字を挿入するための方法および対応するデジタルデバイス
US11112965B2 (en) Advanced methods and systems for text input error correction
JP2019514096A (ja) 文字列に文字を挿入するための方法およびシステム
CN101601050B (zh) 对字进行预览和选择的系统及方法
JP5977764B2 (ja) 拡張キーを利用した情報入力システム及び情報入力方法
US20150301739A1 (en) Method and system of data entry on a virtual interface
US20230401376A1 (en) Systems and methods for macro-mode document editing
KR20210012993A (ko) 전자장치에서 문자입력시 직전에 입력된 문자가 무엇인지 알려주기 위한 방법
KR20170056809A (ko) 워드프로세싱 방법

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC MT NL PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL BA HR MK RS

17P Request for examination filed

Effective date: 20090422

DAX Request for extension of the european patent (deleted)
A4 Supplementary search report drawn up and despatched

Effective date: 20120824

RIC1 Information provided on ipc code assigned before grant

Ipc: G06F 3/048 20060101ALI20120820BHEP

Ipc: G09G 5/00 20060101AFI20120820BHEP

17Q First examination report despatched

Effective date: 20170809

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20180220