WO2012076743A1 - An apparatus and associated methods for text entry - Google Patents

An apparatus and associated methods for text entry

Info

Publication number
WO2012076743A1
Authority
WO
WIPO (PCT)
Prior art keywords
predicted character
touch
input
character string
area
Prior art date
Application number
PCT/FI2010/051005
Other languages
French (fr)
Inventor
Ashley Colley
Original Assignee
Nokia Corporation
Priority date
Filing date
Publication date
Application filed by Nokia Corporation
Priority to PCT/FI2010/051005
Publication of WO2012076743A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04886Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233Character input methods
    • G06F3/0237Character input methods using prediction or retrieval techniques

Definitions

  • the present disclosure relates to the field of touch-sensitive displays, associated apparatus, methods and computer programs, and in particular concerns the auto-completion of text/characters in a word string.
  • Certain disclosed aspects/example embodiments relate to portable electronic devices, in particular, so-called hand-portable electronic devices which may be hand-held in use (although they may be placed in a cradle in use).
  • Such hand-portable electronic devices include so-called Personal Digital Assistants (PDAs).
  • the portable electronic devices/apparatus may provide one or more audio/text/video communication functions (e.g. tele-communication, video-communication, and/or text transmission, Short Message Service (SMS)/Multimedia Message Service (MMS)/emailing functions, interactive/non-interactive viewing functions (e.g. web-browsing, navigation, TV/program viewing functions), music recording/playing functions (e.g. MP3 or other format and/or (FM/AM) radio broadcast recording/playing), downloading/sending of data functions, image capture function (e.g. using a (e.g. in-built) digital camera), and gaming functions.
  • Multi-tap requires a user to press a key repeatedly in order to cycle through the letters/characters associated with that key.
  • disambiguation systems have been developed. Disambiguation involves the determination of words from an ambiguous key sequence by comparing all corresponding character combinations with entries stored in a predictive text dictionary or with a set of statistical rules. Once a match has been found, the device presents a number of possible character strings to the user for selection.
  • Word completion (or “autocompletion”) can be used with both alphanumeric keypads and unambiguous keyboards.
  • Word completion is a predictive text technology which predicts a word string based on one or more characters entered by the user. The prediction is made by comparing the inputted characters with entries stored in a predictive text dictionary or with a set of statistical rules. Once a match has been found, the device presents a number of possible word strings to the user for selection. In this way, the user is able to enter a complete word string by inputting only part of that word string.
  • an apparatus comprising:
  • processor and memory including computer program code, the memory and the computer program code configured to, with the processor, cause the apparatus to:
  • a memory or processor may be a reference to one or more memories/processors.
  • the term "particular area” may be taken to mean a touch-screen key/touch-sensitive region of a touch-sensitive display which has been configured for character input. In this respect, the terms “area”, “region” or “key” may be used interchangeably throughout the specification.
  • the apparatus may be configured to:
  • the predicted character strings may be positionally associated with the particular area such that they radiate outward from the position of the particular input area.
  • the further predicted character strings may be positionally associated with the position of the particular predicted character string such that they radiate outward from the position of the particular predicted character string.
  • the further predicted character strings may appear on touching the touch-sensitive display at the position of the particular predicted character string.
  • a particular predicted character string may be selectable by continuous touching of the touch-sensitive display in a line originating from the particular input area and continuing to the position of the particular predicted character string.
  • a particular further predicted character string may be selectable by continuous touching of the touch-sensitive display in a line originating from the position of the particular predicted character string and continuing to the position of the particular further predicted character string.
  • the particular predicted character string and a particular further predicted character string may be sequentially selectable by continuous touching of the touch-sensitive display in a line originating from the particular input area and continuing to the position of the particular predicted character string and then to the position of the particular further predicted character string.
  • the area of the touch-sensitive display used to select a particular predicted character string may be flush with, may be adjacent to, or may overlap with at least part of the particular input area.
  • the area of the touch-sensitive display used to select a particular further predicted character string may be flush with, may be adjacent to, or may overlap with at least part of the area of the touch-sensitive display used to select a particular predicted character string.
  • the apparatus may be configured to determine the probability of a particular predicted character string matching all or part of the associated full word string.
  • the area of the touch-sensitive display used to select the particular predicted character string may be based on this probability.
  • the area may increase as the probability increases.
  • the distance of the particular predicted character string from the particular area may be based on this probability.
  • the distance of the particular predicted character string from the particular area may decrease as the probability increases (or vice versa).
  • the probability may be based on the number of times the particular predicted character string has previously been input in combination with the particular character and the one or more previous inputted characters. The probability may be based on the commonality of use of the particular predicted character string in combination with the particular character and the one or more previous inputted characters.
  • the apparatus may be configured to determine the probability of a particular further predicted character string matching all or part of the associated full word string.
  • the area of the touch-sensitive display used to select the particular further predicted character string may be based on this probability.
  • the distance of the particular further predicted character string from the position of the particular predicted character string may be based on this probability.
  • the distance of the particular further predicted character string from the position of the particular predicted character string may decrease as the probability increases (or vice versa).
  • the probability may be based on the number of times the particular further predicted character string has previously been input in combination with the particular character, the one or more previous inputted characters and the particular predicted character string. The probability may be based on the commonality of use of the particular further predicted character string in combination with the particular character, the one or more previous inputted characters and the particular predicted character string.
  • the one or more predicted character strings may be positionally associated with the particular input area such that a particular predicted character string may be selected without interrupting physical contact with the touch-sensitive display between detection of the particular touch input and selection of the particular predicted character string.
  • the one or more further predicted character strings may be positionally associated with the position of the particular predicted character string such that a particular further predicted character string may be selected without interrupting physical contact with the touch- sensitive display between selection of the particular predicted character string and selection of the particular further predicted character string.
  • the touch-sensitive display may comprise an input region.
  • the input region may comprise the particular input area and a plurality of other input areas, each input area associated with the input of a respective character.
  • the one or more predicted character strings may be positionally associated with the particular input area such that a particular predicted character string may be selected without causing input of characters associated with the other input areas.
  • the one or more further predicted character strings may be positionally associated with the position of the particular predicted character string such that a particular further predicted character string may be selected without causing input of characters associated with the other input areas.
  • the apparatus may be configured to determine the one or more predicted character strings by comparing the particular character, in combination with the one or more previous inputted characters, with entries stored in a predictive text dictionary and/or with a set of statistical rules.
  • the apparatus may be configured to determine the one or more further predicted character strings by comparing the particular predicted character string, in combination with the particular character and the one or more previous inputted characters, with entries stored in a predictive text dictionary and/or with a set of statistical rules.
  • the apparatus may be configured to detect an interaction property associated with touching of the area of the touch-sensitive display used to select a particular predicted character string.
  • the apparatus may accept or reject input of that particular predicted character string when this interaction property exceeds a predetermined interaction value.
  • the apparatus may be configured to accept input of the particular character automatically when input of the particular predicted character string has been rejected.
  • the apparatus may be configured to detect an interaction property associated with touching of the area of the touch-sensitive display used to select a particular further predicted character string.
  • the apparatus may accept or reject input of that particular further predicted character string when this interaction property exceeds a predetermined interaction value.
  • the apparatus may be configured to accept input of the particular character and/or the particular predicted character string automatically when input of the particular further predicted character string has been rejected.
  • the apparatus may be configured to detect an interaction property of the particular touch input.
  • the apparatus may accept or reject input of the particular character when this interaction property exceeds a predetermined interaction value.
  • the interaction property may be the duration of touch.
  • the apparatus may be configured to accept or reject input when the duration of touch exceeds a predetermined touch time interaction value.
  • the interaction property may be the touch pressure.
  • the apparatus may be configured to accept or reject input when the touch pressure exceeds a predetermined touch pressure interaction value.
  • the apparatus may be configured to accept input of the particular character when physical contact with the touch-sensitive display has been detected or terminated at the particular input area.
  • the apparatus may be configured to accept input of a particular predicted character string when physical contact with the touch-sensitive display has been detected or terminated at an area of the touch-sensitive display used to select that particular predicted character string.
  • the apparatus may be configured to accept input of a particular further predicted character string when physical contact with the touch- sensitive display has been detected or terminated at an area of the touch-sensitive display used to select that particular further predicted character string.
  • the one or more predicted character strings may be provided in the form of a menu.
  • the one or more further predicted character strings may be provided in the form of a menu.
  • the menu comprising the further predicted character strings may be a sub-menu of the menu comprising the predicted character strings. Either or both menus may be horizontal linear menus, vertical linear menus, or circular menus (also known as pie menus or radial menus).
  • the one or more predicted character strings may be provided in a key-press pop-up area.
  • the one or more further predicted character strings may be provided in a key-press pop-up area.
  • One or more of the particular character, previous inputted characters, predicted character strings and further predicted character strings may comprise a letter, number, or punctuation mark.
  • the touch-sensitive display may form part of the apparatus.
  • the touch-sensitive display may comprise a touch-sensitive alphanumeric keypad, a touch-sensitive portrait qwerty keyboard, or a touch-sensitive landscape qwerty keyboard.
  • the apparatus may be a touch-sensitive display, portable telecommunications device, a module for a touch-sensitive display, or a module for a portable telecommunications device.
  • a method comprising: detecting a particular touch input at a particular input area of a touch-sensitive display, the particular input area associated with the input of a particular character to be used in the input of a full word string;
  • the method may comprise:
  • a computer program recorded on a carrier, the computer program comprising computer code configured to enable: detection of a particular touch input at a particular input area of a touch-sensitive display, the particular input area associated with the input of a particular character to be used in the input of a full word string;
  • determination of a particular character associated with the particular touch input; determination of one or more predicted character strings based on the determined particular character in combination with one or more previous inputted characters of a word string, the predicted character strings constituting a prediction of all or part of an associated full word string;
  • the computer program may comprise computer code configured to enable:
  • the apparatus may comprise a processor configured to process the code of the computer program.
  • the processor may be a microprocessor, including an Application Specific Integrated Circuit (ASIC).
  • an apparatus comprising: means for detecting a particular touch input at a particular input area of a touch-sensitive display, the particular input area associated with the input of a particular character to be used in the input of a full word string;
  • the present disclosure includes one or more corresponding aspects, embodiments or features in isolation or in various combinations whether or not specifically stated (including claimed) in that combination or in isolation. Corresponding means for performing one or more of the discussed functions are also within the present disclosure.
  • Figure 1 illustrates schematically the entry of text on an alphanumeric keypad using a multi-tap input method (prior art);
  • Figure 2 illustrates schematically the entry of text on an alphanumeric keypad using a disambiguation input method (prior art);
  • Figure 3 illustrates schematically the entry of text on an alphanumeric keypad using a disambiguation input method with an autocomplete function (prior art);
  • Figure 4 illustrates schematically the entry of text on a qwerty keyboard with an autocomplete function (prior art).
  • Figure 5 illustrates schematically the entry of text on an existing portable telecommunications device (prior art).
  • Figure 6a illustrates schematically the entry of text using an autocomplete function incorporating a circular menu structure (present disclosure).
  • Figure 6b illustrates schematically the entry of text using an autocomplete function incorporating a linear menu structure (present disclosure).
  • Figure 7a illustrates schematically the entry of text using an autocomplete function incorporating another circular menu structure (present disclosure).
  • Figure 7b illustrates schematically the entry of text using an autocomplete function incorporating another linear menu structure (present disclosure).
  • Figure 8a illustrates schematically the entry of text using an autocomplete function incorporating yet another circular menu structure (present disclosure).
  • Figure 8b illustrates schematically the entry of text using an autocomplete function incorporating yet another linear menu structure (present disclosure).
  • Figure 9 illustrates schematically a portable electronic device comprising the apparatus described herein;
  • Figure 10 illustrates schematically a method of operating the portable electronic device of Figure 9;
  • Figure 11 illustrates schematically a computer readable medium providing a computer program.
  • FIG. 1 shows a standard alphanumeric keypad 101 common to a large number of mobile phones.
  • each number key 102 also has a plurality of (three or four) letters 103 associated with it.
  • when a user wishes to enter text (e.g. when drafting a text message), he/she must press each key 102 repeatedly in order to cycle through the letters 103 associated with that key 102.
  • the number of times each key 102 needs to be pressed depends on the order of the letters 103 associated with the key 102.
  • the user has entered the full word string "Cat”.
  • To enter the letter "c” the number 2 key must be pressed three times.
  • To enter the letter "a" the number 2 key must be pressed once.
  • To enter the letter "t", the number 8 key must be pressed once. Pausing for a predetermined period of time, or pressing a different key 102, automatically chooses the current letter in the cycle.
  • the multi-tap input method is inefficient compared to an unambiguous keyboard because, on average, multiple keystrokes are required in order to enter a single character.
  • Figure 2 shows text entry on the same alphanumeric keypad 201 as Figure 1, but using disambiguation instead of multi-tap.
  • in order to enter the full word string "Cat", the user only needs to press the key 202 associated with each letter once (i.e. number keys 2, 2 and 8, sequentially).
  • the device compares all possible character combinations based on this particular key sequence against entries stored in a predictive text dictionary, and presents the results as a list 204 of selectable character strings.
  • the key sequence "228" has produced the character strings "Act", “Cat", “Bat”, “Abu” and "Cau”.
  • the device displays the first character string shown on the list (in this case "Act"), but the user can scroll through the list 204 to select the character string that he/she had intended to input.
  • the user has highlighted 205 the character string "Cat".
  • Activation of an allocated key (not shown) at this point would cause the highlighted character string to be inputted.
  • text entry on an alphanumeric keypad 201 requires more keystrokes per character (on average) than a full unambiguous keyboard.
  • the presence of a predictive text dictionary or other suitable language model is required, and the efficiency of the disambiguation approach is dependent upon the completeness of this model.
  • Some text entry systems incorporate both disambiguation and word completion to make alphanumeric keypads more competitive with full unambiguous keyboards ( Figure 3).
  • a device running this software makes predictions on a word string being input (e.g. by comparing the inputted characters with entries stored in a predictive text dictionary) once the user has inputted one or more characters of the word string.
  • the device provides the user with a list 304 of possible full word strings. Provided that the desired word string is contained within the list 304, word completion removes the need for the user to input the remaining characters of the full word string.
  • the user has started to enter the word string "Cattle", but has only entered the first three characters (by pressing the key sequence "228") before the device has provided a number of selectable character strings.
  • the list 304 contains a large number of textonyms (words produced by the same key sequence).
  • the list 304 may comprise an even greater number of textonyms than is shown here. Textonyms limit the effectiveness of predictive text systems because the user must either scroll through a large number of suggested word strings to find the one that he/she had intended to input, or must enter a greater number of characters to reduce the size of the list 304.
  • Predicted word strings are sometimes presented above the keyboard for user selection, for example in a dedicated space between the inputted characters and the keyboard.
  • the predicted word strings may be presented in-line with the inputted characters.
  • a disadvantage of presenting the predicted word strings above the keyboard is that the user must focus his/her attention to two different places if he/she is to take advantage of the predicted word strings. This is distracting for the user and reduces the speed of text input. As a result, some users prefer to type each word string out in full rather than make use of the predicted word strings, which defeats the purpose of word completion/predicted text altogether. There will now be described an apparatus and associated methods that may or may not overcome this issue.
  • Figure 5 illustrates an example of word completion.
  • a user is trying to enter the word "Caterpillar”.
  • the device compares the inputted character string with entries stored in a predictive text dictionary and finds a match with the word "Caterpillar”.
  • the device provides the predicted full word string "Caterpillar” in the form of a popup 507 adjacent to the inputted character string "Caterp”.
  • the user then has the option of accepting the predicted full word string by touching the pop-up 507 or pressing the spacebar 508, or rejecting the predicted full word string by touching a close-down box 509 (cross) in the corner of the pop-up 507.
  • One feature of the text entry system of Figure 5 is the delay in character input. Unlike some touch-screen devices, which accept a character as input when physical contact with the corresponding touch-screen key 502 has been detected, in this example the device waits until the user has terminated physical contact with the touch-sensitive display before accepting the character. However, because the character is not input/displayed at the time of contact, and because the user's finger is covering the key identifier (i.e. the letter shown on the key 502) at that moment in time, it is difficult for the user to tell if he/she has pressed the correct key. To provide visual confirmation, therefore, a key-press pop-up 511 is provided adjacent to the key 502 during physical contact.
  • One aspect of the present disclosure involves the provision of predicted word strings which are positionally associated with the position of the key 602 that was last touched.
  • the predicted word strings may be provided in the form of a menu structure 612, 613 as shown in Figures 6a and 6b.
  • the user has started to enter the full word string "Catalyst”.
  • On pressing the key 602 associated with the letter "t", however, the device has compared the inputted character string "Cat" with a predictive text dictionary and provided the user with a menu 612 of predicted character strings for selection.
  • the device has provided the full word strings "Caterpillar”, “Catalyst”, “Cathode”, “Cat”, “Catwalk”, “Catalogue”, and “Catch” in a circular menu 612 around the key 602 associated with the letter “t” ( Figure 6a), and in a linear menu 613 adjacent to the key 602 associated with the letter “t” ( Figure 6b).
  • the touch-sensitive display 610 may comprise an input region 614 for entering text, and a display region 615 for displaying the entered text.
  • the predicted character strings may be presented within the input region 614 for user selection (for example, within a keypress pop-up). Since the user will likely be looking at the keys 602 of the input region 614 when entering text, provision of the predicted character strings within this region 614 enables the user to acknowledge and use the predicted character strings without looking away from the keys 602.
  • the area 616 of the touch-sensitive display 610 used to select the predicted character strings may be flush with, adjacent to, or may overlap with at least part of the key 602 that was last pressed (e.g. the area may overlap by up to 5mm with an edge of the key 602).
  • the circular menu 612 is arranged around the last-pressed key 602, whilst in the example illustrated in Figure 6b, the linear menu 613 can be seen to be flush with the right-hand edge of the key 602.
  • a circular menu 612 has the advantage that selection depends on direction rather than distance. This can help to minimise selection errors.
  • a linear menu 613 has the advantage that it may contain any number of selectable options without limiting the size/area 616 associated with their selection.
  • Circular menus 612 are usually limited to a maximum of eight options. This helps to ensure that the direction/angle of movement required to select one option from the circular menu is substantially different to the direction/angle of movement required to select the other options. It also helps to ensure that the size/area 616 of each option is large enough to allow the user to see the options without straining their eyesight, and to allow the user to select an option without running the risk of selecting an adjacent option by mistake.
  • the order of the predicted word strings in the menu 613 may be dependent on the probability of each word string being the one that the user had intended to input. This probability may be based on the number of times each character string has previously been input by the user, and/or the commonality of use of each character string. Furthermore, the distance of each predicted character string from the last-pressed key 602 may be based on this probability. In particular, the distance of each character string from the key 602 might decrease as the probability increases. This helps to ensure that the most probable character strings are close at hand to facilitate their selection. Also, the area 616 of the touch-sensitive display 610 associated with selection of each predicted character string may also be based on this probability. In particular, the area 616 might increase as the probability increases. Again, this helps to facilitate selection of the most probable character strings.
  • activation of a key 602 and selection of a predicted character string may be effected when physical contact with the touch-sensitive display 610 has been detected at the key 602 and area 616 associated with selection of the predicted character string, respectively.
  • the predicted character strings may be positionally associated with the last-pressed key 602 such that a particular predicted character string may be selected without causing input of characters associated with other keys on the display. For example, if the user wished to select the character string "catalyst" shown in Figures 6a and 6b, he/she could simply touch the area 616 associated with the word "catalyst".
  • the list of predicted character strings may be provided to the user as soon as the key 602 (in this case the key associated with the letter "t") has been touched.
  • activation of a key 602, and selection of a predicted character string may be effected when physical contact with the touch-sensitive display 610 has been terminated at the key 602 and area 616 associated with selection of the predicted character string, respectively.
  • the predicted character strings may be positionally associated with the last-pressed key 602 such that a particular predicted character string may be selected without interrupting physical contact with the touch-sensitive display 610 between activation of the key 602 and selection of that particular character string. This feature enables a user to maintain physical contact with the display 610 until he/she has slid his/her finger across the surface of the display to the chosen character string.
  • the list of predicted character strings may be provided to the user as soon as the key 602 has been released. For example, if the user wished to select the character string "catalyst” shown in Figures 6a and 6b, he/she could simply slide his/her finger from the key 602 associated with the letter “t” to the area 616 associated with the word “catalyst” before terminating physical contact with the touch-sensitive display 610.
  • the device may be configured to detect an interaction property associated with the user touching a key 602, or area 616 associated with selection of a predicted character string. In these example embodiments, the device may be configured to accept or reject input of the corresponding character or predicted character string, respectively, when this interaction property exceeds a predetermined interaction value. For example, the device may be configured to detect the duration of touch, and accept or reject the character or character string when the duration of touch exceeds a particular touch time interaction value. Additionally or alternatively, the device may be configured to detect the touch pressure, and accept or reject the character or character string when the touch pressure exceeds a predetermined touch pressure interaction value. (A brief code sketch of this threshold behaviour is given after this list.)
  • this action might be used to reject input of the character, thereby resulting in the input of no character or character string at all.
  • Figures 7a and 7b illustrate another example embodiment of the present disclosure in which the device initially presents predicted part word strings or "word stems" 717, 718 (e.g. single letters or groups of letters) to the user, and builds full word strings up in sequential stages rather than presenting them in their entirety.
  • the presentation of full word strings can take up a large area of space on the display 710 which could be distracting to the user, and which may conceal other information displayed on-screen (such as a clock or signal/battery indicator).
  • some of the predicted full word strings may contain many of the same characters. In this scenario, there is no need to present the same word stem 717, 718 more than once.
  • the device might present unique primary word stems 717 once in a main menu, and then provide corresponding secondary word stems 718 in a sub-menu 719 on selection of one of the primary word stems 717.
  • the primary 717 and secondary 718 word stems may be referred to as "predicted character strings" and "further predicted character strings", respectively.
  • This example embodiment may be better understood with reference to the figures.
  • the user wishes to enter the full word string "Catalyst”.
  • the device discovers that the character string "Cat” matches the full word strings "caterpillar", “catalyst", “cathode”, “cat”, “catwalk”, “catalogue” and "catch”.
  • the device presents the unique primary word stems 717 “ch”, “al”, “er”, “w” and “h” in a main menu 712, 713.
  • On selection of the primary word stem 717 "al", the device then presents the corresponding secondary word stems 718 "ogue" and "yst" to the user for selection in a sub-menu 719 (a simplified code sketch of this stem grouping is given after this list).
  • the secondary word stems 718 may be positionally associated with the position of the selected primary word stem 717.
  • the area of the touch-sensitive display 710 used to select a secondary word stem 718 may be flush with, adjacent to, or may overlap with at least part of the area used to select the primary word stem 717 (e.g. the area used to select a secondary word stem 718 may overlap by up to 5mm with an edge of the area used to select the primary word stem 717).
  • the primary word stems 717 may be provided in a circular menu 712, and the corresponding secondary word stems 718 may radiate outward from the position of the selected primary word stem 717.
  • the secondary word stems 718 "ogue” and “yst” extend from the primary word stem 717 "al” as circular sectors.
  • the primary word stems 717 may be provided in a linear menu 713, and the corresponding secondary word stems 718 may extend from the selected primary word stem 717 as a linear sub-menu 719.
  • the secondary word stems 718 may appear only after selection of a primary word stem 717 in order to minimise the area of the display 710 taken up by the menu structure 712, 713, 719.
  • the primary 717 and secondary 718 word stems may be presented at the same time (not shown). As with the example embodiments shown in Figures 6a and 6b, this allows the user to see all possible example embodiments from the outset.
  • selection of a primary 717 or secondary 718 word stem may be effected in a number of ways. For example, selection may occur when the user touches the area associated with selection of a word stem 717, 718, or when the user terminates physical contact with the display 710 at the area associated with selection of a word stem 717, 718.
  • the user may select a primary word stem 717 by continuous touching of the touch-sensitive display 710 in a line originating from the last-pressed key 702 and continuing to the area associated with selection of the primary word stem 717.
  • the user may select a secondary word stem 718 by continuous touching of the touch-sensitive display 710 in a line originating from the area associated with selection of the primary word stem 717 and continuing to the area associated with selection of the secondary word stem 718.
  • the user may select a primary word stem 717 and corresponding secondary word stem 718 sequentially by continuous touching of the touch-sensitive display 710 in a line originating from the last-pressed key 702 and continuing to the area associated with selection of the primary word stem 717, and then to the area associated with selection of the secondary word stem 718.
  • input of a word stem 717, 718 may be accepted or rejected when the user remains in physical contact with the area associated with selection of the word stem 717, 718 for a predetermined period of time, or when the user applies a predetermined pressure to the area.
  • the user is under no obligation to select a word stem 717, 718 presented in the menu 712, 713, 719, and may instead choose to input only the character corresponding to the last-pressed key 702 if none of the suggested word stems 717, 718 appear to the user to be suitable.
  • Input of the character corresponding to the last-pressed key 702 may occur automatically on rejection of a word stem 717, 718.
  • the user may select a primary word stem 717 without necessarily having to select a corresponding secondary word stem 718. In this case, rejection of a secondary word stem 718 may result in the corresponding primary word stem 717 being inputted automatically.
  • Whilst Figures 7a and 7b show only two levels of word stems, e.g. primary 717 and secondary 718 word stems, there could be multiple levels. For example, if the list of predicted full word strings contained longer words than those considered above, there could conceivably be three or four different levels, each containing a plurality of unique word stems 717, 718.
  • Figures 8a and 8b show another variation of the present disclosure.
  • the sub-menu 819 does not just display the remaining characters (secondary word stems) of the full word strings, but instead displays the complete word strings that will be input if the user selects them. For example, in the examples shown, the user wishes to enter the full word string "Catalyst".
  • the menu structure may take the form of a circular menu structure 812 ( Figure 8a) or a linear menu structure 813 ( Figure 8b).
  • An advantage of these example embodiments with respect to those shown in Figures 7a and 7b may be that there is no need for the user to mentally construct the full word string that he/she is trying to input. Furthermore, because the primary word stems 817 are presented in the first level, there is no need to present every full word string at the same time (compare with the example embodiments shown in Figures 6a and 6b). Instead, only the full word strings corresponding to the primary word stem 817 are presented at the same time. This helps to limit the area of the display 810 taken up by the menu structure 812, 813, 819.
  • the full word string may be presented as a selectable option in the menu 812 of primary word stems 817.
  • the full word string "Cater" may have been provided in addition to the primary word stem 817 "er" if the word "Cater" had appeared in the predictive text dictionary. This also applies to any other levels in the hierarchy.
  • the character associated with the original key-press, the predicted word strings, the primary word stems, the secondary word stems, and/or the full word strings may comprise one or more of a letter, number, or punctuation mark.
  • the letters may be letters from the Roman, Greek, Arabic and/or Cyrillic alphabets.
  • whilst the predicted word strings, primary word stems and secondary word stems in the described examples have been part word strings or full word strings, they could also comprise multiple words. This feature may be used for phrase or sentence completion, rather than just word completion. To achieve this, the device may compare one or more inputted words with phrases or sentences stored in the predictive text dictionary.
  • In Figure 9 there is illustrated a device/apparatus 928 comprising a processor 929, a touch-sensitive display 930, and a storage medium 931, which may be electrically connected to one another by a data bus 932.
  • the device 928 may be a portable electronic device such as a portable telecommunications device.
  • the processor 929 is configured for general operation of the device 928 by providing signalling to, and receiving signalling from, the other device components to manage their operation.
  • the processor 929 is configured to detect a touch input from the touch-sensitive display 930; determine the character associated with the touch input; compare the input word string with entries stored in a predictive text dictionary; and provide one or more predicted character strings for user selection.
  • the processor 929 may also be configured to detect the position of touch, the duration of touch, and/or the touch pressure; and enable the selection or input of characters based on this position, duration or pressure.
  • the processor 929 may be configured to provide the predicted character strings as primary and secondary word stems in a step-wise selectable manner.
  • the touch-sensitive display 930 comprises an input region and a display region (not shown).
  • the input region comprises a plurality of touch-screen keys for the input of respective characters, and is configured to display the predicted character strings in such a way that they are positionally associated with the key that has triggered their generation (i.e. the last-pressed key).
  • the touch-screen keys may be arranged to form a 12-key alphanumeric keypad, a portrait "qwerty" keyboard, or a landscape "qwerty” keyboard.
  • the touch-screen keys may be configured to allow input of numbers, punctuation marks, and/or letters of the Roman, Greek, Arabic and/or Cyrillic alphabets.
  • the touch-screen keys may be configured to allow the input of text in one or more of the following languages: English, Chinese, Japanese, Greek, Arabic, Indo-European, Oriental and Asiatic.
  • the touch-sensitive display may be configured to enable input of Chinese or Japanese characters, either directly or via transcription methods such as Pinyin and/or Bopomofo (Zhuyin Fuhao).
  • the display region is configured to display the characters input by the touch-screen keys.
  • the touch-sensitive display 930 may also be configured to display a graphical user interface to facilitate use of the device 928.
  • the touch-sensitive display 930 may comprise additional touch-screen keys for navigation of the user interface.
  • the touch-sensitive display 930 may comprise one or more of the following technologies: resistive, surface acoustic wave, capacitive, force panel, optical imaging, dispersive signal, acoustic pulse recognition, and bidirectional screen technology.
  • the touch-sensitive display 930 may be configured to detect physical contact with any part of the user's body (not just the user's fingers), and may be configured to detect physical contact with a stylus.
  • the storage medium 931 is configured to store computer code required to operate the device 928, as described with reference to Figure 11.
  • the storage medium 931 is also configured to store the predictive text dictionary.
  • the processor 929 may access the storage medium 931 to compare the inputted word string against entries stored in the predictive text dictionary to find a match, and to determine the predicted character strings for presentation to the user.
  • the storage medium 931 may also be configured to store settings for the device components.
  • the processor 929 may access the storage medium 931 to retrieve the component settings in order to manage operation of the device components.
  • the storage medium 931 may be configured to store the graphical user interface.
  • the storage medium 931 may be a temporary storage medium such as a volatile random access memory.
  • the storage medium 931 may be a permanent storage medium such as a hard disk drive, a flash memory, or a non-volatile random access memory.
  • the main steps of the method used to operate the device/apparatus 928 are illustrated schematically in Figure 10.
  • Figure 11 illustrates schematically a computer/processor readable medium 1133 providing a computer program according to one example embodiment.
  • the computer/processor readable medium 1133 is a disc such as a digital versatile disc (DVD) or a compact disc (CD).
  • the computer/processor readable medium 1133 may be any medium that has been programmed in such a way as to carry out an inventive function.
  • the computer/processor readable medium 1133 may be a removable memory device such as a memory stick or memory card (SD, mini SD or micro SD).
  • the computer program may comprise computer code configured to enable: detection of a particular touch input at a particular input area of a touch-sensitive display, the particular input area associated with the input of a particular character to be used in the input of a full word string; determination of a particular character associated with the particular touch input; determination of one or more predicted character strings based on the determined particular character in combination with one or more previous inputted characters of a word string, the predicted character strings constituting a prediction of all or part of an associated full word string; and provision of the one or more predicted character strings for user selection such that the one or more predicted character strings are positionally associated with the particular input area.
  • the computer program may also comprise computer code configured to enable: determination of one or more further predicted character strings based on a particular predicted character string; and provision of the one or more further predicted character strings for user selection such that the one or more further predicted character strings are positionally associated with the position of the particular predicted character string.
  • feature number 1 can also correspond to numbers 101, 201, 301 etc. These numbered features may appear in the figures but may not have been directly referred to within the description of these particular example embodiments. These have still been provided in the figures to aid understanding of the further example embodiments, particularly in relation to the features of similar earlier described example embodiments.
  • any mentioned apparatus/device and/or other features of particular mentioned apparatus/device may be provided by apparatus arranged such that they become configured to carry out the desired operations only when enabled, e.g. switched on, or the like. In such cases, they may not necessarily have the appropriate software loaded into the active memory in the non-enabled (e.g. switched off) state and only load the appropriate software in the enabled (e.g. switched on) state.
  • the apparatus may comprise hardware circuitry and/or firmware.
  • the apparatus may comprise software loaded onto memory.
  • Such software/computer programs may be recorded on the same memory/processor/functional units and/or on one or more memories/processors/functional units.
  • a particular mentioned apparatus/device may be preprogrammed with the appropriate software to carry out desired operations, and wherein the appropriate software can be enabled for use by a user downloading a "key", for example, to unlock/enable the software and its associated functionality.
  • Advantages associated with such example embodiments can include a reduced requirement to download data when further functionality is required for a device, and this can be useful in examples where a device is perceived to have sufficient capacity to store such pre-programmed software for functionality that may not be enabled by a user.
  • any mentioned apparatus/circuitry/elements/processor may have other functions in addition to the mentioned functions, and that these functions may be performed by the same apparatus/circuitry/elements/processor.
  • One or more disclosed aspects may encompass the electronic distribution of associated computer programs and computer programs (which may be source/transport encoded) recorded on an appropriate carrier (e.g. memory, signal).
  • any "computer” described herein can comprise a collection of one or more individual processors/processing elements that may or may not be located on the same circuit board, or the same region/position of a circuit board or even the same device. In some example embodiments one or more of any mentioned processors may be distributed over a plurality of devices. The same or different processor/processing elements may perform one or more functions described herein.
  • the term "signalling" may refer to one or more signals transmitted as a series of transmitted and/or received signals. The series of signals may comprise one, two, three, four or even more individual signal components or distinct signals to make up said signalling. Some or all of these individual signals may be transmitted/received simultaneously, in sequence, and/or such that they temporally overlap one another.
  • processors and memory may comprise a computer processor, Application Specific Integrated Circuit (ASIC), field-programmable gate array (FPGA), and/or other hardware components that have been programmed in such a way as to carry out the inventive function.
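The staged primary/secondary word-stem menus described in the list above (for example "cat" followed by the primary stem "al" and then the secondary stems "ogue" or "yst") can be approximated by grouping the matching full word strings on their next few characters. The following is a minimal Python sketch of that grouping, not the disclosed implementation: it assumes fixed two-character primary stems for simplicity (the stems in the disclosure vary in length), and the word list is a toy placeholder.

```python
# Minimal sketch: derive primary word stems and their secondary stems from
# the full word strings matching the characters typed so far ("cat" here).
# Fixed-length primary stems are a simplification; the disclosure's stems
# vary in length. The word list is illustrative only.
from collections import defaultdict

def stems(prefix: str, full_words: list[str], stem_len: int = 2) -> dict[str, list[str]]:
    """Group matching words by their next `stem_len` characters (primary stems);
    the remainders are the secondary stems offered once a primary stem is chosen."""
    groups: dict[str, list[str]] = defaultdict(list)
    for word in full_words:
        if word.startswith(prefix) and len(word) > len(prefix):
            rest = word[len(prefix):]
            groups[rest[:stem_len]].append(rest[stem_len:])
    return dict(groups)

words = ["caterpillar", "catalyst", "cathode", "cat", "catwalk", "catalogue", "catch"]
print(stems("cat", words))
# {'er': ['pillar'], 'al': ['yst', 'ogue'], 'ho': ['de'], 'wa': ['lk'], 'ch': ['']}
```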
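Similarly, the accept/reject behaviour driven by an interaction property (touch duration or pressure), referred to above, can be sketched as a simple threshold test, with rejection falling back to input of the plain character of the pressed key. The threshold value and the accept/reject policy below are illustrative assumptions rather than the disclosed behaviour.

```python
# Minimal sketch: accept or reject a highlighted prediction from one
# interaction property of the touch (duration here; pressure would work the
# same way). Threshold and policy are assumptions, not the disclosure's.

def resolve_touch(duration_s: float, long_press_threshold_s: float = 0.8) -> str:
    """Treat a touch held beyond the threshold as rejecting the prediction,
    and a shorter touch as accepting it."""
    return "reject" if duration_s >= long_press_threshold_s else "accept"

def commit(prediction: str, fallback_char: str, duration_s: float) -> str:
    """Return the text actually entered: the prediction if accepted, otherwise
    the single character of the pressed key (automatic fallback on rejection)."""
    return prediction if resolve_touch(duration_s) == "accept" else fallback_char

print(commit("catalyst", "t", 0.2))   # catalyst (short touch accepts the prediction)
print(commit("catalyst", "t", 1.1))   # t        (long touch rejects, inputs the character)
```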

Abstract

An apparatus comprising: a processor and memory including computer program code, the memory and the computer program code configured to, with the processor, cause the apparatus to: detect a particular touch input at a particular input area of a touch-sensitive display, the particular input area associated with the input of a particular character to be used in the input of a full word string; determine a particular character associated with the particular touch input; determine one or more predicted character strings based on the determined particular character in combination with one or more previous inputted characters of a word string, the predicted character strings constituting a prediction of all or part of an associated full word string; and provide the one or more predicted character strings for user selection such that the one or more predicted character strings are positionally associated with the particular input area.

Description

AN APPARATUS AND ASSOCIATED METHODS FOR TEXT ENTRY
Technical Field
The present disclosure relates to the field of touch-sensitive displays, associated apparatus, methods and computer programs, and in particular concerns the auto-completion of text/characters in a word string. Certain disclosed aspects/example embodiments relate to portable electronic devices, in particular, so-called hand-portable electronic devices which may be hand-held in use (although they may be placed in a cradle in use). Such hand-portable electronic devices include so-called Personal Digital Assistants (PDAs).
The portable electronic devices/apparatus according to one or more disclosed aspects/example embodiments may provide one or more audio/text/video communication functions (e.g. tele-communication, video-communication, and/or text transmission, Short Message Service (SMS)/Multimedia Message Service (MMS)/emailing functions, interactive/non-interactive viewing functions (e.g. web-browsing, navigation, TV/program viewing functions), music recording/playing functions (e.g. MP3 or other format and/or (FM/AM) radio broadcast recording/playing), downloading/sending of data functions, image capture function (e.g. using a (e.g. in-built) digital camera), and gaming functions.
Background
There are currently a number of different methods available for inputting text on a portable electronic device. The methods available for use on a particular device often depend on the user interface. Some devices incorporate a 12-key alphanumeric keypad in which several characters are associated with each key, whilst others incorporate a full unambiguous (e.g. qwerty) keyboard in which only one character is associated with each key.
One method for inputting text on a 12-key alphanumeric keypad is the multi-tap system. Multi-tap requires a user to press a key repeatedly in order to cycle through the letters/characters associated with that key. In another approach, disambiguation systems have been developed. Disambiguation involves the determination of words from an ambiguous key sequence by comparing all corresponding character combinations with entries stored in a predictive text dictionary or with a set of statistical rules. Once a match has been found, the device presents a number of possible character strings to the user for selection.
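As a rough illustration of the disambiguation step just described, the short Python sketch below maps an ambiguous digit sequence from a 12-key keypad onto every dictionary word it could spell. The keypad layout is the standard assignment of letters to digits; the dictionary is a toy placeholder rather than a real predictive text dictionary.

```python
# Minimal sketch of keypad disambiguation: find every dictionary word whose
# key sequence matches the ambiguous digits typed so far. The word list is
# illustrative; ranking/selection is left to the surrounding text entry UI.

KEYPAD = {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
}
# Invert the layout so each letter maps back to its digit key.
LETTER_TO_KEY = {letter: key for key, letters in KEYPAD.items() for letter in letters}

def key_sequence(word: str) -> str:
    """Return the digit sequence that would produce this word."""
    return "".join(LETTER_TO_KEY[c] for c in word.lower())

def disambiguate(digits: str, dictionary: list[str]) -> list[str]:
    """Return the dictionary words whose key sequence matches the input digits."""
    return [w for w in dictionary if key_sequence(w) == digits]

print(disambiguate("228", ["act", "cat", "bat", "abu", "cau"]))
# ['act', 'cat', 'bat', 'abu', 'cau']  -- the textonyms of the key sequence 2-2-8
```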
Word completion (or "autocompletion") can be used with both alphanumeric keypads and unambiguous keyboards. Word completion is a predictive text technology which predicts a word string based on one or more characters entered by the user. The prediction is made by comparing the inputted characters with entries stored in a predictive text dictionary or with a set of statistical rules. Once a match has been found, the device presents a number of possible word strings to the user for selection. In this way, the user is able to enter a complete word string by inputting only part of that word string.
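The word-completion step can be sketched in the same spirit: compare the characters entered so far against dictionary entries and offer those that begin with that prefix, most likely first. The frequency counts used for ordering below are invented for illustration and stand in for whatever language model or usage statistics a real system would use.

```python
# Minimal sketch of word completion: return dictionary entries that start
# with the typed prefix, ordered by an illustrative frequency count.

DICTIONARY = {
    "cat": 900, "catch": 400, "caterpillar": 120, "catalyst": 80,
    "catalogue": 60, "cathode": 40, "catwalk": 30,
}

def complete(prefix: str, dictionary: dict[str, int], limit: int = 7) -> list[str]:
    """Return up to `limit` predicted full word strings for the prefix."""
    matches = [w for w in dictionary if w.startswith(prefix.lower())]
    matches.sort(key=lambda w: dictionary[w], reverse=True)
    return matches[:limit]

print(complete("cat", DICTIONARY))
# ['cat', 'catch', 'caterpillar', 'catalyst', 'catalogue', 'cathode', 'catwalk']
```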
The listing or discussion of a prior-published document or any background in this specification should not necessarily be taken as an acknowledgement that the document or background is part of the state of the art or is common general knowledge. One or more aspects/embodiments of the present disclosure may or may not address one or more of the background issues.
Summary
According to an example embodiment, there is provided an apparatus comprising:
a processor and memory including computer program code, the memory and the computer program code configured to, with the processor, cause the apparatus to:
detect a particular touch input at a particular input area of a touch-sensitive display, the particular input area associated with the input of a particular character to be used in the input of a full word string;
determine a particular character associated with the particular touch input;
determine one or more predicted character strings based on the determined particular character in combination with one or more previous inputted characters of a word string, the predicted character strings constituting a prediction of all or part of an associated full word string; and
provide the one or more predicted character strings for user selection such that the one or more predicted character strings are positionally associated with the particular input area.
Reference to a memory or processor may be a reference to one or more memories/processors. The term "particular area" may be taken to mean a touch-screen key/touch-sensitive region of a touch-sensitive display which has been configured for character input. In this respect, the terms "area", "region" or "key" may be used interchangeably throughout the specification.
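To make the sequence of operations above concrete, the following Python sketch strings the steps together: a touch is detected on a key, the associated character extends the current prefix, predicted character strings are computed for that prefix, and each prediction is positioned relative to the pressed key (here on a circle around it, one of the menu shapes discussed later). All names (Key, Prediction, layout_around_key, on_touch) are hypothetical and are not taken from the disclosure.

```python
# Illustrative sketch of the claimed flow: detect a touch on an input area,
# determine the character, determine predicted character strings for the new
# prefix, and provide them positionally associated with the pressed key.
import math
from dataclasses import dataclass

@dataclass
class Key:
    char: str
    x: float   # centre of the key on the display
    y: float

@dataclass
class Prediction:
    text: str
    x: float   # where the selectable option is drawn
    y: float

def predict(prefix: str, dictionary: list[str]) -> list[str]:
    return [w for w in dictionary if w.startswith(prefix)]

def layout_around_key(key: Key, options: list[str], radius: float = 60.0) -> list[Prediction]:
    """Place the options on a circle around the key (a simple circular menu)."""
    n = max(len(options), 1)
    return [
        Prediction(text,
                   key.x + radius * math.cos(2 * math.pi * i / n),
                   key.y + radius * math.sin(2 * math.pi * i / n))
        for i, text in enumerate(options)
    ]

def on_touch(key: Key, typed_so_far: str, dictionary: list[str]) -> list[Prediction]:
    prefix = typed_so_far + key.char           # determine the particular character
    options = predict(prefix, dictionary)      # determine the predicted character strings
    return layout_around_key(key, options)     # positionally associate them with the key

menu = on_touch(Key("t", 120, 300), "ca", ["cat", "catalyst", "cathode", "catch"])
print([(p.text, round(p.x), round(p.y)) for p in menu])
# e.g. [('cat', 180, 300), ('catalyst', 120, 360), ('cathode', 60, 300), ('catch', 120, 240)]
```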
The apparatus may be configured to:
determine one or more further predicted character strings based on a particular predicted character string; and
provide the one or more further predicted character strings for user selection such that the one or more further predicted character strings are positionally associated with the position of the particular predicted character string.
The predicted character strings may be positionally associated with the particular area such that they radiate outward from the position of the particular input area. The further predicted character strings may be positionally associated with the position of the particular predicted character string such that they radiate outward from the position of the particular predicted character string. The further predicted character strings may appear on touching the touch-sensitive display at the position of the particular predicted character string.
A particular predicted character string may be selectable by continuous touching of the touch-sensitive display in a line originating from the particular input area and continuing to the position of the particular predicted character string. A particular further predicted character string may be selectable by continuous touching of the touch-sensitive display in a line originating from the position of the particular predicted character string and continuing to the position of the particular further predicted character string. The particular predicted character string and a particular further predicted character string may be sequentially selectable by continuous touching of the touch-sensitive display in a line originating from the particular input area and continuing to the position of the particular predicted character string and then to the position of the particular further predicted character string.
The area of the touch-sensitive display used to select a particular predicted character string may be flush with, may be adjacent to, or may overlap with at least part of the particular input area. The area of the touch-sensitive display used to select a particular further predicted character string may be flush with, may be adjacent to, or may overlap with at least part of the area of the touch-sensitive display used to select a particular predicted character string. The apparatus may be configured to determine the probability of a particular predicted character string matching all or part of the associated full word string. The area of the touch-sensitive display used to select the particular predicted character string may be based on this probability. The area may increase as the probability increases. The distance of the particular predicted character string from the particular area may be based on this probability. The distance of the particular predicted character string from the particular area may decrease as the probability increases (or vice versa).
The probability may be based on the number of times the particular predicted character string has previously been input in combination with the particular character and the one or more previous inputted characters. The probability may be based on the commonality of use of the particular predicted character string in combination with the particular character and the one or more previous inputted characters.
The apparatus may be configured to determine the probability of a particular further predicted character string matching all or part of the associated full word string. The area of the touch-sensitive display used to select the particular further predicted character string may be based on this probability. The distance of the particular further predicted character string from the position of the particular predicted character string may be based on this probability. The distance of the particular further predicted character string from the position of the particular predicted character string may decrease as the probability increases (or vice versa).
The probability may be based on the number of times the particular further predicted character string has previously been input in combination with the particular character, the one or more previous inputted characters and the particular predicted character string. The probability may be based on the commonality of use of the particular further predicted character string in combination with the particular character, the one or more previous inputted characters and the particular predicted character string.
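By way of example only, the probability-based sizing and placement described above might be sketched as follows (Python; the scaling constants are arbitrary assumptions and not values prescribed by this disclosure): a more probable predicted character string is given a larger selection area and is placed closer to the particular input area.

```python
# Illustrative sketch: more probable predictions get a larger selection area
# and are placed closer to the particular input area. Constants are arbitrary.
def selection_geometry(probability,
                       min_area_mm2=36.0, max_area_mm2=144.0,
                       min_distance_mm=4.0, max_distance_mm=20.0):
    """Map a probability in [0, 1] to (selection area, distance from the input area)."""
    p = max(0.0, min(1.0, probability))
    area = min_area_mm2 + p * (max_area_mm2 - min_area_mm2)               # grows with probability
    distance = max_distance_mm - p * (max_distance_mm - min_distance_mm)  # shrinks with probability
    return area, distance

# A highly probable prediction is large and close at hand; an unlikely one is
# smaller and further away.
print(selection_geometry(0.9))  # approximately (133.2 mm^2, 5.6 mm)
print(selection_geometry(0.1))  # approximately (46.8 mm^2, 18.4 mm)
```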
The one or more predicted character strings may be positionally associated with the particular input area such that a particular predicted character string may be selected without interrupting physical contact with the touch-sensitive display between detection of the particular touch input and selection of the particular predicted character string. The one or more further predicted character strings may be positionally associated with the position of the particular predicted character string such that a particular further predicted character string may be selected without interrupting physical contact with the touch- sensitive display between selection of the particular predicted character string and selection of the particular further predicted character string.
The touch-sensitive display may comprise an input region. The input region may comprise the particular input area and a plurality of other input areas, each input area associated with the input of a respective character. The one or more predicted character strings may be positionally associated with the particular input area such that a particular predicted character string may be selected without causing input of characters associated with the other input areas. The one or more further predicted character strings may be positionally associated with the position of the particular predicted character string such that a particular further predicted character string may be selected without causing input of characters associated with the other input areas.
The apparatus may be configured to determine the one or more predicted character strings by comparing the particular character, in combination with the one or more previous inputted characters, with entries stored in a predictive text dictionary and/or with a set of statistical rules. The apparatus may be configured to determine the one or more further predicted character strings by comparing the particular predicted character string, in combination with the particular character and the one or more previous inputted characters, with entries stored in a predictive text dictionary and/or with a set of statistical rules.
The apparatus may be configured to detect an interaction property associated with touching of the area of the touch-sensitive display used to select a particular predicted character string. The apparatus may accept or reject input of that particular predicted character string when this interaction property exceeds a predetermined interaction value. The apparatus may be configured to accept input of the particular character automatically when input of the particular predicted character string has been rejected. The apparatus may be configured to detect an interaction property associated with touching of the area of the touch-sensitive display used to select a particular further predicted character string. The apparatus may accept or reject input of that particular further predicted character string when this interaction property exceeds a predetermined interaction value. The apparatus may be configured to accept input of the particular character and/or the particular predicted character string automatically when input of the particular further predicted character string has been rejected.
The apparatus may be configured to detect an interaction property of the particular touch input. The apparatus may accept or reject input of the particular character when this interaction property exceeds a predetermined interaction value. The interaction property may be the duration of touch. The apparatus may be configured to accept or reject input when the duration of touch exceeds a predetermined touch time interaction value. The interaction property may be the touch pressure. The apparatus may be configured to accept or reject input when the touch pressure exceeds a predetermined touch pressure interaction value.
The apparatus may be configured to accept input of the particular character when physical contact with the touch-sensitive display has been detected or terminated at the particular input area. The apparatus may be configured to accept input of a particular predicted character string when physical contact with the touch-sensitive display has been detected or terminated at an area of the touch-sensitive display used to select that particular predicted character string. The apparatus may be configured to accept input of a particular further predicted character string when physical contact with the touch- sensitive display has been detected or terminated at an area of the touch-sensitive display used to select that particular further predicted character string.
The one or more predicted character strings may be provided in the form of a menu. The one or more further predicted character strings may be provided in the form of a menu. The menu comprising the further predicted character strings may be a sub-menu of the menu comprising the predicted character strings. Either or both menus may be horizontal linear menus, vertical linear menus, or circular menus (also known as pie menus or radial menus). The one or more predicted character strings may be provided in a key-press pop-up area. The one or more further predicted character strings may be provided in a key-press pop-up area. One or more of the particular character, previous inputted characters, predicted character strings and further predicted character strings may comprise a letter, number, or punctuation mark. The touch-sensitive display may form part of the apparatus. The touch-sensitive display may comprise a touch-sensitive alphanumeric keypad, a touch-sensitive portrait qwerty keyboard, or a touch-sensitive landscape qwerty keyboard. The apparatus may be a touch-sensitive display, a portable telecommunications device, a module for a touch-sensitive display, or a module for a portable telecommunications device.
According to another example embodiment, there is provided a method comprising: detecting a particular touch input at a particular input area of a touch-sensitive display, the particular input area associated with the input of a particular character to be used in the input of a full word string;
determining a particular character associated with the particular touch input;
determining one or more predicted character strings based on the determined particular character in combination with one or more previous inputted characters of a word string, the predicted character strings constituting a prediction of all or part of an associated full word string; and
providing the one or more predicted character strings for user selection such that the one or more predicted character strings are positionally associated with the particular input area.
The method may comprise:
determining one or more further predicted character strings based on a particular predicted character string; and
providing the one or more further predicted character strings for user selection such that the one or more further predicted character strings are positionally associated with the position of the particular predicted character string.
The steps of any method described herein do not necessarily have to be performed in the exact order disclosed, unless explicitly stated or understood by the skilled person.
According to another example embodiment, there is provided a computer program, recorded on a carrier, the computer program comprising computer code configured to enable: detection of a particular touch input at a particular input area of a touch-sensitive display, the particular input area associated with the input of a particular character to be used in the input of a full word string;
determination of a particular character associated with the particular touch input; determination of one or more predicted character strings based on the determined particular character in combination with one or more previous inputted characters of a word string, the predicted character strings constituting a prediction of all or part of an associated full word string; and
provision of the one or more predicted character strings for user selection such that the one or more predicted character strings are positionally associated with the particular input area.
The computer program may comprise computer code configured to enable:
determination of one or more further predicted character strings based on a particular predicted character string; and
provision of the one or more further predicted character strings for user selection such that the one or more further predicted character strings are positionally associated with the position of the particular predicted character string. The apparatus may comprise a processor configured to process the code of the computer program. The processor may be a microprocessor, including an Application Specific Integrated Circuit (ASIC).
According to another example embodiment, there is provided an apparatus comprising: means for detecting a particular touch input at a particular input area of a touch-sensitive display, the particular input area associated with the input of a particular character to be used in the input of a full word string;
means for determining a particular character associated with the particular touch input;
means for determining one or more predicted character strings based on the determined particular character in combination with one or more previous inputted characters of a word string, the predicted character strings constituting a prediction of all or part of an associated full word string; and
means for providing the one or more predicted character strings for user selection such that the one or more predicted character strings are positionally associated with the particular input area.

The present disclosure includes one or more corresponding aspects, embodiments or features in isolation or in various combinations whether or not specifically stated (including claimed) in that combination or in isolation. Corresponding means for performing one or more of the discussed functions are also within the present disclosure.
Corresponding computer programs for implementing one or more of the methods disclosed are also within the present disclosure and encompassed by one or more of the described embodiments.
The above summary is intended to be merely exemplary and non-limiting.
Brief Description of the Figures

A description is now given, by way of example only, with reference to the accompanying drawings, in which:-
Figure 1 illustrates schematically the entry of text on an alphanumeric keypad using a multi-tap input method (prior art);
Figure 2 illustrates schematically the entry of text on an alphanumeric keypad using a disambiguation input method (prior art);
Figure 3 illustrates schematically the entry of text on an alphanumeric keypad using a disambiguation input method with an autocomplete function (prior art);
Figure 4 illustrates schematically the entry of text on a qwerty keyboard with an autocomplete function (prior art);
Figure 5 illustrates schematically the entry of text on an existing portable telecommunications device (prior art);
Figure 6a illustrates schematically the entry of text using an autocomplete function incorporating a circular menu structure (present disclosure);
Figure 6b illustrates schematically the entry of text using an autocomplete function incorporating a linear menu structure (present disclosure);
Figure 7a illustrates schematically the entry of text using an autocomplete function incorporating another circular menu structure (present disclosure);
Figure 7b illustrates schematically the entry of text using an autocomplete function incorporating another linear menu structure (present disclosure);
Figure 8a illustrates schematically the entry of text using an autocomplete function incorporating yet another circular menu structure (present disclosure);
Figure 8b illustrates schematically the entry of text using an autocomplete function incorporating yet another linear menu structure (present disclosure);
Figure 9 illustrates schematically a portable electronic device comprising the apparatus described herein;
Figure 10 illustrates schematically a method of operating the portable electronic device of Figure 9;
Figure 11 illustrates schematically a computer-readable medium providing a computer program.
Description of Example Aspects/Embodiments
As mentioned in the background section, multi-tap, disambiguation, and word completion methods are examples of approaches used to input text on devices. Each of these techniques will now be described in more detail with respect to Figures 1 to 5.
Figure 1 shows a standard alphanumeric keypad 101 common to a large number of mobile phones. As can be seen, each number key 102 also has a plurality of (three or four) letters 103 associated with it. When a user wishes to enter text (e.g. when drafting a text message), he/she must press each key 102 repeatedly in order to cycle through the letters 103 associated with that key 102. The number of times each key 102 needs to be pressed depends on the order of the letters 103 associated with the key 102. In the present example, the user has entered the full word string "Cat". To enter the letter "c", the number 2 key must be pressed three times. To enter the letter "a", the number 2 key must be pressed once. To enter the letter "t", the number 8 key must be pressed once. Pausing for a predetermined period of time, or pressing a different key 102, automatically chooses the current letter in the cycle. The multi-tap input method is inefficient compared to an unambiguous keyboard because, on average, multiple keystrokes are required in order to enter a single character.
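By way of example only, the multi-tap cycling behaviour may be sketched as follows (Python; the key-to-letter mapping reflects a conventional 12-key layout, and the function name is illustrative only).

```python
# Illustrative sketch of multi-tap: repeated presses of the same key cycle
# through the letters associated with that key on a 12-key keypad.
KEYPAD = {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
}

def multi_tap_letter(key, press_count):
    """Return the letter selected after press_count presses of a key."""
    letters = KEYPAD[key]
    return letters[(press_count - 1) % len(letters)]

# Entering "cat": three presses of 2 -> "c", one press of 2 -> "a",
# one press of 8 -> "t".
word = multi_tap_letter("2", 3) + multi_tap_letter("2", 1) + multi_tap_letter("8", 1)
print(word)  # cat
```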
Figure 2 shows text entry on the same alphanumeric keypad 201 as Figure 1, but using disambiguation instead of multi-tap. This time, in order to enter the full word string "Cat", the user only needs to press the key 202 associated with each letter once (i.e. number keys 2, 2 and 8, sequentially). On doing so, the device compares all possible character combinations based on this particular key sequence against entries stored in a predictive text dictionary, and presents the results as a list 204 of selectable character strings. In this case, the key sequence "228" has produced the character strings "Act", "Cat", "Bat", "Abu" and "Cau". Initially, the device displays the first character string shown on the list (in this case "Act"), but the user can scroll through the list 204 to select the character string that he/she had intended to input. In this case, the user has highlighted 205 the character string "Cat". Activation of an allocated key (not shown) at this point would cause the highlighted character string to be inputted. In practice, however, even with disambiguation, text entry on an alphanumeric keypad 201 requires more keystrokes per character (on average) than a full unambiguous keyboard. Also, the presence of a predictive text dictionary or other suitable language model is required, and the efficiency of the disambiguation approach is dependent upon the completeness of this model.
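By way of example only, such disambiguation of an ambiguous key sequence might be sketched as follows (Python; the dictionary passed in is an illustrative assumption). Every dictionary entry whose letters map onto the pressed keys is returned, which is why the sequence "228" yields several textonyms.

```python
# Illustrative sketch of disambiguation: every dictionary entry whose letters
# map onto the pressed key sequence is offered for selection.
KEYPAD = {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
}
LETTER_TO_KEY = {letter: key for key, letters in KEYPAD.items() for letter in letters}

def disambiguate(key_sequence, dictionary):
    """Return dictionary words whose key sequence matches the one entered."""
    return [word for word in dictionary
            if len(word) == len(key_sequence)
            and all(LETTER_TO_KEY.get(ch) == key
                    for ch, key in zip(word.lower(), key_sequence))]

# The key sequence "228" matches several textonyms from the example dictionary.
print(disambiguate("228", ["act", "cat", "bat", "abu", "cau", "dog"]))
```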
Some text entry systems incorporate both disambiguation and word completion to make alphanumeric keypads more competitive with full unambiguous keyboards (Figure 3). As discussed in the background section, a device running this software makes predictions on a word string being input (e.g. by comparing the inputted characters with entries stored in a predictive text dictionary) once the user has inputted one or more characters of the word string. Rather than merely disambiguating an ambiguous key sequence, however, the device provides the user with a list 304 of possible full word strings. Provided that the desired word string is contained within the list 304, word completion removes the need for the user to input the remaining characters of the full word string. In the present example, the user has started to enter the word string "Cattle", but has only entered the first three characters (by pressing the key sequence "228") before the device has provided a number of selectable character strings. In this case, however, because an alphanumeric keypad has been used to enter the characters, the list 304 contains a large number of textonyms (words produced by the same key sequence). In practice, the list 304 may comprise an even greater number of textonyms than is shown here. Textonyms limit the effectiveness of predictive text systems because the user must either scroll through a large number of suggested word strings to find the one that he/she had intended to input, or must enter a greater number of characters to reduce the size of the list 304. The fact that the user must exercise mental effort in deciding whether to scroll through the current list 304 or whether to enter another character to reduce the size of the current list 304 causes a reduction in throughput. In this case, the user has decided to scroll through the list 304 to highlight 305 the full word string "Cattle". Activation of an allocated key (not shown) at this point would cause the highlighted full word string 305 to be inputted.

Figure 4 shows word completion on a full unambiguous keyboard 406. This time, because there is no ambiguity associated with each keystroke, the list 404 of predicted full character strings contains only those that begin with the specific letter sequence "c", "a" and "t" (in that particular order). This reduces the need to add more characters to reduce the size of the list 404.
One problem with at least some text entry systems that offer word completion is the placement of the predicted word strings. Predicted word strings are sometimes presented above the keyboard for user selection, for example in a dedicated space between the inputted characters and the keyboard. Alternatively, the predicted word strings may be presented in-line with the inputted characters.
A disadvantage of presenting the predicted word strings above the keyboard is that the user must focus his/her attention on two different places if he/she is to take advantage of the predicted word strings. This is distracting for the user and reduces the speed of text input. As a result, some users prefer to type each word string out in full rather than make use of the predicted word strings, which defeats the purpose of word completion/predicted text altogether. There will now be described an apparatus and associated methods that may or may not overcome this issue.
Figure 5 illustrates an example of word completion. In this example, a user is trying to enter the word "Caterpillar". When the user touches the key 502 associated with the letter "p", the device compares the inputted character string with entries stored in a predictive text dictionary and finds a match with the word "Caterpillar". After finding a match, the device provides the predicted full word string "Caterpillar" in the form of a pop-up 507 adjacent to the inputted character string "Caterp". The user then has the option of accepting the predicted full word string by touching the pop-up 507 or pressing the spacebar 508, or rejecting the predicted full word string by touching a close-down box 509 (cross) in the corner of the pop-up 507.
One feature of the text entry system of Figure 5 is the delay in character input. Unlike some touch-screen devices, which accept a character as input when physical contact with the corresponding touch-screen key 502 has been detected, in this example the device waits until the user has terminated physical contact with the touch-sensitive display before accepting the character. However, because the character is not input/displayed at the time of contact, and because the user's finger is covering the key identifier (i.e. the letter shown on the key 502) at that moment in time, it is difficult for the user to tell if he/she has pressed the correct key. To provide visual confirmation, therefore, a key-press pop-up 511 is provided adjacent to the key 502 during physical contact.
One possible reason for delaying acceptance of the character is for manual error correction. If the user presses the wrong key and realises his/her mistake based on the key press pop-up, he/she is able to manually correct the error by sliding his/her finger to another touch-screen key before releasing. In this way, the example of Figure 5 only inputs the character corresponding to the latter key, meaning that no post-input error correction is required.
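By way of example only, this commit-on-release behaviour with slide-based error correction might be sketched as follows (Python; the hit-testing helper, the event handling and the key layout are hypothetical simplifications rather than the actual implementation of the device of Figure 5).

```python
# Illustrative sketch of commit-on-release: the character is taken from the key
# under the finger when contact ends, so sliding to another key before release
# corrects a mistaken press without any post-input correction.
def key_at(position, key_layout):
    """Hypothetical hit test: return the key whose rectangle contains position."""
    px, py = position
    for key, (x, y, w, h) in key_layout.items():
        if x <= px < x + w and y <= py < y + h:
            return key
    return None

def handle_touch(down_position, up_position, key_layout):
    pressed = key_at(down_position, key_layout)   # key shown in the key-press pop-up
    released = key_at(up_position, key_layout)    # key actually committed on release
    return released if released is not None else pressed

# Example layout: two adjacent keys; touching "o" but releasing over "p" inputs "p".
layout = {"o": (0, 0, 10, 10), "p": (10, 0, 10, 10)}
print(handle_touch((6, 5), (14, 5), layout))  # p
```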
One aspect of the present disclosure involves the provision of predicted word strings which are positionally associated with the position of the key 602 that was last touched. The predicted word strings may be provided in the form of a menu structure 612, 613 as shown in Figures 6a and 6b. In Figure 6a, the user has started to enter the full word string "Catalyst". On pressing the key 602 associated with the letter "t", however, the device has compared the inputted character string "Cat" with a predictive text dictionary and provided the user with a menu 612 of predicted character strings for selection. In this case, the device has provided the full word strings "Caterpillar", "Catalyst", "Cathode", "Cat", "Catwalk", "Catalogue", and "Catch" in a circular menu 612 around the key 602 associated with the letter "t" (Figure 6a), and in a linear menu 613 adjacent to the key 602 associated with the letter "t" (Figure 6b).
The touch-sensitive display 610 may comprise an input region 614 for entering text, and a display region 615 for displaying the entered text. The predicted character strings may be presented within the input region 614 for user selection (for example, within a keypress pop-up). Since the user will likely be looking at the keys 602 of the input region 614 when entering text, provision of the predicted character strings within this region 614 enables the user to acknowledge and use the predicted character strings without looking away from the keys 602. In particular, the area 616 of the touch-sensitive display 610 used to select the predicted character strings may be flush with, adjacent to, or may overlap with at least part of the key 602 that was last pressed (e.g. the area may overlap by up to 5mm with an edge of the key 602). For example, in the example illustrated in Figure 6a, the circular menu 612 is arranged around the last-pressed key 602, whilst in the example illustrated in Figure 6b, the linear menu 613 can be seen to be flush with the right-hand edge of the key 602.
A circular menu 612 has the advantage that selection depends on direction rather than distance. This can help to minimise selection errors. However, a linear menu 613 has the advantage that it may contain any number of selectable options without limiting the size/area 616 associated with their selection. Circular menus 612, on the other hand, are usually limited to a maximum of eight options. This helps to ensure that the direction/angle of movement required to select one option from the circular menu is substantially different to the direction/angle of movement required to select the other options. It also helps to ensure that the size/area 616 of each option is large enough to allow the user to see the options without straining their eyesight, and to allow the user to select an option without running the risk of selecting an adjacent option by mistake. When the menu 613 is linear, the order of the predicted word strings in the menu 613 may be dependent on the probability of each word string being the one that the user had intended to input. This probability may be based on the number of times each character string has previously been input by the user, and/or the commonality of use of each character string. Furthermore, the distance of each predicted character string from the last-pressed key 602 may be based on this probability. In particular, the distance of each character string from the key 602 might decrease as the probability increases. This helps to ensure that the most probable character strings are close at hand to facilitate their selection. Also, the area 616 of the touch-sensitive display 610 associated with selection of each predicted character string may also be based on this probability. In particular, the area 616 might increase as the probability increases. Again, this helps to facilitate selection of the most probable character strings.
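By way of example only, direction-based selection from a circular menu of up to eight options might be sketched as follows (Python; the equal-sector assignment and the angle convention are illustrative assumptions, not a prescribed layout).

```python
# Illustrative sketch: up to eight predicted character strings are assigned
# equal circular sectors around the last-pressed key, so selection depends on
# the direction of movement rather than the distance travelled.
def assign_circular_sectors(predictions, max_options=8):
    """Assign each option an angular sector (start_degrees, end_degrees)."""
    options = predictions[:max_options]
    sector = 360.0 / len(options)
    return {option: (i * sector, (i + 1) * sector) for i, option in enumerate(options)}

def option_for_direction(sectors, angle_degrees):
    """Return the option whose sector contains the direction of movement."""
    angle = angle_degrees % 360.0
    for option, (start, end) in sectors.items():
        if start <= angle < end:
            return option
    return None

sectors = assign_circular_sectors(
    ["caterpillar", "catalyst", "cathode", "cat", "catwalk", "catalogue", "catch"])
print(option_for_direction(sectors, 75.0))  # falls in the second sector: "catalyst"
```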
Regarding the activation of a key and selection of a predicted character string, there are a number of different possible implementations. In some example embodiments, activation of a key 602 and selection of a predicted character string may be effected when physical contact with the touch-sensitive display 610 has been detected at the key 602 and area 616 associated with selection of the predicted character string, respectively. In these example embodiments, the predicted character strings may be positionally associated with the last-pressed key 602 such that a particular predicted character string may be selected without causing input of characters associated with other keys on the display. For example, if the user wished to select the character string "catalyst" shown in Figures 6a and 6b, he/she could simply touch the area 616 associated with the word "catalyst". Also, in these example embodiments, the list of predicted character strings may be provided to the user as soon as the key 602 (in this case the key associated with the letter "t") has been touched.
In other example embodiments, activation of a key 602, and selection of a predicted character string, may be effected when physical contact with the touch-sensitive display 610 has been terminated at the key 602 and area 616 associated with selection of the predicted character string, respectively. In these example embodiments, the predicted character strings may be positionally associated with the last-pressed key 602 such that a particular predicted character string may be selected without interrupting physical contact with the touch-sensitive display 610 between activation of the key 602 and selection of that particular character string. This feature enables a user to maintain physical contact with the display 610 until he/she has slid his/her finger across the surface of the display to the chosen character string. In these example embodiments, the list of predicted character strings may be provided to the user as soon as the key 602 has been released. For example, if the user wished to select the character string "catalyst" shown in Figures 6a and 6b, he/she could simply slide his/her finger from the key 602 associated with the letter "t" to the area 616 associated with the word "catalyst" before terminating physical contact with the touch-sensitive display 610.
In other example embodiments, the device may be configured to detect an interaction property associated with the user touching a key 602, or area 616 associated with selection of a predicted character string. In these example embodiments, the device may be configured to accept or reject input of the corresponding character or predicted character string, respectively, when this interaction property exceeds a predetermined interaction value. For example, the device may be configured to detect the duration of touch, and accept or reject the character or character string when the duration of touch exceeds a particular touch time interaction value. Additionally or alternatively, the device may be configured to detect the touch pressure, and accept or reject the character or character string when the touch pressure exceeds a predetermined touch pressure interaction value. For example, if the user wished to select the character string "catalyst" shown in Figures 6a and 6b, he/she could simply maintain physical contact with the area 616 associated with the word "catalyst" for a predetermined period of time, or apply a predetermined pressure to the area 616 associated with the word "catalyst". In some example embodiments, however, this action might be used to reject the character string "catalyst" and accept the character associated with the key-press instead (in this case, the letter "t"). Similarly, if the user simply wished to enter the character associated with the key-press (i.e. the letter "t") without selecting any predicted character string from the menu 612, 613, he/she could maintain physical contact with the key 602 for a predetermined period of time, or apply a predetermined pressure to the key 602. In some example embodiments, however, this action might be used to reject input of the character, thereby resulting in the input of no character or character string at all.
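By way of example only, accepting or rejecting an input once an interaction property exceeds a predetermined interaction value might be sketched as follows (Python; the threshold values are arbitrary assumptions).

```python
# Illustrative sketch: an input is accepted once the measured interaction
# property (touch duration or touch pressure) exceeds a predetermined value.
TOUCH_TIME_INTERACTION_VALUE = 0.5      # seconds (arbitrary example value)
TOUCH_PRESSURE_INTERACTION_VALUE = 2.0  # newtons (arbitrary example value)

def accept_input(touch_duration=None, touch_pressure=None):
    """Return True if either interaction property exceeds its predetermined value."""
    if touch_duration is not None and touch_duration > TOUCH_TIME_INTERACTION_VALUE:
        return True
    if touch_pressure is not None and touch_pressure > TOUCH_PRESSURE_INTERACTION_VALUE:
        return True
    return False

# A long press on the area associated with "catalyst" accepts it, whereas a
# brief, light touch does not.
print(accept_input(touch_duration=0.8))                      # True
print(accept_input(touch_duration=0.2, touch_pressure=1.0))  # False
```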
Figures 7a and 7b illustrate another example embodiment of the present disclosure in which the device initially presents predicted part word strings or "word stems" 717, 718 (e.g. single letters or groups of letters) to the user, and builds full word strings up in sequential stages rather than presenting them in their entirety. This has at least two advantages. First of all, as shown in Figure 6a and 6b, the presentation of full word strings can take up a large area of space on the display 710 which could be distracting to the user, and which may conceal other information displayed on-screen (such as a clock or signal/battery indicator). Secondly, some of the predicted full word strings may contain many of the same characters. In this scenario, there is no need to present the same word stem 717, 718 more than once. Instead, the device might present unique primary word stems 717 once in a main menu, and then provide corresponding secondary word stems 718 in a sub-menu 719 on selection of one of the primary word stems 717. The primary 717 and secondary 718 word stems may be referred to as "predicted character strings" and "further predicted characters strings", respectively. This example embodiment may be better understood with reference to the figures. In the example shown in Figures 7a and 7b, the user wishes to enter the full word string "Catalyst". When the user inputs the first three letters, the device discovers that the character string "Cat" matches the full word strings "caterpillar", "catalyst", "cathode", "cat", "catwalk", "catalogue" and "catch". This time, however, instead of presenting these options to the user in full, the device presents the unique primary word stems 717 "ch", "al", "er", "w" and "h" in a main menu 712, 713. On selection of the primary word stem 717 "al", the device then presents the corresponding secondary word stems 718 "ogue" and "yst" to the user for selection in a sub-menu 719.
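By way of example only, the grouping of predicted full word strings into unique primary word stems with corresponding secondary word stems might be sketched as follows (Python). The sketch derives each primary stem from the longest common prefix of a group of remainders; this is a simplifying assumption and may segment some stems slightly differently from the figures (e.g. "e" rather than "er").

```python
# Illustrative sketch: predicted full word strings sharing the inputted prefix
# are split into unique primary word stems (main menu) and the corresponding
# secondary word stems (sub-menu shown on selection of a primary stem).
from collections import defaultdict
import os

def build_word_stems(inputted, full_words):
    remainders = [w[len(inputted):] for w in full_words
                  if w.startswith(inputted) and len(w) > len(inputted)]
    groups = defaultdict(list)
    for r in remainders:
        groups[r[0]].append(r)
    menu = {}
    for first, group in groups.items():
        # Primary stem: common prefix of the group (or its first letter for singletons).
        primary = os.path.commonprefix(group) if len(group) > 1 else first
        menu[primary] = [r[len(primary):] for r in group if len(r) > len(primary)]
    return menu

words = ["caterpillar", "catalyst", "cathode", "cat", "catwalk", "catalogue", "catch"]
menu = build_word_stems("cat", words)
print(menu["al"])  # secondary stems for the primary stem "al": ['yst', 'ogue']
```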
The secondary word stems 718 may be positionally associated with the position of the selected primary word stem 717. For example, the area of the touch-sensitive display 710 used to select a secondary word stem 718 may be flush with, adjacent to, or may overlap with at least part of the area used to select the primary word stem 717 (e.g. the area used to select a secondary word stem 718 may overlap by up to 5mm with an edge of the area used to select the primary word stem 717). In one example embodiment, as shown in Figure 7a, the primary word stems 717 may be provided in a circular menu 712, and the corresponding secondary word stems 718 may radiate outward from the position of the selected primary word stem 717. In the case shown, the secondary word stems 718 "ogue" and "yst" extend from the primary word stem 717 "al" as circular sectors. In another example embodiment, as shown in Figure 7b, the primary word stems 717 may be provided in a linear menu 713, and the corresponding secondary word stems 718 may extend from the selected primary word stem 717 as a linear sub-menu 719.
In some example embodiments, the secondary word stems 718 may appear only after selection of a primary word stem 717 in order to minimise the area of the display 710 taken up by the menu structure 712, 713, 719. However, in other example embodiments, the primary 717 and secondary 718 word stems may be presented at the same time (not shown). As with the example embodiments shown in Figures 6a and 6b, this allows the user to see all of the available options from the outset.
As with the predicted character strings of Figures 6a and 6b, selection of a primary 717 or secondary 718 word stem may be effected in a number of ways. For example, selection may occur when the user touches the area associated with selection of a word stem 717, 718, or when the user terminates physical contact with the display 710 at the area associated with selection of a word stem 717, 718. The user may select a primary word stem 717 by continuous touching of the touch-sensitive display 710 in a line originating from the last-pressed key 702 and continuing to the area associated with selection of the primary word stem 717. Similarly, the user may select a secondary word stem 718 by continuous touching of the touch-sensitive display 710 in a line originating from the area associated with selection of the primary word stem 717 and continuing to the area associated with selection of the secondary word stem 718. In addition, the user may select a primary word stem 717 and corresponding secondary word stem 718 sequentially by continuous touching of the touch-sensitive display 710 in a line originating from the last-pressed key 702 and continuing to the area associated with selection of the primary word stem 717, and then to the area associated with selection of the secondary word stem 718. Furthermore, input of a word stem 717, 718 may be accepted or rejected when the user remains in physical contact with the area associated with selection of the word stem 717, 718 for a predetermined period of time, or when the user applies a predetermined pressure to the area. It should be noted, however, that the user is under no obligation to select a word stem 717, 718 presented in the menu 712, 713, 719, and may instead choose to input only the character corresponding to the last-pressed key 702 if none of the suggested word stems 717, 718 appear to the user to be suitable. Input of the character corresponding to the last-pressed key 702 may occur automatically on rejection of a word stem 717, 718. Similarly, the user may select a primary word stem 717 without necessarily having to select a corresponding secondary word stem 718. In this case, rejection of a secondary word stem 718 may result in the corresponding primary word stem 717 being inputted automatically.
Whilst Figures 7a and 7b show only two levels of word stems, e.g. primary 717 and secondary 718 word stems, there could be multiple levels. For example, if the list of predicted full word strings contained longer words than those considered above, there could conceivably be three or four different levels, each containing a plurality of unique word stems 717, 718.

Figures 8a and 8b show another variation of the present disclosure. In these example embodiments, the sub-menu 819 does not just display the remaining characters (secondary word stems) of the full word strings, but instead displays the complete word strings that will be input if the user selects them. For example, in the examples shown, the user wishes to enter the full word string "Catalyst". As before, he/she enters the first few letters (in this case three letters) of the full word string, and is presented with the primary word stems 817 "ch", "al", "er", "w" and "h" based on matches found with entries stored in the predictive text dictionary. On selection of the primary word stem 817 "al", the user is presented with the full word strings "Catalogue" and "Catalyst" in a sub-menu 819 branching off from the main menu 812, 813. Again, the menu structure may take the form of a circular menu structure 812 (Figure 8a) or a linear menu structure 813 (Figure 8b). An advantage of these example embodiments with respect to those shown in Figures 7a and 7b may be that there is no need for the user to mentally construct the full word string that he/she is trying to input. Furthermore, because the primary word stems 817 are presented in the first level, there is no need to present every full word string at the same time (compare with the example embodiments shown in Figures 6a and 6b). Instead, only the full word strings corresponding to the primary word stem 817 are presented at the same time. This helps to limit the area of the display 810 taken up by the menu structure 812, 813, 819.
In the event that a primary word stem 817 is sufficient to complete a particular full word string without the need for any additional levels, the full word string may be presented as a selectable option in the menu 812 of primary word stems 817. For example, in Figures 8a and 8b, the full word string "Cater" may have been provided in addition to the primary word stem 817 "er" if the word "Cater" had appeared in the predictive text dictionary. This also applies to any other levels in the hierarchy.
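By way of example only, the Figure 8a/8b variant, in which each primary word stem leads to a sub-menu of complete word strings (including any word that the stem itself completes), might be sketched as follows (Python; the fixed two-character primary stems are a simplifying assumption and do not reproduce the exact stem segmentation of the figures).

```python
# Illustrative sketch of the Figure 8a/8b variant: each unique primary word
# stem maps to the complete word strings that selecting it would input; a stem
# that already completes a word (e.g. "er" -> "cater") simply lists that word.
from collections import defaultdict

def build_full_word_menu(inputted, stem_length, full_words):
    menu = defaultdict(list)
    for word in full_words:
        if word.startswith(inputted) and len(word) > len(inputted):
            primary_stem = word[len(inputted):len(inputted) + stem_length]
            menu[primary_stem].append(word)
    return dict(menu)

words = ["caterpillar", "catalyst", "cathode", "catwalk", "catalogue", "catch", "cater"]
menu = build_full_word_menu("cat", 2, words)
print(menu["al"])  # sub-menu for the primary stem "al": ['catalyst', 'catalogue']
print(menu["er"])  # ['caterpillar', 'cater'] - "cater" is itself a full word string
```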
In each of the example embodiments described herein, the character associated with the original key-press, the predicted word strings, the primary word stems, the secondary word stems, and/or the full word strings may comprise one or more of a letter, number, or punctuation mark. The letters may be letters from the Roman, Greek, Arabic and/or Cyrillic alphabets. In addition, although the predicted word strings, primary word stems and secondary word stems in the described examples have been part word strings or full word strings, they could also comprise multiple words. This feature may be used for phrase or sentence completion, rather than just word completion. To achieve this, the device may compare one or more inputted words with phrases or sentences stored in the predictive text dictionary.
In Figure 9 there is illustrated a device/apparatus 928 comprising a processor 929, a touch-sensitive display 930, and a storage medium 931, which may be electrically connected to one another by a data bus 932. The device 928 may be a portable electronic device such as a portable telecommunications device.
The processor 929 is configured for general operation of the device 928 by providing signalling to, and receiving signalling from, the other device components to manage their operation. In particular, the processor 929 is configured to detect a touch input from the touch-sensitive display 930; determine the character associated with the touch input; compare the input word string with entries stored in a predictive text dictionary; and provide one or more predicted character strings for user selection. The processor 929 may also be configured to detect the position of touch, the duration of touch, and/or the touch pressure; and enable the selection or input of characters based on this position, duration or pressure. Furthermore, the processor 929 may be configured to provide the predicted character strings as primary and secondary word stems in a step-wise selectable manner.
The touch-sensitive display 930 comprises an input region and a display region (not shown). The input region comprises a plurality of touch-screen keys for the input of respective characters, and is configured to display the predicted character strings in such a way that they are positionally associated with the key that has triggered their generation (i.e. the last-pressed key). The touch-screen keys may be arranged to form a 12-key alphanumeric keypad, a portrait "qwerty" keyboard, or a landscape "qwerty" keyboard. The touch-screen keys may be configured to allow input of numbers, punctuation marks, and/or letters of the Roman, Greek, Arabic and/or Cyrillic alphabets. The touch-screen keys may be configured to allow the input of text in one or more of the following languages: English, Chinese, Japanese, Greek, Arabic, Indo-European, Oriental and Asiatic. The touch-sensitive display may be configured to enable input of Chinese or Japanese characters, either directly or via transcription methods such as Pinyin and/or Bopomofo (Zhuyin Fuhao). The display region is configured to display the characters input by the touch-screen keys. The touch-sensitive display 930 may also be configured to display a graphical user interface to facilitate use of the device 928. The touch-sensitive display 930 may comprise additional touch-screen keys for navigation of the user interface. The touch-sensitive display 930 may comprise one or more of the following technologies: resistive, surface acoustic wave, capacitive, force panel, optical imaging, dispersive signal, acoustic pulse recognition, and bidirectional screen technology. The touch-sensitive display 930 may be configured to detect physical contact with any part of the user's body (not just the user's fingers), and may be configured to detect physical contact with a stylus.
The storage medium 931 is configured to store computer code required to operate the device 928, as described with reference to Figure 11. The storage medium 931 is also configured to store the predictive text dictionary. The processor 929 may access the storage medium 931 to compare the inputted word string against entries stored in the predictive text dictionary to find a match, and to determine the predicted character strings for presentation to the user. The storage medium 931 may also be configured to store settings for the device components. The processor 929 may access the storage medium 931 to retrieve the component settings in order to manage operation of the device components. Furthermore, the storage medium 931 may be configured to store the graphical user interface. The storage medium 931 may be a temporary storage medium such as a volatile random access memory. On the other hand, the storage medium 931 may be a permanent storage medium such as a hard disk drive, a flash memory, or a non-volatile random access memory. The main steps of the method used to operate the device/apparatus 928 are illustrated schematically in Figure 10.
Figure 11 illustrates schematically a computer/processor readable medium 1133 providing a computer program according to one example embodiment. In this example, the computer/processor readable medium 1133 is a disc such as a digital versatile disc (DVD) or a compact disc (CD). In other example embodiments, the computer/processor readable medium 1133 may be any medium that has been programmed in such a way as to carry out an inventive function. The computer/processor readable medium 1133 may be a removable memory device such as a memory stick or memory card (SD, mini SD or micro SD).
The computer program may comprise computer code configured to enable: detection of a particular touch input at a particular input area of a touch-sensitive display, the particular input area associated with the input of a particular character to be used in the input of a full word string; determination of a particular character associated with the particular touch input; determination of one or more predicted character strings based on the determined particular character in combination with one or more previous inputted characters of a word string, the predicted character strings constituting a prediction of all or part of an associated full word string; and provision of the one or more predicted character strings for user selection such that the one or more predicted character strings are positionally associated with the particular input area.
The computer program may also comprise computer code configured to enable: determination of one or more further predicted character strings based on a particular predicted character string; and provision of the one or more further predicted character strings for user selection such that the one or more further predicted character strings are positionally associated with the position of the particular predicted character string.
Other example embodiments depicted in the figures have been provided with reference numerals that correspond to similar features of earlier described example embodiments. For example, feature number 1 can also correspond to numbers 101, 201, 301, etc. These numbered features may appear in the figures but may not have been directly referred to within the description of these particular example embodiments. These have still been provided in the figures to aid understanding of the further example embodiments, particularly in relation to the features of similar earlier described example embodiments.
It will be appreciated by the skilled reader that any mentioned apparatus/device and/or other features of particular mentioned apparatus/device may be provided by apparatus arranged such that they become configured to carry out the desired operations only when enabled, e.g. switched on, or the like. In such cases, they may not necessarily have the appropriate software loaded into the active memory in the non-enabled state (e.g. the switched-off state) and may only load the appropriate software in the enabled state (e.g. the switched-on state). The apparatus may comprise hardware circuitry and/or firmware. The apparatus may comprise software loaded onto memory. Such software/computer programs may be recorded on the same memory/processor/functional units and/or on one or more memories/processors/functional units.
In some example embodiments, a particular mentioned apparatus/device may be preprogrammed with the appropriate software to carry out desired operations, wherein the appropriate software can be enabled for use by a user downloading a "key", for example, to unlock/enable the software and its associated functionality. Advantages associated with such example embodiments can include a reduced requirement to download data when further functionality is required for a device, and this can be useful in examples where a device is perceived to have sufficient capacity to store such pre-programmed software for functionality that may not be enabled by a user.
It will be appreciated that any mentioned apparatus/circuitry/elements/processor may have other functions in addition to the mentioned functions, and that these functions may be performed by the same apparatus/circuitry/elements/processor. One or more disclosed aspects may encompass the electronic distribution of associated computer programs and computer programs (which may be source/transport encoded) recorded on an appropriate carrier (e.g. memory, signal).
It will be appreciated that any "computer" described herein can comprise a collection of one or more individual processors/processing elements that may or may not be located on the same circuit board, or the same region/position of a circuit board or even the same device. In some example embodiments one or more of any mentioned processors may be distributed over a plurality of devices. The same or different processor/processing elements may perform one or more functions described herein. It will be appreciated that the term "signalling" may refer to one or more signals transmitted as a series of transmitted and/or received signals. The series of signals may comprise one, two, three, four or even more individual signal components or distinct signals to make up said signalling. Some or all of these individual signals may be transmitted/received simultaneously, in sequence, and/or such that they temporally overlap one another.
With reference to any discussion of any mentioned computer and/or processor and memory (e.g. including ROM, CD-ROM etc), these may comprise a computer processor, Application Specific Integrated Circuit (ASIC), field-programmable gate array (FPGA), and/or other hardware components that have been programmed in such a way to carry out the inventive function.
The applicant hereby discloses in isolation each individual feature described herein and any combination of two or more such features, to the extent that such features or combinations are capable of being carried out based on the present specification as a whole, in the light of the common general knowledge of a person skilled in the art, irrespective of whether such features or combinations of features solve any problems disclosed herein, and without limitation to the scope of the claims. The applicant indicates that the disclosed aspects/example embodiments may consist of any such individual feature or combination of features. In view of the foregoing description it will be evident to a person skilled in the art that various modifications may be made within the scope of the disclosure.
While there have been shown and described and pointed out fundamental novel features as applied to different example embodiments thereof, it will be understood that various omissions and substitutions and changes in the form and details of the devices and methods described may be made by those skilled in the art without departing from the spirit of the invention. For example, it is expressly intended that all combinations of those elements and/or method steps which perform substantially the same function in substantially the same way to achieve the same results are within the scope of the invention. Moreover, it should be recognized that structures and/or elements and/or method steps shown and/or described in connection with any disclosed form or embodiment may be incorporated in any other disclosed or described or suggested form or embodiment as a general matter of design choice. Furthermore, in the claims means-plus-function clauses are intended to cover the structures described herein as performing the recited function and not only structural equivalents, but also equivalent structures. Thus although a nail and a screw may not be structural equivalents in that a nail employs a cylindrical surface to secure wooden parts together, whereas a screw employs a helical surface, in the environment of fastening wooden parts, a nail and a screw may be equivalent structures.

Claims
1. An apparatus comprising:
a processor and memory including computer program code, the memory and the computer program code configured to, with the processor, cause the apparatus to:
detect a particular touch input at a particular input area of a touch-sensitive display, the particular input area associated with the input of a particular character to be used in the input of a full word string;
determine a particular character associated with the particular touch input;
determine one or more predicted character strings based on the determined particular character in combination with one or more previous inputted characters of a word string, the predicted character strings constituting a prediction of all or part of an associated full word string; and
provide the one or more predicted character strings for user selection such that the one or more predicted character strings are positionally associated with the particular input area.
2. The apparatus of claim 1, wherein the area of the touch-sensitive display used to select a particular predicted character string is positionally associated to be flush with, adjacent to, or overlaps with at least part of the particular input area.
3. The apparatus of claim 1 or 2, wherein the one or more predicted character strings are provided for user selection in a key-press pop-up area positionally associated with the particular input area.
4. The apparatus of any preceding claim, wherein the apparatus is configured to: determine one or more further predicted character strings based on a particular predicted character string; and
provide the one or more further predicted character strings for user selection such that the one or more further predicted character strings are positionally associated with the position of the particular predicted character string.
5. The apparatus of any preceding claim, wherein the predicted character strings are positionally associated such that they radiate outward from the particular input area.
6. The apparatus of claim 4 or 5, wherein the further predicted character strings appear on touching the touch-sensitive display at the position of the particular predicted character string.
7. The apparatus of any preceding claim, wherein a particular predicted character string is selectable by continuous touching of the touch-sensitive display in a line originating from the particular input area and continuing to the position of the particular predicted character string.
8. The apparatus of any of claims 4 to 7, wherein the particular predicted character string and a particular further predicted character string are sequentially selectable by continuous touching of the touch-sensitive display in a line originating from the particular input area and continuing to the position of the particular predicted character string and then to the position of the particular further predicted character string.
9. The apparatus of any preceding claim, wherein the apparatus is configured to determine the probability of a particular predicted character string matching all or part of the associated full word string, and wherein the area of the touch-sensitive display used to select the particular predicted character string is based on this probability.
10. The apparatus of claim 9, wherein the area increases as the probability increases.
11. The apparatus of any preceding claim, wherein the apparatus is configured to determine the probability of a particular predicted character string matching all or part of the associated full word string, and wherein the distance of the particular predicted character string from the particular area is based on this probability.
12. The apparatus of claim 11, wherein the distance of the particular predicted character string from the particular area decreases as the probability increases.
13. The apparatus of any of claims 9 to 12, wherein the probability is based on the number of times the particular predicted character string has previously been input in combination with the particular character and the one or more previous inputted characters, and/or the commonality of use of the particular predicted character string in combination with the particular character and the one or more previous inputted characters.
14. The apparatus of any preceding claim, wherein the one or more predicted character strings are positionally associated with the particular input area such that a particular predicted character string may be selected without interrupting physical contact with the touch-sensitive display between detection of the particular touch input and selection of the particular predicted character string.
15. The apparatus of claim 14, wherein the touch-sensitive display comprises an input region, the input region comprising the particular input area and a plurality of other input areas, each input area associated with the input of a respective character, and wherein the one or more predicted character strings are positionally associated with the particular input area such that a particular predicted character string may be selected without causing input of characters associated with the other input areas.
16. The apparatus of any preceding claim, wherein the apparatus is configured to determine the one or more predicted character strings by comparing the particular character, in combination with the one or more previous inputted characters, with entries stored in a predictive text dictionary and/or with a set of statistical rules.
17. The apparatus of any preceding claim, wherein the apparatus is configured to detect an interaction property associated with touching of the area of the touch-sensitive display used to select a particular predicted character string, and accept or reject input of that particular predicted character string when this interaction property exceeds a predetermined interaction value.
18. The apparatus of claim 17, wherein the apparatus is configured to accept input of the particular character automatically when input of the particular predicted character string has been rejected.
19. The apparatus of any preceding claim, wherein the apparatus is configured to detect an interaction property of the particular touch input, and accept or reject input of the particular character when this interaction property exceeds a predetermined interaction value.
20. The apparatus of any of claims 17 to 19, wherein the interaction property is the duration of touch, and the apparatus is configured to accept or reject input when the duration of touch exceeds a predetermined touch time interaction value.
21. The apparatus of any of claims 17 to 20, wherein the interaction property is the touch pressure, and the apparatus is configured to accept or reject input when the touch pressure exceeds a predetermined touch pressure interaction value.
22. The apparatus of any preceding claim, wherein the apparatus is configured to accept input of a particular predicted character string when physical contact has been detected at an area of the touch-sensitive display used to select that particular predicted character string.
23. The apparatus of any preceding claim, wherein the apparatus is configured to accept input of a particular predicted character string when physical contact with the touch-sensitive display has been terminated at an area of the touch-sensitive display used to select that particular predicted character string.
24. The apparatus of any preceding claim, wherein the one or more predicted character strings are provided in the form of a menu comprising a circular and/or linear arrangement of predicted character strings.
25. The apparatus of any preceding claim, wherein one or more of the particular character, previous inputted characters, and predicted character strings comprise one or more of a letter, number, or punctuation mark.
26. The apparatus of any preceding claim, wherein the touch-sensitive display forms part of the apparatus.
27. The apparatus of any preceding claim, wherein the touch-sensitive display comprises a touch-sensitive alphanumeric keypad, a touch-sensitive portrait qwerty keyboard, or a touch-sensitive landscape qwerty keyboard.
28. The apparatus of any preceding claim, wherein the apparatus is a touch-sensitive display, a portable telecommunications device, a module for a touch-sensitive display, or a module for a portable telecommunications device.
29. A method comprising:
detecting a particular touch input at a particular input area of a touch-sensitive display, the particular input area associated with the input of a particular character to be used in the input of a full word string;
determining a particular character associated with the particular touch input;
determining one or more predicted character strings based on the determined particular character in combination with one or more previous inputted characters of a word string, the predicted character strings constituting a prediction of all or part of an associated full word string; and
providing the one or more predicted character strings for user selection such that the one or more predicted character strings are positionally associated with the particular input area.
30. The method of claim 29, wherein the area of the touch-sensitive display used to select a particular predicted character string is positionally associated so as to be flush with, adjacent to, or overlapping with at least part of the particular input area.
31. The method of claim 29 or 30, wherein the one or more predicted character strings are provided for user selection in a key-press pop-up area positionally associated with the particular input area.
32. The method of any of claims 29 to 31, further comprising:
determining one or more further predicted character strings based on a particular predicted character string; and
providing the one or more further predicted character strings for user selection such that the one or more further predicted character strings are positionally associated with the position of the particular predicted character string.
33. The method of any of claims 29 to 32, wherein the predicted character strings are positionally associated such that they radiate outward from the particular input area.
34. The method of claim 32 or 33, wherein the further predicted character strings appear on touching the touch-sensitive display at the position of the particular predicted character string.
35. The method of any of claims 29 to 34, wherein a particular predicted character string is selectable by continuous touching of the touch-sensitive display in a line originating from the particular input area and continuing to the position of the particular predicted character string.
36. The method of any of claims 32 to 35, wherein the particular predicted character string and a particular further predicted character string are sequentially selectable by continuous touching of the touch-sensitive display in a line originating from the particular input area and continuing to the position of the particular predicted character string and then to the position of the particular further predicted character string.
37. The method of any of claims 29 to 36, further comprising:
determining the probability of a particular predicted character string matching all or part of the associated full word string, and wherein the area of the touch-sensitive display used to select the particular predicted character string is based on this probability.
38. The method of claim 37, wherein the area increases as the probability increases.
39. The method of any of claims 29 to 38, further comprising:
determining the probability of a particular predicted character string matching all or part of the associated full word string, wherein the distance of the particular predicted character string from the particular area is based on this probability.
40. The method of claim 39, wherein the distance of the particular predicted character string from the particular area decreases as the probability increases.
41. The method of any of claims 37 to 40, wherein the probability is based on the number of times the particular predicted character string has previously been input in combination with the particular character and the one or more previous inputted characters, and/or the commonality of use of the particular predicted character string in combination with the particular character and the one or more previous inputted characters.
42. The method of any of claims 29 to 41, wherein the one or more predicted character strings are positionally associated with the particular input area such that a particular predicted character string may be selected without interrupting physical contact with the touch-sensitive display between detection of the particular touch input and selection of the particular predicted character string.
43. The method of claim 42, wherein the touch-sensitive display comprises an input region, the input region comprising the particular input area and a plurality of other input areas, each input area associated with the input of a respective character, and wherein the one or more predicted character strings are positionally associated with the particular input area such that a particular predicted character string may be selected without causing input of characters associated with the other input areas.
44. The method of any of claims 29 to 43, further comprising:
determining the one or more predicted character strings by comparing the particular character, in combination with the one or more previous inputted characters, with entries stored in a predictive text dictionary and/or with a set of statistical rules.
45. The method of any of claims 29 to 44, further comprising:
detecting an interaction property associated with touching of the area of the touch-sensitive display used to select a particular predicted character string; and
accepting or rejecting input of that particular predicted character string when this interaction property exceeds a predetermined interaction value.
46. The method of claim 45, further comprising:
accepting input of the particular character automatically when input of the particular predicted character string has been rejected.
47. The method of any of claims 29 to 46, further comprising:
detecting an interaction property of the particular touch input; and
accepting or rejecting input of the particular character when this interaction property exceeds a predetermined interaction value.
48. The method of any of claims 45 to 47, wherein the interaction property is the duration of touch, and the method comprises accepting or rejecting input when the duration of touch exceeds a predetermined touch time interaction value.
49. The method of any of claims 45 to 48, wherein the interaction property is the touch pressure, and the method comprises accepting or rejecting input when the touch pressure exceeds a predetermined touch pressure interaction value.
50. The method of any of claims 29 to 49, wherein the method further comprises: accepting input of a particular predicted character string when physical contact has been detected at an area of the touch-sensitive display used to select that particular predicted character string.
51. The method of any of claims 29 to 50, wherein the method further comprises: accepting input of a particular predicted character string when physical contact with the touch-sensitive display has been terminated at an area of the touch-sensitive display used to select that particular predicted character string.
52. The method of any of claims 29 to 51, wherein the one or more predicted character strings are provided in the form of a menu comprising a circular and/or linear arrangement of predicted character strings.
53. The method of any of claims 29 to 52, wherein one or more of the particular character, previous inputted characters, and predicted character strings comprise one or more of a letter, number, or punctuation mark.
54. The method of any of claims 29 to 53, wherein the touch-sensitive display forms part of an apparatus performing the method.
55. The method of any of claims 29 to 54, wherein the touch-sensitive display comprises a touch-sensitive alphanumeric keypad, a touch-sensitive portrait qwerty keyboard, or a touch-sensitive landscape qwerty keyboard.
56. A computer program, recorded on a carrier, the computer program comprising computer code configured to enable:
detection of a particular touch input at a particular input area of a touch-sensitive display, the particular input area associated with the input of a particular character to be used in the input of a full word string;
determination of a particular character associated with the particular touch input;
determination of one or more predicted character strings based on the determined particular character in combination with one or more previous inputted characters of a word string, the predicted character strings constituting a prediction of all or part of an associated full word string; and
provision of the one or more predicted character strings for user selection such that the one or more predicted character strings are positionally associated with the particular input area.
57. The computer program of claim 56, wherein the area of the touch-sensitive display used to select a particular predicted character string is positionally associated to be flush with, adjacent to, or overlaps with at least part of the particular input area.
58. The computer program of claim 56 or 57, wherein the one or more predicted character strings are provided for user selection in a key-press pop-up area positionally associated with the particular input area.
59. The computer program of any of claims 56 to 58, wherein the computer program code is further configured to enable:
determining one or more further predicted character strings based on a particular predicted character string; and
providing the one or more further predicted character strings for user selection such that the one or more further predicted character strings are positionally associated with the position of the particular predicted character string.
60. The computer program of any of claims 56 to 59, wherein the predicted character strings are positionally associated such that they radiate outward from the particular input area.
61. The computer program of claim 59 or 60, wherein the further predicted character strings appear on touching the touch-sensitive display at the position of the particular predicted character string.
62. The computer program of any of claims 56 to 61, wherein a particular predicted character string is selectable by continuous touching of the touch-sensitive display in a line originating from the particular input area and continuing to the position of the particular predicted character string.
63. The computer program of any of claims 59 to 62, wherein the particular predicted character string and a particular further predicted character string are sequentially selectable by continuous touching of the touch-sensitive display in a line originating from the particular input area and continuing to the position of the particular predicted character string and then to the position of the particular further predicted character string.
64. The computer program of any of claims 56 to 63, wherein the computer program code is further configured to enable:
determining the probability of a particular predicted character string matching all or part of the associated full word string, and wherein the area of the touch-sensitive display used to select the particular predicted character string is based on this probability.
65. The computer program of claim 64, wherein the area increases as the probability increases.
66. The computer program of any of claims 56 to 65, wherein the computer program code is further configured to enable:
determining the probability of a particular predicted character string matching all or part of the associated full word string, wherein the distance of the particular predicted character string from the particular area is based on this probability.
67. The computer program of claim 66, wherein the distance of the particular predicted character string from the particular area decreases as the probability increases.
68. The computer program of any of claims 64 to 67, wherein the probability is based on the number of times the particular predicted character string has previously been input in combination with the particular character and the one or more previous inputted characters, and/or the commonality of use of the particular predicted character string in combination with the particular character and the one or more previous inputted characters.
69. The computer program of any of claims 56 to 68, wherein the one or more predicted character strings are positionally associated with the particular input area such that a particular predicted character string may be selected without interrupting physical contact with the touch-sensitive display between detection of the particular touch input and selection of the particular predicted character string.
70. The computer program of claim 69, wherein the touch-sensitive display comprises an input region, the input region comprising the particular input area and a plurality of other input areas, each input area associated with the input of a respective character, and wherein the one or more predicted character strings are positionally associated with the particular input area such that a particular predicted character string may be selected without causing input of characters associated with the other input areas.
71. The computer program of any of claims 56 to 70, wherein the computer program code is further configured to enable:
determining the one or more predicted character strings by comparing the particular character, in combination with the one or more previous inputted characters, with entries stored in a predictive text dictionary and/or with a set of statistical rules.
72. The computer program of any of claims 56 to 71, wherein the computer program code is further configured to enable:
detecting an interaction property associated with touching of the area of the touch-sensitive display used to select a particular predicted character string; and
accepting or rejecting input of that particular predicted character string when this interaction property exceeds a predetermined interaction value.
73. The computer program of claim 72, wherein the computer program code is further configured to enable:
accepting input of the particular character automatically when input of the particular predicted character string has been rejected.
74. The computer program of any of claims 56 to 73, wherein the computer program code is further configured to enable:
detecting an interaction property of the particular touch input; and
accepting or rejecting input of the particular character when this interaction property exceeds a predetermined interaction value.
75. The computer program of any of claims 72 to 74, wherein the interaction property is the duration of touch, and the computer program code is further configured to enable accepting or rejecting input when the duration of touch exceeds a predetermined touch time interaction value.
76. The computer program of any of claims 72 to 75, wherein the interaction property is the touch pressure, and the computer program code is further configured to enable accepting or rejecting input when the touch pressure exceeds a predetermined touch pressure interaction value.
77. The computer program of any of claims 56 to 76, wherein the computer program code is further configured to enable:
accepting input of a particular predicted character string when physical contact has been detected at an area of the touch-sensitive display used to select that particular predicted character string.
78. The computer program of any of claims 56 to 77, wherein the computer program code is further configured to enable:
accepting input of a particular predicted character string when physical contact with the touch-sensitive display has been terminated at an area of the touch-sensitive display used to select that particular predicted character string.
79. The computer program of any of claims 56 to 78, wherein the one or more predicted character strings are provided in the form of a menu comprising a circular and/or linear arrangement of predicted character strings.
80. The computer program of any of claims 56 to 79, wherein one or more of the particular character, previous inputted characters, and predicted character strings comprise one or more of a letter, number, or punctuation mark.
81. The computer program of any of claims 56 to 80, wherein the touch-sensitive display forms part of an apparatus configured to execute the computer program.
82. The computer program of any of claims 56 to 81, wherein the touch-sensitive display comprises a touch-sensitive alphanumeric keypad, a touch-sensitive portrait qwerty keyboard, or a touch-sensitive landscape qwerty keyboard.
83. An apparatus comprising:
means for detecting a particular touch input at a particular input area of a touch-sensitive display, the particular input area associated with the input of a particular character to be used in the input of a full word string;
means for determining a particular character associated with the particular touch input;
means for determining one or more predicted character strings based on the determined particular character in combination with one or more previous inputted characters of a word string, the predicted character strings constituting a prediction of all or part of an associated full word string; and
means for providing the one or more predicted character strings for user selection such that the one or more predicted character strings are positionally associated with the particular input area.
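
By way of a non-limiting editorial illustration of the behaviour recited above (in particular claims 1, 5, 9 to 13, 16 and 17 to 21, and their method and computer program counterparts), the following Python sketch shows one possible way such prediction, layout and acceptance logic could be realised. It is not taken from the application: every name, data structure and threshold value below (predict, lay_out, accept_selection, the word-frequency dictionary, and the 0.3 s duration and 0.5 pressure defaults) is a hypothetical assumption introduced purely for illustration.

from dataclasses import dataclass
import math


@dataclass
class Candidate:
    text: str            # predicted character string
    probability: float   # estimated probability of matching the full word string
    x: float = 0.0       # centre of the selection area on the display
    y: float = 0.0
    radius: float = 0.0  # radius of the selection area


def predict(prefix: str, word_counts: dict[str, int], limit: int = 4) -> list[Candidate]:
    # Compare the typed prefix (current character plus previously inputted characters)
    # with entries in a predictive-text dictionary; probability is proportional to how
    # often each matching word has previously been entered (cf. claims 13 and 16).
    matches = {w: n for w, n in word_counts.items() if w.startswith(prefix)}
    total = sum(matches.values()) or 1
    ranked = sorted(matches.items(), key=lambda kv: kv[1], reverse=True)[:limit]
    return [Candidate(text=w, probability=n / total) for w, n in ranked]


def lay_out(candidates: list[Candidate], key_x: float, key_y: float,
            base_distance: float = 120.0, base_radius: float = 20.0) -> None:
    # Position the candidates so that they radiate outward from the pressed key
    # (claim 5): the more probable a candidate, the closer it sits to the key
    # (claim 12) and the larger its touch target becomes (claim 10).
    for i, c in enumerate(candidates):
        angle = math.pi * (0.25 + 0.5 * i / max(len(candidates) - 1, 1))
        distance = base_distance * (1.5 - c.probability)
        c.radius = base_radius * (1.0 + c.probability)
        c.x = key_x + distance * math.cos(angle)
        c.y = key_y - distance * math.sin(angle)  # screen y grows downwards, so subtract


def accept_selection(touch_duration_s: float, touch_pressure: float,
                     min_duration_s: float = 0.3, min_pressure: float = 0.5) -> bool:
    # Accept input of the candidate only when an interaction property of the touch,
    # its duration or its pressure, exceeds a predetermined value (claims 17, 20, 21);
    # otherwise only the single character of the pressed key would be entered (claim 18).
    return touch_duration_s >= min_duration_s or touch_pressure >= min_pressure


# Example: the user has already typed "th" and now touches the "e" key at (40, 300).
history = {"the": 120, "they": 60, "then": 40, "them": 25, "theatre": 3}
candidates = predict("the", history)
lay_out(candidates, key_x=40.0, key_y=300.0)
# "the", the most frequently used match, is placed closest to the key with the
# largest selection area; a sufficiently long or firm touch on it would accept it.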
PCT/FI2010/051005 2010-12-08 2010-12-08 An apparatus and associated methods for text entry WO2012076743A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/FI2010/051005 WO2012076743A1 (en) 2010-12-08 2010-12-08 An apparatus and associated methods for text entry

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/FI2010/051005 WO2012076743A1 (en) 2010-12-08 2010-12-08 An apparatus and associated methods for text entry

Publications (1)

Publication Number Publication Date
WO2012076743A1 true WO2012076743A1 (en) 2012-06-14

Family

ID=44541434

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/FI2010/051005 WO2012076743A1 (en) 2010-12-08 2010-12-08 An apparatus and associated methods for text entry

Country Status (1)

Country Link
WO (1) WO2012076743A1 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070046641A1 (en) * 2005-09-01 2007-03-01 Swee Ho Lim Entering a character into an electronic device
US20100225599A1 (en) * 2009-03-06 2010-09-09 Mikael Danielsson Text Input

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE112012000189B4 (en) 2012-02-24 2023-06-15 Blackberry Limited Touch screen keyboard for providing word predictions in partitions of the touch screen keyboard in close association with candidate letters
US9285953B2 (en) 2012-10-18 2016-03-15 Samsung Electronics Co., Ltd. Display apparatus and method for inputting characters thereof
CN103777892A (en) * 2012-10-18 2014-05-07 三星电子株式会社 Display apparatus and method for inputting characters thereof
JP2014087047A (en) * 2012-10-18 2014-05-12 Samsung Electronics Co Ltd Display apparatus and method for inputting characters thereof
EP2722731A1 (en) * 2012-10-18 2014-04-23 Samsung Electronics Co., Ltd. Display apparatus and method for inputting characters thereof
RU2645281C2 (en) * 2012-10-18 2018-02-19 Самсунг Электроникс Ко., Лтд. Display device and method of introducing symbols therewith
JP2014147063A (en) * 2013-01-21 2014-08-14 Keypoint Technologies (Uk) Ltd Text input method and apparatus
GB2511646A (en) * 2013-03-08 2014-09-10 Google Inc Gesture completion path display for gesture-based keyboards
GB2511646B (en) * 2013-03-08 2016-02-10 Google Inc Gesture completion path display for gesture-based keyboards
EP3051387A4 (en) * 2013-09-23 2017-03-15 Yulong Computer Telecommunication Scientific (Shenzhen) Co. Ltd. Associated prompt input method, system and terminal
US10216409B2 (en) 2013-10-30 2019-02-26 Samsung Electronics Co., Ltd. Display apparatus and user interface providing method thereof
US20150121286A1 (en) * 2013-10-30 2015-04-30 Samsung Electronics Co., Ltd. Display apparatus and user interface providing method thereof
WO2015088669A1 (en) * 2013-12-10 2015-06-18 Google Inc. Multiple character input with a single selection
JP2016066356A (en) * 2014-09-18 2016-04-28 高 元祐 Method for inputting information on two steps in release after connection movement with latent key
US9952764B2 (en) 2015-08-20 2018-04-24 Google Llc Apparatus and method for touchscreen keyboard suggestion word generation and display
EP3598275A1 (en) * 2018-07-20 2020-01-22 Amazonen-Werke H. Dreyer GmbH & Co. KG Operating unit for an agricultural machine
DE102018117619A1 (en) * 2018-07-20 2020-01-23 Amazonen-Werke H. Dreyer Gmbh & Co. Kg Control unit for an agricultural machine

Similar Documents

Publication Publication Date Title
WO2012076743A1 (en) An apparatus and associated methods for text entry
US20180039335A1 (en) Touchscreen Keyboard Providing Word Predictions at Locations in Association with Candidate Letters
EP2631758B1 (en) Touchscreen keyboard providing word predictions in partitions of the touchscreen keyboard in proximate association with candidate letters
CA2803192C (en) Virtual keyboard display having a ticker proximate to the virtual keyboard
CA2813393C (en) Touchscreen keyboard providing word predictions at locations in association with candidate letters
US9128921B2 (en) Touchscreen keyboard with corrective word prediction
US20130285927A1 (en) Touchscreen keyboard with correction of previously input text
US8296128B2 (en) Handheld electronic device and method employing logical proximity of characters in spell checking
US20130002553A1 (en) Character entry apparatus and associated methods
EP2592568A1 (en) Displaying a prediction candidate after a typing mistake
US20130187858A1 (en) Virtual keyboard providing an indication of received input
EP2660699A1 (en) Touchscreen keyboard with correction of previously input text
WO2013163718A1 (en) Touchscreen keyboard with correction of previously input text
US9275045B2 (en) Method for automatically preferring a diacritical version of a linguistic element on a handheld electronic device based on linguistic source and associated apparatus
EP2669782B1 (en) Touchscreen keyboard with corrective word prediction
WO2012076742A1 (en) Character indications
US20120169607A1 (en) Apparatus and associated methods
US20130125035A1 (en) Virtual keyboard configuration
EP1921532B1 (en) Method for automatically preferring a diacritical version of a linguistic element on a handheld electronic device based on linguistic source and associated apparatus
EP2660684A1 (en) User interface for changing an input state of a virtual keyboard
US20080255846A1 (en) Method of providing language objects by indentifying an occupation of a user of a handheld electronic device and a handheld electronic device incorporating the same
US9996213B2 (en) Apparatus for a user interface and associated methods
EP2660693B1 (en) Touchscreen keyboard providing word predictions at locations in association with candidate letters
WO2011158064A1 (en) Mixed ambiguity text entry
CA2719387C (en) System and method for facilitating character capitalization in handheld electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10803258

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10803258

Country of ref document: EP

Kind code of ref document: A1