EP1472596A1 - System and method for dynamic key assignment in an enhanced user interface - Google Patents

System and method for dynamic key assignment in an enhanced user interface

Info

Publication number
EP1472596A1
Authority
EP
European Patent Office
Prior art keywords
characters
frequency
valid
character
items
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP02768978A
Other languages
English (en)
French (fr)
Other versions
EP1472596A4 (de)
Inventor
Kent Qing Pu
Akmid Alkhatib
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bsquare San Diego Corp
Original Assignee
Bsquare San Diego Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bsquare San Diego Corp filed Critical Bsquare San Diego Corp
Publication of EP1472596A1
Publication of EP1472596A4

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233Character input methods
    • G06F3/0236Character input methods using selection techniques to select from displayed items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/26Visual data mining; Browsing structured data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233Character input methods
    • G06F3/0237Character input methods using prediction or retrieval techniques

Definitions

  • the present invention generally relates to an enhanced user interface for inputting textual data into a computer device and more specifically relates to dynamically assigning characters to input groups or keys to reduce data input keystrokes or increase the efficiency of voice input recognition.
  • touch screens are used in combination with "electronic" stylus pens for inputting textual data through character recognition and/or keyboard emulation techniques.
  • cellular telephones and the like utilize standard telephone keypads for inputting alphanumeric data.
  • a standard telephone keypad attached to a cellular telephone is used not only to dial phone numbers, but to enter names, addresses, and telephone numbers into an electronic address book.
  • Such keypads are also commonly used to enter alphanumeric data into PDAs, auto PCs, home appliances, and the like.
  • This arrangement of information on the keys is used to represent the particular characters that can be input via each key.
  • the "2" key is used to enter any of the characters printed on the "2" key, namely the characters "A", "B", "C", and "2".
  • the actual character that is input into the device depends on the number of times the particular key is successively pressed. For example, pressing the “2” key once results in inputting the letter “A”. Pressing the "2” key twice in succession results in entering the letter "B”.
  • the characters "C” and "2” are input by pressing the "2” key, three and four times in succession, respectively.
  • a shuttle control system is used to scroll through a list of predefined words, phrases and/or alphanumeric characters. When the desired data item appears on the display, or is highlighted by a cursor, the user selects that item by pressing an enter key.
  • shuttle control systems are implemented using a single joystick-like central key that can be pivoted up, down, right or left.
  • the shuttle key is used to scroll data in accordance with the direction the shuttle control is pressed.
  • a user can scroll through the alphabet in an ascending or descending order, depending on whether the shuttle key is moved to the right or the left position (or the up or the down position).
  • the shuttle key can function as the enter key in addition to the direction key.
  • the enter function is implemented by pressing down on the shuttle key, rather than pressing it in a particular direction.
  • Other shuttle control systems may have different arrangements of keys. For example, one common arrangement uses four separate keys that are organized in a pattern to represent the four directions of up, down, right and left. Typically, a key that functions as an enter key is placed in the center of the four directional keys. Another common arrangement may use an up key, a down key, and an enter key situated between the up and down keys. Other arrangements are also possible and may also include one or more additional keys.
  • Regardless of its form, this type of control is referred to herein as a "shuttle control system."
  • Shuttle control systems are typically used when it is desirable to use fewer physical keys. Commonly, shuttle control systems are used in portable computing devices such as auto PCs, PDAs, cellular telephones, and other hand-held devices such as remote controls for web browsers and the like. However, as can be imagined, entering textual data through shuttle control systems can be both time-consuming and problematic.
  • Another conventional keyboard substitution solution is voice recognition software.
  • This technology is especially useful in devices such as auto PCs, where it is important to keep one's hands on the wheel and eyes on the road.
  • these devices allow users not only to issue commands, but also to enter textual data by verbally spelling the words.
  • the problem with these conventional solutions is that many letters sound alike and current voice recognition technology can have trouble distinguishing among similar sounding letters. For example, current systems have trouble distinguishing between the letter "B” and the letter “D.” This voice recognition problem increases in noisy environments, such as automobiles and the like.
  • the present invention provides an improved user interface used to input data without the use of a standard keyboard.
  • the system accepts input through a data entry means such as a shuttle control system or a standard telephone keypad.
  • the data that is entered is selected from a predefined list.
  • the list is presented to the user in an arrangement that statistically reduces the number of keystrokes required for data entry. This presentation is the result of determining the relative frequency of each valid selection in the predefined list and presenting those valid selections with the highest frequency in a position that minimizes the number of keystrokes required for data entry.
  • One aspect of the invention allows a standard telephone keypad to be presented on a display with the valid data entry selections dynamically assigned to the keys on the keypad. Another aspect of the invention presents the valid data entry selections in a linear fashion on the display with the highest frequency selections closest to the default cursor position. Yet another aspect of the invention allows for similar sounding letters in a dynamically defined group of available letters to be arranged in a fashion that improves the implementation of voice recognition. Additional aspects and features of the present invention will become apparent after viewing the following figures and reading the corresponding detailed description.

Brief Description of the Figures
  • Figure 1 is a block diagram illustrating an example input control system according to an embodiment of the present invention.
  • Figure 2 is a block diagram illustrating two predefined lists in tabular format according to an embodiment of the present invention.
  • Figures 3A - 3C are block diagrams illustrating example selection list arrangements according to various embodiments of the present invention.
  • Figures 4A - 4C are block diagrams illustrating example predefined lists associated with contextual data entry for an input control system according to various embodiments of the present invention.
  • Figures 5A - 5C are block diagrams illustrating example predefined lists associated with contextual data entry for an input control system according to various embodiments of the present invention.
  • Figures 6A - 6C are block diagrams illustrating example predefined lists associated with contextual data entry for an input control system according to various embodiments of the present invention.
  • Figures 7A - 7C are block diagrams illustrating example predefined lists associated with contextual data entry for an input control system according to various embodiments of the present invention.
  • Figures 8A - 8C are block diagrams illustrating example predefined lists associated with contextual data entry for an input control system according to various embodiments of the present invention.
  • Figures 9A - 9C are block diagrams illustrating example predefined lists associated with contextual data entry for an input control system according to various embodiments of the present invention.
  • Figure 10 is a block diagram illustrating an input control system with data entry completed according to an embodiment of the present invention.
  • Figure 11 is a block diagram illustrating a conventional standard telephone keypad.
  • Figure 12 is a block diagram illustrating an example frequency mapping for a standard telephone keypad according to an embodiment of the present invention.
  • Figures 13A - 13H are block diagrams illustrating example dynamic mappings of characters on a keypad according to an embodiment of the present invention.
  • Figure 14 is a flowchart illustrating an example process for dynamic key assignment according to an embodiment of the present invention.
  • Figure 15 is a block diagram illustrating a predefined list in tabular format according to an embodiment of the present invention.
  • Figures 16A - 16H are block diagrams illustrating example dynamic mappings of characters on a keypad according to an embodiment of the present invention.
  • Figures 17A - 17H are block diagrams illustrating example predefined lists associated with contextual voice data entry for an input control system according to various embodiments of the present invention.
  • Figures 18A - 18H are block diagrams illustrating example predefined lists associated with contextual voice data entry for an input control system according to various embodiments of the present invention.
  • Figure 19 is a block diagram illustrating an example user interface display according to an embodiment of the present invention.
  • Figure 20 is a block diagram illustrating an example dynamically defined conditional probability matrix according to an embodiment of the present invention.
  • Figure 21 is a block diagram illustrating an example dynamically defined conditional probability matrix according to an embodiment of the present invention.
  • Figure 22 is a block diagram illustrating an example dynamically defined cluster table according to an embodiment of the present invention.
  • Figure 23 is a block diagram illustrating an example dynamically defined character mapping according to an embodiment of the present invention.
  • Figure 24 is a flowchart illustrating an example process for distinguishing between similarly sounding letters in a voice recognition system according to an embodiment of the present invention.
  • Certain embodiments as disclosed herein provide for systems and methods for dynamically assigning characters to input keys to reduce the number of data entry actions during data input. For example, a predetermined list can be dynamically ordered such that the most frequently selected entries are positioned at the front of the list. Additionally, when entering the individual letters of the items in the list, the letters can be dynamically arranged such that those letters with the highest frequency of use are positioned at the front of the list.
  • Fig. 1 is a block diagram illustrating an example input control system 10.
  • the input control system 10 may comprise a display area 20 and a selection list 30.
  • input control system 10 is communicatively coupled with data entry device 40, for example a shuttle control system as previously described.
  • Data entry device 40 is preferably communicatively coupled with input control system 10 over communications link 50.
  • Communications link 50 may be implemented in various formats such as a wired link, a wireless link, or an infra red link, just to name a few.
  • alternative layouts for the display area 20 and the selection list 30 of input control system 10 may be employed.
  • the layout and presentation may be determined by the type of intelligent device in which input control system 10 is implemented.
  • a wireless communications device such as a cell phone may have one configuration while an auto PC may have a different configuration.
  • the description will be explained in the context of a user providing input to an intelligent device through a data entry mechanism.
  • the intelligent device will be described as an auto PC or a wireless communications device and the data entry mechanism will be described as a shuttle control system, an alphanumeric keypad, or a telephonic keypad.
  • the present invention can be used for inputting any type of textual data that has a corresponding predefined list, such as the list of cities described in the examples presented herein.
  • the size of the predefined list is limited only by the storage capacity of the device being used to implement the present invention. Conceivably, there is no limit to the size of the predefined list that can be used in an embodiment of the present invention.
  • the predefined list may comprise all words defined for any particular language that is being used by an implementation of the present invention.
  • Fig. 2 is a block diagram illustrating two predefined lists in tabular format according to an embodiment of the present invention.
  • Table 102 presents an alphabetically ordered predefined list of cities in the states of California and Nevada. The table collapses the entries in the list according to the first letter of the city name. The number of cities available for each letter in the alphabet is also presented. This number is referred to as the frequency. For example, there are 35 cities with names that begin with A, 44 cities with names that begin with B, 78 cities with names that begin with C, and so on through the alphabet.
  • Table 104 presents the same list ordered by frequency.
  • the most commonly used letters are at the top of the table.
  • the most frequently used first letter for cities within California and Nevada is the letter S, which corresponds to 94 entries in the predefined list.
  • the next most commonly used letter is the letter C, which corresponds to 78 entries, and so on.
  • the least commonly used first letters of cities in this example are the letters Q and X, each having zero list entries.
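The frequency ordering behind tables 102 and 104 amounts to a simple count over the predefined list. The sketch below uses a small made-up subset of the city list, and `first_letter_frequencies` is an illustrative name:

```python
from collections import Counter

def first_letter_frequencies(items):
    """Count how many entries begin with each letter and return the
    (letter, frequency) pairs, most frequent first (as in table 104)."""
    counts = Counter(name[0].upper() for name in items)
    return counts.most_common()

# Tiny illustrative subset of the California/Nevada city list.
cities = ["San Diego", "San Jose", "Sacramento", "Carson City",
          "Concord", "Los Angeles", "Las Vegas", "Modesto"]
print(first_letter_frequencies(cities))
# [('S', 3), ('C', 2), ('L', 2), ('M', 1)]
```

On the full predefined list the same count yields the values quoted above (S: 94, C: 78, ...), and letters with zero entries, such as Q and X, simply never appear.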
  • FIGs. 3A - 3C are block diagrams illustrating example selection list arrangements according to various embodiments of the present invention. As previously stated, in these examples, it is assumed that the application program is expecting the user to input an item from the predefined list of city names from California and Nevada. Additionally, it is assumed that a data entry mechanism (not shown) is used to provide input.
  • a shuttle control system can be used.
  • a shuttle key can be used to scroll through a list of predefined words, phrases and/or alphanumeric characters.
  • the desired data item appears on the display, the user can select the displayed item by pressing an enter key.
  • the scrolling feature is implemented by using a cursor control that highlights one character at a time when positioned over the character. This is referred to as the current cursor position.
  • shuttle control systems are implemented as a single joystick-like central key that can be pivoted in an up, down, right or left direction.
  • the shuttle key is used to scroll data or control a cursor in accordance with the direction the shuttle key is pressed. For example, a user can scroll through the alphabet in an ascending or descending order, depending on whether the shuttle key is moved to the right or the left position (or the up or the down position).
  • the shuttle key can function as the enter key in addition to the directional key. For example, in some systems the enter function is implemented by pressing down on the shuttle key, rather than moving it in one of the four directions as described above.
  • Other shuttle control systems may have more than one key and different arrangements for the keys. For example, one common arrangement uses four separate keys that are organized in a pattern to represent the four directions of up, down, right and left. Typically, a key that functions as the enter key is placed in the center of the four directional keys. Another common arrangement may use an up key, a down key, and an enter key situated between the up and down keys.
  • Fig. 3A is a block diagram illustrating an example selection list arrangement according to an embodiment of the present invention.
  • input control system 106 comprises a display area 108 and a selection list 110.
  • Display area 108 is preferably used to present the selections made from the selection list 110. Additionally, display area 108 may present the highlighted character corresponding to the current cursor position.
  • the cursor in Fig. 3A may be positioned over the letter S.
  • the user can alter the current cursor position by moving the cursor in either the right or left direction by pressing the shuttle key (not shown) in the appropriate direction.
  • Selection list 110 preferably contains only valid choices. In this fashion, the user is prevented from having to look at and scroll through letters that are not valid input. For example, the letters Q and X do not appear in the selection list 110, thereby preventing erroneous input. In addition, referring back to table 104 in Fig. 2, the letters that are most frequently used appear at the beginning of the selection list 110. Advantageously, this arrangement of selection list 110 statistically decreases the number of data entry actions required to select the desired letter.
  • the most common letters appear closer to the beginning of the selection list 110. In this fashion, the keystrokes required to select letters are statistically reduced.
  • the user selects the letter S.
  • the letter S appears as the first entry in the selection list 110 because it is the most common first letter in the predefined list. Consequently, no cursor movement or scrolling is required to select this letter. In fact, because the current cursor position defaults to the first letter in the list 110 in this example, the user may simply press the enter key to select the letter S.
  • Fig. 3B is a block diagram illustrating an example selection list arrangement according to an embodiment of the present invention.
  • input control system 106 comprises display area 108 and selection list 112.
  • selection list 112 is arranged such that the most common letter S is in the center of the list. In this example, as in all of the examples presented herein, the default cursor position is coincident with the most commonly used character, which in this case is the letter S.
  • the second most common letter C is one cursor position to the right of S.
  • the third most common letter L is one cursor position to the left of S.
  • the fourth and fifth most common letters M and P are placed two cursor positions, to the right and left respectively, from the letter S. This pattern is repeated for the remaining letters as shown in selection list 112.
  • fewer keystrokes are required. For example, to select the second or third most common letters, only one cursor keystroke is required. Likewise, to select the fourth or fifth most common letters, only two cursor keystrokes are required. Thus, fewer keystrokes are required using the arrangement presented in Fig. 3B than the arrangement presented in Fig. 3A.
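The center-out arrangement of Fig. 3B can be reconstructed by placing each successive letter alternately to the right and left of the most frequent one. The sketch below is an illustrative reading of that placement rule, not code from the patent:

```python
from collections import deque

def center_out(letters_by_frequency):
    """Arrange letters so the most frequent sits in the center, the second
    most frequent one position to its right, the third one position to its
    left, and so on outward (as described for selection list 112)."""
    arrangement = deque()
    for i, letter in enumerate(letters_by_frequency):
        if i == 0:
            arrangement.append(letter)       # most frequent: the center
        elif i % 2 == 1:
            arrangement.append(letter)       # 2nd, 4th, ...: to the right
        else:
            arrangement.appendleft(letter)   # 3rd, 5th, ...: to the left
    return list(arrangement)

# Using the five letters named in the example (S, C, L, M, P, by frequency):
print(center_out(["S", "C", "L", "M", "P"]))  # ['P', 'L', 'S', 'C', 'M']
```

With the default cursor on S, selecting C or L costs one keystroke and M or P two, matching the keystroke counts described above.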
  • Fig. 3C is a block diagram illustrating an example selection list arrangement according to an embodiment ofthe present invention.
  • input control system 106 comprises display area 108 and selection list 114.
  • selection list 114 can be employed with a four-direction shuttle control system (not shown) as the data entry mechanism.
  • the most common letter S appears in the middle of the arrangement in selection list 114.
  • the second, third, fourth, and fifth most common letters are positioned only one cursor keystroke away. For example, the next most common letters appear up, down, right, and left, respectively. This pattern is repeated for the remaining characters as shown by selection list 114.
  • this third example requires fewer keystrokes than the previous two examples, but also requires the most visual acuity and the use of at least four keys, or four directions on a single shuttle key. Quite obviously, there can be many variations to the arrangements shown in the examples above. As such, it is again noted that these examples are used for the limited purpose of describing in detail how to make and use the present invention, and should therefore not be construed to be limiting in any way.
  • FIGs. 4 - 10 describe the data entry process for selecting a city in California or Nevada.
  • the selection list used throughout Figs. 4 - 10 is of the variety presented in Fig. 3A. It should be noted that for the purposes of this example, the user has already selected the letter S as the first letter of the city in California or Nevada.
  • Figs. 4A - 4C are block diagrams illustrating example predefined lists associated with contextual data entry for an input control system according to an embodiment ofthe present invention.
  • Table 202 in Fig. 4A and table 204 in Fig. 4B represent a point in time where the intelligent device (e.g. auto PC) is now expecting the second character in the city name. Accordingly, the predefined list of cities in California and Nevada is processed to determine a list of possible second letters and their associated frequencies. The results are shown in tables 202 and 204, where table 202 is shown in alphabetical order and table 204 is sorted by frequency.
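The contextual narrowing shown in tables 202 and 204 amounts to a prefix filter over the predefined list followed by a frequency count of the next letter. The helper name and the small sample list below are illustrative, not from the patent:

```python
from collections import Counter

def next_letter_frequencies(items, prefix):
    """Return the valid next letters after `prefix`, most frequent first,
    as in the frequency-sorted table 204."""
    n = len(prefix)
    counts = Counter(
        name[n].upper()
        for name in items
        if name.upper().startswith(prefix.upper()) and len(name) > n
    )
    return counts.most_common()

# Tiny illustrative subset of the predefined city list.
cities = ["SAN DIEGO", "SAN JOSE", "SACRAMENTO", "SANTA ANA", "STOCKTON"]
print(next_letter_frequencies(cities, "SA"))  # [('N', 3), ('C', 1)]
```

Only letters that actually continue some list entry appear, so invalid characters (the analog of Q and X earlier) are excluded automatically.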
  • FIGs. 5A - 5C are block diagrams illustrating example predefined lists associated with contextual data entry for an input control system according to an embodiment ofthe present invention.
  • Table 302 in Fig. 5A and table 304 in Fig. 5B represent a point in time where the intelligent device (e.g. auto PC) is now expecting the third character in the city name. Accordingly, the predefined list of cities in California and Nevada is processed to determine a list of possible third letters and their associated frequencies. The results are shown in tables 302 and 304, where table 302 is shown in alphabetical order and table 304 is sorted by frequency.
  • FIGs. 6A - 6C are block diagrams illustrating example predefined lists associated with contextual data entry for an input control system according to an embodiment ofthe present invention.
  • Table 402 in Fig. 6A and table 404 in Fig. 6B represent a point in time where the intelligent device (e.g. auto PC) is now expecting the fourth character in the city name. Accordingly, the predefined list of cities in California and Nevada is processed to determine a list of possible fourth letters and their associated frequencies. The results are shown in tables 402 and 404, where table 402 is shown in alphabetical order and table 404 is sorted by frequency.
  • Figs. 7A - 7C are block diagrams illustrating example predefined lists associated with contextual data entry for an input control system according to an embodiment ofthe present invention.
  • Table 502 in Fig. 7A and table 504 in Fig. 7B represent a point in time where the intelligent device (e.g. auto PC) is now expecting the fifth character in the city name.
  • the predefined list of cities in California and Nevada is processed to determine a list of possible fifth letters and their associated frequencies.
  • the results are shown in tables 502 and 504, where table 502 is shown in alphabetical order and table 504 is sorted by frequency.
  • the user is next prompted for a fifth character following "SAN_". This character can be entered through input control system 506, which comprises display area 508 and selection list 510, as illustrated by Fig. 7C.
  • as shown in selection list 510, the most common character following the letters "SAN_" is the letter J.
  • the letter D is selected by scrolling the cursor over to the letter D and pressing enter.
  • Figs. 8A - 8C are block diagrams illustrating example predefined lists associated with contextual data entry for an input control system according to an embodiment of the present invention.
  • Table 602 in Fig. 8A and table 604 in Fig. 8B represent a point in time where the intelligent device (e.g. auto PC) is now expecting the sixth character in the city name. Accordingly, the predefined list of cities in California and Nevada is processed to determine a list of possible sixth letters and their associated frequencies. The results are shown in tables 602 and 604, where table 602 is shown in alphabetical order and table 604 is sorted by frequency.
  • the user is next prompted for a sixth character following "SAN_D". This character can be entered through input control system 606, which comprises display area 608 and selection list 610, as illustrated by Fig. 8C.
  • as shown in selection list 610, the only valid character following the letters "SAN_D" is the letter I.
  • the letter I is selected by pressing the enter key.
  • the letter I may be automatically selected since it is the only valid character.
  • auto selection reduces even further the number of required keystrokes.
  • Figs. 9A - 9C are block diagrams illustrating example predefined lists associated with contextual data entry for an input control system according to an embodiment ofthe present invention.
  • Table 702 in Fig. 9A and table 704 in Fig. 9B represent a point in time where the intelligent device (e.g. auto PC) is now expecting the seventh character in the city name. Accordingly, the predefined list of cities in California and Nevada is processed to determine a list of possible seventh letters and their associated frequencies. The results are shown in tables 702 and 704, where table 702 is shown in alphabetical order and table 704 is sorted by frequency.
  • the user is next prompted for a seventh character following "SAN_DI".
  • This character can be entered through input control system 706, which comprises display area 708 and selection list 710, as illustrated in Fig. 9C.
  • as shown in selection list 710, the most common characters following the letters "SAN_DI" are the letters E and M. Since both valid characters have equal frequency, they are presented in alphabetical order. Other arrangements for characters with equal frequency may also be employed. In this example, the letter E is selected.
  • at this point, only one entry in the predefined list begins with "SAN_DIE", namely SAN_DIEGO. Accordingly, the full name of the city is presented in display area 808 of input control system 806, as shown in Fig. 10.
  • the city name SAN_DIEGO may be automatically selected and presented in display area 808.
  • input control system 806 may require that the enter key be pressed to verify that the city name is correct.
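The auto-completion behavior described here reduces to a uniqueness check over the remaining matches: once exactly one entry fits the typed prefix, it can be presented in full. This sketch uses an assumed helper name and a tiny sample list:

```python
def auto_complete(items, prefix):
    """Return the full entry if exactly one item matches `prefix`,
    otherwise None (the user must keep entering characters)."""
    matches = [name for name in items if name.startswith(prefix)]
    return matches[0] if len(matches) == 1 else None

# Illustrative subset of the predefined city list.
cities = ["SAN_DIEGO", "SAN_DIMAS", "SAN_FRANCISCO"]
print(auto_complete(cities, "SAN_DI"))   # None  (two candidates remain)
print(auto_complete(cities, "SAN_DIE"))  # SAN_DIEGO (unique match)
```

As the text notes, an implementation may still require an enter-key press to verify the auto-completed name rather than accepting it silently.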
  • Fig. 11 is a block diagram illustrating a conventional standard telephone keypad 902.
  • the letters associated with each key in the keypad can advantageously be dynamically arranged in accordance with their corresponding frequency list, as described above.
  • the keys on a standard telephone keypad can be assigned relative frequencies such as in Fig. 12, which is a block diagram illustrating an example frequency mapping for a standard telephone keypad 904 according to an embodiment ofthe present invention.
  • a standard telephone keypad can be used in combination with a display that shows the relationship between the physical keys and the associated characters.
  • the display can be used as a guide to determine which key to press.
  • the keypad can be a virtual keypad that is presented on a touch screen display or the like. In this fashion, the input control system can dynamically change the key labels on the touch screen display.
  • a special keypad can be used in which the labels can be dynamically altered via software control. Regardless of the technology used to implement the labeling of the keys on the keypad, the principles in the following example apply.
  • the standard keypad 902 has a number printed on each key. In addition, zero to four letters are printed under each number.
  • a common technique used for inputting alphanumeric data via a standard telephone keypad 902 is to make use of the alphanumeric information printed on the keys.
  • This data is used to represent the specific characters that can be input via each key.
  • the 2 key is used to enter any of the characters printed on the 2 key, namely the characters A, B, C, and 2.
  • the actual character that is input into the device depends on the number of times the particular key is successively pressed. For example, pressing the 2 key once results in inputting the letter A. Pressing the 2 key twice in succession results in entering the letter B.
  • the characters C and 2 are input by pressing the 2 key, three and four times in succession, respectively.
  • Fig. 12 illustrates the frequency mapping for keypad 904 that is used in the example below.
  • the most common characters are associated with position 1, the second most common characters with position 2, and so on.
  • many alternative frequency mappings may be employed.
  • the first three groupings can advantageously be placed within a single horizontal row that defines a home position for a human hand. In this fashion, the nine most commonly used characters can be entered without having to move from the home position.
  • Fig. 13 A is a block diagram illustrating an example keypad 1002 showing the frequency mapping of valid characters.
  • the user is being prompted to enter the first letter of a city from the predefined list of cities in California and Nevada, as described above.
  • the three most commonly used first letters are S, C, and L, in that order.
  • the letters S, C, and L are placed in that order under the 5 key, which represents the highest frequency position as shown in keypad 904 in Fig. 12.
  • the S character can be entered by pressing the 5 key a single time.
  • the C character can be entered by pressing the 5 key twice, and the L character can be entered by pressing the 5 key three times.
  • the remaining letters are arranged on keypad 1002 in accordance with their relative frequencies and according to the frequency mapping (e.g. Fig. 12).
  • the M, N, and O characters are placed under the 6 key, the R, W, and A characters are placed under the 4 key, and so on as shown on keypad 1002.
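The frequency-driven placement of letters on keys can be illustrated with a short sketch. The key priority order below is an assumption modeled on the Fig. 12 idea (home-row keys receive the most frequent characters first); only the general technique of ranking valid first letters by frequency is taken from the text.

```python
from collections import Counter

# Assumed key priority inspired by Fig. 12: most frequent characters go to
# the highest-priority key, three characters per key.
KEY_PRIORITY = ["5", "6", "4", "8", "2", "9", "7", "3", "1"]

def assign_letters(valid_items):
    """Map keys to groups of up to three first letters, most frequent first."""
    freq = Counter(item[0] for item in valid_items)
    ranked = [ch for ch, _ in freq.most_common()]
    mapping = {}
    for i, key in enumerate(KEY_PRIORITY):
        group = ranked[i * 3:(i + 1) * 3]
        if group:
            mapping[key] = group
    return mapping
```

With a city list in which S, C, and L are the most common first letters, the 5 key would receive the group S, C, L, matching the arrangement of keypad 1002.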
  • when the letter S is to be entered as the first letter in the input stream, it can be entered by pressing the 5 key a single time.
  • the selected character is presented in display area 1003.
  • Fig. 13B is a block diagram illustrating an example keypad 1004 having a frequency mapping of valid characters after the letter S.
  • the letter A is selected by pressing the 5 key a single time.
  • the selected character is presented in display area 1005.
  • Fig. 13C is a block diagram illustrating an example keypad 1006 having a frequency mapping of valid characters after the letters SA.
  • the letter N is selected by pressing the 5 key a single time.
  • the selected character is presented in display area 1007.
  • Fig. 13D is a block diagram illustrating an example keypad 1008 having a frequency mapping of valid characters after the letters SAN.
  • the space character is selected by pressing the 5 key a single time.
  • the selected character is presented in display area 1009.
  • Fig. 13E is a block diagram illustrating an example keypad 1010 having a frequency mapping of valid characters after the characters SAN_.
  • the letter D is selected by pressing the 4 key a single time.
  • the selected character is presented in display area 1011.
  • Fig. 13F is a block diagram illustrating an example keypad 1012 having a frequency mapping of valid characters after the characters SAN_D.
  • the letter I is selected by pressing the 5 key a single time.
  • the letter I may be automatically selected by the input control system since it is the only valid selection.
  • the selected character is presented in display area 1013.
  • Fig. 13G is a block diagram illustrating an example keypad 1014 having a frequency mapping of valid characters after the characters SAN_DI.
  • the letter E is selected by pressing the 5 key a single time.
  • the selected character is presented in display area 1015.
  • Fig. 13H is a block diagram illustrating an example keypad 1016 having a frequency mapping of valid characters for cities in California and Nevada.
  • Presented in display area 1017 is the completed city name SAN_DIEGO.
  • because SAN_DIEGO is the only valid item in the list that matches the input stream SAN_DIE, the city name can be automatically completed and presented in display area 1017.
  • the completed name may be presented and the user may be required to press the enter key to acknowledge that the name is correct.
  • Fig. 14 is a flowchart illustrating an example process for dynamic key assignment according to an embodiment of the present invention.
  • the process begins with step 1100.
  • the system determines an applicable predefined list to be used with the current input request, as illustrated in step 1102. For instance, using the examples described above, it is assumed that the system is expecting input from the user in the form of a city from California or Nevada. It should be noted that the system may dynamically generate the predefined list or the predefined list may be saved in a data storage area.
  • predefined lists may be accessible to the system through a communications mechanism. For example, an auto PC may have a wireless link to a network that stores a variety of lists. In one embodiment, predefined lists may be obtained from a data warehouse on the Internet. Typically, a predefined list may comprise a subset of a larger predefined list.
  • the predefined list can be reduced in size to be as small as possible in order to maximize the benefits of the present invention.
  • the system may generate a predefined list based on user interaction, whenever possible. For example, the system may prompt the user to indicate a particular geographical region of interest. In the examples described above, the particular geographical region of interest selected by the user would comprise the states of California and Nevada. Accordingly, in step 1102, the system may dynamically generate a subset of a worldwide geographical database. Preferably, in this example, the subset comprises the names of cities within the states of California and Nevada.
  • the size of the predefined list is limited only by the storage capacity of the computer system or device used to implement the present invention.
  • for network-centric databases (e.g. web sites on the Internet), the size of the predefined list is unlimited. Therefore, it is conceivable that an entire language dictionary can be used as a predefined list. In this fashion, generic systems having no way to anticipate user input, such as word processors and the like, can benefit from the advantages of the present invention.
  • in step 1104, the predefined list is processed to determine a set of valid first characters. This can be accomplished using standard well-known database searching techniques. The first time this step is performed, the first letters of the words within the predefined list are preferably stored in memory. An example of a table illustrating the results of this step is shown as table 102 in Fig. 2.
  • in step 1106, the frequency of each valid character is determined.
  • the frequency may be determined by summing the number of occurrences of each character that is the first character of an item in the predefined list.
  • in step 1108, the list of valid characters is sorted according to their relative frequencies.
  • An example of a table illustrating the results of this step is shown as table 104 in Fig. 2.
  • in step 1110, the valid set of first characters is displayed as a selection list on the interface control system such that the characters having greater frequencies are selectable with fewer keystrokes than the characters having lower frequencies.
  • a selection list is shown in Fig. 3A.
  • Alternative arrangements for selection lists are presented in Figs. 3B and 3C.
  • in step 1112, the system reads and displays the selected character.
  • the character can be selected by the user, or the character can be automatically selected by the system.
  • the process can automatically select a character whenever the list of valid next characters comprises only a single entry. This condition is illustrated in Fig. 8, where the letter I is the only valid character that can be selected after the input string SAN_D is entered. Accordingly, instead of presenting the user with a single selectable character in the selection list as shown in Fig. 8, the system may automatically select the character for the user and then immediately display the input control system as shown in Fig. 9. In this fashion, additional keystrokes are advantageously eliminated.
  • in step 1114, the process queries the predefined list for the next set of valid characters. This step is similar to step 1104 in that it determines the set of valid characters from the items in the predefined list. Once the next set of valid characters is obtained, the system determines if the set of valid characters comprises a single entry, as shown in step 1116. If only a single entry is found, the condition described above is detected and the system can automatically select the single valid character, as illustrated by step 1118. After the character is automatically selected in this fashion, control passes back to step 1112, where the selected character is read and displayed.
  • next, the system determines if the number of valid characters in the set is equal to zero, as shown in step 1120. If the set of valid characters contains no entries, then the input process is complete, as indicated by step 1122. If the number of valid characters in the set is not equal to zero, control passes back to step 1106, where the process repeats as described above with respect to the next set of valid characters. The process preferably repeats for each set of valid next characters until a unique item in the predefined list is identified.
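The loop of Fig. 14 can be summarized in a minimal Python sketch. The helper names are invented for illustration, and user selections are assumed to arrive as single characters; the auto-selection and termination conditions follow steps 1112 through 1122 as described above.

```python
# A sketch of the Fig. 14 loop (steps 1104-1122); `items` holds the entries
# of the predefined list.
def valid_next_chars(items, prefix):
    """Determine the set of valid next characters for the current prefix."""
    return {item[len(prefix)] for item in items
            if item.startswith(prefix) and len(item) > len(prefix)}

def enter_item(items, selections):
    """Consume user selections, auto-selecting whenever only one valid next
    character remains (steps 1116-1118), until a unique item is built."""
    prefix = ""
    pending = iter(selections)
    while True:
        nxt = valid_next_chars(items, prefix)
        if not nxt:                # steps 1120-1122: input is complete
            return prefix
        if len(nxt) == 1:          # step 1118: auto-select the only option
            prefix += nxt.pop()
        else:                      # step 1112: read the user's selection
            prefix += next(pending)
```

For the list containing SAN DIEGO, SAN JOSE, and SACRAMENTO, only the selections N and D need to be supplied by the user; every other character is auto-selected, mirroring the keystroke savings described above.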
  • Fig. 15 is a block diagram illustrating a table 1202 containing predefined lists 1204, 1206, and 1208.
  • list 1204 may contain letters of an alphabet
  • list 1206 may contain a mapping, or index to the corresponding letter
  • list 1208 may contain the voice command for the corresponding letter and mapping.
  • the letters of a defined alphabet are presented.
  • the English alphabet could be a set of available characters in a data input system.
  • Other potential sets of letters or characters may also define an alphabet.
  • the second column may contain a mapping or index to the letter according to, for example, a standard 3x3 matrix telephonic keypad as previously shown and described with respect to Fig. 11.
  • the "2" key may contain the letters "A” "B” and "C.”
  • the "2" key can be pushed a certain number of times. For example, to identify the letter “A” the “2” key can be pushed just once. Similarly, to identify the letter “B” the “2” key can be pushed twice, and to identify the letter “C” the “2” key can be pushed three times.
  • speech recognition systems are well suited to distinguish between the voice commands representing the single digit numerals, for example: “ONE”, “TWO”, “THREE”, “FOUR”, “FIVE”, “SIX”, “SEVEN”, “EIGHT”, “NINE”, and “ZERO”. Therefore, the voice commands of column 1208 corresponding to the mappings in column 1206 can provide more clearly enunciable and distinguishable voice commands to identify a desired letter in a voice data input system.
  • such a voice input system can be especially useful when the device being used is a phone-type data entry device with a predefined location for letters and numbers, such as a standard telephone keypad. For example, following the alphabet and mappings presented in table 1202, to enter the letter “C” a user may simply speak the voice command “TWO” followed by the voice command "THREE” rather than pressing on the "2" key three times as available in current phone devices.
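The two-command selection described above (a key digit followed by a position digit) can be sketched directly. The dictionary names below are invented for illustration; the keypad layout is the standard one of Fig. 11, and the digit vocabulary is the set of enunciable single-digit commands listed above.

```python
# Voice selection of a letter by speaking a key digit, then a position digit.
DIGIT_WORDS = {"ONE": 1, "TWO": 2, "THREE": 3, "FOUR": 4, "FIVE": 5,
               "SIX": 6, "SEVEN": 7, "EIGHT": 8, "NINE": 9, "ZERO": 0}
KEYPAD = {2: "ABC", 3: "DEF", 4: "GHI", 5: "JKL",
          6: "MNO", 7: "PQRS", 8: "TUV", 9: "WXYZ"}

def letter_from_commands(key_word, pos_word):
    """Return the letter selected by speaking a key digit and a position."""
    key = DIGIT_WORDS[key_word]
    pos = DIGIT_WORDS[pos_word]
    return KEYPAD[key][pos - 1]
```

Speaking “TWO” followed by “THREE” thus selects the letter “C,” as in the example from table 1202.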
  • Figs. 16A - H are block diagrams illustrating example dynamic mappings of characters on a keypad.
  • the letters associated with a 3x3 matrix telephonic keypad can be selected through the use of voice commands (in addition to key presses, as previously described).
  • the letter “S” can be selected according to the dynamic keypad mapping by speaking the command: “FIVE” "ONE.”
  • the most common next letter can always be placed in the same location for ease of entry.
  • the next letter "A” can also be selected by speaking the command: "FIVE" "ONE.”
  • in Fig. 16G, the command “FIVE” "ONE” can be used to select the next letter "E.”
  • the remaining letters in the entry can be filled into the display 1250, as shown in Fig. 16H.
  • the voice command “YES” or the voice command “NO” may be used to indicate if the correct completed entry is shown on the display 1250.
  • the voice commands "UPPER” and “LOWER” can be used to distinguish between upper and lower case characters. Alternative voice commands may also be employed to indicate case.
  • Figs. 17A - H are block diagrams illustrating example predefined lists associated with contextual voice data entry for an input control system.
  • the format of the display presented in Figs. 17A - H may be more suitable to a car PC or some other type of device with wide, but not deep, presentation capabilities.
  • a subset of the available columns may be presented on the display when the user has the ability to scroll the list from side to side. In this fashion, a smaller width display can be used to present the full set of characters available for selection.
  • the characters or letters of an alphabet can be presented on a display using a 1x9 tabular format such as format 1302 in Fig. 17A.
  • the tabular format can also be longer or shorter depending on the length of the alphabet or character set to be displayed.
  • such a format may be more suitable for visual searching.
  • selected letters can be presented in a display such as display 1304.
  • the voice command issued to select the letter is presented in command box 1306. In this case, the voice command "ONE" "ONE" is used to select the letter "S.”
  • the linear display presented in Figs. 17A - H can be shortened according to how many letters are available for selection as the next letter. For example, when selecting the first letter in Fig. 17A, there are 24 available letters. In Fig. 17B, only 11 letters are available, requiring only four columns. In one embodiment, this may eliminate the need to scroll the list of available letters from side to side. For example, in Figs. 17C and 17D, the limited number of characters may be completely presented in a single screen.
  • the system may advantageously automatically complete the entry, as shown in Fig. 17H.
  • the voice command "YES” or the voice command "NO” may be used to indicate if the correct completed entry is shown on the display 1346.
  • FIGs. 18A - H are block diagrams illustrating example predefined lists associated with contextual voice data entry for an input control system according to various embodiments of the present invention.
  • a user defined priority sequence may be used to present the available characters in a preferred format.
  • the most likely letters to be selected according to a dynamic mapping can be presented in a fashion that makes them easier to select, for example, by reducing the number of voice commands required.
  • in Fig. 18B, the available letters are distributed in format 1358 so that columns having more than one entry, and thereby requiring two voice commands, contain the least likely selections (according to the dynamic mapping).
  • the letter “A” may advantageously be selected with the single command "ONE.”
  • other formats for the presentation of the available letters (e.g. Fig. 17B) may require two voice commands to uniquely identify the desired selection.
  • Fig. 19 is a block diagram illustrating an example user interface display 1410 according to an embodiment of the present invention.
  • a shortened display area may be scrolled left and right in order to present the entire set of available characters.
  • a particular column 1412 may be selected, i.e. may be the active group.
  • Voice commands may be used to scroll the selected column or active group to the left or right.
  • the voice command "BACK” can be used to scroll the active group one column to the left
  • the voice command "NEXT" can be used to scroll the active group one column to the right.
  • Other voice commands may also be employed.
  • Fig. 20 is a block diagram illustrating an example dynamically defined conditional probability matrix Q according to an embodiment of the present invention.
  • This particular matrix Q contains an alphabet consisting of the first 9 characters of the English alphabet. Other alphabets or sets of available characters can be used. For example, a set of next available characters in a data input system may define an alphabet.
  • the matrix Q presents each character in the alphabet associated with each other character in the alphabet.
  • This language can be used as input to a speech engine.
  • the alphabet is not restricted to the letters in a given language.
  • an alphabet may comprise a set of letters (U) and a set of acceptable voice commands in a speech recognition system.
  • a language may then comprise the set of acceptable combinations of items in the alphabet.
  • the definition of the conditional probabilities of a letter in the set U of a voice recognition system is presented below.
  • P(Ax/Ay) is defined as the probability of the letter Ax being selected by the speech engine given that the letter Ay was the true and actual input to the system.
  • the probability employed may be any suitable measure reflecting the reliability of a speech recognition system and its ability to distinguish between A x and A y .
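A toy confusability matrix makes this definition concrete. The probabilities below are invented purely for illustration; in a real system they would be measured from the speech engine's recognition behavior, and the threshold test is one possible "suitable measure" in the sense described above.

```python
# Toy matrix Q over three letters, where Q[x][y] approximates
# P(x recognized | y spoken). Values are illustrative only.
Q = {
    "B": {"B": 0.70, "D": 0.20, "F": 0.10},
    "D": {"B": 0.25, "D": 0.65, "F": 0.10},
    "F": {"B": 0.05, "D": 0.05, "F": 0.90},
}

def similar(x, y, threshold=0.15):
    """Treat x and y as similar sounding if either direction of confusion
    meets the threshold."""
    return x != y and (Q[x][y] >= threshold or Q[y][x] >= threshold)
```

Under these illustrative numbers, “B” and “D” would be flagged as similar sounding, while “B” and “F” would not.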
  • Fig. 21 is a block diagram illustrating an example dynamically defined conditional probability matrix according to an embodiment of the present invention.
  • the matrix Q represents property (e) described above.
  • the letter “A” sounds similar to the letters “A” and “H.” Therefore, according to property (d), the letter “A” will not be grouped in a cluster with another letter “A” or with the letter “H.”
  • Fig. 22 is a block diagram illustrating an example dynamically defined cluster table according to an embodiment of the present invention.
  • the cluster table contains the number of clusters required for each row in the matrix Q.
  • each row in the matrix Q may correspond to a unique letter in the alphabet U.
  • the highest number of required clusters for the letters in matrix Q is 5.
  • Fig. 23 is a block diagram illustrating an example dynamically defined character mapping according to an embodiment of the present invention.
  • in accordance with the cluster table in Fig. 22, there are 5 separate clusters in Fig. 23.
  • Each of the letters of alphabet U is included in a cluster, and each cluster comprises letters that are dissimilar in sound.
  • this arrangement by cluster may improve the efficiency of a speech recognition system.
  • the remaining letters, in this case “A,” “F,” “H,” and “I,” can be distributed amongst the clusters such that they do not double up with a similar sounding letter.
  • Various distribution techniques or strategies may be employed for placing the remaining letters. For example, if the letters are to be displayed on a standard telephonic keypad, they may be concentrated toward the center of the keypad for easy recognition. Alternatively, they may be spread out amongst the keys of the keypad to minimize the number of voice commands required to identify a letter.
  • the letters may be displayed in a 1x9 matrix, or a smaller single row matrix.
  • the remaining characters may be placed in a fashion that maximizes the priority of the letters.
  • each cluster may be selectable as the active group and the letters within the cluster may be arranged according to priority.
  • the remaining letters may be combined with the letters of the 5 clusters according to priority in order to increase the efficiency of the presentation and the necessary voice commands required to select a particular letter.
  • Fig. 24 is a flowchart illustrating an example process for distinguishing between similarly sounding letters in a voice recognition system according to an embodiment of the present invention.
  • the matrix Q is constructed, as illustrated in step 1420.
  • the entries in the matrix Q are dependent on the characters available in the alphabet or the commands available in a speech recognition system, or both.
  • the probability associated with each entry in the matrix is dependent upon the particular voice recognition system in use.
  • if the matrix Q is constructed of all of the characters in the alphabet, then the unused characters for the particular dynamic mapping can be eliminated. Accordingly, in step 1422, the rows and columns corresponding to the unused characters can be masked. Advantageously, this can reduce the size of the matrix Q and increase the operational speed of the system.
  • in step 1424, the number of similar sounding letters per row is determined. This determination may be made based upon the probability condition and whether the probability of success for the voice recognition system meets a predefined threshold, as previously described. Once the number of similar sounding letters has been determined for each row, the number of clusters is determined, as illustrated in step 1426. The number of clusters is preferably equal to the highest number of similar sounding letters across all of the rows in the matrix Q.
  • the clusters are preferably initialized, as shown in step 1428.
  • the clusters are initialized with the letters identified in the row with the highest number of similar sounding letters. Staying with the above example, if the “G” row had the highest number of similar sounding letters, then the 5 clusters would be initialized with the letters “B,” “C,” “D,” “E,” and “G.”
  • the remaining letters in the alphabet are assigned to the clusters.
  • Various methods for assigning the remaining letters may be employed to maximize efficiency of the overall system. These various methods may be selected based on the type of display being used by the system, as described in alternative embodiments above. The assignment methods may also be selected based on user preferences or the overall and relative priority ofthe letters.
  • the cluster selection preferably determines if any character already contained in the cluster is similar sounding to the character being placed. If so, then the character can be placed in a different cluster.
  • the number of available clusters for a particular character should not be reduced to zero by virtue of similar sounding characters already being resident in the cluster. This is due to step 1426, whereby the number of clusters is selected based on the maximum number of similar sounding characters across all of the rows in the matrix Q.
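The clustering steps just described can be sketched under the assumption that a boolean predicate similar(a, b) implements the probability-threshold test described above. The "first cluster that fits" placement used here is only one of the distribution strategies mentioned in the text, and all names are illustrative.

```python
def build_clusters(letters, similar):
    """Sketch of steps 1424-1430: seed one cluster per letter in the most
    confusable row, then place each remaining letter in a cluster holding
    no similar sounding member."""
    # Step 1424/1426: count similar letters per row; the largest row sets
    # the number of clusters.
    rows = {a: [b for b in letters if a == b or similar(a, b)]
            for a in letters}
    seed = max(rows.values(), key=len)
    clusters = [[ch] for ch in seed]  # step 1428: initialize the clusters
    for ch in letters:
        if ch in seed:
            continue
        # Step 1430: place the letter in the first cluster with no similar
        # sounding member (step 1426 guarantees one always exists).
        target = next(c for c in clusters
                      if not any(similar(ch, member) for member in c))
        target.append(ch)
    return clusters
```

Each resulting cluster then contains only letters that are dissimilar in sound, which is the property the arrangement in Fig. 23 is meant to achieve.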
  • Additional methods for assigning the remaining letters to the various clusters may include the application of Bayes' rule to compare the maximum likelihood of separation in a cluster. Another option may be to employ a neural network that is optimized for the classification of items in a list, such as the characters in an alphabet.
  • the letters within each cluster can advantageously be arranged according to priority. This arrangement may also be dependent on the type of display being used in the particular voice recognition system.
EP02768978A 2001-10-04 2002-10-03 System und verfahren zur dynamischen schlüsselzuweisung in einer erweiterten benutzerschnittstelle Withdrawn EP1472596A4 (de)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US971905 1978-12-21
US09/971,905 US7152213B2 (en) 2001-10-04 2001-10-04 System and method for dynamic key assignment in enhanced user interface
PCT/US2002/031761 WO2003029952A1 (en) 2001-10-04 2002-10-03 System and method for dynamic key assignment in enhanced user interface

Publications (2)

Publication Number Publication Date
EP1472596A1 true EP1472596A1 (de) 2004-11-03
EP1472596A4 EP1472596A4 (de) 2007-07-04

Family

ID=25518946

Family Applications (1)

Application Number Title Priority Date Filing Date
EP02768978A Withdrawn EP1472596A4 (de) 2001-10-04 2002-10-03 System und verfahren zur dynamischen schlüsselzuweisung in einer erweiterten benutzerschnittstelle

Country Status (3)

Country Link
US (2) US7152213B2 (de)
EP (1) EP1472596A4 (de)
WO (1) WO2003029952A1 (de)

Families Citing this family (93)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7406084B2 (en) * 1997-09-19 2008-07-29 Nokia Siemens Networks Gmbh & Co. Kg Flexible software architecture for a call processing system
US8938688B2 (en) 1998-12-04 2015-01-20 Nuance Communications, Inc. Contextual prediction of user words and user actions
US7720682B2 (en) * 1998-12-04 2010-05-18 Tegic Communications, Inc. Method and apparatus utilizing voice input to resolve ambiguous manually entered text input
US7881936B2 (en) 1998-12-04 2011-02-01 Tegic Communications, Inc. Multimodal disambiguation of speech recognition
US7712053B2 (en) 1998-12-04 2010-05-04 Tegic Communications, Inc. Explicit character filtering of ambiguous text entry
US7679534B2 (en) 1998-12-04 2010-03-16 Tegic Communications, Inc. Contextual prediction of user words and user actions
US6885317B1 (en) 1998-12-10 2005-04-26 Eatoni Ergonomics, Inc. Touch-typable devices based on ambiguous codes and methods to design such devices
US6950994B2 (en) * 2000-08-31 2005-09-27 Yahoo! Inc. Data list transmutation and input mapping
EP1374225B1 (de) * 2001-03-29 2004-12-29 Philips Electronics N.V. Synchronisierung eines audio- und eines textcursors während der editierung
US20090040184A9 (en) * 2001-10-04 2009-02-12 Infogation Corporation Information entry mechanism
US20030074647A1 (en) * 2001-10-12 2003-04-17 Andrew Felix G.T.I. Automatic software input panel selection based on application program state
US7934236B1 (en) * 2002-01-30 2011-04-26 Lee Capital Llc Advanced navigation method for music players and video players
CA2479302A1 (en) * 2002-03-22 2003-10-02 Sony Ericsson Mobile Communications Ab Entering text into an electronic communications device
US8583440B2 (en) 2002-06-20 2013-11-12 Tegic Communications, Inc. Apparatus and method for providing visual indication of character ambiguity during text entry
WO2008080192A1 (en) 2007-01-03 2008-07-10 Kannuu Pty Ltd Process and apparatus for selecting an item from a database
JP2006523904A (ja) * 2003-04-18 2006-10-19 ガーサビアン、ベンジャミン、フィルーツ 移動体及び固定環境内でのデータ入力向上システム
US7130846B2 (en) * 2003-06-10 2006-10-31 Microsoft Corporation Intelligent default selection in an on-screen keyboard
US7266780B2 (en) * 2003-06-30 2007-09-04 Motorola, Inc. Method for combining deterministic and non-deterministic user interaction data input models
US20050015728A1 (en) * 2003-07-17 2005-01-20 International Business Machines Corporation Method, system, and program product for customizing a user interface
GB2433002A (en) * 2003-09-25 2007-06-06 Canon Europa Nv Processing of Text Data involving an Ambiguous Keyboard and Method thereof.
US20050141770A1 (en) * 2003-12-30 2005-06-30 Nokia Corporation Split on-screen keyboard
FI20031923A0 (fi) * 2003-12-30 2003-12-30 Nokia Corp Päätelaite, menetelmä ja tietokoneohjelma merkkijonon valitsemiseksi
US8103970B1 (en) * 2004-03-08 2012-01-24 Cellco Partnership Method and device for providing a multi-level user interface having a dynamic key assignment for a cellularly communicative device
US7555732B2 (en) * 2004-03-12 2009-06-30 Steven Van der Hoeven Apparatus method and system for a data entry interface
US7376938B1 (en) 2004-03-12 2008-05-20 Steven Van der Hoeven Method and system for disambiguation and predictive resolution
GB0405972D0 (en) * 2004-03-17 2004-04-21 Dibble Stuart S Modification of keyboard, typewriter format and layout,to word recognition capacity
US7379946B2 (en) 2004-03-31 2008-05-27 Dictaphone Corporation Categorization of information using natural language processing and predefined templates
JP4302568B2 (ja) * 2004-04-06 2009-07-29 本田技研工業株式会社 情報検索装置
US8504369B1 (en) 2004-06-02 2013-08-06 Nuance Communications, Inc. Multi-cursor transcription editing
US8095364B2 (en) 2004-06-02 2012-01-10 Tegic Communications, Inc. Multimodal disambiguation of speech recognition
GB2432704B (en) * 2004-07-30 2009-12-09 Dictaphone Corp A system and method for report level confidence
US20060036677A1 (en) * 2004-07-30 2006-02-16 Research In Motion Ltd. Method and system for coordinating input and output between a communications client and its host device
US7650628B2 (en) 2004-10-21 2010-01-19 Escription, Inc. Transcription data security
US7836412B1 (en) 2004-12-03 2010-11-16 Escription, Inc. Transcription editing
KR101191816B1 (ko) * 2004-12-07 2012-10-16 캐나다 지 주식회사 확대된 검색 특성을 갖는 사용자 인터페이스
US7613610B1 (en) * 2005-03-14 2009-11-03 Escription, Inc. Transcription data extraction
US20070016420A1 (en) * 2005-07-07 2007-01-18 International Business Machines Corporation Dictionary lookup for mobile devices using spelling recognition
CN101313271A (zh) * 2005-08-12 2008-11-26 勘努优有限公司 用于从数据库中选择条目的改进的方法和装置
US7788266B2 (en) 2005-08-26 2010-08-31 Veveo, Inc. Method and system for processing ambiguous, multi-term search queries
US7737999B2 (en) * 2005-08-26 2010-06-15 Veveo, Inc. User interface for visual cooperation between text input and display device
US8032372B1 (en) 2005-09-13 2011-10-04 Escription, Inc. Dictation selection
US7539472B2 (en) * 2005-09-13 2009-05-26 Microsoft Corporation Type-ahead keypad input for an input device
TWI313430B (en) * 2005-09-16 2009-08-11 Input method for touch screen
RU2008125130A (ru) * 2005-11-21 2009-12-27 Зи Корпорейшн Оф Канада, Инк. (Ca) Система и способ доставки информации для мобильных устройств
US7961903B2 (en) * 2006-01-25 2011-06-14 Microsoft Corporation Handwriting style data input via keys
US7657526B2 (en) 2006-03-06 2010-02-02 Veveo, Inc. Methods and systems for selecting and presenting content based on activity level spikes associated with the content
EP3822819A1 (de) 2006-04-20 2021-05-19 Veveo, Inc. Benutzerschnittstellenverfahren und systeme zur auswahl und darstellung von inhalt auf der basis von benutzernavigation sowie auswahlaktionen in zusammenhang mit dem inhalt
US8286071B1 (en) 2006-06-29 2012-10-09 Escription, Inc. Insertion of standard text in transcriptions
US7925986B2 (en) * 2006-10-06 2011-04-12 Veveo, Inc. Methods and systems for a linear character selection display interface for ambiguous text input
WO2008063987A2 (en) 2006-11-13 2008-05-29 Veveo, Inc. Method of and system for selecting and presenting content based on user identification
US7899670B1 (en) 2006-12-21 2011-03-01 Escription Inc. Server-based speech recognition
US8115658B2 (en) * 2006-12-29 2012-02-14 Research In Motion Limited Handheld electronic device providing confirmation of input, and associated method
US8719723B2 (en) * 2007-03-05 2014-05-06 Microsoft Corporation Displaying data sensitive targets
KR100883105B1 (ko) 2007-03-30 2009-02-11 삼성전자주식회사 휴대단말기에서 음성인식을 이용한 다이얼링 방법 및 장치
US7996781B2 (en) * 2007-04-04 2011-08-09 Vadim Zaliva List entry selection for electronic devices
US8305239B2 (en) * 2007-05-17 2012-11-06 Zi Corporation Of Canada, Inc. Service access method and apparatus
US8296294B2 (en) * 2007-05-25 2012-10-23 Veveo, Inc. Method and system for unified searching across and within multiple documents
US20080313574A1 (en) * 2007-05-25 2008-12-18 Veveo, Inc. System and method for search with reduced physical interaction requirements
US8549424B2 (en) * 2007-05-25 2013-10-01 Veveo, Inc. System and method for text disambiguation and context designation in incremental search
JP4433019B2 (ja) * 2007-09-03 2010-03-17 株式会社デンソー 単語入力支援装置および単語入力支援装置用のプログラム
JP5058724B2 (ja) * 2007-09-05 2012-10-24 イーストマン コダック カンパニー 入力装置及びこれを備えた撮像装置
KR20090030966A (ko) * 2007-09-21 2009-03-25 삼성전자주식회사 휴대용 단말기에서 메뉴 리스트 순위 구성 방법 및 장치
DE102007051013A1 (de) * 2007-10-25 2009-04-30 Bayerische Motoren Werke Aktiengesellschaft Verfahren zum Betrieb eines Dialogsystems für ein Kraftfahrzeug
CA2665224A1 (en) * 2008-05-02 2009-11-02 Milton E. Milley Computing device keyboard
US20100293457A1 (en) * 2009-05-15 2010-11-18 Gemstar Development Corporation Systems and methods for alphanumeric navigation and input
JP5282699B2 (ja) * 2009-08-14 2013-09-04 Fujitsu Limited Portable terminal device, character conversion device, character conversion method, and program
KR101595029B1 (ko) * 2009-11-18 2016-02-17 LG Electronics Inc. Mobile terminal and method for controlling the same
US8781520B2 (en) * 2010-01-26 2014-07-15 Hand Held Products, Inc. Mobile device having hybrid keypad
US20110191332A1 (en) 2010-02-04 2011-08-04 Veveo, Inc. Method of and System for Updating Locally Cached Content Descriptor Information
US20120047454A1 (en) * 2010-08-18 2012-02-23 Erik Anthony Harte Dynamic Soft Input
US20120200508A1 (en) * 2011-02-07 2012-08-09 Research In Motion Limited Electronic device with touch screen display and method of facilitating input at the electronic device
US8768723B2 (en) 2011-02-18 2014-07-01 Nuance Communications, Inc. Methods and apparatus for formatting text for clinical fact extraction
US10032127B2 (en) 2011-02-18 2018-07-24 Nuance Communications, Inc. Methods and apparatus for determining a clinician's intent to order an item
US8694335B2 (en) 2011-02-18 2014-04-08 Nuance Communications, Inc. Methods and apparatus for applying user corrections to medical fact extraction
US9904768B2 (en) 2011-02-18 2018-02-27 Nuance Communications, Inc. Methods and apparatus for presenting alternative hypotheses for medical facts
US8799021B2 (en) 2011-02-18 2014-08-05 Nuance Communications, Inc. Methods and apparatus for analyzing specificity in clinical documentation
US9916420B2 (en) 2011-02-18 2018-03-13 Nuance Communications, Inc. Physician and clinical documentation specialist workflow integration
US8738403B2 (en) 2011-02-18 2014-05-27 Nuance Communications, Inc. Methods and apparatus for updating text in clinical documentation
US9679107B2 (en) 2011-02-18 2017-06-13 Nuance Communications, Inc. Physician and clinical documentation specialist workflow integration
US8788289B2 (en) 2011-02-18 2014-07-22 Nuance Communications, Inc. Methods and apparatus for linking extracted clinical facts to text
US10460288B2 (en) 2011-02-18 2019-10-29 Nuance Communications, Inc. Methods and apparatus for identifying unspecified diagnoses in clinical documentation
US8904309B1 (en) 2011-11-23 2014-12-02 Google Inc. Prediction completion gesture
EP2843571A3 (de) * 2011-12-29 2015-05-06 Huawei Technologies Co., Ltd. Contact search method and apparatus
US8436828B1 (en) 2012-01-27 2013-05-07 Google Inc. Smart touchscreen key activation detection
US9778841B2 (en) 2012-02-10 2017-10-03 Hand Held Products, Inc. Apparatus having random ordered keypad
US8533621B1 (en) * 2012-05-01 2013-09-10 Katherine S. Meddaugh Communication system and method of use thereof
US9753638B2 (en) 2012-06-06 2017-09-05 Thomson Licensing Method and apparatus for entering symbols from a touch-sensitive screen
DE102013001884A1 2013-02-02 2014-08-07 Audi Ag System-initiated help function for operating a device associated with a vehicle - input of space characters
KR101334342B1 (ko) * 2013-05-16 2013-11-29 Neopad Inc. Character input device and character input method
CN108234533B (zh) 2016-12-12 2021-10-15 Alibaba Group Holding Ltd. User operation processing method and related device
US10846429B2 (en) 2017-07-20 2020-11-24 Nuance Communications, Inc. Automated obscuring system and method
USD956095S1 (en) * 2020-09-21 2022-06-28 Samsung Electronics Co., Ltd. Display screen or portion thereof with icon
USD956094S1 (en) * 2020-09-21 2022-06-28 Samsung Electronics Co., Ltd. Display screen or portion thereof with icon

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000041062A2 (en) * 1999-01-04 2000-07-13 Dell Robert B O Text input system for ideographic and nonideographic languages
CA2267438A1 (en) * 1999-03-29 2000-09-29 Now See Hear Interactive Inc. Method for mobile text entry
WO2000062150A1 (en) * 1999-04-14 2000-10-19 Motorola Inc. Data entry apparatus having a limited number of character keys and method

Family Cites Families (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5859638A (en) 1993-01-27 1999-01-12 Apple Computer, Inc. Method and apparatus for displaying and scrolling data in a window-based graphic user interface
US5828991A (en) * 1995-06-30 1998-10-27 The Research Foundation Of The State University Of New York Sentence reconstruction using word ambiguity resolution
US5818437A (en) 1995-07-26 1998-10-06 Tegic Communications, Inc. Reduced keyboard disambiguating computer
CN1154910C (zh) * 1995-07-26 2004-06-23 Tegic Communications, Inc. Disambiguating system for a reduced keyboard
JP3727399B2 (ja) * 1996-02-19 2005-12-14 Misawa Homes Co., Ltd. Screen-display key input device
US6532001B1 (en) * 1996-04-10 2003-03-11 Snap-On Technologies, Inc. Mouse control for scrolling switch options through screen icon for the switch
US6583797B1 (en) 1997-01-21 2003-06-24 International Business Machines Corporation Menu management mechanism that displays menu items based on multiple heuristic factors
US5953541A (en) * 1997-01-24 1999-09-14 Tegic Communications, Inc. Disambiguating system for disambiguating ambiguous input sequences by displaying objects associated with the generated input sequences in the order of decreasing frequency of use
US6178338B1 (en) * 1997-04-28 2001-01-23 Sony Corporation Communication terminal apparatus and method for selecting options using a dial shuttle
JPH10326138 (ja) * 1997-05-26 1998-12-08 Toshiba Corp Key input device
KR100552085B1 (ko) * 1997-09-25 2006-02-20 Tegic Communications, Inc. Reduced keyboard disambiguating system
US6084576A (en) * 1997-09-27 2000-07-04 Leu; Neng-Chyang User friendly keyboard
US6121965A (en) * 1997-10-17 2000-09-19 Lucent Technologies Inc. User interface for graphical application tool
US5896321A (en) * 1997-11-14 1999-04-20 Microsoft Corporation Text completion system for a miniature computer
US6512511B2 (en) * 1998-07-20 2003-01-28 Alphagrip, Inc. Hand grippable combined keyboard and game controller system
US6414700B1 (en) * 1998-07-21 2002-07-02 Silicon Graphics, Inc. System for accessing a large number of menu items using a zoned menu bar
AU9060498A (en) * 1998-09-09 2000-03-27 Qi Hao Keyboard and input method thereof
US6885317B1 (en) * 1998-12-10 2005-04-26 Eatoni Ergonomics, Inc. Touch-typable devices based on ambiguous codes and methods to design such devices
US7506252B2 (en) * 1999-01-26 2009-03-17 Blumberg Marvin R Speed typing apparatus for entering letters of alphabet with at least thirteen-letter input elements
US7030863B2 (en) * 2000-05-26 2006-04-18 America Online, Incorporated Virtual keyboard system with automatic correction
EP1192716B1 (de) * 1999-05-27 2009-09-23 Tegic Communications, Inc. Keyboard system with automatic correction
JP2001022497 (ja) * 1999-07-07 2001-01-26 Minolta Co., Ltd. Device driver
US6646572B1 (en) * 2000-02-18 2003-11-11 Mitsubishi Electric Research Laboratories, Inc. Method for designing optimal single pointer predictive keyboards and apparatus therefore
US20020054135A1 (en) * 2000-03-17 2002-05-09 Masahiro Noguchi Information processing device, information processing method, and computer-readable storage medium storing program for executing this method on a computer
US6834285B1 (en) * 2000-03-24 2004-12-21 Numoda Corporation Computer system for portable digital data capture and data distribution
US6473675B2 (en) * 2000-04-25 2002-10-29 Honeywell International, Inc. Aircraft communication frequency nomination
US6812939B1 (en) * 2000-05-26 2004-11-02 Palm Source, Inc. Method and apparatus for an event based, selectable use of color in a user interface display
CA2323856A1 (en) * 2000-10-18 2002-04-18 602531 British Columbia Ltd. Method, system and media for entering data in a personal computing device
JP2002244973 (ja) * 2001-02-19 2002-08-30 Canon Inc Communication control device and communication control method
US6970881B1 (en) * 2001-05-07 2005-11-29 Intelligenxia, Inc. Concept-based method and system for dynamically analyzing unstructured information

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of WO03029952A1 *

Also Published As

Publication number Publication date
WO2003029952A1 (en) 2003-04-10
US7681145B1 (en) 2010-03-16
EP1472596A4 (de) 2007-07-04
US7152213B2 (en) 2006-12-19
US20030067495A1 (en) 2003-04-10

Similar Documents

Publication Publication Date Title
US7681145B1 (en) Dynamic key assignment in key pad
US8990738B2 (en) Explicit character filtering of ambiguous text entry
US6362752B1 (en) Keypad with strokes assigned to key for ideographic text input
US6307548B1 (en) Reduced keyboard disambiguating system
US8788508B2 (en) Object access system based upon hierarchical extraction tree and related methods
US8401838B2 (en) System and method for multilanguage text input in a handheld electronic device
US7414615B2 (en) System and method for inputting characters using a directional pad
US20070016862A1 (en) Input guessing systems, methods, and computer program products
EP2109046A1 (de) Predictive text input system and method involving two concurrent ranking means
KR20120006503 (ko) Improved text input
WO2003084194A1 (en) Creation method for characters/words and the information and communication service method thereby
JP2001509290 (ja) Reduced keyboard disambiguating system
US6766179B1 (en) Cross-shape layout of Chinese stroke labels with lyric
US20060279433A1 (en) Method of mapping characters for a mobile telephone keypad
KR101130206B1 (ko) Method, device and computer program product for providing a character input mechanism independent of input order
WO2000058816A2 (en) A method for mobile text entry
KR100538248B1 (ko) Hangul vowel input device and method therefor
KR980013177 (ko) Method for inputting English characters using an electronic telephone-type keypad
KR100745978B1 (ko) Character input device, portable device having the same, and input method
KR100827638B1 (ko) Method for fast character input and electronic device using the same
Zaliva AccelKey Selection Method for Mobile Devices

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20040806

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR IE IT LI LU MC NL PT SE SK TR

AX Request for extension of the european patent

Extension state: AL LT LV MK RO SI

A4 Supplementary search report drawn up and despatched

Effective date: 20070605

RIC1 Information provided on ipc code assigned before grant

Ipc: G06F 3/023 20060101ALI20070530BHEP

Ipc: G06F 3/14 20060101AFI20030412BHEP

17Q First examination report despatched

Effective date: 20091013

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20150116