GB2486917A - Method for determining the intended character when a keypad receives input - Google Patents
Method for determining the intended character when a keypad receives input Download PDFInfo
- Publication number
- GB2486917A GB2486917A GB1022122.4A GB201022122A GB2486917A GB 2486917 A GB2486917 A GB 2486917A GB 201022122 A GB201022122 A GB 201022122A GB 2486917 A GB2486917 A GB 2486917A
- Authority
- GB
- United Kingdom
- Prior art keywords
- probability density
- density function
- language
- keypad
- user input
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Links
- 238000000034 method Methods 0.000 title claims abstract description 65
- 238000013459 approach Methods 0.000 claims abstract description 8
- 230000006870 function Effects 0.000 claims description 123
- 230000008569 process Effects 0.000 claims description 20
- 238000012545 processing Methods 0.000 claims description 17
- 238000004590 computer program Methods 0.000 claims description 13
- 238000009826 distribution Methods 0.000 abstract 2
- 210000003811 finger Anatomy 0.000 description 13
- 210000003813 thumb Anatomy 0.000 description 4
- 238000012937 correction Methods 0.000 description 3
- 230000006872 improvement Effects 0.000 description 3
- 230000003993 interaction Effects 0.000 description 2
- 238000012800 visualization Methods 0.000 description 2
- 230000006399 behavior Effects 0.000 description 1
- 238000004891 communication Methods 0.000 description 1
- 238000010276 construction Methods 0.000 description 1
- 230000005686 electrostatic field Effects 0.000 description 1
- 239000004973 liquid crystal related substance Substances 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000007650 screen-printing Methods 0.000 description 1
- 238000010897 surface acoustic wave method Methods 0.000 description 1
- 210000003371 toe Anatomy 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04886—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
-
- G06F17/276—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/02—Input arrangements using manually operated switches, e.g. using keyboards or dials
- G06F3/023—Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
- G06F3/0233—Character input methods
- G06F3/0237—Character input methods using prediction or retrieval techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/274—Converting codes to words; Guess-ahead of partial word inputs
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
A method, apparatus and computer readable medium for receiving 710 a coordinate based user input at a user interface (such as a keyboard/keypad or virtual keyboard); determining 720 the probabilities, likelihoods or weights of candidate keys/characters based on the user input and the layout of the keypad; and providing 730 the probabilities to a language based processor. The probability distribution of the input may be modified 750 by the processor according to word frequency or other word prediction techniques. Further inputs may be received, and the probability distributions of all the inputs may be used by the processor to obtain 830 a most probable word. Determining the probabilities may comprise analysing 725 parameters such as the amplitude, height, width or angle of the touch; finger approach/retreat direction and speed; and the duration of the touch event. The user interface may be a touch-sensitive screen activated by the finger or digit of the user.
Description
TEXT RECOGNITION DEVICES AND METHODS
BACKGROUND:
Field:
Methods, apparatus and computer readable media for enabling language-based processing based on probability density functions.
Description of the Related Art:
Touch screen sensors may be used for many purposes, including the entry of text.
An approach that can be used to permit text entry via a touch screen involves using a controller to determine a central point (or centroid) of a user's touch, and reporting the central point as being the point of contact to a system that receives input from the touch screen.
Certain touch screens employ more advanced information sensed by the screen, such as the finger orientation or the like. Thus, some touch screens report a touch coordinate value that is based on more than just a geometrically central point of the touch.
SUMMARY:
In one embodiment, a method includes receiving a coordinate-based user input at a user interface. The method also includes determining a probability density function based on the user input, wherein the probability density function is based on a layout of a keypad. The method further includes providing the probability density function to a language-based processor.
In another embodiment, an apparatus comprises means for performing each of the above actions.
A language-based processor may be a hardware processor, such as a general-purpose processor that is configured to perform language-based processing, or an application-specific processor such as an application-specific integrated circuit or a suitably programmed field-programmable gate array.
Alternatively, the language-based processor may be a software component or group of components that perform language-based processing when run on a physical processor or within a virtual machine. The physical processor which runs a software-implemented language-based processor may additionally perform other functions unrelated to language-based processing.
A method, in another embodiment, includes receiving, at a language-based processor, a first probability density function of a first user input, wherein the first probability density function is based on a layout of a keypad.
The method also includes receiving a second probability density function of a second user input, wherein the second probability density function is based on the layout of the keypad. The method further includes processing the first probability density function and the second probability density function in the language-based processor to obtain a most probable word.
In another embodiment, an apparatus comprises means for performing each of the actions of the above method.
A computer-readable medium, according to certain embodiments, can be encoded with instructions that, when executed in hardware, perform a process, the process including one of the preceding methods.
In certain embodiments, an apparatus includes at least one memory including computer program code and at least one processor. The at least one memory and computer program code are configured to, with the at least one processor, cause the apparatus at least to, upon receiving a coordinate-based user input at a user interface, determine a probability density function based on the user input, wherein the probability density function is based on a layout of a keypad. The at least one memory and computer program code are configured to, with the at least one processor, cause the apparatus at least to provide the probability density function to a language-based processor.
An apparatus according to other embodiments includes at least one memory including computer program code and at least one processor. The at least one memory and computer program code are configured to, with the at least one processor, cause the apparatus at least to, upon receiving, at a language-based processor, a first probability density function of a first user input, wherein the first probability density function is based on a layout of a keypad and receiving a second probability density function of a second user input, wherein the second probability density function is based on the layout of the keypad, process the first probability density function and the second probability density function in the language-based processor. The at least one memory and computer program code are also configured to, with the at least one processor, cause the apparatus at least to obtain a most probable word based on processing the first probability density function and the second probability density function.
In further embodiments, an apparatus includes receiving means for receiving a coordinate-based user input at a user interface. The apparatus also includes determining means for determining a probability density function based on the user input, wherein the probability density function is based on a layout of a keypad. The apparatus further includes providing means for providing the probability density function to a language-based processor.
An apparatus includes, in certain embodiments, first receiving means for receiving, at a language-based processor, a first probability density function of a first user input, wherein the first probability density function is based on a layout of a keypad. The apparatus also includes second receiving means for receiving a second probability density function of a second user input, wherein the second probability density function is based on the layout of the keypad. The apparatus further includes processing means for processing the first probability density function and the second probability density function in the language-based processor to obtain a most probable word.
According to one embodiment, there is provided a computer readable medium having computer-readable code stored thereon, the computer-readable code comprising: code for receiving a coordinate-based user input at a user interface; code for determining a probability density function based on the user input, wherein the probability density function is based on a layout of a keypad; and code for providing the probability density function to a language-based processor.
According to another embodiment, there is provided a computer readable medium having computer-readable code stored thereon, the computer-readable code comprising: code for receiving, at a language-based processor, a first probability density function of a first user input, wherein the first probability density function is based on a layout of a keypad; code for receiving a second probability density function of a second user input, wherein the second probability density function is based on the layout of the keypad; and code for processing the first probability density function and the second probability density function in the language-based processor to obtain a most probable word.
BRIEF DESCRIPTION OF THE DRAWINGS:
For proper understanding of the invention, reference should be made to the accompanying drawings, wherein: Figure 1 illustrates the operation of an example touch screen.
Figures 2(a), 2(b) and 2(c) provide examples of how width and angle of a press event may be used to create a probability density function that is passed to a language engine or language-based processor.
Figure 3 illustrates an embodiment of the present invention in which touch area size and angle are provided as an input to a language engine compared to a situation in which no touch area information is provided to a language engine.
Figure 4 illustrates a partial press visualization of characters that are included with a probability above a certain threshold.
Figures 5(a), 5(b), and 5(c) illustrate how hovering data from the touch screen can be used in certain embodiments of the present invention.
Figure 6 illustrates a user equipment according to certain embodiments of the present invention.
Figure 7 illustrates a method according to certain embodiments of the present invention.
Figure 8 illustrates another method according to certain embodiments of the present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT(S): Certain embodiments of the present invention provide a Probability Density Function (PDF) based on a user's press, touch, or hover interaction with a touch screen or similar input device. The probability density function can be provided from a user interface to a language engine in a language-based processor.
Figure 1 illustrates the operation of an example touch screen. As shown in Figure 1, a user's finger can interact either directly or indirectly (for example, by hovering over it) with a touch screen, which serves as a sensor. The sensor develops an image of changes in an electrostatic field caused by the touch (or other interaction). This image can be continuously updated and provided to an image processing controller.
Instead of passing a single x,y (z) point from the touch screen controller or driver to a text input language engine, which may be located in a language-based processor, certain embodiments of the present invention pass data that describes more details of the actual capacitance peak created by the event. For example, the amplitude, width, shape, and time-based evolution can be subjected to further processing. This additional information can be used by the language engine to refine its predictions.
This is an improvement over a touch screen text entry solution in which the two parts of the text input process are fully separated. In such a solution, the touch screen controller and driver create a single x,y touch coordinate based on the user's touch, the text input method is passed that single x,y coordinate representing the user's press, and, based on that coordinate alone, the text input method uses a language engine (dictionary look-up) to return the most likely letter or word candidates.
Moreover, in some text input processes much of the information in how the user touches the touch screen is not utilized by the text input method language engine. Capacitive touch modules can provide additional information beyond merely x,y coordinate data. Examples of available parameters include amplitude of the touch event; height, width and angle of the touch event; finger approach and retreat direction and speed (especially in touch modules capable of hover sense); force level (especially for displays fitted with force sensing capability); and timing issues, such as the time between touch and release. The angle of a touch event relates to an angle of an axis in which the touch area is elongate relative to a reference axis, for example a horizontally elongate touch area may be said to have an angle of 90 degrees from the vertical axis.
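For illustration only, the richer per-touch data described above might be represented as a simple structure along the following lines (a minimal Python sketch; the field names and types are hypothetical, not taken from the patent):

```python
from dataclasses import dataclass

@dataclass
class TouchEvent:
    """Hypothetical container for the richer touch parameters listed above."""
    x: float                  # centroid x coordinate (pixels)
    y: float                  # centroid y coordinate (pixels)
    amplitude: float          # amplitude of the capacitance peak
    width: float              # width of the touch area (pixels)
    height: float             # height of the touch area (pixels)
    angle_deg: float          # elongation angle relative to the vertical axis
    approach_dx: float = 0.0  # hover approach direction, x component
    approach_dy: float = 0.0  # hover approach direction, y component
    force: float = 0.0        # force level, where the display supports it
    duration_ms: float = 0.0  # time between touch and release
```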
By using all the available information from the touch display module and integrating the touch recognition process with the text input method, some improvement in the performance of text input can be achieved. This improvement can enable the text input method to adapt to each user and to the particular context of use.
Figures 2(a), 2(b) and 2(c) provide examples of how width and angle of a press event may be used to create a probability density function that is passed to a language engine or language-based processor.
Figure 2(a) illustrates a first case, in which there is a small touch area.
This area may be due to the use of a pen, finger, or stylus. In this example, the probability density function may be "d" 93%, "x" 3%, "c" 2%, and so forth. It is possible to specify a probability for every available character. Alternatively, the probability density function may omit characters with very small probabilities of, for example, less than 1%. In this case, "d" is highly likely to be the correct keystroke.
Figure 2(b) illustrates a second case, in which there is a large touch area. This area can be created by the use of a blunter instrument, such as a thumb or a pair of fingers. In this case, the probability density function may be "d" 70%, "s" 6%, "f" 5%, and so forth.
Figure 2(c) illustrates a third case, in which there is a non-circular touch area. The non-circular touch area may be caused by a press event in which a thumb is used, or in which a finger is dragged along the screen. The non-circular touch area is in this case an elongate oval, but other shapes are possible.
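As an illustrative sketch of how a touch centroid and touch-area size might be mapped to such a probability density function (the key positions, the Gaussian spread model, and the numeric scaling below are assumptions, not the patent's prescribed method):

```python
import math

# Hypothetical key centres for part of a QWERTY layout (x, y in pixels).
KEY_CENTRES = {
    "s": (60.0, 200.0), "d": (100.0, 200.0), "f": (140.0, 200.0),
    "e": (90.0, 160.0), "x": (80.0, 240.0), "c": (120.0, 240.0),
}

def touch_pdf(x, y, width, height, min_prob=0.01):
    """Map a touch centroid and touch-area size to key probabilities.

    A larger touch area yields a wider spread, so neighbouring keys receive
    more probability mass, mirroring the small/large/elongate touch areas of
    Figures 2(a)-2(c).  Keys with very small probabilities are omitted, as
    the description above permits.
    """
    sigma_x = max(width, 1.0) / 2.0   # illustrative scaling of the spread
    sigma_y = max(height, 1.0) / 2.0  # with the touch-area dimensions
    weights = {}
    for key, (kx, ky) in KEY_CENTRES.items():
        d2 = ((x - kx) / sigma_x) ** 2 + ((y - ky) / sigma_y) ** 2
        weights[key] = math.exp(-0.5 * d2)
    total = sum(weights.values())
    pdf = {k: w / total for k, w in weights.items()}
    return {k: round(p, 3) for k, p in pdf.items() if p >= min_prob}

# A small, roughly circular touch near "d" concentrates the probability there;
# a larger touch spreads probability over "s", "f", "e", "x" and "c" as well.
print(touch_pdf(100.0, 200.0, width=20.0, height=20.0))
print(touch_pdf(100.0, 200.0, width=80.0, height=80.0))
```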
In the text input method's language engine, which can be located in a language-based processor, the probability density functions for the entered characters can be used to determine the most likely candidate words. The probability density function may be different for each character entered.
Hence characters that are deemed to be entered with more accuracy are given a higher precedence when calculating the most likely candidate words.
Figure 3 illustrates an embodiment of the present invention in which touch area size and angle are provided as an input to a language engine compared to a situation in which no touch area information is provided to a language engine. As shown in Figure 3, if no touch area information is provided to a language engine (for example, if all that is provided is the key corresponding to the center of each touch), the language engine will understand from a user's input (in this example) only that the letters "d", "o", and "g" have been entered, and may consequently provide an output of "dog" as the primary candidate word.
With touch area size information, the same presses in the same places can be processed by a language engine in a language-based processor to return the word "for" as the primary candidate. The reason for this different primary candidate may be that "for" is a much more frequently used word in English than "dog," "fog," "sog," or any of a variety of other possible words. Moreover, this result can be enhanced if the language engine or language-based processor can predict what part of speech is expected to be entered (for example, preposition or noun). This different result may be obtained because the language engine applies correction in inverse proportion to the probability of the most probable character. Thus, the "o" may be accepted almost without question, while the "d" and "g" may be open to substantial question and correction.
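A minimal sketch of how a language engine might weigh the per-character probabilities against word frequency to arrive at "for" rather than "dog" (all probabilities and frequencies below are invented for illustration):

```python
# Word frequencies and per-press probabilities are invented for illustration.
WORD_FREQ = {"for": 0.0088, "dog": 0.0001, "fog": 0.00003}

char_pdfs = [
    {"d": 0.40, "f": 0.35, "s": 0.25},  # ambiguous first press (large touch)
    {"o": 0.95, "i": 0.05},             # confident second press
    {"g": 0.45, "r": 0.40, "t": 0.15},  # ambiguous third press
]

def word_score(word):
    """Product of the per-character probabilities and the word's frequency."""
    score = WORD_FREQ.get(word, 0.0)
    for pdf, ch in zip(char_pdfs, word):
        score *= pdf.get(ch, 0.0)
    return score

ranked = sorted(WORD_FREQ, key=word_score, reverse=True)
print(ranked[0])  # "for": the confident "o" is kept, the ambiguous "d" is corrected to "f"
```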
In certain further embodiments, user feedback is provided as to the "accuracy" of each press event. This can guide users to use the text input method in the optimum way. This will result in the user being able to achieve their maximum speed of text input or to utilize a minimal level of effort when inputting text. An example is shown in Figure 4.
Specifically, Figure 4 illustrates a partial press visualization of characters that are included with a probability above a certain threshold. In this example, the letters "v" and "y" are highlighted with a medium intensity highlighting while the letter "g" is highlighted with a high intensity highlighting. Thus, the character probabilities are indicated. The reason for this particular probability density function may be that the shape of the press is such that "y" and "v" are included with some reasonable level of probability as input to a language engine or language-based processor. The reasonable level of probability can be configured variously, but can be set with a limit of something like 1/20 of the probability of the most probable key. Thus, if the most probable key has an 80% probability, a key must have at least a 4% probability in order to be considered. However, if the most probable key has only a 20% probability, then a key with only a 1% probability may be considered reasonably possible. Alternatively, the reasonable probability may be set at a specific percentage, such as a 3% or 5% likelihood.
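The thresholding rule described above might be expressed as follows (an illustrative sketch; the default relative limit of 1/20 and the optional absolute limit mirror the values mentioned in the text):

```python
def keys_to_highlight(pdf, relative_limit=1 / 20, absolute_limit=None):
    """Return the keys considered reasonably possible, for user feedback.

    By default a key qualifies if its probability is at least 1/20 of the
    most probable key's probability; alternatively, an absolute threshold
    such as 0.03 or 0.05 can be supplied instead.
    """
    if not pdf:
        return []
    if absolute_limit is not None:
        return [k for k, p in pdf.items() if p >= absolute_limit]
    top = max(pdf.values())
    return [k for k, p in pdf.items() if p >= top * relative_limit]

# With an 80% "g", keys need at least 4% to be highlighted, so "b" is dropped.
print(keys_to_highlight({"g": 0.80, "y": 0.06, "v": 0.05, "b": 0.01}))
```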
The approach illustrated in Figure 4 could be modified in various ways.
For example, the character probabilities as modified by the language engine could be indicated. This might entail providing feedback from the language engine to the user interface. For example, in the case shown in Figure 4, if "v" is not a likely next character based on the language model prediction, then it may not be highlighted unless the probability based on the touch event is very high.
The various parameters available from the display module can be utilized in a variety of ways. Figures 5(a), 5(b), and 5(c) illustrate how hovering data from the touch screen can be used in certain embodiments of the present invention. As shown in Figure 5(a), a finger can be detected hovering a few mm above a display, but not touching, and the finger can be detected as moving to the right.
Next, as shown in Figure 5(b), a finger touch can be detected in the area of the "g" key. This finger touch can trigger a reporting event that leads the user interface to report a key stroke to a language-based processor.
Finally, as shown in Figure 5(c), in this case it is most likely that the user made an error in the direction of movement (if any), either by undershooting or overshooting the target. Based on combining the hovering information and the press event, a probability density function can be created and passed to a language engine in a language-based processor. For example, in this case the probability density function may be as follows: "g" 60%, "f" 20%, and "h" 20%.
It should be noted that the above example of using hovering direction information to correct press coordinates could also be applied in general touch cases, not just to text input.
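As a sketch of how hover-direction information might be combined with a press event to produce the probability density function described above (the transfer fraction and the key neighbourhood are assumptions chosen to reproduce the 60/20/20 example, not values from the patent):

```python
def hover_corrected_pdf(press_pdf, horizontal_neighbours, transfer=0.4):
    """Blend a press-based PDF with horizontal hover-movement information.

    Because the finger was detected moving horizontally before the press, a
    fraction of the pressed key's probability is shifted to its horizontal
    neighbours to allow for undershoot or overshoot of the target.
    """
    corrected = dict(press_pdf)
    for key, (left, right) in horizontal_neighbours.items():
        if key not in corrected:
            continue
        moved = transfer * corrected[key]
        corrected[key] -= moved
        for neighbour in (left, right):
            if neighbour is not None:
                corrected[neighbour] = corrected.get(neighbour, 0.0) + moved / 2
    total = sum(corrected.values())
    return {k: round(p / total, 2) for k, p in corrected.items()}

# A press centred on "g", with horizontal neighbours "f" and "h", as in Figure 5:
print(hover_corrected_pdf({"g": 1.0}, {"g": ("f", "h")}))
# -> {'g': 0.6, 'f': 0.2, 'h': 0.2}
```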
Figure 6 illustrates user equipment according to certain embodiments.
User equipment 600 may include various components. For example, user equipment 600 can include a sensor 610, an image processor 620, a language-based processor 630, and an intermediate processor 640. The user equipment 600 may also include memory 650. The user equipment 600 may further include a display 660. The various features of the user equipment 600 may be variously interconnected, such as by a bus 670.
The bus 670 may be any suitable communication bus for interconnecting the components. Although one bus 670 is shown, multiple buses, each connecting several of the features rather than all of them, are also permitted. The bus may be a logical or virtual bus, and other types of interconnections besides bus interconnections are also permitted.
The display 660 may be any suitable display, such as a liquid crystal display, an organic electroluminescent display, or an electronic ink display.
The display 660 may be equipped with the sensor 610, which may be any suitable sensor such as a resistive sensor, a capacitive sensor, a surface acoustic wave sensor, or a piezoelectric sensor. In one particular example, a projected capacitive touch sensor may be employed.
The sensor 610 may be operably connected to an image processor 620. The image processor 620 can be configured to receive information from the sensor and convert it into an output. As noted above, some image processors provide only an x,y coordinate as an output. However, an enhanced image processor may provide a greater amount of information about each touch of the sensor (each touch can be referred to as a press event).
A language-based processor 630 can be operably connected to receive an output of the image processor 620. The language-based processor 630 can run a language engine that proposes candidate words to a user via the display 660. The candidate words can be based on letters received and probability density functions associated with those letters.
An intermediate processor 640 can take the information from the image processor 620 and convert it into the characters and probability density functions. The intermediate processor 640 can be a separate device, or it can be integrated with the image processor 620 or the language-based processor 630. Moreover, the image processor 620 and the language-based processor 630 can likewise be integrated together, or may be separate devices from one another.
The memory 650 can be any suitable storage device or computer-readable medium. It can be used to store software for the user equipment 600 as well as to store data that is being processed and other information.
The memory 650 can be on its own chip or integrated on the same chip as one or more of the processors of the user equipment 600. The memory 650 may be a flash Random Access Memory (RAM). The memory may store computer readable instructions that, in combination with the processor, cause an apparatus to perform any of the methods described herein.
Figure 7 illustrates a method according to certain embodiments of the present invention. As shown in Figure 7, a method can include receiving 710 a coordinate-based user input at a user interface. The user interface can be a touch-sensitive screen. The coordinate-based user input can be the touch of a human digit, such as a finger, thumb, or toe. Other user inputs, such as the touch of a pen or stylus are also permitted.
The method can also include determining 720 a probability density function based on the user input. The probability density function is based on a layout of a keypad, such as a touchscreen keypad. The keypad can be an alphanumeric keypad, a numeric keypad, or any other kind of keypad. The keypad can be arranged in a QWERTY layout or in a traditional telephone handset layout. Other layouts are also possible. The determining 720 can include analyzing 725 at least one of amplitude of a touch event; height, width, and angle of the touch event; finger approach and retreat direction and speed; force level; or timing between press and release in the touch event.
The method can further include providing 730 the probability density function to a language-based processor. The probability density function can be expressed as respective probabilities with respect to a plurality of possible characters. The possible characters can be any alphanumeric or other symbolic character that is able to be pressed on the keypad. Additionally, a possible character of "null" or "non-stroke" may be included if there is ambiguity about whether the touch was even an intentional touch.
The method can additionally include providing 740 the probability density function to the user interface to be displayed visually to a user of the user interface. The method can also include receiving 750 a modified probability density function from the language-based processor and providing 760 the modified probability density function to the user interface to be displayed visually to a user of the user interface.
Figure 8 illustrates another method according to certain embodiments of the present invention. The method includes receiving 810, at a language-based processor, a first probability density function of a first user input, wherein the first probability density function is based on a layout of a keypad.
The method also includes receiving 820 a second probability density function of a second user input, wherein the second probability density function is based on the layout of the keypad. Additional reception of additional probability density functions is also possible. The method can further include processing 830 the first probability density function and the second probability density function in the language-based processor to obtain a most probable word. The first probability density function can be expressed as respective probabilities with respect to a plurality of possible key strokes. The language processing can evaluate each key stroke based on the probabilities, and can also re-evaluate previous keystrokes based on the following keystrokes when providing candidate words.
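A minimal sketch of a language-based processor that receives one probability density function per keystroke and re-evaluates earlier keystrokes each time a new one arrives (the dictionary, its frequencies, and the per-keystroke probabilities are invented for illustration):

```python
# The dictionary, its frequencies, and the per-keystroke PDFs are invented.
DICTIONARY = {"for": 0.0088, "fog": 0.00003, "dog": 0.0001, "do": 0.004, "so": 0.003}

class LanguageBasedProcessor:
    """Keeps one PDF per keystroke and re-scores whole words as each arrives."""

    def __init__(self, dictionary):
        self.dictionary = dictionary
        self.pdfs = []  # one probability density function per received keystroke

    def receive(self, pdf):
        """Accept the next keystroke's PDF and return candidate words, best first."""
        self.pdfs.append(pdf)
        n = len(self.pdfs)
        scores = {}
        for word, freq in self.dictionary.items():
            if len(word) != n:
                continue
            score = freq
            # Earlier keystrokes are re-evaluated together with the newest one.
            for ch, keystroke_pdf in zip(word, self.pdfs):
                score *= keystroke_pdf.get(ch, 0.0)
            if score > 0:
                scores[word] = score
        return sorted(scores, key=scores.get, reverse=True)

processor = LanguageBasedProcessor(DICTIONARY)
processor.receive({"d": 0.40, "f": 0.35, "s": 0.25})
processor.receive({"o": 0.95, "i": 0.05})
print(processor.receive({"g": 0.45, "r": 0.40, "t": 0.15}))  # ['for', 'dog', 'fog']
```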
The methods of Figures 7-8 may be performed by, for example, the user equipment illustrated in Figure 6. In a particular embodiment, a computer-readable medium is encoded with instructions that, when executed in hardware, perform a process, the process corresponding to the method of Figure 7 or Figure 8. The computer-readable medium can be a non-transitory medium such as a storage medium.
Various embodiments of the present invention may provide differing advantages. For example, certain embodiments of the present invention may enable a text input solution to automatically adapt to a wide variety of different users, usage styles, and usage contexts. For example, a user might on some occasions try to enter text left handed while riding a bicycle, although the user typically enters text right handed. In the left handed text entry situation, the presses may be noted as less precise and as a consequence the level of correction applied by the language engine can be correspondingly greater.
Moreover, in certain embodiments, there are no settings required.
Instead, the solution according to certain embodiments can automatically adjust press-by-press to a user's behavior. Indeed, certain embodiments can appropriately determine an intended word that is typed by a collaboration of different users using a variety of entry means (stylus, finger, and thumb, for example).
One having ordinary skill in the art will readily understand that the invention as discussed above may be practiced with steps in a different order, and/or with hardware elements in configurations which are different than those which are disclosed. Therefore, although the invention has been described based upon these preferred embodiments, it would be apparent to those of skill in the art that certain modifications, variations, and alternative constructions would be apparent, while remaining within the spirit and scope of the invention. For example, although alphanumeric keys are used in many examples, similar processing may be applied to keys of a musical instrument interface. Similarly, although the expression "touchscreen" is intended to cover apparatus that are able both to present a graphical display to a user and to receive touch-related user input, in some embodiments the touchscreen may not be capable of presenting a graphical display and may only receive touch-related input. In the latter case, a keyboard may be permanently marked over the touchscreen by screen-printing, for example.
In order to determine the metes and bounds of the invention, therefore, reference should be made to the appended claims.
Claims (33)
- WE CLAIM: 1. A method, comprising: receiving a coordinate-based user input at a user interface; determining a probability density function based on the user input, wherein the probability density function is based on a layout of a keypad; and providing the probability density function to a language-based processor.
- 2. The method of claim 1, wherein the keypad comprises an alphanumeric keypad.
- 3. The method of claim 1 or claim 2, wherein the user interface comprises a touch-sensitive screen.
- 4. The method of any of claims 1-3, wherein the probability density function is expressed as respective probabilities with respect to a plurality of possible characters.
- 5. The method of any of claims 1-4, wherein the coordinate-based user input comprises the touch of a human digit.
- 6. The method of any of claims 1-5, further comprising: providing the probability density function to the user interface to be displayed visually to a user of the user interface.
- 7. The method of any of claims 1-6, further comprising: receiving a modified probability density function from the language-based processor; and providing the modified probability density function to the user interface to be displayed visually to a user of the user interface.
- 8. The method of any of claims 1-7, wherein the determining the probability density function comprises analyzing at least one of amplitude of a touch event; height, width, and angle of the touch event; finger approach and retreat direction and speed; force level; or timing between press and release in the touch event.
- 9. A method, comprising: receiving, at a language-based processor, a first probability density function of a first user input, wherein the first probability density function is based on a layout of a keypad; receiving a second probability density function of a second user input, wherein the second probability density function is based on the layout of the keypad; and processing the first probability density function and the second probability density function in the language-based processor to obtain a most probable word.
- 10. The method of claim 9, wherein the first probability density function is expressed as respective probabilities with respect to a plurality of possible key strokes.
- 11. An apparatus, comprising: at least one memory including computer program code; and at least one processor, wherein the at least one memory and computer program code are configured to, with the at least one processor, cause the apparatus at least to: upon receiving a coordinate-based user input at a user interface, determine a probability density function based on the user input, wherein the probability density function is based on a layout of a keypad, and provide the probability density function to a language-based processor.
- 12. The apparatus of claim 11, wherein the keypad comprises an alphanumeric keypad.
- 13. The apparatus of claim 11 or claim 12, wherein the user interface comprises a touch-sensitive screen.
- 14. The apparatus of any of claims 11-13, wherein the probability density function is expressed as respective probabilities with respect to a plurality of possible characters.
- 15. The apparatus of any of claims 11-14, wherein the coordinate-based user input comprises the touch of a human digit.
- 16. The apparatus of any of claims 11-15, wherein the at least one memory and computer program code are configured to, with the at least one processor, cause the apparatus at least to provide the probability density function to the user interface to be displayed visually to a user of the user interface.
- 17. The apparatus of any of claims 11-16, wherein the at least one memory and computer program code are configured to, with the at least one processor, cause the apparatus at least to, upon receiving a modified probability density function from the language-based processor, provide the modified probability density function to the user interface to be displayed visually to a user of the user interface.
- 18. The apparatus of any of claims 11-17, wherein the at least one memory and computer program code are configured to, with the at least one processor, cause the apparatus at least to determine the probability density function by analyzing at least one of amplitude of a touch event; height, width, and angle of the touch event; finger approach and retreat direction and speed; force level; or timing between press and release in the touch event.
- 19. An apparatus, comprising: at least one memory including computer program code; and at least one processor, wherein the at least one memory and computer program code are configured to, with the at least one processor, cause the apparatus at least to: upon receiving, at a language-based processor, a first probability density function of a first user input, wherein the first probability density function is based on a layout of a keypad, and receiving a second probability density function of a second user input, wherein the second probability density function is based on the layout of the keypad, process the first probability density function and the second probability density function in the language-based processor; and obtain a most probable word based on processing the first probability density function and the second probability density function.
- 20. The apparatus of claim 19, wherein the first probability density function is expressed as respective probabilities with respect to a plurality of possible key strokes.
- 21. A computer-readable medium encoded with instructions that, when executed in hardware, perform a process, the process comprising a method according to any of claims 1-10.
- 22. An apparatus, comprising: receiving means for receiving a coordinate-based user input at a user interface; determining means for determining a probability density function based on the user input, wherein the probability density function is based on a layout of a keypad; and providing means for providing the probability density function to a language-based processor.
- 23. The apparatus of claim 22, wherein the keypad comprises an alphanumeric keypad.
- 24. The apparatus of claim 22 or claim 23, wherein the user interface comprises a touch-sensitive screen.
- 25. The apparatus of any of claims 22-24, wherein the probability density function is expressed as respective probabilities with respect to a plurality of possible characters.
- 26. The apparatus of any of claims 22-25, wherein the coordinate-based user input comprises the touch of a human digit.
- 27. The apparatus of any of claims 22-26, further comprising: display means for providing the probability density function to the user interface to be displayed visually to a user of the user interface.
- 28. The apparatus of any of claims 22-27, further comprising: further receiving means for receiving a modified probability density function from the language-based processor; and display means for providing the modified probability density function to the user interface to be displayed visually to a user of the user interface.
- 29. The apparatus of any of claims 22-28, wherein the determining means comprises analyzing means for analyzing at least one of amplitude of a touch event; height, width, and angle of the touch event; finger approach and retreat direction and speed; force level; or timing between press and release in the touch event.
- 30. An apparatus, comprising: first receiving means for receiving, at a language-based processor, a first probability density function of a first user input, wherein the first probability density function is based on a layout of a keypad; second receiving means for receiving a second probability density function of a second user input, wherein the second probability density function is based on the layout of the keypad; and processing means for processing the first probability density function and the second probability density function in the language-based processor to obtain a most probable word.
- 31. The apparatus of claim 30, wherein the first probability density function is expressed as respective probabilities with respect to a plurality of possible key strokes.
- 32. A computer readable medium having computer-readable code stored thereon, the computer-readable code comprising: code for receiving a coordinate-based user input at a user interface; code for determining a probability density function based on the user input, wherein the probability density function is based on a layout of a keypad; and code for providing the probability density function to a language-based processor.
- 33. A computer readable medium having computer-readable code stored thereon, the computer-readable code comprising: code for receiving, at a language-based processor, a first probability density function of a first user input, wherein the first probability density function is based on a layout of a keypad; code for receiving a second probability density function of a second user input, wherein the second probability density function is based on the layout of the keypad; and code for processing the first probability density function and the second probability density function in the language-based processor to obtain a most probable word.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1022122.4A GB2486917A (en) | 2010-12-31 | 2010-12-31 | Method for determining the intended character when a keypad receives input |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1022122.4A GB2486917A (en) | 2010-12-31 | 2010-12-31 | Method for determining the intended character when a keypad receives input |
Publications (2)
Publication Number | Publication Date |
---|---|
GB201022122D0 GB201022122D0 (en) | 2011-02-02 |
GB2486917A true GB2486917A (en) | 2012-07-04 |
Family
ID=43599126
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB1022122.4A Withdrawn GB2486917A (en) | 2010-12-31 | 2010-12-31 | Method for determining the intended character when a keypad receives input |
Country Status (1)
Country | Link |
---|---|
GB (1) | GB2486917A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104111797A (en) * | 2014-06-24 | 2014-10-22 | 联想(北京)有限公司 | Information processing method and electronic equipment |
WO2015051680A1 (en) * | 2013-10-08 | 2015-04-16 | 百度在线网络技术(北京)有限公司 | Method and apparatus for controlling virtual keyboard of mobile terminal |
WO2017116580A1 (en) * | 2015-12-29 | 2017-07-06 | Google Inc. | Continuous keyboard recognition |
US10359870B2 (en) | 2011-04-15 | 2019-07-23 | Nokia Technologies Oy | Apparatus, method, computer program and user interface |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5748512A (en) * | 1995-02-28 | 1998-05-05 | Microsoft Corporation | Adjusting keyboard |
EP1569079A1 (en) * | 2004-02-27 | 2005-08-31 | Research In Motion Limited | Text input system for a mobile electronic device and methods thereof |
US20060028450A1 (en) * | 2004-08-06 | 2006-02-09 | Daniel Suraqui | Finger activated reduced keyboard and a method for performing text input |
GB2438716A (en) * | 2006-05-25 | 2007-12-05 | Harald Philipp | touch sensitive interface |
US20100036655A1 (en) * | 2008-08-05 | 2010-02-11 | Matthew Cecil | Probability-based approach to recognition of user-entered data |
CN101719022A (en) * | 2010-01-05 | 2010-06-02 | 汉王科技股份有限公司 | Character input method for all-purpose keyboard and processing device thereof |
-
2010
- 2010-12-31 GB GB1022122.4A patent/GB2486917A/en not_active Withdrawn
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5748512A (en) * | 1995-02-28 | 1998-05-05 | Microsoft Corporation | Adjusting keyboard |
EP1569079A1 (en) * | 2004-02-27 | 2005-08-31 | Research In Motion Limited | Text input system for a mobile electronic device and methods thereof |
US20060028450A1 (en) * | 2004-08-06 | 2006-02-09 | Daniel Suraqui | Finger activated reduced keyboard and a method for performing text input |
GB2438716A (en) * | 2006-05-25 | 2007-12-05 | Harald Philipp | touch sensitive interface |
US20100036655A1 (en) * | 2008-08-05 | 2010-02-11 | Matthew Cecil | Probability-based approach to recognition of user-entered data |
CN101719022A (en) * | 2010-01-05 | 2010-06-02 | 汉王科技股份有限公司 | Character input method for all-purpose keyboard and processing device thereof |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10359870B2 (en) | 2011-04-15 | 2019-07-23 | Nokia Technologies Oy | Apparatus, method, computer program and user interface |
WO2015051680A1 (en) * | 2013-10-08 | 2015-04-16 | 百度在线网络技术(北京)有限公司 | Method and apparatus for controlling virtual keyboard of mobile terminal |
CN104111797A (en) * | 2014-06-24 | 2014-10-22 | 联想(北京)有限公司 | Information processing method and electronic equipment |
CN104111797B (en) * | 2014-06-24 | 2017-07-21 | 联想(北京)有限公司 | A kind of information processing method and electronic equipment |
WO2017116580A1 (en) * | 2015-12-29 | 2017-07-06 | Google Inc. | Continuous keyboard recognition |
CN108351710A (en) * | 2015-12-29 | 2018-07-31 | 谷歌有限责任公司 | Continuous keyboard identification |
Also Published As
Publication number | Publication date |
---|---|
GB201022122D0 (en) | 2011-02-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10268370B2 (en) | Character input device and character input method with a plurality of keypads | |
US10642933B2 (en) | Method and apparatus for word prediction selection | |
Nesbat | A system for fast, full-text entry for small electronic devices | |
US8384683B2 (en) | Method for user input from the back panel of a handheld computerized device | |
US9665278B2 (en) | Assisting input from a keyboard | |
CN108700996B (en) | System and method for multiple input management | |
US8760428B2 (en) | Multi-directional calibration of touch screens | |
US20130113714A1 (en) | Electronic Device Having Single Hand Multi-Touch Surface Keyboard and Method of Inputting to Same | |
US20120293434A1 (en) | Touch alphabet and communication system | |
CN106445369A (en) | Input method and device | |
CN104137038A (en) | Intelligent touchscreen keyboard with finger differentiation | |
US8806384B2 (en) | Keyboard gestures for character string replacement | |
CN103713845B (en) | Method for screening candidate items and device thereof, text input method and input method system | |
US7855719B2 (en) | Touch input method and portable terminal apparatus | |
GB2486917A (en) | Method for determining the intended character when a keypad receives input | |
US20140105664A1 (en) | Keyboard Modification to Increase Typing Speed by Gesturing Next Character | |
CN101788879A (en) | Soft keyboard layout and scan input method | |
US20140215397A1 (en) | Apparatus and Method Pertaining to Predicted-Text Derivatives | |
CN111367459B (en) | Text input method using pressure touch pad and intelligent electronic device | |
CN114356113A (en) | Input method and input device | |
US9250728B2 (en) | Apparatus and method pertaining to predicted-text entry | |
CA2846561C (en) | Method and apparatus for word prediction selection | |
US20150268734A1 (en) | Gesture recognition method for motion sensing detector | |
US20170024053A1 (en) | Touch alphabet and communication system | |
CN108733227B (en) | Input device and input method thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WAP | Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1) |