WO2005088522A1 - System and method for text entry - Google Patents

System and method for text entry

Info

Publication number
WO2005088522A1
WO2005088522A1 (PCT/CA2005/000383; application CA2005000383W)
Authority
WO
WIPO (PCT)
Prior art keywords
gesture
input
characters
character
detectors
Prior art date
Application number
PCT/CA2005/000383
Other languages
English (en)
Inventor
Michael Goodgoll
Original Assignee
Michael Goodgoll
Priority date
Filing date
Publication date
Application filed by Michael Goodgoll
Publication of WO2005088522A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02 Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/0202 Constructional details or processes of manufacture of the input device
    • G06F3/0219 Special purpose keyboards
    • G06F3/023 Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233 Character input methods
    • G06F3/0235 Character input methods using chord techniques
    • G06F3/0238 Programmable keyboards

Definitions

  • TITLE: SYSTEM AND METHOD FOR TEXT ENTRY
  • the invention relates to apparatus and methods for data entry of characters and other data input commands on a variety of devices which have text entry capabilities.
  • the user of a text entry device should be able to enter text in such a way so as to have the associated actions that are required to enter text utilize the full potential of coordinated, sequential, overlapping, natural movements of the five fingered hand.
  • the full data entry capacity of all the fingers is utilized.
  • Text entry means should also be universal in nature, in that they may be used and readily adapted to enter the characters associated with various languages. With the growing number of users who make use of devices to enter text for various purposes, it should be borne in mind that English may not be the language in which all users choose to communicate. Increasingly, many people throughout the world are bilingual or even trilingual. Therefore, a text entry means should have its keys, or the mechanisms by which characters are entered, oriented such that they may be used with ease for the entry of characters associated with multiple languages.
  • More importantly, text entry means should provide an efficient means by which text may be input, combining a high speed of input with a low error rate.
  • In order for text entry means to be more efficient, they should generally place less reliance on visual monitoring; that is, the actions required to enter characters must be such that the user may commit them to memory and rely less on visually monitoring the actual inputting of a character.
  • SMS short messaging services
  • the text entry mechanisms associated with SMS services are in general inefficient: because of the size of SMS-capable devices, their keys do not allow for ease of manipulation. Further, the user is frequently required to strike a key more than two times to produce a desired character. As a result, the speed at which text may be input is hindered by the inherent limitations of the text entry functionality provided with SMS-capable devices.
  • Embodiments of the invention are generally directed to a method and system for text entry wherein no reliance is placed on visually monitoring the text as it is entered, and wherein the method and system are universally applicable, allowing the input of characters associated with various languages.
  • a method of employing a text input device having at least two input detectors to generate a character set, comprising: generating a first single character output with a first specific gesture; generating a second single character output with a second specific gesture associated with a first timing requirement; and generating a third single character output with a third specific gesture associated with a second timing requirement.
  • a method for configuring a text input device for use with a single hand, comprising: providing a text input device having at least two input detectors; selecting a character set having a plurality of characters, wherein each character is associated with one or more sounds in a language; defining a set of gesture types; and selecting at least some of the characters and assigning each selected character to a specific gesture corresponding to one of the gesture types, based on a correlation between the sounds associated with the character and the specific gesture, wherein each specific gesture defines a manipulation of one or more of the input detectors.
  • a computer system wherein a user, by activating input detectors with at least three types of gestures, is able to produce a single character with each gesture, comprising: at least two input detectors; a recognition module which receives information from the input detectors; and an output generation module which receives information from the recognition module, wherein the output generation module produces a single character based on any information received from the recognition module indicating any one type of gesture.
  • Figure 1 illustrates a text input device having input detectors operated by a single hand as an embodiment of the current invention.
  • Figure 2 illustrates the correlation between fingers and additional points on a right hand to the input detectors of a text input device.
  • Figures 3A-3C and 3E-3J illustrate a modified stem-and-leaf diagram for mapping specific gestures to unique character outputs.
  • Figure 3D illustrates a standard stem-and-leaf diagram.
  • Figure 4 illustrates two modified stem-and-leaf diagrams for use with the Hindi language script and mapping characters to the first, second and third gesture types.
  • Figure 5A illustrates a general phonemic layout of the modified stem-and-leaf diagram.
  • Figure 5B illustrates a phonemic layout for Hindi where the characters mapped to the first, second and third gesture types are featured on a single modified stem-and-leaf diagram.
  • Figure 5C illustrates the Bharakhadi chart.
  • Figure 6 illustrates a correlation between spoken sounds and input detectors in relation to a general phonemic layout of the modified stem-and-leaf diagram.
  • Figure 7 illustrates a cellular phone adapted for use with the method of the current invention.
  • Figure 8 illustrates a computer system with a text input device to provide a user interface according to the current invention.
  • the present invention is a method for data entry, preferably but not exclusively through single-handed operation of a text input device.
  • the text input device has at least two input detectors, and in preferred embodiments the text input device has five or six input detectors.
  • each input detector 110-160 is activated by a specific part of the hand.
  • each of five input detectors 110-150 is activated by one of each of five fingers on a single hand 110'-150', where the thumb is defined to be a finger.
  • the thumb 110' of a user's right hand activates the first input detector 110 or "e" key.
  • other parts of the hand can be used to activate input detectors, such as the palm 160a, wrist 160b and the heel of the thumb 160c.
  • the five fingers 110'-150' activate five different input detectors 110-150 and the sixth input detector 160 is activated by the palm 160a, wrist 160b or heel of the thumb 160c.
  • the use of this method allows the input of numerous unique characters through the activation of input detectors by a single hand, with minimal targeting by finger movements and without necessarily requiring a specific fixed locality of the hand for text or other input, as the hand is free to move in any orientation or locality. Ubiquitous, eyes-free use is therefore possible.
  • the text input device 100 may be any device which is suitable to have input detectors associated with it. Examples of text input devices suitable for use with the invention include, but are not limited to, modified standard keyboards, cell phones, PDAs, SMS text messaging devices, video game controllers and wearable clothing which include input detectors.
  • the invention is a data entry device to be connected to a computing device.
  • the output is an electrical signal corresponding to the desired character based on actuation of the input detectors.
  • the invention is a portion of a computing device wherein software associated with the computing device translates the signals from the input detectors and outputs characters directly in accordance with the logic of this method.
  • a gesture can be a movement, or several sequential movements, of the fingers or parts of the hand that activates any of the input detectors 110-160, which then provides a single output, character or command, from the text input device 100.
  • the term character can describe any form of output from a computing device.
  • the output can include a single character from a character set, such as the character "b" from the English character set.
  • a simple example of a gesture would be to tap the index finger once, which could output the character "b"; another gesture would be to tap the thumb once, which could output the character "c".
  • both of these gestures are called 'specific gestures'. Specific gestures are grouped into categories. Each category represents one 'gesture type'.
  • a gesture type is a broad description of the sequential movements necessary to output a character.
  • the gesture type would include all specific gestures involving one short tap of any one finger.
  • Gesture types can be defined by the dynamically timed sequence of activations of the input detectors. There can be numerous gesture types, and each gesture type category includes various specific gestures. The activations associated with a gesture type define the total number of specific gestures possible from that gesture type. In the preferred embodiment, four different gesture types provide seventy-five unique outputs. The four gesture types are defined by the order and duration of input detector activation.
  • a "short” activation occurs when an input detector is tapped briefly.
  • a “long” activation occurs when an input detector is held for at least a predefined duration, typically twice the time necessary for a short activation.
  • the method allows for a fluctuating, variable speed of input even while the duration of the long gesture remains constant. This 'long' duration may be preset to suit user demands; the time durations mentioned for activations are for purposes of illustration only and may likewise be altered.
  • the timing measurements may be replaced with pressure, speed or other tactile requirements depending on the gesture type implemented.
  • the first gesture type is a single long activation of any input detector, held for at least a predefined duration. Tapping a single input detector for less than the predefined duration results in a null output.
  • the first gesture produces a single character associated with the particular character set in use. There are a total of 5 possible specific gestures and corresponding outputs for the first gesture type - one corresponding to each of the five buttons.
  • the first gesture may also be referred to as a single long activation.
  • the second gesture type is a sequence of two short activations of any of the single input detectors. After the first short activation, a first timing requirement must be met for the second short activation to be registered as a second gesture type input.
  • if the first timing requirement is not met, a second gesture type will not be registered by the text input device 100.
  • a correctly entered second gesture produces a single character associated with the particular character set in use. There are a total of 25 possible specific gestures and corresponding outputs for the second gesture type.
  • the second gesture is also referred to as a short-short activation.
  • the third gesture type is a sequence of a short activation followed by a long activation.
  • a second timing requirement is necessary such that the duration of the long activation must exceed the duration of a predetermined time period.
  • a correctly entered third gesture produces a single character associated with the particular character set in use. There are a total of 25 possible specific gestures and corresponding outputs for the third gesture type.
  • the third gesture is also referred to as a short-long activation.
  • the fourth gesture type is a sequence of a long activation followed by a short activation.
  • a third timing requirement is necessary such that the short activation must occur while the long activation is held.
  • a correctly entered fourth gesture produces a single character associated with the particular character set in use.
  • only 20 specific gestures are possible because, of the 25 possible ordered pairs of the 5 input detectors, the 5 pairs in which the same input detector is used for both the long and short activations are removed. Using the same input detector in the long and short activations is not possible for the fourth gesture type, because the short activation would have to occur on the same finger and detector that is already being held for the long activation.
  • the fourth gesture is also referred to as a long-short activation.
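  • The four gesture types described above can be summarised by a small recognition routine. The following Python sketch is illustrative only and is not taken from the patent; the event format, the threshold and timeout values, and all function and class names are assumptions made for the example.

```python
# A minimal sketch (assumed names and thresholds) of classifying the four gesture
# types from timed input-detector activations.
from dataclasses import dataclass
from typing import Optional, Tuple

LONG_THRESHOLD = 0.30   # seconds a detector must be held to count as "long" (user-settable)
PAIR_TIMEOUT = 0.50     # assumed maximum gap between the two parts of a two-part gesture

@dataclass
class Activation:
    detector: int      # 1..5, standing for input detectors 110-150
    press: float       # press timestamp, seconds
    release: float     # release timestamp, seconds

    @property
    def is_long(self) -> bool:
        return (self.release - self.press) >= LONG_THRESHOLD

def classify(first: Activation, second: Optional[Activation] = None
             ) -> Optional[Tuple[int, Tuple[int, ...]]]:
    """Return (gesture_type, detector_sequence), or None for a null input."""
    if second is None:
        # First gesture type: a single long activation; a lone short tap is ignored.
        return (1, (first.detector,)) if first.is_long else None

    gap = second.press - first.press
    overlapping = second.press < first.release

    if not first.is_long and not second.is_long and gap <= PAIR_TIMEOUT:
        return (2, (first.detector, second.detector))       # short-short
    if not first.is_long and second.is_long and gap <= PAIR_TIMEOUT:
        return (3, (first.detector, second.detector))       # short-long
    if first.is_long and not second.is_long and overlapping \
            and first.detector != second.detector:
        return (4, (first.detector, second.detector))       # long-short, chorded
    return None
```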
  • a total of 75 unique characters can be output by the text input device 100: 5 characters for the first gesture type, 25 for the second, 25 for the third and 20 for the fourth. This allows the user to efficiently generate text for languages that have many characters.
  • a sample effect of the addition and subtraction of input detectors and/or gesture types on the number of possible unique characters can be seen in Table 1.
  • TABLE 1 NUMBER OF UNIQUE OUTPUTS POSSIBLE USING SPECIFIC GESTURES
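  • The per-type counts quoted above follow directly from the gesture-type definitions, so the effect of adding or removing detectors can be computed rather than tabulated by hand. The short Python sketch below is an illustration of that arithmetic, not a reproduction of Table 1.

```python
# Illustrative count of unique outputs per gesture type for n input detectors,
# matching the figures given in the description (5 + 25 + 25 + 20 = 75 for n = 5).
def unique_outputs(n: int) -> dict:
    return {
        "type 1 (long)":        n,            # one output per detector
        "type 2 (short-short)": n * n,        # ordered pairs, repeats allowed
        "type 3 (short-long)":  n * n,        # ordered pairs, repeats allowed
        "type 4 (long-short)":  n * (n - 1),  # ordered pairs, same detector excluded
    }

for n in (4, 5, 6):
    counts = unique_outputs(n)
    print(n, "detectors:", counts, "total =", sum(counts.values()))
```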
  • a sixth input detector provides additional functionality by providing additional options to the user. These options include outputs from the particular character set in use, cursor functions, command keys, and navigation to different character sets such as numerics and ASCII, or to different language scripts such as English, Kannada, Japanese, Hindi and Bengali. Navigation to these other features is performed through the use of the sixth input detector in the gestures as defined above. For the text input device 100 to provide reliable input-output correspondence, a single short activation not followed by a second activation within a defined time period will time out and be ignored. The timeout period of the long activation can be user-set to correspond to the learning level and expertise of the user.
  • the duration of the short activation as well as the duration of the long activation can also be user definable.
  • because the first gesture type has a fixed activation time, a user must deliberately hold an input detector to output the character corresponding to the specific gesture.
  • the longer activation time to initiate data entry may tend to limit the speed of data entry for a user.
  • a visual, auditory or tactile feedback mechanism can signal the user that a long activation is imminent or that the activation is complete and the output has been registered. Feedback will cue the user to release the activation and proceed with data entry, allowing faster overall data entry.
  • FIGS. 3A-3J show the mapping of specific gestures to characters.
  • the mapping of each specific gesture to a character is described using a modified stem-and-leaf diagram 300.
  • the modified stem-and-leaf diagram 300 can optionally be displayed to the user while learning to use the text input device 100.
  • This diagram is a tool for using the text input device and it is not a necessity for use of the inventive method or system described herein.
  • Each cell along the diagonal, from the lower left to the upper right, defines a home key which corresponds to one of five input detectors 110-150.
  • the cell with the "e” corresponds to the first detector 1 10
  • the cell with "n” corresponds to the second detector 120 and so on.
  • the home key cells are also shaded as column 320.
  • the use of the modified stem-and-leaf diagram 300 and its complete description are outlined below in conjunction with five input detectors 110-150 of a text input device 100, operated by five fingers of a single hand 110'-150'. The description provided is used to explain a tool for learning the input technique of the current method and in no way limits the scope of the method.
  • the mapped character layout shown in Figure 3A is based on the English language. The depicted layout is merely used to describe the association with specific gestures and the mapped character output in diagram 300.
  • the diagram represents a collection of five five-by-one arrays, as organized in Figures 3B and 3C.
  • the cells of the columns of the combined arrays correlate to the five input detectors 110-150, capable of being activated by a user with five fingers 110'-150' of a single hand.
  • the 30 characters detailed in the cells of the matrix represent the 25 outputs possible by the second gesture, plus the five outputs possible by the first gesture.
  • the character in the upper left corresponds to the first gesture having the single long activation of a single input detector.
  • the characters in the lower right correspond to the second gesture having the short-short activation of a single input detector. It is noted that any single short activation will output a null code.
  • the remaining characters with non-shaded backgrounds relate to the short-short sequence of the second gesture, using two activators.
  • the first activation selects the column of the character
  • the second activation selects the row of the character.
  • sequential short-short activations start in the column of the first short activation and proceed towards the shaded cell in the row of the second activation.
  • the layout of diagram 300 provides an easy to follow look-up table for learning to use the text input device 100 to be able to input all the characters which may be associated with the text input device 100.
  • the thumb 110' corresponds to the lowest row and the little finger 150' corresponds to the upper row for a right-handed user.
  • a horizontal mirror of the diagram would be used for a left-handed user.
  • a similar diagram can be used.
  • an active display screen (e.g., CRT, LCD, holographic projector, or the like)
  • a change of characters on the modified stem-and-leaf diagram 300 displayed to the user can be performed by any defined gesture, the specific gestures preferably including the sixth input detector 160.
  • a series of diagrams can be prepared for the user which represent the characters associated with alternative character sets accessed preferably using the sixth input detector 160. Sequences using a sixth input detector 160 can be depicted as two horizontal bars above or below the modified stem-and-leaf diagram. The outputs in the first row are activated by selecting one of the other five input detectors 110-150 followed by the sixth input detector 160 while the outputs in the lower row are activated by first selecting the sixth input detector 160 followed by one of the other five input detectors 110-150.
  • the characters associated with the third and fourth gesture types may be presented in a similar second display 300b, in addition to a primary display 300a, as in Figure 4 (where only the third gesture is implemented). This avoids overcrowding of the display in languages such as Hindi or Japanese.
  • a third display can be provided for the fourth gesture type.
  • the additional characters for the third and fourth gesture types can be oriented in different segments of the cells of a single display 300, with each cell corresponding to a generic input sequence and not to specific gestures.
  • the specific gestures relate to a character's position in each cell. In either display method, the same method of look up relating to the mapped position to the input detector sequence is used for the third and fourth gestures as for the second gesture.
  • Figure 3E shows a select portion of the full modified stem-and-leaf diagram 300. The selected portion is also featured in the remaining Figures 3F-3J.
  • solid dots 350 refer to long activations, as in the first gesture, and arrows 360 refer to short activations. Arrows that extend the length of two cells imply two short activations, one of each detector corresponding to the home keys of the columns at the tail and head of the arrow. This is further explained in the example for the second gesture.
  • the user depresses the first input detector 110 with their thumb 110', and holds it for a predefined period or longer.
  • the user depresses the second input detector 120 with their index finger 120', and holds it for the predefined period, and so on for the other fingers to produce the outputs corresponding to the specific gestures of the first gesture type.
  • the process is further detailed in Figure 3G where the solid dot 350 represents a single long activation of the input detector in the column corresponding to the solid dot 350.
  • the second gesture type represents a short-short activation sequence. To express one of these characters, two sequential activations are necessary, where each is held for a shorter period than the predetermined period of the first gesture. The activation order needed is given by the shaded cells in the row and the column of the desired character. To output the letter "a", for example, the user activates two input detectors, the second input detector 120 followed by the third input detector 130. Alternatively, to output the letter "i", the user activates the third input detector 130 followed by the second input detector 120. The order in which the two keystrokes are performed is determined by moving horizontally towards the shaded diagonal of Figure 3A. Hence, for outputs depicted in cells above the shaded diagonal, activation flows from left to right.
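  • In code, the look-up described above amounts to indexing an ordered pair of detector activations into a table. The sketch below is a hedged illustration: it fills in only the outputs named explicitly in this description (the "e" and "n" home keys and the "a"/"i" examples), numbers the detectors 1-5 for input detectors 110-150, and does not reproduce the rest of the Figure 3A layout.

```python
# Partial look-up table for the modified stem-and-leaf diagram. Only entries stated
# in the surrounding text are included; the full English layout is omitted, and the
# exact assignment of home-key characters is assumed from the labels given above.
from typing import Optional

LONG_OUTPUT = {1: "e", 2: "n"}           # first-gesture (long) outputs of the home keys

SHORT_SHORT_OUTPUT = {
    (2, 3): "a",    # index finger 120' then middle finger 130' (column, then row)
    (3, 2): "i",    # middle finger 130' then index finger 120'
}

def lookup(first: int, second: Optional[int] = None) -> Optional[str]:
    """Character for a long activation (second is None) or a short-short sequence."""
    if second is None:
        return LONG_OUTPUT.get(first)
    return SHORT_SHORT_OUTPUT.get((first, second))
```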
  • in Figure 4, a matrix is displayed showing Hindi outputs.
  • the first diagram 300a is used in the same way as above for the modified stem-and-leaf diagram 300.
  • the second diagram 300b describes the use of the third gesture type.
  • the tail of the arrow 360 depicts the first key pressed and the head of the arrow 360 depicts the last key pressed as previously mentioned for the second gesture type.
  • the solid dot 350 corresponds to the column of the key that is held during activation.
  • a short thumb 110' activation followed by a long index finger 120' activation would result in the 'dha' character.
  • a short activation of the thumb 110' followed by a long activation of the thumb 110' would result in the 'pha' character.
  • This same graphical representation would apply for the fourth gesture, although specific figures are not shown.
  • the mapping of the stem-and-leaf diagram was laid out to provide users with increased input efficiency and speed, based on the statistical frequency of characters used in the English language.
  • the mapping of characters to gestures takes advantage of structural symmetry between either or both of the phonemic system of the language and the script system, and the system of gestures used in this invention.
  • Prior art keyboard design has long recognized that relating the mapping of keys to the phonemic system of the language is advantageous. Dvorak placed the five vowels of the English language under one hand.
  • a keyboard developed for Indian languages maps the unaspirated consonants to the unshifted level and the corresponding aspirated characters to the shifted level of the same key.
  • the present invention relates the phonemic system of the language (when present in the script), to the movements of the fingers and not to their targets. This becomes evident when one considers that an embodiment of the method exists through placing activators of some kind on the fingertips themselves. These activators can be used on any surface and without any targeting requirement. Reliable and accurate text output could be achieved even while the user simultaneously sweeps their hand from side to side across the contact surface of a table, wall or their own body.
  • gestures are mapped to the phonemic dimensions of the language.
  • Some languages such as Tamil and English are termed "diglossic" because the script and pronunciation do not reflect each other.
  • for such languages, some of the gestures would not relate to phonemics; instead, those gestures would relate to other aspects of the language, such as the frequency of use of a character in that language. Both the frequency aspect of the written language and the phonemic aspect are then used for these languages. This concept may be extended to any language.
  • Figure 5B depicts a diagram mapping phonemic relations of Hindi characters to specific gestures used to output the specific Hindi characters. It is assumed that the reader holds a sufficient knowledge and understanding of the Hindi language to understand the following example.
  • the use of specific finger names is in relation to the use of a five input detector text input device 100 with the right hand.
  • Voiceless sounds are symmetrically placed on the gestures proceeding towards the thumb 110', or in other words, the cells in the lower right portion of the diagram from the shaded diagonal.
  • Voiced sounds are placed on the gestures proceeding towards the little finger 150', or in other words, the cells in the upper left portion of the diagram from the shaded diagonal.
  • the voiceless stop sounds lie horizontally along the lower row.
  • the voiced stop sounds lie vertically along the far left column. Voiceless consonants are thus mapped to the second gesture type using a short- short activation sequence.
  • the corresponding aspirated sounds follow the same logic.
  • the third gesture requires that the second, long activation be held beyond a predetermined period. This held gesture corresponds to the aspirated sounds within this group of sounds.
  • the voiceless labial sounds are placed on the thumb, but the voiced labial sounds would theoretically be placed on the little finger 150' if a perfectly ordered relationship were desired. However, other factors need to be addressed when deciding on the mapping of characters to gestures.
  • the little finger 150' does not have independent neurological control; it shares neuronal pathways with the ring finger 140'. Thus, it is more difficult to control the little finger 150'.
  • the voiced labial sounds are therefore shifted to the more agile and stronger ring finger 140'.
  • the five nasal consonants are used quite frequently and are mapped to the gestures that are easily performed.
  • Characters representing "ma" and "na", the two most used nasal consonants, are associated with the second gesture with alternating sequences of the index finger 120' and middle finger 130'.
  • Another frequent nasal consonant, the Nukta, is placed on the third gesture using the middle finger 130' to index finger 120' sequence.
  • the Anuswara is placed as the third gesture type using the middle finger 130' to index finger 120' sequence.
  • the vowels are mapped to the single long first gesture type and to the second gesture and third gesture types that use a repetition of a single finger.
  • the three vowels "A", "I" and "U" are input using the third gesture type of each of the three inner fingers of the hand: the index 120', middle 130' and ring 140' fingers.
  • the short and long vowels for "I" and "U" are created by two identical consecutive gestures. The first input generates a short vowel, and an immediate repetition of the same gesture transforms the previous short vowel input into a long vowel. This requires an additional keystroke; however, the frequency of the long vowels is minimal compared to the short vowels. It is more efficient to map the long vowels to additional keystrokes that repeat the same gestures of the same fingers than to assign a low-frequency character permanently to one of the available gestures, which would remove that individually mapped gesture from common use.
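  • A small post-processing step suffices to implement the vowel-lengthening rule just described. In the sketch below, romanised placeholders ("i" to "ii", "u" to "uu") stand in for the actual Devanagari characters, and the gesture representation is an assumption made for the example, not the patent's implementation.

```python
# Hedged sketch of the lengthening rule: an immediate repetition of the same
# specific gesture converts the previously output short vowel into its long form.
SHORT_TO_LONG = {"i": "ii", "u": "uu"}   # placeholders for the Devanagari vowels

def emit(buffer: list, gesture, prev_gesture, char: str) -> None:
    """Append the character for a gesture, lengthening a repeated short vowel."""
    if gesture == prev_gesture and buffer and buffer[-1] in SHORT_TO_LONG:
        buffer[-1] = SHORT_TO_LONG[buffer[-1]]   # second identical gesture: lengthen
    else:
        buffer.append(char)

text = []
emit(text, ("short-long", 2), None, "i")                  # short "i"
emit(text, ("short-long", 2), ("short-long", 2), "i")     # becomes long "ii"
print(text)                                               # ['ii']
```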
  • the Halant character represents a joining action of two distinct characters into one combined character. It is frequently used and is thus mapped to an easily performed second gesture: the ring finger 140' followed by the index finger 120'.
  • a specific example of special scripting rules in Hindi is when the initial character of a word is "ra". If it is followed by "u" or "uu", it takes two different specific glyphs that are conjuncts of the "reph" and "u" glyphs. These two symbols are automatically selected by processing logic.
  • the vowel "R" has two matras depending on the shape of the preceding consonant.
  • the oblique stroke or the inverted "v" matra is automatically placed as necessary by processing logic.
  • the use of only one specific gesture by the user will allow the input of both of these characters in their correct context based on the processing logic. As such, the user does not have to select an entirely new character for the associated sound.
  • in FIG. 7, a cellular phone 400 is shown.
  • the cellular phone 400 is held in a single hand, either the left or right, and has several buttons or input detectors situated around the casing.
  • a cellular phone 400 to be held in the right hand has five input detectors or buttons 401-405.
  • the first button 401 is associated with the thumb
  • the second button 402 is associated with the index finger
  • the third button 403 is associated with the middle finger
  • the fourth button 404 is associated with the ring finger
  • the fifth button 405 is associated with the little finger.
  • the screen 407 is still visible to the user while in use.
  • the button layout is to be mirrored on the opposite faces of the cellular phone so that the screen 407 is visible for the user.
  • entire character sets for given languages are accessible through the buttons of this device; that is, they may be entered by the activation of the buttons or input detectors through the gestures.
  • a sixth button may be added near the palm to serve as a shift key, as described before.
  • the present invention may be implemented by means of software, hardware, or any combination of the two.
  • the present invention may also be implemented by reprogramming currently known devices. For example, the standard QWERTY keyboard may be used to implement the present invention by programming the system to which the keyboard is connected to recognize the appropriate number of keys as the required input detectors.
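  • As a hedged illustration of that reprogramming, a handful of keyboard keys can simply be remapped to detector numbers in software. The particular keys chosen below and the event handling are assumptions made for the example, not part of the patent.

```python
# Re-purposing five QWERTY keys as the five input detectors (illustrative mapping).
QWERTY_TO_DETECTOR = {
    "space": 1,   # thumb         -> first input detector 110
    "j": 2,       # index finger  -> second input detector 120
    "k": 3,       # middle finger -> third input detector 130
    "l": 4,       # ring finger   -> fourth input detector 140
    ";": 5,       # little finger -> fifth input detector 150
}

def key_to_detector(key: str):
    """Translate a key event into a detector number, or None for unmapped keys."""
    return QWERTY_TO_DETECTOR.get(key)
```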
  • Computer system 700 will be, in the preferred embodiment, a combination of hardware and software components used to implement the methodology required for the present invention.
  • the computer system 700 is used to illustrate the hardware/software components that may be used by the various devices in order to implement the method of the invention.
  • Computer system 700 will comprise a text input device 100.
  • the text input device 100 may be any device by which various gesture types may be detected, and may include, for example, a standard QWERTY keyboard, a touch screen, a cellular phone's key pad, or any other such device. As discussed above, the text input device 100 will contain input detectors 110-150. The number of input detectors may vary; here, for purposes of illustration, five input detectors are used to describe the system 700. The input detectors 110-150 may be any means by which different gesture types may be detected and differentiated. The input detectors 110-150 are all connected to a recognition module 702, by means of a wired connection, such as a bus connection, or a wireless connection, such as a radio frequency connection.
  • the recognition module 702 receives information from each input detector 110-150 when it is activated.
  • the recognition module 702 may further contain or be connected to an output generation module 704.
  • the output generation module 704 will contain the timing requirements and the correlation between gesture types and the characters that a gesture, or combination of gestures, is to produce.
  • the recognition module 702 will receive information from the input detectors 110-150 and transmit that information to the output generation module 704.
  • the output generation module may be a hardware component or software component that has been programmed so as to contain or have access to the various rules (gestures and timing requirements) associated with the production of characters.
  • the output generation module 704 will receive information from the recognition module 702, and will process the information based on the rules that it has access to.
  • the output of the output generation module 704 will be the electronic representation of the character that the user has entered, and this electronic representation may be sent to a display 706, wherein it is displayed, or to any other component (for example, storage means that may be associated with the system 700).
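  • The data flow just described (input detectors to recognition module 702 to output generation module 704 to display 706) can be pictured as a short pipeline. The sketch below is an assumed wiring for illustration; the patent describes these modules functionally, and the class names, method names and gesture-table shape are not taken from it.

```python
# Illustrative wiring of the components described for computer system 700.
class Display:
    """Stands in for display 706 or any other output component."""
    def show(self, character: str) -> None:
        print(character, end="")

class OutputGenerationModule:
    """Holds the timing rules and the gesture-to-character correlation (704)."""
    def __init__(self, gesture_table: dict, display: Display):
        self.gesture_table = gesture_table   # (gesture_type, detectors) -> character
        self.display = display

    def process(self, gesture) -> None:
        if gesture is not None and gesture in self.gesture_table:
            self.display.show(self.gesture_table[gesture])

class RecognitionModule:
    """Receives timed activation information from the input detectors (702)."""
    def __init__(self, classifier, output_module: OutputGenerationModule):
        self.classifier = classifier         # e.g. a routine like the classify() sketch above
        self.output_module = output_module

    def on_activations(self, first, second=None) -> None:
        # Forward the classified gesture to the output generation module.
        self.output_module.process(self.classifier(first, second))
```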
  • the description of the computer system 700 is intended as a general description of the hardware/software components required to implement this invention, as there are other components which may be used to generate the output based on the given input.

Abstract

The present invention relates to a method of employing a text input device having at least two input detectors to generate a character set, comprising: generating a first single character output with a first specific gesture; generating a second single character output with a second specific gesture associated with a first timing requirement; and generating a third single character output with a specific gesture associated with a second timing requirement.
PCT/CA2005/000383 2004-03-11 2005-03-11 System and method for text entry WO2005088522A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US55182804P 2004-03-11 2004-03-11
US60/551,828 2004-03-11

Publications (1)

Publication Number Publication Date
WO2005088522A1 (fr) 2005-09-22

Family

ID=34975782

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CA2005/000383 WO2005088522A1 (fr) 2004-03-11 2005-03-11 System and method for text entry

Country Status (1)

Country Link
WO (1) WO2005088522A1 (fr)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002065267A1 (fr) * 2001-02-14 2002-08-22 Taylor, Russell, Jeffrey Global text input apparatus
WO2003083632A2 (fr) * 2002-03-28 2003-10-09 Textm Inc. System, method and computer program product for single-handed data entry
CA2481498A1 (fr) * 2002-04-04 2003-10-16 Xrgomics Pte. Ltd Reduced keyboard system emulating QWERTY-type keyboard mapping and typing

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105975091A (zh) * 2016-07-05 2016-09-28 南京理工大学 Virtual keyboard human-computer interaction technique based on inertial sensors


Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

WWW Wipo information: withdrawn in national office

Country of ref document: DE

122 Ep: pct application non-entry in european phase