WO2014072734A1 - Gesture input method and apparatus - Google Patents

Gesture input method and apparatus

Info

Publication number
WO2014072734A1
Authority
WO
WIPO (PCT)
Prior art keywords
gestures
input means
gesture
movement
text
Prior art date
Application number
PCT/GB2013/052948
Other languages
French (fr)
Inventor
Dave RAWCLIFFE
Original Assignee
Rawcliffe Dave
Priority date
Filing date
Publication date
Application filed by Rawcliffe Dave filed Critical Rawcliffe Dave
Priority to EP13795285.9A (EP 2917812 A1)
Publication of WO2014072734A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233Character input methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012Head tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233Character input methods
    • G06F3/0234Character input methods using switches operable in different directions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233Character input methods
    • G06F3/0235Character input methods using chord techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text

Definitions

  • the invention relates to a method for entering text on an electronic device such as a mobile phone.
  • the invention also relates to an electronic device, system and an input device operable by a user for entering text.
  • The user interfaces of known touch sensitive computing devices, such as mobile phones and tablets, are continuously improving.
  • Users require quick and easy means for entering text on a document or email.
  • Known character and text input methods include virtual input e.g. keyboard on touch-sensitive display, and physical keyboards.
  • Known input methods also include handwriting recognition devices that translate text written by a user on a touch sensitive pad. More complex systems can convert combinations of strokes on a touch sensitive display into Chinese characters.
  • Known systems have a number of disadvantages.
  • Virtual keyboards occupy a substantial part of touch-sensitive display because the minimum size of each virtual key is determined by the size of a human finger.
  • Text input on a virtual keyboard can be slow and inaccurate compared to physical devices such as a conventional 101-key US keyboard.
  • Traditional keyboards are either bulky, or when miniaturised they are difficult to use and add weight and complexity to a device.
  • Text input on games consoles, e.g. of a player's name, often involves cumbersome navigation around an on-screen display of characters.
  • Handwriting or character recognition requires advanced processing, complex look-up registers and tends to require a large interface on a device.
  • finger-based touch input, though intuitive, can suffer from low precision due to a small interface, or the "fat finger" issue. Issues with known text entry systems are exacerbated when a user is in motion, e.g. walking, and unable to pay close attention to the interface.
  • the invention seeks to address problems associated with text or character entry systems by limiting the set of gestures required to 'swipes' and 'taps' such that text can be entered reliably, consistently and quickly.
  • the use of a small set of simple gestures reduces the probability of error.
  • the gestures are unambiguous and can accurately be recognised by a device. They are easy to learn and do not require precision from the user, either in location or execution (a tap is a tap anywhere on the device, a swipe can start and finish with a wide degree of tolerance). Thus both user input and device recognition errors are reduced and speed of text entry is increased.
  • By combining gestures the invention provides an alternative means of generating symbols, such as alpha-numeric or ASCII characters, such that the traditional "QWERTY" keyboard, whether physical or virtual, is redundant, or merely required as a backup device.
  • gestures can be combined in pairs.
  • the invention resides in a method for entering symbols, such as text or characters, for display on an electronic device in response to the gestures of a user operating the device.
  • a device which recognises the gestures and generates the symbols can also display and use the symbols (e.g. a mobile phone) or it may operate as an input means to another device (e.g. a TV remote or a replacement keyboard) via a communication port.
  • the gesture can be made by operating the device and/or by operating input means connected to the device.
  • the device has an input means configured to detect a gesture, and said input means can be a communication port.
  • the method comprises detecting at the input means a gesture from a set of gestures, wherein the set consists of five (5) gestures. In other words, only five (5) gestures need be recognised by the input port.
  • the gesture can take the form of a signal provided directly at the input means and/or provided by a transducer of the input means.
  • the method includes monitoring the sequence of gestures detected by said input means.
  • the method further includes identifying from a register a symbol, or character corresponding to a sub-sequence of said gestures.
  • the sub-sequence is a set of sequential gestures received at the input means.
  • the sub-sequence can be a sequence of gestures made in a chronological order.
  • the register can be a look-up table or similar data table held in memory.
  • the method operates the device to communicate to the receiving device (which may be the same device) the symbol corresponding to the subsequence, for the receiving device to display or otherwise process the symbol.
  • One gesture can correspond to one signal provided by the input means and/or the transducer of the input means.
  • the device can process the signal in a number of ways, for example it can display characters as they are recognised in response to gestures made.
  • the device can be a portable electronic device such as a tablet or mobile phone, having a touch-sensitive display.
  • a gesture can be made by actuating a transducer which functions as an input means and is connected to the device, by pressing a physical button of the device or a virtual button of the device shown on the display, or by making a tapping or swiping movement across the surface of the display.
  • the display can function as the input means, and the number of gestures that need to be recognised is limited to five (5). This aids learning, and reduces both device recognition and user input errors when making a gesture.
  • the input means can have predetermined axes, such as Cartesian axes, and the set of gestures can consist of: up; down; left; right; and a push, press, tap or movement that extends linearly on a plane extending from, or through, the plane defined by the up, down, left and right orientations or axes. These movements are less subject to interpretation, and require less processing or interpretation by processing means of the display to recognise or decipher what gesture was made. In other words, the sequence of gestures interpreted by the input means is deterministic because, after entry, no further interpretation or selection is required for entering a symbol, or sending a signal to be displayed.
  • the input means can have predetermined axes, such as Cartesian axes, and a set of gestures that consists of: a movement extending linearly upwards; a movement extending linearly downwards; a movement extending linearly from left to right; a movement extending linearly from right to left; and a push, press, tap or movement that extends linearly on a plane extending from, or through, the plane defined by the up, down, left or right orientations or axes.
  • These tap and swipe type movements are uncomplicated to make and require less processing or interpretation by an input means or a device having an input means.
  • Non-linear characters such as those that emulate handwriting are subject to interpretation.
  • up, down, left and right can be interpreted as the peripheral positions on an input means or apparatus, such as the display, as viewed from a user's perspective.
  • Each gesture can have a single direction.
  • the gesture and its direction can be made, and recognised, in a two-dimensional plane or on a plane configured in three dimensions.
  • the push/tap/press action can be an action or movement extending perpendicularly through the plane defining the up, down, left and right orientations or axes.
  • the invention can be described, by way of example, as using a combination of a pair of gestures, wherein said number of gestures recognisable by an input means or a device is limited to five (5). Twenty-five (25) different combinations are possible.
  • the device could read from the register a symbol that corresponds to a sequence of at least a pair (2) of gestures wherein the number of gestures required to be recognised by an input means or a device is limited to nine (9) gestures or signals detected at the input.
  • the input means has predetermined axes, such as Cartesian axes, and is configured to detect a gesture, or a signal created by a gesture, from the group of: a movement extending linearly: upward; downward; from left to right; from right to left; upward from left to right; upward from right to left; downward from right to left; or downward from left to right; and a push, press, tap or similar linear movement on a plane substantially perpendicular to, or through, a plane defined by the up, down, left and right orientations.
  • Eighty-one (81) different combinations are possible.
  • an input means and/or device capable of recognising and distinguishing between single and multi-touch events (e.g. a single tap and a double-tap, where the same gesture is repeated within a defined time interval, typically of the order of 0.3 seconds) provides a set of six (6) gestures if a double-tap is included.
  • a set of six (6) gestures can consist of: a movement extending linearly upwards; a movement extending linearly downwards; a movement extending linearly from left to right; a movement extending linearly from right to left; a push, press, tap or similar linear movement on a plane substantially perpendicular to, or through, a plane defined by the up, down, left and right orientations; and a double-tap.
  • an input means and/or device capable of recognising and distinguishing between single and multi-touch events for all five (5) gestures (e.g. a single up-swipe and a double up-swipe where the same gesture is repeated within a defined time interval, typically of the order of 0.3 seconds), provides a set of gestures wherein the number of gestures recognisable by an input means or a device is limited to ten (10).
  • the set then consists of single and double versions of the five (5) gestures: a movement extending linearly upwards; a movement extending linearly downwards; a movement extending linearly from left to right; a movement extending linearly from right to left; and a single-tap, i.e. a push, press, tap or movement that extends linearly on a plane extending from, or through, the plane defined by the up, down, left or right orientations or axes.
  • One hundred (100) different combinations are possible.
  • an input means and/or device capable of recognising and distinguishing between five (5) single gestures, consisting of: a movement extending linearly upwards; a movement extending linearly downwards; a movement extending linearly from left to right; a movement extending linearly from right to left; and a single-tap, i.e. a push, press, tap or movement that extends linearly on a plane extending from, or through, the plane defined by the up, down, left or right orientations or axes; combined with a sub-sequence recognition algorithm which treats a repeated single gesture as a different gesture for the purpose of using the look-up table, would generate forty-five (45) different combinations (e.g. tap followed by up would generate a symbol; tap followed by another tap would not generate a symbol but would wait for another gesture, so 'tap, up' would generate one symbol and 'tap, tap, up' a different symbol).
  • In other words, the sub-sequence can consist of a single gesture followed by one of the four (4) other gestures, which provides twenty (20) variations, or a repeated (doubled) gesture followed by any of the five (5) gestures, which provides twenty-five (25). Together, forty-five (45) combinations are possible.
  • the sub-sequence can comprise at least a first gesture and a subsequent gesture.
  • the sub-sequence can consist of two (2) sequential gestures, or a pair of gestures whose sequence can be determined (e.g. from left and right hands). Note that if a sequence cannot be determined, the number of distinguishable combinations of pairs of five gestures drops to fifteen (15), i.e. ten unordered pairs of distinct gestures plus five repeated pairs. In other words, two (2) gestures made by a user one after the other are recognised and combined by the device, and a look-up table or similar register is used to find the symbol corresponding to the gestures made. Note that intermediate actions can be made by a user between gestures, but these actions are not associated with the generation of a symbol and are, instead, associated with the selection of a shift key, switching a register, or some other ancillary function.
  • the display can be a touch-sensitive display screen, and the display has areas including: a text area for displaying and editing text; and a function area for managing the entry of gestures, the selection of a register and editing text.
  • Managing the entry of gestures can involve recognising the gesture made, storing the associated signal in memory while a further gesture is made and using the sub-sequence of gestures to look-up the associated symbol and returning it to the device (which can display it at the cursor location on the display, or otherwise use it).
  • the function area includes a plurality of zones, including at least one of: a selection zone for selecting the status of the function area, which can change the status of the function area by, for example, switching it between an enabled state, a disabled state and a minimized state; an indication zone for indicating the last gesture recognised by the input means and, optionally, showing further symbols entered, or showing the last character entered; an editing zone for editing using keyboard-type text-adjustment functions such as delete, space and return; and an amending zone for changing the status of at least a shift-key and/or a register-key.
  • a help key which displays the mapping of gestures to characters, can be provided.
  • the invention resides in an electronic device having a display and configured for entering symbols such as text or characters on a display in response to a gesture of a user operating the device, wherein the device has: an input means configured to detect a gesture, wherein the input means is configured to recognise gestures from a set of gestures and wherein the set consists of five (5) gestures; and a controller configured to track the sequence of gestures detected by said input means, and wherein the controller is configured to identify from a register a symbol corresponding to a sub-sequence of said gestures and return the symbol corresponding to the sub-sequence for display or other processing.
  • the electronic device can have input means having predetermined axes, such as Cartesian axes, and the input means is configured to recognise a set of gestures consisting of: up; down; left; right; and a push, press, tap or movement that extends linearly on a plane extending from, or through, the plane defined by the up, down, left or right orientations or axes.
  • the input means can have predetermined axes, such as Cartesian axes, and the input means is configured to recognise a set of gestures consisting of: a movement extending linearly upwards; a movement extending linearly downwards; a movement extending linearly from left to right; a movement extending linearly from right to left; and a push, press, tap or movement that extends linearly on a plane extending from, or through, the plane defined by the up, down, left or right orientations or axes.
  • the input means can be configured to recognise gestures from a set of nine (9) gestures; in that case the input means has predetermined axes, such as Cartesian axes, and is configured to recognise a set of gestures consisting of: a movement extending linearly: upward; downward; from left to right; from right to left; upward from left to right; upward from right to left; downward from right to left; or downward from left to right; and a push, press, tap or similar linear movement on a plane substantially perpendicular to, or through, a plane defined by the up, down, left or right orientations.
  • the sub-sequence can have at least a first gesture and a subsequent gesture, or consists of two (2) sequential gestures.
  • the display screen can be touch-sensitive, and the display can have areas including: a text area for displaying and editing text; and a function area for managing the entry of gestures, the selection of a register and editing text, wherein the function area includes a plurality of zones, including at least one of: a selector, configured to change the status of the function area; an indicator, configured to show at least the last gesture recognised by the input means; an editor, configured to enable keyboard-type text-adjustment functions such as delete, space and return; and a status selector, configured to enable a user to change the status of at least a shift-key and a register-key.
  • the invention resides in input means for operating in conjunction with different aspects of the invention disclosed herein.
  • the input means are configured to convert a gesture into a signal, and provide said signal to a device for processing.
  • the input means can be any one of a number of apparatus or transducers, used alone or in combination to operate a device that is configured to recognise gestures made by a user.
  • An input means can be at least one of: a joystick, wherein the forward, backward, left and right movements correspond, respectively, to the up, down, left and right movements, while a press on the pivot-point of the joystick, or a button on the joystick, corresponds to a push, press, tap or movement; a computer mouse; the cursor keys and space bar on a QWERTY keyboard; a touch sensitive pad, wherein a finger or pen can make up, down, left and right swipe-type movements, while a finger press or dot corresponds to a push, press, tap or movement; an eye and/or head movement recognition device (e.g. for use by someone who is tetraplegic); or an accelerometer configured to detect movement of a device, or configured to detect movement of a digit and/or hand and/or head and/or limb when attached thereto.
  • the input means can be a five-transducer hand held device configured to fit ergonomically within the palm of a human hand such that the fingers fold over the peripheral edge, where a plurality of buttons corresponding to the tap, up, down, left and right gestures are located.
  • the input means can include a ten-transducer, or ten-button, hand held device.
  • the input means in the form of a hand held device can use an accelerometer to detect one or more of the gestures, such as left, right, up and down movement of a hand about the wrist of a user.
  • the invention resides in a system having a device and an input means as described herein, and/or a system operating the method described herein.
  • the invention resides in a computer readable storage medium storing one or more programs, said programs having instructions, which when executed by an electronic device perform the method described herein.
  • the invention resides in the components and/or combination of components that enables a sequence of gestures to be recognised such that input errors are negligible and said gestures are interpreted by an interface and communicated to a processor for output on a display, or otherwise used, e.g. to control another component or device.
  • the components described herein, such as the interface, processor and device, can be present in a single electronic device or a system, and communicate with each other to implement the invention within said device or system.
  • the method, device and system of the invention improve the reliability of the communication and the interoperability of the components, such as the interface and display, to enter symbols in an electronic device.
  • Figure 1 is a schematic view of an interface of an electronic device having a touch-sensitive display with different functional areas;
  • Figure 2 is a schematic view of the device of Figure 1, showing the functional areas in a "not active" state;
  • Figures 3a and 3b are schematic views of the device of Figure 1, showing an alternative arrangement of the functional areas;
  • Figure 4 is a schematic diagram of the components of a device or system according to the invention;
  • Figures 5a and 5b are tables or registers showing which symbols correspond to which sequence of gestures;
  • Figure 6 is a flow chart showing the relationship between the functions of the device or system;
  • Figures 7a and 7b show schematic views of the lower and upper surfaces, respectively, of an input device for use with one hand;
  • Figure 8 shows two schematic side views of an alternative input device, for use with both hands; and
  • Figure 9 shows the rear of a digital camera with a multi-selector switch.
  • Figures 1 to 3b show an electronic device 10 having a display 12.
  • the display is a touch-sensitive display and functions as the input means 14.
  • the display is shown with two (2) areas: a text area 16, where an alphanumeric text message can be shown; and a function area 18, that can be operated to create the message and edit the text therein.
  • the function area can be described as a 'swipeboard' 18 because, in use, a user will operate the device and enter text by, primarily, touching or swiping this area of the input means 14.
  • the swipeboard also includes one or more virtual, or on-screen buttons.
  • the swipeboard 18 has a number of sub-areas including: a status indicator 20 that shows which mode the swipeboard 18 is in, e.g. the word 'swipe' will show on the indicator 20 when the entire display 12 functions as an input means 14 for entering text into the text area 16; one or more mode indicators 22a, 22b, 22c which function as switches, enabling a user to select and deselect different text entry modes, such as selecting between upper and lower case text entry by toggling a 'shift' register using key 22a, or selecting which register is to be used for character or symbol entry using key 22b or 22c; an edit zone 24a, 24b, 24d for editing the text shown in the text area 16, the edit zone having virtual buttons including delete 24a, space 24b and return 24d; and a gesture indicator 26 that shows the last gesture and/or the character generated by the swipeboard 18.
  • the swipeboard 18 can, in some modes, include an arrow 28 indicating the position of the text-cursor, which in turn indicates where text will be entered on the display.
  • Figure 4 is, by way of example, a schematic diagram of the device 10, or of a system comprising the functions of the device, upon which the invention can be implemented; the general method described herein can be implemented using, at least in part, software operating on a computer system.
  • the device 10 can have the components in Figure 4, which is an example of a computer system having a device 100.
  • the device 100 includes a bus 102, at least one processor 104, at least one communication port 106, a main memory 108, a removable storage media 110, a read only memory 112 and a random access memory 114.
  • the components of device 100 can be configured across two (2) or more devices, or the components can reside in a single device 10.
  • the device can also include a battery 116.
  • the port 106 can be complemented by an input means 118 and an output connection 120.
  • the processor 104 can be any suitable processor such as (but not limited to) an Intel(R), AMD(R) or ARM processor.
  • the processor may be specifically dedicated to the device.
  • the port 106 can be a wired connection, such as an RS-232 connection, or a Bluetooth connection or any such wireless connection.
  • the port can be configured to communicate on a network such as a Local Area Network (LAN), a Wide Area Network (WAN), or any network to which the device 100 connects.
  • the read only memory 112 can store instructions for the processor 104.
  • the bus 102 communicably couples the processor 104 with the other memory 110, 112, 114, 108 and port 106, as well as the input and output connections 118, 120.
  • the bus can be a PCI/PCI-X or SCSI based system bus depending on the storage devices used, for example.
  • the removable memory 110 can be any kind of external hard-drive, floppy drive or flash drive, for example.
  • the device and components therein are provided by way of example and do not limit the scope of the invention.
  • the processor 104 can implement the methods described herein.
  • the processor 104 can be configured to retrieve and/or receive information from a remote server or device.
  • the device can have an input means 14 that is in addition to, or an alternative to, the input means of the functional area 18.
  • the device 100 also includes a gesture receipt application 122, to receive gestures via the bus 102 which may have been input via, for example but not limited to, a touch sensitive device, a joystick or transducers. These devices can be operable to control a touch-sensitive display module 124.
  • the application stores the gesture and notifies the processor which gesture it has received (so that, for example, feedback can be given to the user).
  • the application has a gesture processing module 126, which combines gestures, translates them to symbols via a look-up table held in memory (108, 112 or 114), and maintains state, e.g. whether it is awaiting a first or second gesture, which register is active, and the last symbol generated.
  • a character module 128 returns the symbol or text editing action (space, delete, return) to the output 120 of the device for processing by the active application.
  • the method and/or device herein enable a user to perform text entry and editing operations using combinations of pairs of simple gestures, using a touch sensitive device or other means of generating gestures.
  • the swipeboard 18 is configured to recognize five (5) gestures made by a user on the surface of the touch screen display 12 of the device 10.
  • the gestures are a tap on the screen and four movements across the surface of the screen, namely upwards, downwards, to the left and to the right. These gestures can be referred to as Cuneiform gestures. These movements can be combined to provide a minimum of 25 combinations (when two gestures are combined).
  • the movements tap, up, down, right and left can be represented symbolically, for example by a dot for the tap and by four directional arrows for up, down, right and left.
  • Figure 5a is a table showing 25 pairs of gestures and their corresponding letters. If the shift key 22a were active, the same gestures would combine to select upper-case letters. If further characters, such as punctuation marks, are required then further registers 22b, 22c, such as those shown in Figure 5b, can be used to access further symbols. To access such symbols, the appropriate register key 22a, 22b or 22c is toggled to select which register the device accesses, as set out in Figures 5a and 5b.
  • register key 22a (which sets the register as in figure 5a), allows a user to enter one of 25 Roman characters.
  • Using the register key 22b allows a user to enter one or more characters from the second register of figure 5b.
  • Using the register key 22c allows a user to enter one or more characters from the third register of figure 5b, which includes a small number of user definable options.
  • the symbols 'q' and 'Q' are on the second and third registers. Note that these registers comprise symbols and are not limited to those shown in registers of the Figures. A further number of registers can be provided to access additional letters according to language requirements.
  • Such patterns assist learning and a natural alphabetisation (as in the right side of Table 1), with the sequence tap, up, down, right and left.
  • the swipeboard 18 can be configured according to the area required for gesture recognition purposes.
  • the touch sensitive display 12 is configured with a swipeboard on said display, and the swipeboard provides a sub-area for gesture recognition within the swipeboard 18 image, as shown in Figure 1.
  • the swipeboard 18 can be minimized in size such that the swipeboard becomes, functionally, the whole of the surface of the touch sensitive device for gesture recognition.
  • the functional areas are in a minimised or inactive state (e.g. when the user is reading or scrolling the text).
  • Figure 2 indicates the swipeboard in a "not active" state. Taps and gestures on the surface of the touch sensitive device will be passed to the application to e.g. select text or scroll the display as the application determines.
  • the swipeboard 18 can take the form of an image that functions as a control-panel for accessing, for example, the register toggle 22b, while the remainder of the screen can be used for gesture recognition.
  • the swipeboard can be positioned in the most appropriate position according to the location at which text is being entered.
  • Figures 3a and 3b show the swipeboard 18 and the different positions of the swipeboard and arrow 28 with respect to the text entered by a user. The user's taps and gestures are then used to generate symbols until the swipeboard is deactivated by tapping the status indicator 20 again.
  • the character or symbol entry systems disclosed herein can use an input means in addition, or alternatively, to virtual or physical QWERTY keyboards.
  • errors associated with touching the wrong key or multiple keys are obviated.
  • When the swipeboard 18 is active, and awaiting a sequence of two (2) gestures that will represent a character or symbol to be entered, the device 10 functions in one of two (2) states: it is either awaiting a first gesture or awaiting a second gesture.
  • the device then combines the numbers from the two (2) gestures, recognises the state of the shift 22a and register 22b, 22c keys and uses an algorithm to determine which character or symbol to generate and pass back to the application on the device to process (normally by inserting it at the cursor position). The character or symbol is also shown in an indicator 26 on the swipeboard 18. The device then clears the previous two (2) gestures and associated numbers from memory and awaits the first gesture associated with the next character or symbol.
  • a user can toggle which symbol will be generated in response to a sequence of two (2) gestures by pressing one of the register 22a, 22b, 22c buttons on the virtual display 18, 12.
  • if register 1 is selected then a tap followed by a tap generates "a" ("A" if shift is active);
  • if register 2 is selected then the same pair of gestures generates "." (period); and
  • if register 3 is selected then it generates "*" (star).
  • the indicator 26 shows the first gesture made.
  • after the second gesture it shows the character or symbol corresponding to the sequence of two (2) gestures.
  • a user can correct or remove the first gesture entered by pressing on the indicator 26 (the virtual button on the screen 12) whilst it is showing the last gesture made; the gesture is removed from the store and the swipeboard returns to its awaiting-first-gesture state.
  • if the swipeboard 18 is being used in whole-device gesture recognition mode, i.e. as per Figures 2, 3a and 3b, the swipeboard points to the current text insertion point or cursor point.
  • the swipeboard 18 can be positioned by the user in the best place to enter text by pressing and holding a digit on the swipeboard icon and dragging it about the screen in a conventional manner. Gestures on the whole device, outside the swipeboard 18 area, are interpreted as either first or second gestures until the status indicator 20 is activated, when the swipeboard returns to the format shown in Figure 1.
  • the processor 104 can control the device 10 to implement a mode of operation of the invention as described in Figure 6, in which a device operates the invention as shown in Figure 1 through various 'states'.
  • Other modes of operation are possible within the scope of the invention.
  • In an initialise state 202, the device displays a screen as shown in Figure 1 and is awaiting activation. Touching the swipeboard 18 or one of the other buttons of the swipeboard triggers the device to start or stop the process 204. If a gesture triggers the device to start or stop, the gesture will be recognised and shown in the indicator 26.
  • When the process is started at 204, the device enters a first waiting state 206, wherein it is awaiting a first gesture. The device then determines whether a user made a gesture or activated a key at a first check state 208. The device then enters a first decision state 210 to determine whether the user requested one of options i) to iv).
  • Options i) to iv) are indicative of a user selecting one of the swipeboard's keys.
  • the device recognises the gesture in a first gesture state 212 and stores a value, such as a number, associated with the gesture.
  • the device then enters a second waiting state 214.
  • the device determines whether a user made a gesture or activated a key at a second check state 216.
  • the device then enters a second decision state 218 to determine whether the user requested one of options i) to iv).
  • Options i) to iv) are indicative of a user selecting one of the swipeboard's keys.
  • the device recognises the gesture in a second gesture state 220 and stores a value, such as a number, associated with the gesture.
  • a character generator 222 retrieves from the register the character corresponding to the last two (2) gestures made in sequence and sends the generated character to the application using the invention as a method of entering text. At the same time, said character is shown in the indicator 26. After the generated character is sent to the application, the process returns to the start 204.
  • the device determines that a key was activated and takes one of the following actions before returning to the start 204, said actions including:
  • Pressing 22a generates subsequent characters from the first register. If pressed when the application is already in the first register it toggles between lower and uppercase. A double tap on 22a acts as a shift lock.
  • the device determines that a key was activated and takes one of the following actions before returning to the start 204, said actions including:
  • the input means can, by way of example, be a touch sensitive display, as described above. Additionally or alternatively, the input means can be a hand held device configured and/or optimised for entering text by making sequences of gestures.
  • the computer code or program operates the device, which recognises, via the input means such as a touch sensitive device, the gestures (tap, up, down, right, left), which are enumerated so that a tap has value 0, up 1, down 2, right 3 and left 4.
  • the shift and register keys are enumerated as 'first' (abc) 0, firstShift (ABC) 1, second 2 and third 3.
  • a function combines the first and second gestures with the register information and returns the chosen character, where R is the enumeration of the shift and register keys.
  • a 100 character string is created and stored in memory (108, 112 or 114), corresponding to and implementing Figures 5a and 5b (in the original description the string is split over four lines for clarity; it is not reproduced in this extract).
  • the code can appear as a short routine that indexes that string using the two enumerated gestures and the register value; an illustrative sketch of one such routine is given after this definitions list.
  • G1 is the enumerated first gesture and G2 the second.
  • Figures 7a and 7b show, respectively an underside and a top side of a device in the form of an input disc or device 300 suitable for one-handed operation.
  • the drawing shows a disc, but the physical implementation would be a device which sits most comfortably in the hand with the buttons positioned ergonomically for the fingers.
  • Activation of keys located on the peripheral edge of the device provides the means to make gestures corresponding to the movements tap, up, down, right and left.
  • a display screen 302 is provided in order that the disc can function as a stand-alone device (e.g. a mobile phone). Without a display screen the disc could function as an input mechanism to another device (e.g. as remote control for a television).
  • Buttons include tap 304, up 306, down 308, right 310 and left 312 keys for entering gestures.
  • Functional buttons include register 314, shift 316, space 318, return 320 and delete 322 keys.
  • When holding the device in one hand the thumb has greater mobility than the other digits, so it can be used to operate numerous keys (304, 318, 320, 322), while the little finger could activate the register key (314).
  • the characters generated are used in the text display area (302).
  • Figure 8 shows an alternative embodiment of an input disc 400 having a plurality of keys 402.
  • the disc 400 is ergonomically configured for use by both hands.
  • the disc can have 10 keys, one for each digit when using both hands, to generate the gestures.
  • Such a device would not function as a stand-alone device but would act as a replacement for a traditional physical key-board.
  • When holding a device in this way, both thumbs have greater mobility than the other digits, so they would also be used for the shift, space, return and other keys.
  • Figure 9 shows an alternative embodiment in which the multi-selector switch 512 at the rear of a digital camera 500 is used to enter text on its LCD screen 510.
  • Pressing OK 514 corresponds to a tap, up 516 to an upwards gesture, down 518 to a downwards gesture, right 520 to a rightwards gesture and left 522 to a leftwards gesture.
  • the gestures made by a user to enter text or symbols on an electronic device can be used to cross-reference a look-up table or register to retrieve a corresponding symbol and return it to the device to use, typically by entering it at the cursor point on a text display.
  • Any number of gestures can be used, for example a set of five (5), nine (9) or ten (10) gestures, and the number of gestures in a sequence can be varied, but for example can be fixed at two (2) or three (3) gestures in a sequence.
  • different sets of gestures, and the number of gestures used in a sequence to select and enter a symbol into a text, have different advantages depending on the application.
  • the advantage of a set of five (5) gestures combined in pairs, as preferred in this invention, is that it is the minimum required to generate the Roman alphabet (albeit without q).
  • a larger set of gestures, having perhaps ten (10) or more gestures, in combination with a longer sequence, of perhaps four (4) or more gestures, provides a greater number of combinations and, therefore, a larger number of look-up references from a data set or register.
  • the invention preferably has a limited set of five (5) gestures. These gestures are simple to make and the combinations are easy to learn, and are thus less susceptible to user error, especially because the gestures extend linearly.
  • the processing requirements of the input means, such as a touch-sensitive display, and of the processor and associated logic are reduced because each gesture is interpreted as only one of a small set of gestures, such as the five (5) in the example given.
  • the input means, method and device of the invention therefore function to inhibit input errors.
  • Figure 9 shows a multi-selector switch on the back of a digital camera being used to generate the gestures, thus enabling a photographer to keep notes on the camera. This illustrates how flexible and compact text and symbol entry can become with the teaching of this invention.
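The bullet points above describe a function that combines the enumerated gestures G1 and G2 with the register enumeration R and a 100-character look-up string. Since the string itself is not reproduced in this extract, the following is only a hedged sketch of one way such a function might be written: the index formula R*25 + G1*5 + G2, the alphabetical ordering of the lower-case register and the placeholder '?' entries are assumptions for illustration, chosen so that the 100 characters fall into four blocks of 25 pairs, one per register.

```python
# Hypothetical sketch of the combining function described in the bullet points above.
# Gestures are enumerated tap=0, up=1, down=2, right=3, left=4, and the shift/register
# state as first (abc)=0, firstShift (ABC)=1, second=2, third=3, as stated in the text.
# The look-up string is a placeholder: the actual 100-character string implementing
# Figures 5a and 5b is not reproduced in this extract, so only the lower- and upper-case
# registers are filled in, following the stated alphabetisation without 'q'/'Q'.

TAP, UP, DOWN, RIGHT, LEFT = range(5)
FIRST, FIRST_SHIFT, SECOND, THIRD = range(4)

LOOKUP = (
    "abcdefghijklmnoprstuvwxyz"    # register 0 (first, abc): 25 letters, no 'q' (assumed order)
    + "ABCDEFGHIJKLMNOPRSTUVWXYZ"  # register 1 (firstShift, ABC): 25 letters, no 'Q' (assumed order)
    + "?" * 25                     # register 2: punctuation etc. of Figure 5b, not reproduced here
    + "?" * 25                     # register 3: user-definable symbols of Figure 5b, not reproduced here
)

def character_for(g1, g2, r):
    """Combine first gesture G1, second gesture G2 and register R into one character.

    Assumed layout: four blocks of 25 characters, one per register, each ordered by (G1, G2).
    """
    return LOOKUP[r * 25 + g1 * 5 + g2]

print(character_for(TAP, TAP, FIRST))        # 'a'
print(character_for(TAP, TAP, FIRST_SHIFT))  # 'A'
```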

Abstract

The invention resides in a method, or a device or system configured to operate the method, for entering symbols such as text or characters for display or use on an electronic device in response to a gesture of a user operating the device. The device has an input means configured to detect a gesture. The method includes: detecting at the input means a gesture from a set of gestures, wherein the set consists of five (5) gestures; monitoring the sequence of gestures detected by said input means; identifying from a register a symbol (or character) corresponding to a sub-sequence of said gestures; and returning to the electronic device the symbol corresponding to the sub-sequence for the electronic device to display or otherwise use. The input means can have predetermined axes, such as Cartesian axes, and the set of gestures consists of: up; down; left; right; and a tap. The sub-sequence comprises at least a first gesture and a subsequent gesture, or consists of two (2) sequential gestures such as a pair of gestures.

Description

GESTURE INPUT METHOD AND APPARATUS
The invention relates to a method for entering text on an electronic device such as a mobile phone. The invention also relates to an electronic device, system and an input device operable by a user for entering text.
Background of the invention
The user interfaces of known touch sensitive computing devices, such as mobile phones and tablets, are continuously improving. Users require quick and easy means for entering text on a document or email. Known character and text input methods include virtual input e.g. keyboard on touch-sensitive display, and physical keyboards. Known input methods also include handwriting recognition devices that translate text written by a user on a touch sensitive pad. More complex systems can convert combinations of strokes on a touch sensitive display into Chinese characters.
Known systems have a number of disadvantages. Virtual keyboards occupy a substantial part of a touch-sensitive display because the minimum size of each virtual key is determined by the size of a human finger. Text input on a virtual keyboard can be slow and inaccurate compared to physical devices such as a conventional 101-key US keyboard. Traditional keyboards are either bulky, or when miniaturised they are difficult to use and add weight and complexity to a device. Text input on games consoles (e.g. of a player's name) often involves cumbersome navigation around an on-screen display of characters. Handwriting or character recognition requires advanced processing, complex look-up registers and tends to require a large interface on a device. Further, finger-based touch input, though intuitive, can suffer from low precision due to a small interface, or the "fat finger" issue. Issues with known text entry systems are exacerbated when a user is in motion, e.g. walking, and unable to pay close attention to the interface.
Summary of the invention
The invention seeks to address problems associated with text or character entry systems by limiting the set of gestures required to 'swipes' and 'taps' such that text can be entered reliably, consistently and quickly. The use of a small set of simple gestures reduces the probability of error. The gestures are unambiguous and can accurately be recognised by a device. They are easy to learn and do not require precision from the user, either in location or execution (a tap is a tap anywhere on the device, a swipe can start and finish with a wide degree of tolerance). Thus both user input and device recognition errors are reduced and speed of text entry is increased.
By combining gestures the invention provides an alternative means of generating symbols, such as alpha-numeric or ASCII characters, such that the traditional "QWERTY" keyboard, whether physical or virtual, is redundant, or merely required as a backup device. By way of example, gestures can be combined in pairs.
From one aspect, the invention resides in a method for entering symbols, such as text or characters, for display on an electronic device in response to the gestures of a user operating the device. A device which recognises the gestures and generates the symbols can also display and use the symbols (e.g. a mobile phone), or it may operate as an input means to another device (e.g. a TV remote or a replacement keyboard) via a communication port. In other words, the gesture can be made by operating the device and/or by operating input means connected to the device. The device has an input means configured to detect a gesture, and said input means can be a communication port.
The method comprises detecting at the input means a gesture from a set of gestures, wherein the set consists of five (5) gestures. In other words, only five (5) gestures need be recognised by the input port. The gesture can take the form of a signal provided directly at the input means and/or provided by a transducer of the input means. The method includes monitoring the sequence of gestures detected by said input means. The method further includes identifying from a register a symbol, or character corresponding to a sub-sequence of said gestures. The sub-sequence is a set of sequential gestures received at the input means. The sub-sequence can be a sequence of gestures made in a chronological order. The register can be a look-up table or similar data table held in memory. The method operates the device to communicate to the receiving device (which may be the same device) the symbol corresponding to the subsequence, for the receiving device to display or otherwise process the symbol. One gesture can correspond to one signal provided by the input means and/or the transducer of the input means. The device can process the signal in a number of ways, for example it can display characters as they are recognised in response to gestures made.
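By way of illustration only, and not as part of the disclosed embodiments, these steps can be pictured as a small state machine: each detected gesture is either stored as the first of a sub-sequence or combined with the stored gesture, looked up in a register, and the resulting symbol returned to the receiving device. The gesture names and the tiny example register below are assumptions standing in for the look-up table of Figures 5a and 5b.

```python
# Illustrative sketch only: the claimed steps of detecting, monitoring and looking up
# gestures, arranged as a small state machine. The gesture names and the tiny example
# register are assumptions standing in for the look-up table of Figures 5a and 5b.

GESTURES = {"tap", "up", "down", "left", "right"}   # the set of five gestures

EXAMPLE_REGISTER = {            # hypothetical fragment of a register (look-up table)
    ("tap", "tap"): "a",
    ("tap", "up"): "b",
    ("tap", "down"): "c",
    # ... the remaining 22 pairs would complete the alphabet of Figure 5a
}

class GestureMethod:
    """Monitors the sequence of detected gestures and emits a symbol per pair."""

    def __init__(self, register, emit):
        self.register = register    # look-up table held in memory
        self.emit = emit            # callback to the receiving device (display, port, ...)
        self.pending = None         # first gesture of the current sub-sequence, if any

    def on_gesture(self, gesture):
        if gesture not in GESTURES:
            return                  # only the five gestures are recognised
        if self.pending is None:
            self.pending = gesture  # await the second gesture of the pair
        else:
            symbol = self.register.get((self.pending, gesture))
            self.pending = None
            if symbol is not None:
                self.emit(symbol)   # return the symbol for display or other processing

# Usage: feeding a monitored sequence of gestures emits the corresponding symbols.
method = GestureMethod(EXAMPLE_REGISTER, emit=print)
for g in ["tap", "tap", "tap", "up"]:
    method.on_gesture(g)            # prints 'a' then 'b'
```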
The device can be a portable electronic device such as a tablet or mobile phone, having a touch-sensitive display. A gesture can be made by actuating a transducer which functions as an input means and is connected to the device, by pressing a physical button of the device or a virtual button of the device shown on the display, or by making a tapping or swiping movement across the surface of the display. The display can function as the input means, and the number of gestures that need to be recognised is limited to five (5). This aids learning, and reduces both device recognition and user input errors when making a gesture. The input means can have predetermined axes, such as Cartesian axes, and the set of gestures can consist of: up; down; left; right; and a push, press, tap or movement that extends linearly on a plane extending from, or through, the plane defined by the up, down, left and right orientations or axes. These movements are less subject to interpretation, and require less processing or interpretation by processing means of the display to recognise or decipher what gesture was made. In other words, the sequence of gestures interpreted by the input means is deterministic because, after entry, no further interpretation or selection is required for entering a symbol, or sending a signal to be displayed. No further interpretation is required because the cuneiform nature of the five (5) gestures is such that an input device can clearly identify where one gesture stops and another gesture begins. The input means can have predetermined axes, such as Cartesian axes, and a set of gestures that consists of: a movement extending linearly upwards; a movement extending linearly downwards; a movement extending linearly from left to right; a movement extending linearly from right to left; and a push, press, tap or movement that extends linearly on a plane extending from, or through, the plane defined by the up, down, left or right orientations or axes. These tap and swipe type movements are uncomplicated to make and require less processing or interpretation by an input means or a device having an input means. Non-linear characters, such as those that emulate handwriting, are subject to interpretation.
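As a hedged example of why these gestures need little interpretation, a stroke on a touch-sensitive display can be classified into one of the five gestures from its start and end points alone; the pixel threshold separating a tap from a swipe in this sketch is an assumed value, not one specified in the disclosure.

```python
# Hypothetical classifier: maps a stroke (start point to end point) on the display's
# Cartesian axes to one of the five gestures. The 20-pixel threshold separating a tap
# from a swipe is an assumed value for illustration, not one taken from the disclosure.

TAP_THRESHOLD = 20.0   # assumed: displacements smaller than this count as a tap/press

def classify(x0, y0, x1, y1):
    """Return 'tap', 'up', 'down', 'left' or 'right' for a stroke from (x0, y0) to (x1, y1).

    Screen coordinates are assumed to increase downwards, as is conventional for
    touch-sensitive displays.
    """
    dx, dy = x1 - x0, y1 - y0
    if abs(dx) < TAP_THRESHOLD and abs(dy) < TAP_THRESHOLD:
        return "tap"
    if abs(dx) >= abs(dy):                   # dominant horizontal movement
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"        # dominant vertical movement

print(classify(100, 100, 105, 102))   # 'tap'
print(classify(300, 100, 120, 110))   # 'left'
```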
The definition of up, down, left and right can be interpreted as the peripheral positions on an input means or apparatus, such as the display, as viewed from a user's perspective. Each gesture can have a single direction. The gesture and its direction can be made, and recognised, in a two-dimensional plane or on a plane configured in three dimensions. The push/tap/press action can be an action or movement extending perpendicularly through the plane defining the up, down, left and right orientations or axes.
The invention can be described, by way of example, as using a combination of a pair of gestures, wherein said number of gestures recognisable by an input means or a device is limited to five (5). Twenty-five (25) different combinations are possible.
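Written out explicitly (an editorial restatement; the figures themselves are taken from the text), the combination counts quoted in this and the following examples are simply the number of ordered pairs that can be formed over the gesture set:

```latex
% Number of distinguishable ordered sub-sequences of length L over a set of |G| gestures:
%   N = |G|^L
% For pairs (L = 2), as used throughout this description:
%   5^2 = 25,   6^2 = 36,   9^2 = 81,   10^2 = 100
N = |G|^{L}, \qquad 5^{2} = 25, \quad 6^{2} = 36, \quad 9^{2} = 81, \quad 10^{2} = 100
```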
Though this invention prefers a set of five (5) gestures combined in pairs, a skilled person would appreciate that increasing the number of gestures recognised increases the number of combinations possible and therefore the number of characters which can be generated.
By way of an example, the device could read from the register a symbol that corresponds to a sequence of at least a pair (2) of gestures, wherein the number of gestures required to be recognised by an input means or a device is limited to nine (9) gestures or signals detected at the input. In such a case, the input means has predetermined axes, such as Cartesian axes, and is configured to detect a gesture, or a signal created by a gesture, from the group of: a movement extending linearly: upward; downward; from left to right; from right to left; upward from left to right; upward from right to left; downward from right to left; or downward from left to right; and a push, press, tap or similar linear movement on a plane substantially perpendicular, or through, a plane defined by the up, down, left and right orientations. Eighty-one (81) different combinations are possible.
By way of a further example, an input means and/or device capable of recognising and distinguishing between single and multi-touch events (e.g. a single tap and a double-tap, where the same gesture is repeated within a defined time interval, typically of the order of 0.3 seconds) provides a set of six (6) gestures if a double-tap is included. A set of six (6) gestures can consist of: a movement extending linearly upwards; a movement extending linearly downwards; a movement extending linearly from left to right; a movement extending linearly from right to left; a push, press, tap or similar linear movement on a plane substantially perpendicular, or through, a plane defined by the up, down, left and right orientations; and a double-tap. Thirty-six (36) different combinations are possible.
By way of a further example, an input means and/or device capable of recognising and distinguishing between single and multi-touch events for all five (5) gestures (e.g. a single up-swipe and a double up-swipe, where the same gesture is repeated within a defined time interval, typically of the order of 0.3 seconds) provides a set of gestures wherein the number of gestures recognisable by an input means or a device is limited to ten (10). The set of ten (10) comprises the single and double versions of the set of five (5) gestures consisting of: a movement extending linearly upwards; a movement extending linearly downwards; a movement extending linearly from left to right; a movement extending linearly from right to left; and a single-tap such as a push, press, tap or movement that extends linearly on a plane extending from, or through, the plane defined by the up, down, left or right orientations or axes. One hundred (100) different combinations are possible.
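A brief sketch of such single/double discrimination is given below, assuming timestamped gesture events and the approximately 0.3 second repeat window mentioned above; the names and data layout are illustrative assumptions:

DOUBLE_INTERVAL = 0.3  # seconds, as an assumed repeat window

def classify(events):
    # events: list of (timestamp, gesture) tuples in chronological order.
    # A repeat of the same gesture inside the window is collapsed into a
    # 'double-' gesture, giving the enlarged gesture set described above.
    out = []
    i = 0
    while i < len(events):
        t, g = events[i]
        if (i + 1 < len(events)
                and events[i + 1][1] == g
                and events[i + 1][0] - t <= DOUBLE_INTERVAL):
            out.append("double-" + g)
            i += 2
        else:
            out.append(g)
            i += 1
    return out

print(classify([(0.00, "tap"), (0.15, "tap"), (1.00, "up")]))  # ['double-tap', 'up']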
By way of a final example, an input means and/or device capable of recognising and distinguishing between five (5) single gestures, consisting of: a movement extending linearly upwards; a movement extending linearly downwards; a movement extending linearly from left to right; a movement extending linearly from right to left; and a single-tap such as a push, press, tap or movement that extends linearly on a plane extending from, or through, the plane defined by the up, down, left or right orientations or axes; combined with a sub-sequence recognition algorithm which treats a repeated single gesture as a different gesture for the purpose of using the look-up table, would generate forty-five (45) different combinations (e.g. tap followed by up would generate a symbol; tap followed by another tap would not generate a symbol but would wait for another gesture. So 'tap, up' would generate a symbol, and 'tap, tap, up' would generate a different symbol). In other words, the sub-sequence recognition of the invention can reside in recognising one of the five (5) single gestures followed immediately by one of the four (4) other gestures, which provides twenty (20) variations. Additionally, each of the five (5) repeated (doubled) single gestures can be followed by any of the five (5) gestures, which provides a further twenty-five (25) variations. Together, forty-five (45) combinations are possible.
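A brief, non-limiting sketch of this repeated-gesture sub-sequence recognition is given below in Python; the gesture names and function name are illustrative assumptions rather than part of the original disclosure:

GESTURES = ("tap", "up", "down", "right", "left")

def read_subsequence(stream):
    # Consume gestures from an iterator and return the completed
    # sub-sequence, e.g. ('tap', 'up') or ('tap', 'tap', 'up').
    first = next(stream)
    second = next(stream)
    if second != first:
        return (first, second)      # 5 x 4 = 20 combinations
    third = next(stream)
    return (first, second, third)   # 5 x 5 = 25 combinations

# Counting the distinct sub-sequences confirms the forty-five (45) combinations:
combos = {(a, b) for a in GESTURES for b in GESTURES if a != b}
combos |= {(a, a, b) for a in GESTURES for b in GESTURES}
print(len(combos))  # 45
print(read_subsequence(iter(["tap", "up"])))         # ('tap', 'up')
print(read_subsequence(iter(["tap", "tap", "up"])))  # ('tap', 'tap', 'up')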
The sub-sequence can comprise at least a first gesture and a subsequent gesture. The sub-sequence can consist of two (2) sequential gestures, or a pair of gestures whose sequence can be determined (e.g. from left and right hands). Note that if the sequence cannot be determined, the number of distinguishable combinations of pairs of five gestures drops to fifteen (15) (the ten (10) unordered pairs of distinct gestures plus the five (5) repeated pairs). In other words, two (2) gestures made by a user, one after the other, are recognised and combined by the device, and a look-up table or similar type of register is used to find the symbol corresponding to the gestures made. Note that intermediate actions can be made by a user between gestures, but these actions are not associated with the generation of a symbol and are, instead, associated with the selection of a shift key, or switching a register, or some other ancillary function.
The display can be a touch-sensitive display screen, and the display has areas including: a text area for displaying and editing text; and a function area for managing the entry of gestures, the selection of a register and editing text. Managing the entry of gestures can involve recognising the gesture made, storing the associated signal in memory while a further gesture is made and using the sub-sequence of gestures to look-up the associated symbol and returning it to the device (which can display it at the cursor location on the display, or otherwise use it). The function area includes a plurality of zones, including at least one of: a selection zone for selecting the status of the function area, which can change the status of the function area by, for example, switching it between an enabled state, a disabled state and a minimized state; an indication zone for indicating the last gesture recognised by the input means and, optionally, showing further symbols entered, or showing the last character entered; an editing zone for editing using keyboard-type text-adjustment functions such as delete, space and return; and an amending zone for changing the status of at least a shift-key and/or a register-key. A help key, which displays the mapping of gestures to characters, can be provided.
In another aspect, the invention resides in an electronic device having a display and configured for entering symbols such as text or characters on a display in response to a gesture of a user operating the device, wherein the device has: an input means configured to detect a gesture, wherein the input means is configured to recognise gestures from a set of gestures and wherein the set consists of five (5) gestures; and a controller configured to track the sequence of gestures detected by said input means, and wherein the controller is configured to identify from a register a symbol corresponding to a sub-sequence of said gestures and return the symbol corresponding to the sub-sequence for display or other processing.
The electronic device can have input means having predetermined axes, such as Cartesian axes, and the input means is configured to recognise a set of gestures consisting of: up; down; left; right; and a push, press, tap or movement that extends linearly on a plane extending from, or through, the plane defined by the up, down, left or right orientations or axes.
The input means can have predetermined axes, such as Cartesian axes, and the input means is configured to recognise a set of gestures consisting of: a movement extending linearly upwards; a movement extending linearly downwards; a movement extending linearly from left to right; a movement extending linearly from right to left; and a push, press, tap or movement that extends linearly on a plane extending from, or through, the plane defined by the up, down, left or right orientations or axes.
The input means can be configured to recognise gestures from a set of nine (9) gestures, and the input means has predetermined axes, such as Cartesian axes, and is configured to recognise a set of gestures consisting of: a movement extending linearly: upward; downward; from left to right; from right to left; upward from left to right; upward from right to left; downward from right to left; or downward from left to right; and a push, press, tap or similar linear movement on a plane substantially perpendicular, or through, a plane defined by the up, down, left or right orientations. The sub-sequence can have at least a first gesture and a subsequent gesture, or can consist of two (2) sequential gestures.
The display screen can be touch-sensitive, and the display can have areas including: a text area for displaying and editing text; and a function area for managing the entry of gestures, the selection of a register and editing text, wherein the function area includes a plurality of zones, including at least one of: a selector, configured to change the status of the function area; an indicator, configured to show at least the last gesture recognised by the input means; an editor, configured to enable keyboard-type text-adjustment functions such as delete, space and return; and a status selector, configured to enable a user to change the status of at least a shift-key and a register-key.
In accordance with another aspect the invention resides in input means for operating in conjunction with different aspects of the invention disclosed herein. The input means are configured to convert a gesture into a signal, and provide said signal to a device for processing. The input means can be any one of a number of apparatus or transducers, used alone or in combination, to operate a device that is configured to recognise gestures made by a user. An input means can be at least one of: a joystick, wherein the forward, backward, left and right movements correspond, respectively, to an up, down, left or right movement, while a press on the pivot-point of the joystick, or a button on the joystick, corresponds to a push, press, tap or movement; a computer mouse; cursor keys and the space bar on a qwerty-keyboard; a touch sensitive pad, wherein a finger or pen can make up, down, left and right swipe-type movements, while a finger press or dot corresponds to a push, press, tap or movement; an eye and/or head movement recognition device (e.g. for use by someone who is tetraplegic); or an accelerometer configured to detect movement of a device, or configured to detect movement of a digit and/or hand and/or head and/or a limb when attached thereto.
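By way of a brief illustrative sketch, the cursor-key example can be expressed as a simple mapping; the key identifiers shown are placeholders for whatever names the host platform reports:

KEY_TO_GESTURE = {
    "ArrowUp": "up",
    "ArrowDown": "down",
    "ArrowLeft": "left",
    "ArrowRight": "right",
    "Space": "tap",      # the space bar provides the push/press/tap gesture
}

def gesture_for_key(key_name):
    # Returns one of the five gestures, or None for keys that are ignored.
    return KEY_TO_GESTURE.get(key_name)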
The input means can be a five-transducer hand held device configured to fit ergonomically within the palm of a human hand such that the fingers fold over the peripheral edge, where a plurality of buttons corresponding to the tap, up, down, left and right gestures are located. The input means can include a ten-transducer, or button, hand held device.
The input means in the form of a hand held device can use an accelerometer to detect one or more of the gestures, such as left, right, up and down movement of a hand about the wrist of a user.
In another aspect, the invention resides in a system having a device and an input means as described herein, and/or a system operating the method described herein.
In another aspect, the invention resides in a computer readable storage medium storing one or more programs, said programs having instructions, which when executed by an electronic device perform the method described herein.
Overall, the skilled person would recognise from the teaching herein that the invention resides in the components and/or combination of components that enable a sequence of gestures to be recognised such that input errors are negligible and said gestures are interpreted by an interface and communicated to a processor for output on a display, or otherwise used, e.g. to control another component or device.
The components herein, such as the interface, processor and device, can be present in a single electronic device, or a system, and communicate with each other to implement the invention within said device or system. The method, device and system of the invention improve the reliability of the communication and the interoperability of the components, such as the interface and display, to enter symbols in an electronic device.
In light of the teaching of the present invention, the skilled person would appreciate that aspects of the invention are interchangeable and transferrable between the aspects described herein, and can be combined to provide improved aspects of the invention. Further aspects of the invention will be appreciated from the following description.
Brief description of the Figures
In order that the invention can be more readily understood, reference will now be made, by way of example, to the drawings in which:
Figure 1 is a schematic view of an interface of an electronic device having a touch-sensitive display with different functional areas;
Figure 2 is a schematic view of the device of Figure 1, showing the functional areas in a "not active" state;
Figures 3a and 3b are schematic views of the device of Figure 1 , showing an alternative arrangement of the functional areas;
Figure 4 is a schematic diagram of the components of a device or system according to the invention;
Figures 5a and 5b are tables or registers showing which symbols correspond to which sequence of gestures;
Figure 6 is a flow chart showing the relationship between the functions of the device or system;
Figures 7a and 7b show schematic views of the lower and upper surfaces, respectively, of an input device using one hand;
Figure 8 shows two schematic side views of an alternative input device, using both hands; and
Figure 9 shows the rear of a digital camera with a multi-selector switch.
Detailed description of embodiments
Figures 1 to 3b show an electronic device 10 having a display 12. The display is a touch-sensitive display and functions as the input means 14. The display is shown with two (2) areas: a text area 16, where an alphanumeric text message can be shown; and a function area 18, that can be operated to create the message and edit the text therein. The function area can be described as a 'swipeboard' 18 because, in use, a user will operate the device and enter text primarily by touching or swiping this area of the input means 14. The swipeboard also includes one or more virtual, or on-screen, buttons.
The swipeboard 18 has a number of sub-areas including: a status indicator 20 that shows in which mode the swipeboard 18 is active, e.g. the word 'swipe' will show on the indicator 20 when the entire display 12 functions as an input means 14 for entering text into the text area 16; one or more mode indicators 22a, 22b, 22c which function as switches, enabling a user to select and deselect different text entry modes, such as selecting between upper and lower case text entry by toggling a 'shift' register using key 22a, or selecting which register is to be used for character or symbol entry using key 22b or 22c; an edit zone 24a, 24b, 24d for editing the text shown in the text area 16, the edit zone having virtual buttons including delete 24a, space 24b and return 24d; and a gesture indicator 26 that shows the last gesture and/or the character generated by the swipeboard 18. The swipeboard 18 can, in some modes, include an arrow 28 indicating the position of the text-cursor, which in turn indicates where text will be entered on the screen.
Figure 4 is, by way of example, a schematic diagram of the device 10, or of a system comprising the functions of the device, upon which the invention can be implemented; the general method described herein can be implemented using, at least in part, software operating on a computer system.
By way of example, the device 10 can have the components shown in Figure 4, which is an example of a computer system having a device 100. The device 100 includes a bus 102, at least one processor 104, at least one communication port 106, a main memory 108, a removable storage media 110, a read only memory 112 and a random access memory 114. The components of device 100 can be configured across two (2) or more devices, or the components can reside in a single device 10. The device can also include a battery 116. The port 106 can be complemented by input means 118 and output connection 120.
The processor 104 can be any such device such as (but not limited to) an Intel(R), AMD(R) or ARM processor. The processor may be specifically dedicated to the device. The port 106 can be a wired connection, such as an RS-232 connection, or a Bluetooth connection or any such wireless connection. The port can be configured to communicate on a network such as a Local Area Network (LAN), Wide Area Network (WAN), or any network to which the device 100 connects. The read only memory 112 can store instructions for the processor 104.
The bus 102 communicably couples the processor 104 with the other memory 110, 112, 114, 108 and port 106, as well as the input and output connections 118, 120. The bus can be a PCI/PCI-X or SCSI based system bus depending on the storage devices used, for example. The removable memory 110 can be any kind of external hard-drive, floppy drive or flash drive, for example. The device and the components therein are provided by way of example and do not limit the scope of the invention. The processor 104 can implement the methods described herein.
The processor 104 can be configured to retrieve and/or receive information from a remote server or device. The device can be an input means 14 that is in addition to, or is an alternative to, the input means of the functional area 18.
The device 100 also includes a gesture receipt application 122, to receive gestures via the bus 102 which may have been input via, for example but not limited to, a touch sensitive device, a joystick or transducers. These devices can be operable to control a touch-sensitive display module 124. The application 122 stores the gesture and notifies the processor which gesture it has received (so that, for example, feedback can be given to the user). The application has a gesture processing module 126, which combines gestures, translates them to symbols via a look-up table held in memory (108, 112 or 114), and maintains state, e.g. whether it is waiting for a first or second gesture, which register is active, and the last symbol generated. A character module 128 returns the symbol or text editing action (space, delete, return) to the output 120 of the device for processing by the active application.
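A minimal structural sketch of such a gesture processing module is given below in Python; the class and method names are illustrative assumptions and do not form part of the original code:

class GestureProcessingModule:
    # Mirrors module 126: combines gestures in pairs, translates them via a
    # look-up table (register) and maintains state between gestures.
    def __init__(self, registers):
        self.registers = registers   # list of look-up tables (cf. Figures 5a and 5b)
        self.register_index = 0      # which register key is currently active
        self.pending = None          # first gesture awaiting its pair

    def select_register(self, index):
        self.register_index = index

    def receive(self, gesture):
        # Returns a symbol when a pair completes, otherwise None
        # (still waiting for the second gesture).
        if self.pending is None:
            self.pending = gesture
            return None
        pair = (self.pending, gesture)
        self.pending = None
        return self.registers[self.register_index].get(pair)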
Overall, the method and/or device herein enable a user to perform text entry and editing operations using combinations of pairs of simple gestures made on a touch sensitive device or via other means of generating gestures.
All or part of a touch screen display 12 of the device 10 can be used as a swipeboard 18, which functions as an input means. According to an aspect of the invention, the swipeboard 18 is configured to recognize five (5) gestures made by a user on the surface of a touch screen display 12 of the device 10. The gestures are a tap on the screen and four movements across the surface of the screen, namely upwards, downwards, to the left and to the right. These gestures can be referred to as Cuneiform gestures. These movements can be combined to provide a minimum of 25 combinations (when two gestures are combined).
The movements tap, up, down, right and left can be represented symbolically as ·, ↑, ↓, → and ← respectively.
By way of example, the invention will be described using a combination of two (2) sequential gestures recognised by the device 10. Five (5) gestures, when paired, provide 25 combinations. These 25 combinations are mapped to 25 of the 26 letters of the Roman alphabet. Figure 5a is a table showing the 25 pairs of gestures and their corresponding letters. If the shift key 22a were active, the same gestures would combine to select upper-case letters. If characters such as punctuation marks are required, then further registers 22b, 22c, such as those shown in Figure 5b, can be used to access further symbols. To access such symbols, the appropriate register key 22a, 22b or 22c is toggled to select which register the device accesses, as set out in Figures 5a and 5b. Using the register key 22a (which sets the register as in Figure 5a) allows a user to enter one of 25 Roman characters. Using the register key 22b allows a user to enter one or more characters from the second register of Figure 5b. Using the register key 22c allows a user to enter one or more characters from the third register of Figure 5b, which includes a small number of user definable options. The symbols 'q' and 'Q' are on the second and third registers. Note that these registers comprise symbols and are not limited to those shown in the registers of the Figures. A further number of registers can be provided to access additional letters according to language requirements.
In the present example, there is a connection between the characters to be entered on the display and the sequence of gestures required to select a character. By way of example, the letter 'v' corresponds to a downward movement followed by an upward movement (e.g. v = ↓↑). Such patterns assist learning and a natural alphabetisation (as in the right side of Figure 5a), with the sequence tap, up, down, right and left.
By way of example, all five vowels start with a tap. All letters with tails or descenders (g, j, p, q and y) start with an upstroke. Frequently occurring letters (n, l, t, s) use a repeat (e.g. s = ←←, t = →→). There is a natural sequence within "cuneiform": tap, swipe up, swipe down, swipe right and swipe left. In this example a cuneiform alphabet is suggested (see right side of Figure 5a) in which the first three letters are not the symbols 'a, b and c' but '··, ·↑, ·↓' (a, u, i). By way of example, the expression "an idea" can be represented as "·· ↑↑   ·↓ ↓← ·→ ··".
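Purely as an illustrative sketch, the gesture pair for each letter can be derived from the first register; the short Python function below assumes the 25-character register line reproduced later in the description and the enumeration tap, up, down, right, left (0 to 4):

MARKS = {"tap": "·", "up": "↑", "down": "↓", "right": "→", "left": "←"}
ORDER = ("tap", "up", "down", "right", "left")
REGISTER_1 = "auieojnypghvlbdmcktzrxfws"  # first register line as given below

def encode(text):
    # Convert lower-case text into cuneiform gesture pairs; words are
    # separated by a wider gap.
    words = []
    for word in text.split():
        pairs = []
        for ch in word:
            idx = REGISTER_1.index(ch)
            pairs.append(MARKS[ORDER[idx // 5]] + MARKS[ORDER[idx % 5]])
        words.append(" ".join(pairs))
    return "   ".join(words)

print(encode("an idea"))  # ·· ↑↑   ·↓ ↓← ·→ ··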
The swipeboard 18 can be configured according to the area required for gesture recognition purposes. In Figure 1, the touch sensitive display 12 is configured with a swipeboard on said display, and the swipeboard provides a sub-area for gesture recognition within the swipeboard 18 image, as shown in Figure 1. Alternatively, the swipeboard 18 can be minimised in size such that, functionally, the whole of the surface of the touch sensitive device becomes available to the swipeboard for gesture recognition. In Figure 2 the functional areas are in a minimised or inactive state (e.g. when the user is reading or scrolling the text); to be clear, Figure 2 indicates the swipeboard in a "not active" state. Taps and gestures on the surface of the touch sensitive device will then be passed to the application to, e.g., select text or scroll the display as the application determines. However, when the swipeboard is activated by tapping the status indicator 20, as in Figures 3a and 3b, the swipeboard 18 can take the form of an image that functions as a control-panel for accessing, for example, the register toggle 22b, while the remainder of the screen can be used for gesture recognition. The swipeboard can be positioned in the most appropriate position according to the location at which text is being entered. Figures 3a and 3b show the swipeboard 18 and the different positions of the swipeboard and arrow 28 with respect to the text entered by a user. The user's taps and gestures are then used to generate symbols until the swipeboard is deactivated by again tapping the status indicator 20.
The character or symbol entry systems disclosed herein can use an input means in addition, or as an alternative, to virtual or physical QWERTY keyboards. When using the invention, errors associated with touching the wrong key or multiple keys are obviated.
When the swipeboard 18 is active, and awaiting a sequence of two (2) gestures that will represent a character or symbol to be entered, the device 10 functions in one of two (2) states: the device is either awaiting a first gesture or awaiting a second gesture.
On receiving the first gesture via an input means such as a swipeboard 18, the device converts the first gesture to a number associated with the gesture (· = 0, ↑ = 1, ↓ = 2, → = 3, ← = 4). The number is stored, and the gesture recognised is displayed in a gesture indicator 26 upon the swipeboard 18 in order that a user can have visual confirmation of the gesture made. The device then awaits the second gesture. On receiving the second gesture, the swipeboard converts the second gesture to a number in the same way (· = 0, ↑ = 1, ↓ = 2, → = 3, ← = 4).
The device then combines the numbers from the two (2) gestures, recognises the state of the shift 22a and register 22b, 22c keys and uses an algorithm to determine which character or symbol to generate and pass back to the application on the device to process (normally by inserting it at the cursor position). The character or symbol is also shown in an indicator 26 on the swipeboard 18. The device then clears the previous two (2) gestures and associated numbers from memory and awaits the first gesture associated with the next character or symbol.
In use, a user can toggle which symbol will be generated in response to a sequence of two (2) gestures by pressing one of the register buttons 22a, 22b, 22c on the swipeboard 18 of the display 12. By way of example, if register 1 is selected then "··" generates "a" ("A" if shift active), and "↑↓" generates "y" ("Y" if shift active). If register 2 is selected then "··" generates "." (period), and "↑↓" generates "/". If register 3 is selected then "··" generates "*" (star), and "↑↓" generates "=".
After the first gesture, the indicator 26 shows the first gesture made. After the second gesture, it shows the character or symbol corresponding to the sequence of two (2) gestures. A user can correct or remove the first entered gesture by pressing on the indicator 26 (the virtual button on the screen 12) whilst it is showing the last gesture made: the gesture is removed from the store and the swipeboard returns to its awaiting-first-gesture state.
If after receiving the second gesture a user wishes to repeat the character now shown in indicator 26, activating/pressing the virtual indicator 26 button now repeats the character (i.e. in exactly the same way as a QWERTY keyboard would when a key is struck repeatedly).
If the swipeboard 18 is being used in whole-device gesture recognition mode, i.e. as per Figures 2, 3a and 3b, the swipeboard points to the current text insertion point or cursor point. The swipeboard 18 can be positioned by the user in the best place to enter text by pressing and holding a digit on the swipeboard icon and dragging it about the screen in a conventional manner. Gestures on the whole device, outside the swipeboard 18 area, are interpreted as either first or second gestures until the status indicator 20 is activated, when the swipeboard returns to the format shown in Figure 1.
In particular, the processor 104 can control the device 10 to implement a mode of operation of the invention as described in Figure 6, in which a device operates the invention as shown in Figure 1 through various 'states'. Other modes of operation are possible within the scope of the invention.
In an initialise state 202, the device displays a screen as shown in Figure 1 and is awaiting activation. Touching the swipeboard 18 or one of the other buttons of the swipeboard triggers the device to start or stop the process 204. If a gesture triggers the device to start or stop, the gesture will be recognised and shown in an indicator 26.
When the process is started at 204, the device enters a first waiting state 206, wherein it is awaiting a first gesture. The device then determines whether a user made a gesture or activated a key at a first check state 208. The device then enters a first decision state 210 to determine whether the user requested one of
i) repeating the previous character;
ii) toggling the status of the shift key, or toggling the selected register;
iii) pressing a key in the edit zone 24 to delete 24a, space 24b or return 24d;
iv) requesting help 24c; or
v) making a first gesture.
Options i) to iv) are indicative of a user selecting one of the swipeboard's keys.
If a user made a gesture, the device recognises the gesture in a first gesture state 212 and stores a value, such as a number, associated with the gesture. The resulting gesture is shown on the indicator 26.
The device then enters a second waiting state 214. The device then determines whether a user made a gesture or activated a key at a second check state 216. The device then enters a second decision state 218 to determine whether the user requested one of
i) undoing the first gesture, by pressing on the gesture shown on the indicator 26;
ii) toggling the status of the shift key, or toggling the selected register;
iii) pressing a key in the edit zone 24 to delete 24a, space 24b or return 24d;
iv) requesting help 24c; or
v) making a second gesture.
Options i) to iv) are indicative of a user selecting one of the swipeboard's keys.
If a user made a further gesture, the device recognises the gesture in a second gesture state 220 and stores a value, such as a number, associated with the gesture. A character generator 222 retrieves from the register the character corresponding to the last two (2) gestures made in sequence and sends the generated character to the application using the invention as a method of entering text. At the same time, said character is shown in the indicator 26. After the generated character is sent to the application the invention returns to the start 204.
If, after the first waiting state 206, a gesture is not made on the swipeboard 18 then the device determines that a key was activated and takes one of the following actions before returning to the start 204, said actions including:
i) detecting that the previous character is to be repeated 224, and sending a repeat of the previous character to the application 226,
ii) changing the status 228 and toggling the status of the shift key, or toggling the selected register, before returning to the start 204, or
iii) detecting 230 an edit request and sending 232 a delete, space or return command to the application.
To be clear, if a user presses either 22b or 22c once the swipeboard generates the next character from that register. Pressing either 22b or 22c twice generates all further characters from that register until another register is chosen (i.e. the register is locked).
Pressing 22a generates subsequent characters from the first register. If pressed when the application is already in the first register it toggles between lower and uppercase. A double tap on 22a acts as a shift lock.
If, after the second waiting state 214, a gesture is not made on the swipeboard 18 then the device determines that a key was activated and takes one of the following actions before returning to the start 204, said actions including:
i) undoing the first gesture 234, by pressing on the gesture shown on the indicator 26, sending a delete instruction to the application and returning to the start 204,
ii) changing the status 228 and toggling the status of the shift key, or toggling the selected register, before returning to the start 204, or
iii) detecting 230 an edit request and sending 232 a delete, space or return command to the application.
The input means can, by way of example, be a touch sensitive display, as described above. Additionally or alternatively, the input means can be a hand held device configured and/or optimised for entering text by making sequences of gestures.
The computer code or program operates the device, which recognises, via the input means such as a touch sensitive device, the gestures (tap, up, down, right, left), which are enumerated so that a tap has value 0, up 1, down 2, right 3 and left 4. The shift and register keys are enumerated as 'first' (abc) 0, firstShift (ABC) 1, second 2, third 3.
A function combines the first and second gesture with the register information and returns the chosen character. G1 is the enumeration of the first gesture (tap = 0, up = 1, down = 2, right = 3, left = 4), G2 that of the second, and R is the enumeration of the shift and register keys.
A 100 character string is created and stored in memory (108, 112 or 114) (here split over four lines for clarity) corresponding to and implementing Figures 5a and 5b:
The character string is:
"auieojnypghvlbdmcktzrxfws"
"AUIEOJNYPGHVLBDMCKTZRXFWS"
".:;,?q\V()!'\"-_0123456789"
"*£$€@Q+^}~A#x&|±][%uuuu"
The required character is generated by combining G1, G2 and R to give the index within the string of the desired character:
String index = ((25 * R) + (5 * G1) + G2).
If this value is less than 96, the character at that point in the string is returned. If it is 96 or greater one of the four user defined strings is returned. So if:
String index = 96 return "First user defined string: name?"
String index = 97 return "Second user defined string: address?"
String index = 98 return "Third user defined string"
String index = 99 return "Fourth user defined string"
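By way of a brief worked check (illustrative only), taking the downward-then-upward pair given earlier for the letter 'v', with R = 0, G1 = down = 2 and G2 = up = 1:

R, G1, G2 = 0, 2, 1
index = (25 * R) + (5 * G1) + G2
print(index)  # 11; position 11 of "auieojnypghvlbdmcktzrxfws" is 'v'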
By way of example, the code can appear as
enumeration gestures {tap = 0, up = 1, down = 2, right = 3, left = 4};
/* the five gestures recognised by the touch sensitive device */
enumeration registerKey {first = 0, firstShift = 1, second = 2, third = 3};
/* the registers used to determine which character to return */
charFromFirstGesture: (int) G1 andSecond: (int) G2 theRegisterKey: (int) R
/*
the function used to combine the first and second gesture with the register information and return the chosen character. G1 is the enumerated first gesture, G2 the second.
*/
translateGestureToCharacterString = "
auieojnypghvlbdrcktzmxfws
AUIEOJNYPGHVLBDRCKTZMXFWS
.:;,?q\V()!'\"-_0123456789
*£$€@Q+^}~A#><&|±][%uuuu";
This is the core character string which implements the translation from gestures to characters set out in Figures 5a and 5b. Each register contains 25 characters (corresponding to the 25 possible ways of combining two sets of five gestures).
int index = ((25 * R) + (5 * G1) + G2);
If (index < 96) Return
character from translateGestureToCharacterString at index
Else
/* there is capacity for four user defined responses (e.g. salutation, name) */
Case (index)
(96) : return "First user defined string: name?"
(97) : return "Second user defined string: address?"
(98): return "Third user defined string"
(99): return "Fourth user defined string"
default: return "oops"; break;
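For clarity, a runnable sketch of the same translation routine is set out below in Python; the punctuation line of the second register and the whole of the third register line are reconstructions or placeholders where the published character string is unclear, and the function and variable names are illustrative assumptions:

GESTURE_VALUE = {"tap": 0, "up": 1, "down": 2, "right": 3, "left": 4}

REGISTER_LINES = (
    "auieojnypghvlbdmcktzrxfws",       # first register (abc)
    "AUIEOJNYPGHVLBDMCKTZRXFWS",       # first register, shifted (ABC)
    ".:;,?q\\/()!'\"-_0123456789",     # second register (reconstructed)
    "*" + "?" * 20 + "uuuu",           # third register: placeholder characters
)
USER_DEFINED = ["First user defined string: name?",
                "Second user defined string: address?",
                "Third user defined string",
                "Fourth user defined string"]

def char_from_gestures(g1, g2, register_key):
    # g1, g2: the two gesture names; register_key: 0..3 as enumerated above.
    table = "".join(REGISTER_LINES)    # 100 characters in total
    index = (25 * register_key) + (5 * GESTURE_VALUE[g1]) + GESTURE_VALUE[g2]
    if index < 96:
        return table[index]
    return USER_DEFINED[index - 96]    # the four user defined responses

print(char_from_gestures("down", "up", 0))  # 'v'
print(char_from_gestures("tap", "tap", 2))  # '.'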
Figures 7a and 7b show, respectively, an underside and a top side of a device in the form of an input disc or device 300 suitable for one-handed operation. By way of an example the drawing shows a disc, but the physical implementation would be a device which sits most comfortably in the hand with the buttons positioned ergonomically for the fingers. Activation of keys located on the peripheral edge of the device provides the means to make gestures corresponding to the movements tap, up, down, right and left, which can be represented symbolically as ·, ↑, ↓, → and ← respectively. By way of an example a display screen 302 is provided in order that the disc can function as a stand-alone device (e.g. a mobile phone). Without a display screen the disc could function as an input mechanism to another device (e.g. as a remote control for a television).
Buttons include tap 304, up 306, down 308, right 310 and left 312 keys for entering gestures. Functional buttons include register 314, shift 316, space 318, return 320 and delete 322 keys.
When holding a device in one hand the thumb has greater mobility than the other digits, so it can be used to operate numerous keys (304, 318, 320, 322), while the little finger could activate the register key (314). The characters generated are used in the text display area (302).
Figure 8 shows an alternative embodiment of an input disc 400 having a plurality of keys 402. The disc 400 is ergonomically configured for use by both hands. The disc can have ten (10) keys, one for each digit when using both hands, to generate the gestures. Such a device would not function as a stand-alone device but would act as a replacement for a traditional physical keyboard.
When holding a device in this way both thumbs have greater mobility than the other digits, so they would also be used for the shift, space, return and other keys.
Figure 9 shows an alternative embodiment in which the multi-selector switch 512 at the rear of a digital camera 500 is used to enter text on its LCD screen 510. Pressing OK 514 corresponds to a tap, up 516 to an upwards gesture, down 518 to a downwards gesture, right 520 to a rightwards gesture and left 522 to a leftwards gesture.
In light of the teaching herein, it would be clear to the skilled person that a user making gestures on an electronic device, from a limited set of gestures and in a sequence, can be used to cross-reference a look-up table or register to retrieve a corresponding symbol and return it to the device to use, typically by entering it at the cursor point on a text display. Any number of gestures can be used, for example a set of five (5), nine (9) or ten (10) gestures, and the number of gestures in a sequence can be varied, but can, for example, be fixed at two (2) or three (3) gestures in a sequence.
Particular combinations of the number of gestures and the number of gestures used in a sequence to select and enter a symbol in a text have different advantages depending on the application. The advantage of a set of five (5) gestures combined in pairs, as preferred in this invention, is that it is the minimum required to generate the Roman alphabet (albeit without q). A larger set of gestures, having perhaps ten (10) or more gestures, in combination with a longer sequence of perhaps four (4) or more gestures, provides a greater number of combinations and, therefore, a larger number of look-up references from a data set or register. The invention preferably has a limited set of five (5) gestures. These gestures are simple to make and the combinations are easy to learn, and thus less susceptible to user error, especially because they extend linearly. Similarly, by detecting linear movements at the input means, such as a touch-sensitive display, there is a reduced chance of a misinterpretation of the signal generated by the gesture. Further, the processor and associated logic is less complex because each gesture is interpreted as only one of a small set of gestures, such as five (5) in the example given. Compared to input means required to interpret Roman alphanumeric symbols or Chinese characters, the input means, method and device of the invention function to inhibit errors made.
In the light of the teaching of this invention the skilled person would appreciate that this invention would be advantageous in any situation which requires text entry where the device is capable of recognising or generating, by whatever means, a limited set of gestures and combining them in sequence according to aspects of this invention to generate characters or symbols. So, for example, Figure 9 shows a multi-selector switch on the back of a digital camera used to generate the gestures and thus enable a photographer to keep notes on the camera. This illustrates how flexible and compact the teaching of this invention enables text and symbol entry to become.
Similarly a skilled person would appreciate that though this invention prefers a limited set of five (5) gestures in a sequence of two (2), being easy to implement and learn and the minimum necessary to generate the Roman alphabet, use of a larger set of gestures, and/or a longer sequence, and/or further registers would enable the symbols and glyphs of any language to be generated.
The present invention has been described above purely by way of example, and modifications can be made within the spirit and scope of the invention, which extends to equivalents of the features described and combinations of one or more features described herein. The invention also consists in any individual features described or implicit herein.

Claims

1. A method for entering symbols such as text or characters for display or use on an electronic device in response to a gesture of a user operating the device, wherein the device has an input means configured to detect a gesture, the method including:
detecting at the input means a gesture from a set of gestures, wherein the set consists of five (5) gestures;
monitoring the sequence of gestures detected by said input means;
identifying from a register a symbol or character corresponding to a subsequence of said gestures; and
returning to the electronic device the symbol corresponding to the sub-sequence for the electronic device to display or otherwise use.
2. A method according to claim 1, wherein the input means has predetermined axes, such as Cartesian axes, and the set of gestures consists of: up; down; left; right; and a push, press, tap or movement that extends linearly on a plane extending from, or through, the plane defined by the up, down, left or right orientations or axes.
3. A method according to claim 1 or 2, wherein the input means has predetermined axes, such as Cartesian axes, and the set of gestures consists of: a movement extending linearly upwards; a movement extending linearly downwards; a movement extending linearly from left to right; a movement extending linearly from right to left; and a push, press, tap or movement that extends linearly on a plane extending from, or through, the plane defined by the up, down, left or right orientations or axes.
4. A method according to claim 1 , wherein the device reads from the register a symbol that corresponds to a sequence of at least two (2) gestures or signals detected at the input, and the input means has predetermined axes, such as Cartesian axes, and is configured to detect a gesture or a signal created by a gesture, from the group of:
a movement extending linearly:
upward; downward; from left to right; from right to left; upward from left to right;
upward from right to left;
downward from right to left; or
downward from left to right; and
a push, press, tap or similar linear movement on a plane substantially perpendicular, or through, a plane defined by the up, down, left and right orientations.
5. A method according to any preceding claim, wherein the sub-sequence comprises at least a first gesture and a subsequent gesture, or consists of two (2) sequential gestures such as a pair of gestures.
6. A method according to any preceding claim, wherein the display is a touch-sensitive display screen, and the display has areas including:
a text area for displaying and editing text; and
a function area for managing the entry of gestures, the selection of a register and editing text,
wherein the function area includes a plurality of zones, including at least one of: a selection zone for selecting the status of the function area;
an indication zone for indicating the last gesture recognised by the input means;
an editing zone for editing using keyboard-type text-adjustment functions such as delete, space and return; and an amending zone for changing the status of at least a shift-key and a register-key.
7. An electronic device having a display and configured for entering symbols such as text or characters on the display in response to a gesture of a user operating the device, wherein the device has:
an input means configured to detect a gesture, wherein the input means is configured to recognise gestures from a set of gestures and wherein the set consists of five (5) gestures; and
a controller configured to track the sequence of gestures detected by said input means, and wherein the controller is configured to identify from a register a symbol corresponding to a sub-sequence of said gestures and display the symbol corresponding to the sub-sequence.
8. An electronic device according to claim 7, wherein the input means has predetermined axes, such as Cartesian axes, and the input means is configured to recognise a set of gestures consisting of: up; down; left; right; and a push, press, tap or movement that extends linearly on a plane extending from, or through, the plane defined by the up, down, left or right orientations or axes.
9. An electronic device according to claim 7 or 8, wherein the input means has predetermined axes, such as Cartesian axes, and the input means is configured to recognise a set of gestures consisting of: a movement extending linearly upwards; a movement extending linearly downwards; a movement extending linearly from left to right; a movement extending linearly from right to left; and a push, press, tap or movement that extends linearly on a plane extending from, or through, the plane defined by the up, down, left or right orientations or axes.
10. An electronic device according to claim 7, wherein the input means is configured to recognise gestures from a set of nine (9) gestures, and the input means has predetermined axes, such as Cartesian axes, and is configured to recognise a set of gestures consisting of:
a movement extending linearly:
upward; downward; from left to right; from right to left; upward from left to right;
upward from right to left;
downward from right to left; or
downward from left to right; and
a push, press, tap or similar linear movement on a plane substantially perpendicular, or through, a plane defined by the up, down, left or right orientations.
11. An electronic device according to any of claims 7 to 10, wherein the subsequence comprises at least a first gesture and a subsequent gesture, or consists of two (2) sequential gestures.
12. An electronic device according to any of claims 7 to 11, wherein the display screen is touch-sensitive, and the display has areas including:
a text area for displaying and editing text; and
a function area for managing the entry of gestures, the selection of a register and editing text,
wherein the function area includes a plurality of zones, including at least one of: a selector, configured to change the status of the function area;
an indicator, configured to show at least the last gesture recognised by the input means;
an editor, configured to enable keyboard-type text-adjustment functions such as delete, space and return; and
a status selector, configured to enable a user to change the status of at least a shift-key and a register-key.
13. An input means for the method of claims 1 to 6 and/or the device of claims 7 to 12, configured to convert a gesture into a signal, wherein the input means includes at least one of:
a joystick, wherein the forward, backward, left and right movements correspond, respectively, to an up; down; left; right movement, while a press on the pivot-point of the joystick, or a button on the joystick
corresponds to a push, press, tap or movement;
a multi-selector switch wherein the up, down, left and right sides of the selector correspond, respectively, to an up; down; left; right movement, while a press on the centre (OK) of the multi-selector switch
corresponds to a push, press, tap or movement;
cursor keys and space bar on a qwerty-keyboard;
a touch sensitive pad, wherein a finger or pen can make up; down; left and right swipe-type movements, while a finger press or dot corresponds to a push, press, tap or movement;
an eye and/or head movement recognition device; or
an accelerometer configured to detect movement of a device, or configured to detect movement of a digit and/or hand and/or a limb when attached thereto.
14. An input means for the method of claims 1 to 3 and/or the device of claims 7 to 10, wherein the input means includes a five-transducer or button hand held device.
15. An input means for the method of claims 1 to 3 and/or the device of claims 7 to 10, wherein the input means includes a ten-transducer or button hand held device.
16. A system comprising a device according to any of claims 7 to 10 connected to an input means according to any of claims 13 to 15.
17. A computer readable storage medium storing one or more programs, said programs having instructions, which when executed by an electronic device perform a method according to any of claims 1 to 6.
18. A method, device, system or medium as hereinbefore described and/or as shown in the Figures.
PCT/GB2013/052948 2012-11-08 2013-11-08 Gesture input method and apparatus WO2014072734A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP13795285.9A EP2917812A1 (en) 2012-11-09 2013-11-08 Gesture input method and apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB1220200.8 2012-11-08
GB1220200.8A GB2507777A (en) 2012-11-09 2012-11-09 Conversion of combinations of gestures into character input, using restricted gesture set

Publications (1)

Publication Number Publication Date
WO2014072734A1 true WO2014072734A1 (en) 2014-05-15

Family

ID=47470351

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2013/052948 WO2014072734A1 (en) 2012-11-08 2013-11-08 Gesture input method and apparatus

Country Status (3)

Country Link
EP (1) EP2917812A1 (en)
GB (1) GB2507777A (en)
WO (1) WO2014072734A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017077353A1 (en) * 2015-11-05 2017-05-11 Bálint Géza Data entry device for entering characters by a finger with haptic feedback
CN114138110A (en) * 2021-11-05 2022-03-04 成都映潮科技股份有限公司 Gesture recording method and system based on android system and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1513053A2 (en) * 2003-09-05 2005-03-09 Samsung Electronics Co., Ltd. Apparatus and method for character recognition
US20080136681A1 (en) * 2006-12-04 2008-06-12 Electronics And Telecommunications Research Institute Apparatus and method for constituting character using head motion
US20080158162A1 (en) * 2005-01-05 2008-07-03 Jaewoo Ahn Method And Apparatus For Inputting Character Through Direction Input Unit
US20120235838A1 (en) * 2009-11-25 2012-09-20 Foxit Corporation Method and device for character input by diection key

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5390260A (en) * 1993-06-28 1995-02-14 International Business Machines, Corp. Method and apparatus for on-line, real time recognition of stroked hand-drawn characters
US5596656B1 (en) * 1993-10-06 2000-04-25 Xerox Corp Unistrokes for computerized interpretation of handwriting
US6970599B2 (en) * 2002-07-25 2005-11-29 America Online, Inc. Chinese character handwriting recognition system
NO315777B1 (en) * 2000-09-27 2003-10-20 Bware As Method and system for obtaining a user interface against an electrical device
TW201015382A (en) * 2008-10-09 2010-04-16 Univ Nat Chiao Tung Virtual input system and method
US9104306B2 (en) * 2010-10-29 2015-08-11 Avago Technologies General Ip (Singapore) Pte. Ltd. Translation of directional input to gesture



Also Published As

Publication number Publication date
GB201220200D0 (en) 2012-12-26
GB2507777A (en) 2014-05-14
EP2917812A1 (en) 2015-09-16


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 13795285; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
REEP Request for entry into the european phase (Ref document number: 2013795285; Country of ref document: EP)
WWE Wipo information: entry into national phase (Ref document number: 2013795285; Country of ref document: EP)