METHODS FOR MAPPING GESTURES TO GRAPHICAL USER INTERFACE
COMMANDS
Field of the Invention
The present invention relates to methods for mapping gestures to graphical user interface commands. In particular, it relates to methods for mapping gestures performed on a touch screen with configurations of bunched fingers. The invention has been developed primarily for use with infrared-type touch screens, and will be described hereinafter with reference to this application. However, it will be appreciated that the invention is not limited to this particular field of use.
Background of the Invention
Any discussion of the prior art throughout the specification should in no way be considered as an admission that such prior art is widely known or forms part of the common general knowledge in the field.
Input devices based on touch sensing (touch screens) have long been used in electronic devices such as computers, personal digital assistants (PDAs), handheld games and point of sale kiosks, and are now appearing in other portable consumer electronics devices such as mobile phones. Generally, touch-enabled devices allow a user to interact with the device, for example by touching one or more graphical elements such as icons or keys of a virtual keyboard presented on a display, or by writing or drawing on a display.
Several touch-sensing technologies are known, including resistive, capacitive, projected capacitive, surface acoustic wave, optical and infrared, all of which have advantages and disadvantages in areas such as cost, reliability, ease of viewing in bright light, ability to sense different types of touch object (e.g. finger, gloved finger, stylus) and single or multi-touch capability. For example resistive touch screens are inexpensive and can sense virtually any rigid touch object, but have poor screen viewability in bright light and can only sense single touches. Projected capacitive has multi-touch capability but cannot sense a non-conductive stylus or a gloved finger, and likewise has poor screen viewability in bright light. Optical and infrared have good screen viewability in bright light, some multi-touch capability and are sensitive to virtually any touch object, but
there is the potential for the detectors to be saturated by sunlight. Furthermore some touch-sensing technologies, including optical, infrared and surface acoustic wave, are sensitive to near-touches as well as to actual touches, whereas other technologies such as resistive require an actual touch.
The concept of gestural inputs, where a user places and/or moves one or more touch objects (usually fingers, with the thumb considered to be a finger) across a touch-sensitive surface, or places one or more touch objects on a touch-sensitive surface in a particular sequence, is an increasingly popular means for enhancing the power of touch input devices beyond the simple 'touch to select' function, with a large number of gestures of varying complexity for touch input devices known in the art (see for example US Patent Application Publication Nos 2006/0026535 A1, 2006/0274046 A1 and 2007/0177804 A1). As discussed in US 2006/0097991 A1, touch technologies such as projected capacitive with interrogation of every node can accurately detect several simultaneous touch events and are well suited to gestural input, with gestures interpreted according to the number of fingers used. US 2007/0177804 A1 discusses the concept of a 'chord' as a set of fingers contacting a multi-touch surface, and suggests the use of a gesture dictionary assigning functions to different motions of a chord. On the other hand for touch technologies with no multi-touch capability (e.g. resistive and surface capacitive), gestural input based on chords is of limited applicability.
Still other touch technologies, typically those that detect touch objects based on the occlusion of one or more light or acoustic beam paths, occupy a middle ground in that they can routinely detect the presence of multiple simultaneous touch objects, but generally cannot unambiguously determine their location. This 'double touch ambiguity' is illustrated in Fig. 1, for the example of a conventional 'infrared' touch screen 2 with arrays of emitters 4 and detectors 6 along opposing edges of an input area 8. Two touch objects 10 occlude two 'X' beam paths 12 and two 'Y' beam paths 14 such that their presence is detected, but their location cannot be distinguished from the two 'phantom' points 16. This ambiguity, and some of its implications for gestural
input, is described in more detail in International Patent Application Publication No WO 2008/138046 A1.
Essentially the same ambiguity is encountered in 'optical' touch screens that detect touch objects using cameras in two adjacent corners (US Patent No 6,943,779 and US Patent Application Publication No 2006/0232792 A1), surface acoustic wave devices (US Patent Nos 6,856,259 and 7,061,475), and projected capacitive systems with column/row interrogation (simpler and faster than interrogating every node, US Patent Application Publication No 2008/0150906 A1). The ambiguity can sometimes be resolved by various techniques described in the abovementioned documents, but in any event the touch technologies in this class are still suitable for gestural input to a limited extent. In particular, the 'double touch ambiguity' does not arise for every arrangement of two or more touch objects. For example, with reference to Fig. 1, if two touch objects occlude different 'X' beam paths 12 but the same 'Y' beam path 14, or different 'Y' beam paths but the same 'X' beam path, their locations can be determined
unambiguously.
The number of gestures that can be mapped to various combinations of fingers can be increased by taking account of the distances between a user's fingers on the touch-sensitive surface. For example US Patent No 7,030,861 discloses the concept of discriminating between 'neutral' gestures, where a user's hand is relaxed with the fingers relatively close together, and 'spread-hand' gestures, where a user's fingers are intentionally spread apart, with different functions assigned accordingly. Referring to Fig. 2, the requirement for a touch technology to resolve two closely spaced touch objects, represented by the contact points of two adjacent fingers 18 on a touch screen 2, separated by a gap 20 significantly smaller than either finger, is in many ways distinct from the requirement to detect and locate two simultaneous touch events. That is, a touch technology capable of detecting and locating two widely separated touch objects will not necessarily be able to do so when the objects are closely spaced or in mutual physical contact. For example, in projected capacitive touch screens where conductive objects (such as a user's fingers) are detected in an analogue fashion via their interaction with a grid of conductive traces, the ability to resolve two closely spaced
objects can be influenced by changes in the conductivity of the touch objects (e.g. damp versus dry fingers). Another touch technology that can in principle detect multiple touch objects, but will have difficulty in resolving closely spaced objects, is the 'in-cell optical' technology described in US Patent No 7,166,966 for example, where photodetector pixels located amongst the emitter pixels of an LCD detect a touch object via shadowing of ambient light. In this case, unless the ambient light source is fortuitously located, two adjacent fingers as shown in Fig. 2 will appear as a single shadow. In the case of 'optical' (as opposed to 'infrared') touch screens, the ability to resolve closely spaced touch objects varies with position on the touch area. To explain with reference to Fig. 3, an optical touch screen 22 with co-located emitters 24 and cameras 26 in two adjacent corners, and three retro-reflective sides 28, locates a touch object 10 by triangulation of the shadowed beam paths 30. It will be appreciated that while two closely spaced objects at position 32 may be resolved (depending on the camera resolution) because the gap 20 will be seen by at least one of the cameras, as shown by the beam path 33, they will not be resolved at position 34.
Summary of the Invention
It is an object of the present invention to overcome or ameliorate at least one of the disadvantages of the prior art, or to provide a useful alternative. It is an object of the present invention in its preferred form to provide methods for mapping gestures performed on a touch screen to graphical user interface commands. In accordance with a first aspect of the invention there is provided a method for mapping gestures to graphical user interface commands, said method comprising the steps of: detecting a gesture performed with a bunched finger configuration on a surface of a touch screen associated with said graphical user interface, said bunched finger configuration comprising two or more fingers; and generating a graphical user interface command in response to said gesture.
In one preferred form, the bunched finger configuration comprises two or more fingers held in mutual contact. In another preferred form, the bunched finger configuration
comprises two or more fingers perceived by the touch screen to be separated by a gap less than half the size of any of the fingers. In yet another preferred form, the bunched finger configuration comprises two or more fingers perceived by the touch screen to be separated by a gap less than a quarter the size of any of the fingers.
The gesture preferably includes one or more taps of at least a subset of the fingers on the surface. Preferably, the gesture includes movement of at least a subset of the fingers across the surface. The fingers are preferably moved across the surface in unison. In one preferred form, the bunched finger configuration comprises two fingers, and the gesture is mapped to a rotation command such that a graphical object displayed on the graphical user interface is rotated. In another preferred form, the bunched finger configuration comprises two fingers, the gesture further comprises a tap of the two fingers prior to the movement, and the gesture is mapped to a fixed increment rotation command such that a graphical object displayed on the graphical user interface is rotated by fixed increments. In yet another preferred form, the gesture further comprises a double tap of a single finger on the graphical object prior to the movement, to define a centre of rotation, and the gesture is mapped to a rotation command such that the graphical object is rotated about the centre of rotation.
In accordance with a second aspect of the present invention there is provided a touch screen system when used to implement a method according to the first aspect of the invention.
Brief Description of the Drawings
Preferred embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings in which:
Figure 1 illustrates the 'double touch ambiguity' experienced by many touch screen technologies;
Figure 2 illustrates in plan view the contact of two closely spaced fingers on a touch screen;
Figure 3 illustrates in plan view the operation of an 'optical' touch screen;
Figure 4 illustrates in plan view the operation of a type of infrared touch screen;
Figures 5(a) to 5(c) illustrate in side view some possible configurations for combining the infrared touch screen of Fig. 4 with a display;
Figures 6(a) and 6(b) illustrate in plan and side view the interaction of two closely spaced touch objects with the touch screen of Fig. 4;
Figure 7 illustrates an example edge detection algorithm for resolving two closely spaced touch objects;
Figure 8 illustrates an example implementation of bunched finger detection within a touch screen microcontroller;
Figures 9(a) and 9(b) illustrate how a swipe gesture can be interpreted as a command to either rotate or translate a graphical object depending on the configuration of fingers performing the gesture;
Figures 10(a) to 10(c) illustrate how a swipe gesture applied to a list of items can be interpreted differently depending on the configuration of fingers performing the gesture;
Figures 11(a) to 11(d) illustrate the application of a bunched finger gesture in a drawing program; and
Figure 12 illustrates an example graphical user interface implementation of the preferred embodiment.
Detailed Description of the Invention
Referring to the drawings, Fig. 4 illustrates in plan view an infrared-style touch screen 36 with a field of substantially parallel energy paths in the form of two directional sheets of light 38 established in a touch area 40. The light is preferably in the infrared portion of the spectrum so as to be invisible to a user, but in certain embodiments may be in the visible or even ultraviolet portions of the spectrum. As described in US Patent
Application Publication No 2008/0278460 Al, the contents of which are incorporated herein by reference, the sheets of light are generated by a transmissive body 42
comprising a planar transmissive element 44 and two collimation/redirection elements 46 that include parabolic reflectors 48. In alternative embodiments the
collimation/redirection elements may include lenses, segmented reflectors or segmented lenses. Light 50 emitted by a pair of optical sources (e.g. infrared LEDs) 52 is launched into the transmissive element, then collimated and re-directed by the
collimation/redirection elements to produce two sheets of light 38 that propagate in front of the transmissive element towards X,Y arrays of integrated optical waveguides 54 with
in-plane lenses 56 to help collect the light, then guided to a position-sensitive (i.e. multielement) detector 58 such as a line camera or a digital camera chip. Methods for fabricating the optical waveguides and in-plane lenses from photo-curable polymers are described in US Patent No 7,218,812 and US Patent Application Publication No
2007/0190331 A1, the contents of which are incorporated herein by reference. The optical sources and the detector are connected to a microcontroller 60 that controls the operation of the touch screen device. The connections may be physical or wireless, and for clarity the connections between the microcontroller and the optical sources have been omitted from Fig. 4. Preferably the two sheets of light are co-planar, but could alternatively be in parallel planes. When an object 10 such as a user's finger blocks portions of the sheets of light, it will be detected and its location within the touch area determined based on the detector elements that receive little or no light.
Generally the touch input device 36 includes a display (not shown in Fig. 4) more or less coincident with the touch area, for presenting a graphical user interface with which a user can interact via touch input. The type of display is not particularly important, and may for example be an LCD, an OLED display, a MEMs display or an e-paper display (also known as an e-book display or an electrophoretic display). Fig. 5(a) shows in side view one possible configuration for combining an infrared touch screen 36 of the type shown in Fig. 4 with a display 62. In this configuration the planar transmissive element 44 is located in front of the display. In a variant configuration shown in Fig. 5(b) a front glass sheet 64 of the display serves as the planar transmissive element 44 component of the transmissive body, with the collimation/redirection element 46 provided as a separate component. Fig. 5(c) shows in side view another configuration where the planar transmissive element 44 is located behind the display 62.
The configurations shown in Figs 5(b) and 5(c) have the advantage of there being nothing in front of the display (and particularly no high refractive index layers of conductive material such as ITO, unlike in resistive and capacitive touch screens) that may cause problems of glare or display dimming. This is particularly advantageous for displays such as e-paper that can have relatively poor contrast and that may be viewed for extended periods. Alternatively, in the configuration shown in Fig. 5(a) the planar
transmissive element 44 can double as a display protector, and because it has no high refractive index layers its contribution to glare or screen dimming will be modest.
The methods of the present invention will now be described, by way of example only, with reference to the infrared touch screen illustrated in Fig. 4, equipped with a display for presenting a graphical user interface. However the methods are also applicable to other types of touch screen capable of resolving closely spaced touch objects.
Turning now to Fig. 6(a), there is shown in plan view the shadows 66 cast by the tips of two closely spaced or physically contacting fingers, shown as contact areas 67, placed on a display associated with a touch screen of the type shown in Fig. 4. For simplicity only a single axis sheet of light 38 is shown, directed towards an array of in-plane lenses 56 with associated receive waveguides 54. Note that the spacings between adjacent in-plane lenses can be made arbitrarily small, limited only by the resolution of the technique used to pattern them, and in certain embodiments are omitted altogether such that the lenses are contiguous. In the particular example shown in Fig. 6(a), lenses A, M and Y receive all of the available amount of signal light, lenses B, L, N and X receive a reduced amount of signal light, and lenses C to K and O to W receive essentially no signal light. When the two fingers are positioned in a line 70 oriented approximately perpendicular to the propagation direction of the single axis sheet of light 38, as shown in Fig. 6(a), the other (perpendicular) sheet of light, not shown in Fig. 6(a), will see them as a single touch object. This is in fact advantageous because it avoids the 'double touch ambiguity' as mentioned above with reference to Fig. 1. The same principle applies to arrangements of three or more fingers (or other touch objects) oriented more or less in a line approximately perpendicular to the propagation direction of one or other of the sheets of light.
Fig. 6(b) illustrates the situation in side view, with the two fingers 18 placed on the surface 61 of the touch screen/display 62 and cutting the sheet of light 38 located just above the display surface. Importantly, it can be seen that provided the sheet of light is sufficiently close to the display surface, which is a relatively straightforward aspect of the transmit and receive optics design, the sheet of light intersects the gap 20 between
the tips of the fingers, present even when the fingers are held in mutual contact in the region of the last finger joint 69 as shown.
Fig. 7 illustrates the basics of an example edge detection algorithm that can identify the configuration of the double finger contact shown in Figs 6(a) and 6(b). The points 71 representing the intensity of light collected by the lenses 56 and guided to the detector pixels are interpolated to produce a line 72 and the edges 73 of the fingers determined by comparison with a predetermined threshold level 74, enabling calculation of the width 68 of each finger and of the gap 20 between them. By way of specific example, with an arrangement of receive optics including 1 mm wide in-plane lenses 56 and associated waveguides 54 on a 1 mm pitch, the edge detection algorithm determines the width of each finger to be 10 mm and the gap to be 2 mm. It will be appreciated that for a given arrangement of lenses, the received signal intensity pattern will vary with the orientation of the two fingers. However provided at least one lens (e.g. lens L, M or N in Fig. 7) receives a signal level above the predetermined threshold, the fingers can be resolved. We have found that with 1 mm wide lenses our edge detection algorithms are surprisingly robust to changes in finger orientation: when the touch screen is touched with two adjacent adult-sized fingers, there is only a narrow range of orientation angles, approximately 45° to the two sheets of light, where the algorithms fail to resolve the two fingers. It will also be appreciated that with touch object detection based on a Cartesian grid of light paths, the ability of the algorithms to resolve closely spaced objects is independent of the position of the objects within the input area. As explained above with reference to Fig. 3, this is to be contrasted with the situation for 'optical' touch screens where the ability to resolve closely spaced objects is highly dependent on their position within the input area.
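By way of illustration only, the threshold comparison and width/gap calculation described above might be realised along the following lines. This is a minimal sketch in C: the lens count and 1 mm pitch follow the worked example, but the function names, the linear interpolation between samples and the assumption that the scan starts on an unshadowed lens are choices made for the sketch rather than details of any particular controller implementation.

#include <stdio.h>

#define NUM_LENSES    25      /* lenses A to Y of Figs 6(a) and 7             */
#define LENS_PITCH_MM 1.0f    /* 1 mm lens pitch as in the worked example     */
#define MAX_EDGES     16

/* Locate sub-lens edge positions where the interpolated intensity crosses the
 * threshold, then pair falling/rising edges into shadowed objects and report
 * their widths and the gaps between them. Assumes the scan starts on an
 * unshadowed lens. Returns the number of objects found. */
static int detect_objects(const float intensity[NUM_LENSES], float threshold,
                          float widths_mm[], float gaps_mm[], int max_objects)
{
    float edges[MAX_EDGES];
    int num_edges = 0;

    for (int i = 0; i + 1 < NUM_LENSES && num_edges < MAX_EDGES; i++) {
        float a = intensity[i], b = intensity[i + 1];
        if ((a >= threshold) != (b >= threshold)) {
            /* Linear interpolation gives a sub-lens estimate of the edge. */
            float frac = (threshold - a) / (b - a);
            edges[num_edges++] = (i + frac) * LENS_PITCH_MM;
        }
    }

    int num_objects = 0;
    for (int e = 0; e + 1 < num_edges && num_objects < max_objects; e += 2) {
        widths_mm[num_objects] = edges[e + 1] - edges[e];
        if (num_objects > 0)
            gaps_mm[num_objects - 1] = edges[e] - edges[e - 1];
        num_objects++;
    }
    return num_objects;
}

int main(void)
{
    /* Toy intensity pattern matching Fig. 6(a): lenses A, M and Y fully lit,
     * B, L, N and X partially shadowed, C to K and O to W fully shadowed. */
    float s[NUM_LENSES];
    for (int i = 0; i < NUM_LENSES; i++) s[i] = 1.0f;
    for (int i = 2; i <= 10; i++) s[i] = 0.0f;    /* first finger  */
    for (int i = 14; i <= 22; i++) s[i] = 0.0f;   /* second finger */
    s[1] = s[11] = s[13] = s[23] = 0.5f;

    float widths[4], gaps[3];
    int n = detect_objects(s, 0.4f, widths, gaps, 4);
    for (int i = 0; i < n; i++)
        printf("object %d: width %.1f mm\n", i, widths[i]);
    for (int i = 0; i + 1 < n; i++)
        printf("gap %d: %.1f mm\n", i, gaps[i]);
    return 0;
}

For the pattern above, the sketch reports two finger-sized objects separated by a gap much smaller than either object, consistent with the bunched double finger contact of Fig. 6(a).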
We note that a double finger contact could also be identified by an algorithm that detects minima in the signal levels received by the detector pixels. However inspection of the signal levels 71 in Fig. 7 shows that the minima can be quite broad in situations where touch objects shadow a number of adjacent lenses, so that the determination of the position of a minimum may be inaccurate. Furthermore a minimum-detecting algorithm may be confused by small amounts of stray light that reach isolated pixels, as represented by point 75 in Fig. 7.
When a pattern of detected light intensity is determined by the algorithms to be consistent with a touch from two or more finger-sized objects separated by distances considerably less than the dimensions of each object, the controller interprets this pattern as a 'bunched finger' configuration. A gesture (such as a tap or swipe) performed on the touch screen with a bunched finger configuration can then be mapped by the controller to a command to perform some action on objects displayed on a graphical user interface associated with the touch screen. For the purposes of this specification, we define a bunched finger configuration to be a configuration where a user is contacting the touch screen with a set of fingers, or fingerlike objects, comprising two or more fingers that are sufficiently close together as to appear to the device controller to be held in mutual physical contact close to the point of contact with the touch screen surface, but are nonetheless resolvable as individual objects. Anatomically, this generally means that the fingers are touching each other at least in the region of the last finger joint. In preferred embodiments, two adjacent touch objects are taken to be 'sufficiently close' when the touch screen perceives them as being separated by a gap less than half the size of either object, preferably less than a quarter the size of either touch object. In certain embodiments a touch screen controller provides a routine to calibrate the device for the finger sizes and bunched finger configurations of a particular user, e.g. an adult or child, to enable the controller to distinguish more reliably between, say, a gesture performed with fingers in a bunched configuration and a gesture performed with fingers in a 'neutral'
configuration, i.e. slightly separated. Note that a user's fingers need not actually be held in mutual physical contact for the controller to interpret the finger configuration as a 'bunched finger configuration', although to minimise the chance of the controller misinterpreting the finger configuration and mapping it to the wrong command, a user should preferably have their fingers held in mutual contact. A bunched finger configuration will often comprise two or more fingers (including the thumb) from the same hand, e.g. index and middle fingers, but may comprise fingers from both hands, such as the two index fingers. A bunched finger configuration can also include one or more isolated fingers, so long as it includes at least two fingers that are
bunched together. For example a configuration comprising bunched index and middle fingers and a more distant thumb is considered to be a bunched finger configuration. A bunched finger configuration can also be performed with touch objects other than fingers, although it is envisaged that it will be more convenient and intuitive for users to interact with the touch screen using their fingers.
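By way of illustration, the 'sufficiently close' criterion of the preferred embodiments reduces to a simple comparison of the perceived gap against a fraction of the narrower object. The following sketch in C assumes widths and gaps in millimetres, as produced by an edge detection step of the kind shown in Fig. 7; the function name and parameters are illustrative assumptions rather than part of any described controller.

#include <stdbool.h>

/* Two adjacent touch objects are treated as bunched when the gap perceived by
 * the touch screen is less than the given fraction (0.5 in one preferred form,
 * 0.25 in another) of the narrower of the two objects. */
static bool is_bunched(float width_a_mm, float width_b_mm, float gap_mm,
                       float max_gap_fraction)
{
    float narrower = (width_a_mm < width_b_mm) ? width_a_mm : width_b_mm;
    return gap_mm < max_gap_fraction * narrower;
}

/* Example: two 10 mm fingers perceived to be separated by a 2 mm gap satisfy
 * the half-width criterion, so is_bunched(10.0f, 10.0f, 2.0f, 0.5f) is true. */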
A gesture performed with a bunched finger configuration may include movement of fingers (a swipe) across the touch screen surface. In preferred embodiments the bunched fingers are moved in unison, i.e. they remain bunched throughout the movement. This is preferable for gestures performed on a touch screen susceptible to the double touch ambiguity. However in other embodiments the fingers may be bunched at the beginning of a gesture and moved apart, while in other embodiments they may be moved towards each other such that the gesture ends with the fingers bunched. Preferably the fingers are bunched at least at the beginning of the gesture to ensure the gesture is mapped to the desired command, but if the command is not executed or completed until after the gesture is finished, the gesture need only include bunched fingers at some point during its performance. Similar considerations apply for gestures that include one or more taps of a user's fingers on the touch screen surface; so long as a gesture includes screen contact with at least two bunched fingers at some point during its execution, it is considered to be a 'bunched finger gesture'.
Bunched finger configurations are convenient to apply, because the mutual finger contact provides a user with tactile feedback on their relative finger positions, ensuring that a gesture is mapped to the desired command by the controller. They are to be distinguished from the 'neutral' gestures of the abovementioned US 7,030,861 patent that are applied with a relaxed hand. To apply a bunched finger gesture a user needs actively to bunch his/her fingers together, whereas when a user initiates contact with a relaxed hand, the fingers are relatively close together but not bunched. Inspection of Fig. 6(b) shows that because the sheet of light 38 is located above the touch surface 61, an infrared touch screen will be able to detect the two fingers 18 and the gap 20 between them before the fingers actually contact the surface. That is, the touch screen is also sensitive to 'near touches' of bunched finger configurations. Given
the limited height of the gap between two fingers held in mutual contact, for most practical implementations a user will place their fingers in contact with the touch surface. Nevertheless, the terms 'touch', 'contact' and the like are intended to encompass near touches.
Turning now to Fig. 8 there is illustrated one form of implementation of the bunch detection algorithm within the microcontroller 60 of Fig. 4. The microcontroller receives the sensor inputs for the row and column sensors from the position sensitive detector 58. The microcontroller executes two continuous loops 120, 121. A first loop 120 examines the sensor values to determine if any finger down conditions exist. The row and column details of potential finger down positions are stored. A second loop 121 continually examines the finger down positions, utilising the aforementioned algorithm of Fig. 7 to determine if any of the finger down positions further satisfy the bunched finger condition. The finger down positions and bunch status are stored in an I/O register 122 at a high frequency rate.
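A minimal sketch of the two-loop structure of Fig. 8 follows, in C. The sensor-reading and register-writing routines are stubs standing in for hardware access, and the structure names, sizes and the bunched finger test (the half-width criterion discussed earlier) are assumptions made for illustration rather than details of the actual firmware.

#include <stdbool.h>
#include <stdio.h>

#define MAX_TOUCHES 4

typedef struct {
    float x_mm, y_mm;   /* finger down position                              */
    float width_mm;     /* perceived object width from the Fig. 7 algorithm  */
    bool  bunched;      /* set by the second loop                            */
} touch_report_t;

/* State shared by the two loops; published to the I/O register 122. */
static touch_report_t reports[MAX_TOUCHES];
static int num_reports;

/* Hardware stub (assumed): a real device would scan the row and column sensor
 * values from the position-sensitive detector 58. Here two 10 mm objects with
 * a 2 mm edge-to-edge gap are returned, ordered left to right along x. */
static int read_finger_down_positions(touch_report_t out[], int max)
{
    (void)max;
    out[0] = (touch_report_t){ 40.0f, 55.0f, 10.0f, false };
    out[1] = (touch_report_t){ 52.0f, 55.0f, 10.0f, false };
    return 2;
}

/* Hardware stub (assumed): write the current reports to the I/O register 122
 * for readout by the host device drivers. */
static void write_io_register(const touch_report_t r[], int count)
{
    for (int i = 0; i < count; i++)
        printf("touch %d at (%.0f, %.0f), bunched=%d\n",
               i, r[i].x_mm, r[i].y_mm, (int)r[i].bunched);
}

/* One pass of the first loop 120: look for finger down conditions and store
 * the corresponding positions. */
static void finger_down_loop_iteration(void)
{
    num_reports = read_finger_down_positions(reports, MAX_TOUCHES);
}

/* One pass of the second loop 121: flag pairs of finger down positions that
 * satisfy the bunched finger condition, then publish the results. */
static void bunch_detection_loop_iteration(void)
{
    for (int i = 0; i < num_reports; i++)
        for (int j = i + 1; j < num_reports; j++) {
            float edge_gap = (reports[j].x_mm - reports[i].x_mm)
                - 0.5f * (reports[i].width_mm + reports[j].width_mm);
            float narrower = reports[i].width_mm < reports[j].width_mm
                           ? reports[i].width_mm : reports[j].width_mm;
            if (edge_gap >= 0.0f && edge_gap < 0.5f * narrower)
                reports[i].bunched = reports[j].bunched = true;
        }
    write_io_register(reports, num_reports);
}

int main(void)
{
    /* In the firmware both loops run continuously and at a high rate; a single
     * pass of each is enough to show the data flow. */
    finger_down_loop_iteration();
    bunch_detection_loop_iteration();
    return 0;
}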
The microcontroller interfaces with a main computer system running a predetermined operating system (e.g. Microsoft Windows, Linux, Mac OS etc.). Depending on the operating system implementation details, the positions and bunch status are read out by device drivers and forwarded to user interface routines over a main computer I/O bus. The device drivers implement bunched finger tracking for the generation of swipe gestures.
We turn now to a description of some non-limiting examples of graphical user interface commands that can be mapped to gestures (such as taps or swipes) performed with a bunched finger configuration.
EXAMPLE 1
Fig. 9(a) illustrates a 'two finger rotate' gesture performed with a configuration of two bunched fingers 75 on a graphical user interface 76 displaying a graphical object 77. When the graphical object is touched by two bunched fingers and the fingers moved
(swiped) in unison across the screen, preferably in a roughly circular fashion 78, the object is rotated about its centre of mass, the default centre of rotation. That is, the gesture comprising a swipe of the two bunched fingers is 'mapped' to a graphical user interface command such that the device controller generates a rotation command in response to the gesture and applies it to a graphical object. If on the other hand the graphical object 77 is touched with a single finger 18, a swipe of the finger will be mapped by the device controller to a 'pan' or 'translate' command as shown in Fig. 9(b). A similar 'touch and drag' gesture performed with two fingers that are not interpreted as being bunched (e.g. if the device controller sees them as being separated by a distance comparable to the width of either finger) may also be mapped to a 'pan' command, or to some other command such as a 'fixed increment' rotation, e.g. with increments of 15, 30 or 90 degrees that may be user-definable.
A number of variations on the 'two finger rotate' gesture shown in Fig. 9(a) are possible. In one example, if several graphical objects are displayed, the object to be rotated can be selected with a single finger tap before the rotation is commenced. In another example, a 'fixed increment' rotation function can be selected by tapping the object to be rotated with the two bunched fingers before commencing the 'two finger rotate' gesture. In yet another example, instead of rotating the object about its centre of mass, some other centre of rotation can be predefined with a double tap with a single finger.
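The mapping logic of this example might be summarised, purely as an illustrative sketch in C, by a small dispatch function of the kind below; the enumeration and function names, and the treatment of configurations other than those described, are assumptions made for the sketch.

#include <stdbool.h>

typedef enum {
    CMD_NONE,
    CMD_PAN,                        /* translate the object or window         */
    CMD_ROTATE,                     /* free rotation about the centre of mass */
    CMD_ROTATE_FIXED_INCREMENT      /* rotation in fixed, user-definable steps */
} gui_command_t;

/* Choose a command from the number of fingers down, whether they are bunched,
 * and whether the swipe was preceded by a tap of the two bunched fingers. */
static gui_command_t map_swipe_to_command(int finger_count, bool bunched,
                                          bool preceded_by_bunched_tap)
{
    if (finger_count == 1)
        return CMD_PAN;                               /* Fig. 9(b)            */
    if (finger_count == 2 && bunched)
        return preceded_by_bunched_tap ? CMD_ROTATE_FIXED_INCREMENT
                                       : CMD_ROTATE;  /* Fig. 9(a)            */
    if (finger_count == 2)
        return CMD_PAN;            /* widely separated fingers: pan, or another
                                      user-defined command                     */
    return CMD_NONE;
}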
EXAMPLE 2
Referring now to Figs 10(a) to 10(c), some other examples of gestures with two bunched fingers will be explained with reference to a list of items 79 partially displayed in a window 80 of a graphical user interface 76. As shown in Fig. 10(a), a downwards or upwards swipe 82 with a single finger 18 is mapped to a 'scroll' command allowing further items in the list to be displayed. As shown in Fig. 10(b), a swipe 84 with two widely separated fingers 18 is mapped to a 'pan' command where the entire window 80 is translated. As shown in Fig. 10(c), a downwards or upwards swipe 86 with a configuration of two bunched fingers 75 is mapped to a 'move' command whereby a selected item 88 is moved to another location in the list. The desired item is selected by the initial contact with the fingers on the display and 'dropped' into its new location by removing the fingers from the display. In another variation, an item can be selected by
initial contact with a configuration of two bunched fingers, then 'copied and moved' into a new location by a downwards or upwards swipe of one finger only. This is an example of a gesture that begins but does not end with a bunched finger configuration.
EXAMPLE 3
Figs 11(a) to 11(d) illustrate the application of a bunched finger gesture in a drawing package. Two separate graphical objects 90 displayed on a graphical user interface 76 are selected with single finger contacts 18 as shown in Fig. 11(a), then panned into an overlapping arrangement as the two fingers are moved towards each other, possibly but not necessarily into a bunched configuration, as shown in Fig. 11(b). Finally, a single tap with a configuration of two bunched fingers 75 on the overlap region 92 (Fig. 11(c)) is mapped to a command to join the two objects to form a single object 94 as shown in Fig. 11(d). This is an example of a gesture that ends but does not begin with a bunched finger configuration. Depending on the separation of the two objects 90 as shown in Fig. 11(a), it may be convenient to initiate this 'merge' command by selecting and panning the objects with a finger from each hand. We note that the 'merge' command, i.e. a tap with two bunched fingers as shown in Fig. 11(d), can alternatively be performed in isolation, to merge two graphical objects that were previously moved into or created in overlapping fashion.
Fig. 12 illustrates one form of implementation of gesture or swipe recognition within a user interface. In this arrangement, the sensor pad outputs are continuously read out by the device drivers 126 of the user interface 125 running as part of the computer operating system. The information read from the microcontroller 60 includes a bunched finger status flag which is active when a bunched finger determination has been made. The device drivers 126 store current and previous finger position information to determine if a two finger swipe is currently active. Where bunched finger positions are received across multiple successive readouts in a continuous stream of data values, an extended bunched finger swipe is determined to have occurred.
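The driver-side tracking described above might look, in outline, like the following sketch in C: successive readouts carrying an active bunched finger flag are accumulated into an extended swipe, and the displacement since the first bunched readout is what would be forwarded to the user interface controller 127 described below. The structure and field names are assumptions made for illustration.

#include <stdbool.h>

typedef struct {
    float x_mm, y_mm;   /* reported position of the bunched finger group */
    bool  bunched;      /* bunched finger status flag from the register  */
} readout_t;

typedef struct {
    bool  active;            /* an extended bunched finger swipe is in progress */
    float start_x, start_y;  /* position at the first bunched readout           */
    float dx, dy;            /* displacement accumulated so far                 */
} swipe_tracker_t;

/* Feed one readout from the I/O register; returns true while an extended
 * bunched finger swipe is in progress. */
static bool track_bunched_swipe(swipe_tracker_t *t, readout_t r)
{
    if (!r.bunched) {            /* flag cleared: any swipe in progress ends */
        t->active = false;
        return false;
    }
    if (!t->active) {            /* first bunched readout: start a new swipe */
        t->active = true;
        t->start_x = r.x_mm;
        t->start_y = r.y_mm;
    }
    t->dx = r.x_mm - t->start_x; /* displacement forwarded to the UI routines */
    t->dy = r.y_mm - t->start_y;
    return true;
}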
The device driver outputs are forwarded to user interface controller 127 which controls the overall user interactive experience and includes an object store of current objects being displayed. The user interface controller 127 updates the current interactive user
interface position, outputting to a display output controller 129 which includes the usual frame buffer for output to the output display via the microcontroller 60.
While the methods of the present invention have been described and exemplified for use on an infrared-style input device of the type shown in Fig. 4, the skilled person will understand that many variations are possible, provided the detection optics have sufficient resolution. For example the detection optics may be in the form of an array of discrete photodetectors located along the edge of the input area, or the sensing light may be in the form of discrete beams generated for example by an array of transmit waveguides (as disclosed in US Patent No 5,914,709) or an array of discrete emitters (as in 'conventional' infrared touch). Two sets of parallel beams are usually required to determine the X,Y coordinates of a touch object, with the sets generally being perpendicular to each other although this is not essential (see US Patent No 4,933,544 for example). In yet another variation the sensing field is in the form of acoustic beams transmitted and received by arrays of reflectors spaced along the edges of the input area, as disclosed in US Patent No 6,856,259 for example.
It will be appreciated that the illustrated embodiments expand the range of gestures that can be mapped to commands of a touch-enabled graphical user interface. Although the invention has been described with reference to specific examples, it will be appreciated by those skilled in the art that the invention may be embodied in many other forms.