WO2014088722A1 - Multi-touch symbol recognition - Google Patents

Multi-touch symbol recognition

Info

Publication number
WO2014088722A1
WO2014088722A1 (PCT/US2013/066615)
Authority
WO
WIPO (PCT)
Prior art keywords
touch
drag
area
finger
drag area
Prior art date
Application number
PCT/US2013/066615
Other languages
French (fr)
Inventor
Khosro M. Rabii
Dat Tien PHAM
Original Assignee
Qualcomm Incorporated
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qualcomm Incorporated filed Critical Qualcomm Incorporated
Priority to KR1020157017398A priority Critical patent/KR20150091365A/en
Priority to CN201380062934.1A priority patent/CN104885051A/en
Priority to EP13786836.0A priority patent/EP2929423A1/en
Publication of WO2014088722A1 publication Critical patent/WO2014088722A1/en

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/044 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by capacitive means
    • G06F3/0446 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by capacitive means using a grid-like structure of electrodes in at least two directions, e.g. using row and column electrodes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/048 Indexing scheme relating to G06F3/048
    • G06F2203/04808 Several contacts: gestures triggering a specific function, e.g. scrolling, zooming, right-click, when the user establishes several contacts with the surface simultaneously; e.g. using several fingers or a combination of fingers and pen

Definitions

  • the present embodiments relate to touch screen devices, and in particular, to methods and apparatus for the detection of anchor-drag multitouch gestures.
  • wireless computing devices such as wireless telephones, personal digital assistants (PDAs), and tablet computers that are small, lightweight, and easily carried by users.
  • PDAs personal digital assistants
  • portable computing devices may use touch screen displays that detect user gestures on the touch screen and translate the detected gestures into commands to be performed by the device. Such gestures may be performed using one or more fingers or a stylus type pointing implement.
  • Multi-touch screens are designed to recognize and track several simultaneous touches. For example, when a user moves two fingers on a screen, information indicating touch/movement for both fingers is provided by a multi-touch screen.
  • One drawback of implementing multi-touch technology on portable computing devices is the processing overhead typically required for recognizing multi-touch.
  • Processing overhead refers to the percentage of the device's total central processing unit (CPU) capacity that is consumed by individual computing tasks, such as touch detection. In total, these tasks must require less than the processor's overall capacity.
  • Simple touch gestures may typically be handled by a touchscreen controller, which is a separate processor associated with the touch screen, but more complex touch gestures require the use of a secondary processor, often the mobile device's CPU, to process large amounts of touch data.
  • large amounts of touch data must be processed to determine the nature of the touch, sometimes only to conclude that a touch was a "false positive,” consuming large amounts of CPU capacity and device power.
  • the processing overhead required for complex touch recognition may require a large percentage of the overall CPU capacity, impairing device performance.
  • touch processing complexity increases in proportion to touch-node capacity, which in turn increases in proportion to display size. Therefore, because there is a trend in many portable computing devices toward increasing display size and touch complexity, touch processing is increasingly reducing device performance and threatening battery life. Further, user interaction with a device through touch events is highly sensitive to latency, and user experience can suffer from low-throughput interfaces between the touchscreen panel and the host processor that result in processing delay and response lag.
  • Another embodiment comprises a method of implementing a multitouch recognition function on a computing device equipped with a touch panel, the method comprising detecting a first touch event at a first location; defining a base area on the touchscreen display based at least in part on the first location; determining a drag area of the touch panel based at least in part on a predetermined geometric boundary in relation to the base area; temporarily limiting subsequent touch processing on the touch panel to the drag area; and detecting a second touch event within the drag area.
  • the first touch is made by a first finger of a hand and the second touch is made by a second finger of the hand
  • determining a drag area further comprises estimating a Euclidean distance and angle between the first finger and the second finger.
  • FIG. 1 illustrates an embodiment of an anchor drag touch system
  • FIG. 2 illustrates an embodiment of a class of anchor-drag touch symbols
  • FIG. 3 illustrates an embodiment of a mobile computing device equipped with touch processing
  • FIG. 4 illustrates one embodiment of an anchor-drag gesture recognition process
  • FIG. 5 illustrates an embodiment of an anchor-drag touch processing technique
  • FIG. 6 illustrates another embodiment of an anchor-drag touch processing technique where processing is limited to a drag area.
  • the gesture recognition techniques described herein define an anchor-drag touch class to enable touch symbol recognition with nominal processing overhead, even in large touch screen panels.
  • a user's first touch on a touch panel for example with a thumb of one hand, may be used to define an area termed the "base area.” This finger may be termed the base finger. From the location of this base area, a potential "drag area" may be estimated in which the user might use a second finger, for example the index finger of that same hand, to make a second touch on the touch panel.
  • This touch may be a drag touch in one of many unique shapes, each of which may be associated with a specific command. Because the drag area occupies only a portion of the larger touch panel, the touch processing overhead required to detect the second touch is minimized.
  • the anchor-drag touch class gestures are easily distinguishable, enabling reliable detection and further reducing processing overhead which is typically required to conduct de-noising and filtering over the touch panel to identify "false positives," or unintentional touches.
  • a further advantage is that, for applications that require that a user specify a display region, such as a photo or video editor, the anchor-drag touch class provides an organic method for the user to select a display region.
  • Implementations disclosed herein provide systems, methods and apparatus for recognizing an anchor-drag touch class of multitouch gestures.
  • the anchor-drag techniques described are implemented to input information onto a touchscreen while reducing power usage and decreasing latency and processing overhead in touchscreen technologies.
  • the touchscreen system detects a first "anchor" position that may be set, in one example, by a user's thumb. Once the anchor position has been set, the system limits the potential area wherein further touch detection will be made to that area that is accessible by another finger of the user's same hand, for instance the user's forefinger.
  • the system enables touchscreen systems using generic touchscreen controllers (or touchscreen processors) to easily process coordinated touches without the use of host processing, even in large touchscreen display panels.
  • touchscreen controllers or touchscreen processors
  • Such gesture recognition techniques may extend the battery life of mobile touchscreen devices as well as enhance user experience by reducing latency.
  • Embodiments may be implemented in hardware, software, firmware, or any combination thereof.
  • information and signals may be represented using any of a variety of different technologies and techniques.
  • data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
  • specific details are given to provide a thorough understanding of the examples. However, it will be understood by one of ordinary skill in the art that the examples may be practiced without these specific details.
  • electrical components/devices may be shown in block diagrams in order not to obscure the examples in unnecessary detail. In other instances, such components, other structures and techniques may be shown in detail to further explain the examples.
  • the examples may be described as a process, which is depicted as a flowchart, a flow diagram, a finite state diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel, or concurrently, and the process can be repeated. In addition, the order of the operations may be rearranged.
  • a process is terminated when its operations are completed.
  • a process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a software function, its termination corresponds to a return of the function to the calling function or the main function.
  • the mobile computing device 100 includes a touch sensitive display 102.
  • a first finger 132 and second finger 134 of a user's hand 130 define a base area 110 and drag area 120, respectively.
  • the first finger 132 and second finger 134 are separated by a distance 136 and form an angle 138.
  • the mobile computing device 100 shown is a tablet computer, it will be understood by those skilled in the art that this is for purposes of illustration only and that the touch sensitive display 102 may be employed in a variety of mobile computing devices such as image capture devices, mobile communications devices such as smart phones, electronic reader devices (e.g., e-readers), game consoles, portable media players, personal digital assistants, portable medical devices, or laptops.
  • display 102 is discussed herein as being incorporated into mobile computing devices, such touch screen technology as well as the described gesture recognition techniques may be employed on stationary computing devices as well, such as desktop computers, large display screens, or workstations.
  • Touch sensitive display 102 comprises a touch screen.
  • a touch screen can detect the presence and location of a touch within a display area as well as display visual information in the display area.
  • Capacitive technology operates by sensing the electric current from a user's finger, which interrupts the electrostatic field of the touch screen, resulting in a detected touch.
  • a touchscreen can include a projected capacitive touch (PCT) sensor arranged over a display.
  • the PCT sensor can include an array of capacitors formed by a number of sensor electrodes in the form of overlapping electrodes, such as row electrodes and column electrodes that are arranged in a grid pattern.
  • Resistive technology detects a touch through pressure sensing, which occurs when a finger or stylus touches the touch screen and two conductive layers come into contact with one another and close an electrical circuit.
  • Certain embodiments of the device may employ a multi-touch analog resistive system (MARS or AMR).
  • MARS multi-touch analog resistive system
  • Optical touch sensing requires no pressure to operate, detecting movement of objects near the touch screen with a plurality of optical sensors mounted on or near the surface of the touch screen.
  • SAW Surface acoustic wave
  • SAW touch screens rely on the absorption of sound waves to detect a touch, so either a finger or gloved finger will work for touch detection.
  • SAW touch screens usually require a special soft-tipped stylus.
  • Display 102 may incorporate any of these technologies as well as other known touch sensitive technologies.
  • touch technology includes a diverse set of different technologies. So long as the underlying touch technology can be used to sense the required touch resolution (pitch) accurately, the proposed Anchor-Drag gestures described herein can be recognized and processed efficiently.
  • a first touch from a user defines the base area 110.
  • the first touch may be performed with a first finger 132 of the user's hand 130, for example by a thumb.
  • the device 100 may use the distance 136 between the first finger 132 and a second finger 134, for example an index finger of the same hand 130, as well as an angle 138 formed between the two fingers 132, 134 to estimate the drag area 120.
  • the distance 136 and the angle 138 may be based on a likely size of the user's hand, for example by using an average distance 136 and angle 138 of a plurality of users' hands.
  • the distance 136 and angle 138 may be based specifically on the size of the user's hand 130, for instance by having the user place thumb and forefinger on the device during a measuring process, or by gathering data about the user's hand size during previous interactions with the touch display 102.
  • angle 138 may be a Euclidean angle.
  • the device 100 may discard any touch data that does not occur within the drag area 120 for a certain period of time.
  • the device 100 may also discard touch data within the drag area 120 which is not a recognized drag symbol, as discussed in more detail below.
  • the device 100 may establish the base area 110 as permanent or semi-permanent so that only subsequent touch data within the drag area 120 will be processed. If no drag touch is recognized within drag area 120 after a predetermined amount of time, certain embodiments may open up touch processing once again to the entire display 102, and may require a new drag touch to set a new base area 110.
  • the anchor-drag gesture is carried out by two fingers 132, 134 of a single hand 130 of a user performing sequential touches.
  • the anchor-drag gesture may be performed in a variety of other ways, for example two sequential touches by one finger or a stylus, two sequential touches by two fingers of two hands, or even by a single touch.
  • the drag area may be calculated using a different method than Euclidean distance and angle. For instance, a drag area may be displayed to the user in a predetermined area of the screen after being initiated by a base touch.
  • the device 100 is able to limit subsequent touch processing to the drag area 120.
  • drag area 120 comprises a boundary which is a subset of the area of the touch display 102
  • the anchor-drag technique targets an area from which to receive touch data which is smaller than the touch panel, reducing touch processing overhead.
  • the combination of a base area 110 and drag area 120 further reduces processing overhead by enabling the touchscreen system to skip constant de-noising and filtering, as the anchor-drag gestures are easily distinguishable from unintentional touches to the touch display 102.
  • the drag area will be set according to Euclidean distances between the touches.
  • an anchor-drag touch class 200 comprises a set of single-hand coordinated touch gestures for use with touchscreen devices. Each gesture comprises an anchor touch and a drag touch, the anchor touch corresponding to a base area 210 on the touch screen, and the drag touch corresponding to a drag area wherein a specific geometric shape 220 can be entered by the user.
  • a user may position a first finger 232 of a hand 230, for example a thumb, to perform the anchor touch within the base area 210.
  • the base area 210 may be a predefined area displayed to the user for the purpose of indicating an area in which the anchor touch will be recognized.
  • the base area 210 may be defined anywhere on the touch screen where the touchscreen device recognizes an anchor touch.
  • the user moves a second finger 234, for example the index finger of the same hand 230, along the surface of the touchscreen to perform the drag touch.
  • the drag touch may be one shape 220 of a set of geometric shapes, and each shape 220 may be recognized by the device as being associated with a unique set of information or with a function for device control.
  • Some embodiments of the anchor-drag touch class 200 may, in addition to recognizing a plurality of shapes 220, recognize a variety of characteristics of how the user creates the shape, and may associate a different function or set of information with the shape depending upon the characteristics. For example, when the user performs the drag touch to generate shape 220, the starting point 240 of the drag touch and direction 250 in which the shape is drawn may determine the function or information associated with the shape. Other characteristics not illustrated, such as pressure, speed, or size of the shape may also be used to determine what function or information is associated with the shape.
  • the base area 210 may be set such that subsequent touch commands can only be assumed applicable to the drag area.
  • the anchor touch may be used to define a new set of more complex gestures, such as by varying the push level of the base finger 232 or using the base finger 232 to perform an additional touch within the base area 210.
  • the additional touch may be a tap or another drag touch indicating a new or additional function for the device to perform.
  • FIG. 3 illustrates a block diagram of a mobile computing device 300 in accordance with one embodiment of the present disclosure which could perform the anchor-drag touch recognition techniques described above with respect to FIGS. 1 and 2.
  • the device 300 comprises a display 310, a touch screen subsystem 320, a gesture database 330 and a host processor 340.
  • the illustrated embodiment is not meant to be limitative and device 300 may include a variety of other components as required for other functions.
  • the display 310 of device 300 may include a touch screen panel 312 and a display component 314.
  • display component 314 may be any flat panel display technology, such as an LED, LCD, plasma, or projection screen.
  • Display component 314 may be coupled to the host processor 340 for receiving information for visual display to a user. Such information includes, but is not limited to, visual representations of files stored in a memory of device 300, software applications installed on device 300, user interfaces, and network-accessible content objects.
  • display component 314 may also be used to display a boundary or other depiction of the base area 110, 210, drag shape 220, or drag area 120 discussed above with respect to FIGS. 1 and 2.
  • Touch screen panel 312 may employ one or a combination of many touch sensing technologies, for instance capacitive, resistive, surface acoustic wave, or optical touch sensing. To accommodate recognition of the anchor-drag touch class described herein, the touch sensing technology may support multitouch gestures. In some embodiments, touch screen panel 312 may overlay or be positioned over display component 314 such that visibility of the display component 314 is not impaired. In other embodiments, the touch screen panel 312 and display component 314 may be integrated into a single panel or surface. The touch screen panel 312 may be configured to cooperate with display component 314 such that a user touch on the touch screen panel 312 is associated with a portion of the content displayed on display component 314 corresponding to the location of the touch on touch screen panel 312. Display component may also be configured to respond to a user touch on the touch screen panel 312 by displaying, for a limited time, a visual representation of the touch, for example a drag shape 220 as described in FIG. 2.
  • Touch screen panel 312 may be coupled to a touch screen subsystem 320, the touch screen subsystem 320 comprising a touch detection module 322 and a processing module 324.
  • the touch screen panel 312 may cooperate with touch screen subsystem 320 to enable device 300 to sense the location, pressure, direction and/or shape of a user touch or touches on display 310.
  • the touch detection module 322 may include instructions that, when executed, scan the area of the touch screen panel 312 for touch events and provide the coordinates of touch events to the processing module 324.
  • the touch detection module 322 may be an analog touch screen front end module comprising a plurality of software drivers.
  • the processing module 324 of the touch screen subsystem 320 may be configured to analyze touch events and to communicate touch data to host processor 340.
  • the processing module 324 may, in some embodiments, include instructions that when executed act as a touch screen controller (TSC).
  • TSC touch screen controller
  • the specific type of TSC employed will depend upon the type of touch technology used in panel 312.
  • the processing module 324 may be configured to start up when the touch detection module 322 indicates that a user has touched touch screen panel 312 and to power down after release of the touch. This feature may be useful for power conservation in battery- powered devices such as mobile computing device 300.
  • Processing module 324 may be configured to perform filtering on touch event data received from the touch detection module. For example, in a display 310 where the touch screen panel 312 is placed on top of a display component 314 comprising an LCD screen, the LCD screen may contribute noise to the coordinate position measurement of the touch event. This noise is a combination of impulse noise and Gaussian noise.
  • the processing module 324 may be configured with median and averaging filters to reduce this noise. Instead of using only a single sample for the coordinate measurement of the touch event, the processing module 324 may be programmed to instruct the touch detection module 322 to provide two, four, eight, or 16 samples. These samples may then be sorted, median filtered, and averaged to give a lower-noise, more accurate result of the touch coordinates (a brief sketch of this filtering approach appears after this list).
  • the processing module 324 is a processor specifically configured for use with the touch screen subsystem 320, while host processor 340 may be configured to handle the general processing requirements of device 300.
  • the processing module 324 and the host processor 340 may be in communication with each other as well as a gesture data store 330. For example, processing module 324 may determine that a sequence of touch events matches a pattern identified in gesture data store 330 as an anchor-drag touch gesture.
  • Processing module 324 may retrieve a function or other information associated with the recognized gesture from gesture data store 330 and send instructions to host processor 340 to carry out the function or display the information on display 310.
  • processing module 324 may limit subsequent touch processing to a drag area, such as the predicted drag area 120 described in FIG. 1. Touch events outside of the predicted drag area 120 may either be discarded or, in some embodiments which limit scanning as well as touch processing to the drag area, not sensed.
  • the anchor-drag touch class described in this disclosure enables the processing module 324 to process touch data with less reliance on the host processor 340 than in typical touch processing architectures by creating an easily detectable set of touch gestures and by allowing the processing module 324 to limit processing to a subset of touch screen panel 312.
  • FIG. 4 illustrates one embodiment of a process 400 that may be used to determine whether a touch event on a touch screen is an anchor-drag touch.
  • the anchor-drag touch may be one illustrated in anchor-drag touch class 200 described above with respect to FIG. 2, and may be executed by the touch screen subsystem 320 of FIG. 3.
  • the process 400 begins at block 405 when a first touch event on a touch screen is identified and recognized as an anchor touch. The touch may be detected as an anchor by its persistence and/or permanence on the touchscreen.
  • the process 400 then moves to block 410, where the location of the anchor touch is established as the base area.
  • the base area may be defined by a single point, for example an x-y coordinate pair located at the approximate center of the anchor touch. In other embodiments, the base area may be defined by a boundary, such as the boundary around the anchor touch.
  • a drag area is calculated based at least in part on the location of the base area.
  • Other factors influencing the calculation of the drag area may be, in certain embodiments, an estimated or actual distance from an end of a user's thumb to an end of the user's index finger of the same hand. This distance may represent the distance from fingertip to fingertip either when the user's hand is fully extended or when the user's fingers are curved to interact with the touch screen. As discussed above, this distance may be based on an average user hand size or may be based upon the actual user's hand size as determined by a measuring process or a learning algorithm which tracks gesture data over time.
  • the drag area calculated by process 400 may be represented by a boundary of varying size, depending upon the size of drag gestures which the process 400 seeks to recognize and the precision with which a user will "draw" the drag gesture.
  • the process 400 transitions to block 420 where an additional touch event is detected. This moves the process 400 to decision block 425, where it is determined whether the additional touch was within the calculated drag area. If the touch was not within the drag area, the process 400 moves to block 430 where the touch data is discarded, and then the process 400 loops back to block 420 to detect an additional touch event. If the additional touch is within the drag area, the process 400 transitions to block 435 to analyze the parameters of the touch. Such parameters may include for example, the pressure, direction, shape, start point, end point, and/or duration of the additional touch event.
  • the process 400 moves to decision block 440 to determine whether the parameters match the parameters of the drag gestures defined in the anchor drag touch class 200. If no match is found, the process 400 moves to block 430 where the touch data is discarded, and then the process 400 loops back to block 420 to detect an additional touch event. If a drag gesture is found which has parameters matching those of the additional touch, the process 400 transitions to block 445 where a function or set of information associated with the drag touch is retrieved. This may be accomplished in certain embodiments by the processing module 324 accessing touch gesture data store 330. In some embodiments, the drag touch must occur while the anchor touch is still in place on the touch screen. In other embodiments, the user may release the anchor touch before performing the drag gesture. In yet other embodiments, the user may simultaneously perform the anchor touch and the associated drag gesture and both touch events may be processed and analyzed together.
  • FIG. 5 illustrates one example of a process 500 that may be used by the touch screen subsystem 320 and host processor 340 of FIG. 3 to process data associated with touch events.
  • a process 500 may be used by the touch screen subsystem 320 and host processor 340 of FIG. 3 to process data associated with touch events.
  • numerous variations and additions to this process are possible, a few of which are discussed below.
  • the process 500 begins at block 505 where, when in an idle mode, a touch screen subsystem repeatedly scans a touch panel for a user touch.
  • a touch panel may be made up of rows and columns, with each row and column being connected to at least one conductive wire coupled to the touch screen subsystem 320.
  • the touch screen subsystem 320 may turn on one row and one column at a time to determine whether a user touch is occurring in the intersection of that row and column. After scanning all row and column combinations, the touch screen subsystem 320 may begin the scanning process over. In certain embodiments this scanning process may be carried out by touch detection module 322.
  • touch screen subsystem 320 determines that a touch event has occurred at a scanned point, the process 500 moves to block 510.
  • touch detection module 322 may be configured to detect at least a first touch event and a second touch event during the touch detection step 510. Detection of a touch at block 510 may activate the processing module 324.
  • the process 500 then moves to block 515, wherein the touch screen subsystem 320 performs filtering to identify whether the touch event was an intentional touch or an accidental touch, also known as a "false positive.” This may be accomplished by processing module 324 in a manner similar to the noise filtering techniques described above with respect to FIG. 3.
  • the process 500 transitions to decision block 520 to determine whether a touch event was detected. If the touch screen subsystem 320 determines at decision block 520 that the filtered data does not represent an intentional touch event, the process cycles back to block 505 to repeat the idle mode scanning process. Certain embodiments may power off the touch processing module 324 during idle mode. In some embodiments adapted to detect multitouch gestures, the scanning process of block 505 may be executed continuously throughout the other steps of the process in order to detect additional touch events. In such embodiments, processing module 324 may remain powered on during the idle process if the module 324 is performing filtering or other touch processing techniques.
  • processing module 324 may configure touch detection module 322 to provide the coordinates of the detected touch so that processing module 324 may measure a plurality of parameters associated with the touch event. These parameters may comprise, for example, the pressure, direction, shape, start point, end point, and/or duration of the touch event.
  • the process then transitions to decision block 530 in which it determines whether an anchor-drag touch is identified by the measurement data.
  • this step may be performed by the touch processor 324 comparing the parameters of the touch event with anchor-drag touch parameters stored in a database such as gesture data store 330 of FIG. 3.
  • Certain embodiments may accomplish the anchor-drag identification step 530 by the process 400 illustrated in FIG. 4.
  • Step 530 may in some embodiments require the process 500 to recognize a first touch event representing an anchor touch, and to loop back to step 505 to detect a second touch event representing a drag touch.
  • an anchor-drag gesture is identified at block 530, the process transitions to block 535 where the touch screen subsystem 320 identifies a function or information associated with the anchor-drag gesture and sends the function or information to the host processor 340 for execution.
  • the function or information associated with the gesture may be stored in gesture data store 330 and accessed by processing module 324. In this way, the process 500 minimizes the use of the host processor 340 through the use of the anchor-drag gesture, restricting device host processing to merely performing the associated function on device 300 or displaying the associated information on display 310.
  • process 500 does not identify an anchor-drag gesture at block 530, the process 500 moves to block 540 where the touch screen subsystem 320 sends the measurement data to host processor 340. The process 500 then transitions to block 545 where host processor 340 performs traditional touch tracking. In response to host processor touch tracking, process 500 will transition to decision block 550 to determine whether any touch gesture was identified. If a touch gesture is not identified at block 550, the process 500 loops back to block 545 for the host processor to continue touch tracking. If after a certain period of time no touch event is identified, the process 500 may optionally loop back to block 505 to begin the touch screen subsystem idle process.
  • host processor 340 may execute a function associated with the touch gesture or display information associated with the touch gesture. The process 500 then loops back to block 505 to begin scanning for new touch events.
  • the process 600 illustrated in FIG. 6 is one embodiment of a touch processing limitation technique which may be carried out by touchscreen subsystem 320 of FIG. 3.
  • Process 600 may also be incorporated, in some embodiments, as a sub process of touch processing process 500, for example after block 530 for identifying an anchor-drag gesture.
  • process 600 may be employed for a period of time as a follow-up process to process 400 for recognizing anchor-drag gestures, in order to limit subsequent touch processing to the drag area.
  • the process begins at block 605 where the touch screen subsystem 320 identifies a drag area from a base area. This may be accomplished in a similar manner to the technique discussed above with respect to block 415 of process 400. With the drag area defined, the process 600 transitions to block 610, where the touch screen subsystem limits subsequent touch processing to the drag area for a time period referred to herein as a "drag gesture session.” This processing limitation allows a device user to perform a plurality of drag gestures within the drag area without performing additional anchor touches for each drag gesture. During a drag gesture session, touch events outside the drag area, as well as touch events within the drag area which are determined not to be valid drag gestures, are discarded.
  • Some embodiments of the process may optionally transition to block 615, in which the touch screen subsystem 320 limits all touch scanning to the touch panel coordinates within the boundary of the drag area. This differs from the processing limitation of step 610 in that touch events outside of the drag area are not just discarded; such events are never registered because the process 600 does not scan for touch events outside of the drag area.
  • the process 600 then transitions to block 620 in which the touch screen subsystem detects a drag gesture.
  • this step may be performed by the touch processor 324 comparing parameters of a touch event within the drag area (such as pressure, direction, shape, start point, end point, and/or duration of the touch event) with drag gesture parameters stored in a database such as gesture data store 330 of FIG. 3.
  • the process 600 transitions to block 625 in which a function or information set associated with the drag gesture is identified and sent to the host processor 340.
  • the process then loops back to block 620 to perform the step of detecting an additional drag gesture, and will continue this loop for the duration of a drag gesture session.
  • the amount of time for which process 600 will loop between blocks 620 and 625 to maintain the drag gesture session may vary in different embodiments. For example, some embodiments may maintain a drag gesture session for the duration of use of a specific software program or application, while other embodiments may continue the drag gesture session until the user provides an indication that the drag gesture session is over. Yet other embodiments may continue the drag gesture session until determining that a predetermined time period has elapsed during which no drag gesture was made.
  • the technology is operational with numerous other general purpose or special purpose computing system environments or configurations.
  • Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with the invention include, but are not limited to, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, processor-based systems, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
  • instructions refer to computer-implemented steps for processing information in the system. Instructions can be implemented in software, firmware or hardware and include any type of programmed step undertaken by components of the system.
  • a processor may be any conventional general purpose single- or multi-chip processor such as a Pentium® processor, a Pentium® Pro processor, an 8051 processor, a MIPS® processor, a Power PC® processor, or an Alpha® processor.
  • the processor may be any conventional special purpose processor such as a touchscreen controller, a digital signal processor, or a graphics processor.
  • the processor typically has conventional address lines, conventional data lines, and one or more conventional control lines.
  • the system is comprised of various modules as discussed in detail.
  • each of the modules comprises various sub-routines, procedures, definitional statements and macros.
  • Each of the modules is typically separately compiled and linked into a single executable program. Therefore, the description of each of the modules is used for convenience to describe the functionality of the preferred system.
  • the processes that are undergone by each of the modules may be arbitrarily redistributed to one of the other modules, combined together in a single module, or made available in, for example, a shareable dynamic link library.
  • the system may be used in connection with various operating systems such as Linux®, UNIX® or Microsoft Windows®.
  • the system may be written in any conventional programming language such as C, C++, BASIC, Pascal, or Java, and run under a conventional operating system.
  • C, C++, BASIC, Pascal, Java, and FORTRAN are industry standard programming languages for which many commercial compilers can be used to create executable code.
  • the system may also be written using interpreted languages such as Perl, Python or Ruby.
  • DSP digital signal processor
  • ASIC application specific integrated circuit
  • FPGA field programmable gate array
  • a general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine.
  • a processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
  • the functions and methods described may be implemented in hardware, software, or firmware executed on a processor, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium.
  • Computer-readable media include both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another.
  • a storage medium may be any available media that can be accessed by a computer.
  • such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
  • any connection is properly termed a computer-readable medium.
  • the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave
  • DSL digital subscriber line
  • wireless technologies such as infrared, radio, and microwave
  • Disk and disc include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
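The multi-sample coordinate filtering described above for processing module 324 (sorting several coordinate samples, median filtering, and averaging) can be illustrated with a short sketch. This is a minimal illustration only: the function name filter_coordinate, the trim width, and the example sample values are assumptions introduced here, not taken from the patent.

```python
from statistics import median

# Sketch of the median-plus-averaging coordinate filter described for
# processing module 324: take several samples of one touch coordinate,
# reject impulse outliers with a median test, then average away Gaussian
# noise. The trim width (keep=4) is an illustrative choice.

def filter_coordinate(samples, keep=4):
    """Return a low-noise estimate of a touch coordinate from repeated samples."""
    if not samples:
        raise ValueError("no samples provided")
    mid = median(samples)
    # Keep the `keep` samples closest to the median (drops impulse spikes),
    # then average the survivors (reduces Gaussian noise).
    trimmed = sorted(samples, key=lambda s: abs(s - mid))[:keep]
    return sum(trimmed) / len(trimmed)

# Example with eight samples per axis; the 180 and 400 readings are impulse
# spikes that the median test removes before averaging.
x = filter_coordinate([102, 101, 180, 103, 102, 104, 101, 103])
y = filter_coordinate([250, 251, 249, 248, 252, 250, 251, 400])
```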

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Described herein are methods and devices that employ a predefined class of anchor-drag touches to minimize host processor use in a mobile computing device. As described, detecting anchor-drag touch gestures enables the touch screen controller to handle a large portion of touch processing, even in mobile devices with larger displays. A first touch establishes an anchor area, from which a drag area is calculated, and a second touch within the drag area provides a command to the device. Some embodiments may limit subsequent touch processing to the identified drag area.

Description

MULTI-TOUCH SYMBOL RECOGNITION
TECHNICAL FIELD
[0001] The present embodiments relate to touch screen devices, and in particular, to methods and apparatus for the detection of anchor-drag multitouch gestures.
BACKGROUND
[0002] Advances in technology have resulted in smaller and more powerful computing devices. For example, there currently exist a variety of portable computing devices, including wireless computing devices such as wireless telephones, personal digital assistants (PDAs), and tablet computers that are small, lightweight, and easily carried by users. In order to simplify user interfaces and to avoid pushbuttons and complex menu systems, such portable computing devices may use touch screen displays that detect user gestures on the touch screen and translate the detected gestures into commands to be performed by the device. Such gestures may be performed using one or more fingers or a stylus type pointing implement. Multi-touch screens (touch screens having multi-touch capability) are designed to recognize and track several simultaneous touches. For example, when a user moves two fingers on a screen, information indicating touch/movement for both fingers is provided by a multi-touch screen.
[0003] One drawback of implementing multi-touch technology on portable computing devices is the processing overhead typically required for recognizing multi-touch. Processing overhead refers to the percentage of the device's total central processing unit (CPU) capacity that is consumed by individual computing tasks, such as touch detection. In total, these tasks must require less than the processor's overall capacity. Simple touch gestures may typically be handled by a touchscreen controller, which is a separate processor associated with the touch screen, but more complex touch gestures require the use of a secondary processor, often the mobile device's CPU, to process large amounts of touch data. Typically, large amounts of touch data must be processed to determine the nature of the touch, sometimes only to conclude that a touch was a "false positive," consuming large amounts of CPU capacity and device power. The processing overhead required for complex touch recognition may require a large percentage of the overall CPU capacity, impairing device performance.
[0004] The current generation of mobile processors is not well adapted to deal with increasing touch complexity and corresponding CPU overhead, especially in conjunction with the many other common high performance uses of mobile devices. Increasing the size of the mobile processor core or cache delivers performance increases only up to a certain level, beyond which heat dissipation issues make any further increase in core and cache size impractical. Overall processing capacity is further limited by the smaller size of many mobile devices, which limits the number of processors that can be included in the device. Additionally, because mobile computing devices are generally battery-powered, high performance uses also shorten battery life.
[0005] Despite mobile processing limitations, many common mobile applications such as maps, games, email clients, web browsers, etc., are making increasingly complex use of touch recognition. Further, touch processing complexity increases in proportion to touch-node capacity, which in turn increases in proportion to display size. Therefore, because there is a trend in many portable computing devices toward increasing display size and touch complexity, touch processing is increasingly reducing device performance and threatening battery life. Moreover, user interaction with a device through touch events is highly sensitive to latency, and user experience can suffer from low-throughput interfaces between the touchscreen panel and the host processor that result in processing delay and response lag.
SUMMARY
[0006] According to an embodiment, a touch processing system configured to recognize multitouch gestures comprises a touch panel; a touch detection module configured to capture a first touch event and a second touch event on the touch panel; and a processing module configured to determine if the second touch event is within a predefined boundary area from the first touch event, and discard the second touch event if it is outside of the predefined boundary, the processing module further configured to track a position of a touch event within the predefined boundary and activate a predetermined object drag process based on the position of the touch event.
[0007] Another embodiment comprises a method of implementing a multitouch recognition function on a computing device equipped with a touch panel, the method comprising detecting a first touch event at a first location; defining a base area on the touchscreen display based at least in part on the first location; determining a drag area of the touch panel based at least in part on a predetermined geometric boundary in relation to the base area; temporarily limiting subsequent touch processing on the touch panel to the drag area; and detecting a second touch event within the drag area. In some embodiments, the first touch is made by a first finger of a hand and the second touch is made by a second finger of the hand, and determining a drag area further comprises estimating a Euclidean distance and angle between the first finger and the second finger.
BRIEF DESCRIPTION OF THE DRAWINGS
[0008] The disclosed aspects will hereinafter be described in conjunction with the drawings, provided to illustrate and not to limit the disclosed aspects, wherein like designations denote like elements.
[0009] FIG. 1 illustrates an embodiment of an anchor drag touch system;
[0010] FIG. 2 illustrates an embodiment of a class of anchor-drag touch symbols;
[0011] FIG. 3 illustrates an embodiment of a mobile computing device equipped with touch processing;
[0012] FIG. 4 illustrates one embodiment of an anchor-drag gesture recognition process;
[0013] FIG. 5 illustrates an embodiment of an anchor-drag touch processing technique; and
[0014] FIG. 6 illustrates another embodiment of an anchor-drag touch processing technique where processing is limited to a drag area.
DETAILED DESCRIPTION
[0015] The gesture recognition techniques described herein define an anchor-drag touch class to enable touch symbol recognition with nominal processing overhead, even in large touch screen panels. A user's first touch on a touch panel, for example with a thumb of one hand, may be used to define an area termed the "base area." This finger may be termed the base finger. From the location of this base area, a potential "drag area" may be estimated in which the user might use a second finger, for example the index finger of that same hand, to make a second touch on the touch panel. This touch may be a drag touch in one of many unique shapes, each of which may be associated with a specific command. Because the drag area occupies only a portion of the larger touch panel, the touch processing overhead required to detect the second touch is minimized.
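As a rough illustration of the flow in the preceding paragraph, the sketch below records an anchor touch as the base area, estimates a rectangular drag area from an assumed finger reach and angle, and then gates further touch events on that area. The class and parameter names, the rectangular boundary, the numeric defaults, and the session timeout are illustrative assumptions rather than the patent's implementation.

```python
import math
from dataclasses import dataclass

@dataclass
class TouchEvent:
    x: float          # panel coordinates in pixels
    y: float
    timestamp: float  # seconds

class AnchorDragRecognizer:
    """Toy recognizer: anchor touch -> base area -> drag area -> gated drag touch."""

    def __init__(self, reach_px=400.0, angle_deg=45.0, session_s=5.0):
        self.reach_px = reach_px              # assumed thumb-to-index reach estimate
        self.angle = math.radians(angle_deg)  # assumed thumb/index angle estimate
        self.session_s = session_s            # how long the drag area stays active
        self.base = None                      # (x, y) of the anchor touch
        self.drag_area = None                 # (xmin, ymin, xmax, ymax)
        self.base_time = 0.0

    def _define_drag_area(self):
        # Project from the base area along the estimated finger angle and
        # enclose the reachable region in a simple rectangle.
        bx, by = self.base
        cx = bx + self.reach_px * math.cos(self.angle)
        cy = by - self.reach_px * math.sin(self.angle)
        half = self.reach_px / 2
        self.drag_area = (cx - half, cy - half, cx + half, cy + half)

    def on_touch(self, ev: TouchEvent) -> str:
        if self.base is None:
            # First touch: establish the base area and estimate the drag area.
            self.base, self.base_time = (ev.x, ev.y), ev.timestamp
            self._define_drag_area()
            return "anchor-set"
        if ev.timestamp - self.base_time > self.session_s:
            # Session expired: reopen the full panel and require a new anchor.
            self.base, self.drag_area = None, None
            return self.on_touch(ev)
        xmin, ymin, xmax, ymax = self.drag_area
        if xmin <= ev.x <= xmax and ymin <= ev.y <= ymax:
            return "drag-candidate"   # hand off for drag-symbol matching
        return "discarded"            # outside the drag area: ignored
```

A second touch reported here as "drag-candidate" would then be compared against the drag-touch shapes described below before any command is issued.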
[0016] The anchor-drag touch class gestures are easily distinguishable, enabling reliable detection and further reducing processing overhead which is typically required to conduct de-noising and filtering over the touch panel to identify "false positives," or unintentional touches. A further advantage is that, for applications that require that a user specify a display region, such as a photo or video editor, the anchor-drag touch class provides an organic method for the user to select a display region.
[0017] Implementations disclosed herein provide systems, methods and apparatus for recognizing an anchor-drag touch class of multitouch gestures. The anchor-drag techniques described are implemented to input information onto a touchscreen while reducing power usage and decreasing latency and processing overhead in touchscreen technologies. As described in more detail below, the touchscreen system detects a first "anchor" position that may be set, in one example, by a user's thumb. Once the anchor position has been set, the system limits the potential area wherein further touch detection will be made to that area that is accessible by another finger of the user's same hand, for instance the user's forefinger. By using single-hand touch coordination and recognition of symbols created by the user's forefinger, the system enables touchscreen systems using generic touchscreen controllers (or touchscreen processors) to easily process coordinated touches without the use of host processing, even in large touchscreen display panels. By reducing the need for a device's host processor, such gesture recognition techniques may extend the battery life of mobile touchscreen devices as well as enhance user experience by reducing latency.
[0018] Embodiments may be implemented in hardware, software, firmware, or any combination thereof. Those of skill in the art will understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. [0019] In the following description, specific details are given to provide a thorough understanding of the examples. However, it will be understood by one of ordinary skill in the art that the examples may be practiced without these specific details. For example, electrical components/devices may be shown in block diagrams in order not to obscure the examples in unnecessary detail. In other instances, such components, other structures and techniques may be shown in detail to further explain the examples.
[0020] It is also noted that the examples may be described as a process, which is depicted as a flowchart, a flow diagram, a finite state diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel, or concurrently, and the process can be repeated. In addition, the order of the operations may be rearranged. A process is terminated when its operations are completed. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a software function, its termination corresponds to a return of the function to the calling function or the main function.
I. Device Overview
[0021] Referring now to FIG. 1, an exemplary touch sensitive mobile computing device configured to recognize anchor-drag gestures will now be described in greater detail. As shown in FIG. 1, the mobile computing device 100 includes a touch sensitive display 102. Within the touch screen display, a first finger 132 and second finger 134 of a user's hand 130 define a base area 110 and drag area 120, respectively. The first finger 132 and second finger 134 are separated by a distance 136 and form an angle 138.
[0022] Although the mobile computing device 100 shown is a tablet computer, it will be understood by those skilled in the art that this is for purposes of illustration only and that the touch sensitive display 102 may be employed in a variety of mobile computing devices such as image capture devices, mobile communications devices such as smart phones, electronic reader devices (e.g., e-readers), game consoles, portable media players, personal digital assistants, portable medical devices, or laptops. Further, although display 102 is discussed herein as being incorporated into mobile computing devices, such touch screen technology as well as the described gesture recognition techniques may be employed on stationary computing devices as well, such as desktop computers, large display screens, or workstations.
[0023] Touch sensitive display 102 comprises a touch screen. A touch screen can detect the presence and location of a touch within a display area as well as display visual information in the display area. There are several touch screen technologies currently available which support multi-touch input, including capacitive, resistive, and optical touch sensing using cameras. Capacitive technology operates by sensing the electric current from a user's finger, which interrupts the electrostatic field of the touch screen, resulting in a detected touch. In some implementations, a touchscreen can include a projected capacitive touch (PCT) sensor arranged over a display. The PCT sensor can include an array of capacitors formed by a number of sensor electrodes in the form of overlapping electrodes, such as row electrodes and column electrodes that are arranged in a grid pattern. Resistive technology detects a touch through pressure sensing, which occurs when a finger or stylus touches the touch screen and two conductive layers come into contact with one another and close an electrical circuit.
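For context on how a projected capacitive grid of row and column electrodes like the one just described can report a touch location, the sketch below checks each intersection for a capacitance change. The driver call read_mutual_capacitance and the threshold value are hypothetical stand-ins for panel-specific hardware access, not an API defined by this disclosure.

```python
TOUCH_THRESHOLD = 0.15  # assumed normalized capacitance change indicating a touch

def read_mutual_capacitance(row: int, col: int) -> float:
    """Hypothetical driver call returning the capacitance change at one
    row/column electrode intersection; a real panel supplies this."""
    raise NotImplementedError

def scan_panel(num_rows: int, num_cols: int):
    """Return the (row, col) node with the strongest touch, or None."""
    best, best_delta = None, TOUCH_THRESHOLD
    for r in range(num_rows):
        for c in range(num_cols):
            delta = read_mutual_capacitance(r, c)
            if delta > best_delta:
                best, best_delta = (r, c), delta
    return best
```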
[0024] Certain embodiments of the device may employ a multi-touch analog resistive system (MARS or AMR). Optical touch sensing requires no pressure to operate, detecting movement of objects near the touch screen with a plurality of optical sensors mounted on or near the surface of the touch screen. Surface acoustic wave (SAW) touch screens rely on the absorption of sound waves to detect a touch, so either a bare or gloved finger will work for touch detection. However, a touch by a small, hard stylus will not be detected, so SAW touch screens usually require a special soft-tipped stylus. Display 102 may incorporate any of these technologies as well as other known touch sensitive technologies.
[0025] As depicted in Table 1 below, touch technology includes a diverse set of different technologies. So long as the underlying touch technology can accurately sense touches at the required resolution (pitch), the proposed anchor-drag gestures described herein can be recognized and processed efficiently.
Table 1. Touch Technologies
[Table 1 is reproduced as an image in the original publication and is not included here.]
[0026] Within the area of display 102, a first touch from a user defines the base area 110. The first touch may be performed with a first finger 132 of the user's hand 130, for example a thumb. The device 100 may use the distance 136 between the first finger 132 and a second finger 134, for example an index finger of the same hand 130, as well as an angle 138 formed between the two fingers 132, 134 to estimate the drag area 120. In some embodiments, the distance 136 and the angle 138 may be based on a likely size of the user's hand, for example by using an average distance 136 and angle 138 of a plurality of users' hands. In other embodiments, the distance 136 and angle 138 may be based specifically on the size of the user's hand 130, for instance by having the user place thumb and forefinger on the device during a measuring process, or by gathering data about the user's hand size during previous interactions with the touch display 102. In certain embodiments, angle 138 may be a Euclidean angle.
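By way of a non-limiting illustration, one possible way to compute such an estimate is sketched below; the circular drag-area boundary and the default values standing in for distance 136 and angle 138 are assumptions of the sketch rather than values taken from this disclosure.

    import math

    def estimate_drag_area(anchor_x, anchor_y, distance=300.0, angle_deg=45.0, radius=150.0):
        # distance  : assumed thumb-to-index fingertip separation (distance 136), in panel units
        # angle_deg : assumed angle between the two fingers (angle 138), from the horizontal axis
        # radius    : assumed half-width of the circular drag area boundary
        angle = math.radians(angle_deg)
        center_x = anchor_x + distance * math.cos(angle)
        center_y = anchor_y - distance * math.sin(angle)  # panel y coordinates typically grow downward
        return (center_x, center_y, radius)

    # Example: an anchor (thumb) touch near the lower-left corner of the panel
    drag_area = estimate_drag_area(120.0, 900.0)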
[0027] Once the drag area 120 is established, the device 100 may, for a certain period of time, discard any touch data that does not occur within the drag area 120. The device 100 may also discard touch data within the drag area 120 which is not a recognized drag symbol, as discussed in more detail below. In some embodiments, once a drag touch is recognized, the device 100 may establish the base area 110 as permanent or semi-permanent so that only subsequent touch data within the drag area 120 will be processed. If no drag touch is recognized within drag area 120 after a predetermined amount of time, certain embodiments may open up touch processing once again to the entire display 102, and may require a new anchor touch to set a new base area 110.
[0028] As illustrated, the anchor-drag gesture is carried out by two fingers 132, 134 of a single hand 130 of a user performing sequential touches. However, in other embodiments, the anchor-drag gesture may be performed in a variety of other ways, for example two sequential touches by one finger or a stylus, two sequential touches by two fingers of two hands, or even by a single touch. In such embodiments, the drag area may be calculated using a different method than Euclidean distance and angle. For instance, a drag area may be displayed to the user in a predetermined area of the screen after being initiated by a base touch.
[0029] By defining base area 110, the device 100 is able to limit subsequent touch processing to the drag area 120. Because the boundary of drag area 120 encloses only a subset of the area of the touch display 102, the anchor-drag technique targets a region for receiving touch data that is smaller than the full touch panel, reducing touch processing overhead. The combination of a base area 110 and drag area 120 further reduces processing overhead by enabling the touchscreen system to skip constant de-noising and filtering, as anchor-drag gestures are easily distinguishable from unintentional touches to the touch display 102. In some embodiments, the drag area will be set according to Euclidean distances between the touches.
II. Anchor-Drag Touch Class
[0030] As illustrated in FIG. 2, an anchor-drag touch class 200 comprises a set of single-hand coordinated touch gestures for use with touchscreen devices. Each gesture comprises an anchor touch and a drag touch, the anchor touch corresponding to a base area 210 on the touch screen, and the drag touch corresponding to a drag area wherein a specific geometric shape 220 can be entered by the user.
[0031] A user may position a first finger 232 of a hand 230, for example a thumb, to perform the anchor touch within the base area 210. In some embodiments, the base area 210 may be a predefined area displayed to the user for the purpose of indicating an area in which the anchor touch will be recognized. In other embodiments, the base area 210 may be defined anywhere on the touch screen where the touchscreen device recognizes an anchor touch. While maintaining the anchor touch, the user moves a second finger 234, for example the index finger of the same hand 230, along the surface of the touchscreen to perform the drag touch. The drag touch may be one shape 220 of a set of geometric shapes, and each shape 220 may be recognized by the device as being associated with a unique set of information or with a function for device control. Although the anchor-drag gestures are illustrated as being accomplished by a single hand, it is possible that the anchor touch and drag touch could be performed with the use of both hands.
[0032] Some embodiments of the anchor-drag touch class 200 may, in addition to recognizing a plurality of shapes 220, recognize a variety of characteristics of how the user creates the shape, and may associate a different function or set of information with the shape depending upon the characteristics. For example, when the user performs the drag touch to generate shape 220, the starting point 240 of the drag touch and direction 250 in which the shape is drawn may determine the function or information associated with the shape. Other characteristics not illustrated, such as pressure, speed, or size of the shape may also be used to determine what function or information is associated with the shape.
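As a hypothetical sketch only (the shapes, directions, and action names below are illustrative assumptions, not part of this disclosure), such an association could be held in a lookup keyed on the shape and on how it was drawn:

    # Hypothetical mapping: the same shape yields different actions depending on the
    # direction in which it is drawn; start point, pressure, speed, or size could be
    # added to the key in the same way.
    GESTURE_ACTIONS = {
        ("circle", "clockwise"): "launch_camera",
        ("circle", "counterclockwise"): "close_camera",
        ("triangle", "clockwise"): "volume_up",
        ("triangle", "counterclockwise"): "volume_down",
    }

    def action_for_drag(shape, direction):
        # Returns None when the drawn shape/direction pair is not a recognized gesture.
        return GESTURE_ACTIONS.get((shape, direction))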
[0033] In some of the present embodiments, once an anchor-drag touch is recognized, the base area 210 may be set such that subsequent touch commands are assumed to apply only to the drag area. In other embodiments, after an anchor-drag touch has been recognized, the anchor touch may be used to define a new set of more complex gestures, such as by varying the push level of the base finger 232 or using the base finger 232 to perform an additional touch within the base area 210. The additional touch may be a tap or another drag touch indicating a new or additional function for the device to perform.
III. System Components
[0034] FIG. 3 illustrates a block diagram of a mobile computing device 300 in accordance with one embodiment of the present disclosure which could perform the anchor-drag touch recognition techniques described above with respect to FIGS. 1 and 2. The device 300 comprises a display 310, a touch screen subsystem 320, a gesture data store 330 and a host processor 340. The illustrated embodiment is not meant to be limiting, and device 300 may include a variety of other components as required for other functions.
[0035] The display 310 of device 300 may include a touch screen panel 312 and a display component 314. In certain embodiments, display component 314 may employ any flat panel display technology, such as an LED, LCD, plasma, or projection screen. Display component 314 may be coupled to the host processor 340 for receiving information for visual display to a user. Such information includes, but is not limited to, visual representations of files stored in a memory of device 300, software applications installed on device 300, user interfaces, and network-accessible content objects. In some embodiments, display component 314 may also be used to display a boundary or other depiction of the base area 110, 210, drag shape 220, or drag area 120 discussed above with respect to FIGS. 1 and 2.
[0036] Touch screen panel 312 may employ one or a combination of many touch sensing technologies, for instance capacitive, resistive, surface acoustic wave, or optical touch sensing. To accommodate recognition of the anchor-drag touch class described herein, the touch sensing technology may support multitouch gestures. In some embodiments, touch screen panel 312 may overlay or be positioned over display component 314 such that visibility of the display component 314 is not impaired. In other embodiments, the touch screen panel 312 and display component 314 may be integrated into a single panel or surface. The touch screen panel 312 may be configured to cooperate with display component 314 such that a user touch on the touch screen panel 312 is associated with a portion of the content displayed on display component 314 corresponding to the location of the touch on touch screen panel 312. Display component 314 may also be configured to respond to a user touch on the touch screen panel 312 by displaying, for a limited time, a visual representation of the touch, for example a drag shape 220 as described in FIG. 2.
[0037] Touch screen panel 312 may be coupled to a touch screen subsystem 320, the touch screen subsystem 320 comprising a touch detection module 322 and a processing module 324. The touch screen panel 312 may cooperate with touch screen subsystem 320 to enable device 300 to sense the location, pressure, direction and/or shape of a user touch or touches on display 310. The touch detection module 322 may include instructions that, when executed, scan the area of the touch screen panel 312 for touch events and provide the coordinates of touch events to the processing module 324. In some embodiments, the touch detection module 322 may be an analog touch screen front end module comprising a plurality of software drivers.
[0038] The processing module 324 of the touch screen subsystem 320 may be configured to analyze touch events and to communicate touch data to host processor 340. The processing module 324 may, in some embodiments, include instructions that when executed act as a touch screen controller (TSC). The specific type of TSC employed will depend upon the type of touch technology used in panel 312. The processing module 324 may be configured to start up when the touch detection module 322 indicates that a user has touched touch screen panel 312 and to power down after release of the touch. This feature may be useful for power conservation in battery-powered devices such as mobile computing device 300.
[0039] Processing module 324 may be configured to perform filtering on touch event data received from the touch detection module 322. For example, in a display 310 where the touch screen panel 312 is placed on top of a display component 314 comprising an LCD screen, the LCD screen may contribute noise to the coordinate position measurement of the touch event. This noise is a combination of impulse noise and Gaussian noise. The processing module 324 may be configured with median and averaging filters to reduce this noise. Instead of using only a single sample for the coordinate measurement of the touch event, the processing module 324 may be programmed to instruct the touch detection module 322 to provide two, four, eight, or 16 samples. These samples may then be sorted, median filtered, and averaged to give a lower noise, more accurate result of the touch coordinates.
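A minimal sketch of this sort of median-then-average filtering is given below, assuming the touch detection module supplies a small batch of (x, y) samples for a single touch; the trimming fraction is an illustrative choice rather than a value from this disclosure.

    def filter_touch_samples(samples):
        # samples: list of (x, y) readings for a single touch, e.g. 2, 4, 8 or 16 of them.
        # Sorting and trimming the extremes suppresses impulse outliers (median filtering);
        # averaging the remaining readings reduces Gaussian noise.
        n = len(samples)
        xs = sorted(p[0] for p in samples)
        ys = sorted(p[1] for p in samples)
        if n >= 4:
            trim = n // 4  # drop the lowest and highest quarter of readings
            xs, ys = xs[trim:n - trim], ys[trim:n - trim]
        return (sum(xs) / len(xs), sum(ys) / len(ys))

    # Example with eight noisy samples around (210, 480), including one impulse outlier
    print(filter_touch_samples([(209, 481), (211, 479), (210, 480), (340, 122),
                                (208, 482), (212, 478), (210, 481), (209, 479)]))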
[0040] The processing module 324 is a processor specifically configured for use with the touch screen subsystem 320, while host processor 340 may be configured to handle the general processing requirements of device 300. The processing module 324 and the host processor 340 may be in communication with each other as well as a gesture data store 330. For example, processing module 324 may determine that a sequence of touch events matches a pattern identified in gesture data store 330 as an anchor-drag touch gesture. Processing module 324 may retrieve a function or other information associated with the recognized gesture from gesture data store 330 and send instructions to host processor 340 to carry out the function or display the information on display 310.
[0041] When the touchscreen subsystem 320 detects a touch or sequence of touches that is recognized as an anchor-drag gesture, processing module 324 may limit subsequent touch processing to a drag area, such as the predicted drag area 120 described in FIG. 1. Touch events outside of the predicted drag area 120 may either be discarded or, in some embodiments which limit scanning as well as touch processing to the drag area, not sensed. The anchor-drag touch class described in this disclosure enables the processing module 324 to process touch data with less reliance on the host processor 340 than in typical touch processing architectures by creating an easily detectable set of touch gestures and by allowing the processing module 324 to limit processing to a subset of touch screen panel 312.
IV. Anchor-Drag Touch Recognition (FIG. 4)
[0042] FIG. 4 illustrates one embodiment of a process 400 that may be used to determine whether a touch event on a touch screen is an anchor-drag touch. The anchor-drag touch may be one illustrated in anchor-drag touch class 200 described above with respect to FIG. 2, and the process may be executed by the touch screen subsystem 320 of FIG. 3.
[0043] The process 400 begins at block 405 when a first touch event on a touch screen is identified and recognized as an anchor touch. The touch may be detected as an anchor by its persistence and/or permanence on the touchscreen. The process 400 then moves to block 410, where the location of the anchor touch is established as the base area. In some embodiments, the base area may be defined by a single point, for example an x-y coordinate pair located at the approximate center of the anchor touch. In other embodiments, the base area may be defined by a boundary, such as a boundary around the anchor touch.
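A brief sketch of how such persistence might be tested follows; the hold time and movement tolerance are assumed values chosen for illustration and are not specified by this disclosure.

    ANCHOR_HOLD_MS = 250        # assumed minimum hold time before a touch is treated as an anchor
    ANCHOR_MOVE_TOLERANCE = 10  # assumed maximum drift, in panel units, for a "stationary" touch

    def is_anchor_touch(touch_down_ms, now_ms, drift_distance):
        # A touch is treated as an anchor once it has persisted long enough without
        # moving appreciably from where it first landed (block 405).
        held_long_enough = (now_ms - touch_down_ms) >= ANCHOR_HOLD_MS
        essentially_stationary = drift_distance <= ANCHOR_MOVE_TOLERANCE
        return held_long_enough and essentially_stationary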
[0044] After establishing a base area, the process 400 transitions to block 415 where a drag area is calculated based at least in part on the location of the base area. Other factors influencing the calculation of the drag area may include, in certain embodiments, an estimated or actual distance from an end of a user's thumb to an end of the user's index finger of the same hand. This distance may represent the distance from fingertip to fingertip either when the user's hand is fully extended or when the user's fingers are curved to interact with the touch screen. As discussed above, this distance may be based on an average user hand size or may be based upon the actual user's hand size as determined by a measuring process or a learning algorithm which tracks gesture data over time. Another factor may be a Euclidean angle formed between the user's thumb and index finger. The drag area calculated by process 400 may be represented by a boundary of varying size, depending upon the size of drag gestures which the process 400 seeks to recognize and the precision with which a user will "draw" the drag gesture.
[0045] The process 400 transitions to block 420 where an additional touch event is detected. This moves the process 400 to decision block 425, where it is determined whether the additional touch was within the calculated drag area. If the touch was not within the drag area, the process 400 moves to block 430 where the touch data is discarded, and then the process 400 loops back to block 420 to detect an additional touch event. If the additional touch is within the drag area, the process 400 transitions to block 435 to analyze the parameters of the touch. Such parameters may include, for example, the pressure, direction, shape, start point, end point, and/or duration of the additional touch event.
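Blocks 420 through 435 could be sketched as follows, assuming a circular drag area such as the one produced by the estimate_drag_area sketch above; the event fields are illustrative assumptions.

    def point_in_drag_area(x, y, drag_area):
        # drag_area is (center_x, center_y, radius); True when (x, y) lies inside the boundary.
        cx, cy, radius = drag_area
        return (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2

    def screen_touch_event(event, drag_area):
        # Blocks 425/430: touches outside the drag area are discarded (None is returned).
        if not point_in_drag_area(event["x"], event["y"], drag_area):
            return None
        # Block 435: touches inside the drag area are passed on for parameter analysis.
        return {"shape": event.get("shape"),
                "pressure": event.get("pressure"),
                "duration_ms": event.get("duration_ms")}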
[0046] After determining the parameters of the additional touch, the process 400 moves to decision block 440 to determine whether the parameters match the parameters of the drag gestures defined in the anchor-drag touch class 200. If no match is found, the process 400 moves to block 430 where the touch data is discarded, and then the process 400 loops back to block 420 to detect an additional touch event. If a drag gesture is found which has parameters matching those of the additional touch, the process 400 transitions to block 445 where a function or set of information associated with the drag touch is retrieved. This may be accomplished in certain embodiments by the processing module 324 accessing touch gesture data store 330. In some embodiments, the drag touch must occur while the anchor touch is still in place on the touch screen. In other embodiments, the user may release the anchor touch before performing the drag gesture. In yet other embodiments, the user may simultaneously perform the anchor touch and the associated drag gesture and both touch events may be processed and analyzed together.
V. Anchor-Drag Touch Processing (FIG. 5)
[0047] FIG. 5 illustrates one example of a process 500 that may be used by the touch screen subsystem 320 and host processor 340 of FIG. 3 to process data associated with touch events. As will be apparent, numerous variations and additions to this process are possible, a few of which are discussed below.
[0048] The process 500 begins at block 505 where, when in an idle mode, a touch screen subsystem repeatedly scans a touch panel for a user touch. This may be implemented by the touch screen subsystem 320 and touch sensing panel 312 of FIG. 3. In some embodiments, the touch panel may be made up of rows and columns, with each row and column being connected to at least one conductive wire coupled to the touch screen subsystem 320. To perform the step of block 505, the touch screen subsystem 320 may turn on one row and one column at a time to determine whether a user touch is occurring in the intersection of that row and column. After scanning all row and column combinations, the touch screen subsystem 320 may begin the scanning process over. In certain embodiments this scanning process may be carried out by touch detection module 322.
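One idle-mode scan pass might resemble the sketch below, in which read_intersection stands in for a hypothetical low-level driver call reporting whether a touch is sensed where a given row and column cross; it is not an actual driver API.

    def scan_touch_panel(read_intersection, num_rows, num_cols):
        # Drive one row and one column at a time (block 505) and record every
        # intersection at which a touch is sensed during this pass.
        touched = []
        for row in range(num_rows):
            for col in range(num_cols):
                if read_intersection(row, col):
                    touched.append((row, col))
        return touched

    # Example with a stand-in driver that reports a single touch at row 3, column 7
    print(scan_touch_panel(lambda r, c: (r, c) == (3, 7), num_rows=16, num_cols=12))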
[0049] When touch screen subsystem 320 determines that a touch event has occurred at a scanned point, the process 500 moves to block 510. In multitouch applications such as the anchor-drag gesture class described herein, touch detection module 322 may be configured to detect at least a first touch event and a second touch event during the touch detection step 510. Detection of a touch at block 510 may activate the processing module 324. The process 500 then moves to block 515, wherein the touch screen subsystem 320 performs filtering to identify whether the touch event was an intentional touch or an accidental touch, also known as a "false positive." This may be accomplished by processing module 324 in a manner similar to the noise filtering techniques described above with respect to FIG. 3.
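A simple sketch of such intentional-versus-accidental screening is shown below; the duration and contact-size thresholds are assumptions made for the sketch and are not specified by this disclosure.

    MIN_TOUCH_MS = 20         # assumed: anything briefer is treated as electrical noise
    MAX_CONTACT_AREA = 400.0  # assumed: very large contacts (e.g. a resting palm) are rejected

    def is_intentional_touch(duration_ms, contact_area):
        # Blocks 515/520: keep the event only if it resembles a deliberate finger touch.
        return duration_ms >= MIN_TOUCH_MS and contact_area <= MAX_CONTACT_AREA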
[0050] After filtering is completed at block 515, the process 500 transitions to decision block 520 to determine whether a touch event was detected. If the touch screen subsystem 320 determines at decision block 520 that the filtered data does not represent an intentional touch event, the process cycles back to block 505 to repeat the idle mode scanning process. Certain embodiments may power off the touch processing module 324 during idle mode. In some embodiments adapted to detect multitouch gestures, the scanning process of block 505 may be executed continuously throughout the other steps of the process in order to detect additional touch events. In such embodiments, processing module 324 may remain powered on during the idle process if the module 324 is performing filtering or other touch processing techniques.
[0051] If the touch screen subsystem 320 determines at decision block 520 that the filtered data represents an intentional touch event, the process 500 transitions to block 525 to calculate measurement data representing parameters of the touch event. In some embodiments, to calculate the measurement data, processing module 324 may configure touch detection module 322 to provide the coordinates of the detected touch so that processing module 324 may measure a plurality of parameters associated with the touch event. These parameters may comprise, for example, the pressure, direction, shape, start point, end point, and/or duration of the touch event.
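The measurement step of block 525 could be sketched as below, deriving a few of the named parameters from time-stamped coordinate samples; the sample format and the returned fields are illustrative assumptions.

    import math

    def measure_touch(samples):
        # samples: list of (t_ms, x, y) tuples for one touch event, in time order.
        t0, x0, y0 = samples[0]
        t1, x1, y1 = samples[-1]
        return {
            "start": (x0, y0),
            "end": (x1, y1),
            "duration_ms": t1 - t0,
            # Overall direction of travel in degrees; screen y grows downward,
            # so the y difference is negated to obtain a conventional angle.
            "direction_deg": math.degrees(math.atan2(-(y1 - y0), x1 - x0)),
        }

    # Example: a short left-to-right stroke lasting 180 ms
    print(measure_touch([(0, 100, 500), (90, 160, 498), (180, 220, 495)]))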
[0052] After calculating the measurement data, the process then transitions to decision block 530 in which it determines whether an anchor-drag touch is identified by the measurement data. In some embodiments, this step may be performed by the touch processor 324 comparing the parameters of the touch event with anchor-drag touch parameters stored in a database such as gesture data store 330 of FIG. 3. Certain embodiments may accomplish the anchor-drag identification step 530 by the process 400 illustrated in FIG. 4. Step 530 may in some embodiments require the process 500 to recognize a first touch event representing an anchor touch, and to loop back to step 505 to detect a second touch event representing a drag touch.
[0053] If an anchor-drag gesture is identified at block 530, the process transitions to block 535 where the touch screen subsystem 320 identifies a function or information associated with the anchor-drag gesture and sends the function or information to the host processor 340 for execution. The function or information associated with the gesture may be stored in gesture data store 330 and accessed by processing module 324. In this way, the process 500 minimizes the use of the host processor 340 through the use of the anchor-drag gesture, restricting device host processing to merely performing the associated function on device 300 or displaying the associated information on display 310.
[0054] If the process 500 does not identify an anchor-drag gesture at block 530, the process 500 moves to block 540 where the touch screen subsystem 320 sends the measurement data to host processor 340. The process 500 then transitions to block 545 where host processor 340 performs traditional touch tracking. In response to host processor touch tracking, process 500 will transition to decision block 550 to determine whether any touch gesture was identified. If a touch gesture is not identified at block 550, the process 500 loops back to block 545 for the host processor to continue touch tracking. If after a certain period of time no touch event is identified, the process 500 may optionally loop back to block 505 to begin the touch screen subsystem idle process. If the process at block 550 determines a touch gesture other than an anchor-drag touch was identified, host processor 340 may execute a function associated with the touch gesture or display information associated with the touch gesture. The process 500 then loops back to block 505 to begin scanning for new touch events.
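The branch taken at blocks 530 through 545 could be sketched as follows; gesture_store and host are hypothetical stand-ins for the gesture data store 330 and host processor 340, and their lookup, execute, and track_touch methods are assumed for the sketch rather than drawn from an actual interface.

    def dispatch_touch(measurement, gesture_store, host):
        # Block 530: check whether the measured parameters match an anchor-drag gesture.
        action = gesture_store.lookup(measurement)
        if action is not None:
            # Block 535: the host processor only has to carry out the associated function.
            host.execute(action)
        else:
            # Blocks 540-545: fall back to traditional touch tracking on the host processor.
            host.track_touch(measurement)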
VI. Touch Processing Limitation (FIG. 6)
[0055] The process 600 illustrated in FIG. 6 is one embodiment of a touch processing limitation technique which may be carried out by touchscreen subsystem 320 of FIG. 3. Process 600 may also be incorporated, in some embodiments, as a sub-process of touch processing process 500, for example after block 530 for identifying an anchor-drag gesture. In other embodiments, process 600 may be employed for a period of time as a follow-up process to process 400 for recognizing anchor-drag gestures, in order to limit subsequent touch processing to the drag area.
[0056] The process begins at block 605 where the touch screen subsystem 320 identifies a drag area from a base area. This may be accomplished in a similar manner to the technique discussed above with respect to block 415 of process 400. With the drag area defined, the process 600 transitions to block 610, where the touch screen subsystem limits subsequent touch processing to the drag area for a time period referred to herein as a "drag gesture session." This processing limitation allows a device user to perform a plurality of drag gestures within the drag area without performing additional anchor touches for each drag gesture. During a drag gesture session, touch events outside the drag area, as well as touch events within the drag area which are determined not to be valid drag gestures, are discarded.
[0057] Some embodiments of the process may optionally transition to block 615, in which the touch screen subsystem 320 limits all touch scanning to the touch panel coordinates within the boundary of the drag area. This differs from the processing limitation of step 610 in that touch events outside of the drag area are not merely discarded; such events are never registered, because the process 600 does not scan for touch events outside of the drag area.
[0058] The process 600 then transitions to block 620 in which the touch screen subsystem detects a drag gesture. In some embodiments, this step may be performed by the touch processor 324 comparing parameters of a touch event within the drag area (such as pressure, direction, shape, start point, end point, and/or duration of the touch event) with drag gesture parameters stored in a database such as gesture data store 330 of FIG. 3. After detecting a drag gesture, the process 600 transitions to block 625 in which a function or information set associated with the drag gesture is identified and sent to the host processor 340. The process then loops back to block 620 to perform the step of detecting an additional drag gesture, and will continue this loop for the duration of a drag gesture session.
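A drag gesture session could be sketched as a small state holder like the one below; the inactivity timeout (anticipating the session-ending conditions discussed in the next paragraph), the event fields, and the caller-supplied recognizer are illustrative assumptions.

    SESSION_TIMEOUT_MS = 5000  # assumed: session ends after 5 s without a valid drag gesture

    class DragGestureSession:
        def __init__(self, drag_area, start_ms):
            self.drag_area = drag_area        # (center_x, center_y, radius)
            self.last_gesture_ms = start_ms

        def is_active(self, now_ms):
            # The block 620/625 loop continues until the inactivity timeout lapses.
            return (now_ms - self.last_gesture_ms) < SESSION_TIMEOUT_MS

        def handle_event(self, event, now_ms, match_drag_gesture):
            # Events outside the drag area, or unmatched ones inside it, are discarded.
            cx, cy, radius = self.drag_area
            inside = (event["x"] - cx) ** 2 + (event["y"] - cy) ** 2 <= radius ** 2
            if not inside:
                return None
            action = match_drag_gesture(event)  # caller-supplied drag-gesture recognizer
            if action is not None:
                self.last_gesture_ms = now_ms   # a recognized gesture keeps the session alive
            return action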
[0059] The amount of time for which process 600 will loop between blocks 620 and 625 to maintain the drag gesture session may vary in different embodiments. For example, some embodiments may maintain a drag gesture session for the duration of use of a specific software program or application, while other embodiments may continue the drag gesture session until the user provides an indication that the drag gesture session is over. Yet other embodiments may continue the drag gesture session until determining that a predetermined time period has lapsed during which no drag gesture was made.
VII. Terminology
[0060] The technology is operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with the invention include, but are not limited to, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, processor-based systems, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
[0061] As used herein, instructions refer to computer-implemented steps for processing information in the system. Instructions can be implemented in software, firmware or hardware and include any type of programmed step undertaken by components of the system.
[0062] A processor may be any conventional general purpose single- or multi-chip processor such as a Pentium® processor, a Pentium® Pro processor, an 8051 processor, a MIPS® processor, a Power PC® processor, or an Alpha® processor. In addition, the processor may be any conventional special purpose processor such as a touchscreen controller, a digital signal processor or a graphics processor. The processor typically has conventional address lines, conventional data lines, and one or more conventional control lines.
[0063] The system comprises various modules, as discussed in detail herein. As can be appreciated by one of ordinary skill in the art, each of the modules comprises various sub-routines, procedures, definitional statements and macros. Each of the modules is typically separately compiled and linked into a single executable program. Therefore, the description of each of the modules is used for convenience to describe the functionality of the preferred system. Thus, the processes that are undergone by each of the modules may be arbitrarily redistributed to one of the other modules, combined together in a single module, or made available in, for example, a shareable dynamic link library.
[0064] The system may be used in connection with various operating systems such as Linux®, UNIX® or Microsoft Windows®.
[0065] The system may be written in any conventional programming language such as C, C++, BASIC, Pascal, or Java, and run under a conventional operating system. C, C++, BASIC, Pascal, Java, and FORTRAN are industry standard programming languages for which many commercial compilers can be used to create executable code. The system may also be written using interpreted languages such as Perl, Python or Ruby.
[0066] Those of skill will further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
[0067] The various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
[0068] In one or more example embodiments, the functions and methods described may be implemented in hardware, software, or firmware executed on a processor, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer- readable medium. Computer-readable media include both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available media that can be accessed by a computer. By way of example, and not limitation, such computer- readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
[0069] The foregoing description details certain embodiments of the systems, devices, and methods disclosed herein. It will be appreciated, however, that no matter how detailed the foregoing appears in text, the systems, devices, and methods can be practiced in many ways. As is also stated above, it should be noted that the use of particular terminology when describing certain features or aspects of the invention should not be taken to imply that the terminology is being re-defined herein to be restricted to including any specific characteristics of the features or aspects of the technology with which that terminology is associated.
[0070] It will be appreciated by those skilled in the art that various modifications and changes may be made without departing from the scope of the described technology. Such modifications and changes are intended to fall within the scope of the embodiments. It will also be appreciated by those of skill in the art that parts included in one embodiment are interchangeable with other embodiments; one or more parts from a depicted embodiment can be included with other depicted embodiments in any combination. For example, any of the various components described herein and/or depicted in the Figures may be combined, interchanged or excluded from other embodiments.
[0071] With respect to the use of substantially any plural and/or singular terms herein, those having skill in the art can translate from the plural to the singular and/or from the singular to the plural as is appropriate to the context and/or application. The various singular/plural permutations may be expressly set forth herein for sake of clarity.
[0072] It will be understood by those within the art that, in general, terms used herein are generally intended as "open" terms (e.g., the term "including" should be interpreted as "including but not limited to," the term "having" should be interpreted as "having at least," the term "includes" should be interpreted as "includes but is not limited to," etc.). It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases "at least one" and "one or more" to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles "a" or "an" limits any particular claim containing such introduced claim recitation to embodiments containing only one such recitation, even when the same claim includes the introductory phrases "one or more" or "at least one" and indefinite articles such as "a" or "an" (e.g., "a" and/or "an" should typically be interpreted to mean "at least one" or "one or more"); the same holds true for the use of definite articles used to introduce claim recitations.
[0073] In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should typically be interpreted to mean at least the recited number (e.g., the bare recitation of "two recitations," without other modifiers, typically means at least two recitations, or two or more recitations). Furthermore, in those instances where a convention analogous to "at least one of A, B, and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). In those instances where a convention analogous to "at least one of A, B, or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). It will be further understood by those within the art that virtually any disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. For example, the phrase "A or B" will be understood to include the possibilities of "A" or "B" or "A and B."
[0074] While various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are for purposes of illustration and are not intended to be limiting.

Claims

WHAT IS CLAIMED IS:
1. A system configured to recognize multitouch gestures, the system comprising:
a touch panel;
a touch detection module configured to capture a first touch event and a second touch event on the touch panel; and
a processing module configured to determine if the second touch event is within a predefined boundary area from the first touch event, and discard the touch event if it is outside of the predefined boundary, the processing module further configured to track a position of a touch event within the predefined boundary and activate a predetermined object drag process based on the position of the touch event.
2. The system of Claim 1, wherein the system is implemented in a mobile phone, a computer, or digital imaging device.
3. The system of Claim 1, wherein the processing module comprises a touch screen subsystem having a touch screen controller.
4. The system of Claim 1, wherein the touch panel comprises one of resistive, surface capacitive, projected capacitive, infrared, surface acoustic wave, strain gauge, optical imaging, or dispersive signal touch screen technologies.
5. The system of Claim 1, wherein the first touch is made by a first finger of a single hand and the second touch is made by a second finger of the single hand.
6. The system of Claim 5, wherein the spatial relationship is based at least in part on Euclidean distance and angle between the first finger and the second finger.
7. The system of Claim 1, wherein the second touch event comprises a geometric shape.
8. The system of Claim 1, wherein the drag area occupies an area smaller than the touch panel.
9. A method of implementing a multitouch recognition function on a computing device equipped with a touch panel, the method comprising:
detecting a first touch event at a first location;
defining a base area on the touchscreen display based at least in part on the first location;
determining a drag area of the touch panel based at least in part on a predetermined geometric boundary in relation to the base area;
temporarily limiting subsequent touch processing on the touch panel to the drag area; and
detecting a second touch event within the drag area.
10. The method of Claim 9, wherein the first touch is made by a first finger of a hand and the second touch is made by a second finger of the hand, and wherein determining a drag area further comprises estimating a Euclidean distance and angle between the first finger and the second finger.
11. The method of Claim 9, wherein temporarily limiting subsequent touch processing on the touch panel to the drag area comprises discarding touch events located outside of the drag area.
12. The method of Claim 9, further comprising determining a geometric shape of the second touch event.
13. The method of Claim 12, further comprising associating a function with the geometric shape.
14. The method of Claim 12, further comprising detecting a third touch event within the drag area, determining an additional geometric shape of the third touch event, and associating a function with the combination of the geometric shape and the additional geometric shape.
15. The method of Claim 9, further comprising establishing a permanent drag area from the predetermined geometric boundary and limiting all subsequent touch processing to the permanent drag area for a duration of a predefined session.
16. A non-transitory computer-readable medium comprising code that, when executed, causes a processor to perform the method of:
detecting a first touch event;
defining a base area of the touchscreen display from the first touch event;
determining a drag area of the touchscreen display, the drag area being defined within a predetermined geometric boundary in relation to the base area;
temporarily limiting subsequent touch processing on the touchscreen to the drag area; and
detecting a second touch event within the drag area.
17. The non-transitory computer-readable medium of Claim 16, wherein the first touch is made by a first finger of a hand and the second touch is made by a second finger of the hand, and wherein determining a drag area further comprises estimating a Euclidean distance and angle between the first finger and the second finger.
18. The non-transitory computer-readable medium of Claim 16, wherein temporarily limiting subsequent touch processing on the touch panel to the drag area comprises discarding touch events located outside of the drag area.
19. The non-transitory computer-readable medium of Claim 16, further comprising determining a geometric shape of the second touch event.
20. The non-transitory computer-readable medium of Claim 19, further comprising associating a function with the geometric shape.
21. The non-transitory computer-readable medium of Claim 19, further comprising detecting a third touch event within the drag area, determining an additional geometric shape of the third touch event, and associating a function with the combination of the geometric shape and the additional geometric shape.
22. The non-transitory computer-readable medium of Claim 16, further comprising establishing a permanent drag area from the predetermined geometric boundary and limiting all subsequent touch processing to the permanent drag area for a duration of a predefined session.
23. An apparatus for multitouch recognition, comprising:
means for receiving touch data comprising a first touch event and a second touch event;
means for calculating a spatial relationship between a first location of the first touch event and a second location of the second touch event and establishing a drag area having geometric boundary in relation to the first location;
means for limiting subsequent touch processing to the drag area.
24. The apparatus of Claim 23, wherein the first touch is made by a first finger of a single hand and the second touch is made by a second finger of the single hand.
25. The apparatus of Claim 24, wherein the spatial relationship is based on Euclidean distance and angle between the first finger and the second finger.
26. The apparatus of Claim 23, wherein the means for receiving touch data comprises a touch panel.
27. The apparatus of Claim 26, wherein the touch panel comprises one of resistive, surface capacitive, projected capacitive, infrared, surface acoustic wave, strain gauge, optical imaging, or dispersive signal touch screen technologies.
28. The apparatus of Claim 23, wherein the means for calculating a spatial relationship comprises a touch screen subsystem having a touch screen controller.
29. The apparatus of Claim 23, wherein the means for limiting subsequent touch processing to the drag area comprises a touch screen subsystem having a touch screen controller.
30. The apparatus of Claim 23, further comprising means for determining a geometric shape of the second touch.
31. The apparatus of Claim 30, wherein the means for determining a geometric shape of the second touch comprises a touch screen subsystem having a touch screen controller.
32. The apparatus of Claim 30, further comprising means for associating a function with the geometric shape.
PCT/US2013/066615 2012-12-06 2013-10-24 Multi-touch symbol recognition WO2014088722A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
KR1020157017398A KR20150091365A (en) 2012-12-06 2013-10-24 Multi-touch symbol recognition
CN201380062934.1A CN104885051A (en) 2012-12-06 2013-10-24 Multi-touch symbol recognition
EP13786836.0A EP2929423A1 (en) 2012-12-06 2013-10-24 Multi-touch symbol recognition

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US13/707,206 US20140160054A1 (en) 2012-12-06 2012-12-06 Anchor-drag touch symbol recognition
US13/707,206 2012-12-06

Publications (1)

Publication Number Publication Date
WO2014088722A1 true WO2014088722A1 (en) 2014-06-12

Family

ID=49551793

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2013/066615 WO2014088722A1 (en) 2012-12-06 2013-10-24 Multi-touch symbol recognition

Country Status (5)

Country Link
US (1) US20140160054A1 (en)
EP (1) EP2929423A1 (en)
KR (1) KR20150091365A (en)
CN (1) CN104885051A (en)
WO (1) WO2014088722A1 (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9323380B2 (en) 2013-01-16 2016-04-26 Blackberry Limited Electronic device with touch-sensitive display and three-dimensional gesture-detection
US9335922B2 (en) 2013-01-16 2016-05-10 Research In Motion Limited Electronic device including three-dimensional gesture detecting display
US20140198059A1 (en) * 2013-01-16 2014-07-17 Research In Motion Limited Electronic device with touch-sensitive display and gesture-detection
JP6089880B2 (en) * 2013-03-28 2017-03-08 富士通株式会社 Information processing apparatus, information processing method, and information processing program
US9606716B2 (en) 2014-10-24 2017-03-28 Google Inc. Drag-and-drop on a mobile device
JP6436752B2 (en) * 2014-12-04 2018-12-12 キヤノン株式会社 Information processing apparatus, information processing method and program in information processing apparatus
US10503264B1 (en) 2015-06-16 2019-12-10 Snap Inc. Radial gesture navigation
US10530731B1 (en) 2016-03-28 2020-01-07 Snap Inc. Systems and methods for chat with audio and video elements
KR101928550B1 (en) * 2016-04-21 2018-12-12 주식회사 씨케이머티리얼즈랩 Method and device for supplying tactile message
US10684758B2 (en) 2017-02-20 2020-06-16 Microsoft Technology Licensing, Llc Unified system for bimanual interactions
US10558341B2 (en) * 2017-02-20 2020-02-11 Microsoft Technology Licensing, Llc Unified system for bimanual interactions on flexible representations of content
US10928960B1 (en) * 2020-02-21 2021-02-23 Mobilizar Technologies Pvt Ltd System and method to track movement of an interactive figurine on a touch screen interface
CN115793893B (en) * 2023-02-07 2023-05-19 广州众远智慧科技有限公司 Touch writing handwriting generation method and device, electronic equipment and storage medium


Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040240739A1 (en) * 2003-05-30 2004-12-02 Lu Chang Pen gesture-based user interface
US20100060588A1 (en) * 2008-09-09 2010-03-11 Microsoft Corporation Temporally separate touch input
US8570290B2 (en) * 2009-02-06 2013-10-29 Panasonic Corporation Image display device
KR101593598B1 (en) * 2009-04-03 2016-02-12 삼성전자주식회사 Method for activating function of portable terminal using user gesture in portable terminal
TWI398807B (en) * 2009-04-07 2013-06-11 Ite Tech Inc Posistion apparatus for touch device and posistion method thereof
TWI449557B (en) * 2009-05-27 2014-08-21 Johnson Health Tech Co Ltd The man - machine interface method and man - machine interface device of sports equipment
US9046967B2 (en) * 2009-07-02 2015-06-02 Uusi, Llc Vehicle accessory control interface having capactive touch switches
TW201109994A (en) * 2009-09-10 2011-03-16 Acer Inc Method for controlling the display of a touch screen, user interface of the touch screen, and electronics using the same
KR101660842B1 (en) * 2009-11-05 2016-09-29 삼성전자주식회사 Touch input method and apparatus
CN102971035B (en) * 2010-05-07 2016-03-02 马奎特紧急护理公司 For the user interface of breathing apparatus
US9235340B2 (en) * 2011-02-18 2016-01-12 Microsoft Technology Licensing, Llc Modal touch input
CN102810023B (en) * 2011-06-03 2015-09-23 联想(北京)有限公司 Identify method and the terminal device of gesture motion
KR101863926B1 (en) * 2011-07-19 2018-06-01 엘지전자 주식회사 Mobile terminal and method for controlling thereof
JP2013041350A (en) * 2011-08-12 2013-02-28 Panasonic Corp Touch table system
KR20130083064A (en) * 2011-12-28 2013-07-22 박도현 Computing apparatus and method for providing contents thereof
KR101898979B1 (en) * 2012-02-16 2018-09-17 삼성디스플레이 주식회사 Method of operating a touch panel, touch panel and display device
US20140002376A1 (en) * 2012-06-29 2014-01-02 Immersion Corporation Method and apparatus for providing shortcut touch gestures with haptic feedback
US9261989B2 (en) * 2012-09-13 2016-02-16 Google Inc. Interacting with radial menus for touchscreens
US9195368B2 (en) * 2012-09-13 2015-11-24 Google Inc. Providing radial menus with touchscreens

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0816992A1 (en) * 1996-06-24 1998-01-07 Sharp Kabushiki Kaisha Coordinate input apparatus
US20040130537A1 (en) * 2002-12-23 2004-07-08 Lg.Philips Lcd Co., Ltd. Method for driving a touch panel device
EP2077490A2 (en) * 2008-01-04 2009-07-08 Apple Inc. Selective rejection of touch contacts in an edge region of a touch surface
JP2011028603A (en) * 2009-07-28 2011-02-10 Nec Casio Mobile Communications Ltd Terminal device and program
US20110105193A1 (en) * 2009-10-30 2011-05-05 Samsung Electronics Co., Ltd. Mobile device supporting touch semi-lock state and method for operating the same
EP2508964A1 (en) * 2009-12-02 2012-10-10 Sony Corporation Touch operation determination device, and touch operation determination method and program

Also Published As

Publication number Publication date
KR20150091365A (en) 2015-08-10
CN104885051A (en) 2015-09-02
EP2929423A1 (en) 2015-10-14
US20140160054A1 (en) 2014-06-12

Similar Documents

Publication Publication Date Title
US20140160054A1 (en) Anchor-drag touch symbol recognition
US10275113B2 (en) 3D visualization
US8847904B2 (en) Gesture recognition method and touch system incorporating the same
EP1507192B1 (en) Detection of a dwell gesture by examining parameters associated with pen motion
US8217909B2 (en) Multi-finger sub-gesture reporting for a user interface device
JP4132129B2 (en) Method and system for facilitating stylus input
EP2267589A2 (en) Method and device for recognizing a dual point user input on a touch based user input device
US9632693B2 (en) Translation of touch input into local input based on a translation profile for an application
US10126873B2 (en) Stroke continuation for dropped touches on electronic handwriting devices
CN102768595B (en) A kind of method and device identifying touch control operation instruction on touch-screen
JP2012221072A (en) Information processing apparatus, information processing method, and computer program
CN102662462A (en) Electronic device, gesture recognition method and gesture application method
CN103164067B (en) Judge the method and the electronic equipment that touch input
JP2013525891A (en) Method and device for determining a user's touch gesture
JP2016529640A (en) Multi-touch virtual mouse
JP2017506399A (en) System and method for improved touch screen accuracy
EP3008556B1 (en) Disambiguation of indirect input
US20140298275A1 (en) Method for recognizing input gestures
US9727151B2 (en) Avoiding accidental cursor movement when contacting a surface of a trackpad
US20060017702A1 (en) Touch control type character input method and control module thereof
TW201528114A (en) Electronic device and touch system, touch method thereof
KR20140070264A (en) Method and apparatus for sliding objects across a touch-screen display
US20170123623A1 (en) Terminating computing applications using a gesture
JP2005322194A (en) Touch panel type character inputting method and its control module
CN117157611A (en) Touch screen and trackpad touch detection

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13786836

Country of ref document: EP

Kind code of ref document: A1

DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
WWE Wipo information: entry into national phase

Ref document number: 2013786836

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 20157017398

Country of ref document: KR

Kind code of ref document: A