CN106537305B - Method for classifying touch events and touch sensitive device - Google Patents

Info

Publication number
CN106537305B
CN106537305B (application CN201580037941.5A)
Authority
CN
China
Prior art keywords
classification
blob
touch
frames
instructions
Prior art date
Legal status
Active
Application number
CN201580037941.5A
Other languages
Chinese (zh)
Other versions
CN106537305A (en)
Inventor
D·约翰逊
P·萨拉
Current Assignee
Microsoft Technology Licensing LLC
Original Assignee
Microsoft Technology Licensing LLC
Priority date
Filing date
Publication date
Application filed by Microsoft Technology Licensing LLC filed Critical Microsoft Technology Licensing LLC
Publication of CN106537305A publication Critical patent/CN106537305A/en
Application granted granted Critical
Publication of CN106537305B publication Critical patent/CN106537305B/en

Classifications

    • G06N20/00 Machine learning
    • G06N20/20 Ensemble learning
    • G06N5/027 Frames (knowledge representation; symbolic representation)
    • G06F3/0416 Control or interface arrangements specially adapted for digitisers
    • G06F3/0418 Control or interface arrangements specially adapted for digitisers for error correction or compensation, e.g. based on parallax, calibration or alignment
    • G06F3/04186 Touch location disambiguation
    • G06F3/044 Digitisers, e.g. for touch screens or touch pads, characterised by capacitive transducing means
    • G06F3/04883 Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Image Analysis (AREA)
  • User Interface Of Digital Computer (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Sorting Of Articles (AREA)
  • Automatic Disk Changers (AREA)
  • Supplying Of Containers To The Packaging Station (AREA)
  • Transition And Organic Metals Composition Catalysts For Addition Polymerization (AREA)

Abstract

A method for touch classification, the method comprising: obtaining frame data representing a plurality of frames captured by a touch sensitive device; analyzing the frame data to define respective blobs in each of the plurality of frames, the blobs indicating touch events; computing a plurality of feature sets for the touch event, each feature set specifying attributes of a respective blob in each of a plurality of frames; and determining the type of the touch event via a machine learning classification configured to provide a plurality of non-bimodal classification scores for the plurality of frames based on the plurality of feature sets, each non-bimodal classification score indicating a level of uncertainty in the machine learning classification.

Description

Method for classifying touch events and touch sensitive device
Technical Field
The present invention extends to methods, systems, and computer program products for classifying touch events on a touch-sensitive surface of a computing device.
Background
Touches to a touch-sensitive surface of a computing device generally include both intentional touches and unintentional touches. An unintentional touch event may be caused by the palm of the user's hand inadvertently or otherwise contacting the touch surface. Other unintended touches may include the thumb or other part of the hand resting on the bezel of a handheld device. Distinguishing between intended touches and unintended touches may allow such unintended touch events to be rejected or ignored by the computing device. The rejection of an unintentional touch event, coupled with the correct recognition of an intentional touch event such as an intentional finger or stylus (or pen) touch, may provide an improved user experience for the computing device. However, the accuracy with which computing devices distinguish between intended and unintended touches can still be improved.
Disclosure of Invention
Methods, systems, and computer program products are provided for classifying touch events on a touch-sensitive surface of a computing device.
According to one embodiment of the invention, there is provided a computer-implemented method of classifying touch events, comprising: obtaining frame data representing a plurality of frames captured by a touch sensitive device; analyzing the frame data to define a respective blob in each frame of the plurality of frames, the blob indicating a touch event; computing a plurality of feature sets for the touch event, each feature set specifying attributes of a respective blob in each of the plurality of frames; and determining the type of the touch event via a machine learning classification configured to provide a plurality of non-bimodal classification scores for the plurality of frames based on the plurality of feature sets, each non-bimodal classification score indicating a level of uncertainty in the machine learning classification.
According to another embodiment of the present invention, there is provided a touch sensitive device including: a touch-sensitive surface; a memory having stored therein blob defining instructions, feature calculation instructions, and machine learning classification instructions; and a processor coupled to the memory, the processor configured to obtain frame data representing a plurality of frames captured via the touch-sensitive surface, and configured to execute the blob defining instructions to analyze the frames to define a respective blob in each of the plurality of frames, the blob indicating a touch event; wherein the processor is further configured to execute the feature calculation instructions to calculate a plurality of feature sets for the touch event, each feature set specifying attributes of a respective blob in each of the plurality of frames; and wherein the processor is further configured to execute the machine learning classification instructions to determine the type of the touch event via machine learning classification, the machine learning classification configured to provide a plurality of non-bimodal classification scores based on a plurality of feature sets of the plurality of frames, each non-bimodal classification score indicative of a level of uncertainty in the machine learning classification.
According to still another embodiment of the present invention, there is provided a touch sensitive device including: a touch-sensitive surface; a memory having a plurality of instruction sets stored therein; and a processor coupled to the memory and configured to execute a plurality of instruction sets, wherein the plurality of instruction sets includes: first instructions that cause the processor to obtain frame data representing a plurality of sensor images captured by the touch-sensitive device; second instructions that cause the processor to analyze the frame data to define a respective connection portion in each sensor image of the plurality of sensor images, the connection portion being indicative of a touch event; third instructions that cause the processor to calculate a plurality of feature sets for the touch event, each feature set specifying attributes of a respective connected portion in each sensor image of the plurality of sensor images; fourth instructions that cause the processor to determine a type of the touch event via a machine learning classification, the machine learning classification configured to provide a plurality of non-bimodal classification scores based on a plurality of feature sets of frame data of the plurality of sensor images, each non-bimodal classification score indicating a level of uncertainty in the machine learning classification; and fifth instructions that cause the processor to provide an output to the computing system, the output indicating a type of the touch event; wherein the fourth instructions comprise gather instructions that cause the processor to gather information representing the touch event on the plurality of sensor images.
Drawings
For a more complete understanding of this disclosure, reference is made to the following detailed description and accompanying drawings, in which like reference numerals may be used to identify like elements in the figures.
FIG. 1 is a block diagram of a system configured for touch classification according to one example.
FIG. 2 is a flow diagram of a computer-implemented method for touch classification according to an example.
Fig. 3 is a flow diagram of a non-bimodal scoring process of the method of fig. 2 according to one example.
Fig. 4 is a flow diagram of a non-bimodal scoring process according to another example of the method of fig. 2.
FIG. 5 is a block diagram of a computing environment according to one example of an implementation for the disclosed methods and systems, or one or more components or aspects thereof.
While the disclosed systems and methods are susceptible of embodiment in various forms, specific embodiments have been shown in the drawings (and will be described below), with the understanding that the present disclosure is intended to be illustrative, and is not intended to limit the invention to the specific embodiments described and shown herein.
Detailed Description
Methods, systems, and computer program products are provided for classifying touch events on a touch-sensitive surface of a computing device. Machine learning classifiers are used to distinguish between intended and unintended touches. An unintentional touch event may be caused by the palm of the user's hand inadvertently or otherwise contacting the touch surface. Other unintended touches may include the thumb or other part of the hand being on the bezel of the handheld device. The differentiation may allow such unintentional touch events to be rejected or ignored by the computing device. The rejection of an unintentional touch event coupled with the correct recognition of an intentional touch event, such as an intentional finger or stylus (or pen) touch, may provide an improved user experience for the computing device. In some cases, the classification techniques may also distinguish between different types of intended touches (e.g., between finger and pen touch events). The distinguishing may also include generating data indicative of a confidence or uncertainty level of the classification.
These classification techniques may address challenges presented by touch systems configured for stylus or pen touches. For example, while applying marks to a touch-sensitive surface with a stylus or other pen tool, a user may inadvertently rest his or her palm (or another portion of the hand or wrist) on the surface. The computing device may then incorrectly interpret the inadvertent palm contact as a legitimate input activity, thereby causing potentially unwanted behavior of the computing device. Other inadvertent touches may involve the user accidentally swiping or hitting a hand (or a pen or stylus device held in the same hand) against other parts of the surface. Yet another stylus-related challenge that may be addressed by these classification techniques relates to correctly classifying fingers that are not holding the stylus but nonetheless contact the screen. Matters are made more difficult when the palm touches down near the edge of the screen, where only a small portion of the palm is detectable by the touch-sensitive surface. These classification techniques can also correctly classify the palm despite the reduced area of the palm in contact with the touch-sensitive surface.
These classification techniques may provide a low-computational-complexity process that reliably distinguishes between intended and unintended touches in real time. These techniques may achieve low error rates without introducing undue delay in user interface responsiveness. False positives (unintentional touches accepted as input) and false negatives (missed intentional touches) may be avoided through the configuration of the machine learning classifiers and/or other aspects of these techniques. The machine learning classifier may be trained on sensor images (or frame data) collected for each type of touch (e.g., from multiple people). These classification techniques thus do not have to rely on overly simple algorithms in order to minimize latency in user interface processing. Machine learning classification provides reliable classification in real time, i.e., without introducing a noticeable latency impact.
Improvements in accuracy may be realized in computing devices having varying amounts of memory and other computing resources available. Different machine learning classifiers may be used to accommodate different resource levels. For example, in some cases, the machine learning classifier is configured as a Random Decision Forest (RDF) classifier configured to provide a conditional probability distribution. The RDF classifier may involve storage of the RDF tree data structure on the order of tens of thousands of bytes of memory. RDF implementations may thus be useful for general-purpose processors (e.g., central processing units or graphics processing units) where touch classification occurs at a software level, such as an operating system level.
The classification techniques may also be implemented in computing environments where memory and other computing resources are more constrained. In some cases, the machine learning classification is provided via a decision tree classifier implemented as one or more look-up tables. The smaller classification data structure of the look-up table is useful when classification is implemented by microcontrollers and other resource-constrained hardware. The classification calculations may thus be implemented on a wide variety of computing platforms. Although described below in connection with RDF and look-up table examples, the classification techniques are not limited to any particular type of machine learning classifier. For example, neural networks, fuzzy logic, support vector machines, and logistic regression classifiers may be used.
The classification technique is configured to aggregate touch information over multiple frames of a touch event to improve the accuracy of the classification. This aggregation avoids problems that may arise when attempting to classify touch events based on only a single image or frame. For example, the palm may appear similar to an intended touch when the palm first makes contact with the surface, or when the palm has been nearly removed from the surface; at either point, only a small portion of the palm may be detected. In some cases, aggregation involves classification scores that are aggregated over multiple frames. In other cases, aggregation involves aggregating attributes or features of touch events over multiple frames. In still other cases, the classification technique may use a combination of these two types of aggregation. Aggregation may also help avoid false positives caused by other situations, such as when the typically large area of a palm touch tends to disappear because the user is in an electrically floating state (e.g., the user does not have a good high-frequency connection to ground).
The machine learning classifier is configured to provide a plurality of non-bimodal classification scores. Each non-bimodal classification score indicates an uncertainty or confidence level in the machine learning classification. The nature of the classification score may vary depending on, for example, the type of classifier used. For example, each classification score may be non-bimodal in the sense that the classification score is a probability value (e.g., a floating point or other non-integer number falling between 0 and 1). A plurality of probability values (e.g., one probability value for each type of touch event) may be provided. Other types of classification scores may instead use integer values. For example, the classification score may be a rating that falls within a range of possible scores (e.g., -9 to +9). In such a case, the multiple non-bimodal scores may be combined (e.g., added) to determine a final composite rating. For example, non-bimodal scores from multiple lookup tables may be combined for each frame, and the per-frame results may then be aggregated across all frames associated with a touch event. Other types of non-bimodal scores may be used. For example, probability- and rating-type classification scores may be combined to varying degrees to provide a hybrid classification approach.
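For illustration only, the two score styles described above might be handled as in the following Python sketch; the numeric values and type names are illustrative assumptions, not data from the described embodiments.

    # Probability-style scores: one distribution per frame; average across frames
    # and read off the most likely type together with its non-bimodal confidence.
    frame_probs = [
        {'finger': 0.45, 'pen': 0.32, 'palm': 0.23},
        {'finger': 0.60, 'pen': 0.25, 'palm': 0.15},
        {'finger': 0.55, 'pen': 0.30, 'palm': 0.15},
    ]
    avg = {t: sum(p[t] for p in frame_probs) / len(frame_probs) for t in frame_probs[0]}
    touch_type = max(avg, key=avg.get)      # e.g., 'finger'
    confidence = avg[touch_type]            # a value between 0 and 1, not just 0 or 1

    # Rating-style scores: per-frame ratings in a range such as -9 to +9 are
    # summed into a cumulative rating for later comparison against a threshold.
    frame_ratings = [4, 7, 6]
    cumulative_rating = sum(frame_ratings)  # e.g., 17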
The term "finger touch" is used herein to refer to any intentional or intentional touch event involving a user's hand or other body part. For example, a finger touch may involve the side of a thumb contacting a touch-sensitive surface, which may occur, for example, during a two-finger zoom gesture. The touch may be direct or indirect. For example, touching may be made with a gloved hand or otherwise donned body part.
The term "pen touch" is used to refer to a variety of different intended touches involving a pen, stylus, or other object held by a user to interact with a touch-sensitive surface. Computing devices may be configured for use with a variety of different or marked physical objects, including discs, special tools such as brushes or spray guns, mobile devices, toys, and other physical icons or tangible objects.
The term "touch" is used to refer to any interaction with a touch-sensitive surface that is detected by an input sensor associated with the touch-sensitive surface. Touch may not include or involve direct physical contact. The interaction may be indirect. For example, the touch-sensitive surface may be configured with a proximity sensor. The interaction may be detected via various physical properties, such as an electromagnetic field. The nature and/or source of touch may vary accordingly, including, for example, finger or hand contact, pen or stylus contact, hover-based input, marked objects, and any other object placed in contact with the input surface or otherwise adjacent to the touch surface. Accordingly, these classification techniques may be useful in connection with gesture and hover type touch events that involve projected capacitance, optical, and/or other sensing techniques.
The terms "palm" and "palm touch" are used to refer to contacts or other touch surface interactions involving any one or more body parts that a user does not intend to interpret as touching or otherwise interacting with a touch-sensitive surface. These body parts may include other parts of the hand other than the palm, such as the knuckles of the hand, the sides of the fingers, the wrist or forearm, or other body parts.
These classification techniques may be useful for a variety of handheld and other computing devices. Thus, the nature of the touch-sensitive surface, and thus the interaction with the touch-sensitive surface, may vary. An intended touch is therefore not limited to one involving a user's fingertip or finger. These classification techniques are compatible and useful in connection with any touch-sensitive computing device having one or more touch-sensitive surfaces or areas (e.g., a touchscreen, a touch-sensitive bezel or housing, sensors for detecting hover-type inputs, optical touch sensors, etc.). Examples of touch-based computing devices include, but are not limited to, a touch-sensitive display device connected to a computing device, a touch-sensitive telephone device, a touch-sensitive media player, a touch-sensitive e-reader, a notebook, a netbook, an electronic book (dual screen), or a tablet computer, or any other device with one or more touch-sensitive surfaces. Thus, the size and form factor of the touch-sensitive computing device may vary. For example, the size of the touch-sensitive surface may range from the display of a handheld or wearable computing device to a wall-mounted display or other large-format display screen. However, the touch-sensitive surface may or may not be associated with, or include, a display or touch screen. For example, the touch-sensitive surface may be provided as a trackpad, or may be a virtual surface spatially implemented as a plane for detecting touch inputs (e.g., as may be implemented using Microsoft Corporation's Kinect device).
These classification techniques are described in connection with capacitive touch systems. Although reference is made herein to capacitive sensing, the touch classification techniques described herein are not limited to any particular type of touch sensor. The touch-sensitive surface may alternatively use resistive, acoustic, optical, and/or other types of sensors. The touch-sensitive surface may thus alternatively detect changes in pressure, light, displacement, heat, resistance, and/or other physical parameters. The manner in which the touch-sensitive surface detects an input device, such as a stylus or pen, may vary. For example, the pen may be passive and/or active. Any active pen may emit or retransmit a signal that is detected through the touch-sensitive surface. For proximity detection purposes, a passive pen may include a magnet or other object or material (e.g., a stylus tip) that interferes with the electromagnetic field or other characteristics of the touch-sensitive surface. Other aspects of the nature of touch sensor technology may vary.
FIG. 1 depicts a touch sensitive device 100 configured to implement touch classification. The device 100 includes a touch system 102 and a touch-sensitive surface 104. The touch-sensitive surface 104 may be a touch screen or other touch-sensitive display. Any number of touch sensitive surfaces 104 may be included. In this example, the device 100 also includes a processor 106 and one or more memories 108. The touch system 102 may serve as an interface or other middleware between the touch-sensitive surface 104 and an operating environment supported by the processor 106 and memory 108. The processor 106 may be a general-purpose processor, such as a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), or any other special-purpose processor or processing unit. Any number of such processors or processing units may be included.
The touch system 102 may be communicatively coupled to the processor 106 and/or the memory 108 to provide data indicative of touch events occurring at the touch-sensitive surface 104. The touch event data can specify a location and a type of touch event. The data may also represent a level of uncertainty for the type assignment. More, less, or alternative information may be provided by the touch system 102 in conjunction with the event type. For example, the touch event type can be provided with data indicating the touch event identification code. The location of the touch event and/or other information may be provided separately by the touch system 102.
In the example of fig. 1, the touch system 102 includes one or more touch sensors 110, firmware and/or drivers 112, a processor 114, and one or more memories 116. The processor 114 is communicatively coupled to each memory 116 and/or firmware/driver 112. The processor 114 is configured to obtain frame data captured via the touch-sensitive surface 104. The frame data represents a plurality of frames captured via the touch-sensitive surface 104. The frame data for each frame may include a matrix of values or pixels that together form an image of the extent to which the touch event occurred at the surface 104. The value of each pixel is indicative of the amount of touch that the sensor 110 has detected at a particular location on the surface 104. The frame data may include raw output data of the touch sensor 110 and/or a processed representation including the raw output data.
The manner in which the processor 114 obtains the frame data may vary. For example, frame data may be received via firmware/driver 112 and/or obtained by accessing memory 116.
The frame data may alternatively or additionally be obtained by the processor 106. In some cases, the processor 106 obtains frame data for the purpose of implementing the touch type determination. In such cases, the processor 114 may be involved in controlling the sensor 110 and/or configured to implement one or more pre-processing tasks or other tasks to prepare for the determination. Processing of frame data and other aspects of the touch classification technique may be implemented by any combination of the processor 106 and the processor 114. In other examples, the device 100 includes a single processor (e.g., the processor 106, the processor 114, or a different processor) for the purpose of obtaining and processing frame data.
The configuration and arrangement of the touch system hardware in the device 100 may vary. For example, each touch sensor 110 may alternatively be configured as a component of the touch-sensitive surface 104. Drivers and other information provided via the firmware 112 may alternatively be stored in the memory 116.
The processor 114 is configured to execute a plurality of instruction sets stored in the memory 116 and/or the memory 108. These sets of instructions may be arranged as respective software modules. Modules or other sets of instructions may be integrated to any desired extent. The instruction set includes blob defining instructions 118, feature calculating instructions 120, and machine learning classification instructions 122. The blob defining instructions 118 may involve defining blobs or connected components for touch events across multiple frames of frame data. A corresponding blob may be defined in each frame in which a touch event occurs. The feature computation instructions 120 may involve computing attributes or features of the blob. Each feature may characterize an aspect of the blob that may be helpful in identifying the type of touch event. The machine-learned classification instructions 122 relate to determining the type of touch event based on the feature set by machine-learned classification. The output of the classification includes a plurality of non-bimodal classification scores that identify a level of uncertainty for the classification. As a result, the feature set(s) may be applied to a machine learning classifier to generate a classification score. Each instruction set is described in connection with a number of examples below. Additional instructions, modules, or sets of instructions may be included. For example, one or more sets of instructions for generating touch types based on the output of the machine learning classifier and/or transmitting data indicative of the touch types to the processor 106 or other components of the device 100 may be included.
The processor 114 is configured to execute blob defining instructions 118 to analyze the frames to define a corresponding blob for the touch event in each of the plurality of frames. A given touch event may span any number of frames. The definition of blobs over multiple frames establishes the input data to be processed when classifying touch events.
The blob defining instructions 118 may cause the processor 114 to perform a plurality of pre-processing actions to prepare the frame data for analysis. For example, frame data (e.g., raw frame data) may be upsampled and/or thresholded prior to the analysis (e.g., the blob definition analysis). Upsampling the frame data may include upsampling by a factor of four via bilinear interpolation; other upsampling rates may be used. Other upsampling processes may also be used, such as bicubic, convolutional, and nearest-neighbor techniques. In some cases, the frame data is analyzed without upsampling, without thresholding, or without both upsampling and thresholding. For example, in some cases where a look-up table classifier is used, the frame data is not upsampled.
Thresholding the frame data may involve eliminating small fluctuations in the frame data due to noise. For example, a predetermined intensity threshold is used to reset all pixels in the frame data having an intensity value below the threshold to a value of 0. Additional or alternative filtering techniques may be used to remove noise. In the case where frame data is upsampled, thresholding is implemented after the upsampling.
The frame data threshold may be different from other thresholds used by the touch system 102. For example, the frame data threshold may be lower than the threshold used by the touch system 102 to detect touch events. A lower threshold may be useful in detecting palm and/or other touch events before and after actual contact with the surface 104. For example, detecting the palm immediately prior to the contact (e.g., palm approaching surface 104) may provide one or more additional frames of data for the purpose of distinguishing palm touches from intended touches. In some cases, the threshold may be established as a configurable or predetermined percentage of the touch system threshold, which may be a variable threshold depending on the threshold level detected by the touch system 102.
Thresholding may alternatively or additionally involve generating a binary representation of the frame data on a pixel-by-pixel basis. Each pixel is either "on" or "off" based on the intensity value of the pixel relative to the threshold. The binary representation of the frame data may be used to simplify the process for defining blobs in the frame data.
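A minimal sketch of this pre-processing, assuming the raw frame data is available as a 2D NumPy array; the function names, the factor of four, and the threshold handling are illustrative choices rather than the described implementation.

    import numpy as np

    def upsample_bilinear(frame, factor=4):
        """Upsample a 2D intensity frame by `factor` using separable linear interpolation."""
        rows, cols = frame.shape
        r_new = np.linspace(0, rows - 1, rows * factor)   # target rows in source-pixel units
        c_new = np.linspace(0, cols - 1, cols * factor)   # target cols in source-pixel units
        tmp = np.array([np.interp(c_new, np.arange(cols), row) for row in frame])
        return np.array([np.interp(r_new, np.arange(rows), col) for col in tmp.T]).T

    def threshold_frame(frame, threshold):
        """Zero out sub-threshold pixels and derive the binary ("on"/"off") representation."""
        cleaned = np.where(frame >= threshold, frame, 0.0)
        return cleaned, cleaned > 0

    # Example usage (raw_frame and noise_threshold are hypothetical inputs):
    # upsampled = upsample_bilinear(raw_frame)
    # cleaned, binary = threshold_frame(upsampled, noise_threshold)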
Blobs are defined in the frame data by analyzing the frame data to determine which "on" pixels (e.g., pixels having non-zero intensity) are adjacent to other "on" pixels. Such adjacent pixels are considered to be connected to each other. Groups of connected pixels are then considered to be connected components, or blobs, in the frame image. Each frame may have a plurality of blobs, each of which may be separately classified. Connected component analysis is performed across each frame to detect all of the blobs in that frame. The blob defining instructions 118 may direct the processor 114 to assign an identifier to each blob for purposes of tracking and directing future processing related to each blob.
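The connected-component step might be sketched as a breadth-first grouping of adjacent "on" pixels; whether 4- or 8-connectivity is used is an assumption here, since the text only says "adjacent".

    from collections import deque

    def find_blobs(binary, connectivity=8):
        """Group adjacent "on" pixels of a binary frame into blobs (connected components)."""
        rows, cols = len(binary), len(binary[0])
        if connectivity == 8:
            nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
        else:
            nbrs = [(-1, 0), (1, 0), (0, -1), (0, 1)]
        labels = [[0] * cols for _ in range(rows)]
        blobs, next_id = {}, 1
        for r in range(rows):
            for c in range(cols):
                if binary[r][c] and not labels[r][c]:
                    queue, pixels = deque([(r, c)]), []
                    labels[r][c] = next_id
                    while queue:                      # breadth-first flood fill
                        y, x = queue.popleft()
                        pixels.append((y, x))
                        for dy, dx in nbrs:
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < rows and 0 <= nx < cols
                                    and binary[ny][nx] and not labels[ny][nx]):
                                labels[ny][nx] = next_id
                                queue.append((ny, nx))
                    blobs[next_id] = pixels           # blob id -> list of member pixels
                    next_id += 1
        return blobs, labels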
In many cases, each blob corresponds to a discrete object of the touch surface 104. However, there are cases in which two or more blobs are caused by the same object. For example, a touch contact of a palm is sometimes divided into two or more separate blobs. In other cases, a single touch blob is the result of multiple objects that are too close to each other (e.g., the tips of fingers touching each other). Connected component analysis of frame data may thus include further processing of the frame data to address these potential challenges. For example, the blob definition instructions 118 may cause the processor 114 to further analyze the frame data to determine whether blob splitting and/or blob merging is warranted.
In blob splitting analysis, each blob may be analyzed to determine whether the blob is the result of contact by multiple fingertips belonging to adjacent fingers. The determination may use a classifier. Blobs that are believed to correspond to multiple fingers are broken into separate sub-blobs for analysis. In one example, the coordinates of each fingertip contact are determined based on the location of a local maximum (e.g., a pixel whose intensity value is greater than or equal to that of all 8 of its contiguous neighbors) in the blob. A fingertip is assumed to be located at each local intensity maximum in the blob, and the blob is partitioned into sub-blobs by assigning each pixel in the original blob to the sub-blob associated with the fingertip whose position (e.g., local maximum) is closest to that pixel.
The decision as to whether a given blob is in fact generated from multiple fingertips may be implemented using a classifier. In one example, the classifier is implemented via one or more decision trees. Various different classifiers may be used. The classifier may or may not be machine-learned. For example, the classifier may be implemented manually based on experience or other data.
In one exemplary blob splitting process, a list of the coordinates (which may have sub-pixel accuracy) of all local maxima in the blob having intensities above or equal to a given threshold is generated. The maxima in the list are ordered in their order of shortest traversal (i.e., the shortest path through all local maxima in the list). The threshold is found from the training data as the minimum intensity value achieved by a local maximum corresponding to a fingertip.
The features employed by the classifier may be or include the area of the blob, the number of local maxima, the distances between consecutive local maxima in the sorted group (i.e., in their shortest traversal order), and the intensity variation along the lines connecting consecutive local maxima in the sorted group. The classifier may thus be configured to respond to the distance between local maxima and the extent to which the intensity falls between the local maxima.
Once a blob has been deemed to correspond to multiple fingertips, the blob is broken up into separate blobs, each blob being associated with a respective one of the local maxima. Each such blob is formed from, or includes, a subset of pixels from the original blob. The pixels in the original blob are associated with corresponding ones of the new blobs having the closer (or closest) local maxima.
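A sketch of the splitting step under stated assumptions: the classifier gate described above (deciding whether splitting is warranted) is omitted, intensities are addressed as intensity[row][col], and the maxima threshold would come from training data.

    def split_blob(blob_pixels, intensity, maxima_threshold):
        """Split one blob into sub-blobs, one per qualifying local intensity maximum."""
        rows, cols = len(intensity), len(intensity[0])
        maxima = []
        for (r, c) in blob_pixels:
            v = intensity[r][c]
            if v < maxima_threshold:
                continue
            nbrs = [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0)]
            if all(intensity[nr][nc] <= v for (nr, nc) in nbrs
                   if 0 <= nr < rows and 0 <= nc < cols):
                maxima.append((r, c))        # pixel >= all of its 8 contiguous neighbors
        if len(maxima) <= 1:
            return [blob_pixels]             # nothing to split
        sub_blobs = [[] for _ in maxima]
        for (r, c) in blob_pixels:           # assign each pixel to its closest maximum
            dists = [(r - mr) ** 2 + (c - mc) ** 2 for (mr, mc) in maxima]
            sub_blobs[dists.index(min(dists))].append((r, c))
        return sub_blobs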
Blob splitting may be implemented where more complexity is involved in the touch classification. For example, blob splitting may be implemented where a random decision forest classifier is used for touch classification. In other cases, including those using lookup tables and other low-resource classifiers for touch classification, blob splitting is not implemented. Nevertheless, blob splitting may or may not be implemented in either of these cases.
In the case where blob splitting is not implemented, blob defining instructions 118 may direct processor 114 to identify and store local maxima within each blob. As each maximum is identified, changes to each blob in subsequent frames (e.g., local maxima found within each blob) can be tracked. For example, if a blob becomes two separate blobs when the fingertips are spread apart, the local maxima may be used to support separate classification of the two separate blobs. The location of the maximum may additionally or alternatively be used to correlate the touch event with the detection of a potential touch by other portions of the touch system 102. For example, firmware 112, memory 116, and/or another memory may include instructions related to detecting touch events without classification.
Blob merging may be implemented as an alternative or additional blob definition process. In blob merging, closely spaced blobs are combined together to form a single blob. Merging may be implemented to minimize the number of blobs that are later processed and classified. Thus, merging may be implemented in lookup table and other low-resource classification platforms. Merging may also be useful for combining a nearby finger touch with a corresponding palm. The distance at which blobs are merged together may be configurable. In one example, the distance is two image pixels, which in some touch systems may correspond to approximately 9 millimeters.
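A sketch of blob merging under the assumption that "closely spaced" means the closest pixels of two blobs lie within a two-pixel (Chebyshev) gap; the union-find bookkeeping is an implementation choice, not part of the described embodiments.

    def merge_close_blobs(blobs, max_gap=2):
        """Merge blobs whose closest pixels are within `max_gap` image pixels of each other."""
        ids = list(blobs)                    # blobs: blob id -> list of (row, col) pixels
        parent = {i: i for i in ids}

        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i

        def close(a, b):
            return any(abs(ra - rb) <= max_gap and abs(ca - cb) <= max_gap
                       for (ra, ca) in a for (rb, cb) in b)

        for i in range(len(ids)):
            for j in range(i + 1, len(ids)):
                if close(blobs[ids[i]], blobs[ids[j]]):
                    parent[find(ids[i])] = find(ids[j])

        merged = {}
        for i in ids:                        # pool pixels of blobs sharing a root
            merged.setdefault(find(i), []).extend(blobs[i])
        return merged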
In some cases, the blob defining instructions 118 also include instructions that direct the processor 114 to assign blobs to traces. The term "trace" is used to refer to the collection of blobs (and underlying frame data) generated by a particular object over a series of consecutive frames (e.g., while the object is in contact with the surface 104). Trace allocation and definition may be implemented after any blob splitting or merging. The definition of a trace allows movements and other changes that a blob undergoes across multiple frames to be tracked. By assigning blobs to particular traces, the blobs in the respective frames may be associated with each other as part of the same touch event. For example, a plurality of blobs in successive frames may be associated with each other as part of a touch event involving the user's fingertip moving across the surface 104 in a swipe gesture. Each new blob may be assigned to a new trace or to an active trace. An active trace is a trace that was defined in connection with a previous frame.
In some cases, new blobs may be assigned to active traces via bi-directional matching or other analysis of the blobs in the respective frames. Various matching techniques may be used to associate blobs detected in the current frame with blobs in the previous frame. The cost function employed in the two-way matching may be based on the distance between the location of each new blob and the expected location of the contact point of each trace, as estimated from the previous frames of the trace. The blobs present in each subsequent frame determine whether a trace is extended or removed.
The touch system 102 stores a list of active traces and their associated attributes. In the example of fig. 1, the list is stored in a database 124, which in turn may be stored in one of the memories 116. Other storage locations and/or data structures may be used. Traces that cannot be extended by a blob in the current frame are removed from the active trace list. For each blob that cannot be matched to or otherwise associated with any existing active trace, a new active trace is added to the list. The blob defining instructions 118 may be configured to associate blobs with traces via a trace identifier (such as a trace ID number).
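The trace bookkeeping might look like the following sketch, which substitutes a greedy nearest-centroid match for the two-way matching and cost function described above; the record layout ('centroid' keys, integer trace IDs, distance threshold) is assumed for illustration.

    def update_traces(active_traces, frame_blobs, max_match_dist=3.0):
        """Extend, remove, or create traces for the blobs of the current frame."""
        unmatched = list(frame_blobs)               # blob records for the new frame
        surviving = {}
        for tid, history in active_traces.items():  # trace id -> list of per-frame blob records
            px, py = history[-1]['centroid']        # expected position: last known centroid
            best, best_d = None, max_match_dist
            for blob in unmatched:
                bx, by = blob['centroid']
                d = ((bx - px) ** 2 + (by - py) ** 2) ** 0.5
                if d <= best_d:
                    best, best_d = blob, d
            if best is not None:
                unmatched.remove(best)
                surviving[tid] = history + [best]   # trace extended by the matched blob
            # traces with no matching blob are dropped from the active list
        next_id = max(active_traces, default=0) + 1
        for blob in unmatched:                      # unmatched blobs start new traces
            surviving[next_id] = [blob]
            next_id += 1
        return surviving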
Once a trace becomes mature, the blob definition instructions 118 may pass control to the blob feature calculation instructions 120. A trace is considered mature when it has been extended over at least a predetermined number of frames, for example, three frames.
Assigning blobs to traces may be useful when the features to be applied in machine learning classification include features of the traces. Trace features may be applied in addition to the features of individual blobs, as described below in connection with examples involving Random Decision Forest (RDF) classifiers. Traces may also be used in other cases, including those where trace features are not used in classification. In still other cases, blobs are not allocated to traces. For example, traces may not be used in lookup tables and other low resource classification techniques with small feature sets.
The processor 114 is configured to execute the feature calculation instructions 120 to calculate a plurality of feature sets for each touch event. Each feature set specifies attributes of a respective blob in each of a plurality of frames. These attributes may be defined as the blobs are defined or computed at a later point in time, such as when the trace is considered mature. For example, the blob feature calculation instructions 120 may be configured to cause the processor 114 to aggregate the feature set data prior to applying the feature set data to the machine learning classifier when determining the type of touch event. In other cases, the feature set data for a particular blob is applied to a machine learning classifier while the computation of further feature data sets is ongoing. For example, the application of the feature set data for the first frame may be applied to the classifier concurrently with the calculation of the feature set data for the second frame. Such concurrent processing may be useful in cases where the feature data is applied to the classifier separately for each frame.
The blob properties to be computed may vary. In one example involving a Random Decision Forest (RDF) classifier, any combination of the following blob attributes may be calculated for each blob: (i) area and weighted centroid (e.g., Σ_{(x, y) ∈ blob} (x, y) · intensity(x, y)); (ii) minimum, maximum, and average intensities; (iii) minimum, maximum, and average intensity gradient magnitudes; (iv) perimeter; (v) a roundness metric, such as the isoperimetric quotient (i.e., 4π · area / perimeter²); (vi) distance from the weighted centroid to the nearest image edge; (vii) average intensity of blob pixels on the edge of the image (i.e., pixels along the first or last row or column of the image); (viii) width at the edge of the image (i.e., the number of blob pixels on the edge of the image); (ix) appearance of a 5x5 image slice (from raw or thresholded frame data) centered at the weighted centroid; and (x) appearance of a 17x17 image slice (from raw or thresholded frame data) centered at the weighted centroid. The appearance of an image slice may be quantified via an analysis of the intensity values of the corresponding pixels in the slice. The size of the image slice features may vary. The image slice may also be centered at a location other than the weighted centroid, or otherwise positioned relative to the blob.
More, fewer, or alternative attributes may be calculated for the RDF classifier and/or other types of machine learning classifiers. For example, in one example involving a look-up table classifier, the following features are computed for each blob: the height of the blob (e.g., the maximum pixel intensity value in the blob); the area of the blob; and the texture of the blob. The texture feature may indicate the gradient of intensity values within the blob. For example, the texture may be calculated as the sum of the absolute values of the differences between each pixel in the blob and its 8 nearest neighbors, divided by 8 times the area of the blob. Blobs generated by the palm tend to be flatter, or have a smoother interior, and thus have less texture than blobs generated by multiple closely spaced finger touches. Computing fewer features may reduce the code space and processing time of the calculation instructions 120. Fewer features may also reduce the amount of memory involved in storing the results of the calculations for each blob.
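A sketch of the three low-resource features described above (height, area, texture), assuming pixel intensities are addressable as intensity[row][col]; the handling of blob pixels at the image border is an assumption, since the text does not address it.

    def blob_features(blob_pixels, intensity):
        """Compute the low-resource feature set: height, area, and texture of one blob."""
        area = len(blob_pixels)
        height = max(intensity[r][c] for (r, c) in blob_pixels)
        diff_sum = 0.0
        for (r, c) in blob_pixels:
            v = intensity[r][c]
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if dr == 0 and dc == 0:
                        continue
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < len(intensity) and 0 <= nc < len(intensity[0]):
                        diff_sum += abs(v - intensity[nr][nc])   # out-of-image neighbors skipped (assumed)
        texture = diff_sum / (8.0 * area)    # flatter (palm-like) blobs score lower
        return {'height': height, 'area': area, 'texture': texture}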
In some cases involving traces, the blob feature computation instructions 120 may also cause the processor 114 to compute one or more features of the trace of touch events. In addition to the feature set for each blob within a trace, a trace feature set for the trace may be computed, examples of which are listed above. The trace feature set may be calculated upon the trace reaching maturity.
The trace feature set may include accumulated features based on one or more of the blob attributes referenced above. In one example involving a Random Decision Forest (RDF) classifier, the minimum, maximum, and average of the following blob attributes are computed over a predetermined number of frames of the trace: (i) area; (ii) change in area between consecutive frames; (iii) change in position between consecutive frames; (iv) isoperimetric quotient; (v) intensity; (vi) intensity gradient magnitude; (vii) distance to the nearest image edge; (viii) average intensity of pixels on the edge of the image; and (ix) width at the edge of the image.
The trace feature set may additionally or alternatively include features relating to the attributes of the individual blobs within the first F frames of the trace. Thus, in some cases, computing blob attributes at each frame may be one way to collect data for a trace. For example, the following features may be calculated for each individual blob: (i) area; (ii) change in area between consecutive frames; (iii) change in position between consecutive frames; (iv) isoperimetric quotient or other roundness measure; (v) minimum, maximum, and average intensities; (vi) minimum, maximum, and average intensity gradient magnitudes; (vii) distance from the nearest image (or frame) edge; (viii) average intensity of pixels on the edge of the image; (ix) width at the edge of the image; (x) intensities of pixels in a 5x5 image slice (from raw or thresholded frame data) centered at the weighted centroid; (xi) intensities of pixels in a 17x17 image slice (from raw or thresholded frame data) centered at the weighted centroid; (xii) the intensity differences between pixel pairs from the 5x5 image slice; (xiii) the intensity differences between pixel pairs from the 17x17 image slice; and (xiv) the minimum difference in intensity between two concentric rings in the 17x17 patch, for example, min{I(p) : ‖p − c‖ = r_1} − max{I(p) : ‖p − c‖ = r_2}, where r_1 < r_2, I is the image intensity matrix, and c is the weighted centroid. More, fewer, or alternative features may be computed in connection with the trace.
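The per-trace accumulation of minimum, maximum, and average values over the first frames of a trace might be sketched as follows; the three-frame default mirrors the maturity example above, and the dict-based feature records are an assumption.

    def trace_features(per_frame_features, num_frames=3):
        """Aggregate per-blob attributes (dicts, one per frame) over the first frames of a trace."""
        window = per_frame_features[:num_frames]
        aggregated = {}
        for key in window[0]:
            values = [f[key] for f in window]
            aggregated[key + '_min'] = min(values)
            aggregated[key + '_max'] = max(values)
            aggregated[key + '_avg'] = sum(values) / len(values)
        return aggregated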
The processor 114 is configured to execute the machine learning classification instructions 122 to determine the type or nature of the touch event. The type is determined via machine learning classification. The classification is configured to provide a plurality of non-bimodal classification scores based on the plurality of feature sets of the plurality of frames. Each non-bimodal classification score indicates an uncertainty or confidence level in the machine learning classification. For example, each classification score may be a probability that the touch event has a particular type. In other examples, each classification score is an individual rating along a range or scale that indicates whether the touch event has a particular type. In some cases, these individual ratings may be aggregated to determine a cumulative rating. The cumulative rating or other classification score may be compared to a threshold, as described below, to determine the touch event type.
Once the touch event type is determined, the classification instructions 122 may also be configured to cause the processor 114 to provide data indicative of the type to the device 100. In the example of fig. 1, the processor 114 may provide the type data to the processor 106. The type data may be stored to or provided to any component of the host device. The type data may be provided along with other data indicative of the touch event. In some cases, the other data includes the trace ID and coordinates of the touch event in the current (e.g., last) frame. In other cases, other data may be provided with the touch type. For example, the touch type may be provided along with the touch event coordinates.
The machine learning classification instructions 122 may be invoked once or iteratively according to the classification. The manner in which the instructions 122 are invoked may depend on when the feature set data is computed. For touch events whose trace data is computed, the processor 114 may apply the entire feature set data to the machine learning classifier collectively in a single call for the trace. In some other cases, the classification instructions 122 are implemented iteratively as feature set data is calculated for each frame. For example, the feature set data for each blob is applied separately to a machine learning classifier. Further feature set data may then be calculated while the previously calculated feature set data is applied to the machine learning classifier.
The machine learning classification instructions 122 may cause the processor 114 to access one or more classifier data structures 126. The classifier data structure 126 may be stored in the memory 116 and/or other memory. The format and other characteristics of the classifier data structure 126 may vary depending on the type of machine learning classifier to be used. For ease of illustration and description, two examples of classification data structures are shown in fig. 1, although the device 100 is typically configured with only a single classifier. These two exemplary data structures are a Random Decision Forest (RDF) data structure 128 and a look-up table data structure 130. Although feature set data is typically applied to only one data structure, more than one classifier may be used in some cases. For example, the outputs of multiple classifiers may be compared or processed to determine a final classification.
RDF classification is one of a number of discriminative classification techniques that may be used by the classification instructions 122. A discriminative classifier generates or returns a plurality of non-bimodal classification scores in the form of a discrete probability distribution over a set of classifications for a given input array. In this example, the classifications correspond to the possible types of touch events, such as intentional fingertip touches, capacitive stylus or other pen touches, and unintentional touches (such as a palm). For example, an exemplary output of {0.45, 0.32, 0.23} indicates a 45% chance that the touch event is an intentional fingertip touch, a 32% chance that the touch event is a pen touch, and a 23% chance that the touch event is an unintentional touch event (e.g., a palm touch). Each probability score thus represents the probability that the touch event is of a particular type. In other cases, more, fewer, or alternative classifications may be used. For example, in the case of distinguishing between intended and unintended touch events, only two classifications may be used.
The discriminative classifier may be configured to accept as input a plurality of feature data sets. The input to the discriminative classifier may thus be blob and trace feature data computed for the touch trace (e.g., the first few frames or another predetermined number of frames) according to the computation instructions 120. In the trace example, the feature set for each trace is applied to the classifier when the trace reaches maturity. The classifier thus provides a probability distribution output for each maturity trace.
The RDF data structure 128 includes a set of random decision trees RDT1, RDT2, RDT3, …, RDTn, where n may be any number. In one example, the data structure includes 20 trees, each tree having a maximum decision height of 11. In a random decision forest process, the same input array is applied to each tree. The output of the RDF for a given input is calculated by averaging the outputs of each of the trees for the given input. The number and height of these trees may vary.
Each RDT is a binary decision tree in which each internal (i.e., non-leaf) node has an associated "split" binary function. When the split binary function is applied to an input, the function returns a decision as to whether the input is to be routed to the right child node or the left child node in the next level of the tree. The classification process in an RDT for a given input X begins by processing X at the root node of the tree by applying the root's associated splitting function, and continues recursively at the child node corresponding to the result of the splitting function. Finally, the process reaches a leaf node, where a discrete probability distribution over the classification set C is returned as the output of the process.
The splitting function within these trees can be implemented as an inequality comparison between the value of a feature and a threshold, i.e., f < τ, where f is the value of a particular feature and τ is the threshold. The specific features and thresholds to be used at each node are learned during the training process.
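For illustration only, the following Python sketch shows how a single random decision tree of this kind might route a feature vector to a leaf distribution. The node layout and field names are assumptions made for this sketch, not the implementation described above.

```python
# Minimal sketch of random decision tree traversal (assumed node layout).
# Each internal node stores a feature index and a threshold; each leaf node
# stores a discrete probability distribution over the classification set
# (e.g., fingertip, pen, palm).

class Node:
    def __init__(self, feature=None, threshold=None,
                 left=None, right=None, distribution=None):
        self.feature = feature            # index into the feature vector
        self.threshold = threshold        # tau in the split function f < tau
        self.left = left                  # child taken when the test is true
        self.right = right                # child taken when the test is false
        self.distribution = distribution  # set only on leaf nodes

def classify_with_tree(root, features):
    """Route a feature vector down the tree and return the leaf distribution."""
    node = root
    while node.distribution is None:      # stop once a leaf is reached
        if features[node.feature] < node.threshold:
            node = node.left
        else:
            node = node.right
    # e.g., {"finger": 0.45, "pen": 0.32, "palm": 0.23}
    return node.distribution
```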
During training, the best feature at each node is selected from a sample of the space of each feature type. For each feature space, a number of samples equal to the square root of the space's dimension is drawn.
In the example of FIG. 1, only a single RDF structure 128 is shown and is used to provide all three classification scores (e.g., fingertip, pen, palm). In some cases, however, multiple RDF classifiers may be used to provide the classification scores. For example, two RDF classifiers, each with a corresponding set of trees, may be used to provide the three classification scores. In one example, one RDF classifier distinguishes between intended and unintended touches, and another RDF classifier is then applied to the intended touches to distinguish between pen and fingertip touches.
In some cases, the output of the RDF classification may be adjusted. For example, the adjustments may address frames containing multiple recently matured traces. After the touch types of all recently matured traces in a frame have been classified, the classification instructions 122 may cause the processor 114 to adjust the types of traces that lie near a trace classified as an unintentional (e.g., palm) touch event. If the current location of the blob of a first trace falls within a threshold distance of the blob of a second trace classified as an unintentional touch, the first trace is also classified as an unintentional touch. Such adjustments may be useful because these adjacent blobs are typically generated by inadvertent touches from the knuckles or fingers of a hand whose palm is resting on the surface 104.
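A minimal sketch of this proximity rule, assuming each trace carries its latest blob centroid and an already-assigned touch type (both hypothetical attribute names), might look as follows; the distance threshold is a placeholder value.

```python
import math

def adjust_near_palm(traces, distance_threshold=40.0):
    """Reclassify traces whose current blob lies near a blob classified as palm."""
    palms = [t for t in traces if t.touch_type == "palm"]
    for trace in traces:
        if trace.touch_type == "palm":
            continue
        for palm in palms:
            dx = trace.centroid[0] - palm.centroid[0]
            dy = trace.centroid[1] - palm.centroid[1]
            if math.hypot(dx, dy) <= distance_threshold:
                # Treat the nearby blob as part of the unintentional touch.
                trace.touch_type = "palm"
                break
```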
Additional or alternative machine learning classifiers may be used by the classification instructions 122. For example, other types of discriminative classification techniques may be used. In some examples, the machine learning classifier is a look-up table classifier. Look-up table based classification may be useful in connection with devices having limited processing and/or memory resources, such as when the processor 114 is a microcontroller. The use of a look-up table can greatly reduce the memory footprint and processing time of the classification.
In the example shown in FIG. 1, the lookup table data structure 130 includes a pair of lookup tables that distinguish between intended and unintended touch events. The feature set data is applied to each look-up table. Each table then provides a corresponding individual non-bimodal classification score or rating, as described below. The first lookup table may be configured to provide a first rating as to whether the touch event is an intended touch. The second lookup table may be configured to provide a second rating as to whether the touch event is an unintentional touch. Each of these individual ratings or scores for a respective frame may then be combined to generate a frame classification rating score for the respective frame. Additional look-up tables may be provided, for example, to further distinguish touch event types. In other cases, the data structure 130 includes only a single lookup table.
In this look-up table based classification example, the feature set data is applied to the classifier on a frame-by-frame basis. For example, the feature set data for the respective frame is applied to each lookup table in the data structure 130. Each table then provides a corresponding individual non-bimodal classification score or rating for that frame, as described below. The frame classification rating scores for the frames during which the touch event exists are then aggregated (e.g., summed) to determine a cumulative multi-frame classification score for the touch event.
The manner in which the classification ratings or scores are combined within a frame and then aggregated across frames is described below in connection with an example in which the individual ratings are combined by subtracting one rating from the other. The individual ratings may be combined in a variety of other ways. For example, the individual ratings or scores may be configured such that the combination involves an addition operation, an averaging operation, and/or other operations. Aggregation of the classification scores across frames may also be accomplished in ways other than the summation operation described above. The classification instructions 122 may then cause the processor 114 to determine whether the accumulated multi-frame classification score crosses a threshold. In some examples, multiple classification thresholds are provided, one for each possible touch event type. If the threshold(s) are not crossed, the uncertainty level may be considered too high to reliably classify the touch event. At that point, the blob defining instructions 118 and the computing instructions 120 may again be invoked to provide feature set data for the next frame. The new feature set data may then be applied to the lookup table(s) for further scoring, aggregation, and thresholding.
The look-up table classifier may be configured to use the features of the blob as an index into the look-up table(s). The features calculated for each blob may be height, size, and texture, as described above. More, fewer, or alternative features may be included. In one example, each entry in the table is a two-bit rating indicating the likelihood that a blob has a particular touch event type (i.e., the table's associated classification). A rating of 3 indicates that the blob is very likely a member of the classification. A rating of 2 indicates that the blob is likely, but not highly likely, to be a member of the classification. A rating of 1 indicates that the blob is only somewhat likely to be a member of the classification. A rating of 0 indicates that the blob is highly unlikely to be a member of the classification.
In this example, the individual blob rating scores are obtained from two classification tables, one for intended touches (e.g., fingers or pens) and the other for unintended touches (e.g., palms). Each blob is looked up in both tables by applying its feature set data to each table. A frame-specific blob classification rating (or "blob classification rating") is then calculated for the blob by combining the individual ratings from the respective tables, subtracting one from the other as follows:
blob classification rating = finger table rating − palm table rating
As a result, the blob classification rating ranges from-3 to +3, where positive values indicate that the blob is more likely to be an intentional touch, and where negative values indicate that the blob is more likely to be an unintentional touch. The absolute value of the rating is an indication of the certainty of the classification or the level of uncertainty in the rating. The blob classification rating is calculated for each blob in the touch image.
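As a rough illustration, the per-frame combination might be coded as below. The three-level nested table indexed by height, area, and texture bucket indices is an assumed layout for this sketch, not the patent's required representation.

```python
def blob_classification_rating(finger_table, palm_table,
                               height_idx, area_idx, texture_idx):
    """Combine the two 2-bit table ratings (0..3) into a single -3..+3 rating."""
    finger_rating = finger_table[height_idx][area_idx][texture_idx]
    palm_rating = palm_table[height_idx][area_idx][texture_idx]
    # Positive results lean toward an intentional (finger/pen) touch,
    # negative results toward an unintentional (palm) touch.
    return finger_rating - palm_rating
```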
The blob classification rating for a particular touch event may then be accumulated or aggregated across multiple frames to generate an aggregated blob rating for that touch event. For example, if the blob has blob classification ratings of +2, +1, and +2 in the first three frames, the cumulative blob rating is +5, the sum of the three blob classification ratings.
The cumulative blob rating is then used to determine the touch event type. The cumulative rating may be compared to one or more thresholds. In one example, two thresholds are used to support differentiation into one of three possible classifications. Negative touch ratings less than or equal to the palm threshold are classified as palm touch events. A positive touch rating greater than or equal to the finger threshold is classified as a finger/pen touch event. All other touch ratings are classified as unknown. The palm and finger thresholds are configurable, but are illustratively set to-9 for the palm threshold and +6 for the finger threshold.
The cumulative blob rating may also be used to determine whether processing of further frame data is warranted. If the cumulative rating falls within the unknown range, further frame data may be processed so that another blob classification rating is added to the aggregate. For example, the blob in the first image (or frame) of a palm touching down is assigned a blob classification rating of +1. Since this is the first image of the palm, the cumulative rating is also +1. In the next image, the blob classification rating is again +1, so the cumulative rating is now +2. In the next image, most of the palm has touched down, the blob classification rating becomes -2, and the cumulative rating is now 0. The next three images are each assigned a blob classification rating of -3, yielding a cumulative rating of -9, at which point the touch is classified as a palm touch event. In all of the previous frames, the touch would have been classified as unknown. Even though the first two touch images look slightly more like a finger than a palm, the classifier still reaches the correct final classification rather than a false positive.
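The accumulation and thresholding in the example above can be reproduced with a short loop. The sketch below uses the illustrative palm and finger thresholds of -9 and +6 mentioned earlier; it is a simplified illustration rather than the exact implementation.

```python
PALM_THRESHOLD = -9
FINGER_THRESHOLD = +6

def classify_cumulative(per_frame_ratings):
    """Sum per-frame blob ratings until a classification threshold is crossed."""
    cumulative = 0
    for rating in per_frame_ratings:
        cumulative += rating
        if cumulative <= PALM_THRESHOLD:
            return "palm", cumulative
        if cumulative >= FINGER_THRESHOLD:
            return "finger_or_pen", cumulative
    return "unknown", cumulative

# The palm touchdown example from the text: early frames look finger-like,
# later frames pull the cumulative rating down to the palm threshold.
print(classify_cumulative([+1, +1, -2, -3, -3, -3]))   # -> ('palm', -9)
```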
The classification instructions 122 may be configured to cause the processor 114 to associate blobs with touch events. In some cases, a touch event may be initially identified by the firmware 112 or other components of the touch system 102. For each potential touch event, a bounding box for the touch event may be defined and compared with all maxima found in the current touch image (or frame data) to identify which blob is associated with the touch event. Most touch images will have only a single blob within the bounding box, which is thus associated with the touch event. In the unlikely event that multiple blobs have maxima within the bounding box, the cumulative rating of the touch event may be calculated as the average of the cumulative ratings of the overlapping blobs.
The classification instructions 122 may also incorporate a number of adjustments into the combined rating to address one or more special-case scenarios. Each adjustment is applied based on whether a rule or condition is satisfied or present. The adjustments may supplement the classification capabilities of the lookup tables without adding significant complexity to the process.
Edge effect adjustment. A palm hanging off the edge of the touch screen often tends to look like an intentional touch because of its small apparent size. To improve performance in these cases, the number of edge pixels in each blob may be tracked. The classification instructions 122 may determine whether the number of edge pixels exceeds a threshold. If so, the blob classification rating score is adjusted. In one example, the difference between the number of edge pixels and the threshold is subtracted from the blob classification rating score. The adjustment biases the blob's rating toward the palm rating threshold. Blobs caused by a single intended touch tend to have few edge pixels and are therefore unaffected by the adjustment rule. Blobs caused by a palm at the edge of the surface 104 are more likely to have a large number of edge pixels and are therefore biased toward the palm classification to minimize false positives. While blobs caused by multiple closely spaced fingers at an edge may also be pushed toward the palm rating and cause false negatives under this rule, the rate of such false negatives may be acceptably low insofar as poses near the edge typically involve a single finger.
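A hedged sketch of the edge-effect rule, with a placeholder edge-pixel threshold, might be:

```python
EDGE_PIXEL_THRESHOLD = 10   # hypothetical value

def apply_edge_adjustment(blob_rating, edge_pixel_count):
    """Bias blobs with many edge pixels toward the palm (negative) direction."""
    if edge_pixel_count > EDGE_PIXEL_THRESHOLD:
        blob_rating -= (edge_pixel_count - EDGE_PIXEL_THRESHOLD)
    return blob_rating
```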
Palm proximity adjustment. A user writing with a stylus typically rests their palm on the screen while grasping the stylus with two fingers and a thumb. The other two fingers of the hand approach the surface 104 and often contact it during writing. This situation forms blobs that can appear to be intentional touches. To minimize false positives in these cases, the blob classification rating score may be adjusted by subtracting a quotient calculated by dividing the blob's area by a threshold area. The adjustment may be made when the area of the blob is above the threshold and the blob is near another blob that looks like a palm (e.g., a blob with a negative rating). This adjustment tends to bias finger touches very close to the palm toward the palm rating to minimize false positives. Even when stylus touches are also close to the palm, the adjustment typically does not affect their classification, because stylus touches have an area less than the threshold.
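A simplified sketch of the palm-proximity rule, with an assumed area threshold, is shown below:

```python
AREA_THRESHOLD = 30.0   # hypothetical blob area threshold

def apply_palm_proximity_adjustment(blob_rating, blob_area, near_palm_like_blob):
    """Bias large blobs that sit near a palm-like blob toward the palm rating."""
    if blob_area > AREA_THRESHOLD and near_palm_like_blob:
        # Subtract the quotient described above; larger blobs are pushed
        # further toward the (negative) palm rating.
        blob_rating -= blob_area / AREA_THRESHOLD
    return blob_rating
```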
Anti-touch adjustment. When the user does not have a good high-frequency connection to the touch system ground (i.e., the user is hovering), large touch areas may tend to disappear or even turn into anti-touches (i.e., negative areas in the touch image where a normal touch would be positive). This situation can lead to false positives, as the normally large area of the palm is significantly reduced in size and its separation from other inadvertent touches (such as fingers not holding the stylus) is increased. To minimize false positives in these cases, two adjustments may be incorporated into the classification rating process. The first rule is that once a touch event's cumulative rating crosses the palm threshold, any blob in the next frame that overlaps that blob is also assigned a palm rating. This adjustment may improve performance in the hover situation, because portions of the palm may tend to disappear over time. Although the palm tends to look normal (and may be assigned a palm rating) during an early touchdown, as more of the palm contacts the surface portions of it tend to disappear, while the remaining portions may appear to be an intentional touch. Since the remaining portions overlap the palm seen in earlier frames, however, they are still assigned a palm rating.
A second rule that may be used to improve performance in the hover case involves tracking anti-blobs in addition to normal blobs. An anti-blob may be detected as a connected pixel component in the touch image having pixel values less than or equal to a negative threshold. As anti-blobs are also defined in the frame data, the blob classification rating score may be adjusted by subtracting a value from it when the corresponding blob overlaps an anti-blob. The adjustment may be limited to cases in which the anti-blob is large (e.g., its size exceeds a threshold). Normal blobs that overlap large anti-blobs are then biased toward the palm rating by subtracting a value from their blob classification rating. The value subtracted may be fixed or may grow with the blob area. In connection with situations involving multiple closely spaced fingers aligned on a diagonal, setting the threshold for large anti-blobs to a relatively large size (e.g., 50 pixels) may help avoid erroneously applying this adjustment.
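The anti-blob rule might be sketched as follows, assuming blobs and anti-blobs are represented as sets of pixel coordinates and using a hypothetical fixed penalty; both the representation and the penalty value are assumptions of this sketch.

```python
LARGE_ANTI_BLOB_SIZE = 50   # pixels, per the example above
ANTI_BLOB_PENALTY = 2       # hypothetical fixed value to subtract

def apply_anti_blob_adjustment(blob_rating, blob_pixels, anti_blobs):
    """Bias a normal blob that overlaps a large anti-blob toward the palm rating.

    blob_pixels is a set of (row, col) coordinates for the blob; anti_blobs is
    a list of such sets, one per detected anti-blob.
    """
    for anti_pixels in anti_blobs:
        if len(anti_pixels) >= LARGE_ANTI_BLOB_SIZE and blob_pixels & anti_pixels:
            blob_rating -= ANTI_BLOB_PENALTY
            break
    return blob_rating
```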
The machine learning classifier may be trained via an offline data collection phase during which multiple image sequences involving stylus, fingertip, and unintentional non-fingertip touch events are recorded. The sequences may cover a wide variety of possible device usage scenarios and/or a wide variety of users, with differences in pressure, posture, orientation, and other touch event characteristics. Sequences for each touch event type (e.g., fingertip touches, stylus touches, or unintentional non-fingertip touches) are collected separately, thereby avoiding manual labeling of touch events. The feature data set computed for each trace in these sequences becomes a training example.
In the RDF example, the RDF classifier may be trained by training each tree in the data structure independently. The tree is trained one node at a time, starting from the root node of the tree. Each node is trained using an input training set. Initially, the entire training set is the input set used to train the root node.
Given an input training set T for a node n, the node is trained by sampling the space of each split function and its parameters a certain number of times (e.g., a number corresponding to the square root of the size of that space). For each sampled (parameterized) split function, a number of possible thresholds (e.g., 100) is also sampled.
For a given split combination Σ = (f, θ, τ) of a split function type f, a split function parameterization θ, and a threshold τ, each input x ∈ T is routed according to whether the value of f_θ(x) is below the threshold τ or greater than or equal to it.
The training process identifies, over all sampled split combinations, the split combination that achieves the greatest information gain on the split of the elements in the node's input set T. If the gain is too small or if the node n is at the maximum preselected height (e.g., 11), the node is made a leaf node, and the probability of each classification (e.g., fingertip touch, pen or stylus, or palm) associated with the leaf is set to the ratio of the number of samples of that classification to the total number of samples in the node's input set T. Otherwise, if the gain is sufficiently high and the height of node n is less than the maximum preselected height, the split combination Σ achieving the maximum gain is associated with node n, the node's input set T is split into two subsets T_L and T_R using Σ, and node n is assigned two child nodes, a left child and a right child, which are recursively trained using the input sets T_L and T_R, respectively.
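For illustration, the split selection at a single node might be sketched as below, using Shannon entropy to compute the information gain. The candidate counts stand in for the square-root sampling described above, and the simple axis-aligned split (feature value versus threshold) is an assumption of this sketch.

```python
import math
import random
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of classification labels."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def best_split(samples, num_feature_candidates, num_threshold_candidates=100):
    """Pick the (feature, threshold) split with the largest information gain.

    samples is a list of (feature_vector, label) pairs.
    """
    labels = [label for _, label in samples]
    base_entropy = entropy(labels)
    n_features = len(samples[0][0])
    best = (None, None, 0.0)   # (feature index, threshold, gain)

    for feature in random.sample(range(n_features),
                                 min(num_feature_candidates, n_features)):
        values = [x[feature] for x, _ in samples]
        for _ in range(num_threshold_candidates):
            tau = random.uniform(min(values), max(values))
            left = [label for x, label in samples if x[feature] < tau]
            right = [label for x, label in samples if x[feature] >= tau]
            if not left or not right:
                continue
            gain = base_entropy \
                - (len(left) / len(samples)) * entropy(left) \
                - (len(right) / len(samples)) * entropy(right)
            if gain > best[2]:
                best = (feature, tau, gain)
    return best
```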
Similar methods for collecting training samples may be used in training the look-up table classifier. For example, the sequences of touch images are generated in a self-labeling manner: each sequence contains touches of only one classification (e.g., all fingers or all palms). Blobs and the corresponding feature data are then extracted from the touch images and passed to the training process for use as training samples.
In one lookup table training example, training samples are grouped into buckets (e.g., 8 buckets) according to the height feature. For example, in a touch system where the maximum height of any pixel in the touch image is approximately 1600, samples having a height between 0 and 199 are assigned to bucket 0, samples having a height between 200 and 399 are assigned to bucket 1, and so on. To improve the generalization of the classifiers, a slight "smearing" of the samples may also be applied. For example, a sample with a height of 210 may be assigned to both bucket 0 and bucket 1. The amount of smearing is configurable and may vary, but in one example samples within 10% of a bucket boundary are assigned to both buckets. Minimum and maximum values of the area and texture features are then determined for all samples within each height bucket. Smearing is also applied at this point, so that the minimum and maximum values are adjusted downward/upward by a small amount (e.g., 10%). The samples are then split into multiple sets (e.g., 16) according to the area feature, evenly distributed between the smeared minimum and maximum of the area feature determined above. The samples within these area buckets are then further split into multiple buckets (e.g., 16) according to the texture feature, evenly distributed and smeared as above. As samples are split by area and texture, they may again be smeared by approximately 10%. In examples where samples are smeared, non-smeared samples may be given a higher priority than smeared samples. The higher priority may be provided by treating each non-smeared sample as multiple samples (e.g., 9) while treating each smeared sample as only a single sample.
The number of samples in each of the final buckets is then counted. These counts are compared against several thresholds. If the number of samples is greater than or equal to the "very likely" threshold, the classifier table value for that bucket is set to the highest classification score (e.g., 3). If the number of samples is greater than or equal to the "likely" threshold, the table value is set to the next highest score (e.g., 2). If the number of samples is greater than or equal to the "possible" threshold, the table value is set to the next score (e.g., 1). In the 2-bit scoring example, the table value is otherwise set to 0. These thresholds are configurable and may vary with the number of training samples used, but in one example thresholds of 90, 9, and 1 may be used for the very likely, likely, and possible thresholds, respectively. The 2-bit scores are then stored in the table according to the grouping, set, and bucket partitioning. The table data thus also records the minimum values of the area and texture features and the bucket sizes. The minimum value and bucket size of the height feature may be fixed at 0 and 200, respectively.
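A rough sketch of the height bucketing with smearing and the count-to-rating conversion, using the example values above, might be as follows; the constants are the illustrative figures from the text, not required values.

```python
HEIGHT_BUCKET_SIZE = 200
NUM_HEIGHT_BUCKETS = 8
SMEAR_FRACTION = 0.10
VERY_LIKELY, LIKELY, POSSIBLE = 90, 9, 1   # example count thresholds

def height_buckets(height):
    """Return the height bucket(s) for a sample, smearing near bucket boundaries."""
    bucket = min(int(height // HEIGHT_BUCKET_SIZE), NUM_HEIGHT_BUCKETS - 1)
    buckets = {bucket}
    offset = height - bucket * HEIGHT_BUCKET_SIZE
    margin = SMEAR_FRACTION * HEIGHT_BUCKET_SIZE
    if offset < margin and bucket > 0:
        buckets.add(bucket - 1)           # e.g., height 210 lands in buckets 0 and 1
    if offset > HEIGHT_BUCKET_SIZE - margin and bucket < NUM_HEIGHT_BUCKETS - 1:
        buckets.add(bucket + 1)
    return buckets

def table_value(sample_count):
    """Convert a bucket's training-sample count into the 2-bit table rating."""
    if sample_count >= VERY_LIKELY:
        return 3
    if sample_count >= LIKELY:
        return 2
    if sample_count >= POSSIBLE:
        return 1
    return 0
```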
During classification, any value outside the table boundaries is given the lowest rating (e.g., 0) by default, with the following exceptions. Blobs having a height greater than or equal to 1600 are grouped into the 1400-1599 bucket. Blobs having an area larger than any palm area seen during training for the given height bucket are given a palm rating of 1. These exceptions help the classifier generalize correctly to very large blobs that were never seen during training.
FIG. 2 depicts an exemplary method 200 for touch classification. The method is computer-implemented. For example, one or more computers of the touch-sensitive device 100 shown in FIG. 1 and/or another touch-sensitive device may be configured to implement the method or a portion thereof. The implementation of each action may be guided by respective computer readable instructions executed by a processor of the touch system 102, the apparatus 100, and/or another processor or processing system. More, fewer, or alternative acts may be included in the method. For example, method 200 may not include acts that involve output functionality.
Method 200 may begin with one or more actions directed to capturing frame data. The manner in which the frame data is captured may vary. The frame data may be captured by different devices or processors and/or in connection with different methods implemented by the same processor or device implementing method 200.
In the embodiment of fig. 2, the method 200 begins in act 202, where frame data is obtained in act 202. The frame data represents a plurality of frames (or touch sensor images) captured by the touch sensitive device. The frame data may be received directly from the hardware or other component(s) of the touch system 102, such as firmware 112 (fig. 1), for real-time processing. Alternatively or additionally, the frame data may be previously captured and stored frame data. The frame data may thus be obtained by accessing a memory, such as one of the memories described in connection with fig. 1 (i.e., memories 108, 116) and/or another memory.
In act 204, the frame data is processed to define a respective blob in each of the plurality of frames. The blob indicates a touch event. These blobs may be tracked or associated with each other across multiple frames as described herein to distinguish between multiple touch events occurring in the same frame.
The analysis may include upsampling the frame data in act 206, thresholding the frame data in act 208, blob splitting in act 210, and/or blob merging in act 212. As described above, each of these processing actions may be implemented. Method 200 may include any one or more of these processing actions. For example, in some RDF examples, the blob splitting of act 210 is implemented, but the blob merging of act 212 is not implemented. In contrast, in some look-up table examples, the blob merging of act 212 is implemented, but the blob splitting of act 210 is not implemented.
In some cases (e.g., some RDF examples), a trace of the blobs across multiple frames is defined or otherwise updated for the touch event at act 214. Trace definition may occur after blob definition. Blob data extracted from the current frame is processed to update the traces identified in previous frames. Act 214 may thus define a new trace, extend an active trace, and/or terminate an active trace. An active trace is either extended by an additional frame if a blob is present in the current frame or terminated due to the lack of a blob, as described above. Data indicating the active traces is stored in data store 216. Act 214 may include accessing the data store 216 as shown and processing the active traces.
In other cases (e.g., some look-up table examples), frame data for a particular frame is processed separately from frame data for subsequent frames. Act 204 may involve analysis of frame data for a single frame. That frame data is then ready for further processing (e.g., feature set calculation and application to a machine learning classifier) separate from the processing of the frame data for subsequent frames.
In act 218, a plurality of feature sets is calculated for the touch event. Each feature set specifies attributes of a respective blob in each of a plurality of frames. The features and attributes calculated at act 218 may vary, as described above. The number of features or attributes may also vary with the complexity of the classifier. As shown in FIG. 2, in some cases, the feature set data may be stored in a data store 216 in which the activity trace data is stored.
In some cases (e.g., in the RDF example), the feature sets may be aggregated over multiple frames at act 220, and trace features may be computed at act 222. For example, acts 220 and 222 may be implemented where frame data is available for the multiple frames in which traces and/or touch events are defined. When determining the type of the touch event, the aggregation occurs prior to applying the plurality of features to the machine learning classifier.
The feature set data is applied in act 224 to determine the type of touch event via machine learning classification. The classification is configured to provide a plurality of non-bimodal classification scores based on a plurality of feature sets of a plurality of frames, as described above. In machine learning classification, each non-bimodal classification score indicates an uncertainty or confidence level. In some cases, the data store 216 may be accessed as shown to support this classification.
In cases involving trace definition and extension, the timing of the classification of act 224 may depend on when a trace matures (e.g., after the trace has been extended three times). In these cases, when a just-extended active trace reaches maturity, its touch type is determined in act 224. Act 224 may thus be implemented concurrently with, and independently of, the frame data processing and feature set computation of acts 204 and 218.
Processing of a mature trace that has already been classified but remains active in the current frame (e.g., because it was extended by a blob in the current frame) may or may not depend on the previous classification. In some cases, act 224 may be configured such that a trace that matured in a previous frame automatically passes its touch type to the blob by which the trace is extended. In other cases, the classification of act 224 is repeated given the newly aggregated feature set data (i.e., including the data contributed by the current frame).
Machine learning classification may include applying the one or more feature sets to a machine learning classifier at act 226. In some cases (e.g., some RDF examples), feature set data for multiple frames is applied to the classifier in common. In such cases, trace feature data may also be applied. In other cases (e.g., some look-up table examples), the feature set data is applied to the classifier on a frame-by-frame basis.
After the feature set data is applied to the classifier, one or more thresholds may be applied to the classification score at act 228. The threshold(s) may be applied to determine the touch event type and/or to determine whether processing of further frame data is warranted (e.g., whether the uncertainty level is too high for the event type to be known).
Method 200 may include one or more processes involving providing an output. In the example of FIG. 2, output data indicative of the touch event type and location coordinates is provided at act 230. The coordinates may indicate a location of a last blob in a plurality of frames associated with the touch event. Additional or alternative output data may be provided. For example, a trace ID may be provided.
In some cases, one or more of the method acts shown in fig. 2 may be iterated over multiple frames or otherwise involve iteration. For example, in the look-up table example, the feature set data is applied to the look-up table classifier iteratively for each frame. Further details regarding such iterations are described in connection with the example of fig. 4.
The order of the acts of the method may differ from the examples shown. For example, in some cases, these actions are implemented in a pipelined fashion, e.g., performed in conjunction with each arriving frame. These actions may be implemented in parallel or concurrently while processing frame data, blobs and/or traces of different frames. For example, the feature set calculation of act 218 may be implemented concurrently with some of the machine learning classification process of act 224.
FIG. 3 illustrates further details regarding the touch event determination act 224 (FIG. 2) in connection with an example involving a Random Decision Forest (RDF) classifier. In this example, the touch classification process begins at act 302 by obtaining an aggregated feature set for a plurality of frames. Once the blobs for the touch event have been tracked for a sufficient number of frames (i.e., the trace has been extended to maturity), the aggregated feature set may be provided, as described above. The aggregated feature set may also include trace feature data, as described above. The aggregated feature set is then applied to each random decision tree in the RDF classifier at act 304. The outputs of the trees are averaged in act 306 to generate a plurality of non-bimodal classification scores. In this case, each non-bimodal classification score represents a probability that the touch event has the corresponding type. Thresholds may then be applied to the probability scores to determine the touch event type. One or more thresholds may be specified for each touch event type. For example, if the probability scores for the finger, stylus, and palm classifications are 0.3 or less, 0.6 or more, and 0.2 or less, respectively, the touch event may be classified as an intentional stylus touch event. An output may then be provided that indicates the touch event type, the trace ID, and the location of the trace (i.e., blob) in the current frame.
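For illustration, the forest averaging and per-class thresholding of acts 304-306 might be sketched as below, reusing the classify_with_tree helper from the earlier sketch. The class labels and threshold values mirror the stylus example above and are illustrative only.

```python
def classify_touch_event(forest, aggregated_features,
                         finger_max=0.3, stylus_min=0.6, palm_max=0.2):
    """Average per-tree distributions and apply per-class thresholds."""
    distributions = [classify_with_tree(tree, aggregated_features)
                     for tree in forest]
    averaged = {
        cls: sum(d[cls] for d in distributions) / len(distributions)
        for cls in distributions[0]
    }
    # Classify as a stylus/pen touch only when all three thresholds are met.
    if (averaged["finger"] <= finger_max and
            averaged["pen"] >= stylus_min and
            averaged["palm"] <= palm_max):
        return "pen", averaged
    return "unknown", averaged
```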
FIG. 4 illustrates further details regarding the touch event determination act 224 (FIG. 2) in connection with an example involving a look-up table (LUT) classifier. In this example, the touch classification process begins at act 402 by obtaining a feature set for the current frame. The feature set is then applied to a plurality of look-up tables in act 404. Each look-up table determines an individual (i.e., table-specific) non-bimodal classification score. For example, the plurality of lookup tables may include a first lookup table configured to provide a first rating (e.g., 0 to 3) that the touch event is an intended touch and a second lookup table configured to provide a second rating (e.g., 0 to 3) that the touch event is an unintended touch. The individual classification scores are then combined in act 406 to calculate a frame-specific classification score for the blob (i.e., a blob classification score). For example, the second rating may be subtracted from the first rating so that the blob classification score falls within a range from -3 to +3, with negative values indicating a more likely unintentional touch and positive values indicating a more likely intentional touch.
Frame-specific blob classification scores for the touch events are aggregated at act 408. The aggregation may include adding the current blob classification score to any previously computed blob classification scores for earlier frames. Cumulative (i.e., multi-frame) classification scores may be computed therefrom.
Decision block 410 then determines whether a threshold for touch type classification is met. A respective threshold value may be provided for each touch event type. For example, if the cumulative score is greater than +7, the touch event is classified as an intentional touch. If the cumulative score is less than-6, the touch event is classified as an unintentional touch. If either threshold is met, control may pass to act 412, where the touch event type is determined and provided as an output in act 412. If neither threshold is met, control returns to act 402 to obtain further frame data for the next frame and iterates the feature application act 404, the score combination act 406, and the score aggregation act 408 in conjunction with the further frame data.
In the example of FIG. 4, a number of actions may be implemented to adjust the classification score or classification in connection with several special cases. Decision block 414 may be used to determine whether an adjustment to a blob that overlaps with a touch event that is considered a palm touch event should occur in a subsequent frame. In this example, if the current touch event is classified as a palm touch event, control passes to act 416, where a flag, state, or other variable is set to classify any overlapping blobs in the next frame as palm touch events in act 416.
Other adjustments may be implemented as adjustments to the classification score. In the example of FIG. 4, these adjustments are implemented in connection with computing a blob classification score at act 406. The adjustment at act 418 may involve resolving the anti-blob situation by subtracting a value from the blob classification score if the blob overlaps with a sufficiently large anti-blob (e.g., greater than 50 pixels), as described above. Another adjustment at act 420 may involve resolving when the blob is near the palm touch event by subtracting a quotient calculated by dividing the blob area of the blob by the threshold area, as described above. Yet another adjustment at act 422 may involve accounting for edge effects by subtracting the difference between the number of edge pixels and the threshold from the blob classification rating score, as described above.
With reference to FIG. 5, an exemplary computing environment 500 may be used to implement one or more aspects or elements of the above-described methods and/or systems. The computing environment 500 may be used by, incorporated into, or correspond to the touch-sensitive device 100 (FIG. 1) or one or more elements thereof. For example, the computing environment 500 can be used to implement the touch system 102 (FIG. 1) or a host device or system in communication with the touch system 102. The computing environment 500 may be a general-purpose computer system used to implement one or more of the acts described in conjunction with fig. 2-4. Computing environment 500 may correspond to one of a wide variety of computing devices including, but not limited to, Personal Computers (PCs), server computers, tablet and other handheld computing devices, laptop or mobile computers, communication devices such as mobile phones, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, audio or video media players, and the like.
The computing environment 500 has sufficient computing power and system memory to allow basic computing operations. In this example, the computing environment 500 includes one or more processing units 510, which may be referred to herein, individually or collectively, as a processor. The computing environment 500 may also include one or more Graphics Processing Units (GPUs) 515. Processor 510 and/or GPU 515 may include integrated memory and/or communicate with system memory 520. Processor 510 and/or GPU 515 may be a special-purpose microprocessor, such as a Digital Signal Processor (DSP), Very Long Instruction Word (VLIW) processor, or other microprocessor, or may be a general-purpose Central Processing Unit (CPU) having one or more processing cores. The processor 510, GPU 515, system memory 520, and/or any other components of the computing environment 500 may be packaged or otherwise integrated as a system on a chip (SoC), Application Specific Integrated Circuit (ASIC), or other integrated circuit or system.
Computing environment 500 may also include other components, such as, for example, a communication interface 530. One or more computer input devices 540 (e.g., a pointing device, keyboard, audio input device, video input device, tactile input device, device for receiving wired or wireless data transmissions, etc.) may also be provided. Input device 540 may include one or more touch-sensitive surfaces, such as a track pad. Various output devices 550 may also be provided, including a touch screen or touch sensitive display(s) 555. Output devices 550 may include a variety of different audio output devices, video output devices, and/or devices for communicating wired or wireless data transmissions.
Computing environment 500 may also include various computer-readable media for storing information such as computer-readable or computer-executable instructions, data structures, program modules or other data. Computer readable media can be any available media that can be accessed by storage device 560 and includes both volatile and nonvolatile media, whether in removable storage 570 and/or non-removable storage 580.
Computer-readable media may include computer storage media and communication media. Computer storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the processing units of the computing environment 500.
The touch event classification techniques described herein may be implemented with computer-executable instructions, such as program modules, executed by the computing environment 500. Program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The techniques described herein may also be practiced in distributed computing environments where tasks are performed by one or more remote processing devices that are linked through one or more communications networks or in a cloud of one or more devices. In a distributed computing environment, program modules may be located in both local and remote computer storage media including media storage devices.
The techniques may be implemented in part or in whole as hardware logic circuits or components that may or may not include a processor. The hardware logic components may be configured as Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and/or other hardware logic circuitry.
The classification techniques described above can robustly distinguish between intended and unintended touch events in a touch sensitive device. A machine learning classifier is used to determine whether a touch event is due to an intentional finger touch or an unintentional palm touch. The classifier may also determine whether the touch event is a pen or stylus contact or touch. Some classifiers (e.g., RDF classifiers) may be configured for software implementation, for example, at an operating system level or other level where there are no hard constraints on memory availability. Other classifiers may be configured for implementation in resource-limited platforms, such as microcontrollers currently used in touch processing systems. In such cases, a look-up table classifier (e.g., a 3D look-up table classifier) with a more limited set of features may be used.
In some cases, attributes or features of the touch event are computed and tracked across multiple frames before being applied to the classifier. After the trace of touch events reaches a predetermined number of frames (e.g., 3), the feature set can be applied to a classifier to determine a plurality of probability scores for the touch events. Each probability score is a non-bimodal score that indicates a probability that a touch event has a particular type. The determination of the touch type may then be determined based on the probability score.
Other scenarios may involve different approaches for aggregating information about a touch event over time, as well as different types of classification scores. In some cases, a look-up table approach is used to generate a non-bimodal classification score for each frame. The classification scores may then be aggregated (e.g., summed) over multiple frames to determine a cumulative multi-frame classification score for the touch event.
The classification score provided by the machine learning classifier indicates a likelihood that the touch is intended or unintended. The non-bimodal nature of the classification scores allows these scores to also indicate the level of uncertainty determined by the classifier. The determination is based on information obtained across frame boundaries and thus from multiple points in time. In this way, for example, an early frame about a palm first touch down may look like a weak "touch" classification that later becomes a strong "palm" classification as more information becomes available. Error performance can thereby be improved.
The classification scores may be adjusted according to one or more rules. These rules may be applied or enforced depending on the situation presented by the feature set data. The situations may involve touch events at the edges of the frame, finger touches near the palm, and touch image distortions caused by a "hovering" user. The adjustments may bias the classifier results toward or away from palm touches or other touch types.
The technology described herein is operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known computing systems, environments, and/or configurations that may be suitable for use with the technology herein include, but are not limited to, personal computers, server computers (including server-client architectures), hand-held or laptop devices, mobile phones or devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like. Some or all of the method or process steps and functions may be performed by a networked or remote processor in communication with a client or local device being operated by a user. A potential advantage of offloading functions from the local device to the remote device is to save computational and power resources of the local device.
The techniques herein may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The techniques herein may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
While the present invention has been described with reference to specific examples, which are intended to be illustrative only and not to be limiting of the invention, those of ordinary skill in the art will appreciate that changes, additions and/or deletions may be made to the disclosed embodiments without departing from the spirit and scope of the invention.
The foregoing description is given for clearness of understanding only, and no unnecessary limitations should be understood therefrom, as modifications within the scope of the invention may be apparent to those having ordinary skill in the art.

Claims (29)

1. A computer-implemented method of classifying touch events, comprising:
obtaining frame data representing a plurality of frames captured by a touch sensitive device;
analyzing the frame data to define a respective blob in each frame of the plurality of frames, the blob indicating a touch event;
computing a plurality of feature sets for the touch event, each feature set specifying attributes of a respective blob in each of the plurality of frames; and
determining a type of the touch event via a machine learning classification configured to provide a plurality of non-bimodal classification scores for the plurality of frames based on the plurality of feature sets, each non-bimodal classification score indicating a level of uncertainty in the machine learning classification.
2. The computer-implemented method of claim 1, wherein the machine learning classification is configured to generate the non-bimodal classification scores such that each non-bimodal classification score represents a probability that the touch event has a respective type.
3. The computer-implemented method of claim 2, wherein each of the non-bimodal classification scores is generated by a machine learning classifier configured to accept the plurality of feature sets as input.
4. The computer-implemented method of claim 3, wherein the machine learning classifier comprises a random decision forest classifier.
5. The computer-implemented method of claim 1, further comprising:
defining a trace of the blob across the plurality of frames for the touch event; and
a trace feature set for the trace is computed,
wherein determining the type comprises applying the set of trace features to a machine learning classifier.
6. The computer-implemented method of claim 1, wherein computing the plurality of feature sets comprises aggregating data indicative of the plurality of feature sets prior to applying the plurality of feature sets to a machine learning classifier when determining the type of the touch event.
7. The computer-implemented method of claim 1, wherein each set of features comprises data indicative of an appearance of an image patch disposed at a respective blob in each frame.
8. The computer-implemented method of claim 1, wherein each set of features comprises data indicative of an intensity gradient of frame data of a respective blob in each frame.
9. The computer-implemented method of claim 1, wherein each set of features includes data indicating an isoperimetric quotient or other measure of roundness of a respective blob in each frame.
10. The computer-implemented method of claim 1, wherein the machine-learned classification comprises a look-up table based classification.
11. The computer-implemented method of claim 1, wherein determining the type comprises applying a set of features for a respective frame of the plurality of frames to a plurality of lookup tables, each lookup table providing a respective individual non-bimodal classification score of the plurality of non-bimodal classification scores.
12. The computer-implemented method of claim 11, wherein determining the type comprises combining each of the individual non-bimodal classification scores of the respective frames to generate a blob classification rating score for the respective frame.
13. The computer-implemented method of claim 12, wherein:
the plurality of lookup tables includes a first lookup table configured to provide a first rating that the touch event is an intended touch, and further includes a second lookup table configured to provide a second rating that the touch event is an unintended touch; and
determining the type includes subtracting the second rating from the first rating to determine the blob classification rating score for the respective frame.
14. The computer-implemented method of claim 12, wherein determining the type comprises aggregating the blob classification rating scores across the plurality of frames to determine a cumulative multi-frame classification score for the touch event.
15. The computer-implemented method of claim 14, wherein determining the type comprises:
determining whether the accumulated multi-frame classification score crosses one of a plurality of classification thresholds; and
if not, the feature set application, classification score combination, and rating score aggregation actions are iterated in conjunction with further feature sets of the plurality of feature sets.
16. The computer-implemented method of claim 14, wherein determining the type further comprises classifying other blobs in subsequent ones of the plurality of frames that overlap the touch event as palm touch events once the accumulated multi-frame classification score exceeds a palm classification threshold for the touch event.
17. The computer-implemented method of claim 12, wherein combining each of the individual non-bimodal classification scores comprises adjusting the blob classification rating score by subtracting a value from the blob classification rating score if the corresponding blob overlaps with an anti-blob.
18. The computer-implemented method of claim 12, wherein combining each of the individual non-bimodal classification scores comprises adjusting the blob classification rating score by subtracting a quotient calculated by dividing a blob area of the blob by a threshold area when the blob has an area greater than the threshold area and when the blob is within a threshold distance of another blob having a non-bimodal classification score indicative of a palm.
19. The computer-implemented method of claim 12, wherein combining each of the individual non-bimodal classification scores comprises:
determining whether a number of edge pixels in the respective blob exceeds a threshold; and
adjusting the blob classification rating score by subtracting the difference between the number of edge pixels and the threshold from the blob classification rating score if the threshold is exceeded.
20. A touch sensitive device, comprising:
a touch-sensitive surface;
a memory having stored therein blob defining instructions, feature calculation instructions, and machine learning classification instructions; and
a processor coupled to the memory, the processor configured to obtain frame data representing a plurality of frames captured via the touch-sensitive surface, and to execute the blob defining instructions to analyze the frames to define a respective blob in each frame of the plurality of frames, the blob indicating a touch event;
wherein the processor is further configured to execute the feature calculation instructions to calculate a plurality of feature sets for the touch event, each feature set specifying attributes of a respective blob in each of the plurality of frames; and
wherein the processor is further configured to execute the machine learning classification instructions to determine the type of the touch event via machine learning classification, the machine learning classification configured to provide a plurality of non-bimodal classification scores based on a plurality of feature sets of the plurality of frames, each non-bimodal classification score indicating a level of uncertainty in the machine learning classification.
21. The touch sensitive device of claim 20, wherein each non-bimodal classification score represents a probability that the touch event has a respective type.
22. The touch sensitive device of claim 20, wherein:
each non-bimodal classification score is a blob classification score rating for a respective frame of the plurality of frames; and
the processor is further configured to execute the machine learning classification instructions to sum the blob classification score ratings over the plurality of frames.
23. The touch sensitive device of claim 22, wherein the processor is further configured to execute the machine learning classification instructions to combine look-up table ratings from a plurality of look-up tables to calculate each blob classification score rating.
24. The touch sensitive device of claim 20, wherein the processor is further configured to execute the blob defining instructions to split a connected portion into a plurality of blobs for separate analysis.
25. The touch sensitive device of claim 20, wherein the processor is further configured to execute the blob defining instructions to define a trace of each blob of the touch event across the plurality of frames.
26. The touch sensitive device of claim 20, wherein the processor is further configured to execute the blob defining instructions to merge multiple connected portions for analysis as a single blob.
27. A touch sensitive device, comprising:
a touch-sensitive surface;
a memory having a plurality of instruction sets stored therein; and
a processor coupled to the memory and configured to execute a plurality of instruction sets,
wherein the plurality of instruction sets comprises:
first instructions that cause the processor to obtain frame data representing a plurality of sensor images captured by the touch-sensitive device;
second instructions that cause the processor to analyze the frame data to define a respective connection portion in each sensor image of the plurality of sensor images, the connection portion being indicative of a touch event;
third instructions that cause the processor to calculate a plurality of feature sets for the touch event, each feature set specifying attributes of a respective connected portion in each sensor image of the plurality of sensor images;
fourth instructions that cause the processor to determine a type of the touch event via a machine learning classification, the machine learning classification configured to provide a plurality of non-bimodal classification scores based on the plurality of feature sets, each non-bimodal classification score indicating a level of uncertainty in the machine learning classification; and
fifth instructions that cause the processor to provide an output to a computing system, the output indicating the type of the touch event;
wherein the fourth instructions comprise gather instructions that cause the processor to gather information representing the touch event on the plurality of sensor images.
28. The touch sensitive device of claim 27, wherein:
the fourth instructions are configured to cause the processor to apply the plurality of feature sets to a machine learning classifier; and
the aggregation instructions are configured to cause the processor to aggregate the plurality of feature sets of the plurality of sensor images prior to applying the plurality of feature sets.
29. The touch sensitive device of claim 27, wherein the aggregation instructions are configured to cause the processor to aggregate the plurality of non-bimodal classification scores.
CN201580037941.5A 2014-07-11 2015-07-07 Method for classifying touch events and touch sensitive device Active CN106537305B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US14/329,052 2014-07-11
US14/329,052 US9558455B2 (en) 2014-07-11 2014-07-11 Touch classification
PCT/US2015/039282 WO2016007450A1 (en) 2014-07-11 2015-07-07 Touch classification

Publications (2)

Publication Number Publication Date
CN106537305A CN106537305A (en) 2017-03-22
CN106537305B true CN106537305B (en) 2019-12-20

Family

ID=53758517

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201580037941.5A Active CN106537305B (en) 2014-07-11 2015-07-07 Method for classifying touch events and touch sensitive device

Country Status (11)

Country Link
US (2) US9558455B2 (en)
EP (1) EP3167352B1 (en)
JP (1) JP6641306B2 (en)
KR (1) KR102424803B1 (en)
CN (1) CN106537305B (en)
AU (1) AU2015288086B2 (en)
BR (1) BR112016029932A2 (en)
CA (1) CA2954516C (en)
MX (1) MX2017000495A (en)
RU (1) RU2711029C2 (en)
WO (1) WO2016007450A1 (en)

Families Citing this family (52)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11327599B2 (en) 2011-04-26 2022-05-10 Sentons Inc. Identifying a contact type
WO2013059488A1 (en) 2011-10-18 2013-04-25 Carnegie Mellon University Method and apparatus for classifying touch events on a touch sensitive surface
CN104169848B (en) 2011-11-18 2017-10-20 森顿斯公司 Detect touch input force
KR20140114766A (en) 2013-03-19 2014-09-29 Qeexo Co. Method and device for sensing touch inputs
US9612689B2 (en) 2015-02-02 2017-04-04 Qeexo, Co. Method and apparatus for classifying a touch event on a touchscreen as related to one of multiple function generating interaction layers and activating a function in the selected interaction layer
US9013452B2 (en) 2013-03-25 2015-04-21 Qeexo, Co. Method and system for activating different interactive functions using different types of finger contacts
US10380434B2 (en) * 2014-01-17 2019-08-13 Kpit Technologies Ltd. Vehicle detection system and method
US9558455B2 (en) * 2014-07-11 2017-01-31 Microsoft Technology Licensing, Llc Touch classification
CN104199572B (en) * 2014-08-18 2017-02-15 京东方科技集团股份有限公司 Touch positioning method of touch display device and touch display device
US9329715B2 (en) 2014-09-11 2016-05-03 Qeexo, Co. Method and apparatus for differentiating touch screen users based on touch event analysis
US11619983B2 (en) 2014-09-15 2023-04-04 Qeexo, Co. Method and apparatus for resolving touch screen ambiguities
US9864453B2 (en) * 2014-09-22 2018-01-09 Qeexo, Co. Method and apparatus for improving accuracy of touch screen event analysis by use of edge classification
US10606417B2 (en) 2014-09-24 2020-03-31 Qeexo, Co. Method for improving accuracy of touch screen event analysis by use of spatiotemporal touch patterns
US10282024B2 (en) 2014-09-25 2019-05-07 Qeexo, Co. Classifying contacts or associations with a touch sensitive device
JP6543790B2 (en) * 2015-03-18 2019-07-17 Toyota IT Development Center Co., Ltd. Signal processing device, input device, signal processing method, and program
US10642404B2 (en) 2015-08-24 2020-05-05 Qeexo, Co. Touch sensitive device with multi-sensor stream synchronized data
US10083378B2 (en) * 2015-12-28 2018-09-25 Qualcomm Incorporated Automatic detection of objects in video images
TWI606376B (en) * 2016-08-08 2017-11-21 意象無限股份有限公司 Touch Sensor Device And Touch-Sensing Method With Error-Touch Rejection
US10313348B2 (en) * 2016-09-19 2019-06-04 Fortinet, Inc. Document classification by a hybrid classifier
CN106708317A (en) * 2016-12-07 2017-05-24 南京仁光电子科技有限公司 Method and apparatus for judging touch point
US11580829B2 (en) 2017-08-14 2023-02-14 Sentons Inc. Dynamic feedback for haptics
US11009411B2 (en) 2017-08-14 2021-05-18 Sentons Inc. Increasing sensitivity of a sensor using an encoded signal
US11057238B2 (en) 2018-01-08 2021-07-06 Brilliant Home Technology, Inc. Automatic scene creation using home device control
US11175767B2 (en) * 2018-02-19 2021-11-16 Beechrock Limited Unwanted touch management in touch-sensitive devices
CN110163460B (en) * 2018-03-30 2023-09-19 腾讯科技(深圳)有限公司 Method and equipment for determining application score
KR102104275B1 (en) * 2018-06-01 2020-04-27 Kyung Hee University Industry-Academic Cooperation Foundation Touch system using a stylus pen and method for detecting touch thereof
KR102606766B1 (en) 2018-06-01 2023-11-28 Samsung Electronics Co., Ltd. Electro-magnetic sensor and mobile device including the same
US11009989B2 (en) * 2018-08-21 2021-05-18 Qeexo, Co. Recognizing and rejecting unintentional touch events associated with a touch sensitive device
WO2020176627A1 (en) * 2019-02-27 2020-09-03 Li Industries, Inc. Methods and systems for smart battery collection, sorting, and packaging
EP3926491A4 (en) * 2019-03-29 2022-04-13 Sony Group Corporation Image processing device and method, and program
US10942603B2 (en) 2019-05-06 2021-03-09 Qeexo, Co. Managing activity states of an application processor in relation to touch or hover interactions with a touch sensitive device
US11231815B2 (en) 2019-06-28 2022-01-25 Qeexo, Co. Detecting object proximity using touch sensitive surface sensing and ultrasonic sensing
KR20190104101A (en) * 2019-08-19 2019-09-06 LG Electronics Inc. Method, device, and system for determining a false touch on a touch screen of an electronic device
CN111881287B (en) * 2019-09-10 2021-08-17 马上消费金融股份有限公司 Classification ambiguity analysis method and device
US11301099B1 (en) * 2019-09-27 2022-04-12 Apple Inc. Methods and apparatus for finger detection and separation on a touch sensor panel using machine learning models
EP3835929A1 (en) * 2019-12-13 2021-06-16 Samsung Electronics Co., Ltd. Method and electronic device for accidental touch prediction using ml classification
US11528028B2 (en) 2020-01-05 2022-12-13 Brilliant Home Technology, Inc. Touch-based control device to detect touch input without blind spots
US11755136B2 (en) 2020-01-05 2023-09-12 Brilliant Home Technology, Inc. Touch-based control device for scene invocation
US11592423B2 (en) 2020-01-29 2023-02-28 Qeexo, Co. Adaptive ultrasonic sensing techniques and systems to mitigate interference
US11620294B2 (en) * 2020-01-30 2023-04-04 Panasonic Avionics Corporation Dynamic media data management
GB2591764B (en) * 2020-02-04 2024-08-14 Peratech Holdco Ltd Classifying pressure inputs
US11599223B1 (en) 2020-03-13 2023-03-07 Apple Inc. System and machine learning method for separating noise and signal in multitouch sensors
CN111524157B (en) * 2020-04-26 2022-07-01 南瑞集团有限公司 Touch screen object analysis method and system based on camera array and storage medium
US11899881B2 (en) 2020-07-17 2024-02-13 Apple Inc. Machine learning method and system for suppressing display induced noise in touch sensors using information from display circuitry
KR20220023639A (en) * 2020-08-21 2022-03-02 Samsung Electronics Co., Ltd. Electronic apparatus and method for controlling thereof
US11954288B1 (en) 2020-08-26 2024-04-09 Apple Inc. System and machine learning method for separating noise and signal in multitouch sensors
US11481070B1 (en) 2020-09-25 2022-10-25 Apple Inc. System and method for touch sensor panel with display noise correction
EP4099142A4 (en) 2021-04-19 2023-07-05 Samsung Electronics Co., Ltd. Electronic device and operating method thereof
AU2022296590A1 (en) 2021-06-24 2024-01-04 Icu Medical, Inc. Infusion pump touchscreen with false touch rejection
US11537239B1 (en) 2022-01-14 2022-12-27 Microsoft Technology Licensing, Llc Diffusion-based handedness classification for touch-based input
NL2031789B1 (en) * 2022-05-06 2023-11-14 Microsoft Technology Licensing Llc Aggregated likelihood of unintentional touch input
US11989369B1 (en) * 2023-03-30 2024-05-21 Microsoft Technology Licensing, Llc Neural network-based touch input classification

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012001428A1 (en) * 2010-07-02 2012-01-05 Vodafone Ip Licensing Limited Mobile computing device
WO2013059488A1 (en) * 2011-10-18 2013-04-25 Carnegie Mellon University Method and apparatus for classifying touch events on a touch sensitive surface

Family Cites Families (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8018440B2 (en) 2005-12-30 2011-09-13 Microsoft Corporation Unintentional touch rejection
US8103109B2 (en) 2007-06-19 2012-01-24 Microsoft Corporation Recognizing hand poses and/or object classes
US8519965B2 (en) * 2008-04-23 2013-08-27 Motorola Mobility Llc Multi-touch detection panel with disambiguation of touch coordinates
US8502787B2 (en) 2008-11-26 2013-08-06 Panasonic Corporation System and method for differentiating between intended and unintended user input on a touchpad
KR101648747B1 (en) * 2009-10-07 2016-08-17 Samsung Electronics Co., Ltd. Method for providing user interface using a plurality of touch sensor and mobile terminal using the same
US8587532B2 (en) * 2009-12-18 2013-11-19 Intel Corporation Multi-feature interactive touch user interface
KR20110138095A (en) 2010-06-18 2011-12-26 Samsung Electronics Co., Ltd. Method and apparatus for coordinate correction in touch system
US8754862B2 (en) 2010-07-11 2014-06-17 Lester F. Ludwig Sequential classification recognition of gesture primitives and window-based parameter smoothing for high dimensional touchpad (HDTP) user interfaces
WO2012057887A1 (en) 2010-10-28 2012-05-03 Cypress Semiconductor Corporation Capacitive stylus with palm rejection
US9244545B2 (en) 2010-12-17 2016-01-26 Microsoft Technology Licensing, Llc Touch and stylus discrimination and rejection for contact sensitive computing devices
JP2014507726A (en) * 2011-02-08 2014-03-27 ハワース, インコーポレイテッド Multimodal touch screen interaction apparatus, method and system
US20130106761A1 (en) 2011-10-28 2013-05-02 Atmel Corporation Touch Sensor with Lookup Table
US20130176270A1 (en) 2012-01-09 2013-07-11 Broadcom Corporation Object classification for touch panels
WO2013114599A1 (en) 2012-02-02 2013-08-08 Takara Tomy Co., Ltd. Radiation measuring device
US8973211B2 (en) * 2012-02-04 2015-03-10 Hsi Fire & Safety Group, Llc Detector cleaner and/or tester and method of using same
US8902181B2 (en) 2012-02-07 2014-12-02 Microsoft Corporation Multi-touch-movement gestures for tablet computing devices
EP2817696A4 (en) 2012-02-21 2015-09-30 Flatfrog Lab Ab Touch determination with improved detection of weak interactions
US9542045B2 (en) 2012-03-14 2017-01-10 Texas Instruments Incorporated Detecting and tracking touch on an illuminated surface using a mean-subtracted image
JP2015525381A (en) 2012-05-04 2015-09-03 オブロング・インダストリーズ・インコーポレーテッド Interactive user hand tracking and shape recognition user interface
EP2662756A1 (en) 2012-05-11 2013-11-13 BlackBerry Limited Touch screen palm input rejection
US20130300696A1 (en) 2012-05-14 2013-11-14 N-Trig Ltd. Method for identifying palm input to a digitizer
US8902170B2 (en) * 2012-05-31 2014-12-02 Blackberry Limited Method and system for rendering diacritic characters
US9483146B2 (en) 2012-10-17 2016-11-01 Perceptive Pixel, Inc. Input classification for multi-touch systems
US20140232679A1 (en) * 2013-02-17 2014-08-21 Microsoft Corporation Systems and methods to protect against inadvertant actuation of virtual buttons on touch surfaces
US10578499B2 (en) * 2013-02-17 2020-03-03 Microsoft Technology Licensing, Llc Piezo-actuated virtual buttons for touch surfaces
MX2015011642A (en) 2013-03-15 2016-05-16 Tactual Labs Co Fast multi-touch noise reduction.
KR102143574B1 (en) * 2013-09-12 2020-08-11 Samsung Electronics Co., Ltd. Method and apparatus for online signature vefication using proximity touch
US9329727B2 (en) * 2013-12-11 2016-05-03 Microsoft Technology Licensing, Llc Object detection in optical sensor systems
US9430095B2 (en) * 2014-01-23 2016-08-30 Microsoft Technology Licensing, Llc Global and local light detection in optical sensor systems
US9558455B2 (en) * 2014-07-11 2017-01-31 Microsoft Technology Licensing, Llc Touch classification
US9818043B2 (en) * 2015-06-24 2017-11-14 Microsoft Technology Licensing, Llc Real-time, model-based object detection and pose estimation

Also Published As

Publication number Publication date
KR102424803B1 (en) 2022-07-22
CN106537305A (en) 2017-03-22
JP2017529582A (en) 2017-10-05
KR20170030613A (en) 2017-03-17
BR112016029932A2 (en) 2017-08-22
AU2015288086B2 (en) 2020-07-16
CA2954516A1 (en) 2016-01-14
US20170116545A1 (en) 2017-04-27
RU2017100249A3 (en) 2019-02-12
US10679146B2 (en) 2020-06-09
EP3167352A1 (en) 2017-05-17
AU2015288086A1 (en) 2017-01-05
RU2017100249A (en) 2018-07-16
US9558455B2 (en) 2017-01-31
MX2017000495A (en) 2017-05-01
CA2954516C (en) 2022-10-04
WO2016007450A1 (en) 2016-01-14
EP3167352B1 (en) 2021-09-15
JP6641306B2 (en) 2020-02-05
US20160012348A1 (en) 2016-01-14
RU2711029C2 (en) 2020-01-14

Similar Documents

Publication Publication Date Title
CN106537305B (en) Method for classifying touch events and touch sensitive device
US20220383535A1 (en) Object Tracking Method and Device, Electronic Device, and Computer-Readable Storage Medium
US10572072B2 (en) Depth-based touch detection
CN109829368B (en) Palm feature recognition method and device, computer equipment and storage medium
US20140204013A1 (en) Part and state detection for gesture recognition
AU2015314949A1 (en) Classification of touch input as being unintended or intended
WO2014127697A1 (en) Method and terminal for triggering application programs and application program functions
Jinda-Apiraksa et al. A simple shape-based approach to hand gesture recognition
Joo et al. Real‐Time Depth‐Based Hand Detection and Tracking
Misra et al. Development of a hierarchical dynamic keyboard character recognition system using trajectory features and scale-invariant holistic modeling of characters
She et al. A real-time hand gesture recognition approach based on motion features of feature points
Ranawat et al. Hand gesture recognition based virtual mouse events
US20140232672A1 (en) Method and terminal for triggering application programs and application program functions
Bai et al. Dynamic hand gesture recognition based on depth information
Alam et al. A unified learning approach for hand gesture recognition and fingertip detection
Ke et al. [Retracted] A Visual Human‐Computer Interaction System Based on Hybrid Visual Model
Zhang et al. Research on gesture recognition based on improved template matching algorithm
US20120299837A1 (en) Identifying contacts and contact attributes in touch sensor data using spatial and temporal features
CN112596603A (en) Gesture control method, device, equipment and storage medium for nuclear power station control system
Lee et al. Classification Network-Guided Weighted K-means Clustering for Multi-Touch Detection
KR101171239B1 (en) Non-touch data input and operating method using image processing
CN111191675A (en) Pedestrian attribute recognition model implementation method and related device
CN111626364B (en) Gesture image classification method, gesture image classification device, computer equipment and storage medium
Nagar et al. Hand shape based gesture recognition in hardware
CN117788925A (en) Aluminum plastic film defect classification model training method and device and electronic equipment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant