WO2015057687A1 - Hover position calculation in a touchscreen device - Google Patents

Hover position calculation in a touchscreen device

Info

Publication number
WO2015057687A1
Authority
WO
WIPO (PCT)
Prior art keywords
capacitance
unit cell
unit cells
peak
capacitances
Prior art date
Application number
PCT/US2014/060456
Other languages
English (en)
French (fr)
Inventor
Patrick HILLS
Patrick Prendergast
Wei He
Original Assignee
Cypress Semiconductor Corporation
Priority date
Filing date
Publication date
Application filed by Cypress Semiconductor Corporation filed Critical Cypress Semiconductor Corporation
Priority to CN201480062806.1A priority Critical patent/CN106030482B/zh
Publication of WO2015057687A1 publication Critical patent/WO2015057687A1/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03: Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041: Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/0416: Control or interface arrangements specially adapted for digitisers
    • G06F3/04162: Control or interface arrangements specially adapted for digitisers for exchanging data with external devices, e.g. smart pens, via the digitiser sensing hardware
    • G06F3/04166: Details of scanning methods, e.g. sampling time, grouping of sub areas or time sharing with display driving
    • G06F3/044: Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by capacitive means
    • G06F3/0442: Digitisers by capacitive means using active external devices, e.g. active pens, for transmitting changes in electrical potential to be received by the digitiser
    • G06F3/0443: Digitisers by capacitive means using a single layer of sensing electrodes
    • G06F3/0446: Digitisers by capacitive means using a grid-like structure of electrodes in at least two directions, e.g. using row and column electrodes
    • G06F2203/00: Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/041: Indexing scheme relating to G06F3/041 - G06F3/045
    • G06F2203/04101: 2.5D-digitiser, i.e. digitiser detecting the X/Y position of the input means, finger or stylus, also when it does not touch, but is proximate to the digitiser's interaction surface and also measures the distance of the input means within a short range in the Z direction, possibly with a separate measurement setup
    • G06F2203/04108: Touchless 2D-digitiser, i.e. digitiser detecting the X/Y position of the input means, finger or stylus, also when it does not touch, but is proximate to the digitiser's interaction surface without distance measurement in the Z direction

Definitions

  • This disclosure relates generally to electronic systems and, more particularly, to touchscreen interfaces and their operation.
  • Capacitance sensing systems can sense electrical signals generated on electrodes that reflect changes in capacitance. Such changes in capacitance can indicate a touch event (e.g., the proximity of an object to particular electrodes).
  • Capacitive sense elements may be used to replace mechanical buttons, knobs and other similar mechanical user interface controls. The use of a capacitive sense element allows for the elimination of complicated mechanical switches and buttons, providing reliable operation under harsh conditions.
  • Capacitive sense elements are widely used in modern consumer applications, providing user interface options in existing products. Capacitive sense elements can range from a single button to a large number of sensors arranged in the form of a capacitive sense array for a touch-sensing surface.
  • Transparent touch screens that utilize capacitive sense arrays are ubiquitous in today's industrial and consumer markets. They can be found on cellular phones, GPS devices, set-top boxes, cameras, computer screens, MP3 players, digital tablets, and the like.
  • The capacitive sense arrays work by measuring the capacitance of each capacitive sense element and looking for a change in capacitance that indicates a touch or the presence of a conductive object (e.g., a finger, hand, or other object).
  • The capacitance changes of the capacitive touch sense elements can be measured by an electrical circuit, which converts the capacitances of the capacitive sense elements into digital values.
  • A method for calculating position of a conductive object hovering above a plurality of mutual capacitance sensors begins by measuring capacitance on a plurality of mutual capacitance sensors, each mutual capacitance sensor represented as a unit cell in an array of unit cells. After measuring the capacitances, the method identifies a peak unit cell based on the measured capacitances and calculates an edge cutoff value from the measured capacitances. After the edge cutoff value is calculated, a plurality of unit cells with measured capacitance within a range defined by the edge cutoff value is selected, and the position of the hovering conductive object is calculated.
  • A user interface device comprises a first plurality of capacitance sensing electrodes disposed along a first axis of an array, a second plurality of capacitance sensing electrodes disposed along a second axis of the array, and a controller.
  • the controller may be configured to calculate position of a conductive object hovering above a plurality of mutual capacitance sensors.
  • the controller may measure capacitance on a plurality of mutual capacitance sensors, each mutual capacitance sensor represented as a unit cell in an array of unit cells. After measuring the capacitances, the controller may identify a peak unit cell based on the measured capacitances and calculate an edge cutoff value from the measured capacitances. After the controller calculates the edge cutoff value, it may calculate position based on unit cells within a range defined by the edge cutoff value.
  • a controller is disclosed that is configured to calculate position of a conductive object hovering above a plurality of mutual capacitance sensors.
  • the controller may measure capacitance on a plurality of mutual capacitance sensors, each mutual capacitance sensor represented as a unit cell in an array of unit cells. After measuring the capacitances, the controller may identify a peak unit cell based on the measured capacitances and calculate an edge cutoff value from the measured capacitances. After the controller calculates the edge cutoff value, it may calculate position based on unit cells within a range defined by the edge cutoff value.
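  • To make the flow above concrete, the following Python sketch combines the steps into one routine. It is a minimal illustration, not the patent's implementation: the cutoff rule (a fixed fraction of the peak signal) and all names are assumptions; the patent derives the edge cutoff value from the measured capacitances (see Figures 19A and 19B).

```python
# Minimal sketch of the claimed hover-position flow, assuming the measured
# capacitances are already baselined difference values and a hover is present.
from typing import List, Tuple

EDGE_CUTOFF_RATIO = 0.5  # assumed rule; the patent computes EdgeCutoff from measurements

def hover_position(diff: List[List[float]]) -> Tuple[float, float]:
    """Return (x, y) in unit-cell coordinates for a hovering conductive object."""
    rows, cols = len(diff), len(diff[0])
    # Identify the peak unit cell from the measured capacitances.
    peak_r, peak_c = max(
        ((r, c) for r in range(rows) for c in range(cols)),
        key=lambda rc: diff[rc[0]][rc[1]],
    )
    edge_cutoff = EDGE_CUTOFF_RATIO * diff[peak_r][peak_c]
    # Select unit cells whose values fall within the range defined by the
    # cutoff, then compute a signal-weighted average over the selected cells.
    num_x = num_y = den = 0.0
    for r in range(rows):
        for c in range(cols):
            s = diff[r][c]
            if s >= edge_cutoff:
                num_x += s * c
                num_y += s * r
                den += s
    return (num_x / den, num_y / den)
```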
  • Figure 1A illustrates a representation of self capacitance, according to one embodiment.
  • Figure 1B illustrates a representation of mutual capacitance between a row and a column electrode comprised of diamond-shaped sense elements, according to one embodiment.
  • Figure 1C illustrates a representation of mutual capacitance between a row and a column of bar-shaped electrodes, according to one embodiment.
  • Figure 2A illustrates an array of diamond-shaped sense elements arranged in a two-dimensional array, according to one embodiment.
  • Figure 2B illustrates an array of bar-shaped electrodes arranged in a two-dimensional array, according to one embodiment.
  • Figure 3A illustrates a sensing circuit for self capacitance measurement, according to one embodiment.
  • Figure 3B illustrates a sensing circuit for mutual capacitance measurement, according to one embodiment.
  • Figure 4A illustrates connections between a plurality of sensing channels and a plurality of measurable capacitances, according to one embodiment.
  • Figure 4B illustrates connections between a single sensing channel and a plurality of measurable capacitances, according to one embodiment.
  • Figure 5 illustrates a flow of information and control signals in a capacitance sensing system, according to one embodiment.
  • Figure 6A illustrates measured changes in capacitance numerically on a capacitance sensing array, according to one embodiment.
  • Figure 6B illustrates measured changes in capacitance graphically on a capacitance sensing array, according to one embodiment.
  • Figure 6C illustrates a plurality of detected peaks on a capacitance sensing array, according to one embodiment.
  • Figure 6D illustrates a centroid calculation with a 5x5 window of sensors, according to one embodiment.
  • Figure 6E illustrates the result of a centroid calculation with a 5x5 window of sensors for two conductive objects, according to one embodiment.
  • Figure 6F illustrates a representation of tracking a plurality of conductive objects moving across a capacitance sensing array.
  • Figure 7A illustrates a stack-up of a touchscreen, according to one embodiment.
  • Figure 7B illustrates a touchscreen system, according to one embodiment.
  • Figure 8A illustrates contact timing diagrams for tap, double-tap, and click-and-drag gestures, according to one embodiment.
  • Figure 8B illustrates a plurality of conductive objects moving across a capacitance sensing array to produce a "rotate" gesture, according to one embodiment.
  • Figure 8C illustrates a plurality of conductive objects moving across a capacitance sensing array to produce a "pinch” or “zoom-out” gesture, according to one embodiment.
  • Figure 8D illustrates a plurality of conductive objects moving across a capacitance sensing array to produce a "grow” or "zoom-in” gesture, according to one embodiment.
  • Figure 8E illustrates a plurality of conductive objects moving across a capacitance sensing array to produce a "pan” gesture, according to one embodiment.
  • Figure 8F illustrates a conductive object moving across a capacitance sensing array to produce a "next item” or “next page” gesture, according to one embodiment.
  • Figure 8G illustrates a conductive object moving across a capacitance sensing array to produce a "scroll" gesture, according to one embodiment.
  • Figure 9 illustrates a method for measuring capacitance on a touchscreen and outputting a result, according to one embodiment.
  • Figure 10A illustrates an array of unit cells with mutual capacitance difference values according to one embodiment.
  • Figure 10B illustrates an array of unit cells with mutual capacitance difference values, wherein each value is updated with a 3x3 sum of all values about a center value, according to one embodiment.
  • Figure 10C illustrates an array of mutual capacitance difference values with a first peak unit cell, according to one embodiment.
  • Figure 10D illustrates an array of mutual capacitance difference values, wherein each value is updated with a 5x5 sum of all values about a center value and an updated peak unit cell, according to one embodiment.
  • Figure 11A illustrates a 3x3 matrix of unit cells, according to one embodiment.
  • Figure 11B illustrates a 5-sensor group of unit cells with additional sensors at each cardinal direction, according to one embodiment.
  • Figure 12 illustrates an embodiment for detecting hover and calculating position with summed values, according to one embodiment.
  • Figure 13 illustrates an array of unit cells with an edge zone, according to one embodiment.
  • Figure 14A illustrates an array of unit cells with a path of a hovering contact and edge zones at each edge of the array, according to one embodiment.
  • Figure 14B illustrates a close-up of the array from Figure 14A with actual, detected, and corrected paths for a hovering contact, according to one embodiment.
  • Figure 15 illustrates a method for applying various correction factors based on a hover contact's presence in an edge zone, according to one embodiment.
  • Figure 16 illustrates a method for applying a common mode filter and verifying a hover detection, according to one embodiment.
  • Figure 17A illustrates a method for applying a common mode filter to hover data, according to one embodiment.
  • Figure 17B illustrates hover data after applying a common mode filter, according to one embodiment.
  • Figure 18A illustrates a method for verifying hover detection with self capacitance measurement data, according to one embodiment.
  • Figure 18B illustrates self capacitance hover data as applied in the method of Figure 18A, according to one embodiment.
  • Figure 19A illustrates a table of data used for determining which unit cells to include in hover position calculation, according to one embodiment.
  • Figure 19B illustrates an application of the EdgeCutoff from Figure 19A, according to one embodiment.
  • Figure 20 illustrates a method for determining hover location and distance for hover detections, according to one embodiment.
  • Figure 21 illustrates a method for distinguishing and processing a hover from a large object, according to one embodiment.
  • Figure 22A illustrates a method for calculating a position of a hover contact over an array, according to one embodiment.
  • Figure 22B illustrates an example of calculation of a position of a hover contact over an array, according to one embodiment.
  • Figures 23A, 23B, and 23C illustrate calculation of ratios representative of a grip, a hover over the edge of an array, and a hover near the edge of an array, according to various embodiments.
  • Figure 24 illustrates a method for calculating and using a ratio of measurements to identify and process various types of contacts, according to one embodiment.
  • Figure 25A illustrates maximum values of various contact types on an array of unit cells, according to one embodiment.
  • Figure 25B illustrates ratios of peaks to 5x5 sum values of various object and contact types, according to one embodiment.
  • Figure 26 illustrates peak values and 5x5 sum values plotted to show mode partitions, according to one embodiment.
  • Figure 27A illustrates a plurality of partitions for a device operating in finger mode, according to one embodiment.
  • Figure 27B illustrates a plurality of partitions for a device operating in glove mode, according to one embodiment.
  • Figure 27C illustrates a plurality of partitions for a device operating in stylus mode, according to one embodiment.
  • Figure 28 illustrates a method for determining the correct mode of a touchscreen device, according to one embodiment.
  • Figure 29 illustrates a process for determining the correct mode of a touchscreen device, according to another embodiment.
  • Figure 30 illustrates a state diagram for moving between various states of a touchscreen device, according to one embodiment.
  • a capacitor is formed by two conductive plates that are separated by a space filled with a dielectric material.
  • The capacitance (in farads), C, of a capacitor made of two large plates is given by:

    C = ε0 * εr * A / d    (1)

    where ε0 is the permittivity of free space, εr is the relative permittivity of the dielectric between the plates, A is the area of the overlapping plates, and d is the distance between the plates.
  • the conductive plates may be conventional metal plates (such as copper electrodes).
  • the conductive plates may be formed from a transparent conductive material (such as indium tin oxide, "ITO"), silver or carbon ink, or metal mesh.
  • a conductive plate may be a human finger or palm. Any material that is capable of conducting electricity may serve as a conductive plate of a capacitor.
  • a capacitor can store a charge transferable to other portions of a circuit.
  • The charge stored by a capacitor (in coulombs), q, is given by:

    q = C * V

    where C is the capacitance (in farads) and V is the voltage across the capacitor (in volts).
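  • As a quick numeric check of these two relations, with plate dimensions, dielectric, and drive voltage chosen purely for illustration:

```python
# Worked example of C = eps0 * eps_r * A / d and q = C * V.
EPS0 = 8.854e-12   # permittivity of free space, F/m
eps_r = 3.0        # assumed relative permittivity of the dielectric
area = 1.0e-4      # plate area, m^2 (1 cm^2)
dist = 1.0e-3      # plate separation, m
volts = 3.3        # assumed drive voltage

cap = EPS0 * eps_r * area / dist   # ~2.66e-12 F (a few picofarads)
charge = cap * volts               # ~8.8e-12 C
print(f"C = {cap:.3e} F, q = {charge:.3e} C")
```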
  • a capacitance may be measured as a self capacitance, the capacitance of a single conductive plate (electrode) to its surroundings which serve as the second conductive plate, or as mutual capacitance, the capacitance between two specific conductive plates.
  • Self and mutual capacitances may be changed by the presence of additional conductive plates, such as a finger, in proximity to the conductive plates under test.
  • Conductive plates are referred to herein as "electrodes" or "sensors." This is not intended to be limiting, as circuit descriptions may use different terms for the conductive plates of a capacitor.
  • While a finger is a conductive plate for the purposes of creating a capacitor, it may not be referred to as an "electrode" or "sensor." While fingers are used in the following description to be representative of the conductive object that is sensed by the capacitance sensor and measurement circuit, other conductive objects may be used.
  • FIG. 1A illustrates a representation of self capacitance in a system 101 according to one embodiment.
  • An electrode 110 may be disposed on a substrate 115.
  • a capacitance 117 may exist between electrode 110 and at least one other electrode 112 according to Equation (1).
  • electrodes 110 and 112 may be formed from copper.
  • electrodes 110 and 112 may be formed from a transparent conductive material such as indium tin oxide (ITO).
  • electrodes 110 and 112 may be formed from silver or carbon ink, metal mesh, or another conductive material.
  • Substrate 115 may be glass in one embodiment.
  • substrate 115 may be a plastic film (such as polyethylene terephthalate, "PET", or some other polycarbonate), a flexible printed circuit board material, or a rigid printed circuit board material (such as FR4).
  • substrate 115 may be a separate layer or it may be part of a larger, integrated system as shown in Figures 7A and 7B below. While capacitance 117 is shown to be between electrode 110 and electrodes 112, which are coupled to a ground voltage potential, one of ordinary skill in the art would understand that the capacitances between electrodes 110 and 112 may exist at any voltage potential and that a ground connection is not required.
  • FIG. 1B illustrates a representation of mutual capacitance in a system 102 according to one embodiment.
  • A first electrode 120 including multiple diamond-shaped elements may be disposed on a substrate (not shown) along a first axis.
  • A second electrode 122 including multiple diamond-shaped elements may be disposed along a second axis. At the intersection of electrodes 120 and 122 there may exist a mutual capacitance 127.
  • electrodes 120 and 122 may be formed from copper, a transparent conductive material such as ITO, silver or carbon ink, metal mesh, or other conductive materials or combinations of conductive materials.
  • The substrate (e.g., see substrate 115 of Fig. 1A) may be glass, plastic film (such as polyethylene terephthalate, "PET", or some other polycarbonate), a flexible printed circuit board material, or a rigid printed circuit board material (such as FR4).
  • the substrate may be a separate layer or it may be part of a larger, integrated system as shown in Figures 7A and 7B below, for example.
  • electrodes 120 and 122 may be disposed on two different substrates that are adhered together. In other embodiments, electrodes 120 and 122 may be disposed on two sides of the same substrate or may be disposed on the same side of a substrate and the connections for either electrode 120 or electrode 122 formed by a jumper between individual elements of electrodes 120 and 122 and disposed over a dielectric material.
  • FIG. 1C illustrates another representation of mutual capacitance in a system 103 according to another embodiment.
  • a first electrode 130 may be disposed on a substrate (e.g., see substrate 115 of Fig. 1A) along a first axis.
  • a second electrode 132 may be disposed along a second axis.
  • Electrodes 130 and 132 may be bar-shaped in one embodiment. In another embodiment, electrodes 130 and 132 may have more complex structures that are based on the bar-shaped theme. At the intersection of electrodes 130 and 132 there may exist a mutual capacitance 137.
  • electrodes 130 and 132 may be formed from copper.
  • electrodes 130 and 132 may be formed from a transparent conductive material such as ITO.
  • Electrodes 130 and 132 may be formed from silver or carbon ink, metal mesh, or another conductive material.
  • Mutual capacitances 127 and 137 may be used to detect the location of one or more conductive objects on or near a surface (e.g. Figures 6A through 6E).
  • An array of mutual capacitances may be used to detect one or more conductive objects on an edge of a device with a touch surface.
  • the edge on which the conductive object is placed may be a surface perpendicular to the substrate on which the electrodes are disposed, as shown in Figure 7A.
  • electrodes 130 and 132 may be formed from copper, a transparent conductive material such as indium tin oxide (ITO), silver or carbon ink, metal mesh, or other conductive materials or combinations of conductive materials.
  • The substrate (e.g., see substrate 115 of Fig. 1A) may be glass, plastic film (such as PET or some other polycarbonate), a flexible printed circuit board material, or a rigid printed circuit board material (such as FR4). Additionally, among embodiments, the substrate may be a separate layer or it may be part of a larger, integrated system as shown in Figures 7A and 7B below, for example.
  • electrodes 130 and 132 may be disposed on two different substrates that are adhered together.
  • electrodes 130 and 132 may be disposed on two sides of the same substrate or may be disposed on the same side of a substrate and the connections for either electrode 130 or electrode 132 formed by a jumper between individual elements of electrodes 130 and 132 and disposed over a dielectric material.
  • Figure 2A illustrates an array of electrodes 202 similar to those shown in Figure 1B.
  • A first plurality of electrodes 220 including multiple diamond-shaped elements may be disposed on a substrate (not shown) along a first axis.
  • A second plurality of electrodes 222 including multiple diamond-shaped elements may be disposed on a substrate along a second axis.
  • Close-up 225 illustrates the intersection between the first plurality of electrodes 220 and the second plurality of electrodes 222, at which there may be a mutual capacitance.
  • This region of mutual capacitance may be described as a unit cell 229 of the array of electrodes 202.
  • Unit cells exist at every intersection and may be used to detect the location of a conductive object or to detect the presence of at least one conductive object along an edge of a touchscreen-enabled device as shown in Figure 7A.
  • Figure 2B illustrates an array of electrodes 203 similar to those shown in Figure 1C.
  • a first plurality of electrodes 230 may be disposed on a substrate (not shown) along a first axis.
  • a second plurality of electrodes 232 may be disposed on a substrate along a second axis.
  • Electrodes 230 and 232 may be bar-shaped in one embodiment. In another embodiment, electrodes 230 and 232 may have more complex structures that are based on the bar-shaped theme. Close-up 235 illustrates the intersection between the first plurality of electrodes 230 and the second plurality of electrodes 232.
  • As in Figure 2A, there may be a mutual capacitance at the intersection of an electrode from the first plurality of electrodes 230 and an electrode of the second plurality of electrodes 232, and this region of mutual capacitance may be described as a unit cell 239 of the array of electrodes 203.
  • Unit cells exist at every intersection and may be used to detect the location of a conductive object or to detect the presence of at least one conductive object along an edge of a touchscreen-enabled device as shown in Figure 7A.
  • Unit cells 229 and 239 and their measured capacitance values may be used to detect the location of one or more conductive objects on or near a surface (e.g. Figures 6A through 6E).
  • An array of unit cells may be used to detect one or more conductive objects of various types, including bare fingers, gloved fingers, styli (either active or passive), or an object hovering above the surface.
  • Unit cells may be used individually, in combination, or both to determine object and interaction type.
  • Unit cells 229 and 239 may be conceptualized geometrically as the smallest unit of tessellation. That is, the smallest repeatable unit of measurement on the array. Unit cells 229 and 239 may also be conceptualized by stating that every point within the unit cell is closer to the center of that unit cell (the center of the intersection between the electrodes on different axes) than it is to the center of any other unit cell. Unit cells 229 and 239 may be conceptualized functionally as the native resolution of the arrays 202 and 203. That is, each row and column may be identified and a position defined on each row and column. For a rectangular array with twelve columns and nine rows, there may be 108 discrete locations.
  • Unit cells 229 and 239 may be conceptualized as pixels of an array, wherein each pixel may be assigned a location and a measurable value specific to that location.
  • An example of a pixel-based interpretation of unit cells is given in Figures 6A and 6B below.
  • Unit cells 229 and 239 may also be referred to as "nodes” wherein each intersection of the row and column electrodes is a node of the array.
  • Unit cells may be referred to merely as intersections in a mutual capacitance sensing array, as shown in Figures 2A and 2B.
  • the term "intersection” is merely shorthand for their construction as an intersection between row and column electrodes.
  • FIG. 3A illustrates one embodiment of a self capacitance measurement circuit 301.
  • Self capacitance sensor 310 (CS) may be formed between an electrode, such as electrode 110 shown in Figure 1A, and ground.
  • the non-grounded side of self capacitance sensor 310 may be coupled to a pin 312 of capacitance measurement circuit 301.
  • a switch network 315 may be used to generate a current by alternately charging self capacitance sensor 310 to a voltage (VDD) and discharging the accumulated charge onto an integration capacitor 322, which may be part of channel 320.
  • The current from switch network 315 and self capacitance sensor 310 may be given by:

    iCS = CS * VDD * fSW

    where fSW is the frequency at which switch network 315 alternates between charging and discharging.
  • Switch network 315 and integration capacitor 322 may be coupled to an input of operational amplifier 324 with a reference voltage (VREF) to allow step-wise linear charging of integration capacitor 322.
  • The voltage across integration capacitor 322 may be measured by analog-to-digital converter (ADC) 326, the output of which may be analyzed by processing block 330. After the voltage across integration capacitor 322 is measured by ADC 326, the voltage across integration capacitor 322 may be reset by switch SW3, allowing a new measurement.
  • FIG. 3B illustrates one embodiment of a mutual capacitance measurement circuit 302.
  • Mutual capacitance sensor 311 may be formed at the intersection of two electrodes (120 and 122 of Figure 1B; 130 and 132 of Figure 1C), which also have a parasitic capacitance 318 (CP).
  • Each plate of mutual capacitance sensor 311 may be coupled to a pin of mutual capacitance sensing circuit 302.
  • a first pin 313 may be coupled to a signal generator (TX) 316 and a second pin 314 may be coupled to channel 320.
  • the alternating voltage of signal generator 316 may produce a current from mutual capacitance sensor 311 to an integrating capacitor 322 of channel 320.
  • The voltage across integration capacitor 322 may be measured by ADC 326, the output of which may be analyzed by processing block 330. After the voltage across integration capacitor 322 is measured by ADC 326, the voltage across integration capacitor 322 may be reset by switch SW4, allowing a new measurement.
  • the current from mutual capacitance sensor 311 may be used to bias an input of a self capacitance measurement circuit 301 similar to that shown in Figure 3A. The bias provided by the mutual capacitance induced current may provide greater dynamic range of the combination of the integration capacitor 322 and ADC 326.
  • While channel 320 of Figures 3A and 3B is shown to comprise an operational amplifier (324) and an ADC (326), one of ordinary skill in the art would understand that there are many ways to measure a voltage on an integration circuit and that the embodiments of Figures 3A and 3B are intended as exemplary and not limiting.
  • ADC 326 may be replaced by a comparator and a counting mechanism gated by the output of the comparator to produce a digital representation of the capacitance on the integrating circuit.
  • the number of counts from the counting mechanism may represent the time required to charge the integrating circuit to a reference voltage of the comparator. Larger charging currents may produce faster charging of the integrating circuit and lower count values.
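  • A short sketch of that count relationship, with all component values assumed for illustration: the count is the number of clock periods needed for the sensor current to ramp the integrating circuit up to the comparator's reference voltage.

```python
# Linear ramp: t = C * V / I, so counts = f_clk * C_int * V_ref / I_sensor.
# Larger sensor currents charge the integrator faster and yield fewer counts.
def expected_counts(i_sensor: float, c_int: float, v_ref: float, f_clk: float) -> int:
    t_charge = c_int * v_ref / i_sensor   # seconds to reach V_ref
    return round(t_charge * f_clk)

# Illustrative values: 2 uA sensor current, 10 nF integrator, 1.2 V reference,
# 24 MHz count clock.
print(expected_counts(2e-6, 10e-9, 1.2, 24e6))  # -> 144000
```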
  • Capacitance measurement circuits such as those shown in Figures 3A and 3B may be implemented on an integrated circuit (IC) alone or with several instances of each to measure the capacitances of a plurality of inputs.
  • Figure 4A illustrates a circuit 401 for measuring multiple capacitances 411.1 through 411.N according to one embodiment.
  • In circuit 401, four capacitances 411.1 through 411.N may be coupled to pins 414.1 through 414.N of sensing IC 405. Each mutual capacitance 411.1 through 411.N may be coupled to channels 320.1 through 320.N, and the outputs of each of channels 320.1 through 320.N coupled to a processing block 330 through multiplexor 410.
  • Figure 4B illustrates a circuit 402 for measuring multiple capacitances 411.1 through 411.N according to another embodiment.
  • In circuit 402, four capacitances 411.1 through 411.N may be coupled to pins 414.1 through 414.N of sensing IC 405.
  • Each capacitance 411.1 through 411.N may be coupled to an input of multiplexor 410, the output of which may be coupled to channel 320.
  • the output of channel 320 may be coupled to processing block 330.
  • Figures 4A and 4B illustrate the logical extremes of individual channels for each capacitance or a single channel for all capacitances.
  • different combinations of the circuits of Figures 4A and 4B may be implemented.
  • multiple channels 320 may be coupled to multiple capacitances 411.
  • the capacitances may be distributed evenly across all the available channels.
  • the capacitances may be distributed unevenly, with certain channels configured to measure capacitance on more pins than other channels.
  • While Figures 4A and 4B illustrate four capacitances, pins, or channels, one of ordinary skill in the art would understand that more or fewer than four of each may be used.
  • the number of capacitances, pins, and channels may be the same or they may be different, depending on the design requirements.
  • Capacitances 411.1 through 411.N may be coupled to signals opposite to pins 414.1 through 414.N to produce a current input to channel 320 representative of a measured capacitance as described in Figures 3A and 3B.
  • Capacitances 411.1 through 411.N may be coupled to signals to produce a current used for calibration of circuits 401 and 402.
  • While Figures 4A and 4B illustrate a multiplexor, a plurality of switches may be configured to perform similar functionality. The representation of the mechanism by which capacitances 411.1 through 411.N are coupled to channel 320, or by which channels 320.1 through 320.N are coupled to processing block 330, by a multiplexor is merely exemplary and not intended to limit the description to a specific circuit element.
  • FIG. 5 illustrates one embodiment of a touchscreen system 501.
  • A touchscreen 510 may be coupled to a sensing IC 505 through pins 414 (e.g., 312 in Figure 3A, 313 and 314 in Figure 3B, and 414 in Figures 4A and 4B).
  • Sensing IC 505 may comprise a channel 320 coupled to the touchscreen electrodes of touchscreen 510 (illustrated in Figures 2A and 2B).
  • the output of channel 320 may be sent to CPU 512 for processing (as shown in Figures 3A and 3B) and then either communicated to a host 530 through communication interface 516 or stored in a memory 514 and communicated to host 530 through communication interface 516 from memory 514.
  • the output of channel 320 may be stored in memory 514 directly (before processing by CPU 512) and either processed by CPU 512 from memory 514 and then communicated to host 530 through communication interface 516 or communicated to host 530 from memory 514 through communication interface 516 without CPU intervention.
  • Tuning and calibration routines may be stored in memory 514 and implemented by CPU 512 through tuning block 513.
  • Calibration of signals from touchscreen 510 through and by channel 320 may provide capacitance measurement data with greater signal-to-noise ratios and fidelity to user interactions.
  • Capacitance measurement data from channel 320 may be representative of the total capacitance measured by channel 320. That is, the capacitance of self or mutual capacitances of Figures 1A through 1C may be converted to a digital value.
  • The digital value may include the parasitic capacitance (318 of Figures 3A and 3B) as well as the native mutual capacitance with no finger present (311 of Figure 3B) and the capacitance of the conductive object or finger.
  • the parasitic capacitance and native mutual capacitance may be subtracted from the measured value as a baseline to yield difference values that are representative of the capacitance from the conductive object or finger. Difference values may be analyzed by processing block 330 to determine if a conductive object is proximate to the array as well as higher-level user interactions.
  • CPU 512 or host 530 may further use capacitance and/or difference values to detect various types of objects and interactions.
  • different levels of data may be communicated to host 530 for processing away from CPU 512. While CPU 512 may perform all of the processing in the specification below, more or less of the data analysis and manipulation may be off-loaded to host 530 based on the processing requirements and overhead of CPU 512, host 530, and the system operation generally.
  • Figure 6A illustrates numerical difference values 601 for a plurality of intersections 611 of a mutual capacitance sensing array.
  • Numerical difference values 601 may be derived from the raw values of, for example, channel 320 (Figure 3B) for every unit cell (229 of Figure 2A and 239 of Figure 2B) or mutual capacitance, CM, 311 (Figure 3B).
  • numerical difference values may be the difference between the raw count values output from channel 320 and a baseline value.
  • the baseline value may be stored globally for the entire array. In another embodiment, the baseline value may be stored for each intersection individually.
  • The baseline value may be stored for multiple groups of sensors depending on each sensor's position on the touchscreen, noise performance of individual sensors, or other design constraints. Baseline values may be determined during development in one embodiment. In another embodiment, baseline values may be calculated at start-up or may be updated during operation of the touchscreen to account for variations in noise experienced by the touchscreen electrodes, physical changes of the touchscreen (heat, humidity, etc.), or other sources of drift in the output channel (e.g., channel 320).
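  • A sketch of the baselining just described, with a per-unit-cell baseline and a slow drift-tracking update; the update policy and rate are illustrative assumptions, not values from the patent:

```python
# Difference values: raw counts minus a per-cell baseline (see Figure 6A).
def difference_values(raw, baseline):
    return [[raw[r][c] - baseline[r][c] for c in range(len(raw[0]))]
            for r in range(len(raw))]

# Assumed drift-tracking policy: while no object is detected, let each cell's
# baseline slowly follow the raw value to absorb heat/humidity/noise drift.
def update_baseline(raw, baseline, alpha=0.01):
    for r in range(len(raw)):
        for c in range(len(raw[0])):
            baseline[r][c] += alpha * (raw[r][c] - baseline[r][c])
```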
  • the numerical difference values 601 of Figure 6A may be illustrated graphically as heat map 602 in Figure 6B.
  • The shade of each cell or mutual capacitance 611 of heat map 602 may be indicative of the numerical difference values 601 of Figure 6A. Darker cells may indicate greater capacitive coupling of a mutual capacitance electrode with a conductive object and less capacitive coupling between the mutual capacitance electrodes themselves.
  • Figure 6C illustrates an example of peak detection scheme 603 based on the data from Figures 6A and 6B.
  • The peak detection scheme 603 may compare each unit cell (229 of Figure 2A and 239 of Figure 2B) or mutual capacitance 611 (Figures 6A and 6B) to those around it. Peak detection may use a sum of the surrounding values to provide a common mode filter to peak detection, as shown in Figures 10B and 10D. Unit cells or mutual capacitance intersections with the highest difference value may be identified as peaks and given an identifier and position.
  • a first peak 631 may be given a first position (X-axis 632 and Y-axis 634).
  • a second peak 635 may be given a second position (X-axis 636 and Y-axis 638).
  • Figure 6D illustrates an example of centroid calculation 604 wherein an array of sensors surrounding each peak is defined and processed.
  • A centroid for each peak may be given by:

    P = Σ (Si * i) / Σ Si

    where P is the calculated position, i is the index of the unit cell under test, and Si is the signal at each unit cell (under test and surrounding unit cells).
  • First peak 631 may be used to define a first array 641 including the 25 unit cells around and including the unit cell at first peak 631.
  • Second peak 635 may be used to define a second array 645 including the 25 unit cells around and including the unit cell at second peak 635.
  • The values of first array 641 and second array 645 may be processed to find the centroid or center of mass of the conductive object based on the values contained within each array (641 and 645). While symmetrical 5x5 arrays are illustrated in and described with regard to Figure 6D, in various embodiments, the arrays may have different dimensions and consequently different numbers of unit cells. Such various embodiments may include 3x3, 4x4, or larger arrays.
  • the arrays may position peaks in the center or the peaks may be offset. Additionally, the arrays may be asymmetrical, with a greater number of rows or columns, or irregular, where each row or column may have a different number of unit cells.
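  • A sketch of the windowed centroid above for the symmetrical 5x5 case; the edge-clipping behavior is an assumed handling of windows that overlap the array border, and the patent notes that offset, asymmetrical, or irregular windows may also be used:

```python
# Centroid over a 5x5 window of unit cells centered on a detected peak:
# position = sum(S_i * i) / sum(S_i), computed independently per axis.
def centroid_5x5(diff, peak_r, peak_c):
    num_x = num_y = den = 0.0
    for r in range(peak_r - 2, peak_r + 3):
        for c in range(peak_c - 2, peak_c + 3):
            if 0 <= r < len(diff) and 0 <= c < len(diff[0]):  # clip at array edges
                s = diff[r][c]
                num_x += s * c
                num_y += s * r
                den += s
    return (num_x / den, num_y / den)  # (x, y) in unit-cell coordinates
```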
  • Figure 6E illustrates an example of the first and second centroids 651 and 655 calculated from first and second arrays 641 and 645 of Figure 6D, when no virtual sensors are determined to be activated.
  • Figure 6F illustrates an example of two conductive objects 661 and 665 moving across a touchscreen and their positions along tracks 663 and 667, respectively.
  • FIG. 7A illustrates one embodiment of a touchscreen stackup of touchscreen system 501 (from Figure 5).
  • Touchscreen stackup 701 may include a display 740.
  • Above display 740 may be disposed a sensor layer 750.
  • Between sensor layer 750 and a conductive object, such as a finger or stylus, may be disposed a cover layer 760.
  • While sensor layer 750 is shown on a single layer of a substrate, this is merely exemplary. In one embodiment, sensor layer 750 may be disposed on the bottom of cover layer 760, reducing the number of layers from three to two in touchscreen stackup 701. In another embodiment, sensor layer 750 may be disposed on the top of display 740, also removing a layer from touchscreen stackup 701. In another embodiment, one or both axes of the electrodes shown on sensor layer 750 may be disposed at various depths within the display. For example, sensor layer 750 may be implemented as in-cell, on-cell, or a hybrid of in-cell and on-cell. Additionally, sensor layer 750 may share certain electrodes with display 740.
  • Touchscreen stackup 701 is illustrated in a touchscreen system 702 in Figure 7B, according to one embodiment.
  • Touchscreen/display 705 (analogous to touchscreen stackup 701 of Figure 7A) may be coupled to touch controller 710 and display controller/driver 715.
  • Touch controller 710 may be configured to sense either self capacitance (Figure 3A) or mutual capacitance (Figure 3B) or both.
  • the output of the touch controller 710 may be communicated to an application processor 730.
  • Touch controller 710 may also be configured to receive commands and data from application processor 730.
  • Information that may be communicated to application processor 730 by touch controller 710 may include the following data for each identified conductive object on the array:
  • Z-Axis Intensity: the strength of the touch, which may be indicative of the size of the conductive object or the pressure with which the conductive object presses against the touch surface (in some embodiments, Z-axis intensity may be used to indicate a distance of an object from the surface or sensing electrodes);
  • Virtual Sensor Activation State: identity, location, and signal level of various active virtual sensors.
  • Application processor 730 may also be coupled to display controller/driver 715 to control what is shown on touchscreen/display 705.
  • Figure 8A illustrates examples of capacitance measurement data for a single conductive object as might be interpreted to be single-touch gestures.
  • a detection of a conductive object in Figure 8A is illustrated as a digital ON/OFF or HIGH/LOW of the conductive object on the capacitance sensor.
  • a single-tap gesture 810 may be detected as a presence of a conductive object detected and then the absence of a conductive object detected to define first touch 811.
  • a double-tap gesture 812 may be detected as a presence of a conductive object detected and then the absence of a conductive object detected to define a first touch 811, then within a specified time a second touch 813 detected.
  • a click-and-drag gesture 814 may be detected as a presence of a conductive object detected and then the absence of a conductive object detected to define a first touch 811, then within a specific time a second touch 815 detected.
  • a click-and-drag gesture may also move a cursor on a display as the second touch remains on the touch surface and moves across the surface.
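  • A sketch of this single-touch timing logic follows. The thresholds are illustrative assumptions; the description above only requires the second touch to arrive "within a specified time."

```python
# Classify Figure 8A's single-touch gestures from touch-down/up timestamps (ms).
TAP_MAX_MS = 200   # assumed: longest press still treated as a tap
GAP_MAX_MS = 300   # assumed: longest gap between first tap and second contact

def classify(d1, u1, d2=None, u2=None):
    if u1 - d1 > TAP_MAX_MS:
        return "press"                    # too long to be a tap
    if d2 is None or d2 - u1 > GAP_MAX_MS:
        return "tap"                      # no timely second contact
    if u2 is not None and u2 - d2 <= TAP_MAX_MS:
        return "double-tap"               # second contact is also a quick tap
    return "click-and-drag"               # second contact remains on the surface
```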
  • Figures 8B through 8E illustrate examples of gestures based on detection of two conductive objects.
  • conductive objects 821 and 823 may move in a circular motion about some center point, either clockwise or counter-clockwise to produce a rotate gesture 802.
  • conductive objects 821 and 823 may move closer together along a substantially linear path to produce a "pinch” or "zoom out” gesture 803.
  • conductive objects 821 and 823 may move farther apart along a substantially linear path to produce a "grow” or "zoom in” gesture 804.
  • conductive objects 821 and 823 may move along substantially parallel paths to produce a "pan” gesture 805.
  • Figures 8F and 8G illustrate gestures based on detection of a single contact moving across a capacitance sensing array.
  • conductive object 821 may move in a substantially straight line to produce a "next item" gesture 806.
  • conductive object 821 may move in a circular motion about some center point, either clockwise or counter-clockwise to produce a scroll gesture 807.
  • Gestures of Figures 8A and 8C-8F may be detected on virtual sensors to achieve additional functionality without a user contacting the touchscreen directly.
  • FIG. 9 illustrates one embodiment of a method 901 of sensing a touchscreen array and determining the appropriate display.
  • Capacitance is first measured in step 910.
  • Step 910 may correspond to self capacitance measurement or mutual capacitance measurement and may use sensing circuits similar to those described in Figures 3A or 3B. In other embodiments, other self or mutual capacitance sensing methods may be used.
  • Raw capacitance values may be used to create a baseline in step 920. Baseline values may then be subtracted from the raw capacitance values in step 930 to generate difference values (as shown in Figure 6A). In one embodiment, difference values may be used to determine calibration parameters for hardware configuration.
  • Calibration parameters may include coupling various unit cells.
  • Difference values from step 930 may be used to determine the type of object (e.g., bare finger, gloved finger, or stylus) that has influenced the measured capacitance values as well as the type of interaction (e.g., contact or hover) in step 940. Once the type of object and interaction has been determined, the position of that object may be calculated in step 950.
  • Successive processing of positions from multiple scans may be used to detect motion of at least one object over the array of capacitance sensing electrodes in step 960, and the motions of those objects, as well as their mere presence or absence, may be used to determine gestures, as illustrated in Figures 8A through 8G.
  • Data may be communicated directly to host 530 of Figure 5. This may allow for faster processing or may off-load the touch controller (using CPU 512) to perform other processing or analysis of the capacitance sensing information.
  • The entire method 901 of Figure 9 may be completed by touch controller 710 of Figure 7B.
  • Various steps of method 901 may be completed by an external processor such as application processor 730 of Figure 7B.
  • data may be communicated to and from touch controller 710 through communication interface 516 of Figure 5.
  • Information communicated to the host may be stored in a memory (such as memory 514) or communicated through a processing unit (such as CPU 512).
  • additional processing steps may be completed by touch controller 710 or application processor 730 and the results of those steps used in performing the steps of method 901 illustrated in Figure 9.
  • A contact hovering over an array of sensors like those shown in Figures 2A and 2B may generate significantly lower difference values than the difference values shown in Figure 6A, with far greater relative variability across the panel and over time.
  • An example of this is illustrated in Figure 10A, wherein an 8x12 array 1001 of mutual capacitance sensors (unit cells) 1011 may have difference values with a maximum of 80, unit cell 1021, and a minimum of -9, meaning that the noise in the system may be such that the measured capacitance is actually lower than the baseline.
  • an identified peak unit cell may move quickly between successive scans of the array, wherein each scan measures the capacitance of a necessary number of unit cells, or intersections between rows and columns of electrodes.
  • FIG. 10B illustrates an embodiment of the 8x12 array 1001 of mutual capacitance unit cells 1011 wherein the value of each intersection is replaced with a sum of the difference values about each unit cell.
  • the unit cell 1021 now has a value of 275, which is the sum of the previous peak of 80 and the eight other unit cells surrounding the previous peak and contained within a 3x3 matrix of unit cells 1023.
  • Each unit cell (e.g., unit cell 1011) in the array 1001 of mutual capacitance unit cells may also be summed with its eight surrounding unit cells, which results in the values shown in Figure 10B for the unit cell 1021 and the grouping of unit cells 1023.
  • FIG. 10C illustrates another embodiment, an 8x12 array 1003 with mutual capacitance values for each unit cell.
  • the unit cell with the highest value (peak) is 24, corresponding to unit cell 1031 (E4).
  • The peak at unit cell 1031 is only a single unit higher than three other cells (E5, D6, and E6), and the peak may move with only a minor movement of a contact over the array or in response to noise, causing jitter in the detection and position calculation from steps 940 and 950 of Figure 9.
  • Figure 10D illustrates an embodiment of the 8x12 array of Figure 10C with each of the middle 4x8 unit cells (columns C-F and rows 3-10) summed with the surrounding 24 unit cells comprising a 5x5 matrix of unit cells (1033 from Figure 10C).
  • the highest value (peak 1042) is 362, corresponding to cell E6.
  • Cell E6 has a value 14 units greater than the next highest cell (E5). This means that small changes in the capacitance may be less likely to move the peak, allowing for a more stable window of values for calculating hover position.
  • Figure 11A illustrates one embodiment of a 3x3 matrix of unit cells 1101 that may be used to calculate a processed capacitance for each unit cell.
  • A unit cell under test 1111 may be updated to include the sum of the values of each unit cell within one unit cell above, below, left, and right of the unit cell under test.
  • For the example values of Figures 10A and 10B, the new value may be 275.
  • a 3x3 matrix of unit cells 1101 is shown in Figure 11A, one of ordinary skill in the art would understand that matrices of different dimensions and shapes may be used.
  • a square fully populated matrix is shown, one of ordinary skill in the art would understand that matrices with rounded corners (missing cells) may be used.
  • a 5x5 matrix may be used (e.g. 1033 of Figure IOC), including the unit cell under test 1111 and the 24 surrounding unit cells.
  • the unit cell with a value of 80 from Figure 10A would then have a value of 445.
  • Figure 11B illustrates an embodiment of a 5-unit cell group of unit cells 1102 that may be used to calculate a processed capacitance for each unit cell.
  • A unit cell under test 1112 may be updated to include the sum of the values of each unit cell immediately above, below, left, and right of the unit cell under test 1112.
  • unit cells located diagonal from the unit cell under test may not be included in the processed capacitance.
  • the example from Figures 10A and 10B with a difference value of 80 may be updated to be 105.
  • In another embodiment, a 9-sensor matrix may be used, including values of the two unit cells immediately above, below, left, and right of the unit cell under test 1112. In this embodiment, the above example may be updated to have a processed capacitance of the unit cell under test 1112 of 155. In still another embodiment, only the values along diagonals from the unit cell under test 1112 may be used.
  • unit cells may be used that include unit cells representative of the array and the unit cell under test.
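  • A sketch of this neighborhood-sum processing for a square NxN window follows; cells that fall off the array simply contribute nothing, which corresponds to the missing-cell (rounded-corner) behavior described above. With the data of Figure 10A, the 3x3 form reproduces the 275 shown at unit cell 1021 in Figure 10B.

```python
# Replace each unit cell's value with the sum of the NxN window around it,
# as in the 3x3 sums of Figure 10B and the 5x5 sums of Figure 10D.
def neighborhood_sums(diff, n=3):
    k = n // 2
    rows, cols = len(diff), len(diff[0])
    return [[
        sum(diff[rr][cc]
            for rr in range(max(0, r - k), min(rows, r + k + 1))
            for cc in range(max(0, c - k), min(cols, c + k + 1)))
        for c in range(cols)] for r in range(rows)]
```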
  • FIG. 12 illustrates one embodiment of a method 1200 for generating processed capacitances for unit cells for hover position calculation.
  • Capacitance may be first measured in step 1210.
  • Capacitance may be measured according to Figures 3A or 3B or other self or mutual capacitance methods useful for determining a change in capacitance at a plurality of intersections of mutual capacitance electrodes or at locations of self capacitance electrodes.
  • each measured capacitance may be stored in a location corresponding to each intersection, or unit cell, in the array in step 1220.
  • the measured capacitance may be stored as the raw capacitance value converted to a digital value in one embodiment.
  • the measured capacitance may be stored as the difference between the raw capacitance value and a baseline value.
  • the baseline value may be static, dynamic, global to the entire array of unit cells, specific to each unit cell, or some combination of each.
  • The value at each unit cell may be updated in step 1230 as described with regard to Figures 11A and 11B. The array may then be updated with the processed capacitance values in step 1240.
  • The processed capacitance values from steps 1230 and 1240 may then be used to determine at least one peak unit cell in step 1250.
  • the peak unit cells(s) from step 1250 may then be used to calculate position of a contact hovering above the capacitance sensing electrodes in step 1260.
  • Peak unit cell detection may be completed as shown in Figures 10A through 10D, 11A, and 11B. At various stages, a hover may be detected (as opposed to a touch of the surface or some other interaction between a conductive object and the capacitance sensing electrodes) in block 1201. If a hover is not detected before various steps of method 1200, the remaining steps may not be completed, and the capacitance of each electrode or unit cell of electrodes may be measured again, or some other processing of the capacitance values completed.
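  • A sketch of this gating structure follows. The helper functions are stand-ins for the earlier sketches; the hover test itself is left abstract because method 1200 gates on a hover determination rather than on any specific threshold named in the description.

```python
# Method-1200 flow: measure and store (steps 1210-1220), compute and store
# processed sums (1230-1240), find a peak (1250), and calculate position
# (1260), proceeding only while block 1201 still reports a hover.
def process_scan(measure_array, is_hover, sum_array, find_peak, centroid):
    diff = measure_array()                    # steps 1210-1220
    if not is_hover(diff):
        return None                           # block 1201: rescan instead
    summed = sum_array(diff)                  # steps 1230-1240
    peak_r, peak_c = find_peak(summed)        # step 1250
    return centroid(summed, peak_r, peak_c)   # step 1260
```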
  • FIG. 13 illustrates one embodiment of an array 1300 of mutual capacitance sensors 1311 over which an object 1320 is hovering.
  • The measured capacitance values of each unit cell may be low enough, and their variability high enough, to make position calculation at the edge of an array imprecise. Consequently, an edge zone 1330 may be identified in which a correction factor may be applied. While a single edge zone 1330 is shown in Figure 13, edge zones may be defined for each edge/axis.
  • Figure 14A illustrates an embodiment of an array 1400 of mutual capacitance unit cells 1411, wherein each axis has two edge zones defined at each side of the array.
  • Edge zones 1431 and 1432 may be defined for positions along the top and bottom of the array.
  • Edge zones 1433 and 1434 may be defined for positions along the left and right of the array.
  • a conductive object 1420 may be detected by the array 1400 and capacitance measurement and processing circuitry (e.g., 301 or 302 of Figures 3A and 3B) as it moves toward the right edge of the array 1400 and into edge zone 1433.
  • Conductive object 1420 may move along a path 1425.
  • Figure 14B illustrates a close up view of section 1440 from Figure 14A.
  • Path 1425 is illustrated as the actual position of conductive object 1420 above array 1400 from Figure 14A.
  • an uncorrected position of the path may begin to differ significantly from the actual position along path 1425.
  • corrected path may have more or less fidelity to path 1425, however, the difference between corrected path 1429 and path 1425 is smaller than the difference between path 1427 and path 1425.
  • determination that an object is within an edge zone may be made by a calculated position, by particular unit cells recognized as peaks, or by sums of groups of sensors at or near the edge of the array of unit cells.
  • the same correction scheme may be applied to conductive objects detected in each zone.
  • a different scheme may be applied to each axis or to near or far sensors (relative to the connection of the electrodes to the sensing circuitry).
  • a position may be scaled for the entire active area of the array, rather than just the zones at the edge of each axis.
  • In one embodiment, the corrected position in an edge zone may be given by equation (8), where:
  • P is the uncorrected value of the X position or Y position
  • P' is the corrected value of the X position or Y position
  • B is the position of the border of the edge zone
  • P_max is the maximum possible position value
  • position changed with a second order or higher order correction may be given by:
  • P' = a_2 * (B - P)^2 + a_1 * (B - P) + a_0    (9)
  • where:
  • P is the uncorrected value of the X position or Y position
  • P' is the corrected value of the X position or Y position
  • B is the position of the border of the edge zone
  • a_i are correction coefficients (greater than 1)
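  • A minimal sketch of the second-order correction of equation (9); the coefficient values a device would use are tuned per axis and per edge zone and are passed in here rather than assumed.

      /* Equation (9): P' = a2*(B - P)^2 + a1*(B - P) + a0 */
      static float correct_edge_position(float p, float b,
                                         float a2, float a1, float a0)
      {
          float d = b - p;   /* distance from the edge zone border */
          return a2 * d * d + a1 * d + a0;
      }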
  • Figure 15 illustrates one embodiment of a method 1500 for correcting position of a conductive object hovering above an edge zone as illustrated in Figures 13, 14A, and 14B.
  • the location of the conductive object may be first calculated in step 1510. If the location is within an edge zone along the near end of the X axis in step 1515, a near correction scheme may be applied in step 1520. After the near correction scheme is applied, method 1500 may progress to decision step 1535, wherein the position is compared to an edge zone along the near end of the Y axis. If the position is within the edge zone along the near end of the Y axis, a near correction scheme may be applied in step 1540.
  • If, in step 1515, the location is not within an edge zone along the near end of the X axis, method 1500 may proceed to step 1525, wherein the position is compared to an edge zone along the far end of the X axis. If the position is within the edge zone along the far end of the X axis, a far correction scheme may be applied in step 1530 before method 1500 proceeds to step 1535. Similarly, if in step 1535 the location is not within an edge zone along the near end of the Y axis, method 1500 may proceed to step 1545, wherein the position is compared to an edge zone along the far end of the Y axis.
  • If the position is within the edge zone along the far end of the Y axis, a far correction scheme may be applied in step 1550 and method 1500 may proceed to step 1560 and report the position. If the position is not within any edge zone, the calculated position from step 1510 may be reported without any correction from steps 1520, 1530, 1540, or 1550. In one embodiment, formulae similar to those of equations (5) and (6) may be used. However, one of ordinary skill in the art would understand that different correction schemes may be used to produce positions of hovering conductive objects that are more faithful to the actual position of the hovering conductive objects over the array of electrodes.
  • While equations (5) and (6) are described as applicable to near X and Y positions and far X and Y positions, respectively, one of ordinary skill in the art would understand that different correction schemes may be used for each edge zone.
  • While the same equation is applied to each zone, different coefficients may be used with the same equations for each edge zone. Different zone sizes may be used for different axes or for different sides of the array.
  • the edge zones may use a calculated position or the location of a peak unit cell.
  • the shape and size of the edge zone may be changed based on the height of a measured peak or on the activation level (measured capacitance) on at least one capacitance sensing electrode or combination of capacitance sensing electrodes.
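  • The per-axis, near/far flow of method 1500 might be sketched as below. The zone borders, the 0.8 stretch factor, and the helper names are illustrative assumptions; equations (5), (6), (8), or (9) could stand in for the placeholder corrections.

      typedef struct { float x, y; } position_t;

      /* Placeholder corrections: stretch positions inside a zone
       * toward the corresponding physical edge of the array. */
      static float near_correct(float p)
      {
          return p * 0.8f;                     /* toward position 0 */
      }

      static float far_correct(float p, float p_max)
      {
          return p_max - (p_max - p) * 0.8f;   /* toward p_max      */
      }

      /* Steps 1515-1550: test each axis against its near and far
       * edge zones and apply the matching scheme; the result is then
       * reported, as in step 1560. */
      static position_t apply_edge_corrections(position_t pos,
                                               float x_near, float x_far,
                                               float y_near, float y_far,
                                               float x_max, float y_max)
      {
          if (pos.x < x_near)      pos.x = near_correct(pos.x);
          else if (pos.x > x_far)  pos.x = far_correct(pos.x, x_max);

          if (pos.y < y_near)      pos.y = near_correct(pos.y);
          else if (pos.y > y_far)  pos.y = far_correct(pos.y, y_max);

          return pos;
      }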
  • FIG. 16 illustrates one embodiment of a method 1600 for detecting and confirming a conductive object hovering above an array of electrodes as illustrated in Figures 10A, 10C, and 14A and detected by system 501 of Figure 5.
  • Capacitance is first measured in step 1601. Capacitance measurement may be performed with both self capacitance measurement circuitry and mutual capacitance measurement circuitry. Following hover detection and position calculation with mutual capacitance, values of mutual capacitance at each intersection (or unit cell) of an array may be recorded to a memory in step 1610.
  • Mutual capacitance may be measured with the circuit shown in Figure 3B and stored in a memory as shown in Figure 5. After all of the mutual capacitance data are recorded, a common mode filter may be applied in step 1620.
  • The details of the common mode filter processing of step 1620 are illustrated in more detail in the embodiments shown in Figures 17A and 17B.
  • the processed mutual capacitance data, after the common mode filter of step 1620 is applied, may be recorded in step 1630 and a position of the conductive object hovering above the array of electrodes may be calculated in step 1640.
  • a valid hover may be confirmed using self capacitance measurement and processing.
  • Self capacitance may be measured in step 1601.
  • a self capacitance measurement circuit similar to that shown in Figure 3A may be used.
  • the measured self capacitance values may be recorded in step 1615 and may be processed in unit cell group processing of step 1625.
  • the processed self capacitance data may be recorded in step 1635, and a baseline value, which may be used in the determination of a valid hover, may be updated in step 1645.
  • the updated self capacitance baseline may be used to validate a hover detection in step 1655. If the hover detection of step 1655 is validated, the hover position calculated in step 1640 may be reported to a host in step 1661.
  • method 1600 may be completed by CPU 512 of Figure 5. In various other embodiments, portions of method 1600 may be completed by a combination of CPU 512, tuning logic 513, and host 520 of Figure 5.
  • Figure 17A illustrates one embodiment of the common mode filter of step 1620 of method 1600. Mutual capacitance may be first measured in step 1701, similar to the capacitance measurement of step 1601 of method 1600 and using a circuit similar to that shown in Figure 3B.
  • method 1700 may confirm that the device is in hover mode and that at least one conductive object is detected by electrodes associated with or corresponding to unit cells, as illustrated in Figures 10A-D, 13, and 14A, and capacitance measurement circuitry and processing logic in step 1715. If there is no conductive object detected in step 1715 or the device is not in hover mode, method 1700 may return to step 1701 and measure mutual capacitance again. If hover mode is active and there is at least one conductive object detected, each mutual capacitance sensor (unit cell or electrode intersection) may be compared to an exclusion region in step 1725. In one embodiment, the exclusion region may be defined by a level of capacitance change on each unit cell.
  • If the mutual capacitance change is large enough, the unit cell may be determined to be within the exclusion region and the value of that unit cell not included, in one embodiment.
  • unit cells close enough to a peak unit cell may be determined to be within the exclusion region, meaning that the determination is spatial rather than based on signals. If a unit cell is determined to be within the exclusion region, its values may be withheld (excluded) from a common mode filter average in step 1730. If the unit cell is not within the exclusion region, a difference value may be calculated from the measured mutual capacitance value and a common mode filter baseline value in step 1740. If the difference value is greater than a threshold in step 1745, the raw capacitance measurement value may be added to a sum in step 1750.
  • Step 1755 may ensure that all receive channels (e.g., channel 320 of Figure 3B) have been analyzed and their values added to the sum if the unit cell is outside the exclusion region (step 1725) and the difference value greater than the threshold (step 1745). If all receive channels have been processed in step 1755, an average of the processed receive channels may be calculated in step 1760. In one embodiment, the average of the processed receive channels may be calculated from the summed difference values of step 1750 divided by the total number of difference values added to the sum. Note that the sum does not include values from unit cells within the exclusion zone (step 1725) or with difference values below the threshold (step 1745).
  • After the average is calculated in step 1760, it may be subtracted from the raw capacitance values of all of the sensors in step 1780, regardless of whether they are outside the exclusion zone or whether their difference value is greater than the threshold.
  • the processed values from step 1780 may then be recorded in step 1630 of method 1600 and the remainder of method 1600 completed.
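  • By way of illustration, the per-channel filtering of steps 1725 through 1780 might be sketched in C as follows; the channel count, the exclusion level, and the difference threshold are assumptions made for the example, not disclosed values.

      /* Sketch of the common mode filter for one set of receive
       * channels: channels inside the exclusion region (large signal,
       * i.e., under the hovering object) are left out of the average;
       * the remaining channels with differences above the threshold
       * contribute to it; the average is then subtracted from every
       * channel, as in step 1780. */
      #define NUM_RX 16

      static void common_mode_filter(int raw[NUM_RX],
                                     const int baseline[NUM_RX],
                                     int exclusion_level,
                                     int diff_threshold)
      {
          int sum = 0, count = 0;

          for (int i = 0; i < NUM_RX; i++) {
              int diff = raw[i] - baseline[i];          /* step 1740 */
              if (diff >= exclusion_level)
                  continue;                             /* step 1730 */
              if (diff > diff_threshold) {              /* step 1745 */
                  sum += raw[i];                        /* step 1750 */
                  count++;
              }
          }

          if (count > 0) {
              int avg = sum / count;                    /* step 1760 */
              for (int i = 0; i < NUM_RX; i++)
                  raw[i] -= avg;                        /* step 1780 */
          }
      }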
  • method 1700 may be completed by CPU 512 of Figure 5. In various other embodiments, portions of method 1700 may be completed by a combination of CPU 512, tuning logic 513, and host 520 of Figure 5.
  • Figure 17B illustrates the result 1790 of the common mode filter on the mutual capacitance hover data.
  • Line 1791 illustrates the recorded mutual capacitance data of step 1610 of methods 1600 and 1700.
  • Line 1795 illustrates mutual capacitance data of step 1610 where all data for mutual capacitance at each unit cell is treated with the same subtraction. While the values of mutual capacitances of unit cells near the edge of the hover detection are zeroed out, a great amount of signal is lost.
  • Line 1793 illustrates mutual capacitance data of step 1780 with the common mode filter process complete. Noise at the edges of the hover detection is zeroed out, but a greater amount of signal is retained for calculation of hover position.
  • FIG. 18A illustrates one embodiment of a method 1800 of the unit cell group processing of step 1625 of method 1600.
  • Self capacitance may be first measured in step 1801, similar to the capacitance measurement of step 1601 of method 1600 and with a circuit similar to that of Figure 3A.
  • method 1800 may populate a plurality of virtual unit cells with values derived from the measured self capacitance values of real unit cells.
  • Virtual unit cells may correspond to locations on the array just outside the physical area covered by the electrodes and corresponding to unit cells as shown in Figure 2A-B and Figures 10A-D. Virtual unit cells may not be part of the array of unit cells, but rather created in firmware to represent unit cells beyond the physical limits of the array of unit cells. In one embodiment, virtual unit cells may be populated according to Table 1.
  • Table 1 is representative of one side of an array of self capacitance sensors; a similar table may be used for the other side of the array of self capacitance sensors.
  • An example is shown in Table 2.
  • While Tables 1 and 2 show the same equations used for both the right and left of the array, one of ordinary skill in the art would understand that other calculations may be used to derive values for virtual unit cells. Additionally, while only two basic equations are shown and applied to specific unit cells, various other equations may be used and may differ based on the position of the virtual unit cell relative to the rest of the array of unit cells, the axis of the virtual unit cell or the axis under test, measured capacitance values for the various unit cells, including a peak unit cell, or the mode of operation of a device or touchscreen controller. Virtual unit cell and real unit cell data may be summed in step 1820 to create a new value for each real unit cell in the self capacitance array.
  • The sum index from step 1820 may be compared to a hover threshold in step 1825, and if the sum index is greater than the hover threshold, a hover is detected and the position from step 1640 of method 1600 is reported to the host in step 1661. If the sum index is not greater than the hover threshold, the hover position calculated in step 1640 may not be reported to the host in step 1661.
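  • Since the population equations of Tables 1 and 2 are not reproduced in this extract, the sketch below assumes, purely for illustration, that each virtual unit cell linearly extrapolates the two outermost real unit cells on its side of the array, and it collapses the per-cell sums of step 1820 into a single sum index for the comparison of step 1825.

      #define NUM_REAL    12        /* real self capacitance cells */
      #define NUM_VIRTUAL 2         /* virtual cells on each side  */

      /* Returns nonzero when the sum of virtual and real unit cell
       * data exceeds the hover threshold (steps 1810-1825). */
      static int hover_validated(const int self_cap[NUM_REAL],
                                 int hover_threshold)
      {
          int sum = 0;

          /* assumed extrapolation, standing in for Table 1 */
          for (int v = 0; v < NUM_VIRTUAL; v++) {
              sum += 2 * self_cap[0] - self_cap[1 + v];
              sum += 2 * self_cap[NUM_REAL - 1] - self_cap[NUM_REAL - 2 - v];
          }

          for (int i = 0; i < NUM_REAL; i++)   /* real unit cell data */
              sum += self_cap[i];

          return sum > hover_threshold;        /* step 1825 */
      }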
  • method 1800 may be completed by CPU 512 of Figure 5. In various other embodiments, portions of method 1800 may be completed by a combination of CPU 512, tuning logic 513, and host 520 of Figure 5.
  • Figure 18B illustrates the result 1890 of the sum index and hover threshold comparison of method 1800.
  • the values of lines 1895 (raw capacitance data from step 1615) and 1893 (processed capacitance data from steps 1614 and 1820) are compared to the hover threshold 1891. If the raw values are less than the threshold and the processed values are greater than the threshold, a valid hover is detected in step 1655.
  • the next step may be to determine the position of that hover on the array.
  • Figure 19A illustrates an array 1901 of mutual capacitance unit cells with capacitance changes similar to those shown in Figure 10C.
  • Peak unit cell 1910 is identified using the 5x5 matrix of unit cells as illustrated in Figure 10C and described in Figure 11A.
  • Figure 19A illustrates the calculation of an EdgeCutoff that may be used to determine which unit cells are used to calculate the position of a hover over the array 1901 of mutual capacitance unit cells. Because of noise at the edges of a hover, there may exist jitter on the position calculation of a hover that creates imprecision in the position reported to the host and viewed by the user. Determining which unit cells are used in calculating the position by setting a threshold that is derived, in part, from the measured capacitance changes may improve performance and the user experience.
  • After peak unit cell 1910 is identified, the center three 1x5 matrices of cells about the peak may be summed. In the example in Figure 19A, the row with peak unit cell 1910 is summed to equal 84 (14+21+23+16+10) and the rows immediately above and below peak unit cell 1910 are summed to equal 97 and 94, respectively. The column with peak unit cell 1910 is summed to equal 106.
  • While a 1x5 matrix is shown in Figure 19A, different shapes and sizes of matrices may be used. For example, 1x3, 1x4, or even 1x87 matrices may be used.
  • the size of the matrix may be set during development in one embodiment. In another embodiment, the size of the matrix may be set based on the unit cell or column or row under test, the signal level of a peak unit cell, or a mode of operation of the device or touchscreen controller. In the example of Figure 19A, there are six columns with values above the EdgeCutoff and six rows with values above the EdgeCutoff. These six rows and six columns may be used in calculating the position of the hover.
  • the scalar of 10/54 may be determined in development and design of the touchscreen in one embodiment. In another embodiment, there may be multiple values stored in the device, either in the touchscreen controller or in the host, that may be used based on varied capacitance change values or user settings. In one embodiment, a user may be able to increase or decrease the scalar by entering a settings window and adjusting the sensitivity of the hover detection and the accuracy of the position, relative to other touch parameters. For example, decreasing the EdgeCutoff may provide greater range in calculation of hover position, but may also introduce more jitter into the position calculation. If a user values sensitivity over precision, they may choose to decrease the value of the scalar.
  • Figure 19B illustrates an embodiment of the EdgeCutoff and how it may be applied to the summed values in determining which rows are used in calculation of the position of the hover and how much of each column and row are used. While Figure 19A illustrates only the calculation of position along rows, one of ordinary skill in the art would understand that a similar method and calculation may be used for columns.
  • Each row with values above the EdgeCutoff 1930 is shown as a histogram with the value at the top. Rows 2 and 9 are below EdgeCutoff 1930 and are therefore not used in the calculation of the hover position. Rows 3 and 8 are above EdgeCutoff 1930, but they are the first rows on each side of peak unit cell 1910 that are included and their contributions may therefore be discounted.
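  • How the EdgeCutoff is derived might be sketched as below. Exactly which sums feed the 10/54 scalar is not spelled out in this extract, so the largest of the 1x5 sums about the peak is assumed here for illustration.

      #define CUTOFF_NUM 10
      #define CUTOFF_DEN 54

      /* row_sums[] holds the 1x5 sums about the peak (e.g., 84, 97,
       * 94, and 106 in the Figure 19A example); rows or columns whose
       * sums fall below the returned cutoff are dropped from the
       * position calculation. */
      static int edge_cutoff(const int row_sums[], int num_sums)
      {
          int max_sum = 0;
          for (int i = 0; i < num_sums; i++)
              if (row_sums[i] > max_sum)
                  max_sum = row_sums[i];
          return (max_sum * CUTOFF_NUM) / CUTOFF_DEN;
      }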
  • hover detection may be susceptible to false detections or rapidly changing positions due to proximity of other objects near or on the array of capacitance sensing electrodes.
  • some level of hysteresis may be used.
  • Figure 20 illustrates a method 2000 for maintaining position of hover in the presence of other signals (such as another hover or a grip) or high noise.
  • Method 2000 may be used to ensure that a hover detection is made and tracked over the screen as a user's hand gets closer to the array of electrodes or if a user's fingers holding the device begin to change the capacitance of electrodes along the edge of the array of electrodes.
  • Mutual capacitances for each unit cell may be measured in step 2010 and the sum of each unit cell and the 24 unit cells in a 5x5 matrix about that unit cell calculated in step 2020. This may be similar to that described with regard to Figure 10C.
  • a peak sensor (or sensors) is identified in step 2030, for example as shown and described in Figures 10, 11, and 12.
  • the presently identified peak unit cell may be compared to see if it is close to or far from the previous peak. If the presently identified peak unit cell is close to the previous peak unit cell, a normal hover threshold may be applied in step 2042. In various embodiments, this may mean that normal scalars are used (see EdgeCutoff from Figures 19A and 19B) or it may mean that values of capacitance changes necessary to detect that a hover is even present are maintained.
  • a hysteresis may be applied to the various hover thresholds in step 2044.
  • scalars may be increased for the EdgeCutoff, or the values of capacitances necessary to detect that a hover is present may be raised.
  • the capacitance values may be analyzed to detect a hover in step 2045. If the hover requirements are not met in step 2045, method 2000 may return to step 2010 and measure the mutual capacitances again. If the hover requirements are met in step 2045, the height (Z position) of the object above the array and the object size may be calculated in step 2050.
  • the object size from step 2050 may be compared to a threshold for close objects in step 2053. If the object size from 2050 is greater than the threshold for close objects in step 2053, the position of the object may be calculated in step 2060 and recorded in step 2070. If the object size from 2050 is not greater than the threshold for close objects in step 2053, method 2000 may return a state of "no hover" in step 2080 and may return to step 2010 and measure mutual capacitances again. For far peaks identified in step 2035, the object size from step 2050 may be compared to a threshold for far objects in step 2057.
  • the position of the object may be calculated in step 2060 and recorded in step 2070. If the object size from 2050 is not greater than the threshold for far objects in step 2057, method 2000 may return a state of "no hover" in step 2080 and may return to step 2010 and measure mutual capacitances again.
  • Method 2000 of Figure 20 may be used to ignore extraneous or unintended interaction with the array that may register as a hover by using expected values to increase or decrease the criteria for detecting a hover.
  • the thresholds for qualifying a hover may be relaxed compared to the qualifications for a hover detection that is farther from the expected position. This means that a hover that is moving rapidly high above the array is more likely to be dropped than a finger at the same height that moves more slowly.
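  • The hysteresis of steps 2040 through 2044 might be sketched as below; the squared-distance test and the 3/2 scaling are illustrative assumptions.

      /* Returns the hover threshold to apply: the normal threshold
       * when the new peak is near the previous one (step 2042), or a
       * raised threshold when it is far from it (step 2044). */
      static int hover_threshold_with_hysteresis(int base_threshold,
                                                 int dx, int dy,
                                                 int near_distance)
      {
          int dist2 = dx * dx + dy * dy;   /* peak-to-peak distance^2 */
          if (dist2 <= near_distance * near_distance)
              return base_threshold;       /* close peak: normal      */
          return (base_threshold * 3) / 2; /* far peak: hysteresis    */
      }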
  • Figure 21 illustrates additional steps in method 2100 for determining which peak unit cell to use in detecting or maintaining a hover, as well as calculating the hover position.
  • Method 2100 may begin by recording positions for close and far objects in step 2110, similar to step 2070 of Figure 20. If a large object was previously detected in step 2115, method 2100 may then determine if a large object is presently detected in step 2125. If a large object is presently detected in step 2125, a large object debounce counter may be decremented in step 2130 and a hover not reported in step 2190. If a large object was not previously detected in step 2115 or a large object is not detected in step 2125, a peak unit cell may be detected in step 2135. If a peak unit cell is not detected, a hover is not reported in step 2190.
  • If a peak unit cell is detected in step 2135 and the far peak unit cell is much higher than the close peak, the far peak is used for hover position calculation in step 2154. If the far peak unit cell is not much higher than the close peak unit cell, the close peak unit cell is used for hover position calculation in step 2152. Hover position may be updated based on the identified peak unit cells from steps 2152 and 2154 in step 2160. If the peak unit cell is not part of a large object in step 2165, the final position is calculated and a hover contact is reported to the host in step 2180. If the peak unit cell is part of a large object, a large object debounce counter may be initialized in step 2170 and a hover not reported in step 2190.
  • FIG. 22A illustrates an embodiment of a method 2201 for calculating the position of a hover contact on an array of electrodes using a median unit cell and the pitch of the electrodes of the array. All unit cells used in calculation of position may be summed in step 2210.
  • the unit cells used for hover position calculation are determined using the method outlined with regard to Figures 19A and 19B.
  • a straight usage of a matrix of unit cells around a peak as shown in Figures 10B-C and 11 may be used.
  • all unit cells in the array or other subsets of unit cells may be used. After all of the signals from all relevant unit cells are summed in step 2210, the sum may then be divided by 2 in step 2220. Starting with the first unit cell in the array to be used in position calculation, values on each unit cell are added together in step 2230.
  • the value of a first unit cell, S_N, may be added to the value of a second unit cell, S_N+1.
  • the values of S_N and S_N+1 may then be recorded as a total, S_TOTAL, in step 2240.
  • the value of S_TOTAL may then be compared to the sum of all unit cells divided by 2 from step 2220 in step 2245. If S_TOTAL is less than half of the sum of all unit cells, the signal from the next unit cell may be added to S_TOTAL in step 2250 and the comparison of step 2245 may be repeated.
  • a ratio, R, may be calculated by subtracting half of the sum of all unit cells from S_TOTAL and dividing that value by the value of the unit cell, S_HALF, that caused S_TOTAL to pass half of the sum of all unit cells in step 2260. Position may then be calculated as the ratio, R, times the unit cell pitch plus the number of previous unit cells times the unit cell pitch in step 2270. Dividing the sum of all relevant unit cells by two is not intended to be limiting. In some embodiments, different values may be used in the denominator, including whole numbers or fractions.
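  • A sketch of method 2201 in C; the signal buffer, its length, and the unit cell pitch are assumptions for the example, and the worked numbers of Figure 22B below follow this same arithmetic.

      /* Median position: accumulate cell signals until the running
       * total crosses half of the full sum (steps 2210-2250), then
       * place the position within the crossing cell by the overshoot
       * ratio R (steps 2260-2270). */
      static float median_position(const int s[], int n, float pitch)
      {
          int total = 0;
          for (int i = 0; i < n; i++)          /* step 2210 */
              total += s[i];
          if (total <= 0)
              return 0.0f;                     /* nothing to locate */

          float half = total / 2.0f;           /* step 2220 */
          int running = 0, k = 0;
          while (k < n && running < half)      /* steps 2230-2250 */
              running += s[k++];

          int s_half = s[k - 1] > 0 ? s[k - 1] : 1;     /* crossing cell */
          float r = (running - half) / (float)s_half;   /* step 2260 */
          return r * pitch + (float)(k - 1) * pitch;    /* step 2270 */
      }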
  • Figure 22B illustrates an example of method 2201 of Figure 22A.
  • the position using a standard centroid calculation is shown as centroid position 2280. However, this position may be unnecessarily affected by the sensor values at the right edge of the contact.
  • the median position 2282 is shown to be slightly left of the centroid position 2280.
  • S_TOTAL is 270, which means that the position is calculated as 28.125% across sensor 3. If a standard centroid calculation is used, the position is calculated as 44.34%, according to equation (3).
  • Figures 23A, 23B, and 23C illustrate various embodiments of ratios of values on unit cells for different contact types near the edge of an array of capacitance sensing electrodes. These ratios may be used to differentiate between the different contacts and apply different position calculation and other processing algorithms to the capacitance information. The sensors used in calculation may be determined similarly to those shown in Figures 19A and 19B.
  • Figure 23A illustrates a grip contact on the edge of an array.
  • a peak unit cell may be identified and a pair of 1x5 matrices about that peak unit cell summed. The ratio of the outermost matrix to the more centered matrix may then be calculated and recorded.
  • the sum of the 1x5 matrix at the edge of the array is 236 and the sum of the more centered 1x5 matrix is 64, yielding a ratio of 3.7.
  • Figure 23B illustrates a hover contact over the edge of an array.
  • a peak unit cell may be identified and a pair of 1x5 matrices about that peak unit cell summed as described with regard to Figure 23A. The ratio of the outermost matrix to the more centered matrix may then be calculated and recorded.
  • the sum of the 1x5 matrix at the edge of the array is 68 and the sum of the more centered 1x5 matrix is 45, yielding a ratio of 1.5.
  • Figure 23C illustrates a hover contact near the edge of an array, but not over it.
  • a peak unit cell may be identified and a pair of 1x5 matrices about that peak unit cell summed as described with regard to Figures 23A and 23B.
  • the ratio of the outermost matrix to the more centered matrix may then be calculated and recorded.
  • the sum of the 1x5 matrix at the edge of the array is 124 and the sum of the more centered 1x5 matrix is 111, yielding a ratio of 1.1.
  • the matrices of Figures 23A, 23B, and 23C, while illustrated as 1x5 matrices, may have different dimensions and shapes.
  • the matrices may be determined during the manufacturing process for each touchscreen unit and burned into the controller at production.
  • the matrices may be changed or tuned by the user through a settings menu and interface.
  • a user may not change the dimensions and shapes of the matrices directly. Rather, settings on sensitivity or response may change a variety of parameters, of which the matrix shape may be one.
  • the various ratios of Figures 23A, 23B, and 23C may be compared to expected values and, if they are in certain ranges, the contacts may be correctly identified and processed accordingly.
  • the ranges may be determined during development and burned into the controller during production.
  • the ranges may be determined during the manufacturing process for each touchscreen unit and burned into the controller at production.
  • the ratios may be changed or otherwise altered by the user through a settings menu and interface.
  • a user may not change the ratio thresholds directly. Rather, settings on sensitivity or response may change a variety of parameters, of which the ratio thresholds may be one.
  • While Figures 23A, 23B, and 23C show detection of objects on vertical edges of an array of unit cells, one of ordinary skill in the art would understand that the same or similar scheme may be used along horizontal edges of an array of unit cells.
  • FIG. 24 illustrates one embodiment of a method 2400 for determining what type of contact is present according to the embodiments in Figures 23A, 23B, and 23C.
  • Mutual capacitance may be first measured in step 2410.
  • mutual capacitance may be measured according to the circuit illustrated in Figure 3B.
  • the measured mutual capacitance may be represented as the difference between a measured value and a baseline value that is maintained for each unit cell or globally for the array.
  • a peak unit cell may then be identified in step 2420.
  • the peak unit cell of step 2420 may be identified in a manner similar to that shown in Figures 10B and 10C, as well as Figures 11A and 11B.
  • active unit cells may be identified in step 2430 in a manner similar to that shown in Figure 19A.
  • the contact ratio of the active unit cells as illustrated in Figure 23 may be calculated in step 2440.
  • the contact ratio of step 2440 may be calculated as the sum of a first 1x5 matrix about a peak unit cell at the edge of the array divided by the sum of a second 1x5 matrix about the peak unit cell one column or row more central than the first 1x5 matrix. While the embodiment shown in Figure 23 illustrates only a pair of 1x5 matrices, one of ordinary skill in the art would understand that matrices of greater length and width may be used.
  • Once the contact ratio is calculated in step 2440, it may be compared to a number of ranges in step 2445. If the ratio of step 2440 is within a "grip" range, a grip contact may be identified in step 2452 and processed with grip heuristics in step 2462. If the ratio of step 2440 is within a "hover on" range, a hover on the edge of the array may be identified in step 2454 and processed with the appropriate hover heuristics in step 2465. If the ratio of step 2440 is within a "near hover" range, a hover near the edge of the array may be identified in step 2456 and processed with the appropriate hover heuristics in step 2465.
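  • The comparison of step 2445 might be sketched as below. The range boundaries are assumptions chosen only so that the example ratios of roughly 3.7, 1.5, and 1.1 from Figures 23A, 23B, and 23C land in separate partitions.

      typedef enum {
          CONTACT_GRIP,             /* step 2452 */
          CONTACT_HOVER_ON_EDGE,    /* step 2454 */
          CONTACT_HOVER_NEAR_EDGE,  /* step 2456 */
          CONTACT_UNKNOWN
      } contact_t;

      static contact_t classify_edge_contact(int edge_sum, int inner_sum)
      {
          if (inner_sum <= 0)
              return CONTACT_UNKNOWN;

          /* step 2440: outermost 1x5 sum over the more centered one */
          float ratio = (float)edge_sum / (float)inner_sum;

          if (ratio > 2.5f) return CONTACT_GRIP;
          if (ratio > 1.3f) return CONTACT_HOVER_ON_EDGE;
          if (ratio > 0.9f) return CONTACT_HOVER_NEAR_EDGE;
          return CONTACT_UNKNOWN;
      }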
  • Figure 25A illustrates the maximum signals detected for various touch types, including hover, hover near an edge, a thin glove, a 2mm stylus, no touch, a thick glove, and a 1mm stylus. These maximum signals may be used to define partitions or ranges within the regions of Figure 26. If there is no touch or hover detected, the received signals may represent a noise floor, from which difference values may be calculated and processed to detect the various object and contact types.
  • Figure 25B illustrates the relationship between the value of a peak unit cell and the surrounding unit cells for the various object and contact types.
  • a typical finger has a high peak unit cell value and falls off from there at a rate that is below a stylus, but above a gloved finger or hover.
  • a gloved finger and a stylus have similar peak unit cells values, but the lower contact area of a stylus results in a much steeper drop off of signal from unit cells surrounding the peak unit cell.
  • a hover and a stylus may have different peak unit cell values, but integrating the value of the peak unit cell and the surrounding unit cells may result in similar values.
  • Figure 26 illustrates a chart with various 5x5 sums of sensors, similar to those illustrated in Figures 10B and 10D, on the Y-axis and the peak sensor value on the X-axis to show that various touch types have peak values and sums that correspond to specific regions and may be categorized when a touch controller is in various touch modes.
  • Figures 27A, 27B, and 27C illustrate various partitions or ranges for detecting a contact type when operating in various modes.
  • the 5x5 matrix sum about a peak unit cell is calculated similarly to those illustrated in Figures 10B and 10D and plotted on the Y axis.
  • the peak unit cell value is plotted on the X axis. Where the point lies within the various partitions is used to identify the contact type.
  • Figure 27A illustrates a plot 2701 of partitions or ranges for a touchscreen device operating in finger mode.
  • all peak unit cells with values less than a value of a first touch threshold 2721 are identified as "idle" or corresponding to no object in direct contact with the array.
  • peak unit cells and the values of each may be determined and calculated according to Figures 10, 11, and 12.
  • For peak values less than a maximum glove peak 2723, which is representative of the highest peak value for which a gloved finger may be detected, with 5x5 sums that are greater than the peak unit cell value multiplied by a value to create a matrix threshold 2724, a gloved finger is identified.
  • peak values greater than the first touch threshold 2721 with 5x5 sums that are less than the peak unit cell value multiplied by a value to create a matrix threshold 2726 and greater than the peak 2725 are identified as a stylus. All other points are identified as a bare finger directly on the array.
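  • The finger-mode partitions of Figure 27A might be tested as sketched below; the threshold structure mirrors the reference numerals above, while the numeric values behind them and the exact comparison order are assumptions.

      typedef enum { TOUCH_IDLE, TOUCH_GLOVE,
                     TOUCH_STYLUS, TOUCH_FINGER } touch_t;

      typedef struct {
          int   first_touch;     /* 2721: minimum peak for any contact     */
          int   max_glove_peak;  /* 2723: highest peak for a gloved finger */
          float glove_scale;     /* 2724: sum above peak*scale -> glove    */
          float stylus_scale;    /* 2726: sum below peak*scale -> stylus   */
      } finger_mode_thresholds_t;

      static touch_t classify_finger_mode(int peak, int sum5x5,
                                          const finger_mode_thresholds_t *t)
      {
          if (peak < t->first_touch)
              return TOUCH_IDLE;
          if (peak < t->max_glove_peak && sum5x5 > peak * t->glove_scale)
              return TOUCH_GLOVE;
          if (sum5x5 < peak * t->stylus_scale)
              return TOUCH_STYLUS;
          return TOUCH_FINGER;   /* everything else: bare finger */
      }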
  • Figure 27B illustrates a plot 2702 of partitions or ranges for a touchscreen device operating in glove mode.
  • all peak unit cell values less than a minimum value that can be used to identify a gloved finger 2731 are identified as "idle" or corresponding to no object in direct contact with the array.
  • peak unit cells and the values of each may be determined and calculated according to Figures 10, 11, and 12.
  • For peak values greater than the minimum value that can be used to identify a gloved finger 2731 and less than the maximum signal that can be used to identify a gloved finger 2733, with 5x5 sums (as illustrated in Figures 10B and 10D) that are not within a partition bounded by the value of the peak along the Y axis (line 2726), the maximum value of a 5x5 matrix that can be detected as a stylus 2726, and peak unit cell values bounded by matrix threshold 2724 and stylus threshold 2732, a gloved finger is identified.
  • For peak values greater than the stylus threshold 2732 with 5x5 sums less than the maximum value of a 5x5 matrix that can be detected as a stylus 2726 and greater than the peak on the Y axis 2725, a stylus is identified. Likewise, for points within a partition bounded by the peak on the Y axis 2726, the maximum value of a 5x5 matrix that can be detected as a stylus 2726, and peak values bounded by matrix threshold 2724 and stylus threshold 2732, a stylus is identified. All other points are identified as a bare finger directly on the array.
  • Figure 27C illustrates a plot 2703 of partitions or ranges for a touchscreen device operating in stylus mode or hover mode.
  • all peak values less than a value of stylus threshold 2732 are identified as "idle" or corresponding to no object in direct contact with the array.
  • peak unit cells and the values of each may be determined and calculated according to Figures 10, 11, and 12. For peak values greater than stylus threshold 2732 and less than maximum glove peak 2723, with 5x5 sums (as illustrated in Figures 10B and 10D) that are greater than an increased peak value 2745, a gloved finger is identified.
  • Figure 28 illustrates a method 2800 for determining the correct mode of a touchscreen based on the maximum signal (value of a peak unit cell) and a number of thresholds, according to one embodiment. Capacitance of each intersection may be measured in step 2802 and maximum difference values calculated in step 2804. If hover mode is active in step 2805, a threshold is set by the hover mode settings in step 2810 and the maximum difference value from step 2804 is compared to that threshold in step 2813.
  • If hover mode is not active in step 2805, the threshold is set by whatever the previous mode was in step 2808. The maximum difference value is compared to the threshold from the previous mode in step 2815 and, if it is less than the threshold, no contact is detected and the touchscreen enters an idle mode in step 2818. If it is greater than the threshold from the previous mode, the device transitions from hover mode to whatever the previous mode was in step 2822.
  • If stylus mode is active, the maximum difference value from step 2804 is compared to a threshold for a transition from stylus detection to finger detection in step 2827 and, if the maximum difference values are greater than that threshold, the touchscreen device enters finger mode in step 2830. If glove mode is active in step 2833, the maximum difference value from step 2804 is compared to a threshold for a transition from glove detection to finger detection in step 2835 and, if the maximum difference values are greater than that threshold, the touchscreen device enters finger mode in step 2838. If it does not, the previous mode is entered and the appropriate thresholds for that mode are used.
  • Figure 29 illustrates another embodiment of a method 2900 for detecting object types and contact types from different modes.
  • the maximum difference values may be first calculated in step 2902, similar to step 2804 of method 2800.
  • a mode may be determined in step 2905, from among stylus mode, finger mode, and glove mode. If the device is in stylus mode in step 2906, the peak unit cell and the 5x5 matrix of unit cells about the peak unit cell (as illustrated in Figures 10B and 10D) may be plotted according to Figure 27C. If the peak unit cell and the 5x5 matrix of unit cells falls within the glove partition of Figure 27C in step 2915, method 2900 may determine if the mode change is within a debounce time in step 2917.
  • If the mode change is within the debounce time, the mode switch may be set to "pending" in step 2918 and stylus mode may be maintained in step 2924.
  • Likewise, if the peak unit cell and the 5x5 matrix of unit cells does not fall within the glove partition of Figure 27C in step 2915, stylus mode may be maintained in step 2924.
  • If the peak unit cell and the 5x5 matrix of unit cells falls within the glove partition of Figure 27C in step 2915 and the switch is outside a debounce time in step 2917, method 2900 may change the device mode to glove mode in step 2922 and return that mode in step 2924.
  • If the device is in glove mode, the peak unit cell and the 5x5 matrix of unit cells about the peak unit cell may be plotted according to Figure 27B. If the peak unit cell and the 5x5 matrix of unit cells falls within the stylus partition of Figure 27B in step 2925, method 2900 may determine if the mode change is within a debounce time in step 2927. If the mode change is within the debounce time, the mode switch may be set to "pending" in step 2928 and glove mode may be maintained in step 2942. Likewise, if the peak unit cell and the 5x5 matrix of unit cells does not fall within the stylus partition of Figure 27B in step 2925, glove mode may be maintained in step 2942.
  • If the peak unit cell and the 5x5 matrix of unit cells falls within the stylus partition of Figure 27B in step 2925 and the switch is outside a debounce time in step 2927, method 2900 may change the device mode to stylus mode in step 2940 and return that mode in step 2942.
  • If the device is in finger mode, method 2900 may first determine if there is a switch to glove mode pending in step 2945. If there is not, the peak unit cell and the 5x5 matrix of unit cells about the peak unit cell (as illustrated in Figures 10B and 10D) may be plotted according to Figure 27A. If the peak unit cell and the 5x5 matrix of unit cells falls within the stylus partition of Figure 27A in step 2955, method 2900 may determine if the mode change is within a debounce time in step 2957. If the mode change is within the debounce time, the mode switch may be set to "pending" in step 2958 and finger mode may be maintained in step 2972.
  • Likewise, if the peak unit cell and the 5x5 matrix of unit cells does not fall within the stylus partition of Figure 27A in step 2955, finger mode may be maintained in step 2972.
  • If the peak unit cell and the 5x5 matrix of unit cells falls within the stylus partition of Figure 27A in step 2955 and the switch is outside a debounce time in step 2957, method 2900 may change the device mode to stylus mode in step 2970 and return that mode in step 2972.
  • If there is a switch to glove mode pending in step 2945, the peak unit cell and the 5x5 matrix of unit cells about the peak unit cell may be plotted according to Figure 27A. If the peak unit cell and the 5x5 matrix of unit cells falls within the glove partition of Figure 27A in step 2975, method 2900 may determine if the mode change is within a debounce time in step 2977. If the mode change is within the debounce time, the mode switch may be set to "pending" in step 2978 and finger mode may be maintained in step 2992. Likewise, if the peak unit cell and the 5x5 matrix of unit cells does not fall within the glove partition of Figure 27A in step 2975, finger mode may be maintained in step 2992.
  • If the peak unit cell and the 5x5 matrix of unit cells falls within the glove partition of Figure 27A in step 2975 and the switch is outside the debounce time in step 2977, method 2900 may change the device mode to glove mode in step 2990 and return that mode in step 2992.
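  • The pending/debounce pattern that runs through method 2900 might be sketched as below; the tick-based debounce interval and the state layout are illustrative assumptions.

      typedef enum { MODE_FINGER, MODE_GLOVE, MODE_STYLUS } dev_mode_t;

      typedef struct {
          dev_mode_t mode;           /* current device mode           */
          dev_mode_t pending_mode;   /* mode a pending switch targets */
          int        pending;        /* a switch is awaiting debounce */
          unsigned   pending_since;  /* tick when it was first seen   */
      } mode_state_t;

      #define DEBOUNCE_TICKS 5

      /* A classification that disagrees with the current mode only
       * takes effect once it has persisted beyond the debounce time;
       * inside that time the switch is merely marked "pending" and
       * the current mode is maintained. */
      static dev_mode_t update_mode(mode_state_t *s, dev_mode_t detected,
                                    unsigned now)
      {
          if (detected == s->mode) {
              s->pending = 0;                  /* agreement clears it */
              return s->mode;
          }
          if (!s->pending || s->pending_mode != detected) {
              s->pending = 1;                  /* set switch pending  */
              s->pending_mode = detected;
              s->pending_since = now;
              return s->mode;
          }
          if (now - s->pending_since >= DEBOUNCE_TICKS) {
              s->mode = detected;              /* outside debounce    */
              s->pending = 0;
          }
          return s->mode;
      }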
  • Figure 30 illustrates one embodiment of a state diagram 3000 for entering hover mode when a large object is detected.
  • State diagram 3000 may begin in look for hover mode 3010 or look for touch mode 3012. If a hovering object is detected, state diagram 3000 may enter a valid hover state 3020. If a large hovering object is detected, state diagram 3000 may enter a large hover object state 3030, from which only two options are possible: look for hover mode 3010 or valid finger mode 3040. This means that a valid hover state may not be entered if a large object is detected hovering over the array.
  • Embodiments described herein may also relate to an apparatus for performing the operations herein.
  • This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer.
  • a computer program may be stored in a non-transitory computer-readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, flash memory, or any type of media suitable for storing electronic instructions.
  • the term “computer-readable storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database and/or associated caches and servers) that store one or more sets of instructions.
  • the term “computer-readable medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that causes the machine to perform any one or more of the methodologies of the present embodiments.
  • the term “computer-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, magnetic media, and any medium that is capable of storing a set of instructions for execution by the machine and that causes the machine to perform any one or more of the methodologies of the present embodiments.

PCT/US2014/060456 2013-10-14 2014-10-14 Hover position calculation in a touchscreen device WO2015057687A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201480062806.1A CN106030482B (zh) 2013-10-14 2014-10-14 触摸屏装置的悬停位置计算

Applications Claiming Priority (22)

Application Number Priority Date Filing Date Title
US201361890753P 2013-10-14 2013-10-14
US201361890738P 2013-10-14 2013-10-14
US201361890794P 2013-10-14 2013-10-14
US201361890745P 2013-10-14 2013-10-14
US201361890757P 2013-10-14 2013-10-14
US61/890,753 2013-10-14
US61/890,757 2013-10-14
US61/890,794 2013-10-14
US61/890,738 2013-10-14
US61/890,745 2013-10-14
US201462004724P 2014-05-29 2014-05-29
US62/004,724 2014-05-29
US201462028393P 2014-07-24 2014-07-24
US62/028,393 2014-07-24
US201462039308P 2014-08-19 2014-08-19
US62/039,308 2014-08-19
US201462039796P 2014-08-20 2014-08-20
US62/039,796 2014-08-20
US201462042678P 2014-08-27 2014-08-27
US62/042,678 2014-08-27
US14/513,179 2014-10-13
US14/513,179 US9213458B2 (en) 2013-10-14 2014-10-13 Hover position calculation in a touchscreen device

Publications (1)

Publication Number Publication Date
WO2015057687A1 true WO2015057687A1 (en) 2015-04-23

Family

ID=52809265

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2014/060456 WO2015057687A1 (en) 2013-10-14 2014-10-14 Hover position calculation in a touchscreen device

Country Status (3)

Country Link
US (2) US9213458B2 (zh)
CN (1) CN106030482B (zh)
WO (1) WO2015057687A1 (zh)

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI529579B (zh) * 2013-12-31 2016-04-11 Egalax Empia Technology Inc Touch panel of the integrated circuit device
KR102186393B1 (ko) * 2014-01-02 2020-12-03 삼성전자주식회사 입력 처리 방법 및 그 전자 장치
CN104679375B (zh) * 2015-03-17 2017-11-14 京东方科技集团股份有限公司 一种优化信噪比参数的方法及装置
US10095347B2 (en) * 2015-08-28 2018-10-09 Wacom Co., Ltd. Passive stylus and grip shadow recognition
JP6532128B2 (ja) * 2015-09-14 2019-06-19 株式会社東海理化電機製作所 操作検出装置
KR102379635B1 (ko) * 2015-10-12 2022-03-29 삼성전자주식회사 전자 장치 및 이의 제스처 처리 방법
US10317947B2 (en) 2015-10-12 2019-06-11 Samsung Electronics Co., Ltd. Electronic device and method for processing gesture thereof
US10299729B2 (en) * 2015-10-23 2019-05-28 Google Llc Heart rate detection with multi-use capacitive touch sensors
JP6704754B2 (ja) * 2016-02-29 2020-06-03 アルプスアルパイン株式会社 判定装置及び判定方法
CN105824465A (zh) * 2016-03-10 2016-08-03 京东方科技集团股份有限公司 触控判定方法和装置以及显示装置
US10185867B1 (en) * 2016-03-15 2019-01-22 Cypress Semiconductor Corporation Pressure detection and measurement with a fingerprint sensor
US10401984B2 (en) 2016-12-14 2019-09-03 Texas Instruments Incorporated User interface mechanical control apparatus with optical and capacitive position detection and optical position indication
US10288658B2 (en) * 2017-02-02 2019-05-14 Texas Instruments Incorporated Enhancing sensitivity and robustness of mechanical rotation and position detection with capacitive sensors
US10521880B2 (en) * 2017-04-17 2019-12-31 Intel Corporation Adaptive compute size per workload
CN107102776A (zh) * 2017-05-24 2017-08-29 努比亚技术有限公司 一种触摸屏控制方法、装置、移动终端以及存储介质
US11243657B2 (en) 2017-06-28 2022-02-08 Huawei Technologies Co., Ltd. Icon display method, and apparatus
CN107506091B (zh) * 2017-09-28 2021-05-14 京东方科技集团股份有限公司 触控检测芯片、触控面板及触控检测方法
US10437365B2 (en) 2017-10-11 2019-10-08 Pixart Imaging Inc. Driver integrated circuit of touch panel and associated driving method
CN110069165A (zh) * 2019-04-29 2019-07-30 广州视源电子科技股份有限公司 触摸数据的处理方法、装置、及设备和存储介质
US11150751B2 (en) * 2019-05-09 2021-10-19 Dell Products, L.P. Dynamically reconfigurable touchpad
EP3970125A1 (de) * 2019-05-13 2022-03-23 Prismade Labs GmbH Vorrichtung und verfahren zur kontrolle von elektrisch leitfähigen sicherheitsmerkmalen und kontrollvorrichtung für elektrisch leitfähige sicherheitsmerkmale
GB2585653B (en) * 2019-07-09 2022-03-23 Cambridge Touch Tech Ltd Force signal processing
CN111124186B (zh) * 2019-12-27 2022-10-14 郑州大学 基于摩擦电与静电感应的不接触屏幕传感器和传感方法
JP7378898B2 (ja) 2020-01-17 2023-11-14 アルパイン株式会社 入力装置

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090174675A1 (en) * 2008-01-09 2009-07-09 Dave Gillespie Locating multiple objects on a capacitive touch pad
US20090284495A1 (en) * 2008-05-14 2009-11-19 3M Innovative Properties Company Systems and methods for assessing locations of multiple touch inputs
US20110007021A1 (en) * 2009-07-10 2011-01-13 Jeffrey Traer Bernstein Touch and hover sensing
US8115753B2 (en) * 2007-04-11 2012-02-14 Next Holdings Limited Touch screen system with hover and click input methods
US20120055021A1 (en) * 2010-09-08 2012-03-08 Microject Technology Co., Ltd. Inkjet head manufacturing method
US20120139870A1 (en) * 2010-12-01 2012-06-07 Stmicroelectronics (Rousset) Sas Capacitive touch pad configured for proximity detection
US20120182225A1 (en) * 2011-01-17 2012-07-19 Avago Technologies Ecbu Ip (Singapore) Pte. Ltd. Detection of Predetermined Objects with Capacitive Touchscreens or Touch Panels
US8519970B2 (en) * 2010-07-16 2013-08-27 Perceptive Pixel Inc. Capacitive touch sensor having correlation with a receiver

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5055231B2 (ja) * 2008-09-08 2012-10-24 株式会社ジャパンディスプレイイースト タッチパネルのタッチ位置検出方法
JP5643719B2 (ja) * 2011-06-29 2014-12-17 アルプス電気株式会社 座標検出装置
CN102999198B (zh) * 2011-09-16 2016-03-30 宸鸿科技(厦门)有限公司 触摸面板边缘持握触摸的检测方法和装置
US9182860B2 (en) * 2012-02-08 2015-11-10 Sony Corporation Method for detecting a contact
JP5995473B2 (ja) * 2012-03-19 2016-09-21 ローム株式会社 静電容量センサのコントロール回路、それを用いた電子機器
EP3014401A4 (en) * 2013-06-28 2017-02-08 Intel Corporation Parallel touch point detection using processor graphics

Also Published As

Publication number Publication date
CN106030482B (zh) 2018-09-21
US20160098147A1 (en) 2016-04-07
US9684409B2 (en) 2017-06-20
US20150103043A1 (en) 2015-04-16
US9213458B2 (en) 2015-12-15
CN106030482A (zh) 2016-10-12

Similar Documents

Publication Publication Date Title
US9684409B2 (en) Hover position calculation in a touchscreen device
US9983738B2 (en) Contact detection mode switching in a touchscreen device
US9176635B2 (en) Virtual buttons for a touch interface
US9164137B2 (en) Tunable baseline compensation scheme for touchscreen controllers
US8982097B1 (en) Water rejection and wet finger tracking algorithms for truetouch panels and self capacitance touch sensors
US10331267B2 (en) Touch detection method and touch detector performing the same
US11249638B2 (en) Suppression of grip-related signals using 3D touch
US8525799B1 (en) Detecting multiple simultaneous touches on a touch-sensor device
CN105637458B (zh) 单层传感器图案
US9391610B2 (en) Single layer touchscreen with ground insertion
US8692802B1 (en) Method and apparatus for calculating coordinates with high noise immunity in touch applications
US20110310064A1 (en) User Interfaces and Associated Apparatus and Methods
US9134870B2 (en) Capacitive touch-sensitive panel and mobile terminal using the same
US8970796B2 (en) Field-line repeater (FLR) structure of a sense array
US10162465B2 (en) Methods and devices for determining touch locations on a touch-sensitive surface
US9367181B2 (en) System and method for determining user input and interference on an input device
US20150248178A1 (en) Touchscreen apparatus and touch sensing method
US9507454B1 (en) Enhanced linearity of gestures on a touch-sensitive surface
JP2014525611A (ja) 線形センサまたは単一層センサ上の2本指ジェスチャ
WO2015126497A1 (en) Virtual buttons for a touch interface
US20130201148A1 (en) Two-finger gesture on a linear sensor or single layer sensor
KR20130099498A (ko) 멀티 터치 감지가 가능한 단일층 정전용량 방식 터치스크린장치

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14854495

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14854495

Country of ref document: EP

Kind code of ref document: A1