CN108345415B - Object tracking using object velocity information


Info

Publication number: CN108345415B
Application number: CN201710060861.2A
Authority: CN (China)
Legal status: Active (application granted)
Prior art keywords: objects, sensing region, location, processing system, sensing
Other languages: Chinese (zh)
Other versions: CN108345415A
Inventor: 许俊泽
Current assignee: Howell Tddi Ontario LLP
Original assignee: Howell Tddi Ontario LLP
Priority applications: CN201710060861.2A; PCT/US2018/012239

Classifications

    • G06F 3/04883 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, using a touch-screen or digitiser, for inputting data by handwriting, e.g. gesture or text
    • G06F 3/044 - Digitisers, e.g. for touch screens or touch pads, characterised by capacitive transducing means
    • G06F 3/04166 - Control or interface arrangements specially adapted for digitisers; details of scanning methods, e.g. sampling time, grouping of sub areas or time sharing with display driving
    • G06F 3/0446 - Digitisers characterised by capacitive transducing means, using a grid-like structure of electrodes in at least two directions, e.g. using row and column electrodes
    • G06F 2203/04808 - Several contacts: gestures triggering a specific function, e.g. scrolling, zooming, right-click, when the user establishes several contacts with the surface simultaneously, e.g. using several fingers or a combination of fingers and pen

Abstract

Embodiments herein include a method for tracking objects in a sensing region that includes determining a first location of a first object and a first location of a second object in the sensing region. The method includes determining that one of the objects has left the sensing region and that one of the objects has remained in a current position in the sensing region. The method includes calculating a velocity of each of the first object and the second object based on a difference in the position of each respective object in two previous frames divided by the time interval between the two previous frames. The method includes predicting a next position of each of the objects based on the first position of each of the objects and the velocity of each of the objects, and determining which of the objects remains in the sensing region based on the predicted next positions.

Description

Object tracking using object velocity information
Technical Field
Embodiments of the present invention relate generally to methods and devices for touch sensing and, more particularly, to tracking objects using an input device.
Background
Input devices including proximity sensor devices (also commonly referred to as touchpads or touch sensor devices) are widely used in a variety of electronic systems. A proximity sensor device typically includes a sensing region, often demarcated by a surface, in which the proximity sensor device determines the presence, location, and/or movement of one or more input objects. Proximity sensor devices may be used to provide an interface for an electronic system. For example, proximity sensor devices are commonly used as input devices for larger computing systems (such as opaque touchpads integrated into or external to a notebook or desktop computer). Proximity sensor devices are also often used in smaller computing systems (such as touch screens integrated in cellular telephones).
Disclosure of Invention
Embodiments described herein include a method for tracking objects in a touch sensing region that includes determining a first location of a first object and a first location of a second object in the sensing region. The method includes determining that one of the objects has left the sensing region and that one of the objects has remained in a current position in the sensing region. The velocity of each of the first object and the second object is calculated based on the difference in the position of each respective object in the two previous frames divided by the time interval between the two previous frames. Based on the first position of each of the objects and the velocity of each of the objects, a next position of each of the objects is predicted, and it is then determined, based on the predicted next positions, which of the objects remains in the sensing region.
In another embodiment, an input device for capacitive touch sensing includes a processor, a memory, and a capacitive touch sensor configured to detect a first location of a first object and a first location of a second object within a sensing region. The capacitive touch sensor is also configured to detect that one of the objects has left the sensing region and that one of the objects has remained in a current position in the sensing region. The input device includes a processing system configured to calculate a velocity of each of the first object and the second object based on a difference in the position of each respective object in two previous frames divided by the time interval between the two previous frames. The processing system is further configured to predict a next position of each of the objects based on the first position of each of the objects and the velocity of each of the objects, and to determine which of the objects remains in the sensing region based at least in part on the predicted next positions.
In another embodiment, a processing system for capacitive touch sensing is configured to determine a first location of a first object and a first location of a second object within a sensing region. The processing system is further configured to determine that one of the objects has left the sensing region and that one of the objects has remained in a current position in the sensing region. The processing system is further configured to calculate a velocity of each of the first object and the second object based on a difference in the position of each respective object in two previous frames divided by the time interval between the two previous frames. The processing system is further configured to predict a next position of each of the objects based on the first position of each of the objects and the velocity of each of the objects, and to determine which of the objects remains in the sensing region based at least in part on the predicted next positions.
Drawings
So that the manner in which the above recited features of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this invention and are therefore not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments.
FIG. 1 is a block diagram of a system including an input device, in accordance with an embodiment.
FIG. 2 is an example sensor electrode pattern in accordance with an embodiment.
Fig. 3A-3C illustrate an example object tracking algorithm in accordance with an embodiment.
Fig. 4A-4C illustrate another example object tracking algorithm in accordance with an embodiment.
FIG. 5 is a flow diagram illustrating a method for tracking objects within a touch sensing area, in accordance with an embodiment.
To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements disclosed in one embodiment may be beneficially utilized on other embodiments without explicit recitation. The drawings referred to herein should not be understood as being drawn to scale unless specifically indicated. Also, the drawings are generally simplified, and details or elements are omitted for clarity of presentation and explanation. The figures and discussion are presented to explain the principles discussed below wherein like reference numerals refer to like elements.
Detailed Description
The following detailed description is merely exemplary in nature and is not intended to limit the embodiments or the application and uses of such embodiments. Furthermore, there is no intention to be bound by any expressed or implied theory presented in the preceding technical field, background, brief summary or the following detailed description.
Various embodiments of the present technology provide input devices and methods for improving usability. In particular, the embodiments described herein advantageously provide an algorithm for tracking objects, such as fingers, within a sensing region. As the object moves across the sensing region, one or more of the objects may leave the sensing region, e.g., one of the user's fingers leaves the boundary of the sensing region on the touch screen. When this occurs, it can be difficult to determine which object or objects have left the touch screen and which object or objects have remained in the sensing region. As objects move across the sensing region, object velocity information from previous frames may be calculated and used to predict the position of one or more objects. This prediction may be compared to the object position when one or more objects have left the sensing region. The comparison determines which of the objects has left the sensing region and which of the objects has remained in the sensing region.
Turning now to the drawings, FIG. 1 is a block diagram of an exemplary input device 100 in accordance with an embodiment of the present invention. The input device 100 may be configured to provide input to an electronic system (not shown). As used in this document, the term "electronic system" (or "electronic device") broadly refers to any system capable of electronically processing information. Some non-limiting examples of electronic systems include personal computers of all sizes and shapes, such as desktop computers, laptop computers, netbooks, tablets, web browsers, e-book readers, and Personal Digital Assistants (PDAs). Additional example electronic systems include a compound input device, such as a physical keyboard including input device 100 and a separate joystick or key switch. Further example electronic systems include peripheral devices such as data input devices (including remote controls and mice) and data output devices (including display screens and printers). Other examples include remote terminals, kiosks, and video gaming machines (e.g., video game consoles, portable gaming devices, etc.). Other examples include communication devices (including cellular telephones such as smart phones) and media devices (including recorders, editors, and players such as televisions, set-top boxes, music players, digital photo frames, and digital cameras). In addition, the electronic system may be a master or slave to the input device.
The input device 100 can be implemented as a physical component of the electronic system or can be physically separate from the electronic system. Optionally, the input device 100 may communicate with components of the electronic system using any one or more of the following: buses, networks, and other wired or wireless interconnections. Examples include I2C, SPI, PS/2, Universal Serial Bus (USB), Bluetooth, RF, and IrDA.
In fig. 1, the input device 100 is shown as a proximity sensor device (also commonly referred to as a "touch pad" or "touch sensor device") configured to sense input provided by one or more input objects 140 in the sensing region 120. Example input objects include a finger and a stylus as shown in fig. 1.
The sensing region 120 includes any space above, around, in, and/or near the input device 100 in which the input device 100 is capable of detecting user input (e.g., user input provided by one or more input objects 140). The size, shape, and location of particular sensing regions can vary greatly from embodiment to embodiment. In some embodiments, the sensing region 120 extends into space in one or more directions from the surface of the input device 100 until the signal-to-noise ratio prevents sufficiently accurate object detection. This distance that the sensing region 120 extends in a particular direction may be on the order of less than one millimeter, a few millimeters, a few centimeters, or more in various embodiments, and may vary significantly depending on the type of sensing technology used and the accuracy desired. Thus, some embodiments sense an input, including no contact with any surface of the input device 100, contact with an input surface (e.g., a touch surface) of the input device 100, contact with an input surface of the input device 100 that is coupled with an amount of applied force or pressure, and/or combinations thereof. In various embodiments, the input surface may be provided by a surface of the housing in which the sensor electrode is located, by a panel applied over the sensor electrode or any housing, etc. In some embodiments, the sensing region 120 has a rectangular shape when projected onto the input surface of the input device 100.
The input device 100 may use any combination of sensor components and sensing technologies to detect user input in the sensing region 120. The input device 100 includes one or more sensing elements for detecting user input. As several non-limiting examples, the input device 100 may use capacitive, inverse dielectric, resistive, inductive, magnetic, acoustic, ultrasonic, and/or optical techniques. Some implementations are configured to provide images that span one, two, three, or higher dimensional space. Some implementations are configured to provide a projection of input along a particular axis or plane. In some resistive implementations of the input device 100, the flexible and conductive first layer is separated from the conductive second layer by one or more spacer elements. During operation, one or more voltage gradients are generated across the multiple layers. Pressing the flexible first layer may cause it to flex sufficiently to create electrical contact between the layers, resulting in a voltage output reflecting the point of contact between the layers. These voltage outputs can be used to determine location information.
In some inductive implementations of the input device 100, one or more sensing elements obtain loop current induced by a resonant coil or coil pair. Some combination of the magnitude, phase and frequency of the current may then be used to determine location information.
In some capacitive implementations of the input device 100, a voltage or current is applied to generate an electric field. A nearby input object causes a change in the electric field and produces a detectable change in the capacitive coupling, which can be detected as a change in voltage, current, etc.
Some capacitive implementations use an array of capacitive sensing elements or other regular or irregular patterns to generate an electric field. In some capacitive implementations, the individual sensing elements may be ohmically shorted together to form a larger sensor electrode. Some capacitive implementations utilize resistive patches, which may be uniform in resistance.
Some capacitive implementations utilize a "self-capacitance" (or "absolute capacitance") sensing method that is based on a change in capacitive coupling between a sensor electrode and an input object. In various embodiments, an input object near the sensor electrode alters the electric field near the sensor electrode, thereby altering the resulting capacitive coupling. In one implementation, the absolute capacitive sensing method operates by modulating the sensor electrode relative to a reference voltage (e.g., system ground), and by detecting capacitive coupling between the sensor electrode and an input object.
Some capacitive implementations utilize a varying "mutual capacitance" (or "transcapacitive") sensing method based on capacitive coupling between sensor electrodes. In various embodiments, an input object near the sensor electrodes changes the electric field between the sensor electrodes, thereby changing the resulting capacitive coupling. In one implementation, the transcapacitive sensing method operates by detecting capacitive coupling between one or more transmitter sensor electrodes (also "transmitter electrodes" or "transmitters") and one or more receiver sensor electrodes (also "receiver electrodes" or "receivers"). The transmitter sensor electrode may be modulated relative to a reference voltage (e.g., system ground) to transmit a transmitter signal. The receiver sensor electrode may be maintained substantially constant relative to a reference voltage to facilitate receipt of the resulting signal. The resulting signals may include effects corresponding to one or more transmitter signals and/or to one or more environmental interference sources (e.g., other electromagnetic signals). The sensor electrodes may be dedicated transmitters or receivers, or the sensor electrodes may be configured to both transmit and receive. Alternatively, the receiver electrode may be modulated with respect to ground.
In fig. 1, a processing system 110 is shown as a component of the input device 100. The processing system 110 is configured to operate the hardware of the input device 100 to detect inputs in the sensing region 120. Processing system 110 includes part or all of one or more Integrated Circuits (ICs) and/or other circuit components. For example, a processing system for a mutual capacitance sensor device may include transmitter circuitry configured to transmit signals with transmitter sensor electrodes and/or receiver circuitry configured to receive signals with receiver sensor electrodes. In some embodiments, processing system 110 also includes electronically readable instructions, such as firmware code, software code, and the like. In some embodiments, the components comprising the processing system 110 are positioned together, such as near the sensing elements of the input device 100. In other embodiments, the components of the processing system 110 are physically independent, with one or more components being proximate to the sensing elements of the input device 100 and one or more components being elsewhere. For example, the input device 100 may be a peripheral coupled to a desktop computer, and the processing system 110 may include software configured to run on a central processing unit of the desktop computer and one or more ICs (perhaps with associated firmware) separate from the central processing unit. As another example, the input device 100 may be physically integrated in a phone, and the processing system 110 may include circuitry and firmware as part of the phone's main processor. In some embodiments, the processing system 110 is dedicated to implementing the input device 100. In other embodiments, the processing system 110 also performs other functions, such as operating a display screen, driving a haptic actuator, and the like.
The processing system 110 may be implemented as a set of modules that handle the different functions of the processing system 110. Each module may include circuitry, firmware, software, or a combination thereof as part of processing system 110. In various embodiments, different combinations of modules may be used. Example modules include a hardware operation module for operating hardware such as sensor electrodes and a display screen, a data processing module for processing data such as sensor signals and location information, and a reporting module for reporting information. Further example modules include a sensor operation module configured to operate the sensing element to detect an input; an identification module configured to identify a gesture, such as a mode change gesture; and a mode changing module for changing the operation mode.
In some embodiments, processing system 110 responds directly to user input (or no user input) in sensing region 120 by causing one or more actions. Example actions include altering the mode of operation, as well as GUI actions such as cursor movement, selection, menu navigation, and other functions. In some embodiments, the processing system 110 provides information about the inputs (or lack of inputs) to some component of the electronic system (e.g., to a central processing system of the electronic system that is separate from the processing system 110, if such a separate central processing system exists). In some embodiments, a component of the electronic system processes information received from the processing system 110 to act upon user input so as to facilitate a full range of actions, including mode change actions and GUI actions.
For example, in some embodiments, the processing system 110 operates the sensing elements of the input device 100 to generate electrical signals indicative of an input (or no input) in the sensing region 120. Processing system 110 may perform any suitable processing on the electrical signals in generating information for provision to an electronic system. For example, the processing system 110 may digitize the analog electrical signal obtained from the sensor electrode. As another example, the processing system 110 may perform filtering or other signal conditioning. As yet another example, the processing system 110 may subtract or otherwise account for the baseline such that the information reflects the difference between the electrical signal and the baseline. As yet other examples, the processing system 110 may determine location information, recognize inputs as commands, recognize handwriting, and so forth.
"position information" as used herein broadly encompasses absolute position, relative position, velocity, acceleration, and other types of spatial information. Exemplary "zero-dimensional" location information includes near/far or contact/non-contact information. Exemplary "one-dimensional" positional information includes position along an axis. Exemplary "two-dimensional" positional information includes motion in a plane. Exemplary "three-dimensional" location information includes instantaneous or average velocity in space. Further examples include other representations of spatial information. Historical data regarding one or more types of location information may also be determined and/or stored, including, for example, historical data that tracks location, motion, or instantaneous speed over time.
In some embodiments, the input device 100 is implemented with additional input components that are operated by the processing system 110 or by some other processing system. These additional input components may provide redundant functionality for inputs in the sensing region 120, or some other functionality. Fig. 1 shows buttons 130 near the sensing region 120 that can be used to facilitate selection of items using the input device 100. Other types of additional input components include sliders, balls, wheels, switches, and the like. Conversely, in some embodiments, the input device 100 may be implemented without other input components.
In some embodiments, the input device 100 includes a touch screen interface, and the sensing region 120 overlaps at least a portion of the active area of the display screen. For example, the input device 100 may include generally transparent sensor electrodes that cover the display screen and provide a touch screen interface for an associated electronic system. The display screen may be any type of dynamic display capable of displaying a visual interface to a user and may include any type of light emitting diode (LED), organic LED (OLED), cathode ray tube (CRT), liquid crystal display (LCD), plasma, electroluminescent (EL), or other display technology. The input device 100 and the display screen may share physical elements. For example, some embodiments may use some of the same electrical components for display and sensing. As another example, the display screen may be operated in part or in whole by the processing system 110.
It should be appreciated that while many embodiments of the invention are described in the context of fully functional devices, the mechanisms of the present invention are capable of being distributed as a program product (e.g., software) in a variety of forms. For example, the mechanisms of the present invention may be implemented and distributed as a software program on an information bearing medium readable by an electronic processor (e.g., a non-transitory computer readable and/or recordable/writeable information bearing medium readable by processing system 110). In addition, the embodiments of the present invention apply equally regardless of the particular type of medium used to carry out the distribution. Examples of non-transitory, electronically readable media include various optical discs, memory sticks, memory cards, memory modules, and the like. The electronically readable medium may be based on flash, optical, magnetic, holographic, or any other storage technology.
FIG. 2 illustrates a system 210 including the processing system 110 and a portion of an example sensor electrode pattern configured to sense in a sensing region associated with the pattern, in accordance with some embodiments. For clarity of illustration and description, FIG. 2 shows a simple rectangular pattern of sensor electrodes and does not show various other components. This sensor electrode pattern comprises a plurality of transmitter electrodes 160 (160-1, 160-2, 160-3, … 160-n) and a plurality of receiver electrodes 170 (170-1, 170-2, 170-3, …, 170-n) arranged over the plurality of transmitter electrodes 160.
The transmitter electrode 160 and the receiver electrode 170 are typically ohmically isolated from each other. In other words, one or more insulators separate the transmitter electrode 160 and the receiver electrode 170 and prevent them from electrically shorting to each other. In some embodiments, the transmitter electrode 160 and the receiver electrode 170 are separated by an insulating material disposed therebetween in the overlap region; in such a configuration, the transmitter electrode 160 and/or the receiver electrode 170 may be formed using jumpers that connect different portions of the same electrode. In some embodiments, the transmitter electrode 160 and the receiver electrode 170 are separated by one or more layers of insulating material. In some other embodiments, the transmitter electrode 160 and the receiver electrode 170 are isolated by one or more substrates; for example, they may be arranged on opposite sides of the same substrate, or on different substrates stacked together.
The region of localized capacitive coupling between the transmitter electrode 160 and the receiver electrode 170 may be referred to as a "capacitive pixel." The capacitive coupling between the transmitter electrode 160 and the receiver electrode 170 varies with the proximity and movement of an input object in a sensing region associated with the transmitter electrode 160 and the receiver electrode 170.
In some embodiments, the sensor pattern is "scanned" to determine these capacitive couplings. In other words, the transmitter electrode 160 is driven to transmit a transmitter signal. The transmitter may be operated such that one transmitter electrode is transmitted at a time, or multiple transmitter electrodes are transmitted simultaneously. Where multiple transmitter electrodes transmit simultaneously, the multiple transmitter electrodes may transmit the same transmitter signal and effectively produce a substantially larger transmitter electrode, or the multiple transmitter electrodes may transmit different transmitter signals. For example, multiple transmitter electrodes may transmit different transmitter signals according to one or more coding schemes that enable their combined effect on the resulting signal of the receiver electrode 170 to be determined independently.
The receiver sensor electrodes 170 may be operated singly or in multiple to obtain a resultant signal. The resulting signal may be used to determine a measure of capacitive coupling at the capacitive pixel.
The set of metrics from the capacitive pixels form a "capacitive image" (also a "capacitive frame") that represents the capacitive coupling at the pixel. Multiple capacitive images may be obtained over multiple time periods and the differences between them used to derive information about the input in the sensing region. For example, successive capacitive images obtained over successive time periods can be used to track movement of one or more input objects into, out of, and within the sensing region.
The background capacitance of the sensor device is a capacitive image associated with the absence of an input object in the sensing region. The background capacitance varies with the environment and operating conditions and can be estimated in various ways. For example, when it is determined that no input object is in the sensing region, some embodiments acquire "baseline images" and use those baseline images as an estimate of their background capacitance.
The capacitive image may be adjusted for the background capacitance of the sensor device for more efficient processing. Some embodiments achieve this by "baselining" a measure of the capacitive coupling at the capacitive pixels to produce a "baselined capacitive image". In other words, some embodiments compare the metrics that form the capacitance image to the appropriate "baseline value" of the "baseline image" associated with those pixels, and determine the change from that baseline image.
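As a rough illustration of the baselining described above, the sketch below subtracts a stored baseline image from a raw capacitive image. The array layout and function name are assumptions for illustration, not details taken from the patent.

```python
def baseline_image(raw_image, baseline):
    """Subtract a stored baseline image from a raw capacitive image.

    The result (a "baselined" or delta image) is near zero where no input
    object is present and deviates at pixels whose capacitive coupling was
    changed by an input object.
    """
    return [
        [raw - base for raw, base in zip(raw_row, base_row)]
        for raw_row, base_row in zip(raw_image, baseline)
    ]

# Example: a 3x3 capacitive frame with one strong response at the centre pixel.
baseline = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
frame = [[11, 10, 10], [10, 55, 11], [10, 10, 10]]
delta = baseline_image(frame, baseline)   # centre pixel stands out as 45
```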
In some touch screen embodiments, the transmitter electrode 160 includes one or more common electrodes (e.g., V-com electrodes) for updating the display of the display screen. These common electrodes may be arranged on a suitable display substrate. For example, the common electrode may be disposed on TFT glass in some display screens (e.g., in-plane switching (IPS) or plane-to-line switching (PLS)), on the bottom of filter glass of some display screens (e.g., patterned vertical alignment (PVA) or multi-domain vertical alignment (MVA)), and so on. In such an embodiment, the common electrode may also be referred to as a "combined electrode" because it performs multiple functions. In various embodiments, each transmitter electrode 160 includes one or more common electrodes. In other embodiments, at least two transmitter electrodes 160 may share at least one common electrode.
In various touch screen embodiments, the "capacitive frame rate" (the rate at which successive capacitive images are obtained) may be the same as or different from the "display frame rate" (the rate at which images are updated, including refreshing the screen to redisplay the same images). In some embodiments, where the two rates are different, successive capacitive images are obtained at different display update states, and the different display update states may affect the obtained capacitive images. In other words, the display update affects especially the background capacitive image. Thus, if the first capacitive image is obtained when the display update is in the first state and the second capacitive image is obtained when the display update is in the second state, the first and second capacitive images may be different due to differences in the background capacitive image associated with the display update state, rather than due to changes in the sensing region. This is more likely where the capacitive sensing and display update electrodes are very close to each other, or when they are shared (e.g., combined electrodes).
For ease of explanation, the capacitive images obtained during a particular display update state are considered to belong to a particular frame type. In other words, a particular frame type is associated with a mapping of a particular capacitive sensing sequence to a particular display sequence. Thus, a first capacitive image obtained during a first display update state is considered to be of a first frame type, a second capacitive image obtained during a second display update state is considered to be of a second frame type, a third capacitive image obtained during the first display update state is considered to be of a third frame type, and so on. Where the relationship between display update status and capacitive image acquisition is periodic, the acquired capacitive image loops through frame types and then repeats.
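For the periodic case described above, the bookkeeping can be as simple as a modulo over the number of display update states. The sketch below is an assumed illustration, not something specified in the patent.

```python
# Hypothetical mapping of capture index to frame type for a periodic
# relationship between display update state and capacitive image capture.
NUM_DISPLAY_UPDATE_STATES = 4  # assumed value for illustration

def frame_type(capture_index):
    """Capacitive images cycle through the frame types and then repeat."""
    return capture_index % NUM_DISPLAY_UPDATE_STATES

types = [frame_type(i) for i in range(8)]  # [0, 1, 2, 3, 0, 1, 2, 3]
```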
The processing system 110 may include a driver module 230, a receiver module 240, a determination module 250, and an optional memory 260. The processing system 110 is coupled to the receiver electrode 170 and the transmitter electrode 160 by a plurality of conductive routing traces (not shown in fig. 2).
The receiver module 240 is coupled to the plurality of receiver electrodes 170 and is configured to receive a resulting signal indicative of an input (or no input) and/or an environmental disturbance in the sensing region 120. The receiver module 240 may also be configured to pass the resulting signal to the determination module 250 for determining the presence of an input object and/or to the optional memory 260 for storage. In various embodiments, the IC of the processing system 110 may be coupled to a driver for driving the transmitter electrode 160. The drivers may be fabricated using Thin Film Transistors (TFTs) and may include switches, combinational logic, multiplexers, and other selection and control logic.
A driver module 230 (which includes driver circuitry) included in the processing system 110 may be configured to update images on a display screen of a display device (not shown). For example, the driver circuitry may include display circuitry and/or sensor circuitry configured to apply one or more pixel voltages to the display pixel electrode via the pixel source driver. The display and/or sensor circuitry may also be configured to apply one or more common drive voltages to the common electrode to update the display screen. In addition, the processing system 110 is configured to operate the common electrode as a transmitter electrode for input sensing by driving a transmitter signal onto the common electrode.
The processing system 110 may be implemented in one or more ICs to control various components in an input device. For example, the functionality of the IC of the processing system 110 may be implemented in more than one integrated circuit capable of controlling display module elements (e.g., common electrodes) and driving transmitter signals and/or receiving result signals received from an array of sensing elements. In embodiments where there is more than one IC of the processing system 110, communication between the individual processing system ICs may be achieved by a synchronization mechanism that serializes the signals provided to the transmitter electrode 160. Alternatively, the synchronization mechanism may be internal to any of the ICs.
Fig. 3A-3C illustrate an example object tracking algorithm in accordance with an embodiment. In fig. 3A-3C, two objects (such as user fingers, a stylus, an active pen, or other detectable objects) move across a sensing region 120 of the input device. The circles (302, 304, 306, etc.) illustrated in fig. 3A-3C represent the position of objects within the sensing region 120. As the object moves across the sensing region 120, the position of the object is updated by the receiver module 240 (illustrated in fig. 2) and the determination module 250. The arrow illustrates the trajectory (i.e., direction of movement) of the object in various scenes.
Fig. 3A illustrates a first object at a first point in time at a location 302 and a second object at a location 304. As indicated above, capacitive images may be obtained at different time periods, and the difference in the capacitive images is used to derive information about the input within the sensing region 120. Successive capacitive images obtained over successive time periods are used to track the motion of the input object. In some embodiments, the sensing frequency may be 60 hertz, which results in 60 capacitive images being obtained per second. Other embodiments may employ higher or lower sensing frequencies. The time interval between successive capacitive images is the inverse of the sensing frequency.
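A small Python sketch of the per-frame bookkeeping implied here; the record layout is an assumption for illustration, with the time interval between frames taken as the inverse of the sensing frequency.

```python
from collections import deque

# Hypothetical per-frame position record; the time interval between
# consecutive capacitive images is the inverse of the sensing frequency.
SENSING_FREQUENCY_HZ = 60.0
FRAME_INTERVAL_S = 1.0 / SENSING_FREQUENCY_HZ   # ~16.7 ms between images

# Keep the last few frames: each entry stores a timestamp and the X-Y
# location detected for each tracked object in that capacitive image.
history = deque(maxlen=3)
history.append({"timestamp": 0.0,
                "positions": {"A": (10.0, 40.0), "B": (30.0, 40.0)}})
history.append({"timestamp": FRAME_INTERVAL_S,
                "positions": {"A": (12.0, 38.0), "B": (28.0, 38.0)}})

# Time lapse between the two stored frames, computed from their timestamps.
time_lapse_old = history[1]["timestamp"] - history[0]["timestamp"]
```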
At a first point in time in fig. 3A, the object is at positions 302 and 304 when the capacitive image is captured. At a second point in time, only one object is detected at location 306. The second object has left the sensing region 120. If the first object and the second object follow the track marked by the arrow in fig. 3A, it appears that the first object is the object detected at the position 306. In other words, the first object has moved from location 302 to location 306. The second object has moved from location 304 to a location outside of the sensing region 120.
However, fig. 3B illustrates an alternative scenario. In fig. 3B, at a first point in time, the first object is again illustrated at location 302 and the second object is again illustrated at location 304. At a second point in time, only one object is detected at location 306. If the first object and the second object follow the trajectory marked by the arrow in fig. 3B, it appears that the second object is the object detected at the position 306. In other words, the second object has moved from location 304 to location 306. The first object has moved from location 302 to a location outside of the sensing region 120.
Some object tracking algorithms that determine which object has moved to location 306 at the second point in time may identify the wrong object at location 306. For example, some algorithms take the distances between the previous locations (location 302 and location 304) and the current location (location 306) and select the smaller distance as the correct match. In other words, if location 306 is closer to location 302, then the first object located at location 302 is identified as the object at location 306. If location 306 is closer to location 304, then the second object located at location 304 is identified as the object at location 306. However, if one or more objects have left the sensing region 120 and two or more of the calculated distances are close, the tracking algorithm may make a false match. As illustrated in FIGS. 3A and 3B, location 306 is substantially the same distance from location 302 and location 304. Thus, a simple algorithm that considers only the distances between the previous locations (locations 302 and 304) and the current location (location 306) may make mistakes and may incorrectly identify which of the two objects has moved to location 306.
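The ambiguity can be reproduced in a toy example with invented coordinates: when location 306 is essentially equidistant from locations 302 and 304, a distance-only rule has nothing reliable to decide on.

```python
# Hypothetical illustration of the naive distance-only matching rule.
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

loc_302 = (20.0, 50.0)   # first object's previous position
loc_304 = (40.0, 50.0)   # second object's previous position
loc_306 = (30.0, 40.0)   # the single position detected now

d_a = dist(loc_302, loc_306)
d_b = dist(loc_304, loc_306)
# d_a == d_b here, so picking the smaller distance cannot reliably say
# which object stayed and which one left the sensing region.
```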
FIG. 3C illustrates an algorithm that considers the capacitive images (capacitive frames) captured during at least two previous time periods to predict the positions of the objects in the sensing region 120 at the current time. Those predicted positions are then compared to the actual position detected at the current time, and the object with the closest predicted position is selected as the object that remains in the sensing region 120.
In FIG. 3C, at the current point in time, an object is detected at location 306. At the point in time before that (the previous time), two objects are detected, one at location 302 and one at location 304. At the point in time before the previous time (the previous-previous time), the two objects are detected at locations 310 and 312. The trajectories of the two objects from the previous-previous time to the previous time are used to make predictions about the positions of the two objects at the current time. Those predictions are shown as locations 320 and 322. The algorithm determines that predicted location 320 is closer to actual location 306 than predicted location 322 is. Thus, the object predicted to be at location 320 is matched to the object at location 306. The other object is not detected in the sensing region 120 and is thus determined to be located outside the sensing region 120.
The algorithm that uses at least two previous time periods to determine which object is at the current position may be illustrated by a series of formulas. The object at position 302 is labeled "A" and the object at position 304 is labeled "B". The velocity of object A as it moves across the sensing region 120 may be determined by subtracting the position of object A at position 310 from the position of object A at position 302 and dividing by the time interval between the captures of those two positions:
Speed_prev-A = (Position_prev-A - Position_prev-prev-A) / time_lapse_old
The position of each object at the various points in time is stored in memory as an X-Y location within the sensing region 120. An associated timestamp for each location is also stored. Position_prev-A is the position of object A at the previous point in time (illustrated as 302 in FIG. 3C). Position_prev-prev-A is the position of object A at the previous-previous point in time (illustrated as 310 in FIG. 3C). The difference between those two positions is divided by the time interval that elapsed between the points in time at which those two positions were determined (denoted time_lapse_old). The time interval time_lapse_old is determined from the difference between the stored timestamps. The difference in position divided by the elapsed time yields the velocity Speed_prev-A of object A as it moves from position 310 to position 302.
A similar calculation is performed to determine the velocity of object B as it moves from position 312 to position 304:
Speed_prev-B = (Position_prev-B - Position_prev-prev-B) / time_lapse_old
With the velocity of each of objects A and B, a predicted position can be calculated for each object. The velocity of the object, multiplied by the time that elapses between the previous point in time and the time of the current position, is added to the position at the previous point in time:
Position_predict-A = Position_prev-A + Speed_prev-A * time_lapse_new
Position_prev-A is the position of object A at the previous point in time (illustrated as 302 in FIG. 3C). Speed_prev-A is calculated as shown above. The time interval time_lapse_new is the time that elapses between the time at which location 302 was determined (the previous time) and the time at which location 306 was determined (the current time). Adding Speed_prev-A * time_lapse_new to Position_prev-A yields the predicted position Position_predict-A of object A at the current time. This predicted position Position_predict-A is illustrated as location 320 in FIG. 3C.
A similar calculation is performed to predict the position of object B at the current time using the following formula:
Position_predict-B = Position_prev-B + Speed_prev-B * time_lapse_new
The predicted position of object B is illustrated as location 322 in FIG. 3C. As described above, the algorithm determines that predicted location 320 is closer to actual location 306 than predicted location 322 is. One way to determine which object is closer is to compare the absolute values of the differences between each predicted position and the current position:
|Position_predict-A - Position_current| < |Position_predict-B - Position_current|
In this example, object A is closer to the current location at location 306. Thus, object A is matched to the object at location 306, and object B is determined to be outside the sensing region 120.
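The formulas above translate almost directly into code. Below is a minimal Python sketch using variable names that mirror the patent's notation but invented coordinates and an assumed 60 Hz frame interval; it is an illustration, not the patent's implementation.

```python
import math

def predict(position_prev, position_prev_prev, time_lapse_old, time_lapse_new):
    """Position_predict = Position_prev + Speed_prev * time_lapse_new, where
    Speed_prev = (Position_prev - Position_prev_prev) / time_lapse_old."""
    speed = tuple((p - pp) / time_lapse_old
                  for p, pp in zip(position_prev, position_prev_prev))
    return tuple(p + s * time_lapse_new for p, s in zip(position_prev, speed))

# Assumed 60 Hz sensing, so both time lapses are one frame interval here.
time_lapse_old = time_lapse_new = 1.0 / 60.0

# Invented X-Y positions for objects A and B in the two previous frames.
position_prev_prev_A, position_prev_A = (10.0, 55.0), (20.0, 50.0)
position_prev_prev_B, position_prev_B = (45.0, 60.0), (40.0, 50.0)

position_predict_A = predict(position_prev_A, position_prev_prev_A,
                             time_lapse_old, time_lapse_new)   # (30.0, 45.0)
position_predict_B = predict(position_prev_B, position_prev_prev_B,
                             time_lapse_old, time_lapse_new)   # (35.0, 40.0)

position_current = (30.0, 45.0)   # the single position detected this frame
error_A = math.dist(position_predict_A, position_current)
error_B = math.dist(position_predict_B, position_current)
remaining = "A" if error_A < error_B else "B"   # "A" remains; "B" has left
```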
Fig. 4A-4C illustrate another example object tracking algorithm in accordance with an embodiment. The process described with respect to fig. 3A-3C functions similarly in fig. 4A-4C. Fig. 4A-4C illustrate the tracking of three objects in the sensing region 120. Circles (402, 404, 406, etc.) illustrated in fig. 4A-4C represent the location of objects in the sensing region 120. As the object moves across the sensing region 120, the position of the object is updated by the receiver module 240 (illustrated in fig. 2) and the determination module 250. The arrows illustrate the trajectories of objects in various scenes.
Fig. 4A illustrates a first object a at a first point in time at a location 402, a second object B at a location 404, and a third object C at a location 406. At a second point in time, two objects are detected at locations 408 and 410. The arrow illustrates the trajectory of three objects. If three objects move across sensing region 120 in the trajectory illustrated by the three arrows, object a moves to position 408 and object B moves to position 410. Object C moves to a position outside of sensing region 120.
Fig. 4B illustrates an alternative scenario in which three objects move in different trajectories than those illustrated in fig. 4A. If three objects move across sensing region 120 in the trajectory illustrated by the three arrows, object B moves to position 408 and object C moves to position 410. Object a moves to a position outside of sensing region 120. Thus, a simple algorithm that considers the distance between the previous locations (402, 404, and 406) and the current locations (locations 408 and 410) may make mistakes and may not correctly identify which of the objects has left the sensing region 120.
Fig. 4C illustrates an algorithm for determining which of the three objects has left the sensing region 120. In fig. 4C, the algorithm predicts the position of the current time object in the sensing region 120 taking into account the capacitive images captured during at least two previous time periods. Those predicted locations are then compared to the actual locations detected at the current time, and the closest predicted locations are selected as the objects that remain in the sensing region 120.
The formulas described above with respect to FIG. 3C may also be used to predict the locations of the objects in FIG. 4C. First, the velocity of each of the three objects is calculated:
Speed_prev-A = (Position_prev-A - Position_prev-prev-A) / time_lapse_old
Speed_prev-B = (Position_prev-B - Position_prev-prev-B) / time_lapse_old
Speed_prev-C = (Position_prev-C - Position_prev-prev-C) / time_lapse_old
Then, a predicted position is calculated for each of the three objects:
Position_predict-A = Position_prev-A + Speed_prev-A * time_lapse_new
Position_predict-B = Position_prev-B + Speed_prev-B * time_lapse_new
Position_predict-C = Position_prev-C + Speed_prev-C * time_lapse_new
The predicted positions (420, 422, and 424) are compared to the detected positions (408 and 410) of the objects remaining in the sensing region 120, and the best match is determined to be object B at location 408 and object C at location 410.
To reiterate, at the previous point in time, three objects are detected at locations 402, 404, and 406. At the time before that (the previous-previous time), the objects are detected at locations 412, 414, and 416. As described above, the velocity of each object between the previous-previous time point and the previous time point is used to predict the position of that object at the current time point. Those predictions are illustrated as locations 420, 422, and 424. The algorithm then determines that predicted position 422 is closest to current position 408 and that predicted position 424 is closest to current position 410. The algorithm concludes that object B has moved to location 408 and that object C has moved to location 410. Object A is outside the sensing region 120.
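For three or more objects, the same prediction step can feed a best-match assignment between predicted and detected positions. The patent does not spell out a particular assignment procedure, so the greedy nearest-pair matching below is just one plausible sketch, with invented coordinates loosely following FIG. 4C.

```python
import math

def assign_objects(predicted, detected):
    """Greedily pair each detected position with the nearest unclaimed
    prediction; objects left unpaired are assumed to have left the region.

    predicted: dict mapping object id -> predicted (x, y)
    detected:  list of (x, y) positions seen in the current frame
    """
    pairs = sorted(
        ((math.dist(pred, det), obj_id, det)
         for obj_id, pred in predicted.items() for det in detected),
        key=lambda item: item[0],
    )
    matched, used_objects, used_detections = {}, set(), set()
    for _, obj_id, det in pairs:
        if obj_id in used_objects or det in used_detections:
            continue
        matched[obj_id] = det
        used_objects.add(obj_id)
        used_detections.add(det)
    left_region = set(predicted) - used_objects
    return matched, left_region

# Invented numbers: B and C remain in the sensing region, A has left.
predicted = {"A": (5.0, 10.0), "B": (30.0, 42.0), "C": (52.0, 40.0)}
detected = [(31.0, 41.0), (50.0, 41.0)]
matched, left_region = assign_objects(predicted, detected)
# matched == {"B": (31.0, 41.0), "C": (50.0, 41.0)}, left_region == {"A"}
```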
FIG. 5 is a flow chart illustrating a method 500 for tracking objects on a touch sensing device in accordance with one embodiment of the invention. Although method steps are described in connection with fig. 1-4, one skilled in the art will appreciate that any system configured to perform these method steps in any feasible order falls within the scope of the present invention. In various embodiments, the hardware and/or software elements described in fig. 1-4 are configured to perform the method steps of fig. 5. It is contemplated that method 500 may be performed using other suitable input devices.
The method 500 begins at step 510, where the processing system 110 determines a first location of a first object and a first location of a second object within the sensing region 120. As an example, the receiver module 240 may employ an absolute or transcapacitive sensing procedure as described above to receive a resulting signal indicative of an input within the sensing region 120. The receiver module 240 passes the resulting signal to the determination module 250 for determining the presence of an input object and passes the resulting signal to the memory 260 for storage. The stored signals are used in the algorithm described above to determine the position of the object and predict the position of the object.
The method 500 continues to step 520 where the processing system 110 determines that one of the objects has left the sensing region 120 and that one of the objects has remained in the current location at the sensing region 120. The receiver module 240 and the determination module 250 again receive signals from the touch sensing region 120 and perform this determination using an absolute or transcapacitive sensing procedure as described above.
At step 530, the processing system 110 calculates a velocity of each of the first object and the second object based on a difference in the position of each respective object in the two previous frames divided by a time interval between the two previous frames. For each frame, a timestamp is stored in memory 260. The time interval between two locations may be calculated by performing a subtraction of time stamps that correlate the two locations.
The method 500 continues to step 540 where the processing system 110 predicts a next location for each of the objects based on the first location of each of the objects and the velocity of each of the objects. Example formulas for calculating these predicted locations are described above with respect to fig. 3A-3C and 4A-4C, although other prediction algorithms may be used.
Finally, the method 500 proceeds to step 550, where the processing system 110 determines which of the objects remains in the sensing region 120 based at least in part on the predicted next locations. As described above, the predicted position closest to the detected position of an object remaining in the sensing region 120 determines which object has remained in the sensing region 120. Also, multiple objects may have remained in the sensing region 120 or left the sensing region 120, and the above algorithm may be performed on as many objects as needed to determine which objects remain in the sensing region 120.
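Tying steps 510 through 550 together, the processing flow might be sketched as a single routine like the following. The helper layout and frame-record format are hypothetical, and the capacitive acquisition itself is abstracted into pre-built frame records.

```python
import math

def track_objects(frames):
    """Sketch of the flow of method 500 for two (or more) tracked objects.

    frames: three dicts in time order (previous-previous, previous, current),
    each holding a "timestamp" and a "positions" dict of object id -> (x, y).
    The current frame is assumed to contain exactly one remaining detection.
    """
    prev_prev, prev, current = frames
    time_lapse_old = prev["timestamp"] - prev_prev["timestamp"]       # step 530
    time_lapse_new = current["timestamp"] - prev["timestamp"]

    predictions = {}                                                  # step 540
    for obj_id, pos_prev in prev["positions"].items():   # positions from step 510
        pos_pp = prev_prev["positions"][obj_id]
        speed = tuple((a - b) / time_lapse_old for a, b in zip(pos_prev, pos_pp))
        predictions[obj_id] = tuple(a + s * time_lapse_new
                                    for a, s in zip(pos_prev, speed))

    detected = next(iter(current["positions"].values()))   # detection from step 520
    return min(predictions,                                           # step 550
               key=lambda obj_id: math.dist(predictions[obj_id], detected))
```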
Thus, the embodiments and examples set forth herein are presented to best explain the embodiments in accordance with the present technology and its particular application and to thereby enable those skilled in the art to make and use the invention. However, those skilled in the art will recognize that the foregoing description and examples have been presented for the purpose of illustration and example only. The description as set forth is not intended to be exhaustive or to limit the invention to the precise form disclosed.
In view of the foregoing, the scope of the present disclosure is to be determined by the appended claims.
Reference numerals and signs
(The table of reference numerals appears only as an image in the original publication.)

Claims (19)

1. A method for tracking an object on a touch sensing device, comprising:
determining a first position of a first object and a first position of a second object in a sensing region;
determining that one of the objects has left the sensing region and that one of the objects has remained in a current position in the sensing region;
calculating a velocity of each of the first object and the second object based on a difference in a position of each respective object in two previous frames divided by a time interval between the two previous frames;
predicting a next position of each of the objects based on the first position of each of the objects and the velocity of each of the objects; and
determining, based at least in part on the predicted next positions, which of the objects remains in the sensing region.
2. The method of claim 1, wherein determining which of the objects remains in the sensing region based at least in part on the predicted next positions comprises determining, for each of the predicted next positions, an absolute value of that predicted next position minus the current position of the object remaining in the sensing region, wherein the smaller of the absolute values identifies the object remaining in the sensing region.
3. The method of claim 1, wherein the time interval is determined by calculating a difference between timestamps associated with the two previous frames.
4. The method of claim 1, wherein predicting the next position of each of the objects further comprises adding a velocity of an object multiplied by a time interval between a current frame and a previous frame to the first position of the object.
5. The method of claim 1, wherein determining a difference in the position of each respective object in two previous frames comprises determining a difference in the position of each of the objects between the first position and a second position in an immediately preceding frame.
6. The method of claim 1, wherein the time interval between the two previous frames comprises an inverse of a sensing frequency of a touch sensor device.
7. The method of claim 1, wherein predicting the next position of each of the objects comprises predicting each next position along a line that coincides with a direction of movement of an object from its respective position during a previous frame to the first position.
8. An input device for capacitive touch sensing, comprising:
A capacitive touch sensor configured to:
detecting a first position of a first object and a first position of a second object in a sensing region;
detecting that one of the objects has left the sensing region and that one of the objects has remained in a current position in the sensing region; and
a processing system configured to:
calculating a velocity of each of the first object and the second object based on a difference in a position of each respective object in two previous frames divided by a time interval between the two previous frames;
predicting a next position of each of the objects based on the first position of each of the objects and the velocity of each of the objects; and
determine, based at least in part on the predicted next positions, which of the objects remains in the sensing region.
9. The input device of claim 8, further comprising:
a memory configured to store positions of the objects in at least two previous frames.
10. The input device of claim 8, wherein the processing system is further configured to determine which of the objects remains in the sensing region based at least in part on the predicted next locations by determining, for each of the predicted next locations, an absolute value of that predicted next location minus the current location of the object remaining in the sensing region, wherein the smaller of the absolute values identifies the object remaining in the sensing region.
11. The input device of claim 8, wherein the processing system is further configured to determine the time interval by calculating a difference between timestamps associated with the two previous frames.
12. The input device of claim 8, wherein the processing system is further configured to predict the next location of each of the objects by adding a speed of an object multiplied by a time interval between a current frame and a previous frame to the first location of the object.
13. The input device of claim 8, wherein the processing system is further configured to predict each next location of each of the objects by predicting the next location along a line that coincides with a direction of movement of an object from its respective location during a previous frame to the first location.
14. The input device of claim 8, wherein the time interval between the two previous frames comprises an inverse of a sensing frequency of a touch sensor device.
15. A processing system for tracking objects on a touch sensing device, the processing system comprising:
a determination module configured to:
determining a first position of a first object and a first position of a second object in a sensing region;
Determining that one of the objects has left the sensing region and that one of the objects has remained in a current position in the sensing region; and
a processor configured to:
calculating a velocity of each of the first object and the second object based on a difference in a position of each respective object in two previous frames divided by a time interval between the two previous frames;
predicting a next position of each of the objects based on the first position of each of the objects and the velocity of each of the objects; and
determine, based at least in part on the predicted next positions, which of the objects remains in the sensing region.
16. The processing system of claim 15, wherein the processor is further configured to determine which of the objects remains in the sensing region based at least in part on the predicted next locations by determining the smaller of the absolute values of each of the predicted next locations minus the current location of the object remaining in the sensing region.
17. The processing system of claim 15, wherein the processor is further configured to determine the time interval by calculating a difference between timestamps associated with the two previous frames.
18. The processing system of claim 15, wherein the processor is further configured to predict the next location of each of the objects by adding a speed of an object multiplied by a time interval between a current frame and a previous frame to the first location of the object.
19. The processing system of claim 15, wherein the processor is further configured to predict each next location of each of the objects by predicting the next location along a line that coincides with a direction of movement of an object from its respective location during a previous frame to the first location.
CN201710060861.2A 2017-01-25 2017-01-25 Object tracking using object velocity information Active CN108345415B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201710060861.2A CN108345415B (en) 2017-01-25 2017-01-25 Object tracking using object velocity information
PCT/US2018/012239 WO2018140200A1 (en) 2017-01-25 2018-01-03 Object tracking using object speed information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710060861.2A CN108345415B (en) 2017-01-25 2017-01-25 Object tracking using object velocity information

Publications (2)

Publication Number Publication Date
CN108345415A CN108345415A (en) 2018-07-31
CN108345415B (en) 2023-06-30

Family

ID=62961988

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710060861.2A Active CN108345415B (en) 2017-01-25 2017-01-25 Object tracking using object velocity information

Country Status (2)

Country Link
CN (1) CN108345415B (en)
WO (1) WO2018140200A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114264835B (en) * 2021-12-22 2023-11-17 上海集成电路研发中心有限公司 Method, device and chip for measuring rotation speed of fan

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102224480A (en) * 2008-10-21 2011-10-19 爱特梅尔公司 Multi-touch tracking
CN102986208A (en) * 2010-05-14 2013-03-20 株式会社理光 Imaging apparatus, image processing method, and recording medium for recording program thereon
CN103105957A (en) * 2011-11-14 2013-05-15 联想(北京)有限公司 Display method and electronic equipment
CN103425300A (en) * 2012-05-14 2013-12-04 北京汇冠新技术股份有限公司 Multipoint touch trajectory tracking method
CN103544977A (en) * 2012-07-16 2014-01-29 三星电子(中国)研发中心 Device and method for locating videos on basis of touch control
CN104081323A (en) * 2011-12-16 2014-10-01 平蛙实验室股份公司 Tracking objects on a touch surface
CN104798009A (en) * 2012-06-28 2015-07-22 辛纳普蒂克斯公司 Systems and methods for determining types of user input
CN105468214A (en) * 2014-08-16 2016-04-06 辛纳普蒂克斯公司 Location based object classification
CN105975119A (en) * 2016-04-21 2016-09-28 北京集创北方科技股份有限公司 Multi-target tracking method, and touch screen control method and system
CN106250863A (en) * 2016-08-09 2016-12-21 北京旷视科技有限公司 object tracking method and device
CN106326837A (en) * 2016-08-09 2017-01-11 北京旷视科技有限公司 Object tracking method and apparatus

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9542092B2 (en) * 2011-02-12 2017-01-10 Microsoft Technology Licensing, Llc Prediction-based touch contact tracking
TWI456448B (en) * 2011-08-30 2014-10-11 Pixart Imaging Inc Touch system with track detecting function and method thereof
EP2645218A1 (en) * 2012-03-29 2013-10-02 Sony Mobile Communications AB Method for operating an electronic device
WO2014109262A1 (en) * 2013-01-09 2014-07-17 シャープ株式会社 Touch panel system
US20160357280A1 (en) * 2015-06-04 2016-12-08 Uico, Llc Method and apparatus to implement two finger rotate gesture utilizing self-capacitance sensing on a touchscreen

Also Published As

Publication number Publication date
CN108345415A (en) 2018-07-31
WO2018140200A1 (en) 2018-08-02

Similar Documents

Publication Publication Date Title
US10268313B2 (en) Display guarding techniques
US9870109B2 (en) Device and method for localized force and proximity sensing
US9841850B2 (en) Device and method for proximity sensing with force imaging
US9075488B2 (en) Virtual geometries for integrated display and sensing devices
US9632638B2 (en) Device and method for force and proximity sensing employing an intermediate shield electrode layer
US9594458B2 (en) Shielding with display elements
CN105005422B (en) Interference detection using frequency modulation
US9367189B2 (en) Compensating for source line interference
US10175827B2 (en) Detecting an active pen using a capacitive sensing device
CN105468215B (en) Current feedback techniques for capacitive sensing
US9785296B2 (en) Force enhanced input device with shielded electrodes
US10037112B2 (en) Sensing an active device&#39;S transmission using timing interleaved with display updates
US9268435B2 (en) Single layer capacitive sensor and capacitive sensing input device
CN107239161B (en) Inflection-based calibration method for force detector
CN109313521B (en) Calibrating continuous-time receivers for capacitive sensing
CN108345415B (en) Object tracking using object velocity information
US9436337B2 (en) Switched capacitance techniques for input sensing
US9870105B2 (en) Far-field sensing with a display device having an integrated sensing device

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
    Effective date of registration: 20200708
    Applicant before: SYNAPTICS Inc. (California, USA)
    Applicant after: Xinchuan semiconductor (Hong Kong) Ltd. (Room 2004b, 20/F, Tower 5, 33 Canton Road, Tsim Sha Tsui, Kowloon, Hong Kong, China)
TA01 Transfer of patent application right
    Effective date of registration: 20211201
    Applicant before: Xinchuan semiconductor (Hong Kong) Ltd. (Room 2004b, 20/F, Tower 5, 33 Canton Road, Tsim Sha Tsui, Kowloon, Hong Kong, China)
    Applicant after: Howell tddi Ontario LLP (Ontario)
GR01 Patent grant