WO2012050510A1 - Touch determination by tomographic reconstruction - Google Patents

Touch determination by tomographic reconstruction

Info

Publication number
WO2012050510A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
points
data samples
values
sample
Application number
PCT/SE2011/051201
Other languages
French (fr)
Inventor
Tomas Christiansson
Mats Petter Wallander
Peter Juhlin
Original Assignee
Flatfrog Laboratories Ab
Application filed by Flatfrog Laboratories Ab filed Critical Flatfrog Laboratories Ab
Priority to US13/824,026 (published as US9411444B2)
Priority to EP11832837.6A (published as EP2628068A4)
Publication of WO2012050510A1


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03: Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041: Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03: Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041: Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/042: Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
    • G06F3/0428: Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means by sensing at the edges of the touch surface the interruption of optical paths, e.g. an illumination plane, parallel to the touch surface which may be virtual
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00: Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/041: Indexing scheme relating to G06F3/041 - G06F3/045
    • G06F2203/04109: FTIR in optical digitiser, i.e. touch detection by frustrating the total internal reflection within an optical waveguide due to changes of optical properties or deformation at the touch location

Definitions

  • the present invention relates to touch-sensitive panels and data processing techniques in relation to such panels.
  • in such an apparatus, a graphical user interface (GUI) may be provided on or at the panel; the GUI may be fixed or dynamic.
  • a fixed GUI may e.g. be in the form of printed matter placed over, under or inside the panel.
  • a dynamic GUI can be provided by a display screen integrated with, or placed underneath, the panel or by an image being projected onto the panel by a projector.
  • US2004/0252091 discloses an alternative technique which is based on frustrated total internal reflection (FTIR).
  • Light sheets are coupled into a panel to propagate inside the panel by total internal reflection.
  • Arrays of light sensors are located around the perimeter of the panel to detect the received light for each light sheet.
  • a coarse tomographic reconstruction of the light field across the panel surface is then created by geometrically back-tracing and triangulating all attenuations observed in the received light. This is stated to result in data regarding the position and size of each contact area.
  • US2009/0153519 discloses a panel capable of conducting signals.
  • a "tomograph” is positioned adjacent the panel with signal flow ports arrayed around the border of the panel at discrete locations. Signals (b) measured at the signal flow ports are tomograph- ically processed to generate a two-dimensional representation (x) of the conductivity on the panel, whereby touching objects on the panel surface can be detected.
  • the suggested method is both demanding in terms of processing and lacks suppression of high-frequency components, possibly leading to much noise in the 2D representation.
  • CT methods are well-known imaging methods which have been developed for medical purposes.
  • CT methods employ digital geometry processing to reconstruct an image of the inside of an object based on a large series of projection measurements through the object.
  • Various CT methods have been developed to enable efficient processing and/or precise image reconstruction, e.g. Filtered Back Projection, ART, SART, etc.
  • the projection measurements are carried out in accordance with a standard geometry which is given by the CT method.
  • Another objective is to provide a technique that enables determination of touch-related data at sufficient precision to discriminate between a plurality of objects in simultaneous contact with a touch surface.
  • a first aspect of the invention is a method of enabling touch determination based on an output signal from a touch-sensitive apparatus.
  • the touch-sensitive apparatus comprises a panel configured to conduct signals from a plurality of peripheral incoupling points to a plurality of peripheral outcoupling points, thereby defining detection lines that extend across a surface portion of the panel between pairs of incoupling and outcoupling points, at least one signal generator coupled to the incoupling points to generate the signals, and at least one signal detector coupled to the outcoupling points to generate the output signal.
  • the method comprises: processing the output signal to generate a set of data samples, wherein each data sample is indicative of detected energy on one of the detection lines and is defined by a signal value and first and second dimension values in a two-dimensional sample space, wherein the first and second dimension values define the location of the detection line on the surface portion, and wherein the data samples are non-uniformly arranged in the sample space;
  • obtaining adjustment factors for the set of data samples, wherein each adjustment factor is representative of the local density of data samples in the sample space for a respective data sample; and processing the set of the data samples by tomographic reconstruction, while applying the adjustment factors, to generate data indicative of a reconstructed distribution of an energy-related parameter within at least part of the surface portion.
  • the adjustment factor for a given data sample is calculated to represent the number of data samples within a region around the given data sample in the sample space.
  • the adjustment factor for a given data sample is calculated to represent an average of a set of smallest distances between the given data sample and neighboring data samples in the sample space.
  • the adjustment factor for a given data sample is calculated to represent an extent of a Voronoi cell or a set of Delaunay triangles in the sample space for the given data sample.
  • the reconstructed distribution comprises spatial data points, each spatial data point having a unique location on the surface portion and corresponding to a predetermined curve in the sample space, and the adjustment factor for a given data sample is calculated, for each spatial data point in a set of spatial data points, to represent the interaction between the predetermined curve of the spatial data point and a two-dimensional basis function located at the given data sample, wherein the basis function is given an extent in the sample space that is dependent on the local density.
  • the interaction may be calculated by evaluating a line integral of the basis function, along the predetermined curve.
  • the step of obtaining comprises: obtaining, for each spatial data point, a set of adjustment factors associated with a relevant set of data samples.
  • the step of processing the set of data samples may comprise: reconstructing each spatial data point by: scaling the signal value of each data sample in the relevant set of data samples by its corresponding adjustment factor and summing the thus-scaled signal values.
  • the predetermined curve is designed to include the shape of a predetermined one-dimensional filter function which extends in the first dimension of the sample space and which is centered on and reproduced at plural locations along the curve.
  • the interaction may be calculated by evaluating a surface integral of the combination of the predetermined curve and the basis function.
  • the step of processing the output signal comprises: obtaining a measurement value for each detection line and applying a filter function to generate a filtered signal value for each measurement value, wherein the filtered signal values form said signal values of the data samples.
  • the filter function may be a predetermined one-dimensional filter function which is applied in the first dimension of the sample space.
  • the step of applying the filter function may comprise: obtaining estimated signal values around each measurement value in the first dimension, and operating the filter function on the measurement value and the estimated signal values.
  • the filtered signal value may be generated as a weighted summation of the measurement values and the estimated signal values based on the filter function.
  • the estimated signal values may be obtained as measurement values of other detection lines, said other detection lines being selected as a best match to the extent of the filter function in the first dimension, or the estimated signal values may be generated at predetermined locations around the measurement value in the sample space.
  • the estimated signal values are generated by interpolation in the sample space based on the measurement values.
  • Each estimated signal value may be generated by interpolation of measurement values of neighboring data samples in the sample space.
  • the step of processing the output signal further may comprise: obtaining a predetermined two-dimensional interpolation function with nodes corresponding to the data samples, and calculating the estimated signal values according to the interpolation function and based on the measurement values of the data samples.
  • the reconstructed distribution comprises spatial data points, each spatial data point having a unique location on the surface portion and corresponding to a predetermined curve in the sample space.
  • the step of processing the set of data samples comprises: generating filtered signal values for the data samples by scaling the signal value of each data sample by a weight given by a predetermined filter function based on the distance of the data sample from the curve in the first dimension, and evaluating each spatial data point by: scaling the filtered signal value by the adjustment factor of the corresponding data sample and summing the thus-scaled filtered signal values.
  • the step of processing the set of data samples comprises: calculating Fourier transformation data for the data samples with respect to the first dimension only, and generating said data indicative of the reconstructed distribution by operating a two-dimensional inverse Fourier transform on the Fourier transformation data, wherein the adjustment factors are applied in the step of calculating Fourier transformation data.
  • the step of calculating the Fourier transformation data may comprise: transforming the data samples to a Fourier domain to produce uniformly arranged Fourier-transformed data samples with respect to the first and second dimensions, and transforming the Fourier-transformed data samples back to the sample space with respect to the second dimension only.
  • the first dimension value is a distance of the detection line in the plane of the panel from a predetermined origin
  • the second dimension value is a rotation angle of the detection line in the plane of the panel.
  • the first dimension value is a rotation angle of the detection line in the plane of the panel
  • the second dimension value is an angular location of the incoupling or outcoupling point of the detection line.
  • a second aspect of the invention is a computer program product comprising computer code which, when executed on a data-processing system, is adapted to carry out the method of the first aspect.
  • a third aspect of the invention is a device for enabling touch determination based on an output signal from a touch-sensitive apparatus.
  • the touch-sensitive apparatus comprises a panel configured to conduct signals from a plurality of peripheral incoupling points to a plurality of peripheral outcoupling points, thereby defining detection lines that extend across a surface portion of the panel between pairs of incoupling and outcoupling points, signal generating means coupled to the incoupling points to generate the signals, and signal detecting means coupled to the outcoupling points to generate the output signal.
  • the device comprises: means for processing the output signal to generate a set of data samples, wherein each data sample is indicative of detected energy on one of the detection lines and is defined by a signal value and first and second dimension values in a two-dimensional sample space, wherein the first and second dimension values define the location of the detection line on the surface portion, and wherein the data samples are non-uniformly arranged in the sample space; means for obtaining adjustment factors for the set of data samples, wherein each adjustment factor is representative of the local density of data samples in the sample space for a respective data sample; and means for processing the set of the data samples by tomographic reconstruction, while applying the adjustment factors, to generate data indicative of a reconstructed distribution of an energy-related parameter within at least part of the surface portion.
  • a fourth aspect of the invention is a touch-sensitive apparatus, comprising: a panel configured to conduct signals from a plurality of peripheral incoupling points to a plurality of peripheral outcoupling points, thereby defining detection lines that extend across a surface portion of the panel between pairs of incoupling and outcoupling points; means for generating the signals at the incoupling points; means for generating an output signal based on detected signals at the outcoupling points; and the device for enabling touch determination of the third aspect.
  • a fifth aspect of the invention is a touch-sensitive apparatus, comprising: a panel configured to conduct signals from a plurality of peripheral incoupling points to a plurality of peripheral outcoupling points, thereby defining detection lines that extend across a surface portion of the panel between pairs of incoupling and outcoupling points; at least one signal generator coupled to the incoupling points to generate the signals; at least one signal detector coupled to the outcoupling points to generate an output signal; and a signal processor connected to receive the output signal and configured to: process the output signal to generate a set of data samples, wherein each data sample is indicative of detected energy on one of the detection lines and is defined by a signal value and first and second dimension values in a two-dimensional sample space, wherein the first and second dimension values define the location of the detection line on the surface portion, and wherein the data samples are non-uniformly arranged in the sample space; obtain adjustment factors for the set of data samples, wherein each adjustment factor is representative of the local density of data samples in the sample space for a respective data sample; and process the set of data samples by tomographic reconstruction, while applying the adjustment factors, to generate data indicative of a reconstructed distribution of an energy-related parameter within at least part of the surface portion.
  • Fig. 1 is a plan view of a touch-sensitive apparatus.
  • Fig. 2 is a top plan view of a touch-sensitive apparatus with an interleaved arrangement of emitters and sensors.
  • Figs 3A-3B are side and top plan views of touch-sensitive systems operating by frustrated total internal reflection (FTIR).
  • Fig. 4 illustrates the underlying principle of the Projection-Slice Theorem.
  • Fig. 5 illustrates a parallel geometry used in tomographic reconstruction.
  • Figs 6A-6B are graphs of a regular arrangement of detection lines in the coordinate system of a touch surface and in a sample space, respectively.
  • Figs 7A-7H illustrate a starting point, intermediate results and final results of a back projection process using a parallel geometry, as well as a filter used in the process.
  • Figs 8A -8B are graphs of a non-regular arrangement of detection lines in the coordinate system of a touch surface and in a sample space, respectively.
  • Fig. 9 is a graph of sampling points defined by the interleaved arrangement in Fig. 2.
  • Fig. 10A is a flow chart of a reconstruction method
  • Fig. 10B is a block diagram of a device that implements the method of Fig. 10A.
  • Figs 11A-11B illustrate a first embodiment for determining adjustment factors.
  • Fig. 12 illustrates a second embodiment for determining adjustment factors.
  • Fig. 13 illustrates a third embodiment for determining adjustment factors.
  • Fig. 14 illustrates a fourth embodiment for determining adjustment factors.
  • Fig. 15 illustrates the concept of interpolating basis functions.
  • Figs 16A-16B illustrate a correspondence between reconstruction points on the touch surface and reconstruction lines in the sample space.
  • Figs 17A-17D illustrate the interaction between a reconstruction line and an interpolating basis function.
  • Figs 18A-18D illustrate the evaluation of a surface integral defined by a reconstruction line, a filter function and a basis function.
  • Fig. 19 is a reference image mapped to an interleaved arrangement.
  • Fig. 20 illustrates a first embodiment of modified FBP reconstruction.
  • Figs 21A-21B are graphs of different filters for use in modified FBP reconstruction.
  • Figs 22A-22B show the reconstructed attenuation fields obtained by the first embodiment, with and without adjustment factors.
  • Fig. 23A is a graph of Delaunay triangles defined in the sample space of Fig. 9, and Fig. 23B shows the reconstructed attenuation field obtained by a second embodiment.
  • Fig. 24A illustrates a third embodiment of modified FBP reconstruction
  • Fig. 24B shows the reconstructed attenuation field obtained by the third embodiment.
  • Fig. 25A-25B illustrate a fourth embodiment of modified FBP reconstruction
  • Fig. 25C shows the reconstructed attenuation field obtained by the fourth embodiment.
  • Fig. 26 illustrates a fan geometry used in tomographic reconstruction.
  • Fig. 27 illustrates the use of a circle for mapping detection lines to a sample space for fan geometry reconstruction.
  • the present invention relates to techniques for enabling extraction of touch data for at least one object, and typically multiple objects, in contact with a touch surface of a touch-sensitive apparatus.
  • the description starts out by presenting the underlying concept of such a touch-sensitive apparatus, especially an apparatus operating by frustrated total internal reflection (FTIR) of light.
  • the description continues to generally explain and exemplify the theory of tomographic reconstruction and its use of standard geometries. Then follows an example of an overall method for touch data extraction involving tomographic reconstruction.
  • Fig. 1 illustrates a touch-sensitive apparatus 100 which is based on the concept of transmitting energy of some form across a touch surface 1, such that an object that is brought into close vicinity of, or in contact with, the touch surface 1 causes a local decrease in the transmitted energy.
  • the touch-sensitive apparatus 100 includes an arrangement of emitters and sensors, which are distributed along the periphery of the touch surface. Each pair of an emitter and a sensor defines a detection line, which corresponds to the propagation path for an emitted signal from the emitter to the sensor.
  • detection line D is illustrated to extend from emitter 2 to sensor 3, although it should be understood that the arrangement typically defines a dense grid of intersecting detection lines, each corresponding to a signal being emitted by an emitter and detected by a sensor. Any object that touches the touch surface along the extent of the detection line D will thus decrease its energy, as measured by the sensor 3.
  • the arrangement of sensors is electrically connected to a signal processor 10, which samples and processes an output signal from the arrangement.
  • the output signal is indicative of the received energy at each sensor 3.
  • the signal processor 10 may be configured to process the output signal by a tomographic technique to recreate an image of the distribution of an energy-related parameter (for simplicity, referred to as "energy distribution" in the following) across the touch surface 1.
  • the energy distribution may be further processed by the signal processor 10 or by a separate device (not shown) for touch determination, which may involve extraction of touch data, such as a position (e.g. x, y coordinates), a shape or an area of each touching object.
  • the touch-sensitive apparatus 100 also includes a controller 12 which is connected to selectively control the activation of the emitters 2.
  • the signal processor 10 and the controller 12 may be configured as separate units, or they may be incorporated in a single unit.
  • One or both of the signal processor 10 and the controller 12 may be at least partially implemented by software executed by a processing unit.
  • the touch-sensitive apparatus 100 may be designed to be used with a display device or monitor, e.g. as described in the Background section.
  • a display device has a rectangular extent, and thus the touch-sensitive apparatus 100 (the touch surface 1) is also likely to be designed with a rectangular shape.
  • the emitters 2 and sensors 3 all have a fixed position around the perimeter of the touch surface 1.
  • this puts certain limitations on the use of standard tomographic techniques for recreating/reconstructing the energy distribution within the touch surface 1.
  • At least a subset of the emitters 2 may be arranged to emit energy in the shape of a beam or wave that diverges in the plane of the touch surface 1, and at least a subset of the sensors 3 may be arranged to receive energy over a wide range of angles (field of view).
  • the individual emitter 2 may be configured to emit a set of separate beams that propagate to a number of sensors 3.
  • each emitter 2 transmits energy to a plurality of sensors 3, and each sensor 3 receives energy from a plurality of emitters 2.
  • the touch-sensitive apparatus 100 may be configured to permit transmission of energy in one of many different forms.
  • the emitted signals may thus be any radiation or wave energy that can travel in and across the touch surface 1 including, without limitation, light waves in the visible or infrared or ultraviolet spectral regions, electrical energy, electromagnetic or magnetic energy, or sonic and ultrasonic energy or vibration energy.
  • Fig. 3A is a side view of a touch-sensitive apparatus 100 which includes a light transmissive panel 4, one or more light emitters 2 (one shown) and one or more light sensors 3 (one shown).
  • the panel 4 defines two opposite and generally parallel surfaces 5, 6 and may be planar or curved.
  • a radiation propagation channel is provided between two boundary surfaces 5, 6 of the panel 4, wherein at least one of the boundary surfaces allows the propagating light to interact with a touching object 7.
  • the light from the emitter(s) 2 propagates by total internal reflection (TIR) in the radiation propagation channel, and the sensors 3 are arranged at the periphery of the panel 4 to generate a respective measurement signal which is indicative of the energy of received light.
  • the light may be coupled into and out of the panel 4 directly via the edge portion that connects the top and bottom surfaces 5, 6 of the panel 4.
  • a separate coupling element (e.g. in the shape of a wedge) may be attached to the edge portion or to the top or bottom surface 5, 6 of the panel 4 to couple the light into and/or out of the panel 4.
  • part of the light may be scattered by the object 7, part of the light may be absorbed by the object 7, and part of the light may continue to propagate unaffected.
  • this type of apparatus is referred to as an "FTIR system" (FTIR - Frustrated Total Internal Reflection) in the following.
  • the touch-sensitive apparatus 100 may be operated to measure the energy of the light transmitted through the panel 4 on a plurality of detection lines. This may, e.g., be done by activating a set of spaced-apart emitters 2 to generate a corresponding number of light sheets inside the panel 4, and by operating a set of sensors 3 to measure the transmitted energy of each light sheet.
  • Such an embodiment is illustrated in Fig. 3B, where each emitter 2 generates a beam of light that expands in the plane of the panel 4 while propagating away from the emitter 2. Each beam propagates from one or more entry or incoupling points within an incoupling site on the panel 4.
  • Arrays of light sensors 3 are located around the perimeter of the panel 4 to receive the light from the emitters 2 at a number of spaced-apart outcoupling points within an outcoupling site on the panel 4. It should be understood that the incoupling and outcoupling points merely refer to the position where the beam enters and leaves, respectively, the panel 4. Thus, one emitter/sensor may be optically coupled to a number of incoupling/outcoupling points. In the example of Fig. 3B, however, the detection lines D are defined by individual emitter-sensor pairs.
  • the light sensors 3 collectively provide an output signal, which is received and sampled by the signal processor 10.
  • the output signal contains a number of sub-signals, also denoted “projection signals”, each representing the energy of light emitted by a certain light emitter 2 and received by a certain light sensor 3, i.e. the received energy on a certain detection line.
  • the signal processor 10 may need to process the output signal for identification of the individual sub-signals.
  • the signal processor 10 is able to obtain an ensemble of measurement values that contains information about the distribution of an energy-related parameter across the touch surface 1.
  • the light emitters 2 can be any type of device capable of emitting light in a desired wavelength range, for example a diode laser, a VCSEL (vertical-cavity surface-emitting laser), or alternatively an LED (light-emitting diode), an incandescent lamp, a halogen lamp, etc.
  • the light sensors 3 can be any type of device capable of detecting the energy of light emitted by the set of emitters, such as a photodetector, an optical detector, a photo-resistor, a photovoltaic cell, a photodiode, a reverse-biased LED acting as photodiode, a charge-coupled device (CCD), etc.
  • the emitters 2 may be activated in sequence, such that the received energy is measured by the sensors 3 for each light sheet separately. Alternatively, all or a subset of the emitters 2 may be activated concurrently, e.g. by modulating the emitters 2 such that the light energy measured by the sensors 3 can be separated into the sub-signals by a corresponding de-modulation.
  • the spacing between neighboring emitters 2 and sensors 3 is generally from about 1 mm to about 20 mm.
  • the spacing is generally in the 2-10 mm range.
  • the emitters 2 and sensors 3 may partially or wholly overlap, as seen in a plan view. This can be accomplished by placing the emitters 2 and sensors 3 on opposite sides of the panel 4, or in some equivalent optical arrangement.
  • Fig. 3 merely illustrates one example of an FTIR system.
  • the detection lines may instead be generated by sweeping or scanning one or more beams of light inside the panel.
  • Such and other examples of FTIR systems are e.g. disclosed in US6972753, US7432893, US2006/0114237, WO2010/006884, WO2010/006885, WO2010/006886 and WO2010/064983, which are all incorporated herein by this reference.
  • the inventive concept may be advantageously applied to such alternative FTIR systems as well.
  • the transmitted light may carry information about a plurality of touches.
  • the transmission along a detection line may be modeled as a product of local transmission values: T_k = ∏_v T_v = ∏_v (1 − a_v), where T_k is the transmission for the k:th detection line, T_v is the transmission at a specific position along the detection line, and a_v is the relative attenuation at the same point.
  • the total transmission (modeled) along a detection line is thus: T_k = I_k / I_0,k = exp(−∫ a(x) dx), where I_k represents the transmitted energy on detection line D_k with attenuating object(s), I_0,k represents the transmitted energy on detection line D_k without attenuating objects, and a(x) is the attenuation coefficient along the detection line D_k.
  • the measurement values may be divided by a respective background value.
  • the measurement values are thereby converted into transmission values, which thus represent the fraction of the available light energy that has been measured on each of the detection lines.
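  • By way of illustration only (this sketch is not part of the application text), the conversion from measurement values to transmission and attenuation values may be written as follows in Python/NumPy, assuming measured and background each hold one energy value per detection line:

    import numpy as np

    def to_transmission_and_attenuation(measured, background):
        # T_k = I_k / I_0,k: fraction of the available light energy measured
        # on each detection line (background = touch-free reference values).
        transmission = np.asarray(measured, float) / np.asarray(background, float)
        # Logarithmic attenuation, i.e. an estimate of the line integral of
        # a(x) along the detection line; clip to avoid log(0).
        attenuation = -np.log(np.clip(transmission, 1e-9, None))
        return transmission, attenuation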
  • Tomographic reconstruction, which is well-known per se, may be based on the mathematics describing the Radon transform and its inverse. The following theoretical discussion is limited to the 2D Radon transform.
  • the general concept of tomography is to do imaging of a medium by measuring line integrals through the medium for a large set of angles and positions. The line integrals are measured through the image plane.
  • To find the inverse, i.e. the original image, many algorithms use the so-called Projection-Slice Theorem.
  • FBP is a widely used algorithm, and there are many variants and extensions thereof. Below, a brief outline of the underlying mathematics for FBP is given, for the sole purpose of facilitating the following discussion about the inventive concept and its merits.
  • the Projection-Slice Theorem states that given a two-dimensional function f(x, y), the one- and two-dimensional Fourier transforms F_1 and F_2, a projection operator R that projects a two-dimensional (2D) function onto a one-dimensional (1D) line, and a slice operator S_1 that extracts a central slice of a function, the following calculations are equal: F_1 R f(x, y) = S_1 F_2 f(x, y).
  • This relation is illustrated in Fig. 4.
  • the right-hand side of the equation above essentially extracts a 1D line of the 2D Fourier transform of the function f(x, y).
  • the line passes through the origin of the 2D Fourier plane, as shown in the right-hand part of Fig. 4.
  • the left-hand side of the equation starts by projecting (i.e. integrating along 1D lines in the projection direction p) the 2D function onto a 1D line (orthogonal to the projection direction p), which forms a "projection" that is made up of the projection values for all the different detection lines extending in the projection direction p.
  • taking a 1D Fourier transform of the projection gives the same result as taking a slice from the 2D Fourier transform of the function f(x, y).
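  • The theorem is easy to verify numerically. The following sketch (an illustration added here, using an arbitrary synthetic image) compares the 1D FFT of a projection onto the x-axis with the corresponding central slice of the 2D FFT:

    import numpy as np

    rng = np.random.default_rng(0)
    f = rng.random((128, 128))          # synthetic 2D function f(x, y)

    # Project onto a 1D line by integrating along the projection direction (y)
    projection = f.sum(axis=0)

    lhs = np.fft.fft(projection)        # 1D Fourier transform of the projection
    rhs = np.fft.fft2(f)[0, :]          # central slice (k_y = 0) of the 2D transform

    print(np.allclose(lhs, rhs))        # True, up to floating-point precision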
  • the function f(x, y) corresponds to the attenuation coefficient field a(x) (generally denoted "attenuation field” herein) to be reconstructed.
  • the attenuation vanishes outside the touch surface.
  • the attenuation field is thus assumed to be contained within a disc Ω_r = {x : |x| ≤ r}, with the attenuation field set to zero outside of this disc.
  • let θ = (cos φ, sin φ) be a unit vector denoting the direction normal to the detection line, and let s be the shortest distance (with sign) from the detection line to the origin (taken as the centre of the screen, cf. Fig. 4).
  • the vector θ is perpendicular to the above-mentioned projection direction vector p.
  • we can denote g(θ, s) by g(φ, s), since the latter notation more clearly indicates that g is a function of two variables and not a function of one scalar and one arbitrary vector.
  • the projection value for a detection line could thus be expressed as g(φ, s), i.e. as a value of the Radon transform Ra of the attenuation field.
  • the Radon transform operator is not invertible in the general sense. To be able to find a stable inverse, we need to impose restrictions on the variations of the attenuation field.
  • certain reconstruction techniques may benefit from a filtering step designed to increase the amount of information about high spatial frequencies. Without the filtering step, the information density will be much higher at low frequencies, and the reconstruction will yield a blurring from the low frequency components.
  • the filtering step may be implemented as a multiplication/weighting of the data points in the 2D Fourier transform plane.
  • This multiplication with a filter W_b in the Fourier domain may alternatively be implemented as a convolution by a filter w_b(s) in the spatial domain, i.e. with respect to the s variable, using the inverse Fourier transform of the weighting function.
  • the multiplication/weighting function in the 2D Fourier transform plane is rotationally symmetric.
  • Tomographic processing is generally based on standard geometries. This means that the mathematical algorithms presume a specific geometric arrangement of the detection lines in order to attain a desired precision and/or processing efficiency.
  • the geometric arrangement may be selected to enable a definition of the projection values in a 2D sample space, e.g. to enable the above-mentioned filtering in one of the dimensions of the sample space before the back projection, as will be further explained below.
  • here, the "measurement system" refers to the location of the incoupling points and/or outcoupling points.
  • one example is the parallel geometry, which is a standard geometry widely used in conventional tomography, e.g. in the medical field.
  • the parallel geometry is exemplified in Fig. 5.
  • the system measures projection values of a set of detection lines for a given angle φ_k.
  • the set of detection lines D are indicated by dashed arrows, and the resulting projection is represented by the function g(φ_k, s).
  • the measurement system is then rotated slightly around the origin of the x,y coordinate system in Fig. 5, to collect projection values for a new set of detection lines at this new projection angle. As shown by the dashed arrows, all detection lines are parallel to each other for each projection angle φ_k.
  • the system generally measures projection values (line integrals) for angles spanning the range 0 ≤ φ < π.
  • Fig. 6A illustrates the detection lines for six different projection angles in a measurement coordinate system.
  • Existing tomographic reconstruction techniques often make use of the Projection-Slice Theorem, either directly or indirectly, and typically require a uniform sampling of information.
  • the uniformity of sampling may be assessed in a sample space, which is defined by dimensions that uniquely identify each detection line. There are a number of different ways to distinctly define a line; all of them will require two parameters.
  • the two-dimensional sample space is typically defined by the angle parameter φ and the distance parameter s, and the projection values are represented by g(φ, s).
  • each detection line defines a sampling point
  • Fig. 6B illustrates the locations of these sampling points in the sample space.
  • the sampling points are positioned in a regular grid pattern. It should be noted that a true tomographic system typically uses many more projection angles and a denser set of detection lines D.
  • Fig. 7C illustrates the sinogram formed by all projections collected from the attenuation field, where the different projections are arranged as vertical sequences of values.
  • the projection shown in Fig. 7B is marked as a dashed line in Fig. 7C.
  • since the filtering step is a convolution, it may be computationally more efficient to perform the filtering step in the Fourier domain. For each column of values in the φ-s-plane, a discrete 1D Fast Fourier transform is computed. Then, the thus-transformed values are multiplied by the 1D Fourier transform of the filter kernel. The filtered sinogram v is then obtained by taking the inverse Fourier transform of the result.
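  • A minimal sketch of this Fourier-domain filtering (added for illustration), assuming a uniformly sampled sinogram g of shape (number of s values, number of angles) and a simple ramp-style kernel; the kernel choice is an assumption, not mandated by the text:

    import numpy as np

    def filter_sinogram(g):
        # Filter every column (one column = one projection angle) along the
        # s dimension by multiplication in the Fourier domain.
        n_s = g.shape[0]
        n_fft = 2 * n_s                        # zero-pad to reduce wrap-around
        ramp = np.abs(np.fft.fftfreq(n_fft))   # |frequency| (Ram-Lak) weighting
        G = np.fft.fft(g, n=n_fft, axis=0)     # discrete 1D FFT of each column
        v = np.fft.ifft(G * ramp[:, None], axis=0).real
        return v[:n_s, :]                      # filtered sinogram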
  • Fig. 7E represents the filtered sinogram that is obtained by operating the filter kernel in Fig. 7D on the sinogram in Fig. 7C.
  • Fig. 7E shows the absolute values of the filtered sinogram, with zero being set to white and the magnitude of the filtered values being represented by the amount of black.
  • the next step is to apply the back projection operator R#.
  • Fundamental to the back projection operator is that a single position (x, y) in the attenuation field is represented by a sine curve, s = x·cos φ + y·sin φ, in the sinogram.
  • (x_i, y_i) is a point in the attenuation field (i.e. a location on the touch surface 1).
  • Fig. 7E shows three sine curves (indicated by superimposed thick black lines) that correspond to three different positions in the attenuation field of Fig. 7A.
  • the time complexity of the reconstruction process is O(n³), where n may indicate the number of incoupling and outcoupling points on one side of the touch surface, or the number of rows/columns of reconstruction points (see below).
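  • An illustrative back projection loop for a parallel geometry is sketched below (an added example under simplifying assumptions, not the claimed method as such): for every reconstruction point (x, y) and every angle, the filtered sinogram is sampled along the sine curve s = x·cos φ + y·sin φ, which is where the O(n³) behaviour comes from:

    import numpy as np

    def back_project(v, phis, s_axis, xs, ys):
        # v: filtered sinogram of shape (len(s_axis), len(phis)).
        # Returns the reconstructed field on the grid ys x xs.
        field = np.zeros((len(ys), len(xs)))
        ds = s_axis[1] - s_axis[0]
        for j, phi in enumerate(phis):
            # s-coordinate of every reconstruction point for this angle
            s = np.add.outer(np.sin(phi) * np.asarray(ys),
                             np.cos(phi) * np.asarray(xs))
            idx = np.clip(np.round((s - s_axis[0]) / ds).astype(int),
                          0, len(s_axis) - 1)
            field += v[idx, j]                 # nearest-neighbour lookup in s
        return field * np.pi / len(phis)       # scale by the angular step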
  • An alternative approach is to compute the filtered values at the crossing points by applying individual filtering kernels.
  • the time complexity of such a reconstruction process is O(n⁴).
  • Fig. 7G shows the reconstructed attenuation field that is obtained by applying the back projection operator on the filtered sinogram in Fig. 7E. It should be noted that the filtering step may be important for the reconstruction to yield useful data.
  • Fig. 7H shows the reconstructed attenuation field that is obtained when the filtering step is omitted.
  • the standard techniques for tomographic processing as described above presume a regular arrangement of the sampling points in the φ-s-plane, e.g. as exemplified in Fig. 6B.
  • the detection lines typically form an irregular pattern on the touch surface, such as exemplified in Fig. 8A.
  • the corresponding arrangement of sampling points (marked by x) in the sample space is also highly irregular, as illustrated in Fig. 8B.
  • Two nearby sampling points in the sample space correspond to two detection lines that are close to each other on the touch surface and/or that have a small difference in projection angle (and may thus be overlapping, with a small mutual angle, in some part of the touch surface).
  • Fig. 9 illustrates the sampling points in the φ-s-plane for the interleaved system shown in Fig. 2.
  • the solid lines indicate the physical limits of the touch surface.
  • the angle φ actually spans the range from 0 to 2π, since the incoupling and outcoupling points extend around the entire perimeter.
  • a detection line is the same when rotated by π, and the projection values may thus be rearranged to fall within the range of 0 to π.
  • this rearrangement is optional.
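  • The rearrangement uses the fact that the detection line with parameters (φ, s), φ ≥ π, is identical to the line (φ − π, −s); a small sketch (illustration only):

    import numpy as np

    def fold_to_half_range(phi, s):
        # Map detection-line parameters with 0 <= phi < 2*pi onto equivalent
        # parameters with 0 <= phi < pi (the sign of s flips for phi >= pi).
        phi = np.asarray(phi, dtype=float).copy()
        s = np.asarray(s, dtype=float).copy()
        wrap = phi >= np.pi
        phi[wrap] -= np.pi
        s[wrap] = -s[wrap]
        return phi, s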
  • the inventors have realized that the standard techniques for tomographic processing cannot be used to reconstruct the attenuation field a(x) on the touch surface, at least not with adequate precision, due to the irregular sampling.
  • the invention relates to ways of re-designing tomographic techniques so as to accommodate irregular sampling, viz. such that the tomographic techniques use the same amount of information from all relevant parts of the sample space.
  • this is achieved by introducing an adjustment factor, p_k, which represents the local density of sampling points in the sample space.
  • Fig. 10A illustrates an embodiment of a method for reconstruction and touch data extraction in a touch-sensitive apparatus, such as the above-described FTIR system.
  • the method involves a sequence of steps 22-26 that are repeatedly executed, typically by the signal processor 10 (Figs 1 and 3).
  • each sequence of steps 22-26 is denoted a sensing instance.
  • in step 20, the signal processor obtains adjustment factors, and possibly other processing parameters (coefficients), to be used in the tomographic reconstruction.
  • the adjustment factors are pre-computed and stored on an electronic memory, and the signal processor retrieves the pre-computed adjustment factors from the memory.
  • Each adjustment factor is computed to be representative of the local density of data samples in the sample space for a respective sampling point. This means that each detection line is associated with one or more adjustment factors.
  • alternatively, the signal processor obtains the adjustment factors by intermittently re-computing or updating the adjustment factors, or a subset thereof, during execution of the method, e.g. every n:th sensing instance.
  • the computation of adjustment factors will be further exemplified in Chapter 6.
  • Each sensing instance starts by a data collection step 22, in which measurement values are sampled from the light sensors 3 in the FTIR system, typically by sampling a value from each of the aforesaid sub-signals.
  • the data collection step 22 results in one projection value for each detection line (sampling point). It may be noted that the data may, but need not, be collected for all available detection lines in the FTIR system.
  • the data collection step 22 may also include pre-processing of the measurement values, e.g. filtering for noise reduction, conversion of measurement values into transmission values (or equivalently, attenuation values), conversion into logarithmic values, etc.
  • an "attenuation field" across the touch surface is reconstructed by processing of the projection data from the data collection step 22.
  • the attenuation field is a distribution of attenuation values across the touch surface (or a relevant part of the touch surface), i.e. an energy-related parameter.
  • the attenuation field and “attenuation values” may be given in terms of an absolute measure, such as light energy, or a relative measure, such as relative attenuation (e.g. the above-mentioned attenuation coefficient) or relative transmission.
  • the reconstruction step operates a tomographic reconstruction algorithm on the projection data, where the tomographic reconstruction algorithm is designed to apply the adjustment factors to at least partly compensate for variations in the local density of sampling points in the sample space.
  • the tomographic processing may be based on any known algorithm for tomographic reconstruction.
  • the tomographic processing will be further exemplified in Chapter 7 with respect to algorithms for Back Projection, algorithms based on Fourier transformation and algorithms based on Hough transformation.
  • the attenuation field may be reconstructed within one or more subareas of the touch surface.
  • the subareas may be identified by analyzing intersections of detection lines across the touch surface, based on the above-mentioned projection signals. Such a technique for identifying subareas is further disclosed in WO2011/049513 which is incorporated herein by this reference.
  • the reconstructed attenuation field is processed for identification of touch-related features and extraction of touch data.
  • Any known technique may be used for isolating true (actual) touch points within the attenuation field.
  • ordinary blob detection and tracking techniques may be used for finding the actual touch points.
  • a threshold is first applied to the attenuation field, to remove noise. Any areas with attenuation values that exceed the threshold, may be further processed to find the center and shape by fitting for instance a two-dimensional second-order polynomial or a Gaussian bell shape to the attenuation values, or by finding the ellipse of inertia of the attenuation values.
  • Any available touch data may be extracted, including but not limited to x,y coordinates, areas, shapes and/or pressure of the touch points.
  • In step 26, the extracted touch data is output, and the process returns to the data collection step 22.
  • one or more of steps 20-26 may be effected concurrently; for example, the data collection step 22 of a subsequent sensing instance may be initiated concurrently with step 24 or 26.
  • the touch data extraction process is typically executed by a data processing device (cf. signal processor 10 in Figs 1 and 3) which is connected to sample the measurement values from the light sensors 3 in the FTIR system.
  • Fig. 10B shows an example of such a data processing device 10 for executing the process in Fig. 10A.
  • the device 10 includes an input 200 for receiving the output signal.
  • the device 10 further includes a parameter retrieval element (or means) 202 for retrieving the adjustment factors (or, depending on implementation, for computing them),
  • a data collection element (or means) 204 for processing the output signal to generate the above-mentioned set of projection values
  • a reconstruction element (or means) 206 for generating the reconstructed attenuation field by tomographic processing
  • an output 210 for outputting the reconstructed attenuation field.
  • the actual extraction of touch data is carried out by a separate device 10' which is connected to receive the attenuation field from the data processing device 10.
  • the data processing device 10 may be implemented by special-purpose software (or firmware) run on one or more general-purpose or special-purpose computing devices.
  • each "element” or “means” of such a computing device refers to a conceptual equivalent of a method step; there is not always a one-to-one correspondence between elements/means and particular pieces of hardware or software routines.
  • One piece of hardware sometimes comprises different elements/means; for example, a processing unit serves as one element/means when executing one instruction, but serves as another element/means when executing another instruction.
  • one element/means may be implemented by one instruction in some cases, but by a plurality of instructions in some other cases.
  • Such a software controlled computing device may include one or more processing units, e.g. a CPU ("Central Processing Unit"), a DSP ("Digital Signal Processor"), or an ASIC ("Application-Specific Integrated Circuit").
  • the data processing device 10 may further include a system memory and a system bus that couples various system components including the system memory to the processing unit.
  • the system bus may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures.
  • the system memory may include computer storage media in the form of volatile and/or non-volatile memory such as read only memory (ROM), random access memory (RAM) and flash memory.
  • the special-purpose software, and the adjustment factors, may be stored in the system memory, or on other removable/non-removable volatile/non-volatile computer storage media which is included in or accessible to the computing device, such as magnetic media, optical media, flash memory cards, digital tape, solid state RAM, solid state ROM, etc.
  • the data processing device 10 may include one or more communication interfaces, such as a serial interface, a parallel interface, a USB interface, a wireless interface, a network adapter, etc, as well as one or more data acquisition devices, such as an A/D converter.
  • the special-purpose software may be provided to the data processing device 10 on any suitable computer-readable medium, including a record medium, a read-only memory, or an electrical carrier signal.
  • the density of detection lines is a measure of the angular and spatial distribution of detection lines on the touch surface. Recalling that a detection line is equivalent to a sampling point in the sampling space, the density of detection lines may be given by the density of sampling points (cf. Fig. 8B and Fig. 9).
  • the local density of detection lines is used for computing an adjustment factor p k for each individual detection line.
  • the adjustment factor p k may be a constant for each detection line or, in some examples, a function for each detection line.
  • the adjustment factor p_k may be scaled to get appropriate scaling of the reconstructed attenuation field.
  • Use of Voronoi areas, i.e. obtaining the extent of a Voronoi cell for each detection line (sampling point) in the sample space.
  • Use of Delaunay triangles, i.e. obtaining the areas of the Delaunay triangles associated with the detection lines (sampling points) in the sample space.
  • the adjustment factors may be (and typically are) pre-computed and stored for retrieval during touch determination (cf. step 20 in Fig. 10A).
  • the local density for a specific detection line may be determined by finding the number of detection lines that fall within a given distance δ from the specific detection line.
  • the distance is measured in the sample space, i.e. the φ-s-plane.
  • it may be preferable to at least approximately normalize the dimensions (φ, s) of the sample space. For example, if the projection angle spans 0 ≤ φ < π, the distance s may be scaled to fall within the same range.
  • the actual scaling typically depends on the size of the touch system, and may be determined theoretically or by experimentation.
  • Fig. 11A illustrates the determination of local density for two detection lines (sampling points) D1', D2' in the sample space.
  • Fig. 11B illustrates two groups of detection lines on the touch surface that fall within the respective dotted circle in Fig. 11A.
  • the radii of the circles may e.g. be chosen to correspond to the expected height (with respect to the s dimension) of a touch.
  • the adjustment factor is proportional to the inverse of the number of nearby detection lines, i.e. 1/9 and 1/5, respectively. This means that a lower weight will be given to information from several detection lines that represent almost the same information in the attenuation field.
  • in this embodiment, the adjustment factor p_k is computed according to: p_k = 1/n_k, with n_k being the number of detection lines (sampling points) that fall within the distance δ from detection line k in the sample space.
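  • A sketch of this counting-based density estimate (illustration only), assuming the sampling points are given as already-normalized (φ, s) coordinates and using SciPy's KD-tree:

    import numpy as np
    from scipy.spatial import cKDTree

    def adjustment_by_count(phi, s, delta):
        # p_k = 1 / n_k, with n_k the number of sampling points within the
        # distance delta of point k in the normalized phi-s plane.
        pts = np.column_stack([phi, s])
        tree = cKDTree(pts)
        n_k = np.array([len(tree.query_ball_point(p, delta)) for p in pts])
        return 1.0 / n_k          # note: the point itself is included in n_k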
  • the local density for a specific detection line may be determined by computing the distance to the N closest detection lines.
  • Fig. 12 illustrates the determination of local density for two detection lines (sampling points) D1', D2'.
  • the nearest neighbors are marked by stars (*) around the respective detection line D1', D2'.
  • in this embodiment, the adjustment factor p_k is computed according to: p_k = (1/N) · Σ_{j in N_k} d_kj, where N_k is the set of the N detection lines closest to detection line k and d_kj is the distance between sampling points k and j in the sample space.
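  • A corresponding sketch for the N-nearest-neighbour variant (illustration only, again assuming SciPy as a dependency):

    import numpy as np
    from scipy.spatial import cKDTree

    def adjustment_by_neighbour_distance(phi, s, n_neighbours=5):
        # Adjustment factor proportional to the mean distance from each
        # sampling point to its n_neighbours closest neighbours.
        pts = np.column_stack([phi, s])
        tree = cKDTree(pts)
        # k = n_neighbours + 1 because the closest hit is the point itself
        dists, _ = tree.query(pts, k=n_neighbours + 1)
        return dists[:, 1:].mean(axis=1)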
  • the adjustment factors may be computed based on the extent of the Voronoi cell (as measured in the sample space) of the detection line.
  • Voronoi cells are obtained by defining the sampling points in the sample space as Voronoi sites in a Voronoi diagram.
  • a Voronoi diagram is a well-known mathematical technique to decompose a metric space based on distances to a specified discrete set of objects in the space. Specifically, a site in the Voronoi diagram has a Voronoi cell which contains all points that are closer to the site than to any other site.
  • Fig. 13 illustrates the Voronoi diagram for the sample space in Fig. 8B.
  • the detection lines (sampling points) D1', D2' have the areas 0.0434 and 0.1334, respectively.
  • this computation obviates the need to set potentially arbitrary computation parameters, such as the distance δ in the sample space or the number N and the definition of neighboring sampling points.
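  • Voronoi cell areas can be obtained with SciPy as sketched below (illustration only); unbounded border cells are simply skipped here, since this passage does not say how they are handled:

    import numpy as np
    from scipy.spatial import Voronoi, ConvexHull

    def voronoi_cell_areas(phi, s):
        # Area of the Voronoi cell of each sampling point in the phi-s plane.
        # Cells that extend to infinity are left as NaN in this sketch.
        pts = np.column_stack([phi, s])
        vor = Voronoi(pts)
        areas = np.full(len(pts), np.nan)
        for k, region_index in enumerate(vor.point_region):
            region = vor.regions[region_index]
            if len(region) == 0 or -1 in region:
                continue                              # unbounded cell
            areas[k] = ConvexHull(vor.vertices[region]).volume  # 2D: area
        return areas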
  • the adjustment factors may be computed based on the extent of the Delaunay triangles (in the sample space) for the detection line.
  • the Delaunay triangles are obtained by defining the sampling points as corners of a mesh of non-overlapping triangles and computing the triangles using the well-known Delaunay algorithm.
  • the dimensions of the sample space (φ, s) may be rescaled to essentially the same length before applying the Delaunay triangulation algorithm.
  • Fig. 14 illustrates the Delaunay triangulation for the sample space in Fig. 8B.
  • the adjustment factor for a detection line is given by the total extent of all triangles that include the detection line.
  • the detection lines (sampling points) D1', D2' have the areas 0.16 and 0.32, respectively.
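  • A sketch (illustration only) that sums, for each sampling point, the areas of all Delaunay triangles having that point as a corner; any rescaling of the φ and s axes is assumed to have been done beforehand:

    import numpy as np
    from scipy.spatial import Delaunay

    def delaunay_areas(phi, s):
        # Total area of the Delaunay triangles touching each sampling point.
        pts = np.column_stack([phi, s])
        tri = Delaunay(pts)
        areas = np.zeros(len(pts))
        for simplex in tri.simplices:         # each simplex holds 3 point indices
            a, b, c = pts[simplex]
            area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1])
                             - (b[1] - a[1]) * (c[0] - a[0]))
            areas[simplex] += area            # credit the triangle to its corners
        return areas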
  • Fig. 15 shows two basis functions defined based on Delaunay triangles (cf. Fig. 14) for two detection lines (sampling points) D1', D2'.
  • the basis functions suitably have the same height (strength) at the sampling points, e.g. unity. Since the basis function is given by Delaunay triangles, the base of each basis function is automatically adjusted to the local density of sampling points.
  • the adjustment factors for each detection line may be computed as the interaction between the interpolating basis function and the back projection operator.
  • the basis functions are defined to be linearly interpolating, but other interpolations can be achieved, such as nearest neighbor interpolation, second order interpolation or higher, continuously differentiable interpolation, etc. It is also conceivable to define the basis functions based on the Voronoi cells of the sampling points (cf. Fig. 15), which will yield a zero order interpolation, i.e. a top hat function. As will be explained below, the use of basis functions often results in the adjustment factor for each detection line being a function instead of a single value.
  • a first order interpolating Delaunay triangle basis function for detection line (sampling point) k is denoted b_k in the following.
  • a reconstruction point in the attenuation field corresponds to a reconstruction line (curve) in the sample space.
  • the reconstruction line is a sine curve for the back projection operator. This is further illustrated in Figs 16A-16B, where the left-hand part indicates two different reconstruction points on the touch surface and the right-hand part illustrates the corresponding reconstruction lines Γ1, Γ2 in the sample space.
  • the adjustment factors for a given detection line (sampling point) with respect to a reconstruction line may be computed by evaluating the line integral for the reconstruction line running through the basis function.
  • Fig. 17A illustrates the interaction between the reconstruction line Γ2 and the basis function for detection line (sampling point) D2'.
  • Fig. 17B is a graph of the values of the basis function along the reconstruction line Γ2 with respect to the φ dimension.
  • the line integral, evaluated with respect to the φ dimension, is 0.135 in this example. It can be noted that the top value in Fig. 17B does not reach a value of unity, since the reconstruction line passes at a distance from the sampling point D2'.
  • the line integral may, e.g., be evaluated with respect to the length of the reconstruction line Γ2, i.e. in both dimensions s, φ.
  • Fig. 17C illustrates the interaction between the reconstruction line Γ2 and the basis function for another detection line (sampling point) D3'.
  • Fig. 17D is a graph of the values of the basis function along the reconstruction line Γ2 with respect to the φ dimension.
  • the line integral, evaluated with respect to the φ dimension, is 0.095 in this example. It should be noted that the line integral is smaller for sampling point D3' than for sampling point D2', even though the reconstruction line Γ2 lies closer to sampling point D3'.
  • Detection lines in a high density area will yield a basis function with a smaller base than detection lines in sparse areas.
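A hedged sketch of this line-integral computation is given below. It assumes the standard parallel-geometry relation in which the reconstruction point (x, y) corresponds to the curve s(φ) = x·cos φ + y·sin φ in the sample space, builds the first-order basis function of sampling point k by linearly interpolating a one-hot value vector over the Delaunay triangulation (scipy's LinearNDInterpolator), and integrates the basis function along the curve with respect to φ. The symbol names, the φ range and the numerical integration are illustrative choices, not the patent's exact formulation.

import numpy as np
from scipy.interpolate import LinearNDInterpolator

def line_integral_adjustment(sample_pts, k, x, y, n_phi=2000):
    # First-order (Delaunay) basis function of sampling point k: unit height at
    # the point, linearly decaying to zero at the neighboring sampling points.
    one_hot = np.zeros(len(sample_pts))
    one_hot[k] = 1.0
    basis = LinearNDInterpolator(sample_pts, one_hot, fill_value=0.0)
    # Reconstruction curve of point (x, y) in the (phi, s) sample space.
    phi = np.linspace(0.0, np.pi, n_phi)
    s_curve = x * np.cos(phi) + y * np.sin(phi)
    values = basis(np.column_stack([phi, s_curve]))
    return np.trapz(values, phi)  # line integral with respect to the phi dimension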
  • the adjustment factor is denoted p_k^Δ1 and is computed as the line integral of the first-order Delaunay basis function along the reconstruction line, e.g. with respect to the φ dimension: p_k^Δ1 = ∫ b_k^Δ1(φ, s(φ)) dφ, with s(φ) denoting the reconstruction line.
  • the basis functions are defined based on the Voronoi cells of the sampling points.
  • the adjustment factor is denoted p_k^V0 and is computed as the corresponding line integral, with b_k^V0 being the zero-order interpolating Voronoi basis function.
  • a further technique calculates an adjustment factor for a sampling point to be processed for tomographic reconstruction using back projection and filtering.
  • the computation is then not based on evaluating line integrals for the interaction between the basis function and the reconstruction line. Instead, a two-dimensional (surface) integral is evaluated for this interaction.
  • the basis function is given by Delaunay triangles and is defined to be linearly interpolating. It is to be understood that other types of basis functions may be used e.g. to achieve other interpolations, such as nearest neighbor interpolation, second order interpolation or higher, continuously differentiable interpolation, etc.
  • Fig. 18A illustrates the interaction between a reconstruction line Γ1 and the basis function for detection line (sampling point) D2'.
  • Fig. 18B illustrates a known one-dimensional filter w_b(Δs) for use in the filtering step prior to a back projection operation.
  • the 1D filter is defined in terms of the distance Δs from the reconstruction line with respect to the s dimension.
  • the 1D filter may be defined as a continuous function of the distance Δs.
  • Fig. 18C illustrates the sample space with the basis function for sampling point D2' together with the reconstruction line Γ1 of Fig. 18A, where the reconstruction line is associated with a ridge.
  • the ridge includes the 1D filter of Fig. 18B, which has been reproduced to extend in the s dimension at plural locations along the reconstruction line Γ1.
  • the 2D function w(φ, s) thus formed also defines negative valleys on both sides of the ridge.
  • Fig. 18D illustrates how the 2D function, which includes the 1D filter, combines with the basis function for sampling point D2', i.e. the product of the 2D function and the basis function: w(φ, s) · b_k^Δ1.
  • some parts of the sampling point D2', via the basis function, will contribute positively and some parts negatively.
  • the total adjustment factor for the contribution of the sampling point D2' to the reconstruction line Γ1 in the back projection operation is given by the integral (sum) of the above result in Fig. 18D, which for D2' (i.e. for this particular sampling point and reconstruction line) turns out to be zero.
  • the adjustment factor is denoted p_k^Δf1 and is computed as the surface integral of this product, evaluated for the reconstruction line at hand: p_k^Δf1 = ∫∫ w(φ, s) · b_k^Δ1(φ, s) dφ ds.
  • this equation can be understood to reflect the notion that the influence of each detection line (sampling point) is not limited to the sampling point itself but extends in the sample space via the extent of the basis function.
  • a single detection line will thus contribute to the values of the ⁇ -s-plane in a region around the actual detection line in the sample space, the contribution being zero far away and having support, i.e. being greater than zero, only in a local neighborhood of the detection line.
  • Higher density of detection lines (sampling points) in the sample space yields smaller support and lower density gives larger support.
  • if a single adjustment factor p_k^Δf1 is to be computed, all but the k:th detection line can be set to zero before the above integral (sum) is computed.
  • the integration (summation) is done in both dimensions φ, s.
  • the adjustment factors account for variations in the local density of detection lines.
  • the use of a surface integral results in a different adjustment factor, which may be more suitable for certain implementations of the touch system.
  • the choice of technique for calculating the adjustment factors is a tradeoff between computational complexity and precision of the reconstructed attenuation field, and any of the adjustment factors presented herein may find its use depending on the circumstances.
  • the basis functions are instead defined based on the Voronoi cells of the sampling points.
  • the adjustment factor is then denoted p_k^Vf0 and is computed as the corresponding surface integral using the zero-order interpolating Voronoi basis function b_k^V0.
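The sketch below illustrates this surface-integral variant under the same assumptions as before: a 1D filter is reproduced in the s dimension along the reconstruction curve to form a 2D function over (φ, s), that function is multiplied with the basis function of sampling point k, and the product is integrated over both dimensions. The band-limited ramp kernel used here is only a stand-in for the filter of Fig. 18B, and the grid resolution and integration limits are arbitrary.

import numpy as np
from scipy.interpolate import LinearNDInterpolator

def ramp_kernel(ds, d=0.05):
    # Illustrative band-limited ramp: central ridge with negative side valleys.
    return (1.0 / (2 * d**2)) * np.sinc(ds / d) - (1.0 / (4 * d**2)) * np.sinc(ds / (2 * d))**2

def surface_integral_adjustment(sample_pts, k, x, y, n_phi=400, n_s=400, s_max=1.0):
    one_hot = np.zeros(len(sample_pts))
    one_hot[k] = 1.0
    basis = LinearNDInterpolator(sample_pts, one_hot, fill_value=0.0)
    phi = np.linspace(0.0, np.pi, n_phi)
    s = np.linspace(-s_max, s_max, n_s)
    P, S = np.meshgrid(phi, s, indexing="ij")
    # 1D filter reproduced in the s dimension along the reconstruction curve of (x, y)
    ridge = ramp_kernel(S - (x * np.cos(P) + y * np.sin(P)))
    b = basis(np.column_stack([P.ravel(), S.ravel()])).reshape(P.shape)
    cell = (phi[1] - phi[0]) * (s[1] - s[0])
    return np.sum(ridge * b) * cell  # surface integral over both dimensions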
  • the reconstruction step involves evaluating a reconstruction function F(p_k, g(φ_k, s_k)), where p_k is the adjustment factor (function or constant) for each data sample and g(φ_k, s_k) is the value of each data sample.
  • the data samples are given by the measured projection values for the detection lines of the touch- sensitive apparatus.
  • the data samples may also include synthetic projection values which are generated from the projection values, e.g. by interpolation, to supplement or replace the measured projection values.
  • the application of adjustment factors will be discussed, and reference will be made to the different variants of adjustment factors discussed in sections 6.1-6.5.
  • the processing efficiency of the embodiments will be compared using Landau notation as a function of n, with n being the number of incoupling and outcoupling points on one side of the touch surface.
  • a reconstructed attenuation field containing n² reconstruction points will be presented.
  • the reconstructed attenuation field is calculated based on projection values obtained for the reference image in Fig. 19.
  • the reference image is thus formed by five touch objects 7 of different size and attenuation strength that are distributed on the touch surface 1.
  • Fig. 19 also shows the emitters 2 and sensors 3 in relation to the reference image. The distribution of sampling points in the φ-s-plane for this system is given in Fig. 9.
  • the filtering process involves applying a two-dimensional sharpening filter. If the two-dimensional sharpening filter is applied in the spatial domain, the time complexity of the filtering step is O(n⁴). If the filtering is done in the Fourier domain, the time complexity may be reduced.
  • the unfiltered back projection involves evaluating reconstruction lines in the sample space, using adjustment factors computed by means of interpolating basis functions, as described above in section 6.5. As mentioned in that section, use of interpolating basis functions results in a correction for the local density of sampling points.
  • the reconstruction function F(p_k, g(φ_k, s_k)) is given by a first sub-function that performs the back projection at desired reconstruction points i in the attenuation field, e.g. B_i = Σ_k p_k,i · g(φ_k, s_k), and a second sub-function that applies the 2D sharpening filter on the back-projected field.
  • the adjustment factor p_k may be any adjustment factor calculated based on an interpolating basis function, such as p_k^Δ1 or p_k^V0. It can also be noted that since several adjustment factors p_k,i are zero, the sum needs only be computed for a relevant subset of the sampling points, namely over all k where p_k,i > TH, where TH is a threshold value, e.g. 0.
  • the time complexity of the back projection operator is O(n³), assuming that there are O(n) non-zero adjustment factors for each reconstruction point.
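As a rough sketch of these two sub-functions (not the patent's exact formulation): if the adjustment factors p_k,i are pre-computed and stored as a sparse matrix with one row per reconstruction point, the back projection is a sparse matrix-vector product, and a simple spatial-domain sharpening kernel can stand in for the 2D filter. The kernel choice and array shapes below are assumptions.

import numpy as np
from scipy.ndimage import convolve
from scipy.sparse import csr_matrix

def back_project(P_sparse, g, ny, nx):
    # First sub-function: unfiltered back projection.
    # P_sparse: (ny*nx, K) sparse matrix of adjustment factors p_{k,i};
    # g: vector of K projection values.
    return (P_sparse @ g).reshape(ny, nx)

def sharpen(field):
    # Second sub-function: illustrative 2D sharpening filter in the spatial domain.
    kernel = np.array([[0.0, -1.0, 0.0],
                       [-1.0, 5.0, -1.0],
                       [0.0, -1.0, 0.0]])
    return convolve(field, kernel, mode="nearest")

# Usage sketch: field = sharpen(back_project(csr_matrix(P_dense), g, ny, nx))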
  • a reconstruction line in the sample space (cf. Fig. 16) is evaluated by computing a contribution value of each sampling point to the reconstruction line and summing the contribution values.
  • the contribution value for a sampling point is given by the product of its projection value, its adjustment factor and a filter value for the sampling point.
  • the time complexity of the reconstruction function is O(n⁴).
  • g(φ_k, s_k) is the projection value of detection line k, and (φ_k, s_k) is the position of detection line k in the sample space.
  • w_b(Δs) is the 1D filter given as a function of the distance Δs to the reconstruction line in the s dimension.
  • the operation of the reconstruction function is illustrated in Fig. 20 for four sampling points D1', D2', D4', D5' in the φ-s-plane.
  • Fig. 20 also indicates the extent of Voronoi cells for all sampling points.
  • the distance Δs for each sampling point is computed as the distance in the vertical direction from the sampling point to the reconstruction line Γ2.
  • the adjustment factor p_k may be any adjustment factor that directly reflects the separation of sampling points in the sample space, such as p_k^A or any of the other separation-based factors described in sections 6.1-6.4.
  • many different 1D filters w_b(Δs) may be used.
  • the 1D filter may be defined as a continuous function of the distance Δs.
  • Figs 21A and 21B illustrate 1D filters presented in the aforesaid books by Kak & Slaney and Natterer, respectively.
  • the bandwidth of these filters is preferably adapted to the signal bandwidth when the reconstruction function is evaluated.
  • Fig. 22A illustrates the reconstructed attenuation field that is obtained by applying the reconstruction function, using p_k^A, to the projection values obtained for the attenuation field in Fig. 19, and Fig. 22B illustrates a corresponding attenuation field obtained if the adjustment factors are omitted from the reconstruction function.
  • the adjustment factors greatly improve the quality of the reconstruction.
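This first embodiment can be sketched as follows for a single reconstruction point (x, y): the contribution of sampling point k is its projection value times its adjustment factor times a filter value w_b(Δs_k), where Δs_k is the s-direction distance from the sampling point to the reconstruction curve. The ramp kernel below is only an illustrative choice of w_b, and all array names are assumptions.

import numpy as np

def wb(ds, d=0.05):
    # Illustrative spatial-domain band-limited ramp filter w_b(ds).
    return (1.0 / (2 * d**2)) * np.sinc(ds / d) - (1.0 / (4 * d**2)) * np.sinc(ds / (2 * d))**2

def reconstruct_point(x, y, phi_k, s_k, g_k, p_k):
    # Sum of contributions g_k * p_k * w_b(ds_k) over all sampling points (cf. Fig. 20).
    ds = s_k - (x * np.cos(phi_k) + y * np.sin(phi_k))
    return np.sum(g_k * p_k * wb(ds))

def reconstruct_field(xs, ys, phi_k, s_k, g_k, p_k):
    # Straightforward O(n^4)-style evaluation on a grid of reconstruction points.
    return np.array([[reconstruct_point(x, y, phi_k, s_k, g_k, p_k) for x in xs] for y in ys])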
  • a reconstruction line in the sample space may also be evaluated by extending the influence of the sampling points by the use of interpolating basis functions and by including the 1D filter in the reconstruction line.
  • the adjustment factor p_k may be any adjustment factor originating from a surface integral through interpolating basis functions in the sample space, such as p_k^Δf1 or p_k^Vf0.
  • the time complexity of the reconstruction function is O(n⁴). It can be noted that the time for executing the reconstruction (cf. step 24 in Fig. 10A) is largely independent of the selected interpolating basis function and 1D filter, given that adjustment factors are typically pre-computed and stored in memory. The time spent for this pre-computation may however be different for different basis functions.
  • Fig. 23A illustrates an exemplifying definition of Delaunay triangles for the sample space in Fig. 9. Based on this definition and the 1D filter in Fig. 21B, adjustment factors p_k^Δf1 have been computed and applied in the reconstruction function.
  • Fig. 23B illustrates the resulting attenuation field when the reconstruction function is operated on the projection values obtained for the attenuation field in Fig. 19. Clearly, the adjustment factors provide good reconstruction quality.
  • the filtering step is performed locally around each individual sampling point in the sample space using the 1D filter.
  • the filtering is operated on synthetic projection values at synthetic sampling points which are generated from the projection values of the sampling points, e.g. by interpolation.
  • the synthetic projection values are estimated signal values that are generated around each projection value at given locations in the s dimension.
  • Fig. 24A illustrates a few actual sampling points (stars) and associated synthetic sampling points (circles) in a subset of the φ-s-plane of Fig. 9.
  • Fig. 24A also illustrates Delaunay triangles (dotted lines) that may be used for interpolation of the synthetic sampling points.
  • sampling points are thus placed at the corners of a mesh of non-overlapping triangles, and the values of the synthetic sampling points are e.g. linearly interpolated in the triangles. This interpolation, and variants thereof, will be described in further detail below.
  • the third embodiment evaluates a reconstruction line in the sample space by computing a line integral through interpolating basis functions arranged at the actual sampling points.
  • the reconstruction function F(p_k, g(φ_k, s_k)) is given by a first sub-function that creates 2M synthetic sampling points g(φ_k,m, s_k,m) with respect to the s dimension around each actual sampling point, a second sub-function that applies a discrete 1D filter (in the s dimension) on the collection of sampling points (actual and synthetic) to calculate a filtered value for each actual sampling point, and a third sub-function that back-projects the filtered values at the reconstruction points.
  • the adjustment factor p_k may be any adjustment factor originating from a line integral through interpolating basis functions in the sample space, such as p_k^Δ1 or p_k^V0. It can also be noted that since several adjustment factors p_k,i are zero, the sum needs only be computed for a relevant subset of the sampling points, namely over all k where p_k,i > TH, where TH is a threshold value, e.g. 0.
  • Any suitable 1D filter may be used, e.g. the one shown in Fig. 7D. However, it may be advantageous to adapt the bandwidth of the filter to each actual sampling point.
  • Fig. 24B illustrates the resulting attenuation field when the reconstruction function is operated on the projection values obtained for the attenuation field in Fig. 19, using adjustment factors p_k^Δ1 and the 1D filter in Fig. 21B.
  • the adjustment factors provide good reconstruction quality.
  • the time complexity of the reconstruction function is O(n³). This is based on the fact that the number of sampling points is O(n²), that the first sub-function computes O(M·n²) synthetic projection values, with M being O(n), that the second sub-function accesses each synthetic sampling point once, and that the third sub-function accesses O(n) filtered values to generate each of O(n²) reconstruction points, giving a total time complexity of O(n³).
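The filtering part of the third embodiment can be sketched as below: for each actual sampling point, 2M synthetic values are generated at equispaced offsets in the s dimension by linear interpolation over the Delaunay triangulation of the actual points, and a discrete 1D filter is applied across this local stencil to produce one filtered value per actual sampling point. The stencil spacing, the value of M and the discretized ramp taps are assumptions; the subsequent back projection of the filtered values is not shown.

import numpy as np
from scipy.interpolate import LinearNDInterpolator

def wb(ds, d=0.05):
    # Illustrative band-limited ramp kernel, as in the earlier sketches.
    return (1.0 / (2 * d**2)) * np.sinc(ds / d) - (1.0 / (4 * d**2)) * np.sinc(ds / (2 * d))**2

def filter_with_synthetic_points(sample_pts, g, M=16, ds=0.01):
    interp = LinearNDInterpolator(sample_pts, g, fill_value=0.0)
    offsets = ds * np.arange(-M, M + 1)        # 2M synthetic offsets plus the point itself
    taps = wb(offsets) * ds                    # discretized 1D filter taps
    g_filtered = np.empty(len(sample_pts))
    for k, (phi, s) in enumerate(sample_pts):
        stencil = np.column_stack([np.full(offsets.size, phi), s + offsets])
        g_filtered[k] = np.dot(taps, interp(stencil))   # local filtering in the s dimension
    return g_filtered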
  • the generation of synthetic projection values may be achieved by interpolating the original sampling points.
  • the objective of the interpolation is to find an interpolation function that can produce interpolated values at specific synthetic sampling points in the sample space given a set of measured projection values at the actual sampling points.
  • Many different interpolating functions can be used for this purpose, i.e. to interpolate data points on a two-dimensional grid. Input to such an interpolation function is the actual sampling points in the sample space as well as the measured projection value for each actual sampling point.
  • Most interpolating functions involve a linear operation on the measured projection values. The coefficients in the linear operation are given by the known locations of the actual sampling points and the synthetic sampling points in the sample space. The linear operator may be pre-computed and then applied on the measured projection values in each sensing instance.
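One common way to realize such a pre-computed linear operator is as a sparse matrix whose rows hold the barycentric interpolation weights of each synthetic sampling point with respect to the corners of its enclosing Delaunay triangle; applying it to a new vector of measured projection values is then a single sparse matrix-vector product per sensing instance. The implementation below is a sketch of that idea, not the patent's own formulation.

import numpy as np
from scipy.spatial import Delaunay
from scipy.sparse import csr_matrix

def interpolation_operator(sample_pts, query_pts):
    # Sparse matrix L such that L @ g linearly interpolates projection values g
    # (given at sample_pts) onto query_pts (the synthetic sampling points).
    tri = Delaunay(sample_pts)
    enclosing = tri.find_simplex(query_pts)
    rows, cols, vals = [], [], []
    for i, s in enumerate(enclosing):
        if s < 0:                        # outside the convex hull -> interpolated value 0
            continue
        verts = tri.simplices[s]         # indices of the 3 corner sampling points
        T = tri.transform[s]             # affine map to barycentric coordinates
        b = T[:2].dot(query_pts[i] - T[2])
        bary = np.append(b, 1.0 - b.sum())
        rows.extend([i] * 3); cols.extend(verts); vals.extend(bary)
    return csr_matrix((vals, (rows, cols)), shape=(len(query_pts), len(sample_pts)))

# L = interpolation_operator(actual_pts, synthetic_pts)   # pre-computed once
# g_synthetic = L @ g                                      # applied per sensing instance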
  • examples of interpolation functions include Delaunay triangulation and other types of interpolation using triangle grids, bicubic interpolation, e.g. using spline curves or Bezier surfaces, Sinc/Lanczos filtering, nearest-neighbor interpolation, and weighted average interpolation.
  • suitable interpolation functions are further disclosed in Applicant's International application No. PCT/SE2011/050520, which was filed on April 28, 2011 and which is incorporated herein by this reference.
  • the filtering step is performed locally around each individual sampling point in the sample space using a 1D filter.
  • the filtering is not operated on synthetic projection values, but on the projection values of adjacent actual sampling points that are forced into the ID filter.
  • the projection values of adjacent sampling points thus form estimated signal values around each projection value in the s dimension.
  • the 1D filter is a (−1, 2, −1) kernel which is operated with respect to the s dimension.
  • the fourth embodiment evaluates a reconstruction line in the sample space by computing a line integral through interpolating basis functions arranged at the actual sampling points.
  • Fig. 25A illustrates all sampling points in the φ-s-plane for the touch-sensitive apparatus in Fig. 2.
  • selected sampling points are indicated by circles, and their adjacent sampling points in the s dimension are indicated by crosses.
  • the filtered value for each selected sampling point is calculated, using the above filter kernel, as two times the projection value of the selected sampling point minus the projection values of the adjacent sampling points.
  • the adjacent sampling points are generally selected as a best match to the extent of the filter kernel in the s dimension of the sample space.
  • the adjacent sampling points for each sampling point may instead be selected based on geometric criteria.
  • the adjacent sampling points are detection lines that extend from the next incoupling point (emitter) and the next outcoupling point (detector) in both directions away from the incoupling and outcoupling points that define the detection line of the selected sampling point.
  • This principle is illustrated in Fig. 25B, where the detection lines of the selected sampling points in Fig. 25A are represented by solid lines, and the detection lines of the adjacent sampling points are represented by dashed lines.
  • a filtered value for a detection line (sampling point) may be computed by identifying the incoupling and outcoupling points that give rise to the detection line, and then finding the neighboring incoupling and outcoupling points.
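A sketch of this neighbour-based filtering is given below. It assumes that detection lines are identified by the perimeter indices of their incoupling and outcoupling points and, purely for illustration, that shifting both endpoints one step along the perimeter gives the roughly parallel adjacent line required by the kernel; the actual neighbour selection in a given apparatus may differ (cf. Fig. 25B).

import numpy as np

def neighbor_filtered_values(g, emitter_of, detector_of, line_index):
    # (-1, 2, -1) filtering using projection values of adjacent actual detection
    # lines (no synthetic values).  g[k]: projection value of line k;
    # emitter_of[k], detector_of[k]: perimeter indices of its endpoints;
    # line_index[(e, d)]: lookup from an (emitter, detector) pair to a line index.
    g_f = np.array(g, dtype=float)
    for k in range(len(g)):
        e, d = emitter_of[k], detector_of[k]
        up = line_index.get((e + 1, d + 1))   # assumed roughly parallel neighbour on one side
        dn = line_index.get((e - 1, d - 1))   # and on the other side
        if up is None or dn is None:          # border lines: left unfiltered here
            continue
        g_f[k] = 2.0 * g[k] - g[up] - g[dn]
    return g_f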
  • the adjustment factor p_k may be any adjustment factor originating from a line integral through interpolating basis functions in the sample space, such as p_k^Δ1 or p_k^V0. It can also be noted that since several adjustment factors p_k,i are zero, the sum needs only be computed for a relevant subset of the sampling points, namely over all k where p_k,i > TH, where TH is a threshold value, e.g. 0. It is also conceivable to add an overall scaling factor to the back projection operator to achieve a desired reconstruction result.
  • Fig. 25C illustrates the resulting attenuation field when the reconstruction function is operated on the projection values obtained for the attenuation field in Fig. 19, using adjustment factors and the above-described filter kernel. Clearly, the adjustment factors provide adequate reconstruction quality.
7.6 Fourier transformation techniques
  • a NUFFT (Non-Uniform FFT) algorithm is an adaptation of a regular discrete Fourier transformation function, e.g. an FFT, to handle non-uniform input data and/or output data while retaining the "fast" property of the FFT algorithms, thus allowing for time complexities of O(n²·log(n)).
  • the reconstruction step may utilize a so-called NED (Non-Equispaced Data) algorithm which is modified by adjustment factors to account for the varying density of sampling points in the sample space.
  • the reconstruction function F(p_k, g(φ_k, s_k)) is given by four consecutive sub-functions.
  • a first sub-function operates a 2D forward NED FFT on the projection values to generate the Fourier transform ĝ(ψ, r) of g(φ_k, s_k), where ψ and r denote the frequency variables corresponding to the φ and s dimensions.
  • the forward NED FFT applies adjustment factors to compensate for varying density of sampling points in the sample space.
  • the evaluation of the first sub-function typically operates on pre-computed adjustment factors and other pre-computed coefficients of the forward NED FFT (see Chapter 8).
  • a second sub-function operates a regular 1D inverse Fourier transform (e.g. an FFT) with respect to the ψ dimension, returning that axis to φ. This is done since the Projection-Slice Theorem is valid only for Fourier transforms with respect to the s dimension, i.e. one-dimensional transforms of the different projections.
  • the second sub-function thereby results in a polar coordinate representation of the Fourier-transformed projection values, which by the Projection-Slice Theorem corresponds to samples of the 2D Fourier transform of the attenuation field.
  • a fourth sub-function operates a 2D inverse NED FFT on the polar representation f̂(φ, r) to generate the attenuation field: f(x, y) ← f̂(φ, r).
  • the inverse NED FFT may or may not be designed in correspondence with the forward NED FFT.
  • the time complexity of the reconstruction function is O(n²·log(n)).
  • the adjustment factor p_k may be any one of the separation-based factors of sections 6.1-6.4, or one of two further factors that are obtained similarly to the adjustment factors p_k^Δf1 and p_k^Vf0, respectively, i.e. via surface integrals through interpolating basis functions (section 6.5).
  • the interpolating function ψ is reproduced along the reconstruction line.
  • the interpolating function ψ is defined in Chapter 8.
  • the Hough transform is a method for extracting features. It is mainly used in image analysis and computer vision. The main idea is to find geometric objects within a certain class of geometric shapes by a voting procedure. The voting procedure is carried out in the parameter space of the representation of the geometric objects. Generally, the objects are found as local maxima in a so-called accumulator space.
  • the original algorithm is a method for finding lines in a digital image.
  • the original algorithm is outlined below, followed by ways to modify and use the Hough transformation for finding touches in the sinogram directly, without filtering and back projection.
  • any line in a two-dimensional (image) plane can be represented by an angle, γ, and the smallest distance to the origin, r.
  • the line detection algorithm cannot be directly applied for reconstructing the attenuation field based on the measured projection values.
  • a modification of the Hough transform can be used for finding sine curves (i.e. reconstruction lines) present in the sinogram. It can be noted that all sine curves have the same periodicity, 2π, and that a sine curve can be represented by an amplitude, A, and a phase.
  • the weight of the sampling point is added to all corresponding sine curves in the accumulator image.
  • the weight of the sampling point is given by the projection value modified by the adjustment factor p k , such that the projection value is compensated for the local density of sample points.
  • the adjustment factor p_k may be any adjustment factor that directly reflects the separation of sampling points in the sample space, such as those described in sections 6.1-6.4. When all sampling points have been processed, touches are found as local maxima in the accumulator image.
  • the modified Hough transform algorithm has a time complexity of O(n³), since O(n) values are added to the accumulator image for each detection line, the number of detection lines being O(n²).
  • the process for finding local maxima has a lower time complexity.
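One possible realization of the modified Hough transform is sketched below, with the accumulator parametrized directly by reconstruction points (x, y) rather than by amplitude and phase (an illustrative choice): each sampling point (φ_k, s_k), weighted by p_k·g_k, votes for the accumulator cells lying on its sine curve s_k = x·cos φ_k + y·sin φ_k, and only the dominant axis is swept so that O(n) cells are visited per detection line. The nearest-bin assignment is deliberately coarse.

import numpy as np

def hough_accumulator(phi_k, s_k, g_k, p_k, xs, ys):
    acc = np.zeros((len(ys), len(xs)))
    weights = np.asarray(g_k) * np.asarray(p_k)
    for phi, s, w in zip(phi_k, s_k, weights):
        c, si = np.cos(phi), np.sin(phi)
        if abs(si) >= abs(c):
            # sweep x, solve the sine-curve relation for y (coarse nearest-bin voting)
            y = (s - xs * c) / si
            iy = np.searchsorted(ys, y)
            ok = (iy > 0) & (iy < len(ys))
            acc[iy[ok], np.arange(len(xs))[ok]] += w
        else:
            # sweep y, solve for x
            x = (s - ys * si) / c
            ix = np.searchsorted(xs, x)
            ok = (ix > 0) & (ix < len(xs))
            acc[np.arange(len(ys))[ok], ix[ok]] += w
    return acc  # touches appear as local maxima in this accumulator image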
  • NUFFT/NFFT Generalized FFT
  • NDFT Non-uniform DFT
  • NER Non-Equispaced Result FFT
  • NED Non-Equispaced Data FFT
  • USFFT Unequally spaced FFT
  • oversampling may be introduced, given by a factor c.
  • ψ(x) has compact support, is continuously differentiable in [−a, a], and is non-zero in [−π/c, π/c].
  • the Fourier transform of ψ(x) is preferably as small as possible outside of [−M, M] since this will make the summation fast and exact.
  • the first function, ψ(x), is taken to be zero when x² ≥ M². I₀ is the modified Bessel function of the first kind.
  • inverse NED algorithms perform the same steps as the forward NED algorithm but use an ordinary IFFT instead.
  • the NED equation includes an adjustment factor p k , which compensates for the varying density of sampling points. The adjustment factor will be discussed in more detail below.
  • the above NED equation may be modified to utilize regular FFT algorithms.
  • non-zero terms of u_q occur only when the distance between the grid index q and the sampling point x_k, computed modulo c·N, is at most M; each u_q is thus the sum of all non-equispaced z_k within this distance, multiplied with their respective adjustment factors.
  • x̄_k denotes the nearest equispaced sampling point in the FFT input.
  • the input for the NED FFT comprises the projection values z_k of the non-equispaced sampling points, the sampling points x_k, the oversampling factor c, with a total length c·N suitable for FFT, the interpolation length M, and a number of coefficients which may be pre-computed.
  • the NED problem may be formulated in terms of the interpolating function ψ and its Fourier transform, evaluated at the scaled positions 2π·x_k/(c·N) and frequencies 2π·n/(c·N), respectively.
  • the execution of the 2D NED algorithm comprises the corresponding steps applied in both dimensions of the sample space.
  • the adjustment factor p_k may be any one of the separation-based factors described in sections 6.1-6.4.
  • the adjustment factor may instead be computed as the product of an interpolating basis function, for instance b_k^Δ1 or b_k^V0, for a given sampling point with the two-dimensional extent of the interpolating function ψ. This would render a corresponding set of adjustment factors p_k,m.
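The gridding idea behind a density-compensated forward NED FFT can be sketched in one dimension as follows: each non-equispaced sample z_k, scaled by its adjustment factor p_k, is spread onto an oversampled equispaced grid of length c·N with a compactly supported window (here a Kaiser-Bessel-type window built from the modified Bessel function I₀, standing in for ψ), the grid is transformed with an ordinary FFT, and the result is divided by the window's transform. All scalings, the window shape and the parameter values are assumptions, and the normalization is deliberately simplified.

import numpy as np
from scipy.special import i0

def kb_window(t, M, beta=10.0):
    # Kaiser-Bessel-type window, zero for |t| >= M (cf. the function psi above).
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    inside = np.abs(t) < M
    out[inside] = i0(beta * np.sqrt(1.0 - (t[inside] / M) ** 2)) / i0(beta)
    return out

def ned_fft_1d(x_k, z_k, p_k, N, c=2, M=4):
    # x_k in [0, N): non-equispaced positions; z_k: sample values; p_k: adjustment factors.
    G = c * N
    u = np.zeros(G, dtype=complex)
    for x, z, p in zip(x_k, z_k, p_k):
        q = np.arange(int(np.floor(x * c)) - M, int(np.floor(x * c)) + M + 1)
        u[q % G] += p * z * kb_window(q - x * c, M)   # spread onto the oversampled grid
    # Deconvolve by the window's transform, here evaluated numerically.
    q = np.arange(-M, M + 1)
    taps = kb_window(q, M)
    m = np.arange(G)
    Psi = (taps[None, :] * np.exp(-2j * np.pi * np.outer(m, q) / G)).sum(axis=1)
    F = np.fft.fft(u) / Psi
    return np.fft.fftshift(F)[G // 2 - N // 2: G // 2 + N // 2]   # central N frequencies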
  • while the detection lines have been represented as sampling points in the φ-s-plane, it should be realized that any other parameter representation of the detection lines can be used.
  • the detection lines can be represented in a β-α-plane, as used in a fan geometry, which is a standard geometry widely used in conventional tomography, e.g. in the medical field.
  • the fan geometry is exemplified in Fig. 26.
  • in conventional tomography, the measurement system is then rotated slightly around the origin of the x,y coordinate system in Fig. 26.
  • each detection line can be represented by a sampling point in a sample space defined by the angular emitter location parameter β and the angular direction parameter α. All of the above-described adjustment factors and reconstruction steps are equally applicable to such a sample space.
  • the 1D filter in the filtered back projection algorithm is applied to extend in the α dimension, and the back projection operator is different from the one used in the above-described parallel geometry. Suitable filters and operators are found in the literature.
  • Fig. 27 exemplifies a technique for assigning values to the parameters α and β for the detection lines of a touch-sensitive apparatus.
  • the apparatus is circumscribed by a fictitious circle C which may or may not be centered at the origin of the x,y coordinate system (Fig. 2) of the apparatus.
  • the emitters 2 and sensors 3 define detection lines (not shown) across the touch surface 1.
  • the intersection between the detection line and the circle C is taken to define a β value, whereas the α value of each detection line is given by the inclination angle of the detection line with respect to a reference line.
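A sketch of this mapping is given below, assuming that each detection line is specified by the x,y positions of its emitter and sensor, that the circle C has radius R and centre "center" (chosen so that the circle circumscribes the apparatus), and that the reference line for the inclination angle is the x axis; these choices are illustrative.

import numpy as np

def fan_parameters(p_emit, p_sens, center=(0.0, 0.0), R=1.0):
    # Map one detection line (emitter position -> sensor position) to (beta, alpha).
    p_emit = np.asarray(p_emit, dtype=float) - center
    p_sens = np.asarray(p_sens, dtype=float) - center
    d = p_sens - p_emit
    alpha = np.arctan2(d[1], d[0]) % np.pi           # inclination angle of the detection line
    # Intersection of the (extended) line p_emit + t*d with the circle |p| = R,
    # taking the solution on the emitter side.
    a = d @ d
    b = 2.0 * (p_emit @ d)
    c = p_emit @ p_emit - R**2
    t = (-b - np.sqrt(b**2 - 4.0 * a * c)) / (2.0 * a)
    p = p_emit + t * d
    beta = np.arctan2(p[1], p[0])                    # angular location on the circle C
    return beta, alpha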
  • the reconstructed attenuation field may be subjected to post-processing before the touch data extraction (step 26 in Fig. 10A).
  • post-processing may involve different types of filtering, for noise removal and/or image enhancement.
  • the reconstructed attenuation field need not represent the distribution of attenuation coefficient values within the touch surface, but could instead represent the distribution of energy, relative transmission, or any other relevant entity derivable by processing of projection values given by the output signal of the sensors.
  • the projection values may represent measured energy, differential energy (e.g. given by a measured energy value subtracted by a background energy value for each detection line), relative attenuation, relative transmission, a logarithmic attenuation, etc.
  • each individual projection signal included in the output signal may be subjected to a high-pass filtering in the time domain, whereby the thus-filtered projection signals represent background-compensated energy and can be sampled for generation of projection values.
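A minimal sketch of such time-domain high-pass filtering is given below: a slowly updated background estimate is maintained per detection line and subtracted from each new frame of raw energy values, leaving background-compensated values that can be sampled as projection values. The exponential update and its rate are assumptions.

import numpy as np

class BackgroundCompensator:
    # Per-detection-line high-pass filtering in the time domain.
    def __init__(self, n_lines, rate=0.01):
        self.background = np.zeros(n_lines)   # slow (background) component per detection line
        self.rate = rate                      # smaller rate = slower background tracking

    def update(self, frame):
        # frame: raw energy values for all detection lines at one sensing instance.
        compensated = frame - self.background
        self.background += self.rate * (frame - self.background)
        return compensated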
  • the touch surface may be implemented as an electrically conductive panel, the emitters and sensors may be electrodes that couple electric currents into and out of the panel, and the output signal may be indicative of the resistance/impedance of the panel on the individual detection lines.
  • the touch surface may include a material acting as a dielectric, the emitters and sensors may be electrodes, and the output signal may be indicative of the capacitance of the panel on the individual detection lines.
  • the touch surface may include a material acting as a vibration conducting medium, the emitters may be vibration generators (e.g. acoustic or piezoelectric transducers), and the sensors may be vibration sensors (e.g. acoustic or piezoelectric sensors).

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Position Input By Displaying (AREA)

Abstract

A touch-sensitive apparatus comprises a panel for conducting signals from incoupling points to outcoupling points. Actual detection lines are defined between pairs of incoupling and outcoupling points to extend across the panel. A signal generator is coupled to the incoupling points to generate the signals, e.g. light, and a signal detector is coupled to the outcoupling points to generate an output signal. A data processor processes (22) the output signal to generate a set of data samples, which are non-uniformly arranged in a two-dimensional sample space. Each data sample is indicative of detected energy on a detection line and is defined by a signal value and first and second dimension values in the sample space. The first and second dimension values define the location of the detection line on the surface portion. Adjustment factors are obtained (20) for the data samples, each adjustment factor being representative of the local density of data samples in the sample space. The data samples are processed (24) by tomographic reconstruction, while applying the adjustment factors, to generate a reconstructed distribution of an energy-related parameter within the surface portion.

Description

TOUCH DETERMINATION BY TOMOGRAPHIC RECONSTRUCTION
Cross-Reference to Related Applications
The present application claims the benefit of Swedish patent application No.
1051061-8, filed on October 11, 2010, and U.S. provisional application No. 61/391,764, filed on October 11, 2010, both of which are incorporated herein by reference.
Technical Field
The present invention relates to touch- sensitive panels and data processing techniques in relation to such panels.
Background Art
To an increasing extent, touch- sensitive panels are being used for providing input data to computers, electronic measurement and test equipment, gaming devices, etc. The panel may be provided with a graphical user interface (GUI) for a user to interact with using e.g. a pointer, stylus or one or more fingers. The GUI may be fixed or dynamic. A fixed GUI may e.g. be in the form of printed matter placed over, under or inside the panel. A dynamic GUI can be provided by a display screen integrated with, or placed underneath, the panel or by an image being projected onto the panel by a projector.
There are numerous known techniques for providing touch sensitivity to the panel, e.g. by using cameras to capture light scattered off the point(s) of touch on the panel, or by incorporating resistive wire grids, capacitive sensors, strain gauges, etc into the panel.
US2004/0252091 discloses an alternative technique which is based on frustrated total internal reflection (FTIR). Light sheets are coupled into a panel to propagate inside the panel by total internal reflection. When an object comes into contact with a surface of the panel, two or more light sheets will be locally attenuated at the point of touch. Arrays of light sensors are located around the perimeter of the panel to detect the received light for each light sheet. A coarse tomographic reconstruction of the light field across the panel surface is then created by geometrically back-tracing and triangulating all attenuations observed in the received light. This is stated to result in data regarding the position and size of each contact area.
US2009/0153519 discloses a panel capable of conducting signals. A "tomograph" is positioned adjacent the panel with signal flow ports arrayed around the border of the panel at discrete locations. Signals (b) measured at the signal flow ports are tomographically processed to generate a two-dimensional representation (x) of the conductivity on the panel, whereby touching objects on the panel surface can be detected. The presented technique for tomographic reconstruction is based on a linear model of the tomographic system, Ax=b. The system matrix A is calculated at factory, and its pseudo inverse A⁻¹ is calculated using Truncated SVD algorithms and operated on the measured signals to yield the two-dimensional (2D) representation of the conductivity: x = A⁻¹·b. The suggested method is both demanding in terms of processing and lacks suppression of high frequency components, possibly leading to much noise in the 2D representation.
US2009/0153519 also makes a general reference to Computer Tomography (CT). CT methods are well-known imaging methods which have been developed for medical purposes. CT methods employ digital geometry processing to reconstruct an image of the inside of an object based on a large series of projection measurements through the object. Various CT methods have been developed to enable efficient processing and/or precise image reconstruction, e.g. Filtered Back Projection, ART, SART, etc. Often, the projection measurements are carried out in accordance with a standard geometry which is given by the CT method. Clearly, it would be desirable to capitalize on existing CT methods for reconstructing the 2D distribution of an energy-related parameter (light, conductivity, etc) across a touch surface based on a set of projection measurements.
Summary
It is an object of the invention to enable touch determination on a panel based on projection measurements by use of existing CT methods.
Another objective is to provide a technique that enables determination of touch- related data at sufficient precision to discriminate between a plurality of objects in simultaneous contact with a touch surface.
This and other objects, which may appear from the description below, are at least partly achieved by means of a method of enabling touch determination, a computer program product, a device for enabling touch determination, and a touch- sensitive apparatus according to the independent claims, embodiments thereof being defined by the dependent claims.
A first aspect of the invention is a method of enabling touch determination based on an output signal from a touch-sensitive apparatus. The touch-sensitive apparatus comprises a panel configured to conduct signals from a plurality of peripheral incoupling points to a plurality of peripheral outcoupling points, thereby defining detection lines that extend across a surface portion of the panel between pairs of incoupling and outcoupling points, at least one signal generator coupled to the incoupling points to generate the signals, and at least one signal detector coupled to the outcoupling points to generate the output signal. The method comprises: processing the output signal to generate a set of data samples, wherein each data sample is indicative of detected energy on one of the detection lines and is defined by a signal value and first and second dimension values in a two-dimensional sample space, wherein the first and second dimension values define the location of the detection line on the surface portion, and wherein the data samples are non-uniformly arranged in the sample space;
obtaining adjustment factors for the set of data samples, wherein each adjustment factor is representative of the local density of data samples in the sample space for a respective data sample; and processing the set of the data samples by tomographic reconstruction, while applying the adjustment factors, to generate data indicative of a reconstructed distribution of an energy-related parameter within at least part of the surface portion.
In one embodiment, the adjustment factor for a given data sample is calculated to represent the number of data samples within a region around the given data sample in the sample space.
In another embodiment, the adjustment factor for a given data sample is calculated to represent an average of a set of smallest distances between the given data sample and neighboring data samples in the sample space.
In yet another embodiment, the adjustment factor for a given data sample is calculated to represent an extent of a Voronoi cell or a set of Delaunay triangles in the sample space for the given data sample.
In yet another embodiment, the reconstructed distribution comprises spatial data points, each spatial data point having a unique location on the surface portion and corresponding to a predetermined curve in the sample space, and the adjustment factor for a given data sample is calculated, for each spatial data point in a set of spatial data points, to represent the interaction between the predetermined curve of the spatial data point and a two-dimensional basis function located at the given data sample, wherein the basis function is given an extent in the sample space that is dependent on the local density. The interaction may be calculated by evaluating a line integral of the basis function, along the predetermined curve.
In one embodiment, the step of obtaining comprises: obtaining, for each spatial data point, a set of adjustment factors associated with a relevant set of data samples. The step of processing the set of data samples may comprise: reconstructing each spatial data point by: scaling the signal value of each data sample in the relevant set of data samples by its corresponding adjustment factor and summing the thus-scaled signal values.
In one embodiment, the predetermined curve is designed to include the shape of a predetermined one-dimensional filter function which extends in the first dimension of the sample space and which is centered on and reproduced at plural locations along the curve. In this embodiment, the interaction may be calculated by evaluating a surface integral of the combination of the predetermined curve and the basis function.
In one embodiment, the step of processing the output signal comprises: obtaining a measurement value for each detection line and applying a filter function to generate a filtered signal value for each measurement value, wherein the filtered signal values form said signal values of the data samples. The filter function may be a predetermined one- dimensional filter function which is applied in the first dimension of the sample space.
Alternatively or additionally, the step of applying the filter function may comprise: obtaining estimated signal values around each measurement value in the first dimension, and operating the filter function on the measurement value and the estimated signal values. The filtered signal value may be generated as a weighted summation of the measurement values and the estimated signal values based on the filter function. The estimated signal values may be obtained as measurement values of other detection lines, said other detection lines being selected as a best match to the extent of the filter function in the first dimension, or the estimated signal values may be generated at predetermined locations around the measurement value in the sample space.
In one embodiment, the estimated signal values are generated by interpolation in the sample space based on the measurement values. Each estimated signal value may be generated by interpolation of measurement values of neighboring data samples in the sample space. Alternatively or additionally, the step of processing the output signal further may comprise: obtaining a predetermined two-dimensional interpolation function with nodes corresponding to the data samples, and calculating the estimated signal values according to the interpolation function and based on the measurement values of the data samples.
In one embodiment, the reconstructed distribution comprises spatial data points, each spatial data point having a unique location on the surface portion and
corresponding to a predetermined curve in the sample space, and wherein the step of processing the set of data samples comprises: generating filtered signal values for the data samples by scaling the signal value of each data sample by a weight given by a predetermined filter function based on the distance of the data sample from the curve in the first dimension, and evaluating each spatial data point by: scaling the filtered signal value by the adjustment factor of the corresponding data sample and summing the thus- scaled filtered signal values.
In one embodiment, the step of processing the set of data samples comprises: calculating Fourier transformation data for the data samples with respect to the first dimension only, and generating said data indicative of the reconstructed distribution by operating a two-dimensional inverse Fourier transform on the Fourier transformation data, wherein the adjustment factors are applied in the step of calculating Fourier transformation data. The step of calculating the Fourier transformation data may comprise: transforming the data samples to a Fourier domain to produce uniformly arranged Fourier-transformed data samples with respect to the first and second dimensions, and transforming the Fourier-transformed data samples back to the sample space with respect to the second dimension only.
In one embodiment, the first dimension value is a distance of the detection line in the plane of the panel from a predetermined origin, and the second dimension value is a rotation angle of the detection line in the plane of the panel.
In an alternative embodiment, the first dimension value is a rotation angle of the detection line in the plane of the panel, and the second dimension value is an angular location of the incoupling or outcoupling point of the detection line.
A second aspect of the invention is a computer program product comprising computer code which, when executed on a data-processing system, is adapted to carry out the method of the first aspect.
A third aspect of the invention is a device for enabling touch determination based on an output signal from a touch-sensitive apparatus. The touch-sensitive apparatus comprises a panel configured to conduct signals from a plurality of peripheral incoupling points to a plurality of peripheral outcoupling points, thereby defining detection lines that extend across a surface portion of the panel between pairs of incoupling and outcoupling points, signal generating means coupled to the incoupling points to generate the signals, and signal detecting means coupled to the outcoupling points to generate the output signal. The device comprises: means for processing the output signal to generate a set of data samples, wherein each data sample is indicative of detected energy on one of the detection lines and is defined by a signal value and first and second dimension values in a two-dimensional sample space, wherein the first and second dimension values define the location of the detection line on the surface portion, and wherein the data samples are non-uniformly arranged in the sample space; means for obtaining adjustment factors for the set of data samples, wherein each adjustment factor is representative of the local density of data samples in the sample space for a respective data sample; and means for processing the set of the data samples by tomographic reconstruction, while applying the adjustment factors, to generate data indicative of a reconstructed distribution of an energy-related parameter within at least part of the surface portion.
A fourth aspect of the invention is a touch-sensitive apparatus, comprising: a panel configured to conduct signals from a plurality of peripheral incoupling points to a plurality of peripheral outcoupling points, thereby defining detection lines that extend across a surface portion of the panel between pairs of incoupling and outcoupling points; means for generating the signals at the incoupling points; means for generating an output signal based on detected signals at the outcoupling points; and the device for enabling touch determination of the third aspect.
A fifth aspect of the invention is a touch-sensitive apparatus, comprising: a panel configured to conduct signals from a plurality of peripheral incoupling points to a plurality of peripheral outcoupling points, thereby defining detection lines that extend across a surface portion of the panel between pairs of incoupling and outcoupling points; at least one signal generator coupled to the incoupling points to generate the signals; at least one signal detector coupled to the outcoupling points to generate an output signal; and a signal processor connected to receive the output signal and configured to: process the output signal to generate a set of data samples, wherein each data sample is indicative of detected energy on one of the detection lines and is defined by a signal value and first and second dimension values in a two-dimensional sample space, wherein the first and second dimension values define the location of the detection line on the surface portion, and wherein the data samples are non-uniformly arranged in the sample space; obtain adjustment factors for the set of data samples, wherein each adjustment factor is representative of the local density of data samples in the sample space for a respective data sample; and process the set of the data samples by tomographic reconstruction, while applying the adjustment factors, to generate data indicative of a reconstructed distribution of an energy-related parameter within at least part of the surface portion.
Any one of the embodiments of the first aspect can be combined with the second to fifth aspects.
Still other objectives, features, aspects and advantages of the present invention will appear from the following detailed description, from the attached claims as well as from the drawings.
Brief Description of Drawings
Embodiments of the invention will now be described in more detail with reference to the accompanying schematic drawings.
Fig. 1 is a plan view of a touch- sensitive apparatus.
Fig. 2 is a top plan view of a touch- sensitive apparatus with an interleaved arrangement of emitters and sensors.
Figs 3A-3B are side and top plan views of touch-sensitive systems operating by frustrated total internal reflection (FTIR).
Fig. 4 illustrates the underlying principle of the Projection-Slice Theorem. Fig. 5 illustrates a parallel geometry used in tomographic reconstruction.
Figs 6A-6B are graphs of a regular arrangement of detection lines in a
measurement coordinate system and in a sample space, respectively.
Figs 7A-7H illustrate a starting point, intermediate results and final results of a back projection process using a parallel geometry, as well as a filter used in the process.
Figs 8A-8B are graphs of a non-regular arrangement of detection lines in the coordinate system of a touch surface and in a sample space, respectively.
Fig. 9 is a graph of sampling points defined by the interleaved arrangement in Fig. 2.
Fig. 10A is a flow chart of a reconstruction method, and Fig. 10B is a block diagram of a device that implements the method of Fig. 10A.
Figs 11A-11B illustrate a first embodiment for determining adjustment factors.
Fig. 12 illustrates a second embodiment for determining adjustment factors.
Fig. 13 illustrates a third embodiment for determining adjustment factors.
Fig. 14 illustrates a fourth embodiment for determining adjustment factors.
Fig. 15 illustrates the concept of interpolating basis functions.
Figs 16A-16B illustrate a correspondence between reconstruction points on the touch surface and reconstruction lines in the sample space.
Figs 17A-17D illustrate the interaction between a reconstruction line and an interpolating basis function.
Figs 18A-18D illustrate the evaluation of a surface integral defined by a reconstruction line, a filter function and a basis function.
Fig. 19 is a reference image mapped to an interleaved arrangement.
Fig. 20 illustrates a first embodiment of modified FBP reconstruction.
Figs 21A-21B are graphs of different filters for use in modified FBP
reconstructions.
Figs 22A-22B show the reconstructed attenuation fields obtained by the first embodiment, with and without adjustment factors.
Fig. 23A is a graph of Delaunay triangles defined in the sample space of Fig. 9, and Fig. 23B shows the reconstructed attenuation field obtained by a second
embodiment of modified FBP reconstruction.
Fig. 24A illustrates a third embodiment of modified FBP reconstruction, and Fig. 24B shows the reconstructed attenuation field obtained by the third embodiment.
Figs 25A-25B illustrate a fourth embodiment of modified FBP reconstruction, and Fig. 25C shows the reconstructed attenuation field obtained by the fourth embodiment.
Fig. 26 illustrates a fan geometry used in tomographic reconstruction. Fig. 27 illustrates the use of a circle for mapping detection lines to a sample space for fan geometry reconstruction.
Detailed Description of Example Embodiments
The present invention relates to techniques for enabling extraction of touch data for at least one object, and typically multiple objects, in contact with a touch surface of a touch-sensitive apparatus. The description starts out by presenting the underlying concept of such a touch-sensitive apparatus, especially an apparatus operating by frustrated total internal reflection (FTIR) of light. The description continues to generally explain and exemplify the theory of tomographic reconstruction and its use of standard geometries. Then follows an example of an overall method for touch data extraction involving tomographic reconstruction. Finally, different inventive aspects of applying techniques for tomographic reconstruction for touch determination are further explained and exemplified.
Throughout the description, the same reference numerals are used to identify corresponding elements.
1. Touch-sensitive apparatus
Fig. 1 illustrates a touch- sensitive apparatus 100 which is based on the concept of transmitting energy of some form across a touch surface 1, such that an object that is brought into close vicinity of, or in contact with, the touch surface 1 causes a local decrease in the transmitted energy. The touch- sensitive apparatus 100 includes an arrangement of emitters and sensors, which are distributed along the periphery of the touch surface. Each pair of an emitter and a sensor defines a detection line, which corresponds to the propagation path for an emitted signal from the emitter to the sensor. In Fig. 1, only one such detection line D is illustrated to extend from emitter 2 to sensor 3, although it should be understood that the arrangement typically defines a dense grid of intersecting detection lines, each corresponding to a signal being emitted by an emitter and detected by a sensor. Any object that touches the touch surface along the extent of the detection line D will thus decrease its energy, as measured by the sensor 3.
The arrangement of sensors is electrically connected to a signal processor 10, which samples and processes an output signal from the arrangement. The output signal is indicative of the received energy at each sensor 3. As will be explained below, the signal processor 10 may be configured to process the output signal by a tomographic technique to recreate an image of the distribution of an energy-related parameter (for simplicity, referred to as "energy distribution" in the following) across the touch surface 1. The energy distribution may be further processed by the signal processor 10 or by a separate device (not shown) for touch determination, which may involve extraction of touch data, such as a position (e.g. x, y coordinates), a shape or an area of each touching object.
In the example of Fig. 1, the touch- sensitive apparatus 100 also includes a controller 12 which is connected to selectively control the activation of the emitters 2. The signal processor 10 and the controller 12 may be configured as separate units, or they may be incorporated in a single unit. One or both of the signal processor 10 and the controller 12 may be at least partially implemented by software executed by a processing unit.
The touch- sensitive apparatus 100 may be designed to be used with a display device or monitor, e.g. as described in the Background section. Generally, such a display device has a rectangular extent, and thus the touch-sensitive apparatus 100 (the touch surface 1) is also likely to be designed with a rectangular shape. Further, the emitters 2 and sensors 3 all have a fixed position around the perimeter of the touch surface 1. Thus, in contrast to a conventional tomographic apparatus used e.g. in the medical field, there will be no possibility of rotating the complete measurement system. As will be described in further detail below, this puts certain limitations on the use of standard tomographic techniques for recreating/reconstructing the energy distribution within the touch surface 1.
In the following, embodiments of the invention will be described in relation to an exemplifying arrangement of emitters 2 and sensors 3. This arrangement, shown in Fig. 2, is denoted "interleaved arrangement" and has emitters 2 (indicated by crossed circles) and sensors 3 (indicated by squares) placed one after the other along the periphery of the touch surface 1. Thus, every emitter 2 is placed between two sensors 3. The distance between neighboring emitters 2 is the same along the periphery. The same applies for the distance between neighboring sensors 3.
It is to be understood that this arrangement is given merely for the purpose of illustration and the concepts of the invention are applicable irrespective of aspect ratio, shape of the touch surface, and arrangement of emitters and sensors.
In the embodiments shown herein, at least a subset of the emitters 2 may be arranged to emit energy in the shape of a beam or wave that diverges in the plane of the touch surface 1, and at least a subset of the sensors 3 may be arranged to receive energy over a wide range of angles (field of view). Alternatively or additionally, the individual emitter 2 may be configured to emit a set of separate beams that propagate to a number of sensors 3. In either embodiment, each emitter 2 transmits energy to a plurality of sensors 3, and each sensor 3 receives energy from a plurality of emitters 2. The touch- sensitive apparatus 100 may be configured to permit transmission of energy in one of many different forms. The emitted signals may thus be any radiation or wave energy that can travel in and across the touch surface 1 including, without limitation, light waves in the visible or infrared or ultraviolet spectral regions, electrical energy, electromagnetic or magnetic energy, or sonic and ultrasonic energy or vibration energy.
In the following, an example embodiment based on propagation of light will be described. Fig. 3A is a side view of a touch- sensitive apparatus 100 which includes a light transmissive panel 4, one or more light emitters 2 (one shown) and one or more light sensors 3 (one shown). The panel 4 defines two opposite and generally parallel surfaces 5, 6 and may be planar or curved. A radiation propagation channel is provided between two boundary surfaces 5, 6 of the panel 4, wherein at least one of the boundary surfaces allows the propagating light to interact with a touching object 7. Typically, the light from the emitter(s) 2 propagates by total internal reflection (TIR) in the radiation propagation channel, and the sensors 3 are arranged at the periphery of the panel 4 to generate a respective measurement signal which is indicative of the energy of received light.
As shown in Fig. 3 A, the light may be coupled into and out of the panel 4 directly via the edge portion that connects the top and bottom surfaces 5, 6 of the panel 4.
Alternatively, not shown, a separate coupling element (e.g. in the shape of a wedge) may be attached to the edge portion or to the top or bottom surface 5, 6 of the panel 4 to couple the light into and/or out of the panel 4. When the object 7 is brought sufficiently close to the boundary surface, part of the light may be scattered by the object 7, part of the light may be absorbed by the object 7, and part of the light may continue to propa- gate unaffected. Thus, when the object 7 touches a boundary surface of the panel (e.g. the top surface 5), the total internal reflection is frustrated and the energy of the transmitted light is decreased. This type of touch- sensitive apparatus is denoted "FTIR system" (FTIR - Frustrated Total Internal Reflection) in the following.
The touch- sensitive apparatus 100 may be operated to measure the energy of the light transmitted through the panel 4 on a plurality of detection lines. This may, e.g., be done by activating a set of spaced-apart emitters 2 to generate a corresponding number of light sheets inside the panel 4, and by operating a set of sensors 3 to measure the transmitted energy of each light sheet. Such an embodiment is illustrated in Fig. 3B, where each emitter 2 generates a beam of light that expands in the plane of the panel 4 while propagating away from the emitter 2. Each beam propagates from one or more entry or incoupling points within an incoupling site on the panel 4. Arrays of light sensors 3 are located around the perimeter of the panel 4 to receive the light from the emitters 2 at a number of spaced-apart outcoupling points within an outcoupling site on the panel 4. It should be understood that the incoupling and outcoupling points merely refer to the position where the beam enters and leaves, respectively, the panel 4. Thus, one emitter/sensor may be optically coupled to a number of incoupling/outcoupling points. In the example of Fig. 3B, however, the detection lines D are defined by individual emitter- sensor pairs.
The light sensors 3 collectively provide an output signal, which is received and sampled by the signal processor 10. The output signal contains a number of sub-signals, also denoted "projection signals", each representing the energy of light emitted by a certain light emitter 2 and received by a certain light sensor 3, i.e. the received energy on a certain detection line. Depending on implementation, the signal processor 10 may need to process the output signal for identification of the individual sub-signals.
Irrespective of implementation, the signal processor 10 is able to obtain an ensemble of measurement values that contains information about the distribution of an energy-related parameter across the touch surface 1.
The light emitters 2 can be any type of device capable of emitting light in a desired wavelength range, for example a diode laser, a VCSEL (vertical-cavity surface-emitting laser), or alternatively an LED (light-emitting diode), an incandescent lamp, a halogen lamp, etc.
The light sensors 3 can be any type of device capable of detecting the energy of light emitted by the set of emitters, such as a photodetector, an optical detector, a photoresistor, a photovoltaic cell, a photodiode, a reverse-biased LED acting as photodiode, a charge-coupled device (CCD) etc.
The emitters 2 may be activated in sequence, such that the received energy is measured by the sensors 3 for each light sheet separately. Alternatively, all or a subset of the emitters 2 may be activated concurrently, e.g. by modulating the emitters 2 such that the light energy measured by the sensors 3 can be separated into the sub-signals by a corresponding de-modulation.
Reverting to the emitter-sensor arrangements in Fig. 2, the spacing between neighboring emitters 2 and sensors 3 is generally from about 1 mm to about 20 mm. For practical as well as resolution purposes, the spacing is generally in the 2-10 mm range.
In a variant of the interleaved arrangement, the emitters 2 and sensors 3 may partially or wholly overlap, as seen in a plan view. This can be accomplished by placing the emitters 2 and sensors 3 on opposite sides of the panel 4, or in some equivalent optical arrangement.
It is to be understood that Fig. 3 merely illustrates one example of an FTIR system. For example, the detection lines may instead be generated by sweeping or scanning one or more beams of light inside the panel. Such and other examples of FTIR systems are e.g. disclosed in US6972753, US7432893, US2006/0114237, US2007/0075648, WO2009/048365, WO2010/006882, WO2010/006883, WO2010/006884, WO2010/006885, WO2010/006886 and WO2010/064983, which are all incorporated herein by this reference. The inventive concept may be advantageously applied to such alternative FTIR systems as well.
2. Transmission
As indicated in Fig. 3A, the light will not be blocked by the touching object 7. Thus, if two objects 7 happen to be placed after each other along a light path from an emitter 2 to a sensor 3, part of the light will interact with both objects 7. Provided that the light energy is sufficient, a remainder of the light will reach the sensor 3 and generate an output signal that allows both interactions (touch points) to be identified. Thus, in multi-touch FTIR systems, the transmitted light may carry information about a plurality of touches.
In the following, T_k is the transmission for the k:th detection line, T_v is the transmission at a specific position along the detection line, and A_v is the relative attenuation at the same point. The total transmission (modeled) along a detection line is thus:

$$T_k = \prod_{v} T_v = \prod_{v} (1 - A_v)$$
The above equation is suitable for analyzing the attenuation caused by discrete objects on the touch surface, when the touch points are fairly large and well separated. However, a more correct definition of attenuation through an attenuating medium may be used:
$$I_k = I_{0,k} \cdot e^{-\int_{D_k} a(x)\,dx}$$
In this formulation, I_k represents the transmitted energy on detection line D_k with attenuating object(s), I_{0,k} represents the transmitted energy on detection line D_k without attenuating objects, and a(x) is the attenuation coefficient along the detection line D_k. We also let the detection line interact with the touch surface along the entire extent of the detection line, i.e. the detection line is represented as a mathematical line.
To facilitate the tomographic reconstruction as described in the following, the measurement values may be divided by a respective background value. By proper choice of background values, the measurement values are thereby converted into transmission values, which thus represent the fraction of the available light energy that has been measured on each of the detection lines.
The theory of the Radon transform (see below) deals with line integrals, and it may therefore be proper to use the logarithm of the above expression:
$$-\log\left(\frac{I_k}{I_{0,k}}\right) = \int_{D_k} a(x)\,dx$$
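By way of illustration only, a minimal Python sketch of this pre-processing could read as follows; the numpy-based function and variable names are illustrative assumptions and not part of the disclosed apparatus:

```python
import numpy as np

def projection_values(measured, background, eps=1e-12):
    """Convert measured energies I_k into projection values -log(I_k / I_0k).

    measured   : array of transmitted energies, one per detection line (with touches)
    background : array of reference energies I_0k, recorded without any touching object
    """
    transmission = np.clip(measured / background, eps, None)  # fraction of available light
    return -np.log(transmission)                              # line integrals of attenuation

# Hypothetical example: three detection lines, the second one partly attenuated by a touch
g = projection_values(np.array([0.98, 0.55, 1.00]), np.array([1.0, 1.0, 1.0]))
```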
3. Tomographic techniques
Tomographic reconstruction, which is well-known per se, may be based on the mathematics describing the Radon transform and its inverse. The following theoretical discussion is limited to the 2D Radon transform. The general concept of tomography is to image a medium by measuring line integrals through the medium for a large set of angles and positions. The line integrals are measured through the image plane. To find the inverse, i.e. the original image, many algorithms use the so-called Projection-Slice Theorem.
Several efficient algorithms have been developed for tomographic reconstruction, e.g. Filtered Back Projection (FBP), FFT-based algorithms, ART (Algebraic Reconstruction Technique), SART (Simultaneous Algebraic Reconstruction Technique), etc. FBP is a widely used algorithm, and there are many variants and extensions thereof. Below, a brief outline of the underlying mathematics for FBP is given, for the sole purpose of facilitating the following discussion about the inventive concept and its merits.
3.1 Projection-Slice Theorem
Many tomographic reconstruction techniques make use of a mathematical theorem called the Projection-Slice Theorem. This theorem states that given a two-dimensional function f(x, y), the one- and two-dimensional Fourier transforms F_1 and F_2, a projection operator R that projects a two-dimensional (2D) function onto a one-dimensional (1D) line, and a slice operator S_1 that extracts a central slice of a function, the following calculations are equal:
$$F_1 R f(x, y) = S_1 F_2 f(x, y)$$
This relation is illustrated in Fig. 4. The right-hand side of the equation above essentially extracts a 1D line of the 2D Fourier transform of the function f(x, y). The line passes through the origin of the 2D Fourier plane, as shown in the right-hand part of Fig. 4. The left-hand side of the equation starts by projecting (i.e. integrating along 1D lines in the projection direction p) the 2D function onto a 1D line (orthogonal to the projection direction p), which forms a "projection" that is made up of the projection values for all the different detection lines extending in the projection direction p. Thus, taking a 1D Fourier transform of the projection gives the same result as taking a slice from the 2D Fourier transform of the function f(x, y). In the context of the present disclosure, the function f(x, y) corresponds to the attenuation coefficient field a(x) (generally denoted "attenuation field" herein) to be reconstructed.

3.2 Radon transform
First, it can be noted that the attenuation vanishes outside the touch surface. For the following mathematical discussion, we define a circular disc that circumscribes the touch surface, Ω_r = {x: |x| < r}, with the attenuation field set to zero outside of this disc. Further, the projection value for a given detection line is given by:
$$g(\theta, s) = \int a(x)\,\delta(x \cdot \theta - s)\,dx$$
Here, we let θ = (cos φ, sin φ) be a unit vector denoting the direction normal to the detection line, and s is the shortest distance (with sign) from the detection line to the origin (taken as the centre of the screen, cf. Fig. 4). Note that θ is perpendicular to the above-mentioned projection direction vector, p. This means that we can denote g(θ, s) by g(φ, s), since the latter notation more clearly indicates that g is a function of two variables and not a function of one scalar and one arbitrary vector. Thus, the projection value for a detection line can be expressed as g(φ, s), i.e. as a function of the angle of the detection line to a reference direction, and the distance of the detection line to an origin. We let the angle span the range 0 ≤ φ < π, and since the attenuation field has support in Ω_r, it is sufficient to consider s in the interval −r ≤ s ≤ r. The set of projections collected for different angles and distances may be stacked together to form a "sinogram".
Our goal is now to reconstruct the attenuation field a(x) given the measured Radon transform, g = Ra. The Radon transform operator is not invertible in the general sense. To be able to find a stable inverse, we need to impose restrictions on the variations of the attenuation field.
One should note that the Radon transform is the same as the above-mentioned projection operator in the Projection-Slice Theorem. Hence, taking the 1D Fourier transform of g(φ, s) with respect to the s variable results in central slices from the 2D Fourier transform of the attenuation field a(x).
3.3 Continuous vs. discrete tomography
The foregoing sections 3.1-3.2 describe the mathematics behind tomographic reconstruction using continuous functions and operators. However, in a real world system, the measurement data represents a discrete sampling of functions, which calls for modifications of the algorithms. For a thorough description of such modifications, we refer to the mathematical literature, e.g. "The Mathematics of Computerized Tomography" by Natterer, and "Principles of Computerized Tomographic Imaging" by Kak and Slaney.
When operating on discretely sampled functions, certain reconstruction techniques may benefit from a filtering step designed to increase the amount of information about high spatial frequencies. Without the filtering step, the information density will be much higher at low frequencies, and the reconstruction will yield a blurring from the low frequency components.
The filtering step may be implemented as a multiplication/weighting of the data points in the 2D Fourier transform plane. This multiplication with a filter W_b in the Fourier domain may alternatively be implemented as a convolution with a filter w_b(s) in the spatial domain, i.e. with respect to the s variable, using the inverse Fourier transform of the weighting function. The multiplication/weighting function in the 2D Fourier transform plane is rotationally symmetric. Thus, we can make use of the Projection-Slice Theorem to get the corresponding 1D convolution kernel in the projection domain, i.e. the kernel we should use on the projections gathered at specific angles. This also means that the convolution kernel will be the same for all projection angles.
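A minimal sketch of one such filter pair, assuming Python with numpy and a simple ramp weighting (the function name and sampling parameters are illustrative assumptions only), could look as follows:

```python
import numpy as np

def ramp_filter_pair(n_s, delta_s):
    """One common choice of weighting: a ramp (Ram-Lak-type) filter.

    Returns the Fourier-domain weighting W_b(f) = |f| sampled on the FFT grid,
    and the equivalent spatial-domain convolution kernel w_b(s) (in FFT order).
    """
    freqs = np.fft.fftfreq(n_s, d=delta_s)     # frequency of each FFT bin
    W_b = np.abs(freqs)                        # multiplication/weighting in the Fourier domain
    w_b = np.real(np.fft.ifft(W_b))            # corresponding convolution kernel in s
    return W_b, w_b
```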
In the literature, several implementations of the filter can be found, e.g. Ram-Lak, Shepp-Logan, Cosine, Hann, and Hamming.

4. Parallel geometry for tomographic processing
Tomographic processing is generally based on standard geometries. This means that the mathematical algorithms presume a specific geometric arrangement of the detection lines in order to attain a desired precision and/or processing efficiency. The geometric arrangement may be selected to enable a definition of the projection values in a 2D sample space, e.g. to enable the above-mentioned filtering in one of the dimensions of the sample space before the back projection, as will be further explained below. In conventional tomography, the measurement system (i.e. the location of the incoupling points and/or outcoupling points) is controlled or set to yield the desired geometric arrangement of detection lines. Below follows a brief presentation of the parallel geometry, which is a standard geometry widely used in conventional tomography, e.g. in the medical field.
The parallel geometry is exemplified in Fig. 5. Here, the system measures projection values of a set of detection lines for a given angle φ_k. In Fig. 5, the set of detection lines D are indicated by dashed arrows, and the resulting projection is represented by the function g(φ_k, s). The measurement system is then rotated slightly around the origin of the x,y coordinate system in Fig. 5, to collect projection values for a new set of detection lines at this new projection angle. As shown by the dashed arrows, all detection lines are parallel to each other for each projection angle φ_k. The system generally measures projection values (line integrals) for angles spanning the range 0 ≤ φ < π.
Fig. 6A illustrates the detection lines for six different projection angles in a measurement coordinate system. Existing tomographic reconstruction techniques often make use of the Projection-Slice Theorem, either directly or indirectly, and typically require a uniform sampling of information. The uniformity of sampling may be assessed in a sample space, which is defined by dimensions that uniquely identify each detection line. There are a number of different ways to distinctly define a line; all of them will require two parameters. For a parallel geometry, the two-dimensional sample space is typically defined by the angle parameter φ and the distance parameter s, and the projection values are represented by g(φ, s). In Fig. 6A, each detection line defines a sampling point, and Fig. 6B illustrates the locations of these sampling points in the sample space. Clearly, the sampling points are positioned in a regular grid pattern. It should be noted that a true tomographic system typically uses many more projection angles and a denser set of detection lines D.
Below, the use of a parallel geometry in tomographic processing is further exemplified in relation to a known attenuation field shown in Fig. 7A, in which the right-end bar indicates the coding of gray levels to attenuation strength. Fig. 7B is a graph of the projection values as a function of distance s for the projection obtained at φ = π/6 in the attenuation field of Fig. 7A. Fig. 7C illustrates the sinogram formed by all projections collected from the attenuation field, where the different projections are arranged as vertical sequences of values. For reference, the projection shown in Fig. 7B is marked as a dashed line in Fig. 7C.
The filtering step, i.e. convolution, is now done with respect to the s variable, i.e. in the vertical direction in Fig. 7C. The filtering step results in a filtered sinogram v = w_b(s) * g(φ, s). As mentioned above, there are many different filter kernels that may be used in the filtering. Fig. 7D illustrates the central part of a discrete filter kernel w_b(s) that is used in this example. As shown, the absolute magnitude of the filter values quickly drops off from the center of the kernel (k=0). In many practical implementations, it is possible to use only the most central parts of the filter kernel, thereby decreasing the number of processing operations in the filtering step.
Since the filtering step is a convolution, it may be computationally more efficient to perform the filtering step in the Fourier domain. For each column of values in the φ-s-plane, a discrete 1D Fast Fourier transform is computed. Then, the thus-transformed values are multiplied by the 1D Fourier transform of the filter kernel. The filtered sinogram v is then obtained by taking the inverse Fourier transform of the result.
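A minimal sketch of this Fourier-domain filtering, assuming Python with numpy, a sinogram stored as a 2D array with one column per projection angle, and without the zero-padding refinements often used in practice, might read:

```python
import numpy as np

def filter_sinogram_fft(sinogram, kernel_fft):
    """Filter every projection (column = one angle phi) of the sinogram in the Fourier domain.

    sinogram   : 2D array of shape (n_s, n_angles), g(phi, s) with s varying along axis 0
    kernel_fft : 1D Fourier transform of the filter kernel, length n_s (e.g. a ramp |f|)
    """
    G = np.fft.fft(sinogram, axis=0)           # discrete 1D FFT of each column
    V = G * kernel_fft[:, None]                # multiply by the transformed kernel
    return np.real(np.fft.ifft(V, axis=0))     # filtered sinogram v(phi, s)

# Illustrative choice of kernel_fft, assuming n_s samples spaced by delta_s:
# kernel_fft = np.abs(np.fft.fftfreq(n_s, d=delta_s))
```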
Fig. 7E represents the filtered sinogram that is obtained by operating the filter kernel in Fig. 7D on the sinogram in Fig. 7C. For illustration purposes, Fig. 7E shows the absolute values of the filtered sinogram, with zero being set to white and the magnitude of the filtered values being represented by the amount of black.
The next step is to apply the back projection operator R#. Fundamental to the back projection operator is that a single position in the attenuation field is represented by a sine function in the sinogram. Thus, to reconstruct each individual attenuation value in the attenuation field, the back projection operator corresponds to a summation of the values of the filtered sinogram along the corresponding sine function. This can be expressed as
$$a(x_i) \approx \frac{\pi}{p} \sum_{j=1}^{p} v(\varphi_j, x_i \cdot \theta_j)$$
where θ_j = (cos φ_j, sin φ_j), p is the number of projection angles, and x_i = (x_i, y_i) is a point in the attenuation field (i.e. a location on the touch surface 1).
To illustrate this concept, Fig. 7E shows three sine curves σ1-σ3 (indicated by superimposed thick black lines) that correspond to three different positions in the attenuation field of Fig. 7A.
Since the location of a reconstructed attenuation value will not coincide exactly with all of the relevant detection lines, it may be necessary to perform linear interpolation with respect to the s variable where the sine curve crosses between two sampling points. The interpolation is exemplified in Fig. 7F, which is an enlarged view of Fig. 7E and in which x indicates the sampling points, each of which has a filtered projection value. The contribution to the back projection value for the sine curve σ1 from the illustrated small part of the φ-s-plane becomes:
$$(1 - z_{26}) \cdot (w * g)_{26,176} + z_{26} \cdot (w * g)_{26,177}$$
$$+ (1 - z_{27}) \cdot (w * g)_{27,175} + z_{27} \cdot (w * g)_{27,176}$$
$$+ (1 - z_{28}) \cdot (w * g)_{28,173} + z_{28} \cdot (w * g)_{28,174}$$

The weights z_26, z_27, z_28 in the linear interpolation are given by the normalized distance from the sine curve σ1 to the respective sampling point, i.e. 0 ≤ z ≤ 1.
By using linear interpolation in the back projection operator, the time complexity of the reconstruction process is O(n³), where n may indicate the number of incoupling and outcoupling points on one side of the touch surface, or the number of rows/columns of reconstruction points (see below).
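For a regular parallel-geometry sinogram as discussed in this chapter, the back projection with linear interpolation could be sketched as follows; this is a Python/numpy sketch in which the array layout and names are illustrative assumptions:

```python
import numpy as np

def back_project(filtered, angles, s_grid, points):
    """Unoptimized back projection with linear interpolation in the s direction.

    filtered : filtered sinogram (w_b * g), shape (n_s, n_angles)
    angles   : projection angles phi_j, shape (n_angles,)
    s_grid   : sample positions in s, shape (n_s,), assumed uniformly spaced
    points   : reconstruction points x_i = (x, y), shape (n_points, 2)
    """
    ds = s_grid[1] - s_grid[0]
    field = np.zeros(len(points))
    for j, phi in enumerate(angles):
        theta = np.array([np.cos(phi), np.sin(phi)])
        s = points @ theta                               # the sine curve s = x_i . theta_j
        idx = np.clip((s - s_grid[0]) / ds, 0, len(s_grid) - 2)   # clamp to the s grid
        lo = idx.astype(int)                             # lower neighbouring sample
        z = idx - lo                                     # interpolation weight, 0 <= z < 1
        field += (1 - z) * filtered[lo, j] + z * filtered[lo + 1, j]
    return field * np.pi / len(angles)                   # pi/p scaling of the sum above
```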
An alternative approach is to compute the filtered values at the crossing points by applying individual filtering kernels. The time complexity of such a reconstruction process is O(n⁴).
Fig. 7G shows the reconstructed attenuation field that is obtained by applying the back projection operator on the filtered sinogram in Fig. 7E. It should be noted that the filtering step may be important for the reconstruction to yield useful data. Fig. 7H shows the reconstructed attenuation field that is obtained when the filtering step is omitted.
The standard techniques for tomographic processing as described above presume a regular arrangement of the sampling points in the φ-s-plane, e.g. as exemplified in Fig. 6B. However, it is difficult if not impossible to design a touch-sensitive apparatus with a regular arrangement of sampling points, since it is non-trivial to position the incoupling and outcoupling points so as to reproduce a perfect parallel geometry of detection lines. Thus, compared to the desired regular arrangement in Fig. 6A, the detection lines typically form an irregular pattern on the touch surface, such as exemplified in Fig. 8A. The corresponding arrangement of sampling points (marked by x) in the sample space is also highly irregular, as illustrated in Fig. 8B. Two nearby sampling points in the sample space correspond to two detection lines that are close to each other on the touch surface and/or that have a small difference in projection angle (and may thus be overlapping, with a small mutual angle, in some part of the touch surface).
As a further example of irregular sampling points, Fig. 9 illustrates the sampling points in the φ-s-plane for the interleaved system shown in Fig. 2. In Fig. 9, the solid lines indicate the physical limits of the touch surface. It can be noted that the angle φ actually spans the range from 0 to 2π, since the incoupling and outcoupling points extend around the entire perimeter. However, a detection line is the same when rotated by π, and the projection values may thus be rearranged to fall within the range of 0 to π. However, this rearrangement is optional.
The inventors have realized that the standard techniques for tomographic processing cannot be used to reconstruct the attenuation field a(x) on the touch surface, at least not with adequate precision, due to the irregular sampling.
5. Use of tomographic processing for touch determination
In its various aspects, the invention relates to ways of re-designing tomographic techniques so as to accommodate irregular sampling, viz. such that the tomographic techniques use the same amount of information from all relevant parts of the sample space. In various embodiments, this is achieved by introducing an adjustment factor, p_k, which represents the local density of sampling points in the sample space. By clever use of the adjustment factor, it is possible to adapt existing tomographic techniques so as to enable reconstruction of the attenuation field for arbitrary patterns of detection lines on the touch surface, i.e. also including non-uniform arrangements of sampling points in the sample space.
Fig. 10A illustrates an embodiment of a method for reconstruction and touch data extraction in a touch-sensitive apparatus, such as the above-described FTIR system. The method involves a sequence of steps 22-26 that are repeatedly executed, typically by the signal processor 10 (Figs 1 and 3). In the context of this description, each sequence of steps 22-26 is denoted a sensing instance.
In a preparatory step 20, the signal processor obtains adjustment factors, and possibly other processing parameters (coefficients), to be used in the tomographic reconstruction. In one embodiment, the adjustment factors are pre-computed and stored in an electronic memory, and the signal processor retrieves the pre-computed adjustment factors from the memory. Each adjustment factor is computed to be representative of the local density of data samples in the sample space for a respective sampling point. This means that each detection line is associated with one or more adjustment factors. In a variant (not shown), the signal processor obtains the adjustment factors by intermittently re-computing or updating the adjustment factors, or a subset thereof, during execution of the method, e.g. every n:th sensing instance. The computation of adjustment factors will be further exemplified in Chapter 6.
Each sensing instance starts by a data collection step 22, in which measurement values are sampled from the light sensors 3 in the FTIR system, typically by sampling a value from each of the aforesaid sub-signals. The data collection step 22 results in one projection value for each detection line (sampling point). It may be noted that the data may, but need not, be collected for all available detection lines in the FTIR system. The data collection step 22 may also include pre-processing of the measurement values, e.g. filtering for noise reduction, conversion of measurement values into transmission values (or equivalently, attenuation values), conversion into logarithmic values, etc.
In a reconstruction step 24, an "attenuation field" across the touch surface is reconstructed by processing of the projection data from the data collection step 22. The attenuation field is a distribution of attenuation values across the touch surface (or a relevant part of the touch surface), i.e. an energy-related parameter. As used herein, "the attenuation field" and "attenuation values" may be given in terms of an absolute measure, such as light energy, or a relative measure, such as relative attenuation (e.g. the above-mentioned attenuation coefficient) or relative transmission. The reconstruction step operates a tomographic reconstruction algorithm on the projection data, where the tomographic reconstruction algorithm is designed to apply the adjustment factors to at least partly compensate for variations in the local density of sampling points in the sample space.
The tomographic processing may be based on any known algorithm for tomographic reconstruction. The tomographic processing will be further exemplified in Chapter 7 with respect to algorithms for Back Projection, algorithms based on Fourier transformation and algorithms based on Hough transformation.
The attenuation field may be reconstructed within one or more subareas of the touch surface. The subareas may be identified by analyzing intersections of detection lines across the touch surface, based on the above-mentioned projection signals. Such a technique for identifying subareas is further disclosed in WO2011/049513 which is incorporated herein by this reference.
In a subsequent extraction step 26, the reconstructed attenuation field is processed for identification of touch-related features and extraction of touch data. Any known technique may be used for isolating true (actual) touch points within the attenuation field. For example, ordinary blob detection and tracking techniques may be used for finding the actual touch points. In one embodiment, a threshold is first applied to the attenuation field, to remove noise. Any areas with attenuation values that exceed the threshold, may be further processed to find the center and shape by fitting for instance a two-dimensional second-order polynomial or a Gaussian bell shape to the attenuation values, or by finding the ellipse of inertia of the attenuation values. There are also numerous other techniques as is well known in the art, such as clustering algorithms, edge detection algorithms, etc. Any available touch data may be extracted, including but not limited to x,y coordinates, areas, shapes and/or pressure of the touch points.
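A minimal sketch of such an extraction step, assuming Python with numpy and scipy.ndimage and using simple thresholding plus connected-component labeling (one of many possible techniques mentioned above), could be:

```python
import numpy as np
from scipy import ndimage

def extract_touches(attenuation, threshold):
    """Minimal blob extraction: threshold the field, label connected areas,
    and report the centroid and size of each candidate touch point."""
    mask = attenuation > threshold                      # suppress reconstruction noise
    labels, n = ndimage.label(mask)                     # connected components = candidate touches
    touches = []
    for lab in range(1, n + 1):
        ys, xs = np.nonzero(labels == lab)
        weights = attenuation[ys, xs]
        cx = np.average(xs, weights=weights)            # attenuation-weighted centre
        cy = np.average(ys, weights=weights)
        touches.append({"x": cx, "y": cy, "area": len(xs), "strength": weights.sum()})
    return touches
```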
After step 26, the extracted touch data is output, and the process returns to the data collection step 22.
It is to be understood that one or more of steps 20-26 may be effected concurrently. For example, the data collection step 22 of a subsequent sensing instance may be initiated concurrently with step 24 or 26.
The touch data extraction process is typically executed by a data processing device (cf. signal processor 10 in Figs 1 and 3) which is connected to sample the measurement values from the light sensors 3 in the FTIR system. Fig. 10B shows an example of such a data processing device 10 for executing the process in Fig. 10A. In the illustrated example, the device 10 includes an input 200 for receiving the output signal. The device 10 further includes a parameter retrieval element (or means) 202 for retrieving the adjustment factors (or, depending on implementation, for computing/updating the adjustment factors), a data collection element (or means) 204 for processing the output signal to generate the above-mentioned set of projection values, a reconstruction element (or means) 206 for generating the reconstructed attenuation field by tomographic processing, and an output 210 for outputting the reconstructed attenuation field. In the example of Fig. 10B, the actual extraction of touch data is carried out by a separate device 10' which is connected to receive the attenuation field from the data processing device 10.
The data processing device 10 may be implemented by special-purpose software (or firmware) run on one or more general-purpose or special-purpose computing devices. In this context, it is to be understood that each "element" or "means" of such a computing device refers to a conceptual equivalent of a method step; there is not always a one-to-one correspondence between elements/means and particular pieces of hardware or software routines. One piece of hardware sometimes comprises different means/elements. For example, a processing unit serves as one element/means when executing one instruction, but serves as another element/means when executing another instruction. In addition, one element/means may be implemented by one instruction in some cases, but by a plurality of instructions in some other cases. Such a software controlled computing device may include one or more processing units, e.g. a CPU ("Central Processing Unit"), a DSP ("Digital Signal Processor"), an ASIC ("Application-Specific Integrated Circuit"), discrete analog and/or digital components, or some other programmable logical device, such as an FPGA ("Field Programmable Gate Array"). The data processing device 10 may further include a system memory and a system bus that couples various system components including the system memory to the processing unit. The system bus may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. The system memory may include computer storage media in the form of volatile and/or non-volatile memory such as read only memory (ROM), random access memory (RAM) and flash memory. The special-purpose software, and the adjustment factors, may be stored in the system memory, or on other removable/non-removable volatile/non-volatile computer storage media which is included in or accessible to the computing device, such as magnetic media, optical media, flash memory cards, digital tape, solid state RAM, solid state ROM, etc. The data processing device 10 may include one or more communication interfaces, such as a serial interface, a parallel interface, a USB interface, a wireless interface, a network adapter, etc, as well as one or more data acquisition devices, such as an A/D converter. The special-purpose software may be provided to the data processing device 10 on any suitable computer-readable medium, including a record medium, a read-only memory, or an electrical carrier signal.
6. Computation of adjustment factors
In this chapter we introduce the concept of "density of detection lines", which is a measure of the angular and spatial distribution of detection lines on the touch surface. Recalling that a detection line is equivalent to a sampling point in the sample space, the density of detection lines may be given by the density of sampling points (cf. Fig. 8B and Fig. 9). The local density of detection lines is used for computing an adjustment factor p_k for each individual detection line. The adjustment factor p_k may be a constant for each detection line or, in some examples, a function for each detection line. In the following examples, it should be noted that the adjustment factor p_k may be scaled to get appropriate scaling of the reconstructed attenuation field.
There are many different ways of generating a measure of the local density of sampling points, to be used for calculating the adjustment factors for the tomographic processing. Below, a few examples are listed.
• Use of the number of nearby sampling points, i.e. to measure the number of sampling points that fall within a distance from the detection line (sampling point) of interest in the sample space.
• Use of the average distance, i.e. to measure the average of the N smallest distances in the sample space between the detection line (sampling point) and its neighbors.
• Use of Voronoi areas, i.e. to obtain the extent of a Voronoi cell for each detection line (sampling point) in the sample space.
• Use of Delaunay triangles, i.e. to obtain the areas of the Delaunay triangles associated with the detection lines (sampling points) in the sample space.
• Use of interpolating basis functions, i.e. to assign a density-dependent two-dimensional basis function to each detection line (sampling point) in the sample space, and evaluate the interaction between the basis function and a number of reconstruction curves in the sample space, where each reconstruction curve corresponds to a spatial point on the touch surface. The reconstruction curve may e.g. be given by the aforesaid back projection operator (R#).
Each of these examples will now be described in further detail in separate sections 6.1-6.5. It should be noted that in these, and all other examples, the adjustment factors may be (and typically are) pre-computed and stored for retrieval during touch determination (cf. step 20 in Fig. 10A).
6.1 Number of nearby sampling points (p_k^N)
The local density for a specific detection line may be determined by finding the number of detection lines that fall within a given distance λ from the specific detection line. The distance is measured in the sample space, i.e. the φ-s-plane. To facilitate the definition of distance, it may be preferable to at least approximately normalize the dimensions (φ, s) of the sample space. For example, if the projection angle spans 0 ≤ φ < π, the distance s may be scaled to fall within the same range. The actual scaling typically depends on the size of the touch system, and theoretical recommendations are found in the literature.
Fig. 11A illustrates the determination of local density for two detection lines (sampling points) D1', D2'. The given distance (λ = 0.4) is marked by a dotted circle around the respective detection line D1', D2'. In this example, D1', D2' have 9 and 5, respectively, nearby detection lines (counting also D1' and D2', respectively). For comparison, Fig. 11B illustrates two groups of detection lines on the touch surface that fall within the respective dotted circle in Fig. 11A. The radii of the circles may e.g. be chosen to correspond to the expected height (with respect to the s dimension) of a touch. It should also be noted that the value of λ = 0.4 is merely given as an example, and that the given distance λ may be set to a smaller value in an actual touch system.
The adjustment factor is proportional to the inverse of the number of nearby detection lines, i.e. 1/9 and 1/5, respectively. This means that a lower weight will be given to information from several detection lines that represent almost the same information in the attenuation field. In this example, the adjustment factor is denoted p_k^N and is computed according to:
$$p_k^N = \frac{1}{\#\{j : \|(\varphi_j, s_j) - (\varphi_k, s_k)\| < \lambda\}}$$
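A minimal sketch of this density measure, assuming Python with scipy's cKDTree and already normalized sample-space coordinates (all names are illustrative), might read:

```python
import numpy as np
from scipy.spatial import cKDTree

def nearby_count_factors(phi, s, lam=0.4):
    """Adjustment factors p_k^N = 1 / (number of sampling points within distance lam).

    phi, s : coordinates of the sampling points in the (normalized) sample space
    lam    : neighbourhood radius in the sample space (0.4 is only the value of the example)
    """
    pts = np.column_stack([phi, s])
    tree = cKDTree(pts)
    counts = np.array([len(tree.query_ball_point(p, lam)) for p in pts])  # includes the point itself
    return 1.0 / counts
```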
6.2 Average distance (p_k^D)
The local density for a specific detection line may be determined by computing the distance to the N closest detection lines. Fig. 12 illustrates the determination of local density for two detection lines (sampling points) D1', D2'. The nearest neighbors are marked by stars (*) around the respective detection line D1', D2'. In this example, D1', D2' yield the average distances 0.27 and 0.39, respectively, for N = 9.
In this example, the adjustment factor is denoted p_k^D and is computed according to:
$$p_k^D = \frac{1}{N} \sum_{j \in N_k} \|(\varphi_k, s_k) - (\varphi_j, s_j)\|$$
where N_k is the set of the N detection lines closest to detection line k.
6.3 Voronoi areas (p_k^VA)
The adjustment factors may be computed based on the extent of the Voronoi cell (as measured in the sample space) of the detection line. Voronoi cells are obtained by defining the sampling points in the sample space as Voronoi sites in a Voronoi diagram. A Voronoi diagram is a well-known mathematical technique to decompose a metric space based on distances to a specified discrete set of objects in the space. Specifically, a site in the Voronoi diagram has a Voronoi cell which contains all points that are closer to the site than to any other site.
Fig. 13 illustrates the Voronoi diagram for the sample space in Fig. 8B. The detection lines (sampling points) D1', D2' have the areas 0.0434 and 0.1334, respectively.
It may be advantageous to normalize the adjustment factors such that the total area equals a given value, e.g. unity. Special care may need to be taken when defining Voronoi cells at the edge of the sample space, since these cells will have an infinite area unless a constraint is added for the size of these cells. When the sample space is π-periodic, it may be sufficient to only add constraint edges with respect to the s dimension, if the sample points are properly mirrored between φ = 0 and φ = π.
The adjustment factor is denoted p_k^VA and is computed according to: p_k^VA = voronoi_area((φ_k, s_k)).
Compared to p_k^N and p_k^D, the computation of p_k^VA obviates the need to set potentially arbitrary computation parameters, such as the distance λ in the sample space, the number N, and the definition of neighboring sampling points.
6.4 Delaunay triangles (p_k^DA)
The adjustment factors may be computed based on the extent of the Delaunay triangles (in the sample space) for the detection line. The Delaunay triangles are obtained by defining the sampling points as corners of a mesh of non-overlapping triangles and computing the triangles using the well-known Delaunay algorithm. To achieve triangles with reduced skewness, if deemed necessary, the dimensions of the sample space (φ, s) may be rescaled to essentially the same length before applying the Delaunay triangulation algorithm.
Fig. 14 illustrates the Delaunay triangulation for the sample space in Fig. 8B. The adjustment factor for a detection line is given by the total extent of all triangles that include the detection line. The detection lines (sampling points) D1', D2' have the areas 0.16 and 0.32, respectively.
In this example, the adjustment factor is denoted p_k^DA and is computed according to: p_k^DA = delaunay_area((φ_k, s_k)).
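A minimal sketch of this computation, assuming Python with scipy.spatial.Delaunay and normalized sample-space coordinates (names are illustrative only), could be:

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_area_factors(phi, s):
    """Adjustment factors p_k^DA: total area of the Delaunay triangles meeting at each point."""
    pts = np.column_stack([phi, s])
    tri = Delaunay(pts)
    areas = np.zeros(len(pts))
    for simplex in tri.simplices:                       # each simplex holds 3 point indices
        a, b, c = pts[simplex]
        area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))
        areas[simplex] += area                          # credit the triangle area to all three corners
    return areas
```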
6.5 Interpolating basis functions
By assigning a density-dependent basis function to each detection line (in the sample space) and evaluating the reconstruction algorithm for each basis function separately, it is possible to compute high precision adjustment factors. One major benefit of using basis functions is that they enable the use of higher order interpolation when computing the adjustment factor for each detection line. The following examples are all given for reconstruction algorithms that are based on the back projection operation (R#). In the following example, the basis functions are defined based on Delaunay triangles for each detection line (in the sample space), but it should be understood that any density-dependent basis function could be used.
6.5.1 Line integrals (p_{k,i}^{DA,1}, p_{k,i}^{VA,0})

Fig. 15 shows two basis functions defined based on Delaunay triangles (cf. Fig. 14) for two detection lines (sampling points) D1', D2'. The basis functions suitably have the same height (strength) at the sampling points, e.g. unity. Since the basis function is given by Delaunay triangles, the base of each basis function is automatically adjusted to the local density of sampling points. The adjustment factors for each detection line may be computed as the interaction between the interpolating basis function and the back projection operator. In the illustrated example, the basis functions are defined to be linearly interpolating, but other interpolations can be achieved, such as nearest neighbor interpolation, second order interpolation or higher, continuously differentiable interpolation, etc. It is also conceivable to define the basis functions based on the Voronoi cells of the sampling points (cf. Fig. 13), which will yield a zero order interpolation, i.e. a top hat function. As will be explained below, the use of basis functions often results in the adjustment factor for each detection line being a function instead of a single value.
In the following, a first order interpolating Delaunay triangle basis function is denoted b_k^{DA,1}.
Before exemplifying the computation of adjustment factors, it is to be recalled that a reconstruction point in the attenuation field corresponds to a reconstruction line (curve) in the sample space. As explained above (Chapter 4), the reconstruction line is a sine curve for the back projection operator. This is further illustrated in Figs 16A-16B, where the left-hand part indicates two different reconstruction points on the touch surface and the right-hand part illustrates the corresponding reconstruction lines σ1, σ2 in the sample space.
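A minimal Python/numpy sketch of this correspondence (with illustrative names) simply evaluates s(φ) = x_i · θ for a chosen reconstruction point:

```python
import numpy as np

def reconstruction_curve(x_i, y_i, phis):
    """Sine curve in the sample space corresponding to one reconstruction point:
    s(phi) = x_i * cos(phi) + y_i * sin(phi)."""
    return x_i * np.cos(phis) + y_i * np.sin(phis)

# e.g. the curve for a hypothetical point at (0.3, -0.2), evaluated over 0 <= phi < pi
curve = reconstruction_curve(0.3, -0.2, np.linspace(0, np.pi, 180, endpoint=False))
```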
The adjustment factors for a given detection line (sampling point) with respect to a reconstruction line may be computed by evaluating the line integral for the reconstruction line running through the basis function. The reconstruction line σ_i is defined by σ_i = (φ, x_i · θ), where θ = (cos φ, sin φ) and x_i = (x_i, y_i) is the reconstruction point on the touch surface that corresponds to the reconstruction line. Fig. 17A illustrates the interaction between the reconstruction line σ2 and the basis function for detection line (sampling point) D2'. Fig. 17B is a graph of the values of the basis function along the reconstruction line σ2 with respect to the φ dimension. The line integral, evaluated with respect to the φ dimension, is 0.135 in this example. It can be noted that the top value in Fig. 17B does not reach a value of unity since the reconstruction line passes at a distance from the sampling point D2'. In a variant, the line integral may, e.g., be evaluated with respect to the length of the reconstruction line σ2, i.e. in both dimensions s, φ. Fig. 17C illustrates the interaction between the reconstruction line σ2 and the basis function for another detection line (sampling point) D3', and Fig. 17D is a graph of the values of the basis function along the reconstruction line σ2 with respect to the φ dimension. The line integral, evaluated with respect to the φ dimension, is 0.095 in this example. It should be noted that the line integral is smaller for sampling point D3' than for sampling point D2', even though the reconstruction line σ2 lies closer to sampling point D3'. This illustrates how the use of basis functions compensates for the varying density of detection lines. Detection lines in a high density area will yield a basis function with a smaller base than detection lines in sparse areas. In this example, the adjustment factor is denoted p_{k,i}^{DA,1} and is computed according to:
$$p_{k,i}^{DA,1} = \int b_k^{DA,1}(\varphi, x_i \cdot \theta)\,d\varphi$$
In an alternative embodiment, the basis functions are defined based on the Voronoi cells of the sampling points. In such an example, the adjustment factor is denoted p_{k,i}^{VA,0} and is computed according to:
$$p_{k,i}^{VA,0} = \int b_k^{VA,0}(\varphi, x_i \cdot \theta)\,d\varphi$$
with b_k^{VA,0} being the zero-order interpolating Voronoi basis function.
It should be noted that Σ_k b_k^{DA,1}(φ, s) is set to a fixed value, e.g. 1, for all values of (φ, s) that fall within the valid region of the sample space. This condition should hold for all relevant basis functions.
6.5.2 Surface integrals (p_{k,i}^{w,DA,1}, p_{k,i}^{w,VA,0})
Below, the computation of an advanced adjustment factor will be exemplified, specifically an adjustment factor for a sampling point to be processed for tomographic reconstruction using back projection and filtering. In this case, the computation is not limited to evaluating line integrals for the interaction between the basis function and the reconstruction line. Instead, a two-dimensional (surface) integral is evaluated for this interaction. In the following examples, the basis function is given by Delaunay triangles and is defined to be linearly interpolating. It is to be understood that other types of basis functions may be used e.g. to achieve other interpolations, such as nearest neighbor interpolation, second order interpolation or higher, continuously differentiable interpolation, etc. Fig. 18A illustrates the interaction between a reconstruction line σ1 and the basis function for detection line (sampling point) D2'.
Fig. 18B illustrates a known one-dimensional filter w_b(Δs) for use in the filtering step prior to a back projection operation. It can be noted, however, that the 1D filter is defined in terms of the distance Δs from the reconstruction line with respect to the s dimension. The 1D filter may be defined as a continuous function of the distance Δs.
Fig. 18C illustrates the sample space with the basis function for sampling point D2' together with the reconstruction line σ1 of Fig. 18A, where the reconstruction line is associated with a ridge. The ridge includes the 1D filter of Fig. 18B, which has been reproduced to extend in the s dimension at plural locations along the reconstruction line σ1. Thus, the ridge is defined as a 2D function w_i(φ, s) which is computed as w_i(φ, s) = w_b(x_i · θ − s), where θ = (cos φ, sin φ) and x_i = (x_i, y_i). It should be noted that the 2D function w_i(φ, s) also defines negative valleys on both sides of the ridge.
In the back projection operation, the values of the sampling points should be weighted with the effect of the 1D filter. Fig. 18D illustrates how the 2D function, which includes the 1D filter, combines with the basis function for sampling point D2', i.e. the product of the 2D function and the basis function: w_i(φ, s) · b_k^{DA,1}(φ, s). As can be seen from Fig. 18D, some parts of the sampling point D2', via the basis function, will contribute positively and some parts negatively. The total adjustment factor for the contribution of the sampling point D2' to the reconstruction line σ1 in the back projection operation is given by the integral (sum) of the above result in Fig. 18D, which for D2' (i.e. for this particular sampling point and reconstruction line) turns out to be zero.
In this example, the adjustment factor is denoted p_{k,i}^{w,DA,1} and is computed according to:
$$p_{k,i}^{w,DA,1} = \iint w_i(\varphi, s)\, b_k^{DA,1}(\varphi, s)\,d\varphi\,ds$$
Conceptually, this equation can be understood to reflect the notion that each detection line (sampling point) is not limited only to the sampling point but has an extended influence in the sample space via the extent of the basis function. A single detection line will thus contribute to the values of the φ-s-plane in a region around the actual detection line in the sample space, the contribution being zero far away and having support, i.e. being greater than zero, only in a local neighborhood of the detection line. Higher density of detection lines (sampling points) in the sample space yields smaller support and lower density gives larger support. When a single adjustment factor p_{k,i}^{w,DA,1} is to be computed, all but the k:th detection line can be set to zero before the above integral (sum) is computed. The integration (summation) is done in both dimensions φ, s. Clearly, the adjustment factors account for variations in the local density of detection lines.
As noted above, the adjustment factor for detection line D2' is p_{k,i}^{w,DA,1} = 0. This value of the adjustment factor accounts not only for the interaction between the reconstruction line and the sampling point, but also for the influence of the 1D filter. This could be compared to the adjustment factor p_k^DA, which was computed to 0.32 for detection line D2' in Section 6.4 above. After adding the influence of the 1D filter (Δs = 1.5 for detection line D2'), this corresponds to w_b(1.5) · 0.32 = -0.0034. Thus, the use of a surface integral results in a different adjustment factor, which may be more suitable for certain implementations of the touch system. However, the choice of technique for calculating the adjustment factors is a tradeoff between computational complexity and precision of the reconstructed attenuation field, and any of the adjustment factors presented herein may find its use depending on the circumstances.
In an alternative embodiment, the basis functions are instead defined based on the Voronoi cells of the sampling points. In this example, the adjustment factor is denoted p_{k,i}^{w,VA,0} and is computed according to:
$$p_{k,i}^{w,VA,0} = \iint w_i(\varphi, s)\, b_k^{VA,0}(\varphi, s)\,d\varphi\,ds$$
7. Tomographic reconstruction using adjustment factors
There are numerous available techniques for reconstructing an attenuation field based on a set of projection values. The following description will focus on three main techniques, and embodiments thereof, namely Back Projection, Fourier Transformation and Hough Transformation. Common to all embodiments is that existing reconstruction techniques are re-designed, by the use of the adjustment factors, to operate on data samples that have an irregular or non-uniform arrangement in the sample space. Thus, the reconstruction step (cf. 24 in Fig. 10A) involves evaluating a reconstruction function F(p_k, g(φ_k, s_k)), where p_k is the adjustment factor (function or constant) for each data sample and g(φ_k, s_k) is the value of each data sample. Typically, the data samples are given by the measured projection values for the detection lines of the touch-sensitive apparatus. However, it is conceivable that the data samples include synthetic projection values which are generated from the projection values, e.g. by interpolation, to supplement or replace the measured projection values. For each embodiment, the application of adjustment factors will be discussed, and reference will be made to the different variants of adjustment factors discussed in sections 6.1-6.5. Furthermore, the processing efficiency of the embodiments will be compared using Landau notation as a function of n, with n being the number of incoupling and outcoupling points on one side of the touch surface. In some embodiments, a reconstructed attenuation field containing n² reconstruction points will be presented. The reconstructed attenuation field is calculated based on projection values obtained for the reference image in Fig. 19. The reference image is thus formed by five touch objects 7 of different size and attenuation strength that are distributed on the touch surface 1. For reasons of clarity, Fig. 19 also shows the emitters 2 and sensors 3 in relation to the reference image. The distribution of sampling points in the φ-s-plane for this system is given in Fig. 9.
7.1 Unfiltered back projection
As an alternative to filtered back projection, it is possible to do an unfiltered back projection and do the filtering afterwards. In this case, the filtering process involves applying a two-dimensional sharpening filter. If the two-dimensional sharpening filter is applied in the spatial domain, the time complexity of the unfiltered back projection is O(n⁴). If the filtering is done in the Fourier domain, the time complexity may be reduced.
The unfiltered back projection involves evaluating reconstruction lines in the sample space, using adjustment factors computed by means of interpolating basis functions, as described above in section 6.5. As mentioned in that section, use of interpolating basis functions results in a correction for the local density of sampling points.
In this embodiment, the reconstruction function F(p_k, g(φ_k, s_k)) is given by a first sub-function that performs the back projection at desired reconstruction points in the attenuation field:

$$a(x_i) = \sum_k p_{k,i} \cdot g(\varphi_k, s_k)$$

and a second sub-function that applies the 2D sharpening filter on the reconstructed attenuation field.
In this embodiment, the adjustment factor p_k may be any adjustment factor calculated based on an interpolating basis function, such as p_{k,i}^{DA,1} or p_{k,i}^{VA,0}. It can also be noted that since several adjustment factors p_{k,i} are zero, the sum needs only be computed for a relevant subset of the sampling points, namely over all k where p_{k,i} > TH, where TH is a threshold value, e.g. 0.
The time complexity of the back projection operator is O(n³), assuming that there are O(n) non-zero adjustment factors for each reconstruction point.
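Assuming that the non-zero adjustment factors have been pre-computed and stored per reconstruction point (cf. step 20), the first sub-function could be sketched as follows in Python; the data layout and names are illustrative assumptions:

```python
import numpy as np

def unfiltered_back_projection(g, factors):
    """First sub-function: a(x_i) = sum_k p_{k,i} * g_k, using pre-computed factors.

    g       : projection values, one per detection line k
    factors : for each reconstruction point i, a list of (k, p_ki) pairs with p_ki > TH,
              pre-computed from the interpolating basis functions
    """
    field = np.zeros(len(factors))
    for i, contribs in enumerate(factors):
        for k, p_ki in contribs:
            field[i] += p_ki * g[k]
    return field        # a 2D sharpening filter would then be applied to this field
```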
7.2 Filtered back projection, first embodiment
In a first embodiment of filtered back projection, a reconstruction line in the sample space (cf. Fig. 16) is evaluated by computing a contribution value of each sampling point to the reconstruction line and summing the contribution values. The contribution value for a sampling point is given by the product of its projection value, its adjustment factor and a filter value for the sampling point.
In this embodiment, the reconstruction function F(p_k, g(φ_k, s_k)) is given by

$$a(x_i) = \sum_k p_k \cdot w_b(s_k - x_i \cdot \theta_k) \cdot g(\varphi_k, s_k)$$
The time complexity of the reconstruction function is O(n⁴). In this function, g(φ_k, s_k) is the projection value of detection line k, (φ_k, s_k) is the position of the detection line k in the sample space, and w_b(Δs) is the 1D filter given as a function of the distance Δs to the reconstruction line in the s dimension. The distance is computed as Δs = s_k − x_i · θ_k, where x_i is the reconstruction point (in the attenuation field) and θ_k = (cos φ_k, sin φ_k). The operation of the reconstruction function is illustrated in Fig. 20 for four sampling points D1', D2', D4', D5' in the φ-s-plane. In this particular example, Fig. 20 also indicates the extent of Voronoi cells for all sampling points. The distance Δs for each sampling point is computed as the distance in the vertical direction from the sampling point to the reconstruction line σ2.
In this embodiment, the adjustment factor p_k may be any adjustment factor that directly reflects the separation of sampling points in the sample space, such as p_k^VA, p_k^DA, p_k^N and p_k^D.
There are many different 1D filters w_b(Δs) that may be used. The 1D filter may be defined as a continuous function of the distance Δs. Figs 21A and 21B illustrate 1D filters presented in the aforesaid books by Kak & Slaney and Natterer, respectively. The bandwidth of these filters is preferably adapted to the signal bandwidth when the reconstruction function is evaluated.
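A minimal sketch of this reconstruction function, assuming Python with numpy and a continuous 1D filter supplied as a callable (all names are illustrative assumptions), could be:

```python
import numpy as np

def fbp_irregular(g, phi, s, p, points, w_b):
    """Filtered back projection over irregular sampling points (first embodiment).

    g, phi, s : projection value and sample-space position of every detection line k
    p         : density adjustment factors p_k (e.g. Voronoi or Delaunay areas)
    points    : reconstruction points x_i, shape (n_points, 2)
    w_b       : continuous 1D filter, callable on an array of distances delta_s
    """
    theta = np.column_stack([np.cos(phi), np.sin(phi)])   # theta_k for every detection line
    field = np.zeros(len(points))
    for i, x_i in enumerate(points):
        delta_s = s - theta @ x_i                          # distance to the reconstruction line
        field[i] = np.sum(p * w_b(delta_s) * g)            # sum of weighted, filtered contributions
    return field
```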
Fig. 22A illustrates the reconstructed attenuation field that is obtained by applying the reconstruction function, using p_k^VA, to the projection values obtained for the attenuation field in Fig. 19, and Fig. 22B illustrates a corresponding attenuation field obtained if the adjustment factors are omitted from the reconstruction function. Clearly, the adjustment factors greatly improve the quality of the reconstruction.
7.3 Filtered back projection, second embodiment
In a second embodiment of filtered back projection, a reconstruction line in the sample space is evaluated by extending the influence of the sampling points by the use of interpolating basis functions and by including the 1D filter in the reconstruction line.
In this embodiment, the reconstruction function F(p_k, g(φ_k, s_k)) is given by

$$a(x_i) = \sum_k p_{k,i} \cdot g(\varphi_k, s_k)$$
In this embodiment, the adjustment factor p_k may be any adjustment factor originating from a surface integral through interpolating basis functions in the sample space, such as p_{k,i}^{w,DA,1} or p_{k,i}^{w,VA,0}.
The time complexity of the reconstruction function is O(n⁴). It can be noted that the time for executing the reconstruction (cf. step 24 in Fig. 10A) is largely independent of the selected interpolating basis function and 1D filter, given that adjustment factors are typically pre-computed and stored in memory. The time spent for this pre-computation may however be different for different basis functions.
Fig. 23A illustrates an exemplifying definition of Delaunay triangles for the sample space in Fig. 9. Based on this definition and the 1D filter in Fig. 21B, adjustment factors p_{k,i}^{w,DA,1} have been computed and applied in the reconstruction function. Fig. 23B illustrates the resulting attenuation field when the reconstruction function is operated on the projection values obtained for the attenuation field in Fig. 19. Clearly, the adjustment factors provide good reconstruction quality.
7.4 Filtered back projection, third embodiment
In a third embodiment of filtered back projection, the filtering step is performed locally around each individual sampling point in the sample space using the 1D filter. The filtering is operated on synthetic projection values at synthetic sampling points which are generated from the projection values of the sampling points, e.g. by interpolation. The synthetic projection values are estimated signal values that are generated around each projection value at given locations in the s dimension. Fig. 24A illustrates a few actual sampling points (stars) and associated synthetic sampling points (circles) in a subset of the φ-s-plane of Fig. 9. Fig. 24A also illustrates Delaunay triangles (dotted lines) that may be used for interpolation of the synthetic sampling points. The actual sampling points are thus placed at the corners of a mesh of non-overlapping triangles, and the values of the synthetic sampling points are e.g. linearly interpolated in the triangles. This interpolation, and variants thereof, will be described in further detail below.
Furthermore, the third embodiment evaluates a reconstruction line in the sample space by computing a line integral through interpolating basis functions arranged at the actual sampling points.
In this embodiment, the reconstruction function F(p_k, g(φ_k, s_k)) is given by a first sub-function that creates 2M synthetic sampling points g(φ_{k,m}, s_{k,m}) with respect to the s dimension, and a second sub-function that applies a discrete 1D filter (in the s dimension) on the collection of sampling points (actual and synthetic) to calculate a filtered value for each actual sampling point:
$$v(\varphi_k, s_k) = \sum_{m=-M}^{M} w_b(s_k - s_{k,m}) \cdot g(\varphi_{k,m}, s_{k,m}),$$

and a third sub-function that performs the back projection at desired reconstruction points in the attenuation field, based on the filtered values:

$$a(x_i) = \sum_k p_{k,i} \cdot v(\varphi_k, s_k).$$

In the above expressions, g(φ_{k,0}, s_{k,0}) ≡ g(φ_k, s_k), i.e. an actual sampling point.
The adjustment factor p_k may be any adjustment factor originating from a line integral through interpolating basis functions in the sample space, such as p_{k,i}^{DA,1} or p_{k,i}^{VA,0}. It can also be noted that since several adjustment factors p_{k,i} are zero, the sum needs only be computed for a relevant subset of the sampling points, namely over all k where p_{k,i} > TH, where TH is a threshold value, e.g. 0.
Any suitable 1D filter may be used, e.g. the one shown in Fig. 7D. However, it may be advantageous to adapt the bandwidth of the filter to each actual sampling point.
Fig. 24B illustrates the resulting attenuation field when the reconstruction function is operated on the projection values obtained for the attenuation field in Fig. 19, using adjustment factors ρk and the 1D filter in Fig. 21B. Clearly, the adjustment factors provide good reconstruction quality.
The time complexity of the reconstruction function is O(n³). This is based on the fact that the number of sampling points is O(n²), that the first sub-function computes O(M·n²) synthetic projection values, with M being O(n), that the second sub-function accesses each synthetic sampling point once, and that the third sub-function accesses O(n) filtered values to generate each of O(n²) reconstruction points, giving a total time complexity of O(n³).
As noted above, the generation of synthetic projection values may be achieved by interpolating the original sampling points. The objective of the interpolation is to find an interpolation function that can produce interpolated values at specific synthetic sampling points in the sample space given a set of measured projection values at the actual sampling points. Many different interpolating functions can be used for this purpose, i.e. to interpolate data points on a two-dimensional grid. Input to such an interpolation function is the actual sampling points in the sample space as well as the measured projection value for each actual sampling point. Most interpolating functions involve a linear operation on the measured projection values. The coefficients in the linear operation are given by the known locations of the actual sampling points and the synthetic sampling point in the sample space. The linear operator may be pre-computed and then applied on the measured projection values in each sensing instance (cf.
iteration of steps 22-26 in Fig. 10A). Some non-limiting examples of suitable interpolation functions include Delaunay triangulation, and other types of interpolation using triangle grids, bicubic interpolation, e.g. using spline curves or Bezier surfaces, Sinc/Lanczos filtering, nearest-neighbor interpolation, and weighted average interpolation. Embodiments of these and other techniques for generating synthetic sampling points (interpolation points) are further disclosed in Applicant's International application No. PCT/SE2011/050520, which was filed on April 28, 2011 and which is incorporated herein by this reference.
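As an illustration of how such synthetic projection values could be generated and locally filtered, the following Python sketch uses SciPy's Delaunay-based linear interpolation; the spacing ds, the half-length M and the kernel w are illustrative parameters, not values defined by the application.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

def filtered_values_via_synthetic_points(phi, s, g, ds, M, w):
    """Locally filter each actual sampling point (phi[k], s[k]) having
    projection value g[k], using 2*M synthetic points spaced ds apart in the
    s dimension and a 1D filter kernel w of length 2*M + 1."""
    # Delaunay-based linear interpolation over the phi-s sample space;
    # the triangulation (the "linear operator") is built once and reused.
    interp = LinearNDInterpolator(np.column_stack([phi, s]), g, fill_value=0.0)

    offsets = ds * np.arange(-M, M + 1)          # locations in the s dimension
    v = np.zeros_like(g, dtype=float)
    for dm, wm in zip(offsets, w):
        if dm == 0.0:
            v += wm * g                          # the actual sampling point itself
        else:
            v += wm * interp(phi, s + dm)        # synthetic (interpolated) values
    return v
```

The filtered values v would then be back-projected as described above, i.e. each reconstruction point is obtained by summing the filtered values scaled by the corresponding adjustment factors.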
7.5 Filtered back projection, fourth embodiment
In a fourth embodiment of filtered back projection, the filtering step is performed locally around each individual sampling point in the sample space using a 1D filter. In contrast to the third embodiment, the filtering is not operated on synthetic projection values, but on the projection values of adjacent actual sampling points that are forced into the 1D filter. The projection values of adjacent sampling points thus form estimated signal values around each projection value in the s dimension. In the following example, the 1D filter is a (−1, 2, −1) kernel which is operated with respect to the s dimension.
Furthermore, the fourth embodiment evaluates a reconstruction line in the sample space by computing a line integral through interpolating basis functions arranged at the actual sampling points. Fig. 25A illustrates all sampling points in the φ-s-plane for the touch-sensitive apparatus in Fig. 2. For the purpose of illustrating the use of neighboring sampling points, selected sampling points are indicated by circles, and their adjacent sampling points in the s dimension are indicated by crosses. The filtered value for each selected sampling point is calculated, using the above filter kernel, as two times the projection value of the selected sampling point minus the projection values of the adjacent sampling points.
The adjacent sampling points are generally selected as a best match to the extent of the filter kernel in the s dimension of the sample space.
For the interleaved arrangement (Fig. 2), the adjacent sampling points for each sampling point may instead be selected based on geometric criteria. In the interleaved arrangement, the adjacent sampling points are detection lines that extend from the next incoupling point (emitter) and the next outcoupling point (detector) in both directions away from the incoupling and outcoupling points that define the detection line of the selected sampling point. This principle is illustrated in Fig. 25B, where the detection lines of the selected sampling points in Fig. 25A are represented by solid lines, and the detection lines of the adjacent sampling points are represented by dashed lines. Thus, a filtered value for a detection line (sampling point) may be computed by identifying the incoupling and outcoupling points that give rise to the detection line, and then finding the neighboring incoupling and outcoupling points.
In this embodiment, the reconstruction function F(ρk, g(φk, sk)) is given by a first sub-function that finds the adjacent sampling points, g(φ_{k′}, s_{k′}) and g(φ_{k″}, s_{k″}), a second sub-function that applies the filter kernel on the relevant sampling points:

v(φk, sk) = −g(φ_{k′}, s_{k′}) + 2·g(φk, sk) − g(φ_{k″}, s_{k″}),

and a third sub-function that performs the back projection at desired reconstruction points xi in the attenuation field, based on the filtered values:

(ℛ#v)(xi) = Σ_k ρ_{k,i} · v(φk, sk).

The adjustment factor ρk may be any adjustment factor originating from a line integral through interpolating basis functions in the sample space. It can also be noted that since several adjustment factors ρ_{k,i} are zero, the sum only needs to be computed for a relevant subset of the sampling points, namely over all k where ρ_{k,i} > TH, where TH is a threshold value, e.g. 0. It is also conceivable to add an overall scaling factor to the back projection operator to achieve a desired reconstruction result.
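A minimal Python sketch of this embodiment is given below; the neighbor indices, the dense array layout of the adjustment factors and the function name are illustrative assumptions rather than details taken from the application.

```python
import numpy as np

def filtered_back_projection_121(g, prev_idx, next_idx, rho, TH=0.0):
    """Sketch of the fourth embodiment: (-1, 2, -1) filtering on adjacent
    actual sampling points, followed by back projection weighted by the
    adjustment factors.

    g        : projection values, shape (K,)
    prev_idx : index of the adjacent sampling point on one side, shape (K,)
    next_idx : index of the adjacent sampling point on the other side, shape (K,)
               (assumed precomputed from the detection-line geometry)
    rho      : adjustment factors, shape (I, K); rho[i, k] couples
               reconstruction point i with sampling point k
    """
    # Second sub-function: apply the (-1, 2, -1) kernel per sampling point.
    v = -g[prev_idx] + 2.0 * g - g[next_idx]

    # Third sub-function: back projection, summing only over sampling points
    # whose adjustment factor exceeds the threshold TH.
    a = np.zeros(rho.shape[0])
    for i in range(rho.shape[0]):
        mask = rho[i] > TH
        a[i] = np.dot(rho[i, mask], v[mask])
    return a
```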
It should be realized that other filter kernels may be used, although for practical reasons it may be preferable to limit the kernel to 3-15 values.
Fig. 25C illustrates the resulting attenuation field when the reconstruction function is operated on the projection values obtained for the attenuation field in Fig. 19, using adjustment factors and the above-described filter kernel. Clearly, the adjustment factors provide adequate reconstruction quality.

7.6 Fourier transformation techniques
It is also possible to use so-called NUFFT algorithms in the reconstruction step (24 in Fig. 10A). A NUFFT (Non-Uniform FFT) algorithm is an adaptation of a regular discrete Fourier transformation function, e.g. an FFT, to handle non-uniform input data and/or output data while retaining the "fast" property of the FFT algorithms, thus allowing for time complexities of O(n²·log(n)). Specifically, the reconstruction step may utilize a so-called NED (Non-Equispaced Data) algorithm which is modified by adjustment factors to account for the varying density of sampling points in the sample space.
To simplify the following presentation, the theory of NED algorithms in general, and of the inventive modification in particular, has been separated into Chapter 8.
In one embodiment, the reconstruction function F(ρk, g(φk, sk)) is given by four consecutive sub-functions. A first sub-function operates a 2D forward NED FFT on the projection values to generate the Fourier transform of g(φk, sk): ĝ(ψ, r) ← ρk · g(φk, sk).
Thereby, the Fourier transform is computed with respect to both dimensions φ, s. The forward NED FFT applies adjustment factors to compensate for varying density of sampling points in the sample space. The evaluation of the first sub-function typically operates on pre-computed adjustment factors and other pre-computed coefficients of the forward NED FFT (see Chapter 8).
A second sub-function operates a regular 1D inverse Fourier transform (e.g. an FFT) with respect to the φ dimension: ĝ(φ_j, r) ← ĝ(ψ, r). This is done since the Projection-Slice Theorem is valid only for Fourier transforms with respect to the s dimension, i.e. one-dimensional transforms of the different projections. The second sub-function results in a polar coordinate representation, possibly oversampled, of the Fourier transform of the attenuation field to be reconstructed.
A third sub-function, which is optional, applies a smoothing filter F(r/Q) to ĝ(φ_j, r) and may also apply a scaling factor ρ_r to scale the result to an appropriate level: f̂(φ_j, r) = F(r/Q) · ρ_r · ĝ(φ_j, r). A fourth sub-function operates a 2D inverse NED FFT on the polar representation f̂(φ_j, r) to generate the attenuation field: f(x, y) ← f̂(φ_j, r).
The inverse NED FFT may or may not be designed in correspondence with the forward NED FFT.
The time complexity of the reconstruction function is O(n²·log(n)).
In this embodiment, the adjustment factor ρk may be any one of the previously defined adjustment factors, including two further adjustment factors that are obtained similarly to the surface-integral based adjustment factors, i.e. via surface integrals through interpolating basis functions (section 6.5). However, instead of reproducing a 1D filter, w_b, along the reconstruction line, the interpolating function φ is reproduced along the reconstruction line. The interpolating function φ is defined in Chapter 8.
7.7 Hough transformation techniques
The Hough transform is a method for extracting features. It is mainly used in image analysis and computer vision. The main idea is to find geometric objects within a certain class of geometric shapes by a voting procedure. The voting procedure is carried out in the parameter space of the representation of the geometric objects. Generally, the objects are found as local maxima in a so-called accumulator space.
The original algorithm, which finds straight lines in a digital image, is outlined below, followed by ways to modify and use the Hough transform for finding touches in the sinogram directly, without filtering and back projection.
As noted in Chapter 4 above, any line in a two-dimensional (image) plane can be represented by an angle, γ, and the smallest distance to the origin, r. This means that any given line can be defined as a point in the γ-r-plane. The inverse is also true; any point (pixel) in the image plane can be represented by a curve in the γ-r-plane. This curve represents all the different lines that the point can be a part of. This is the foundation of the Hough transform. For each point in the image plane, the value (weight) of the point is added to the corresponding curve in the γ-r-plane (in an accumulator image). When all points in the image plane have been processed, the lines present in the image can be found as local maxima in the accumulator image. The position of a local maximum identifies the values of the two parameters γ, r for the line. The presence of several local maxima would indicate that there are several different lines in the image.
The line detection algorithm cannot be directly applied for reconstructing the attenuation field based on the measured projection values. However, a modification of the Hough transform can be used for finding the sine curves (i.e. reconstruction lines) present in the sinogram. It can be noted that all sine curves have the same periodicity, 2π, and that a sine curve can be represented by an amplitude, A, and a phase, φ. Hence, for all sampling points in the sinogram, the weight of the sampling point is added to all corresponding sine curves in the accumulator image. The weight of the sampling point is given by the projection value modified by the adjustment factor ρk, such that the projection value is compensated for the local density of sampling points. In this embodiment, the adjustment factor ρk may be any adjustment factor that directly reflects the separation of sampling points in the sample space, e.g. a factor based on the number of neighboring sampling points, on the average distance to neighboring sampling points, or on the extent of a Voronoi cell or a set of Delaunay triangles around the sampling point. When all sampling points have been processed, touches are found as local maxima in the accumulator image.
The modified Hough transform algorithm has a time complexity of O(n³), since O(n) values are added to the accumulator image for each detection line, the number of detection lines being O(n²). The process for finding local maxima has a lower time complexity.
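As an illustration, the voting procedure could be sketched in Python as follows; the cosine parameterization s = A·cos(φ − β) of the sine curves and the accumulator resolution are assumptions made for this sketch.

```python
import numpy as np

def hough_sinogram(phi, s, g, rho, n_phase=180, n_amp=100, amp_max=1.0):
    """Sketch of the modified Hough transform: every sampling point
    (phi[k], s[k]) votes, with the density-compensated weight rho[k]*g[k],
    for all sine curves s = A*cos(phi - beta) passing through it.
    Touches then appear as local maxima in the (amplitude, phase) accumulator."""
    acc = np.zeros((n_amp, n_phase))
    betas = np.linspace(0.0, 2 * np.pi, n_phase, endpoint=False)

    for k in range(len(g)):
        weight = rho[k] * g[k]
        c = np.cos(phi[k] - betas)
        with np.errstate(divide="ignore"):
            A = np.where(np.abs(c) > 1e-6, s[k] / c, np.inf)
        valid = (A >= 0.0) & (A < amp_max)           # one amplitude bin per phase bin
        a_idx = (A[valid] / amp_max * n_amp).astype(int)
        acc[a_idx, np.where(valid)[0]] += weight
    return acc
```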
8. NUFFT theory and modification of NED FFT
NUFFT algorithms go by many different names: Non-Uniform FFT (NUFFT/NFFT), Generalized FFT (GFFT), Non-uniform DFT (NDFT), Non-Equispaced Result FFT (NER), Non-Equispaced Data FFT (NED), and Unequally Spaced FFT (USFFT). There are many different variants of NUFFT algorithms; some use least-squares, some use iterative solutions and some use Fourier expansion (Shannon's sampling theorem) to re-map the non-uniform data points to an equispaced grid, which is amenable to fast algorithms. Below, the underlying theory will be presented based on the sampling theorem. Further details are found in the article "Non-Equispaced Fast Fourier Transforms with Applications to Tomography", by Karsten Fourmont, in Journal of Fourier Analysis and Applications, Volume 9, Issue 5, 2003, pages 431-450.
The Fourier transform of non-equispaced data, zk = z(xk), where xk ∈ [−N/2, N/2], can be evaluated at equispaced grid points l = −N/2, …, (N/2)−1. The equation is a forward NED (Non-Equispaced Data) function that can be written as:

ẑ_l = Σ_k zk · e^{−2πi·l·xk/N}.
The goal is now to express every single term of the summation above as a sum, i.e.

e^{−2πi·l·xk/N} = Σ_m T(xk, m) · e^{−2πi·l·m/N},

using a suitable weight function T(xk, m).
As will be shown, this equation can be adapted to use standard FFT algorithm implementations. Consider Shannon's theorem for a band-limited function f with bandwidth < π:

f(x) = Σ_{m∈ℤ} sinc(π(x − m)) · f(m).

If f is chosen to represent a single function in a Fourier expansion, the above equation becomes:

e^{i·x·ω} = Σ_{m∈ℤ} sinc(π(x − m)) · e^{i·m·ω}, |ω| < π.
The exponential on the right-hand side resembles a component of an FFT function. However, the sinc function may not decay fast enough to allow for rapid computation. To achieve rapid computation, we may give up the requirement of finding a band-limited function, and instead use an essentially band-limited function. In this way, a function may be found that has a rapid decay while also providing a rapid decay in its Fourier transform. For example, an interpolating function φ with these properties may be used (its explicit expression is given as an equation image in the original publication).
To get better resolution in the frequency domain, oversampling may be introduced, given by a factor c. The oversampling factor can be as low as 3/2, but for the examples herein c = 2. We also require that φ(ω) has compact support and is continuously differentiable in [−a, a] and is non-zero in [−π/c, π/c]. The Fourier transform of φ(ω) is preferably as small as possible outside of [−M, M] since this will make the summation fast and exact.
The best solution for φ(ω) is given by the prolate spheroidal wave functions. These functions are, however, difficult to use, and instead Kaiser-Bessel window functions may be used as an approximate solution (the explicit window expressions are given as an equation image in the original publication). The first function, φ(x), is taken to be zero when x² ≥ M². I₀ is the modified Bessel function of the first kind. By choosing ω = 2π·l/(c·N) and x = c·xk, we get efficient equations for the NED algorithm.
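For illustration, one common Kaiser-Bessel window of this type may be sketched in Python as follows; the exact parameterization used in the application is not reproduced here, and the shape parameter b is a free choice of this sketch.

```python
import numpy as np
from scipy.special import i0

def kaiser_bessel(x, M, b):
    """One common Kaiser-Bessel window (an assumption, not necessarily the
    exact form of the application): zero where x**2 >= M**2, otherwise
    I0(b*sqrt(M**2 - x**2)), with I0 the modified Bessel function of the
    first kind, order zero."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    inside = x**2 < M**2
    out[inside] = i0(b * np.sqrt(M**2 - x[inside]**2))
    return out
```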
It should be noted that inverse NED algorithms perform the same steps as the forward NED algorithm but use an ordinary IFFT instead.
8.1 1D NED algorithm, modified with adjustment factors
Below, we give a practical implementation of a NED algorithm for one-dimensional transforms. The NED equation is given by:

ẑ_l = Σ_{k=1}^{K} ρk · zk · e^{−2πi·l·xk/N}, for l = −N/2, …, N/2−1.

The NED equation includes an adjustment factor ρk, which compensates for the varying density of sampling points. The adjustment factor will be discussed in more detail below.
The above NED equation may be modified to utilize regular FFT implementations. If it is assumed that φ, φ̂ and zk are zero outside their areas of definition, it is now possible to convert the NED equation to:

ẑ_l = (1/φ̂_l) · Σ_k Σ_{m=−M}^{M} ϑ_{k,m} · zk · e^{−2πi·l·(μk+m)/(c·N)},

where we have introduced the following notations:

φ̂_l = √(2π) · φ̂(2π·l/(c·N)), for l = −N/2, …, N/2−1,
μk = round(c·xk), for k = 1, …, K,
ϑ_{k,m} = (1/√(2π)) · ρk · φ(c·xk − (μk + m)), for k = 1, …, K and m = −M, …, M.
To make the algorithm as fast as possible, most of the coefficients above may be pre-computed.
The equation can be rewritten, by introducing a new index q = μk + m:

ẑ_l = (1/φ̂_l) · Σ_{q=−cN/2}^{cN/2−1} u_q · e^{−2πi·l·q/(c·N)},

for q = −cN/2, …, cN/2−1, and where

u_q = Σ_k ϑ_{k,q−μk} · zk.

Non-zero terms of u_q occur only for |q + t·c·N − μk| ≤ M for some integer t, which means that each u_q is the sum of all non-equispaced zk, multiplied with their respective adjustment factor, within distance ≤ M, with the distance computed modulo c·N.
It should be clear from the above rewritten equation that an ordinary FFT may be used for solving the NED problem.
It should be noted that μk is the nearest equispaced sampling point in the FFT input. The input for the NED FFT comprises the projection values of the non-equispaced sampling points zk, the sampling points xk, the oversampling factor c (giving a total length c·N suitable for FFT), the interpolation length M, and the coefficients φ̂_l, μk and ϑ_{k,m}, which may be pre-computed.
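To illustrate the structure of the modified forward NED FFT (spreading with adjustment factors, ordinary FFT, de-apodization), a minimal Python sketch is given below. The window functions phi and phi_hat are assumed to be supplied as vectorized callables (e.g. a Kaiser-Bessel pair), the √(2π) normalization constants are omitted, and no coefficients are pre-computed; the sketch is thus a functional outline rather than the optimized, pre-computed implementation described above.

```python
import numpy as np

def forward_ned_fft_1d(z, x, rho, N, phi, phi_hat, c=2, M=6):
    """Approximate zhat_l = sum_k rho_k * z_k * exp(-2j*pi*l*x_k/N)
    for l = -N/2 ... N/2-1, where x_k in [-N/2, N/2] are non-equispaced."""
    K, cN = len(z), c * N
    u = np.zeros(cN, dtype=complex)

    # Spreading: each non-equispaced sample contributes to the 2M+1 nearest
    # oversampled grid points, weighted by rho_k and the window phi
    # (these weights correspond to the theta coefficients above).
    mu = np.rint(c * x).astype(int)
    m = np.arange(-M, M + 1)
    for k in range(K):
        q = (mu[k] + m) % cN                         # grid index, modulo c*N
        u[q] += rho[k] * z[k] * phi(c * x[k] - (mu[k] + m))

    # Ordinary FFT on the oversampled grid, then keep l = -N/2 ... N/2-1
    # and de-apodize by dividing with phi_hat.
    U = np.fft.fft(u)
    ls = np.arange(-N // 2, N // 2)
    return U[ls % cN] / phi_hat(2 * np.pi * ls / cN)
```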
8.2 2D NED algorithm, modified with adjustment factors
The above 1D NED algorithm is easily extendable to more dimensions. In two dimensions, the NED problem may be formulated as

ẑ_{l,n} = Σ_{k=1}^{K} ρk · zk · e^{−2πi·(l·xk + n·yk)/N}, for l, n = −N/2, …, N/2−1.

It should be noted that the (xk, yk) values are not necessarily a tensor product of two coordinates and therefore cannot be written, in the general form, as two indices, i.e. one for each dimension. The weighing factor is, on the other hand, chosen as a tensor product, φ(x, y) = φ(x)·φ(y). To make the 2D NED algorithm as fast as possible, most coefficients may be pre-computed:

φ̂_{l,n} = √(2π) · φ̂(2π·l/(c·N)) · φ̂(2π·n/(c·N)), for l, n = −N/2, …, N/2−1,
μk^x = round(c·xk), for k = 1, …, K,
μk^y = round(c·yk), for k = 1, …, K,
ϑ_{k,mx,my} = (1/√(2π)) · ρk · φ(c·xk − (μk^x + mx)) · φ(c·yk − (μk^y + my)), for k = 1, …, K and mx, my = −M, …, M.
The execution of the 2D NED algorithm thus comprises the steps:

1. Compute the u_{qx,qy} values. Start by setting all u_{qx,qy} = 0, then loop over mx = −M, …, M, my = −M, …, M, and k = 1, …, K, and in this way build the u_q values: u_{μk^x+mx, μk^y+my} = u_{μk^x+mx, μk^y+my} + zk · ϑ_{k,mx,my}.

2. Compute û_{l,n} ← u_{qx,qy}, using an ordinary FFT on input length c·N.

3. Scale the result: ẑ_{l,n} = û_{l,n} / φ̂_{l,n}.
It should be noted that different M may be used for the two directions; this is also true for c and N.
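The three steps may be illustrated by the following Python sketch, under the same assumptions as the 1D sketch above (callable phi and phi_hat, normalization constants omitted, and equal M, c and N in both directions).

```python
import numpy as np

def forward_ned_fft_2d(z, x, y, rho, N, phi, phi_hat, c=2, M=6):
    """Sketch of the 2D forward NED FFT with adjustment factors."""
    cN = c * N
    u = np.zeros((cN, cN), dtype=complex)
    mux = np.rint(c * x).astype(int)
    muy = np.rint(c * y).astype(int)
    m = np.arange(-M, M + 1)

    # Step 1: accumulate u_{qx,qy} from all non-equispaced samples, using
    # the tensor-product window and the adjustment factor rho_k.
    for k in range(len(z)):
        wx = phi(c * x[k] - (mux[k] + m))
        wy = phi(c * y[k] - (muy[k] + m))
        qx = (mux[k] + m) % cN
        qy = (muy[k] + m) % cN
        u[np.ix_(qx, qy)] += rho[k] * z[k] * np.outer(wx, wy)

    # Step 2: ordinary 2D FFT on input length cN x cN.
    U = np.fft.fft2(u)

    # Step 3: keep l, n = -N/2 ... N/2-1 and scale by 1/phi_hat.
    ls = np.arange(-N // 2, N // 2)
    scale = np.outer(phi_hat(2 * np.pi * ls / cN), phi_hat(2 * np.pi * ls / cN))
    return U[np.ix_(ls % cN, ls % cN)] / scale
```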
8.3 Adjustment factors in NED algorithms
The need for using adjustment factors in the NED algorithms above becomes apparent when the density of sampling points xk is very unevenly distributed. It can be noted that u_q is defined as the sum of the neighboring xk (within distance M) multiplied by the φ function. For the sake of argument, let φ = 1. Then, the sum will depend on the number of sampling points that contribute to a particular u_q value. Consider, for example, if the function zk = 1, ∀k, is sampled twice for u_0 and once for u_1. This would make the resulting u_q values differ when they in fact should be identical. This serious artifact is overcome by scaling the ϑ_{k,m} coefficients by the adjustment factor ρk. As explained in section 7.6, the adjustment factor ρk may be any adjustment factor that directly reflects the separation of sampling points in the sample space. As also explained in section 7.6, the adjustment factor may instead be computed as the product of the interpolation basis function for a given sampling point with the two-dimensional extent of the interpolating function φ. This renders a further set of adjustment factors, which are also extendible into two dimensions.
8.4 Notes on FFT implementations
In this chapter, the Fourier transform is defined as:

ẑ_l = Σ_{k=−N/2}^{N/2−1} e^{−2πi·l·k/N} · zk.

Many efficient implementations of the FFT algorithm are instead defined as:

ẑ_l = Σ_{k=0}^{N−1} e^{−2πi·l·k/N} · zk.
It is possible to utilize existing FFT implementations by swapping the first and last parts of the data before and after the FFT. Another way is to modulate the input by the sequence 1, −1, 1, −1, … and then modulate the output by the same sequence. The sequence is actually the Nyquist frequency e^{−πi·k}.
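Both variants can be illustrated with NumPy as follows; the input array is assumed to be stored with the k = −N/2 sample first, and the global phase of the modulation variant equals one when N is a multiple of 4.

```python
import numpy as np

def centered_fft_swap(z):
    """Centered DFT zhat_l = sum_{k=-N/2}^{N/2-1} z_k e^{-2j*pi*l*k/N} via the
    'swap first and last parts' approach; z[0] holds the k = -N/2 sample and
    the returned array starts at the l = -N/2 bin."""
    return np.fft.fftshift(np.fft.fft(np.fft.ifftshift(z)))

def centered_fft_modulate(z):
    """Same result via modulation with the Nyquist sequence 1, -1, 1, -1, ...
    applied to both input and output (the leading phase factor is 1 when
    N is a multiple of 4)."""
    N = len(z)
    sign = (-1.0) ** np.arange(N)
    return np.exp(-1j * np.pi * N / 2) * sign * np.fft.fft(sign * z)
```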
It is also possible to utilize special FFT implementations that only handle real input for the Fourier transform and real output for the inverse Fourier transform.
It may also be necessary to take special care with the multiplication constants in the Fourier transforms of the particular FFT implementation, i.e. whether the multiplication by 1/√(2π) is done symmetrically or is absent.
9. Concluding remarks
The invention has mainly been described above with reference to a few embodiments. However, as is readily appreciated by a person skilled in the art, other embodiments than the ones disclosed above are equally possible within the scope and spirit of the invention, which is defined and limited only by the appended patent claims. For example, although the detection lines have been represented as sampling points in the φ-s-plane, it should be realized that any other parameter representation of the detection lines can be used. For example, the detection lines can be represented in a β-α-plane, as is used in a fan geometry, which is a standard geometry widely used in conventional tomography, e.g. in the medical field. The fan geometry is exemplified in Fig. 26, in which an emitter emits rays in many directions, and sensors are placed to measure the received energy from this single emitter on a number of detection lines D, illustrated by dashed lines in Fig. 26. Thus, the measurement system collects projection values for a set of detection lines D extending from the emitter when located at angle β₁. In the illustrated example, each detection line D is defined by the angular location β of the emitter with respect to a reference angle (β=0 coinciding with the x-axis), and the angle α of the detection line D with respect to a reference line (in this example, a line going from the emitter through the origin). The measurement system is then rotated slightly (δβ) around the origin of the x,y coordinate system in Fig. 26, to collect a new set of projection values for this new angular location. Thus, each detection line can be represented by a sampling point in a sample space defined by the angular emitter location parameter β and the angular direction parameter α. All of the above-described adjustment factors and reconstruction steps are equally applicable to such a sample space. However, the 1D filter in the filtered back projection algorithm is applied so as to extend in the α dimension, and the back projection operator is different from the one used in the above-described parallel geometry. Suitable filters and operators are found in the literature. With respect to the use of a β-α-plane, Fig. 27 exemplifies a technique for assigning values to the parameters α and β for the detection lines of a touch-sensitive apparatus. The apparatus is circumscribed by a fictitious circle C which may or may not be centered at the origin of the x,y coordinate system (Fig. 2) of the apparatus. The emitters 2 and sensors 3 define detection lines (not shown) across the touch surface 1. To assign values of the parameters α and β for each detection line, the intersection between the detection line and the circle C is taken to define the β value, whereas the α value of each detection line is given by the inclination angle of the detection line with respect to the above-mentioned reference line.
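For illustration, one possible way of assigning β and α values from the geometry of a detection line could be sketched as follows; this is a hypothetical construction consistent with Figs 26-27 (circle centered at the origin, reference line through the origin), not a definition taken verbatim from the application.

```python
import numpy as np

def beta_alpha_for_detection_line(p_in, p_out, R):
    """Assign fan-geometry parameters (beta, alpha) to a detection line from
    incoupling point p_in to outcoupling point p_out (2-element arrays),
    using a fictitious circle of radius R centered at the origin."""
    p_in, p_out = np.asarray(p_in, float), np.asarray(p_out, float)
    d = (p_out - p_in) / np.linalg.norm(p_out - p_in)

    # Intersection of the (extended) detection line with the circle on the
    # incoupling side: smaller root t of |p_in + t*d| = R, assuming that
    # p_in lies inside the circle.
    b = np.dot(p_in, d)
    t = -b - np.sqrt(b * b - (np.dot(p_in, p_in) - R * R))
    p_c = p_in + t * d

    beta = np.arctan2(p_c[1], p_c[0])                   # angular location on C
    ref = -p_c / np.linalg.norm(p_c)                    # reference line through the origin
    alpha = np.arctan2(ref[0] * d[1] - ref[1] * d[0],   # signed inclination of the
                       np.dot(ref, d))                  # detection line vs the reference
    return beta, alpha
```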
It is also to be understood that the reconstructed attenuation field may be subjected to post-processing before the touch data extraction (step 26 in Fig. 10A). Such post-processing may involve different types of filtering, for noise removal and/or image enhancement.
Furthermore, the reconstructed attenuation field need not represent the distribution of attenuation coefficient values within the touch surface, but could instead represent the distribution of energy, relative transmission, or any other relevant entity derivable by processing of projection values given by the output signal of the sensors. Thus, the projection values ("data samples") may represent measured energy, differential energy (e.g. given by a measured energy value subtracted by a background energy value for each detection line), relative attenuation, relative transmission, a logarithmic attenuation, etc. The person skilled in the art realizes that there are other ways of generating projection values based on the output signal. For example, each individual projection signal included in the output signal may be subjected to a high-pass filtering in the time domain, whereby the thus-filtered projection signals represent background-compensated energy and can be sampled for generation of projection values.
Furthermore, all the above embodiments, examples, variants and alternatives given with respect to an FTIR system are equally applicable to a touch-sensitive apparatus that operates by transmission of other energy than light. In one example, the touch surface may be implemented as an electrically conductive panel, the emitters and sensors may be electrodes that couple electric currents into and out of the panel, and the output signal may be indicative of the resistance/impedance of the panel on the individual detection lines. In another example, the touch surface may include a material acting as a dielectric, the emitters and sensors may be electrodes, and the output signal may be indicative of the capacitance of the panel on the individual detection lines. In yet another example, the touch surface may include a material acting as a vibration conducting medium, the emitters may be vibration generators (e.g. acoustic or piezoelectric transducers), and the sensors may be vibration sensors (e.g. acoustic or piezoelectric sensors).

Claims

1. A method of enabling touch determination based on an output signal from a touch-sensitive apparatus (100), the touch-sensitive apparatus (100) comprising a panel (4) configured to conduct signals from a plurality of peripheral incoupling points to a plurality of peripheral outcoupling points, thereby defining detection lines (Dj) that extend across a surface portion (1) of the panel (4) between pairs of incoupling and outcoupling points, at least one signal generator (2) coupled to the incoupling points to generate the signals, and at least one signal detector (3) coupled to the outcoupling points to generate the output signal, the method comprising:
processing the output signal to generate a set of data samples, wherein each data sample is indicative of detected energy on one of the detection lines (Dj) and is defined by a signal value and first and second dimension values (s, φ; α, β) in a two-dimensional sample space, wherein the first and second dimension values (s, φ; α, β) define the location of the detection line (Dj) on the surface portion (1), and wherein the data samples are non-uniformly arranged in the sample space;
obtaining adjustment factors (ρk) for the set of data samples, wherein each adjustment factor (ρk) is representative of the local density of data samples in the sample space for a respective data sample; and
processing the set of the data samples by tomographic reconstruction, while applying the adjustment factors (ρk), to generate data indicative of a reconstructed distribution of an energy-related parameter within at least part of the surface portion (1).
2. The method of claim 1, wherein the adjustment factor (ρk) for a given data sample is calculated to represent the number of data samples within a region around the given data sample in the sample space.
3. The method of claim 1, wherein the adjustment factor (ρk) for a given data sample is calculated to represent an average of a set of smallest distances between the given data sample and neighboring data samples in the sample space.
4. The method of claim 1, wherein the adjustment factor (ρk) for a given data sample is calculated to represent an extent of a Voronoi cell or a set of Delaunay triangles in the sample space for the given data sample.
5. The method of claim 1, wherein the reconstructed distribution comprises spatial data points, each spatial data point having a unique location on the surface portion (1) and corresponding to a predetermined curve in the sample space, and wherein the adjustment factor (ρk) for a given data sample is calculated, for each spatial data point in a set of spatial data points, to represent the interaction between the predetermined curve of the spatial data point and a two-dimensional basis function located at the given data sample, wherein the basis function is given an extent in the sample space that is dependent on the local density.
6. The method of claim 5, wherein the interaction is calculated by evaluating a line integral of the basis function, along the predetermined curve.
7. The method of claim 5 or 6, wherein the step of obtaining comprises: obtaining, for each spatial data point, a set of adjustment factors associated with a relevant set of data samples.
8. The method of claim 7, wherein the step of processing the set of data samples comprises: reconstructing each spatial data point by: scaling the signal value of each data sample in the relevant set of data samples by its corresponding adjustment factor (ρk) and summing the thus-scaled signal values.
9. The method of claim 8, wherein the predetermined curve is designed to include the shape of a predetermined one-dimensional filter function which extends in the first dimension (s; α) of the sample space and which is centered on and reproduced at plural locations along the curve.
10. The method of any one of claims 5-8, wherein the step of processing the output signal comprises: obtaining a measurement value for each detection line and applying a filter function to generate a filtered signal value for each measurement value, wherein the filtered signal values form said signal values of the data samples.
11. The method of claim 10, wherein the filter function is a predetermined one-dimensional filter function which is applied in the first dimension (s; α) of the sample space.
12. The method of claim 10 or 11, wherein the step of applying the filter function comprises: obtaining estimated signal values around each measurement value in the first dimension (s; α), and operating the filter function on the measurement value and the estimated signal values.
13. The method of claim 12, wherein the filtered signal value is generated as a weighted summation of the measurement values and the estimated signal values based on the filter function.
14. The method of claim 13, wherein the estimated signal values are obtained as measurement values of other detection lines, said other detection lines being selected as a best match to the extent of the filter function in the first dimension (s; α).
15. The method of claim 13, wherein the estimated signal values are generated at predetermined locations around the measurement value in the sample space.
16. The method of claim 15, wherein the estimated signal values are generated by interpolation in the sample space based on the measurement values.
17. The method of claim 16, wherein each estimated signal value is generated by interpolation of measurement values of neighboring data samples in the sample space.
18. The method of claim 16 or 17, wherein the step of processing the output signal further comprises: obtaining a predetermined two-dimensional interpolation function with nodes corresponding to the data samples, and calculating the estimated signal values according to the interpolation function and based on the measurement values of the data samples.
19. The method of any one of claims 1-4, wherein the reconstructed distribution comprises spatial data points, each spatial data point having a unique location on the surface portion (1) and corresponding to a predetermined curve in the sample space, and wherein the step of processing the set of data samples comprises: generating filtered signal values for the data samples by scaling the signal value of each data sample by a weight given by a predetermined filter function based on the distance of the data sample from the curve in the first dimension (s; α), and evaluating each spatial data point by: scaling the filtered signal value by the adjustment factor (ρk) of the corresponding data sample and summing the thus-scaled filtered signal values.
20. The method of any one of claims 1-4, wherein the step of processing the set of data samples comprises: calculating Fourier transformation data (g) for the data samples with respect to the first dimension (s; α) only, and generating said data indicative of the reconstructed distribution by operating a two-dimensional inverse Fourier transform on the Fourier transformation data (g), wherein the adjustment factors (ρk) are applied in the step of calculating Fourier transformation data.
21. The method of claim 20, wherein the step of calculating the Fourier transformation data (g) comprises: transforming the data samples to a Fourier domain to produce uniformly arranged Fourier-transformed data samples with respect to the first and second dimensions (s, φ; α, β), and transforming the Fourier-transformed data samples back to the sample space with respect to the second dimension (φ; β) only.
22. The method of any preceding claim, wherein the first dimension value is a distance (s) of the detection line (Dj) in the plane of the panel (4) from a predetermined origin, and the second dimension value is a rotation angle (φ) of the detection line (Dj) in the plane of the panel (4).
23. The method of any one of claims 1-21, wherein the first dimension value is a rotation angle (α) of the detection line (Dj) in the plane of the panel (4), and the second dimension value is an angular location (β) of the incoupling or outcoupling point of the detection line (Dj).
24. A computer program product comprising computer code which, when executed on a data-processing system, is adapted to carry out the method of any preceding claim.
25. A device for enabling touch determination based on an output signal from a touch-sensitive apparatus (100), the touch-sensitive apparatus (100) comprising a panel (4) configured to conduct signals from a plurality of peripheral incoupling points to a plurality of peripheral outcoupling points, thereby defining detection lines (Dj) that extend across a surface portion (1) of the panel (4) between pairs of incoupling and outcoupling points, signal generating means (2) coupled to the incoupling points to generate the signals, and signal detecting means (3) coupled to the outcoupling points to generate the output signal, said device comprising:
means (204) for processing the output signal to generate a set of data samples, wherein each data sample is indicative of detected energy on one of the detection lines (Dj) and is defined by a signal value and first and second dimension values (s, φ; β, α) in a two-dimensional sample space, wherein the first and second dimension values (s, φ; β, α) define the location of the detection line (Dj) on the surface portion (1), and wherein the data samples are non-uniformly arranged in the sample space;
means (202) for obtaining adjustment factors (ρk) for the set of data samples, wherein each adjustment factor (ρk) is representative of the local density of data samples in the sample space for a respective data sample; and
means (206) for processing the set of the data samples by tomographic reconstruction, while applying the adjustment factors (ρk), to generate data indicative of a reconstructed distribution of an energy-related parameter within at least part of the surface portion (1).
26. A touch-sensitive apparatus, comprising:
a panel (4) configured to conduct signals from a plurality of peripheral incoupling points to a plurality of peripheral outcoupling points, thereby defining detection lines (Dj) that extend across a surface portion (1) of the panel (4) between pairs of incoupling and outcoupling points;
means (2, 12) for generating the signals at the incoupling points;
means (3) for generating an output signal based on detected signals at the outcoupling points; and
the device (10) for enabling touch determination according to claim 25.
27. A touch-sensitive apparatus, comprising:
a panel (4) configured to conduct signals from a plurality of peripheral incoupling points to a plurality of peripheral outcoupling points, thereby defining detection lines (Dj) that extend across a surface portion (1) of the panel (4) between pairs of incoupling and outcoupling points;
at least one signal generator (2, 12) coupled to the incoupling points to generate the signals;
at least one signal detector (3) coupled to the outcoupling points to generate an output signal; and
a signal processor (10) connected to receive the output signal and configured to: process the output signal to generate a set of data samples, wherein each data sample is indicative of detected energy on one of the detection lines (Dj) and is defined by a signal value and first and second dimension values (s, φ; β, α) in a two-dimensional sample space, wherein the first and second dimension values (s, φ; β, α) define the location of the detection line (Dj) on the surface portion (1), and wherein the data samples are non-uniformly arranged in the sample space;
obtain adjustment factors (ρk) for the set of data samples, wherein each adjustment factor (ρk) is representative of the local density of data samples in the sample space for a respective data sample; and
process the set of the data samples by tomographic reconstruction, while applying the adjustment factors (ρk), to generate data indicative of a reconstructed distribution of an energy-related parameter within at least part of the surface portion (1).
PCT/SE2011/051201 2010-10-11 2011-10-07 Touch determination by tomographic reconstruction WO2012050510A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US13/824,026 US9411444B2 (en) 2010-10-11 2011-10-07 Touch determination by tomographic reconstruction
EP11832837.6A EP2628068A4 (en) 2010-10-11 2011-10-07 Touch determination by tomographic reconstruction

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US39176410P 2010-10-11 2010-10-11
SE1051061-8 2010-10-11
SE1051061 2010-10-11
US61/391,764 2010-10-11

Publications (1)

Publication Number Publication Date
WO2012050510A1 true WO2012050510A1 (en) 2012-04-19

Family

ID=45939056

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SE2011/051201 WO2012050510A1 (en) 2010-10-11 2011-10-07 Touch determination by tomographic reconstruction

Country Status (3)

Country Link
US (1) US9411444B2 (en)
EP (1) EP2628068A4 (en)
WO (1) WO2012050510A1 (en)

Cited By (46)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013055282A2 (en) 2011-10-11 2013-04-18 Flatfrog Laboratories Ab Improved multi-touch detection in a touch system
WO2013089622A2 (en) 2011-12-16 2013-06-20 Flatfrog Laboratories Ab Tracking objects on a touch surface
WO2013133756A1 (en) * 2012-03-09 2013-09-12 Flatfrog Laboratories Ab Efficient tomographic processing for touch determination
WO2013165305A2 (en) 2012-05-02 2013-11-07 Flatfrog Laboratories Ab Object detection in touch systems
WO2013165306A2 (en) 2012-05-02 2013-11-07 Flatfrog Laboratories Ab Object detection in touch systems
EP2706443A1 (en) 2012-09-11 2014-03-12 FlatFrog Laboratories AB Touch force estimation in a projection-type touch-sensing apparatus based on frustrated total internal reflection
WO2014168567A1 (en) * 2013-04-11 2014-10-16 Flatfrog Laboratories Ab Tomographic processing for touch detection
US8890849B2 (en) 2011-09-27 2014-11-18 Flatfrog Laboratories Ab Image reconstruction for touch determination
US8982084B2 (en) 2011-12-16 2015-03-17 Flatfrog Laboratories Ab Tracking objects on a touch surface
US9274645B2 (en) 2010-12-15 2016-03-01 Flatfrog Laboratories Ab Touch determination with signal enhancement
US9588619B2 (en) 2012-01-31 2017-03-07 Flatfrog Laboratories Ab Performance monitoring and correction in a touch-sensitive apparatus
US9613436B1 (en) 2013-12-23 2017-04-04 Sensing Electromagnetic Plus Corp. Optimization methods for feature detection
US9626040B2 (en) 2012-05-23 2017-04-18 Flatfrog Laboratories Ab Touch-sensitive apparatus with improved spatial resolution
US9639210B2 (en) 2011-12-22 2017-05-02 Flatfrog Laboratories Ab Touch determination with interaction compensation
US9678602B2 (en) 2012-05-23 2017-06-13 Flatfrog Laboratories Ab Touch-sensitive apparatus with improved spatial resolution
US9785287B2 (en) 2012-12-17 2017-10-10 Flatfrog Laboratories Ab Optical coupling in touch-sensing systems
US9857917B2 (en) 2012-12-17 2018-01-02 Flatfrog Laboratories Ab Optical coupling of light into touch-sensing systems
US9857916B2 (en) 2012-07-24 2018-01-02 Flatfrog Laboratories Ab Optical coupling in touch-sensing systems using diffusively transmitting element
US9864470B2 (en) 2014-05-30 2018-01-09 Flatfrog Laboratories Ab Enhanced interaction touch system
US9874978B2 (en) 2013-07-12 2018-01-23 Flatfrog Laboratories Ab Partial detect mode
US9927920B2 (en) 2011-12-16 2018-03-27 Flatfrog Laboratories Ab Tracking objects on a touch surface
US10126882B2 (en) 2014-01-16 2018-11-13 Flatfrog Laboratories Ab TIR-based optical touch systems of projection-type
US10146376B2 (en) 2014-01-16 2018-12-04 Flatfrog Laboratories Ab Light coupling in TIR-based optical touch systems
US10152176B2 (en) 2013-11-22 2018-12-11 Flatfrog Laboratories Ab Touch sensitive apparatus with improved spatial resolution
US10161886B2 (en) 2014-06-27 2018-12-25 Flatfrog Laboratories Ab Detection of surface contamination
US10168835B2 (en) 2012-05-23 2019-01-01 Flatfrog Laboratories Ab Spatial resolution in touch displays
US10268319B2 (en) 2012-12-17 2019-04-23 Flatfrog Laboratories Ab Edge-coupled touch-sensitive apparatus
US10282035B2 (en) 2016-12-07 2019-05-07 Flatfrog Laboratories Ab Touch device
US10318074B2 (en) 2015-01-30 2019-06-11 Flatfrog Laboratories Ab Touch-sensing OLED display with tilted emitters
US10365768B2 (en) 2012-12-20 2019-07-30 Flatfrog Laboratories Ab TIR-based optical touch systems of projection-type
US10401546B2 (en) 2015-03-02 2019-09-03 Flatfrog Laboratories Ab Optical component for light coupling
EP3537269A1 (en) 2015-02-09 2019-09-11 FlatFrog Laboratories AB Optical touch system
US10437389B2 (en) 2017-03-28 2019-10-08 Flatfrog Laboratories Ab Touch sensing apparatus and method for assembly
US10474249B2 (en) 2008-12-05 2019-11-12 Flatfrog Laboratories Ab Touch sensing apparatus and method of operating the same
US10481737B2 (en) 2017-03-22 2019-11-19 Flatfrog Laboratories Ab Pen differentiation for touch display
US10761657B2 (en) 2016-11-24 2020-09-01 Flatfrog Laboratories Ab Automatic optimisation of touch signal
US11182023B2 (en) 2015-01-28 2021-11-23 Flatfrog Laboratories Ab Dynamic touch quarantine frames
US11256371B2 (en) 2017-09-01 2022-02-22 Flatfrog Laboratories Ab Optical component
US11301089B2 (en) 2015-12-09 2022-04-12 Flatfrog Laboratories Ab Stylus identification
US11474644B2 (en) 2017-02-06 2022-10-18 Flatfrog Laboratories Ab Optical coupling in touch-sensing systems
US11567610B2 (en) 2018-03-05 2023-01-31 Flatfrog Laboratories Ab Detection line broadening
US11709568B2 (en) 2020-02-25 2023-07-25 Promethean Limited Convex interactive touch displays and related systems and methods
US11893189B2 (en) 2020-02-10 2024-02-06 Flatfrog Laboratories Ab Touch-sensing apparatus
US11943563B2 (en) 2019-01-25 2024-03-26 FlatFrog Laboratories, AB Videoconferencing terminal and method of operating the same
US12056316B2 (en) 2019-11-25 2024-08-06 Flatfrog Laboratories Ab Touch-sensing apparatus
US12055969B2 (en) 2018-10-20 2024-08-06 Flatfrog Laboratories Ab Frame for a touch-sensitive device and tool therefor

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201203052A (en) * 2010-05-03 2012-01-16 Flatfrog Lab Ab Touch determination by tomographic reconstruction
KR101260341B1 (en) * 2011-07-01 2013-05-06 주식회사 알엔디플러스 Apparatus for sensing multi-touch on touch screen apparatus
US9250794B2 (en) * 2012-01-23 2016-02-02 Victor Manuel SUAREZ ROVERE Method and apparatus for time-varying tomographic touch imaging and interactive system using same
KR101372423B1 (en) * 2012-03-26 2014-03-10 주식회사 알엔디플러스 Multi-touch on touch screen apparatus
CN105283744B (en) * 2013-06-05 2018-05-18 Ev 集团 E·索尔纳有限责任公司 To determine the measuring device and method of pressure map
CL2016002047A1 (en) * 2016-08-12 2017-03-17 Oculus Machina S P A A method to perform element detection by segmentation within an orderly sequence of digital data.
EP3529013A4 (en) 2016-10-03 2020-07-01 Carnegie Mellon University Touch-sensing system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7065234B2 (en) * 2004-02-23 2006-06-20 General Electric Company Scatter and beam hardening correction in computed tomography applications
WO2007112742A1 (en) * 2006-03-30 2007-10-11 Flatfrog Laboratories Ab A system and a method of determining a position of a scattering/reflecting element on the surface of a radiation transmissive element
US20090153519A1 (en) * 2007-12-17 2009-06-18 Suarez Rovere Victor Manuel Method and apparatus for tomographic touch imaging and interactive system using same
US20100039405A1 (en) * 2008-08-13 2010-02-18 Au Optronics Corp. Projective Capacitive Touch Apparatus, and Method for Identifying Distinctive Positions

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4891829A (en) * 1986-11-19 1990-01-02 Exxon Research And Engineering Company Method and apparatus for utilizing an electro-optic detector in a microtomography system
US5345490A (en) 1991-06-28 1994-09-06 General Electric Company Method and apparatus for converting computed tomography (CT) data into finite element models
US6972753B1 (en) 1998-10-02 2005-12-06 Semiconductor Energy Laboratory Co., Ltd. Touch panel, display device provided with touch panel and electronic equipment provided with display device
US7432893B2 (en) 2003-06-14 2008-10-07 Massachusetts Institute Of Technology Input device based on frustrated total internal reflection
GB2409304B (en) * 2003-12-19 2007-11-14 Westerngeco Ltd Processing geophysical data
US7702142B2 (en) * 2004-11-15 2010-04-20 Hologic, Inc. Matching geometry generation and display of mammograms and tomosynthesis images
US8599140B2 (en) 2004-11-17 2013-12-03 International Business Machines Corporation Providing a frustrated total internal reflection touch interface
US7916144B2 (en) * 2005-07-13 2011-03-29 Siemens Medical Solutions Usa, Inc. High speed image reconstruction for k-space trajectory data using graphic processing unit (GPU)
US8847924B2 (en) 2005-10-03 2014-09-30 Hewlett-Packard Development Company, L.P. Reflecting light
US20090220136A1 (en) * 2006-02-03 2009-09-03 University Of Florida Research Foundation Image Guidance System for Deep Brain Stimulation
RU2444788C2 (en) * 2007-06-01 2012-03-10 Эксонмобил Апстрим Рисерч Компани Generation of constrained voronoi grid in plane
EP2212763A4 (en) 2007-10-10 2012-06-20 Flatfrog Lab Ab A touch pad and a method of operating the touch pad
TW201005606A (en) 2008-06-23 2010-02-01 Flatfrog Lab Ab Detecting the locations of a plurality of objects on a touch surface
TW201007530A (en) 2008-06-23 2010-02-16 Flatfrog Lab Ab Detecting the location of an object on a touch surface
TW201001258A (en) 2008-06-23 2010-01-01 Flatfrog Lab Ab Determining the location of one or more objects on a touch surface
WO2010006885A2 (en) 2008-06-23 2010-01-21 Flatfrog Laboratories Ab Detecting the location of an object on a touch surface
TW201013492A (en) 2008-06-23 2010-04-01 Flatfrog Lab Ab Determining the location of one or more objects on a touch surface
SE533704C2 (en) 2008-12-05 2010-12-07 Flatfrog Lab Ab Touch sensitive apparatus and method for operating the same
DE102009042922B4 (en) * 2009-09-24 2019-01-24 Siemens Healthcare Gmbh Method and apparatus for image determination from x-ray projections taken when traversing a trajectory
RU2012118597A (en) 2009-10-19 2013-11-27 ФлэтФрог Лэборэторис АБ DETERMINATION OF TOUCH DATA FOR ONE OR MULTIPLE ITEMS ON A TOUCH SURFACE
TW201203052A (en) 2010-05-03 2012-01-16 Flatfrog Lab Ab Touch determination by tomographic reconstruction

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7065234B2 (en) * 2004-02-23 2006-06-20 General Electric Company Scatter and beam hardening correction in computed tomography applications
WO2007112742A1 (en) * 2006-03-30 2007-10-11 Flatfrog Laboratories Ab A system and a method of determining a position of a scattering/reflecting element on the surface of a radiation transmissive element
US20090153519A1 (en) * 2007-12-17 2009-06-18 Suarez Rovere Victor Manuel Method and apparatus for tomographic touch imaging and interactive system using same
US20100039405A1 (en) * 2008-08-13 2010-02-18 Au Optronics Corp. Projective Capacitive Touch Apparatus, and Method for Identifying Distinctive Positions

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP2628068A4 *

Cited By (78)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10474249B2 (en) 2008-12-05 2019-11-12 Flatfrog Laboratories Ab Touch sensing apparatus and method of operating the same
US9274645B2 (en) 2010-12-15 2016-03-01 Flatfrog Laboratories Ab Touch determination with signal enhancement
US9594467B2 (en) 2010-12-15 2017-03-14 Flatfrog Laboratories Ab Touch determination with signal enhancement
US8890849B2 (en) 2011-09-27 2014-11-18 Flatfrog Laboratories Ab Image reconstruction for touch determination
WO2013055282A2 (en) 2011-10-11 2013-04-18 Flatfrog Laboratories Ab Improved multi-touch detection in a touch system
US9377884B2 (en) 2011-10-11 2016-06-28 Flatfrog Laboratories Ab Multi-touch detection in a touch system
EP3506069A1 (en) 2011-12-16 2019-07-03 FlatFrog Laboratories AB Tracking objects on a touch surface
US9927920B2 (en) 2011-12-16 2018-03-27 Flatfrog Laboratories Ab Tracking objects on a touch surface
US8982084B2 (en) 2011-12-16 2015-03-17 Flatfrog Laboratories Ab Tracking objects on a touch surface
WO2013089622A2 (en) 2011-12-16 2013-06-20 Flatfrog Laboratories Ab Tracking objects on a touch surface
US9639210B2 (en) 2011-12-22 2017-05-02 Flatfrog Laboratories Ab Touch determination with interaction compensation
US10372265B2 (en) 2012-01-31 2019-08-06 Flatfrog Laboratories Ab Performance monitoring and correction in a touch-sensitive apparatus
US9588619B2 (en) 2012-01-31 2017-03-07 Flatfrog Laboratories Ab Performance monitoring and correction in a touch-sensitive apparatus
US9684414B2 (en) 2012-03-09 2017-06-20 Flatfrog Laboratories Ab Efficient tomographic processing for touch determination
WO2013133756A1 (en) * 2012-03-09 2013-09-12 Flatfrog Laboratories Ab Efficient tomographic processing for touch determination
EP2845081A4 (en) * 2012-05-02 2015-12-16 Flatfrog Lab Ab Object detection in touch systems
US10318041B2 (en) 2012-05-02 2019-06-11 Flatfrog Laboratories Ab Object detection in touch systems
EP2845080A4 (en) * 2012-05-02 2015-12-16 Flatfrog Lab Ab Object detection in touch systems
WO2013165305A2 (en) 2012-05-02 2013-11-07 Flatfrog Laboratories Ab Object detection in touch systems
US9626018B2 (en) 2012-05-02 2017-04-18 Flatfrog Laboratories Ab Object detection in touch systems
WO2013165306A2 (en) 2012-05-02 2013-11-07 Flatfrog Laboratories Ab Object detection in touch systems
US9678602B2 (en) 2012-05-23 2017-06-13 Flatfrog Laboratories Ab Touch-sensitive apparatus with improved spatial resolution
US9626040B2 (en) 2012-05-23 2017-04-18 Flatfrog Laboratories Ab Touch-sensitive apparatus with improved spatial resolution
US10168835B2 (en) 2012-05-23 2019-01-01 Flatfrog Laboratories Ab Spatial resolution in touch displays
US10001881B2 (en) 2012-05-23 2018-06-19 Flatfrog Laboratories Ab Touch-sensitive apparatus with improved spatial resolution
US9857916B2 (en) 2012-07-24 2018-01-02 Flatfrog Laboratories Ab Optical coupling in touch-sensing systems using diffusively transmitting element
US10088957B2 (en) 2012-09-11 2018-10-02 Flatfrog Laboratories Ab Touch force estimation in touch-sensing apparatus
EP2706443A1 (en) 2012-09-11 2014-03-12 FlatFrog Laboratories AB Touch force estimation in a projection-type touch-sensing apparatus based on frustrated total internal reflection
EP3327557A1 (en) 2012-09-11 2018-05-30 FlatFrog Laboratories AB Touch force estimation in a projection-type touch-sensing apparatus based on frustrated total internal reflection
US9857917B2 (en) 2012-12-17 2018-01-02 Flatfrog Laboratories Ab Optical coupling of light into touch-sensing systems
US9785287B2 (en) 2012-12-17 2017-10-10 Flatfrog Laboratories Ab Optical coupling in touch-sensing systems
US10268319B2 (en) 2012-12-17 2019-04-23 Flatfrog Laboratories Ab Edge-coupled touch-sensitive apparatus
US10365768B2 (en) 2012-12-20 2019-07-30 Flatfrog Laboratories Ab TIR-based optical touch systems of projection-type
US10019113B2 (en) 2013-04-11 2018-07-10 Flatfrog Laboratories Ab Tomographic processing for touch detection
WO2014168567A1 (en) * 2013-04-11 2014-10-16 Flatfrog Laboratories Ab Tomographic processing for touch detection
US9874978B2 (en) 2013-07-12 2018-01-23 Flatfrog Laboratories Ab Partial detect mode
US10152176B2 (en) 2013-11-22 2018-12-11 Flatfrog Laboratories Ab Touch sensitive apparatus with improved spatial resolution
US9613436B1 (en) 2013-12-23 2017-04-04 Sensing Electromagnetic Plus Corp. Optimization methods for feature detection
US10146376B2 (en) 2014-01-16 2018-12-04 Flatfrog Laboratories Ab Light coupling in TIR-based optical touch systems
US10126882B2 (en) 2014-01-16 2018-11-13 Flatfrog Laboratories Ab TIR-based optical touch systems of projection-type
US9864470B2 (en) 2014-05-30 2018-01-09 Flatfrog Laboratories Ab Enhanced interaction touch system
US10324566B2 (en) 2014-05-30 2019-06-18 Flatfrog Laboratories Ab Enhanced interaction touch system
US10161886B2 (en) 2014-06-27 2018-12-25 Flatfrog Laboratories Ab Detection of surface contamination
US11182023B2 (en) 2015-01-28 2021-11-23 Flatfrog Laboratories Ab Dynamic touch quarantine frames
US10318074B2 (en) 2015-01-30 2019-06-11 Flatfrog Laboratories Ab Touch-sensing OLED display with tilted emitters
EP3537269A1 (en) 2015-02-09 2019-09-11 FlatFrog Laboratories AB Optical touch system
US10496227B2 (en) 2015-02-09 2019-12-03 Flatfrog Laboratories Ab Optical touch system comprising means for projecting and detecting light beams above and inside a transmissive panel
US11029783B2 (en) 2015-02-09 2021-06-08 Flatfrog Laboratories Ab Optical touch system comprising means for projecting and detecting light beams above and inside a transmissive panel
US10401546B2 (en) 2015-03-02 2019-09-03 Flatfrog Laboratories Ab Optical component for light coupling
US11301089B2 (en) 2015-12-09 2022-04-12 Flatfrog Laboratories Ab Stylus identification
EP4075246A1 (en) 2015-12-09 2022-10-19 FlatFrog Laboratories AB Stylus for optical touch system
US10761657B2 (en) 2016-11-24 2020-09-01 Flatfrog Laboratories Ab Automatic optimisation of touch signal
EP4152132A1 (en) 2016-12-07 2023-03-22 FlatFrog Laboratories AB An improved touch device
EP3667475A1 (en) 2016-12-07 2020-06-17 FlatFrog Laboratories AB A curved touch device
US11281335B2 (en) 2016-12-07 2022-03-22 Flatfrog Laboratories Ab Touch device
US11579731B2 (en) 2016-12-07 2023-02-14 Flatfrog Laboratories Ab Touch device
US10775935B2 (en) 2016-12-07 2020-09-15 Flatfrog Laboratories Ab Touch device
US10282035B2 (en) 2016-12-07 2019-05-07 Flatfrog Laboratories Ab Touch device
US11740741B2 (en) 2017-02-06 2023-08-29 Flatfrog Laboratories Ab Optical coupling in touch-sensing systems
US11474644B2 (en) 2017-02-06 2022-10-18 Flatfrog Laboratories Ab Optical coupling in touch-sensing systems
US11016605B2 (en) 2017-03-22 2021-05-25 Flatfrog Laboratories Ab Pen differentiation for touch displays
US10606414B2 (en) 2017-03-22 2020-03-31 Flatfrog Laboratories Ab Eraser for touch displays
US11099688B2 (en) 2017-03-22 2021-08-24 Flatfrog Laboratories Ab Eraser for touch displays
US10481737B2 (en) 2017-03-22 2019-11-19 Flatfrog Laboratories Ab Pen differentiation for touch display
US10739916B2 (en) 2017-03-28 2020-08-11 Flatfrog Laboratories Ab Touch sensing apparatus and method for assembly
US11269460B2 (en) 2017-03-28 2022-03-08 Flatfrog Laboratories Ab Touch sensing apparatus and method for assembly
US10845923B2 (en) 2017-03-28 2020-11-24 Flatfrog Laboratories Ab Touch sensing apparatus and method for assembly
US10606416B2 (en) 2017-03-28 2020-03-31 Flatfrog Laboratories Ab Touch sensing apparatus and method for assembly
US10437389B2 (en) 2017-03-28 2019-10-08 Flatfrog Laboratories Ab Touch sensing apparatus and method for assembly
US11256371B2 (en) 2017-09-01 2022-02-22 Flatfrog Laboratories Ab Optical component
US11650699B2 (en) 2017-09-01 2023-05-16 Flatfrog Laboratories Ab Optical component
US12086362B2 (en) 2017-09-01 2024-09-10 Flatfrog Laboratories Ab Optical component
US11567610B2 (en) 2018-03-05 2023-01-31 Flatfrog Laboratories Ab Detection line broadening
US12055969B2 (en) 2018-10-20 2024-08-06 Flatfrog Laboratories Ab Frame for a touch-sensitive device and tool therefor
US11943563B2 (en) 2019-01-25 2024-03-26 FlatFrog Laboratories, AB Videoconferencing terminal and method of operating the same
US12056316B2 (en) 2019-11-25 2024-08-06 Flatfrog Laboratories Ab Touch-sensing apparatus
US11893189B2 (en) 2020-02-10 2024-02-06 Flatfrog Laboratories Ab Touch-sensing apparatus
US11709568B2 (en) 2020-02-25 2023-07-25 Promethean Limited Convex interactive touch displays and related systems and methods

Also Published As

Publication number Publication date
EP2628068A4 (en) 2014-02-26
US20130249833A1 (en) 2013-09-26
EP2628068A1 (en) 2013-08-21
US9411444B2 (en) 2016-08-09

Similar Documents

Publication Publication Date Title
US9411444B2 (en) Touch determination by tomographic reconstruction
US9996196B2 (en) Touch determination by tomographic reconstruction
EP2823382B1 (en) Efficient tomographic processing for touch determination
US9684414B2 (en) Efficient tomographic processing for touch determination
US20140300572A1 (en) Touch determination by tomographic reconstruction
US10019113B2 (en) Tomographic processing for touch detection
EP2491479A1 (en) Extracting touch data that represents one or more objects on a touch surface
AU2011249099A1 (en) Touch determination by tomographic reconstruction
SE535005C2 (en) Determination of contact through tomographic reconstruction

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11832837

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

REEP Request for entry into the european phase

Ref document number: 2011832837

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2011832837

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 13824026

Country of ref document: US