SE1050434A1 - Determination of contact through tomographic reconstruction

Determination of contact through tomographic reconstruction

Info

Publication number
SE1050434A1
Authority
SE
Sweden
Prior art keywords
points
samples
values
touch
data
Prior art date
Application number
SE1050434A
Other languages
Swedish (sv)
Other versions
SE535005C2 (en)
Inventor
Tomas Christiansson
Peter Juhlin
Original Assignee
Flatfrog Lab Ab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Flatfrog Lab Ab filed Critical Flatfrog Lab Ab
Priority to SE1050434A priority Critical patent/SE535005C2/en
Priority to TW100114596A priority patent/TW201203052A/en
Priority to EP15197923.4A priority patent/EP3012721A1/en
Priority to CN201180030215.2A priority patent/CN103026325B/en
Priority to US13/695,505 priority patent/US8780066B2/en
Priority to KR1020127031392A priority patent/KR101760539B1/en
Priority to CA2798176A priority patent/CA2798176A1/en
Priority to CN201610251052.5A priority patent/CN105930002B/en
Priority to PCT/SE2011/050520 priority patent/WO2011139213A1/en
Priority to KR1020177019739A priority patent/KR101840991B1/en
Priority to RU2012148777/08A priority patent/RU2012148777A/en
Priority to JP2013509026A priority patent/JP5807057B2/en
Priority to EP11777650.0A priority patent/EP2567306B1/en
Publication of SE1050434A1 publication Critical patent/SE1050434A1/en
Publication of SE535005C2 publication Critical patent/SE535005C2/en
Priority to IL222797A priority patent/IL222797A0/en
Priority to US14/293,257 priority patent/US9547393B2/en
Priority to US15/388,457 priority patent/US9996196B2/en
Priority to US15/973,717 priority patent/US20180253187A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means

Abstract

A touch-sensitive apparatus comprises a panel configured to conduct signals from a plurality of peripheral incoupling points to a plurality of peripheral outcoupling points. Actual detection lines are defined between pairs of incoupling and outcoupling points to extend across a surface portion of the panel. The signals may be in the form of light, and objects touching the surface portion may affect the light via frustrated total internal reflection (FTIR). A signal generator is coupled to the incoupling points to generate the signals, and a signal detector is coupled to the outcoupling points to generate an output signal. A data processor operates on the output signal to enable identification of touching objects. The output signal is processed (40) to generate a set of data samples, which are indicative of detected energy for at least a subset of the actual detection lines. The set of data samples is processed (42) to generate a set of matched samples, which are indicative of estimated detected energy for fictitious detection lines that have a location on the surface portion that matches a standard geometry for tomographic reconstruction. The set of matched samples is processed (44, 46) by tomographic reconstruction to generate data indicative of a distribution of an energy-related parameter within at least part of the surface portion. Selected for publication: Fig. 4A

Description

TOUCH DETERMINATION BY TOMOGRAPHIC RECONSTRUCTION

Technical Field
The present invention relates to touch-sensitive panels and data processing techniques in relation to such panels.
Background Art
To an increasing extent, touch-sensitive panels are being used for providing input data to computers, electronic measurement and test equipment, gaming devices, etc. The panel may be provided with a graphical user interface (GUI) for a user to interact with using e.g. a pointer, stylus or one or more fingers. The GUI may be fixed or dynamic. A fixed GUI may e.g. be in the form of printed matter placed over, under or inside the panel. A dynamic GUI can be provided by a display screen integrated with, or placed underneath, the panel or by an image being projected onto the panel by a projector.
There are numerous known techniques for providing touch sensitivity to the panel, e.g. by using cameras to capture light scattered off the point(s) of touch on the panel, or by incorporating resistive wire grids, capacitive sensors, strain gauges, etc. into the panel.
US2004/0252091 discloses an alternative technique which is based on frustrated total internal reflection (FTIR). Light sheets are coupled into a panel to propagate inside the panel by total internal reflection. When an object comes into contact with a surface of the panel, two or more light sheets will be locally attenuated at the point of touch. Arrays of light sensors are located around the perimeter of the panel to detect the received light for each light sheet. A coarse tomographic reconstruction of the light field across the panel surface is then created by geometrically back-tracing and triangulating all attenuations observed in the received light. This is stated to result in data regarding the position and size of each contact area.
US2009/0153519 discloses a panel capable of conducting signals. A "tomograph" is positioned adjacent the panel with signal flow ports arrayed around the border of the panel at discrete locations. Signals (b) measured at the signal flow ports are tomographically processed to generate a two-dimensional representation (x) of the conductivity on the panel, whereby touching objects on the panel surface can be detected. The presented technique for tomographic reconstruction is based on a linear model of the tomographic system, Ax = b. The system matrix A is calculated at the factory, and its pseudo-inverse A⁻¹ is calculated using Truncated SVD algorithms and operated on the measured signals to yield the two-dimensional (2D) representation of the conductivity: x = A⁻¹b. The suggested method is both demanding in terms of processing and lacks suppression of high frequency components, possibly leading to much noise in the 2D representation.

US2009/0153519 also makes a general reference to Computer Tomography (CT). CT methods are well-known imaging methods which have been developed for medical purposes. CT methods employ digital geometry processing to reconstruct an image of the inside of an object based on a large series of projection measurements through the object. Various CT methods have been developed to enable efficient processing and/or precise image reconstruction, e.g. Filtered Back Projection, ART, SART, etc. Often, the projection measurements are carried out in accordance with a standard geometry which is given by the CT method. Clearly, it would be desirable to capitalize on existing CT methods for reconstructing the 2D distribution of an energy-related parameter (light, conductivity, etc.) across a touch surface based on a set of projection measurements.
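For orientation only, the sketch below illustrates the prior-art linear approach described above; it is a minimal assumption-based example, not the cited application's implementation. The system matrix, its size, the truncation level and all numeric values are made up for illustration.

```python
import numpy as np

def truncated_svd_reconstruct(A, b, k):
    """Solve x = A^+ b with a truncated SVD pseudo-inverse.

    A : (m, n) system matrix mapping conductivity pixels to port measurements.
    b : (m,) measured signals at the signal flow ports.
    k : number of singular values kept (truncation discards the smallest,
        most noise-sensitive components).
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:k] = 1.0 / s[:k]                 # invert only the k largest singular values
    return Vt.T @ (s_inv * (U.T @ b))

# Toy usage: 3 "ports" observing a 4-pixel conductivity map (illustrative numbers).
A = np.array([[1.0, 0.5, 0.0, 0.2],
              [0.0, 1.0, 0.5, 0.1],
              [0.3, 0.0, 1.0, 0.4]])
x_true = np.array([0.0, 1.0, 0.0, 0.5])
b = A @ x_true
x_rec = truncated_svd_reconstruct(A, b, k=3)
```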
Summary
It is an object of the invention to enable touch determination on a panel based on projection measurements by use of existing CT methods.
Another objective is to provide a technique that enables determination of touch-related data at sufficient precision to discriminate between a plurality of objects in simultaneous contact with a touch surface.
This and other objects, which may appear from the description below, are at least partly achieved by means of a method of enabling touch determination, a computer program product, a device for enabling touch determination, and a touch-sensitive apparatus according to the independent claims, embodiments thereof being defined by the dependent claims.
A first aspect of the invention is a method of enabling touch determination based on an output signal from a touch-sensitive apparatus, which comprises a panel configured to conduct signals from a plurality of peripheral incoupling points to a plurality of peripheral outcoupling points, thereby defining actual detection lines that extend across a surface portion of the panel between pairs of incoupling and outcoupling points, at least one signal generator coupled to the incoupling points to generate the signals, and at least one signal detector coupled to the outcoupling points to generate the output signal. The method comprises: processing the output signal to generate a set of data samples, wherein the data samples are indicative of detected energy for at least a subset of the actual detection lines; processing the set of data samples to generate a set of matched samples, wherein the matched samples are indicative of estimated detected energy for fictitious detection lines that have a location on the surface portion that matches a standard geometry for tomographic reconstruction; and processing the set of matched samples by tomographic reconstruction to generate data indicative of a distribution of an energy-related parameter within at least part of the surface portion.
In one embodiment, the step of processing the output signal comprises: generating the data samples in a two-dimensional sample space, wherein each data sample is representative of an actual detection line and is defined by a signal value and two dimension values that define the location of the actual detection line on the surface portion.
In one embodiment, the step of processing the set of data samples comprises: generating estimated signal values of the matched samples at predetermined locations in the two-dimensional sample space, wherein the predetermined locations correspond to the fictitious detection lines. The estimated signal values may be generated by interpolation based on the signal values of the data samples, and each estimated signal value may be generated by interpolation of the signal values of neighboring data samples in the two-dimensional sample space.
In one embodiment, the step of processing the set of data samples further comprises: obtaining a predetermined two-dimensional interpolation function with nodes corresponding to the set of data samples, and calculating the estimated signal values according to the interpolation function and based on the signal values of the data samples. The method may further comprise a step of receiving exclusion data identifying one or more data samples to be excluded, wherein the step of processing the data samples comprises identifying the node corresponding to each data sample to be excluded, re-designing the predetermined interpolation function without each thus-identified node, and calculating the estimated signal values according to the re-designed interpolation scheme and based on the signal values of the data samples in the nodes of the re-designed interpolation scheme.
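As an illustration (not from the patent text) of such a two-dimensional interpolation with node exclusion, the sketch below uses SciPy's Delaunay-based linear interpolator over a (angle, distance) sample space. The node coordinates, the excluded node index and the query points are made-up values, and LinearNDInterpolator is merely one possible realization of the interpolation function.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# Nodes: one (phi, s) coordinate per actual detection line (made-up values),
# and the corresponding measured signal values.
nodes = np.array([[0.10, -0.8], [0.15, -0.2], [0.35, 0.1],
                  [0.60, -0.5], [0.80, 0.4], [1.10, 0.0]])
values = np.array([0.98, 0.55, 0.72, 0.90, 0.60, 0.95])

# Fictitious detection lines of the standard geometry (made-up query points).
matched_points = np.array([[0.30, -0.3], [0.70, 0.1]])

interp = LinearNDInterpolator(nodes, values)
matched_samples = interp(matched_points)

# Exclusion data received at run time: drop node 1 and re-design the
# interpolation function from the remaining nodes only.
keep = np.ones(len(nodes), dtype=bool)
keep[1] = False
interp_excl = LinearNDInterpolator(nodes[keep], values[keep])
matched_samples_excl = interp_excl(matched_points)
```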
In one embodiment, the matched samples are arranged as rows and/or columns in the two-dimensional sample space. The matched samples may be arranged with equidistant spacing within each of said rows and/or columns.
In one embodiment, the step of processing the set of matched samples comprises: applying a one-dimensional high-pass filtering of the matched samples in the two-dimensional sample space to generate filtered samples, and processing the filtered samples to generate a set of back projection values indicative of said distribution.
In one embodiment, the surface portion defines a sampling area in the two-dimensional sample space, and the step of processing comprises, if the actual detection lines given by the geometric arrangement of incoupling and outcoupling points result in at least one contiguous region without data samples within the sampling area, the steps of: obtaining a predetermined set of estimated sampling points within the contiguous region, and, for each estimated sampling point, identifying the location of a corresponding fictitious detection line on the surface portion; identifying, for each intersection point between the corresponding fictitious detection line and the actual detection lines and/or between the corresponding fictitious detection line and the fictitious detection lines for the set of matched samples, an intersection point value as the smallest signal value of all data samples corresponding to the actual detection lines associated with the intersection point; and calculating a signal value of the estimated sampling point as a function of the intersection point values. In one implementation, the signal value of the estimated sampling point may be given by the largest intersection point value. In another implementation, the method further comprises, for each estimated sampling point: identifying a number of local maxima in the intersection point values, and calculating the signal value of the estimated sampling point as a combination of the local maxima.
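A minimal sketch of the final combination step, assuming the intersection point values have already been collected and ordered along the fictitious detection line; the helper names, the sample data and the choice of combination rule are hypothetical.

```python
import numpy as np

def local_maxima(values):
    """Return the local maxima of a sequence of intersection point values."""
    v = list(values)
    return [v[i] for i in range(len(v))
            if (i == 0 or v[i] > v[i - 1]) and (i == len(v) - 1 or v[i] >= v[i + 1])]

def estimate_sampling_point(intersection_values, combine=None):
    """Variant 1 (combine=None): the largest intersection point value.
    Variant 2: a combination (e.g. product or sum) of the local maxima."""
    if combine is None:
        return max(intersection_values)
    return combine(local_maxima(intersection_values))

# Hypothetical intersection point values, ordered along the fictitious detection line.
vals = [0.95, 0.60, 0.97, 0.98, 0.55, 0.96]
print(estimate_sampling_point(vals))                   # 0.98 (largest value)
print(estimate_sampling_point(vals, combine=np.prod))  # combination of local maxima
```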
In one embodiment, the dimension values comprise a rotation angle of the detection line in the plane of the panel, and a distance of the detection line in the plane of the panel from a predetermined origin.
In another embodiment, the dimension values comprise an angular location of the incoupling or outcoupling point of the detection line, and a rotation angle of the detection line in the plane of the panel. In one implementation, the standard geometry is a fan geometry, the touch surface has a non-circular perimeter, and the angular location is defined by an intersection between the detection line and a fictitious circle arranged to circumscribe the touch surface.
In one embodiment, the standard geometry is one of a parallel geometry and a fan geometry.
In one embodiment, the signals comprise one of electrical energy, light, magnetic energy, sonic energy and vibration energy.
In one embodiment, the panel defines a touch surface and an opposite surface, wherein said at least one signal generator is arranged to provide light inside the panel, such that the light propagates from the incoupling points by internal reflection between the touch surface and the opposite surface to the outcoupling points for detection by said at least one signal detector, and wherein the touch-sensitive apparatus is configured such that the propagating light is locally attenuated by one or more objects touching the touch surface.
A second aspect of the invention is a computer program product comprising computer code which, when executed on a data-processing system, is adapted to carry out the method of the first aspect.
A third aspect of the invention is a device for enabling touch determination based on an output signal of a touch-sensitive apparatus, which comprises a panel configured to conduct signals from a plurality of peripheral incoupling points to a plurality of peripheral outcoupling points, thereby defining actual detection lines that extend across a surface portion of the panel between pairs of incoupling and outcoupling points, means for generating the signals at the incoupling points, and means for generating the output signal based on detected signals at the outcoupling points. The device comprises: means for receiving the output signal; means for processing the output signal to generate a set of data samples, wherein the data samples are indicative of detected energy for at least a subset of the actual detection lines; means for processing the set of data samples to generate a set of matched samples, wherein the matched samples are indicative of estimated detected energy for fictitious detection lines that have a location on the surface portion that matches a standard geometry for tomographic reconstruction; and means for processing the set of matched samples by tomographic reconstruction to generate data indicative of a distribution of an energy-related parameter within at least part of the surface portion.
A fourth aspect of the invention is a touch-sensitive apparatus, comprising: a panel configured to conduct signals from a plurality of peripheral incoupling points to a plurality of peripheral outcoupling points, thereby defining actual detection lines that extend across a surface portion of the panel between pairs of incoupling and outcoupling points; means for generating the signals at the incoupling points; means for generating an output signal based on detected signals at the outcoupling points; and the device for enabling touch determination according to the third aspect.
A fifth aspect of the invention is a touch-sensitive apparatus, comprising: a panel configured to conduct signals from a plurality of peripheral incoupling points to a plurality of peripheral outcoupling points, thereby defining actual detection lines that extend across a surface portion of the panel between pairs of incoupling and outcoupling points; at least one signal generator coupled to the incoupling points to generate the signals; at least one signal detector coupled to the outcoupling points to generate an output signal; and a signal processor connected to receive the output signal and configured to: process the output signal to generate a set of data samples, wherein the data samples are indicative of detected energy for at least a subset of the actual detection lines, process the set of data samples to generate a set of matched samples, wherein the matched samples are indicative of estimated detected energy for fictitious detection lines that have a location on the surface portion that matches a standard geometry for tomographic reconstruction, and process the set of matched samples by tomographic reconstruction to generate data indicative of a distribution of an energy-related parameter within at least part of the surface portion.
Any one of the embodiments of the first aspect can be combined with the second to fifth aspects.
Still other objectives, features, aspects and advantages of the present invention will appear from the following detailed description, from the attached claims as well as from the drawings.
Brief Description of Drawings
Embodiments of the invention will now be described in more detail with reference to the accompanying schematic drawings.
Fig. 1 is a plan view of a touch-sensitive apparatus.
Figs 2A-2B are top plan views of a touch-sensitive apparatus with an interleaved and non-interleaved arrangement, respectively, of emitters and sensors.
Figs 3A-3B are side and top plan views of touch-sensitive systems operating by frustrated total internal reflection (FTIR).
Fig. 4A is a flow chart of a reconstruction method, and Fig. 4B is a block diagram of a device that implements the method of Fig. 4A.
Fig. 5 illustrates the underlying principle of the Projection-Slice Theorem.
Fig. 6 illustrates the applicability of filtering for back projection processing.
Fig. 7 illustrates a parallel geometry used in tomographic reconstruction.
Figs 8A-8H illustrate a starting point, intermediate results and final results of a back projection process using a parallel geometry.
Fig. 9 illustrates a fan geometry used in tomographic reconstruction.
Figs 10A-10C illustrate intermediate and final results of a back projection process using a fan geometry.
Fig. 11 is a graph of projection values collected in the fan geometry of Fig. 9, mapped to a sampling space for a parallel geometry.
Fig. 12A is a graph of sampling points defined by the interleaved arrangement in Fig. 2A, Figs 12B-12C illustrate discrepancies between detection lines in an interleaved arrangement and a fan geometry, and Fig. 12D is a graph of sampling points for the non-interleaved arrangement in Fig. 2B.
Fig. 13 is a reference image mapped to an interleaved arrangement.
Fig. 14A is a graph of a 2D interpolation function for an interleaved arrangement, Fig. 14B illustrates the generation of interpolation points using the interpolation function of Fig. 14A, Fig. 14C is an interpolated sinogram generated based on the reference image in Fig. 13, and Fig. 14D is a reconstructed attenuation field.
Fig. 15 illustrates an alternative way of generating interpolation points using the interpolation function of Fig. 14A.
Figs 16A-16D and Figs 17A-17B illustrate how the 2D interpolation function is updated when sampling points are removed from reconstruction.
Fig. 18 is a reference image mapped to a non-interleaved arrangement.
Figs 19A-19B illustrate a first variant for reconstruction in a non-interleaved arrangement.
Figs 20A-20B illustrate a second variant for reconstruction in a non-interleaved arrangement.
Figs 21A-21B illustrate a third variant for reconstruction in a non-interleaved arrangement.
Figs 22A-22B illustrate a fourth variant for reconstruction in a non-interleaved arrangement.
Figs 23A-23F illustrate a fifth variant for reconstruction in a non-interleaved arrangement.
Figs 24A-24E illustrate a sixth variant for reconstruction in a non-interleaved arrangement.
Fig. 25 is a flowchart of a process for filtered back projection.
Figs 26A-26B illustrate a first variant for reconstruction in an interleaved arrangement using a tomographic algorithm designed for fan geometry.
Figs 27A-27B illustrate a second variant for reconstruction in an interleaved arrangement using a tomographic algorithm designed for fan geometry.
Fig. 28 illustrates the use of a circle for defining a two-dimensional sample spaceof a touch-sensitive apparatus.
Figs 29A-29D illustrate a third variant for reconstruction in an interleaved arrangement using a tomographic algorithm designed for fan geometry.
Fig. 30 shows the reconstructed attenuation field in Fig. 23F after image enhancement processing.
Detailed Description of Example Embodiments
The present invention relates to techniques for enabling extraction of touch data for at least one object, and typically multiple objects, in contact with a touch surface of a touch-sensitive apparatus. The description starts out by presenting the underlying concept of such a touch-sensitive apparatus, especially an apparatus operating by frustrated total internal reflection (FTIR) of light. Then follows an example of an overall method for touch data extraction involving tomographic reconstruction. The description continues to generally explain and exemplify the theory of tomographic reconstruction and its use of standard geometries. Finally, different inventive aspects of applying techniques for tomographic reconstruction for touch determination are further explained and exemplified.
Throughout the description, the same reference numerals are used to identify corresponding elements.

1. Touch-sensitive apparatus
Fig. 1 illustrates a touch-sensitive apparatus 100 which is based on the concept of transmitting energy of some form across a touch surface 1, such that an object that is brought into close vicinity of, or in contact with, the touch surface 1 causes a local decrease in the transmitted energy. The touch-sensitive apparatus 100 includes an arrangement of emitters and sensors, which are distributed along the periphery of the touch surface. Each pair of an emitter and a sensor defines a detection line, which corresponds to the propagation path for an emitted signal from the emitter to the sensor. In Fig. 1, only one such detection line D is illustrated to extend from emitter 2 to sensor 3, although it should be understood that the arrangement typically defines a dense grid of intersecting detection lines, each corresponding to a signal being emitted by an emitter and detected by a sensor. Any object that touches the touch surface along the extent of the detection line D will thus decrease its energy, as measured by the sensor 3.
The arrangement of sensors is electrically connected to a signal processor 10, which samples and processes an output signal from the arrangement. The output signal is indicative of the received energy at each sensor 3. As will be explained below, the signal processor 10 may be configured to process the output signal by a tomographic technique to recreate an image of the distribution of an energy-related parameter (for simplicity, referred to as "energy distribution" in the following) across the touch surface 1. The energy distribution may be further processed by the signal processor 10 or by a separate device (not shown) for touch determination, which may involve extraction of touch data, such as a position (e.g. x, y coordinates), a shape or an area of each touching object.
In the example of Fig. 1, the touch-sensitive apparatus 100 also includes a controller 12 which is connected to selectively control the activation of the emitters 2. The signal processor 10 and the controller 12 may be configured as separate units, or they may be incorporated in a single unit. One or both of the signal processor 10 and the controller 12 may be at least partially implemented by software executed by a processing unit.
The touch-sensitive apparatus 100 may be designed to be used with a display device or monitor, e.g. as described in the Background section. Generally, such a display device has a rectangular extent, and thus the touch-sensitive apparatus 100 (the touch surface 1) is also likely to be designed with a rectangular shape. Further, the emitters 2 and sensors 3 all have a fixed position around the perimeter of the touch surface 1. Thus, in contrast to a conventional tomographic apparatus used e.g. in the medical field, there will be no possibility of rotating the complete measurement system. As will be described in further detail below, this puts certain limitations on the use of standard tomographic techniques for recreating/reconstructing the energy distribution within the touch surface 1.
In the following, embodiments of the invention will be described in relation to two main arrangements of emitters 2 and sensors 3. A first main arrangement, shown in Fig. 2A, is denoted "interleaved arrangement" and has emitters 2 and sensors 3 placed one after the other along the periphery of the touch surface 1. Thus, every emitter 2 is placed between two sensors 3. The distance between neighboring emitters 2 is the same along the periphery. The same applies for the distance between neighboring sensors 3. A second main arrangement, shown in Fig. 2B, is denoted "non-interleaved arrangement" and has merely sensors 3 on two adjacent sides (i.e. sides connected via a corner), and merely emitters 2 on its other sides.
The interleaved arrangement may be preferable since it generates a more uniform distribution of detection lines. However, there are electro-optical aspects of the interleaved system that may favor the use of the non-interleaved arrangement. For example, the interleaved arrangement may require the emitters 2, which may be fed with high driving currents, to be located close to the sensors 3, which are configured to detect weak photo-currents. This may lead to undesired detection noise. The electrical connection to the emitters 2 and sensors 3 may also be somewhat demanding since the emitters 2 and sensors 3 are dispersed around the periphery of the touch surface 1. Thus, there may be reasons for using a non-interleaved arrangement instead of an interleaved arrangement, since the former obviates these potential obstacles.
It is to be understood that there are many variations and blends of these two types of arrangements. For example, the sensor-sensor, sensor-emitter, emitter-emitter distance(s) may vary along the periphery, and/or the blending of emitters and sensors may be different, e.g. there may be two or more emitters/sensors between every emitter/sensor, etc. Although the following examples are given for the first and second main arrangements, specifically a rectangular touch surface with a 16:9 aspect ratio, this is merely for the purpose of illustration, and the concepts of the invention are applicable irrespective of aspect ratio, shape of the touch surface, and arrangement of emitters and sensors.
In the embodiments shown herein, at least a subset of the emitters 2 may be arranged to emit energy in the shape of a beam or wave that diverges in the plane of the touch surface 1, and at least a subset of the sensors 3 may be arranged to receive energy over a wide range of angles (field of view). Alternatively or additionally, the individual emitter 2 may be configured to emit a set of separate beams that propagate to a number of sensors 3. In either embodiment, each emitter 2 transmits energy to a plurality of sensors 3, and each sensor 3 receives energy from a plurality of emitters 2.
The touch-sensitive apparatus 100 may be configured to permit transmission of energy in one of many different forms. The emitted signals may thus be any radiation or wave energy that can travel in and across the touch surface 1 including, without limitation, light waves in the visible or infrared or ultraviolet spectral regions, electrical energy, electromagnetic or magnetic energy, or sonic and ultrasonic energy or vibration energy.
In the following, an example embodiment based on propagation of light will be described. Fig. 3A is a side view of a touch-sensitive apparatus 100 which includes a light transmissive panel 4, one or more light emitters 2 (one shown) and one or more light sensors 3 (one shown). The panel 4 defines two opposite and generally parallel surfaces 5, 6 and may be planar or curved. A radiation propagation channel is provided between two boundary surfaces 5, 6 of the panel 4, wherein at least one of the boundary surfaces allows the propagating light to interact with a touching object 7. Typically, the light from the emitter(s) 2 propagates by total internal reflection (TIR) in the radiation propagation channel, and the sensors 3 are arranged at the periphery of the panel 4 to generate a respective measurement signal which is indicative of the energy of received light.
As shown in Fig. 3A, the light may be coupled into and out of the panel 4 directly via the edge portion that connects the top and bottom surfaces 5, 6 of the panel 4. Alternatively, not shown, a separate coupling element (e.g. in the shape of a wedge) may be attached to the edge portion or to the top or bottom surface 5, 6 of the panel 4 to couple the light into and/or out of the panel 4. When the object 7 is brought sufficiently close to the boundary surface, part of the light may be scattered by the object 7, part of the light may be absorbed by the object 7, and part of the light may continue to propagate unaffected. Thus, when the object 7 touches a boundary surface of the panel (e.g. the top surface 5), the total internal reflection is frustrated and the energy of the transmitted light is decreased. This type of touch-sensitive apparatus is denoted "FTIR system" (FTIR - Frustrated Total Internal Reflection) in the following.
The touch-sensitive apparatus 100 may be operated to measure the energy of the light transmitted through the panel 4 on a plurality of detection lines. This may, e.g., be done by activating a set of spaced-apart emitters 2 to generate a corresponding number of light sheets inside the panel 4, and by operating a set of sensors 3 to measure the transmitted energy of each light sheet. Such an embodiment is illustrated in Fig. 3B, where each emitter 2 generates a beam of light that expands in the plane of the panel 4 while propagating away from the emitter 2. Each beam propagates from one or more entry or incoupling points within an incoupling site on the panel 4. Arrays of light sensors 3 are located around the perimeter of the panel 4 to receive the light from the emitters 2 at a number of spaced-apart outcoupling points within an outcoupling site on the panel 4. It should be understood that the incoupling and outcoupling points merely refer to the position where the beam enters and leaves, respectively, the panel 4. Thus, one emitter/sensor may be optically coupled to a number of incoupling/outcoupling points. In the example of Fig. 3B, however, the detection lines D are defined by individual emitter-sensor pairs.
The light sensors 3 collectively provide an output signal, which is received and sampled by the signal processor 10. The output signal contains a number of sub-signals, also denoted "projection signals", each representing the energy of light emitted by a certain light emitter 2 and received by a certain light sensor 3, i.e. the received energy on a certain detection line. Depending on implementation, the signal processor 10 may need to process the output signal for identification of the individual sub-signals. Irrespective of implementation, the signal processor 10 is able to obtain an ensemble of measurement values that contains information about the distribution of an energy-related parameter across the touch surface 1.
The light emitters 2 can be any type of device capable of emitting light in a desired wavelength range, for example a diode laser, a VCSEL (vertical-cavity surface-emitting laser), or alternatively an LED (light-emitting diode), an incandescent lamp, a halogen lamp, etc.
The light sensors 3 can be any type of device capable of detecting the energy of light emitted by the set of emitters, such as a photodetector, an optical detector, a photoresistor, a photovoltaic cell, a photodiode, a reverse-biased LED acting as photodiode, a charge-coupled device (CCD), etc.
The emitters 2 may be activated in sequence, such that the received energy is measured by the sensors 3 for each light sheet separately. Alternatively, all or a subset of the emitters 2 may be activated concurrently, e.g. by modulating the emitters 2 such that the light energy measured by the sensors 3 can be separated into the sub-signals by a corresponding demodulation.
Reverting to the emitter-sensor arrangements in Fig. 2, the spacing between neighboring emitters 2 and sensors 3 in the interleaved arrangement (Fig. 2A), and between neighboring emitters 2 and neighboring sensors 3, respectively, in the non-interleaved arrangement (Fig. 2B), is generally from about 1 mm to about 20 mm. For practical as well as resolution purposes, the spacing is generally in the 2-10 mm range.
In a variant of the interleaved arrangement, the emitters 2 and sensors 3 may partially or wholly overlap, as seen in a plan view. This can be accomplished by placing the emitters 2 and sensors 3 on opposite sides of the panel 4, or in some equivalent optical arrangement.
It is to be understood that Fig. 3 merely illustrates one example of an FTIR system. Further examples of FTIR systems are e.g. disclosed in US6972753, US7432893, US2006/0114237, US2007/0075648, WO2009/048365, WO2010/006882, WO2010/006883, WO2010/006884, WO2010/006885, WO2010/006886, and International application No. PCT/SE2009/051364, which are all incorporated herein by this reference. The inventive concept may be advantageously applied to such alternative FTIR systems as well.

2. Transmission
As indicated in Fig. 3A, the light will not be blocked by the touching object 7. Thus, if two objects 7 happen to be placed after each other along a light path from an emitter 2 to a sensor 3, part of the light will interact with both objects 7. Provided that the light energy is sufficient, a remainder of the light will reach the sensor 3 and generate an output signal that allows both interactions (touch points) to be identified. Thus, in multi-touch FTIR systems, the transmitted light may carry information about a plurality of touches.
In the following, $T_j$ is the transmission for the j:th detection line, $T_v$ is the transmission at a specific position along the detection line, and $A_v$ is the relative attenuation at the same point. The total transmission (modeled) along a detection line is thus:

$T_j = \prod_v T_v = \prod_v (1 - A_v)$

The above equation is suitable for analyzing the attenuation caused by discrete objects on the touch surface, when the points are fairly large and separated by a distance. However, a more correct definition of attenuation through an attenuating medium may be used:

$T_j = \frac{I_j}{I_{0,j}} = e^{-\int a(x)\,dx}$

In this formulation, $I_j$ represents the transmitted energy on detection line $D_j$ with attenuating object(s), $I_{0,j}$ represents the transmitted energy on detection line $D_j$ without attenuating objects, and a(x) is the attenuation coefficient along the detection line $D_j$. We also let the detection line interact with the touch surface along the entire extent of the detection line, i.e. the detection line is represented as a mathematical line.
To facilitate the tomographic reconstruction as described in the following, the measurement values may be divided by a respective background value. By proper choice of background values, the measurement values are thereby converted into transmission values, which thus represent the fraction of the available light energy that has been measured on each of the detection lines.
The theory of the Radon transform (see below) deals with line integrals, and it may therefore be proper to use the logarithm of the above expression:

$\log(T_j) = \log\!\big(e^{-\int a(x)\,dx}\big) = -\int a(x)\,dx$

3. Reconstruction and touch data extraction
Fig. 4A illustrates an embodiment of a method for reconstruction and touch data extraction in an FTIR system. The method involves a sequence of steps 40-48 that are repeatedly executed, typically by the signal processor 10 (Figs 1 and 3). In the context of this description, each sequence of steps 40-48 is denoted a sensing instance.
Each sensing instance starts by a data collection step 40, in which measurement values are sampled from the light sensors 3 in the FTIR system, typically by sampling a value from each of the aforesaid sub-signals. The data collection results in one projection value for each detection line. It may be noted that the data may, but need not, be collected for all available detection lines in the FTIR system. The data collection step 40 may also include pre-processing of the measurement values, e.g. filtering for noise reduction, conversion of measurement values into transmission values (or equivalently, attenuation values), conversion into logarithmic values, etc.
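A minimal sketch of this pre-processing, assuming the raw sub-signal values and per-line background (reference) values are already available as arrays; the variable names and numbers are illustrative.

```python
import numpy as np

def collect_projection_values(raw_values, background_values):
    """Convert raw per-detection-line measurements to projection values.

    raw_values        : measured energy per detection line (one sensing instance).
    background_values : reference energy per line without touches.
    Returns the negated log-transmission, i.e. an estimate of the line integral
    of the attenuation coefficient along each detection line.
    """
    raw = np.asarray(raw_values, dtype=float)
    bg = np.asarray(background_values, dtype=float)
    transmission = np.clip(raw / bg, 1e-6, None)   # guard against log(0)
    return -np.log(transmission)

# Hypothetical values for three detection lines.
g = collect_projection_values([95.0, 60.0, 98.0], [100.0, 100.0, 100.0])
```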
In a re-calculation step 42, the set of projection values is processed for generation of an updated set of projection values that represent fictitious detection lines with a location on the touch surface that matches a standard geometry for tomographic reconstruction. This step typically involves an interpolation among the projection values as located in a 2D sample space which is defined by two dimensions that represent the unique location of the detection lines on the touch surface. In this context, a "location" refers to the physical extent of the detection line on the touch surface as seen in a plan view. The re-calculation step 42 will be further explained and motivated in Chapter 6 below.
In a filtering step 44, the updated set of projection values is subjected to a filtering aiming at increasing high spatial frequencies in relation to low spatial frequencies amongst the set of projection values. Thus, step 44 results in a filtered version of the updated set of projection values, denoted "filtered set" in the following. Typically, step 44 involves applying a suitable 1D filter kernel to the updated set of projection values. The use of filter kernels will be further explained and motivated in Chapter 4 below.
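A minimal sketch of such a 1D filtering along the s dimension of a sinogram, using the central part of a Ram-Lak style ramp kernel from the standard filtered back projection literature; the array shapes and kernel half-width are assumptions.

```python
import numpy as np

def ramlak_kernel(half_width):
    """Central part of the discrete Ram-Lak (ramp) kernel, length 2*half_width + 1."""
    k = np.arange(-half_width, half_width + 1)
    w = np.zeros(k.shape, dtype=float)
    w[k == 0] = 0.25
    odd = (k % 2) == 1
    w[odd] = -1.0 / (np.pi * k[odd]) ** 2
    return w

def filter_sinogram(sinogram, half_width=16):
    """Apply the 1D high-pass kernel along the s dimension of the sinogram.

    sinogram : array of shape (n_s, n_phi), one column per projection angle;
               n_s is assumed to be larger than 2 * half_width + 1.
    """
    w = ramlak_kernel(half_width)
    return np.stack([np.convolve(sinogram[:, j], w, mode="same")
                     for j in range(sinogram.shape[1])], axis=1)
```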
In a reconstruction step 46, an "attenuation field" across the touch surface is reconstructed by processing the filtered set in the 2D sample space. The attenuation field is a distribution of attenuation values across the touch surface (or a relevant part of the touch surface), i.e. an energy-related parameter. As used herein, "the attenuation field" and "attenuation values" may be given in terms of an absolute measure, such as light energy, or a relative measure, such as relative attenuation (e.g. the above-mentioned attenuation coefficient) or relative transmission. Step 46 may involve applying a back projection operator to the filtered set of projection values in the 2D sample space. Such an operator typically generates an individual attenuation value by calculating some form of weighted sum of selected projection values included in the filtered set. The use of a back projection operator will be further explained and motivated in Chapters 4 and 5 below.
The attenuation field may be reconstructed within one or more subareas of the touch surface. The subareas may be identified by analyzing intersections of detection lines across the touch surface, based on the above-mentioned projection signals. Such a technique for identifying subareas is further disclosed in Applicant's U.S. provisional patent application No. 61/272,665, which was filed on October 19, 2009 and which is incorporated herein by this reference.
In a subsequent extraction step 48, the reconstructed attenuation field is processed for identification of touch-related features and extraction of touch data. Any known technique may be used for isolating true (actual) touch points within the attenuation field. For example, ordinary blob detection and tracking techniques may be used for finding the actual touch points. In one embodiment, a threshold is first applied to the attenuation field, to remove noise. Any areas with attenuation values that exceed the threshold may be further processed to find the center and shape by fitting for instance a two-dimensional second-order polynomial or a Gaussian bell shape to the attenuation values, or by finding the ellipse of inertia of the attenuation values. There are also numerous other techniques as is well known in the art, such as clustering algorithms, edge detection algorithms, etc.
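A minimal sketch of one such extraction, assuming the reconstructed attenuation field is available as a 2D array; the threshold value and the use of scipy.ndimage labeling with a center-of-mass position estimate are illustrative choices, not the patent's prescribed method.

```python
import numpy as np
from scipy import ndimage

def extract_touches(attenuation_field, threshold=0.02):
    """Isolate touch blobs and return their (x, y) centers and areas (in pixels)."""
    mask = attenuation_field > threshold          # suppress reconstruction noise
    labels, n = ndimage.label(mask)               # connected components = candidate touches
    touches = []
    for i in range(1, n + 1):
        blob = labels == i
        cy, cx = ndimage.center_of_mass(attenuation_field * blob)
        touches.append({"x": cx, "y": cy, "area": int(blob.sum())})
    return touches
```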
Any available touch data may be extracted, including but not limited to x, y coordinates, areas, shapes and/or pressure of the touch points.
After step 48, the extracted touch data is output, and the process returns to the data collection step 40.
It is to be understood that one or more of steps 40-48 may be effected concurrently. For example, the data collection step 40 of a subsequent sensing instance may be initiated concurrently with any of steps 42-48. It can also be noted that the re-calculation and filtering steps 42, 44 can be merged into one single step, since these steps generally involve linear operations.
The touch data extraction process is typically executed by a data processing device (cf. signal processor 10 in Figs 1 and 3) which is connected to sample the measurement values from the light sensors 3 in the FTIR system. Fig. 4B shows an example of such a data processing device 10 for executing the process in Fig. 4A. In the illustrated example, the device 10 includes an input 400 for receiving the output signal. The device 10 further includes a data collection element (or means) 402 for processing the output signal to generate the above-mentioned set of projection values, and a re-calculation element (or means) 404 for generating the above-mentioned updated set of projection values. There is also provided a filtering element (or means) 406 for generating the above-mentioned filtered set. The device 10 further includes a reconstruction element (or means) 408 for generating the reconstructed attenuation field by processing the filtered set, and an output 410 for outputting the reconstructed attenuation field. In the example of Fig. 4B, the actual extraction of touch data is carried out by a separate device 10' which is connected to receive the attenuation field from the data processing device 10.
The data processing device 10 may be implemented by special-purpose software (or firmware) run on one or more general-purpose or special-purpose computing devices. In this context, it is to be understood that each "element" or "means" of such a computing device refers to a conceptual equivalent of a method step; there is not always a one-to-one correspondence between elements/means and particular pieces of hardware or software routines. One piece of hardware sometimes comprises different means/elements. For example, a processing unit serves as one element/means when executing one instruction, but serves as another element/means when executing another instruction. In addition, one element/means may be implemented by one instruction in some cases, but by a plurality of instructions in some other cases. Such a software-controlled computing device may include one or more processing units, e.g. a CPU ("Central Processing Unit"), a DSP ("Digital Signal Processor"), an ASIC ("Application-Specific Integrated Circuit"), discrete analog and/or digital components, or some other programmable logical device, such as an FPGA ("Field Programmable Gate Array"). The data processing device 10 may further include a system memory and a system bus that couples various system components including the system memory to the processing unit. The system bus may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. The system memory may include computer storage media in the form of volatile and/or non-volatile memory such as read only memory (ROM), random access memory (RAM) and flash memory. The special-purpose software may be stored in the system memory, or on other removable/non-removable volatile/non-volatile computer storage media which is included in or accessible to the computing device, such as magnetic media, optical media, flash memory cards, digital tape, solid state RAM, solid state ROM, etc. The data processing device 10 may include one or more communication interfaces, such as a serial interface, a parallel interface, a USB interface, a wireless interface, a network adapter, etc., as well as one or more data acquisition devices, such as an A/D converter. The special-purpose software may be provided to the data processing device 10 on any suitable computer-readable medium, including a record medium, a read-only memory, or an electrical carrier signal.

4. Tomographic techniques
Tomographic reconstruction, which is well-known per se, is based on the mathematics describing the Radon transform and its inverse. The following theoretical discussion is limited to the 2D Radon transform. The general concept of tomography is to do imaging of a medium by measuring line integrals through the medium for a large set of angles and positions. The line integrals are measured through the image plane. To find the inverse, i.e. the original image, many algorithms use the so-called Projection-Slice Theorem.
Several efficient algorithms have been developed for tomographic reconstruction, e.g. Filtered Back Projection, FFT-based algorithms, ART (Algebraic Reconstruction Technique), SART (Simultaneous Algebraic Reconstruction Technique), etc. Filtered Back Projection (FBP) is a widely used algorithm, and there are many variants and extensions thereof. Below, a brief outline of the underlying mathematics for FBP is given, for the sole purpose of facilitating the following discussion about the inventive concept and its merits.
4.1 Projection-Slice Theorem
Many tomographic reconstruction techniques make use of a mathematical theorem called the Projection-Slice Theorem. This theorem states that given a two-dimensional function f(x, y), the one- and two-dimensional Fourier transforms $\mathcal{F}_1$ and $\mathcal{F}_2$, a projection operator $\mathcal{R}$ that projects a two-dimensional (2D) function onto a one-dimensional (1D) line, and a slice operator $S_1$ that extracts a central slice of a function, the following calculations are equal:

$\mathcal{F}_1 \mathcal{R} f(x, y) = S_1 \mathcal{F}_2 f(x, y)$

This relation is illustrated in Fig. 5. The right-hand side of the equation above essentially extracts a 1D line of the 2D Fourier transform of the function f(x, y). The line passes through the origin of the 2D Fourier plane, as shown in the right-hand part of Fig. 5. The left-hand side of the equation starts by projecting (i.e. integrating along 1D lines in the projection direction $p$) the 2D function onto a 1D line (orthogonal to the projection direction $p$), which forms a "projection" that is made up of the projection values for all the different detection lines extending in the projection direction $p$. Thus, taking a 1D Fourier transform of the projection gives the same result as taking a slice from the 2D Fourier transform of the function f(x, y). In the context of the present disclosure, the function f(x, y) corresponds to the attenuation coefficient field a(x) (generally denoted "attenuation field" herein) to be reconstructed.

4.2 Radon transform
First, it can be noted that the attenuation vanishes outside the touch surface. For the following mathematical discussion, we define a circular disc that circumscribes the touch surface, $\Omega_r = \{x : |x| \le r\}$, with the attenuation field set to zero outside of this disc. Further, the projection value for a given detection line is given by:

$\bar{g}(\theta, s) = (\mathcal{R}a)(\theta, s) = \int_{s = x \cdot \theta} a(x)\,dx$

Here, we let $\theta = (\cos\varphi, \sin\varphi)$ be a unit vector denoting the direction normal to the detection line, and s is the shortest distance (with sign) from the detection line to the origin (taken as the centre of the screen, cf. Fig. 5). Note that $\theta$ is perpendicular to the above-mentioned projection direction vector $p$. This means that we can denote $\bar{g}(\theta, s)$ by $g(\varphi, s)$, since the latter notation more clearly indicates that g is a function of two variables and not a function of one scalar and one arbitrary vector. Thus, the projection value for a detection line could be expressed as $g(\varphi, s)$, with $\varphi$ being the rotation angle of the detection line to a reference direction and s the distance of the detection line to an origin. We let the angle span the range $0 \le \varphi < \pi$, and since the attenuation field has support in $\Omega_r$, it is sufficient to consider s in the interval $-r \le s \le r$. The set of projections collected for different angles and distances may be stacked together to form a "sinogram".
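As a numerical illustration of the theorem (not part of the patent text), the sketch below compares the 1D FFT of a projection of a simple test image with the corresponding central slice of its 2D FFT; the image, grid size and projection angle (0, i.e. summation along y) are arbitrary choices.

```python
import numpy as np

# Simple test "attenuation field": a rectangular bump on a 64x64 grid.
n = 64
f = np.zeros((n, n))
f[24:40, 28:36] = 1.0

# Projection at angle 0: integrate along the y axis (rows).
projection = f.sum(axis=0)

# 1D Fourier transform of the projection ...
slice_from_projection = np.fft.fftshift(np.fft.fft(projection))

# ... equals the central slice (zero vertical frequency) of the 2D transform.
F2 = np.fft.fftshift(np.fft.fft2(f))
central_slice = F2[n // 2, :]

print(np.allclose(slice_from_projection, central_slice))   # True
```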
Our goal is now to reconstruct the attenuation field a(x) given the measured Radon transform, $g = \mathcal{R}a$. The Radon transform operator is not invertible in the general sense. To be able to find a stable inverse, we need to impose restrictions on the variations of the attenuation field.
One should note that the Radon transform is the same as the above-mentioned projection operator in the Projection-Slice Theorem. Hence, taking the 1D Fourier transform of $g(\varphi, s)$ with respect to the s variable results in central slices from the 2D Fourier transform of the attenuation field a(x).

4.3 Continuous vs. discrete tomography
The foregoing sections 4.1-4.2 describe the mathematics behind tomographic reconstruction using continuous functions and operators. However, in a real world system, the measurement data represents a discrete sampling of functions, which calls for modifications of the algorithms. For a thorough description of such modifications, we refer to the mathematical literature, e.g. "The Mathematics of Computerized Tomography" by Natterer, and "Principles of Computerized Tomographic Imaging" by Kak and Slaney.
One important modification is a need for a filtering step when operating on discretely sampled functions. The need for filtering can intuitively be understood by considering the Projection-Slice Theorem in a system with discrete sampling points and angles, i.e. a finite set of detection lines. According to this theorem, for each angle $\varphi$ we take the 1D discrete Fourier transform of $g(\varphi, s)$ with respect to the s variable, which yields sampling points along a central slice through the origin of the 2D Fourier transform plane. Since all slices pass through the origin, the sampling points are densely packed at low spatial frequencies and become increasingly sparse at high spatial frequencies.

To compensate for the non-uniform distribution of sampling points in the 2D Fourier transform plane, we may increase the amount of information about the high spatial frequencies. This can be achieved by filtering, which can be expressed as a multiplication/weighting of the data points in the 2D Fourier transform plane. This is exemplified in the right-hand part of Fig. 6, where the amplitude of the high spatial frequencies is increased and the amplitude of the low frequency components is decreased. This multiplication in the 2D Fourier transform plane can alternatively be expressed as a convolution in the spatial domain, i.e. with respect to the s variable, using the inverse Fourier transform of the weighting function. The multiplication/weighting function in the 2D Fourier transform plane is rotationally symmetric. Thus, we can make use of the Projection-Slice Theorem to get the corresponding 1D convolution kernel in the projection domain, i.e. the kernel we should use on the projections gathered at specific angles. This also means that the convolution kernel will be the same for all projection angles.

4.4 Filtering and back projection
As explained in the foregoing section, the sinogram data is first filtered and then back-projected. The filtering can be done by multiplication with a filter $\widehat{w}_b$ in the Fourier domain. There are also efficient ways of implementing the filtering as a convolution by a filter $w_b$ in the spatial domain. In one embodiment, the filtering is done on the s parameter only, and may be described by the following expression:

$(W_b * f)(x) = \mathcal{R}^{\#}\big(w_b(s) * g(\varphi, s)\big)$

where $\mathcal{R}^{\#}$ is a back projection operator defined as:

$(\mathcal{R}^{\#}v)(x) = \int_0^{\pi} v(\theta, x \cdot \theta)\,d\varphi$

and $W_b(x) \equiv \mathcal{R}^{\#} w_b$. The idea is to choose the $w_b(s)$ filter such that $W_b(x) \approx \delta(x)$. This is typically accomplished by working in the Fourier domain, taking $\widehat{W}_b$ as a step function supported in a circular disc of radius b, and letting $b \to \infty$. The corresponding filter $w_b(s)$ in the spatial domain is obtained by the inverse 1D Fourier transform, with continuous extension across the singularity at s = 0. In the literature, several variants of the filter can be found, e.g. Ram-Lak, Shepp-Logan, Cosine, Hann, and Hamming.

5. Standard geometries for tomographic processing
Tomographic processing is generally based on standard geometries. This means that the mathematical algorithms presume a specific geometric arrangement of the detection lines in order to attain a desired precision and/or processing efficiency. The geometric arrangement may be selected to enable a definition of the projection values in a 2D sample space, inter alia to enable the above-mentioned filtering in one of the dimensions of the sample space before the back projection.
In conventional tomography, the measurement system (i.e. the location of the incoupling points and/or outcoupling points) is controlled or set to yield the desired geometric arrangement of detection lines. Below follows a brief presentation of the two major standard geometries used in conventional tomography, e.g. in the medical field.

5.1 Parallel geometry
The parallel geometry is exemplified in Fig. 7. Here, the system measures projection values of a set of detection lines for a given angle $\varphi_k$. In Fig. 7, the set of detection lines D are indicated by dashed arrows, and the resulting projection is represented by the function $g(\varphi_k, s)$. The measurement is then repeated for a set of projection angles spanning the range $0 \le \varphi < \pi$. When all the projections are collected, they can be arranged side by side in a data structure to form a sinogram. The sinogram is generally given in a 2D sample space defined by dimensions that uniquely assign each projection value to a specific detection line. In the case of a parallel geometry, the sample space is typically defined by the angle parameter $\varphi$ and the distance parameter s.
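For concreteness, a small helper (not from the patent) that maps a detection line, given by its incoupling and outcoupling points on the panel, to the (phi, s) coordinates of this parallel-geometry sample space; the origin is assumed to be at the centre of the touch surface.

```python
import numpy as np

def line_to_phi_s(p_in, p_out):
    """Map a detection line from endpoint coordinates to (phi, s).

    p_in, p_out : (x, y) incoupling and outcoupling points, origin at the
                  centre of the touch surface.
    phi is the angle of the line normal, folded into [0, pi); s is the signed
    distance of the line from the origin.
    """
    p_in, p_out = np.asarray(p_in, float), np.asarray(p_out, float)
    d = p_out - p_in
    d /= np.linalg.norm(d)
    normal = np.array([-d[1], d[0]])          # unit normal to the detection line
    s = float(normal @ p_in)                  # signed distance to the origin
    phi = np.arctan2(normal[1], normal[0])
    if phi < 0.0 or phi >= np.pi:             # fold into [0, pi), flipping the sign of s
        phi = phi - np.pi if phi >= np.pi else phi + np.pi
        s = -s
    return phi, s

# Example: a horizontal detection line passing 3 units above the origin.
print(line_to_phi_s((-5.0, 3.0), (5.0, 3.0)))   # (approx. pi/2, 3.0)
```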
Below, the use of a parallel geometry in tomographic processing is further exemplified in relation to a known attenuation field shown in Fig. 8A, in which the right-end bar indicates the coding of gray levels to attenuation strength (%). Fig. 8B is a graph of the projection values as a function of distance s for the projection obtained at $\varphi = \pi/6$ in the attenuation field of Fig. 8A. Fig. 8C illustrates the sinogram formed by all projections collected from the attenuation field, where the different projections are arranged as vertical sequences of values. For reference, the projection shown in Fig. 8B is marked as a dashed line in Fig. 8C.
The filtering step, i.e. convolution, is now done with respect to the s variable, i.e. in the vertical direction in Fig. 8C. As mentioned above, there are many different filter kernels that may be used in the filtering. Fig. 8D illustrates the central part of a discrete filter kernel $w_b$ that is used in the following examples. As shown, the absolute magnitude of the filter values quickly drops off from the center of the kernel (k=0). In many practical implementations, it is possible to use only the most central parts of the filter kernel, thereby decreasing the number of processing operations in the filtering step.
Since the filtering step is a convolution, it may be computationally more efficient to perform the filtering step in the Fourier domain. For each column of values in the $\varphi$-s-plane, a discrete 1D Fast Fourier transform is computed. Then, the thus-transformed values are multiplied by the 1D Fourier transform of the filter kernel. The filtered sinogram is then obtained by taking the inverse Fourier transform of the result. This technique can reduce the complexity of the filtering step from $O(n^2)$ down to $O(n \cdot \log_2(n))$ for each $\varphi$, where n is the number of sample points (projection values) with respect to the s variable.
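A small sketch of this Fourier-domain implementation, filtering every column of the sinogram at once; the ramp-style frequency weights are an illustrative choice of kernel.

```python
import numpy as np

def filter_sinogram_fft(sinogram):
    """Filter each projection (column in the phi-s plane) in the Fourier domain.

    sinogram : array of shape (n_s, n_phi).
    Multiplying by |frequency| in the Fourier domain replaces the O(n^2)
    convolution with an O(n log n) operation per projection angle.
    """
    n_s = sinogram.shape[0]
    ramp = np.abs(np.fft.fftfreq(n_s))            # 1D Fourier transform of a ramp kernel
    spectrum = np.fft.fft(sinogram, axis=0)       # one FFT per column
    return np.real(np.fft.ifft(spectrum * ramp[:, None], axis=0))
```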
Fig. 8E shows the filtered sinogram that is obtained by operating the filter kernel in Fig. 8D on the sinogram in Fig. 8C.
The next step is to apply the back projection operator. Fundamental to the back projection operator is that a single position in the attenuation field is represented by a sine function in the sinogram. Thus, to reconstruct each individual attenuation value in the attenuation field, the back projection operator integrates the values of the filtered sinogram along the corresponding sine function. To illustrate this concept, Fig. 8E shows three sine functions P1-P3 that correspond to three different positions in the attenuation field of Fig. 8A.
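A minimal sketch of this back projection step is given below (Python/NumPy, exemplary names, assuming a regular φ-s grid). For each reconstruction position it sums the filtered sinogram along the corresponding sine curve s = x·cos φ + y·sin φ, using the linear interpolation in s that is discussed next.

```python
import numpy as np

def back_project(filtered, phis, s_axis, xs, ys):
    """Back projection for a parallel geometry (sketch).
    filtered: filtered sinogram of shape (len(s_axis), len(phis))
    phis:     projection angles (radians)
    s_axis:   equidistant s values of the sinogram rows
    xs, ys:   coordinates of the reconstruction grid"""
    ds = s_axis[1] - s_axis[0]
    dphi = phis[1] - phis[0]
    X, Y = np.meshgrid(xs, ys)
    recon = np.zeros_like(X, dtype=float)
    for j, phi in enumerate(phis):
        s = X * np.cos(phi) + Y * np.sin(phi)            # the sine curve of each position
        idx = (s - s_axis[0]) / ds
        i0 = np.clip(np.floor(idx).astype(int), 0, len(s_axis) - 2)
        z = np.clip(idx - i0, 0.0, 1.0)                  # normalized distance, 0 <= z < 1
        recon += (1.0 - z) * filtered[i0, j] + z * filtered[i0 + 1, j]
    return recon * dphi
```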
Since the location of a reconstructed attenuation value will not coincide exactly with all of the relevant detection lines, it may be necessary to perform linear interpolation with respect to the s variable where the sine curve crosses between two projection values. Another approach, which is less computationally efficient, is to compute the filtered values at the crossing points by applying individual filtering kernels. The interpolation is exemplified in Fig. 8F, which is an enlarged view of Fig. 8E and in which x indicates the different filtered projection values of the filtered sinogram. The contribution to the back projection value for the sine curve P1 from the illustrated small part of the φ-s-plane becomes:

(1 − z_26)·(w * g)_{26,176} + z_26·(w * g)_{26,177} + (1 − z_27)·(w * g)_{27,175} + z_27·(w * g)_{27,176} + (1 − z_28)·(w * g)_{28,173} + z_28·(w * g)_{28,174}

The weights z_i in the linear interpolation are given by the normalized distance from the sine curve to the projection value, i.e. 0 ≤ z_i < 1.

Fig. 8G shows the reconstructed attenuation field that is obtained by applying the back projection operator on the filtered sinogram in Fig. 8E. It should be noted that the filtering step is important for the reconstruction to yield useful data. Fig. 8H shows the reconstructed attenuation field that is obtained when the filtering step is omitted.

5.2 Fan geometry

Another major type of tomography arrangement is based on sampling of data from a single emitter, instead of measuring parallel projections at several different angles. This so-called fan geometry is exemplified in Fig. 9. As shown, the emitter emits rays in many directions, and sensors are placed to measure the received energy from this single emitter on a number of detection lines D, illustrated by dashed lines in Fig. 9. Thus, the measurement system collects projection values for a set of detection lines D extending from the emitter when located at angle β_i. In the illustrated example, each detection line D is defined by the angular location β of the emitter with respect to a reference angle (β=0 coinciding with the x-axis), and the angle α of the detection line D with respect to a reference line (in this example, a line going from the emitter through the origin). The measurement system is then rotated slightly (δβ) around the origin of the x,y coordinate system in Fig. 9, to collect a new set of projection values for this new angular location. It should be noted that the rotation might not be limited to 0 ≤ β < π, but could be extended, as is well-known to the skilled person. The following example is given for a full rotation: 0 ≤ β < 2π.
Fan beam tomographs may be categorized as equiangular or equidistant. Equi-angular systems collect information at the same angle (as seen from the emitter)between neighboring sensors. Equiangular systems may be configured with emitter andsensors placed on a circle, or the sensors may be non-equidistantly arranged on a lineopposite to the emitter. Equidistant systems collect information at the same distancebetween neighboring sensors. Equidistant systems may be configured with sensorsplaced on a line opposite to the emitter. The following example is given for anequiangular system, and based on the known attenuation field shown in Fig. 8A. For athorough description of the different types of fan (beam) geometries, we refer to theliterature.
Fig. 10A illustrates the sinogram formed by all projections collected from the attenuation field in Fig. 8A, by the measurement system outlined in Fig. 9. In Fig. 10A, the different projections are arranged as vertical sequences of values. It could be noted that the sinogram is given in a 2D sample space defined by the angular emitter location parameter β and the angular direction parameter α.

In an exemplifying tomographic processing of the sinogram in Fig. 10A, an angle correction is first applied on all collected projections according to:

g'(α_k, β_i) = g(α_k, β_i)·cos(α_k).

The filtering step, i.e. convolution, is now done with respect to the α_k variable of the angle-corrected sinogram, i.e. corresponding to the vertical direction in the angle-corrected sinogram. As mentioned above, there are many different filter kernels that may be used in the filtering. The following example uses a filter kernel similar to the one shown in Fig. 8D. For example, many symmetric high-pass filters with a coefficient sum equal to zero may enable adequate reconstruction of the attenuation field. However, a careful choice of filter may be needed in order to reduce reconstruction artifacts. The result may also be improved by applying a smoothing filter in this step, as is well-known in the art. Like in the parallel geometry, the filtering may involve a convolution in the spatial domain or a multiplication in the Fourier domain.
The filtered sinogram obtained by operating the filter kernel on the angle-corrected sinogram is shown in Fig. 10B.
The next step is to apply the back projection operator. The back projectionoperator is different from the one used in the above-described parallel geometry. In the fan geometry, the back projection step may be given by the expression: (atom = ßß zgm - z) -v + z - vofkaßlo).ß: l where D,- is the position of the source giving the ßi projection, Z is a parameterthat describes the linear interpolation between the detection lines and a ray that extendsfrom the source through the location of the respective attenuation value to bereconstructed.
Fig. 10C shows the reconstructed attenuation field that is obtained by applying the back projection operator on the filtered sinogram in Fig. 10B.

5.3 Re-sorting algorithms

Another approach to do the filtered back projection for a fan geometry is to choose the locations of emitters and sensors such that it is possible to re-sort the data into a parallel geometry. Generally, such re-sorting algorithms are designed to achieve regularly spaced data samples in the φ-s-plane. More information about re-sorting algorithms is e.g. found in "Principles of Computerized Tomographic Imaging" by Kak and Slaney.

To further explain the concept of re-sorting, Fig. 11 shows the data samples (projection values) collected from two different emitters (i.e. two different values of β) in an equiangular fan beam tomograph. The data samples are mapped to a φ-s-plane. It can be noted that the projection values obtained from a single emitter do not show up as a straight vertical line with respect to the s variable. It can also be seen that the φ values differ only by a constant, and that the s values are identical for the two different projections. One re-sorting approach is thus to collect projection values that originate from detection lines with the same φ values (i.e. from different emitters) and let these constitute a column in the φ-s-plane. However, this leads to a non-uniform spacing of the s values, which may be overcome by interpolating (re-sampling) the projection values with respect to the s variable. It should be noted that this procedure is a strictly 1D interpolation and that all columns undergo the same transform. It should also be noted that this procedure transforms one standard tomography geometry into another standard tomography geometry.

In order for the re-sorting algorithms to work, it is essential (as stated in the literature) that δβ = δα, i.e. the angular rotation between two emitter locations is the same as the angular separation between two detection lines. Only when this requirement is fulfilled will the projection values form columns with respect to the s variable.

6. Use of tomographic processing for touch determination

Fig. 12A illustrates the sampling points (corresponding to detection lines, and thus to measured projection values) in the φ-s-plane for the interleaved system shown in Fig. 2A. Due to the irregularity of the sampling points, it is difficult to apply the above-described filter. The irregularity of the sampling points also makes it difficult to apply a re-sorting algorithm.
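To indicate how such a plot may be produced, the sketch below (Python/NumPy, exemplary names) maps one actual detection line, given by the coordinates of its incoupling and outcoupling points, to a sampling point in the φ-s-plane:

```python
import numpy as np

def line_to_phi_s(p_in, p_out):
    """Map a detection line, given by the coordinates of its incoupling point
    p_in = (x0, y0) and outcoupling point p_out = (x1, y1), to an angle phi and
    a signed distance s, i.e. to a sampling point in the phi-s-plane."""
    x0, y0 = p_in
    x1, y1 = p_out
    theta = np.arctan2(y1 - y0, x1 - x0)       # direction of the detection line
    phi = (theta + np.pi / 2) % (2 * np.pi)    # angle of the line normal, in [0, 2*pi)
    s = x0 * np.cos(phi) + y0 * np.sin(phi)    # signed distance from the origin to the line
    # Optionally fold phi into [0, pi): a line rotated by pi is the same line,
    # but the sign of s must then be flipped.
    if phi >= np.pi:
        phi, s = phi - np.pi, -s
    return phi, s

# e.g. sampling_points = [line_to_phi_s(e_xy, d_xy) for e_xy, d_xy in detection_lines]
```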
In Fig. 12A, the solid lines indicate the physical limits of the touch surface. It can be noted that the angle φ actually spans the range from 0 to 2π, since the incoupling and outcoupling points extend around the entire perimeter. However, a detection line is the same when rotated by π, and the projection values can thus be rearranged to fall within the range of 0 to π. This rearrangement is optional; the data processing can be done in the full range of angles with a correction of some constants in the back projection function.
When comparing the interleaved arrangement in Fig. 2A with the fan geometry in Fig. 9, we see that the angular locations β_i are not equally spaced, and that the angular directions α are neither equiangular nor equidistant. Also, the values attained by α are different for different β_i. The different β_i values for the interleaved arrangement are shown in Fig. 12B. In an ideal fan beam tomograph, this plot would be a straight line. The step change at emitter 23 is caused by the numbering of the emitters (in this example, the emitters are numbered counter-clockwise starting from the lower-left corner in Fig. 2A). Fig. 12C exemplifies the variation in α values for emitter 10 (marked with crosses) and emitter 14 (marked with circles) in Fig. 2A. In an ideal equiangular fan beam tomograph, this plot would result in two straight lines, with a separation in the vertical direction arising from the numbering of the sensors. Instead, Fig. 12C shows a lack of regularity for both the individual emitter and between different emitters. Another aspect is that the fan geometry assumes that the source is positioned, for all projections, at the same distance from the origin, which is not true for an interleaved arrangement around a non-circular touch surface.
Fig. 12D illustrates the sampling points in the φ-s-plane for the non-interleaved system shown in Fig. 2B. Apart from the irregularity of sampling points, there are also large portions of the φ-s-plane that lack sampling points due to the non-interleaved arrangement of incoupling and outcoupling points.
Thus, it is not viable to apply a filter directly on the sampling points mapped to a sample space such as the φ-s-plane or the β-α-plane, and the sampling points cannot be re-sorted to match any standard tomography geometry. This problem is overcome by the re-calculation step (42 in Fig. 4), which processes the projection values of the sampling points for generation of projection values for an updated set of sampling points. The updated set of sampling points represents a corresponding set of fictitious detection lines. These fictitious detection lines have a location on the touch surface that matches a standard geometry, typically the parallel geometry or the fan geometry. The generation of projection values of an updated set of sampling points may be achieved by interpolating the original sampling points.
The objective of the interpolation is to find an interpolation function that canproduce interpolated values at specific interpolation points in the sample space given aset of measured projection values at the original sampling points. The interpolationpoints, possibly together with part of the original sampling points, form the above-mentioned updated set of sampling points. This updated set of sampling points isgenerated to be located in accordance with, for instance, the parallel geometry or the fangeometry. The density of the updated set of sampling points is preferably similar to theaverage density of the original sampling points in the sample space.
Many different interpolating functions can be used for this purpose, i.e. to interpolate data points on a two-dimensional grid. Input to such an interpolation function is the original sampling points in the sample space as well as the measured projection value for each original sampling point. Most interpolating functions involve a linear operation on the measured projection values. The coefficients in the linear operation are given by the known locations of the original sampling points and the interpolation points in the sample space. The linear operator may be pre-computed and then applied on the measured projection values in each sensing instance (cf. iteration of steps 40-48 in Fig. 4). Some non-limiting examples of suitable interpolation functions include Delaunay triangulation and other types of interpolation using triangle grids, bicubic interpolation, e.g. using spline curves or Bezier surfaces, Sinc/Lanczos filtering, nearest-neighbor interpolation, and weighted average interpolation.
The following examples are based on Delaunay triangulation, where the sampling points are placed at the corners of a mesh of non-overlapping triangles. The values of the interpolation points are linearly interpolated in the triangles. The triangles can be computed using the well-known Delaunay algorithm. To achieve triangles with reduced skewness, it is usually necessary to rescale the dimensions of the sample space (φ, s and β, α, respectively) to essentially the same length, before applying the Delaunay triangulation algorithm.
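A non-limiting sketch of this re-calculation is given below (Python, with SciPy's Delaunay-based linear interpolator standing in for the triangulation described above; grid sizes and names are exemplary):

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

def match_to_parallel_grid(phi, s, g, n_phi=64, n_s=64):
    """Interpolate measured projection values g, given at irregular sampling
    points (phi, s), onto a regular phi-s grid of interpolation points.
    Both dimensions are rescaled to comparable length before triangulation."""
    phi_scale, s_scale = np.pi, (s.max() - s.min())
    points = np.column_stack((phi / phi_scale, s / s_scale))
    interp = LinearNDInterpolator(points, g, fill_value=0.0)   # Delaunay + linear
    phi_new = np.linspace(0.0, np.pi, n_phi, endpoint=False)
    s_new = np.linspace(s.min(), s.max(), n_s)
    P, S = np.meshgrid(phi_new, s_new)
    matched = interp(P / phi_scale, S / s_scale)               # shape (n_s, n_phi)
    return phi_new, s_new, matched
```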
The interpolation function will be able to produce output values for any given position in the sample space. However, the frequency information in the updated set of sampling points will be limited according to the density of original sampling points in the sample space. Thus, wherever the original density is high, the updated set of sampling points can mimic high frequencies present in the sampled data. Wherever the original density is low, as well as where there are large gaps in the sample space, the updated set will only be able to produce low frequency variations. Non-interleaved arrangements (see Fig. 2B) will produce a sample space with one or more contiguous regions (also denoted "gap regions") that lack sampling points (see Fig. 12D). These gap regions may be left as they are, or be populated by interpolation points, or may be handled otherwise, as will be explained below in relation to a number of examples.
The following examples will illustrate re-calculation of sampling points into a parallel geometry and a fan geometry, respectively. Each example is based on a numerical simulation, starting from a reference image that represents a known attenuation field on the touch surface. Based on this known attenuation field, the projection values for all detection lines have been estimated and then used in a tomographic reconstruction according to steps 40-46 in Fig. 4, to produce a reconstructed attenuation field. Thus, the estimated projection values are used as "measured projection values" in the following examples.

In the examples, two different merit values are used for comparing the quality of the reconstructed attenuation fields for different embodiments. The first merit value m1 is defined as:

m1 = Σ f / Σ |f − f#|,

where f is the reference image (i.e. the known attenuation field) and f# is the reconstructed attenuation field. The first merit value intends to capture the similarity between the original image and the reconstructed image.
The second merit value m2 is defined as:

m2 = Σ f / Σ_{f=0} |f − f#|,

i.e. the denominator only includes absolute differences in the regions where the attenuation values are zero in the reference image. The second merit value thus intends to capture the noise in the reconstructed image by analyzing the regions of the image where there should be no attenuation present.

6.1 Re-calculation into a parallel geometry

The following examples will separately illustrate the re-calculation into a standard parallel geometry for an interleaved arrangement and for a non-interleaved arrangement. Since the re-calculation is made for a parallel geometry, the following examples are given for processing in the φ-s-plane.

6.1.1 Example: interleaved arrangement

This example is given for the interleaved arrangement shown in Fig. 2A, assuming the reference image shown in Fig. 13. The reference image is thus formed by five touching objects 7 of different size and attenuation strength that are distributed on the touch surface 1. For reasons of clarity, Fig. 13 also shows the emitters 2 and sensors 3 in relation to the reference image.
Fig. 14A is a plan view of the resulting sample space, where a mesh of non-overlapping triangles has been adapted to the sampling points so as to provide a two-dimensional interpolation function. Fig. 14B is a close-up of Fig. 14A to illustrate the sampling points (stars) and the Delaunay triangulation (dotted lines extending between the sampling points). Fig. 14B also illustrates the interpolation points (circles). Thus, the values of the interpolation points are calculated by operating the Delaunay triangulation on the projection values in the sampling points. In the illustrated example, the interpolation points replace the sampling points in the subsequent calculations. In other words, the sinogram formed by the measured projection values is replaced by an interpolated sinogram formed by interpolated projection values. Thereby, it is possible to obtain a uniform density of interpolation points across the sample space, if desired. Each interpolation point corresponds to a fictitious detection line that extends across the touch surface in accordance with a parallel geometry. Thus, the interpolation is designed to produce a set of fictitious detection lines that match a parallel geometry, which allows a reconstruction of the attenuation field using standard algorithms.
As shown, the interpolation points are arranged as columns (i.e. with respect to the s variable) in the sample space, allowing subsequent 1D filtering with respect to the s variable. In this example, the interpolation points are arranged with equidistant spacing with respect to the s variable, which has been found to improve the reconstruction quality and facilitate the subsequent reconstruction processing, e.g. the 1D filtering. Preferably, the inter-column distance is the same for all columns since this will make the back projection integral perform better.
In the interpolated sinogram, each φ value with its associated s values (i.e. each column) corresponds to a set of mutually parallel (fictitious) detection lines, and thus the data is matched to a parallel geometry in a broad sense.
Fig. 14C illustrates the interpolated sinogram, i.e. the interpolated projection values that have been calculated by operating the interpolation function in Fig. 14A on the measured projection values. After filtering the interpolated sinogram with respect to the s variable, using the filter in Fig. 8D, and applying the back projection operator on the thus-filtered sinogram, a reconstructed attenuation field is obtained as shown in Fig. 14D, having merit values: m1=1.3577 and m2=3.3204.
Fig. 15 illustrates an alternative way of generating the updated set of sampling points, again using Delaunay triangulation (dotted lines). Here, the original sampling points (stars), in the sample space, are kept and supplemented by interpolation points (circles). Around each sampling point, a line of interpolation points is generated with respect to the s variable. This interpolation can of course use the same principles as the foregoing example. After the interpolation, the 1D filtering is performed locally on each individual sampling point and its supplementary interpolation points. In this variant, the updated set of sampling points will have the same variations in density within the sample space as the set of original sample points, and it may therefore be advantageous to adapt the bandwidth of the 1D filter to the local density in the sample space, i.e. to use different bandwidths for different sampling points. To achieve different bandwidths, we may change the distance, with respect to the s variable, between the supplementary points prior to computing the 1D filtering. After the filtering, the back projection operator is applied on the resulting filtered data to compute a reconstructed attenuation field.
Further variants of generating the updated set of sampling points are of coursepossible. For example, the above interpolation techniques may be used concurrently ondifferent parts of the sample space, or certain sampling points may be retained whereasothers are replaced by interpolated points in the updated set of sampling points.
As will be explained in the following, the generation of the updated set ofsampling points may be designed to allow detection lines to be removed dynamicallyduring operation of the touch-sensitive apparatus. For example, if an emitter or a sensorstarts to perform badly, or not at all, during operation of the apparatus, this may have asignificant impact on the reconstructed attenuation field. It is conceivable to provide theapparatus with the ability of identifying faulty detection lines, e. g. by monitoringtemporal changes in output signal of the light sensors, and specifically the individualprojection signals. The temporal changes may e. g. show up as changes in the energy/-attenuation/transmission or the signal-to-noise ratio (SNR) of the projection signals.Any faulty detection line may be removed from the reconstruction. Such a touch-sensitive apparatus is disclosed in Applicant's US provisional application No.61/288416, which was filed on December 21, 2009 and which is incorporated herein bythis reference. To fully benefit from such functionality, the touch-sensitive apparatusmay be designed to have slightly more sensors and/or emitters than necessary to achieveadequate performance, such that it is possible to discard a significant amount of theprojection values, for example 5%, without significantly affecting performance.
The re-calculation step (cf. step 42 in Fig. 4) may be configured to dynamically(i.e. for each individual sensing instance) account for such faulty detection lines by,whenever a detection line is marked as faulty, removing the corresponding samplingpoint in the sample space and re-computing the interpolation function around thatsampling point. Thereby, the density of sampling points is reduced locally (in the (p-s-plane), but the reconstruction process will continue to work adequately while discardinginformation from the faulty detection line.
This is further illustrated in Figs 16-17. Fig. 16A is a close-up of a two-dimensional interpolation function formed as an interpolation grid in the sample space. Assume that this interpolation function is stored for use in the re-calculation step for a complete set of sampling points. Also assume that the sampling point indicated by a circle in Fig. 16A corresponds to a detection line which is found to be faulty. In such a situation, the sampling point is removed, and the interpolation function is updated or recomputed based on the remaining sampling points. The result of this operation is shown in Fig. 16B. As shown, the change will be local to the triangles closest to the removed sampling point.
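A simple sketch of this exclusion mechanism is given below (Python/SciPy, exemplary names); for brevity the whole interpolation function is recomputed from the remaining sampling points, whereas an implementation may instead update only the affected triangles:

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

def rebuild_without_faulty(phi, s, g, faulty_mask):
    """Drop the sampling points whose detection lines are marked as faulty and
    recompute the Delaunay-based interpolation function from the rest."""
    keep = ~np.asarray(faulty_mask, dtype=bool)
    points = np.column_stack((phi[keep], s[keep]))
    return LinearNDInterpolator(points, g[keep], fill_value=0.0)

# e.g. exclude every detection line that originates from emitter m:
# interp = rebuild_without_faulty(phi, s, g, emitter_index == m)
```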
If an emitter is deemed faulty, all detection lines originating from this emitter should be removed. This corresponds to removal of a collection of sampling points and a corresponding update of the interpolation function. Fig. 16C illustrates the interpolation function in Fig. 16A after such updating, and Fig. 16D illustrates the updated interpolation function for the complete sample space. The removal of the detection lines results in a band of lower density (indicated by arrow L1), but the reconstruction process still works properly.
Instead, if a sensor is deemed faulty, all detection lines originating from this sensor should be removed. This is done in the same way as for the faulty emitter, and Fig. 17A illustrates the interpolation function in Fig. 16A after such updating. Fig. 17B illustrates the updated interpolation function for the complete sample space. The removal of the detection lines again results in a band of lower density (indicated by arrow L2), but the reconstruction process still works properly.

6.1.2 Example: non-interleaved arrangement

The non-interleaved arrangement generally results in a different set of sampling points than the interleaved arrangement, as seen by comparing Fig. 12A and Fig. 12D. However, there is no fundamental difference between the interpolation solutions for these arrangements, and all embodiments and examples of reconstruction processing described above in relation to the interleaved arrangement are equally applicable to the non-interleaved arrangement. The following example therefore focuses on different techniques for handling the gap regions, i.e. regions without sampling points, which are obtained in a non-interleaved arrangement.
The following example is given for the non-interleaved arrangement shown inFig. 2B, assuming a reference image as shown in Fig. 18, i.e. the same reference imageas in Fig. 13.
Fig. 19A is a plan view of the resulting interpolation function, where a mesh of non-overlapping triangles has been adapted to the sampling points in the sample space.
Thus, this example forms the interpolation function directly from the original sampling points. Since the sample space contains contiguous gap regions (see Fig. 12D), the resulting interpolation function is undefined in these gap regions, or stated differently, the values at the implicit sampling points in the gap regions are set to zero. The interpolation function in Fig. 19A may be used to generate an updated set of sampling points, like in the foregoing examples. Fig. 19B illustrates the reconstructed attenuation field that is obtained by calculating the interpolated projection values for the reference image in Fig. 18, operating the 1D filter on the result, and applying the back projection operator on the filtered data. The reconstructed attenuation field has merit values: m1=0.7413 and m2=1.2145.
An alternative approach to handling the gap regions is to extend the interpolation function across the gap regions, i.e. to extend the mesh of triangles over the gap regions, as shown in Fig. 20A. The interpolation function in Fig. 20A may thus be used to generate desirable interpolation points within the entire sample space, i.e. also in the gap regions. Fig. 20B illustrates the interpolated projection values calculated for the reference image in Fig. 18. It can be seen that projection values are smeared out into the gap regions in the φ-s-plane. The reconstructed attenuation field (not shown), obtained after 1D filtering and back projection, has merit values: m1=0.8694 and m2=1.4532, i.e. slightly better than Fig. 19B.
Yet another alternative approach is to add some border vertices to the interpolation function in the gap regions, where these border vertices form a gradual transition from the original sampling points to zero values, and to let the interpolation function be undefined/zero in the remainder of the gap regions. This results in a smoother transition of the interpolation function into the gap regions, as seen in Fig. 21A. Fig. 21B illustrates the interpolated projection values calculated for the reference image in Fig. 18. The reconstructed attenuation field (not shown), obtained after 1D filtering and back projection, has merit values: m1=0.8274 and m2=1.4434, i.e. slightly better than Fig. 19B.
All of the three above-described approaches lead to reconstructed attenuationfields of approximately the same quality. Below follows a description of a technique forimproving the quality further, by improving the estimation of sampling points in the gapregions.
This improved technique for generating estimation points in the gap regions will be described in relation to Figs 23-24. It is to be noted that this technique may also be applied to populate gaps formed by removal of faulty detection lines, as a supplement or alternative to the technique discussed in chapter 6.1.1. Generally, the estimation points may be selected to match the standard geometries, like the interpolation points, possibly with a lower density than the interpolation points. Fig. 22A illustrates the sample space supplemented with such estimation points in the gap regions. Like in the foregoing examples, an interpolation function is generated based on the sample space, in this case based on the combination of sampling points and estimation points. Fig. 22B illustrates the resulting interpolation function.
The aim is to obtain a good estimate for every added estimation point. This may be achieved by making assumptions about the touching objects, although this is not strictly necessary. For example, if it can be presumed that the touching objects are fingertips, it can be assumed that each touching object results in a top hat profile in the attenuation field with a circular or ellipsoidal contour. Unless the number of touching objects is excessive, there will exist, for each touching object, at least one detection line that interacts with this touching object only. If it is assumed that the touch profiles are essentially round, the touch profile will cause essentially the same attenuation of all detection lines that are affected by the touch profile.
The value at each estimation point in the φ-s-plane (marked with diamonds in Fig. 22A) represents a line integral along a specific line on the touch surface. Since the estimation points are located in the gap region, there is no real (physical) detection line that matches the specific line. Thus, the specific line is a virtual line in the x-y-plane (i.e. a fictitious detection line, although it does not correspond to an interpolation point but to an estimation point). The value at the estimation point may be obtained by analyzing selected points along the virtual line in the x-y-plane. Specifically, a minimum projection value is identified for each selected point, by identifying the minimum projection value for the ensemble of detection lines (actual or fictitious) that pass through the selected point. This means that, for every analyzed point, the algorithm goes through the different detection lines passing through the point and identifies the lowest value of all these detection lines. The value of the estimation point may then be given by the maximum value of all identified minimum projection values, i.e. for the different analyzed points, along the virtual line.
To explain this approach further, Fig. 23A illustrates the original sampling points together with two estimation points EP1, EP2 indicated by circles. The estimation point EP1 corresponds to a virtual line V1, which is indicated in the reference image of Fig. 23B. The next step is to evaluate selected points along the virtual line V1. For every selected point, the projection values for all intersecting detection lines are collected. The result is shown in the two-dimensional plot of Fig. 23C, which illustrates projection values as a function of detection line (represented by its angle) and the selected points (given as position along the virtual line). The large black areas in Fig. 23C correspond to non-existing detection lines. To find the value of the estimation point EP1, the data in Fig. 23C is first processed to identify the minimum projection value (over the angles) for each selected point along the virtual line V1. The result is shown in the graph of Fig. 23D. The value of the estimation point EP1 is then selected as the maximum of these minimum projection values. Fig. 23E illustrates the values of all estimation points in Fig. 22A calculated for the reference image in Fig. 18 using this approach, together with the interpolated projection values. By comparing Fig. 23E with Fig. 20B and Fig. 21B, a significant improvement is seen with respect to the information in the gap regions of the sample space. The reconstructed attenuation field, obtained after 1D filtering and back projection, is shown in Fig. 23F and has merit values: m1=1.2085 and m2=2.5997, i.e. much better than Fig. 19B.
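The estimation of a single gap-region value may be sketched as follows (Python; the helper callables lines_through_point and projection_value are assumed to exist and are not part of the described embodiments):

```python
def estimate_gap_value(points_on_virtual_line, lines_through_point, projection_value):
    """Estimate the value of one estimation point (sketch).
    points_on_virtual_line: selected (x, y) points along the virtual line
    lines_through_point:    returns the detection lines passing (near) a point
    projection_value:       returns the projection value of a detection line"""
    minima = []
    for p in points_on_virtual_line:
        values = [projection_value(line) for line in lines_through_point(p)]
        if values:
            minima.append(min(values))      # lowest value of all lines through the point
    return max(minima) if minima else 0.0   # largest of the identified minima
```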
It is possible to improve the estimation process further. Instead of choosing the maximum among the minimum projection values, the process may identify the presence of plural touch profiles along the investigated virtual line and combine (sum, weighted sum, etc.) the maximum projection values of the different touch profiles. To explain this approach further, consider the estimation point EP2 in Fig. 23A. The estimation point EP2 corresponds to a virtual line V2, which is indicated in the reference image of Fig. 24A. Like in the foregoing example, selected points along the virtual line V2 are evaluated. The result is shown in the two-dimensional plot of Fig. 24B. Like in the foregoing example, the data in Fig. 24B is then processed to identify the minimum projection value (over the angles) for each selected point along the virtual line V2. The result is shown in the graph of Fig. 24C. This graph clearly indicates that there are two separate touch profiles on the virtual line V2. Thus, the estimation process processes the minimum projection values in Fig. 24C to identify local maxima (in this example two maxima), and sets the value of the estimation point EP2 equal to the sum of the local maxima (projection values). Fig. 24D illustrates the values of all estimation points in Fig. 22A calculated for the reference image in Fig. 18 using this approach, together with the interpolated projection values. The gap regions of the sample space are represented by relevant information. The reconstructed attenuation field, obtained after 1D filtering and back projection, is shown in Fig. 24E and has merit values: m1=1.2469 and m2=2.6589, i.e. slightly better than Fig. 23F.
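This variant may be sketched as follows (Python/SciPy, exemplary names), operating on the sequence of per-point minima such as the one shown in Fig. 24C:

```python
import numpy as np
from scipy.signal import find_peaks

def estimate_gap_value_multi(minima_along_line):
    """Variant for plural touch profiles: sum the local maxima of the
    per-point minimum projection values along the virtual line."""
    minima = np.asarray(minima_along_line, dtype=float)
    peaks, _ = find_peaks(minima)               # indices of the local maxima
    if peaks.size == 0:
        return float(minima.max()) if minima.size else 0.0
    return float(minima[peaks].sum())
```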
Fig. 25 is a flowchart of an exemplifying reconstruction process, which is a moredetailed version of the general process in Fig. 4 adapted for data processing in a touch-sensitive apparatus with a non-interleaved arrangement. The process operates on theoutput signal from the light sensor arrangement, using data stored in a system memory50, and intermediate data generated during the process. It is realized that the inter-mediate data also may be stored temporarily in the system memory 50 during theprocess. The flowchart will not be described in great detail, since the different stepshave already been explained above.
In step 500, the process samples the output signal from the light sensor arrangement. In step 502, the sampled data is processed for calculation of projection values (g). In step 504, the process reads the interpolation function (IF) from the memory 50. The interpolation function (IF) could, e.g., be designed as any one of the interpolation functions shown in Figs 19A, 20A, 21A and 22B. The process also reads "exclusion data" from the memory 50, or obtains this data directly from a dedicated process. The exclusion data identifies any faulty detection lines that should be excluded in the reconstruction process. The process modifies the interpolation function (IF) based on the exclusion data, resulting in an updated interpolation function (IF') which may be stored in the memory 50 for use during subsequent iterations. Based on the updated interpolation function (IF'), and the projection values (g), step 504 generates new projection values ("interpolation values", i) at given interpolation points. Step 504 may also involve a calculation of new projection values ("estimation values", e) at given estimation points in the gap regions, based on the updated interpolation function (IF'). Step 504 results in a matched sinogram (g'), which contains the interpolation values and the estimation values. In step 506, the process reads the filter kernel (w_b) from the memory 50 and operates the kernel in one dimension on the matched sinogram (g'). The result of step 506 is a filtered sinogram. In step 508, the process reads "subarea data" from the memory 50, or obtains this data directly from a dedicated process. The subarea data indicates the parts of the attenuation field/touch surface to be reconstructed. Based on the subarea data and the filtered sinogram, step 510 generates a reconstructed attenuation field (a), which is output, stored in memory 50, or processed further. Following step 510, the process returns to step 500.
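One sensing instance of this process may be sketched as follows (Python; the callables are placeholders for steps 500-510 and are not part of the described embodiments):

```python
def reconstruction_instance(sample_output, to_projection_values, recalc,
                            filter_1d, back_project, subarea):
    """One sensing instance of the process in Fig. 25; every step is passed in
    as a callable and all names are placeholders."""
    raw = sample_output()                   # step 500: sample the output signal
    g = to_projection_values(raw)           # step 502: projection values
    g_matched = recalc(g)                   # step 504: matched sinogram (interpolation + estimation values)
    v = filter_1d(g_matched)                # step 506: 1D filtering -> filtered sinogram
    return back_project(v, subarea)         # steps 508-510: reconstructed attenuation field
```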
It is to be understood that a similar process may be applied for data processing in a touch-sensitive apparatus with an interleaved arrangement.

6.2 Re-calculation into fan geometry

The following example will illustrate the re-calculation into a standard fan geometry for an interleaved arrangement. Since the re-calculation is made for a fan geometry, the following examples are given for the β-α-plane.

6.2.1 Example: interleaved arrangement

This example is given for the interleaved arrangement shown in Fig. 2A, assuming a reference image as shown in Fig. 13.
A first implementation of the re-calculation step (cf. step 42 in Fig. 4) will be described with reference to Fig. 26. In the first implementation, the sampled data is "squeezed" to fit a specific fan geometry. This means that the projection values obtained for the detection lines of the interleaved arrangement are re-assigned to fictitious detection lines that match a fan geometry, in this example the geometry of an equiangular fan beam tomograph. Making such a re-assignment may involve a step of finding the best guess for an equiangular spacing of the β_i values, and for the α_k values. In this example, the β_i values for the sampling points are re-interpreted to be consistent with the angles of an equiangular fan beam tomograph. This essentially means that the difference in rotation angle between the different incoupling points is considered to be the same around the perimeter of the touch surface, i.e. δβ = 2·π/M, where M is the total number of emitters (incoupling points). The α_k values for the sampling points are re-interpreted by postulating that the α_k values are found at n·δα, where −N ≤ n ≤ N and 2N + 1 is the total number of sensors (outcoupling points) that receive light energy from the relevant emitter. To get accurate ordering of the α_k values, n = 0 may be set as the original sample with the smallest value of α_k.
Fig. 26A illustrates the sampling points in the β-α-plane, after this basic re-assignment of projection values. After angle correction, 1D filtering of the angle-corrected data, and back projection, a reconstructed attenuation field is obtained as shown in Fig. 26B. It is evident that the first implementation is able to reproduce the original image (Fig. 13), but with a rather low quality, especially in the corner regions.
In a second implementation of the re-calculation step, the measured projection values are processed for calculation of new (updated) projection values for fictitious detection lines that match a fan geometry. In the second implementation, like in the first implementation, each emitter (incoupling point) on the perimeter of the touch surface is regarded as the origin of a set of detection lines of different directions. This means that every β_i value corresponds to an emitter (incoupling point) in the interleaved arrangement, which generates a plurality of detection lines with individual angular directions α_k, and the sampling points defined by the actual β_i values and α_k values thus form columns in the β-α-plane. Therefore, interpolation in the β_i direction can be omitted, and possibly be replaced by a step of adding an individual weighting factor to the back projection operator (by changing δβ to δβ_i, which should correspond to the difference in β_i values between neighboring emitters). In the second implementation, the re-calculation step involves an interpolation with respect to the α_k variable, suitably to provide values of interpolation points having an equidistant separation with respect to the α_k variable for each β_i value in the sampling space. Thus, the interpolation of the sampling points may be reduced to applying a 1D interpolation function. The 1D interpolation function may be of any type, such as linear, cubic, spline, Lanczos, Sinc, etc. In the following example, the interpolation function is linear. It should be noted, though, that a 2D interpolating function as described in section 6.1 above can alternatively be applied for interpolation in the β-α-plane.
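The column-wise 1D interpolation of the second implementation may be sketched as follows (Python/NumPy, exemplary names, using linear interpolation onto an equidistant α grid):

```python
import numpy as np

def resample_fan_columns(alphas_per_emitter, g_per_emitter, n_alpha):
    """For each emitter (each beta_i), interpolate its measured projection
    values, given at irregular fan angles alpha_k, onto an equidistant alpha grid."""
    lo = min(a.min() for a in alphas_per_emitter)
    hi = max(a.max() for a in alphas_per_emitter)
    alpha_new = np.linspace(lo, hi, n_alpha)
    columns = []
    for alpha_k, g_k in zip(alphas_per_emitter, g_per_emitter):
        order = np.argsort(alpha_k)              # np.interp needs ascending sample points
        columns.append(np.interp(alpha_new, alpha_k[order], g_k[order]))
    return alpha_new, np.column_stack(columns)   # matched sinogram in the beta-alpha plane
```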
Fig. 27A illustrates the sampling points in the β-α-plane, after the 1D interpolation. Fig. 27B shows the reconstructed attenuation field which is obtained after angle correction, 1D filtering of the angle-corrected data, and back projection. By comparing Fig. 27B with Fig. 26B, it can be seen that the second implementation provides a significant quality improvement compared to the first implementation.

Further, by comparing Fig. 27B with Fig. 14D, which both illustrate reconstructed attenuation fields for the interleaved arrangement, it may appear as if the parallel geometry may result in a higher reconstruction quality than the fan geometry. This apparent quality difference may have several causes. First, reconstruction algorithms for the fan geometry restrict the direction angle α to the range −π/2 ≤ α ≤ π/2. Direction angles outside this range will cause the angle correction (see section 5.2) to deteriorate. In the touch-sensitive apparatus, detection lines may have direction angles outside this range, especially for emitters located at the corners of the touch surface (recalling that α=0 for a line going from the emitter through the origin, i.e. the center of the touch surface). Second, the weighted back projection operator (see section 5.2) involves a normalization based on the inverse of the squared distance between the source and the reconstructed position. This distance becomes close to zero near the perimeter of the touch surface and its inverse goes towards infinity, thereby reducing the reconstruction quality at the perimeter. Still further, the standard reconstruction algorithms assume that all sensors (outcoupling points) are arranged at the same distance from the emitters (incoupling points).
A third implementation of the re-calculation step will now be described with reference to Figs 28-29. In the third implementation, which is designed to at least partially overcome the above-mentioned limitations of the first and second implementations, the detection lines are defined based on fictive emitter/sensor locations. Fig. 28 illustrates the touch-sensitive apparatus circumscribed by a circle C which may or may not be centered at the origin of the x,y coordinate system (Fig. 2) of the apparatus. The emitters 2 and sensors 3 provide a set of detection lines (not shown) across the touch surface 1. To define the detection lines in a β-α-plane, the intersection of each detection line and the circle C is taken to define a β_i value, whereas the α_k value of each detection line is given by the inclination angle of the detection line with respect to a reference line (like in the other fan geometry examples given herein). Thereby, the β and α variables are defined in strict alignment with the theoretical definition depicted in Fig. 9, where the β variable is defined as a rotation angle along a circular perimeter.
Fig. 29A illustrates the resulting sampling points in the β-α-plane for the interleaved system shown in Fig. 28, where the β_i values are defined according to the foregoing "fictive circle approach". The sampling space contains a highly irregular pattern of sampling points. Fig. 29B is a plan view of a 2D interpolation function fitted to the sampling points in Fig. 29A. It should be realized that the techniques described in chapters 6.1.1 and 6.1.2 may be applied also to the sampling points in the β-α-plane to generate interpolation/estimation points that represent fictitious detection lines matching a standard fan geometry. Thus, the interpolation/estimation points are suitably generated to form columns with respect to the β variable, preferably with equidistant spacing. Fig. 29C illustrates the interpolated sinogram, which is obtained by operating the interpolation function in Fig. 29B on the projection values that are given by the reference image in Fig. 13. Fig. 29D shows the reconstructed attenuation field which is obtained after angle correction, 1D filtering of the angle-corrected data, and back projection. By comparing Fig. 29D with Fig. 27B, it can be seen that the third implementation provides a significant quality improvement compared to the first and second implementations.
In all of the above implementations, the re-calculation step results in an updated sinogram, in which each β value and its associated α values (i.e. each column in the sinogram) corresponds to a fan of detection lines with a common origin, and thus the data is matched to a fan geometry in a broad sense.

7. Concluding remarks

The invention has mainly been described above with reference to a few embodiments. However, as is readily appreciated by a person skilled in the art, other embodiments than the ones disclosed above are equally possible within the scope and spirit of the invention, which is defined and limited only by the appended patent claims.
For example, the reconstructed attenuation field may be subjected to post-processing before the touch data extraction (step 48 in Fig. 4). Such post-processing may involve different types of filtering, for noise removal and/or image enhancement. Fig. 30 illustrates the result of applying a Bayesian image enhancer to the reconstructed attenuation field in Fig. 24E. The enhanced attenuation field has merit values: m1=1.6433 and m2=5.5233. For comparison, the enhanced attenuation field obtained by applying the Bayesian image enhancer to the reconstructed attenuation field in Fig. 14D has merit values: m1=1.8536 and m2=10.0283. In both cases, a significant quality improvement is obtained.
Furthermore, it is to be understood that the inventive concept is applicable to any touch-sensitive apparatus that defines a fixed set of detection lines and operates by processing measured projection values for the detection lines according to any tomographic reconstruction algorithm that is defined for a standard geometry, where this standard geometry does not match the fixed set of detection lines. Thus, although the above description is given with reference to FBP algorithms, the inventive concept has a more general applicability.
It should also be emphasized that all the above embodiments, examples, variants and alternatives given with respect to interpolation, removal of detection lines, and estimation in gap regions are generally applicable to any type of emitter-sensor arrangement and irrespective of standard geometry.

Furthermore, the reconstructed attenuation field need not represent the distribution of attenuation coefficient values within the touch surface, but could instead represent the distribution of energy, relative transmission, or any other relevant entity derivable by processing of projection values given by the output signal of the sensors. Thus, the projection values may represent measured energy, differential energy (e.g. given by a measured energy value subtracted by a background energy value for each detection line), relative attenuation, relative transmission, a logarithmic attenuation, etc. The person skilled in the art realizes that there are other ways of generating projection values based on the output signal. For example, each individual projection signal included in the output signal may be subjected to a high-pass filtering in the time domain, whereby the thus-filtered projection signals represent background-compensated energy and can be sampled for generation of projection values.
Furthermore, all the above embodiments, examples, variants and alternatives given with respect to an FTIR system are equally applicable to a touch-sensitive apparatus that operates by transmission of other energy than light. In one example, the touch surface may be implemented as an electrically conductive panel, the emitters and sensors may be electrodes that couple electric currents into and out of the panel, and the output signal may be indicative of the resistance/impedance of the panel on the individual detection lines. In another example, the touch surface may include a material acting as a dielectric, the emitters and sensors may be electrodes, and the output signal may be indicative of the capacitance of the panel on the individual detection lines. In yet another example, the touch surface may include a material acting as a vibration conducting medium, the emitters may be vibration generators (e.g. acoustic or piezoelectric transducers), and the sensors may be vibration sensors (e.g. acoustic or piezoelectric sensors).
Still further, the inventive concept may be applied to improve tomographic reconstruction in any field of technology, such as radiology, archaeology, biology, geophysics, oceanography, materials science, astrophysics, etc., whenever the detection lines are mismatched to a standard geometry that forms the basis for the tomographic reconstruction algorithm. Thus, the inventive concept could be generally defined as a method for image reconstruction based on an output signal from a tomograph, the tomograph comprising a plurality of peripheral entry points and a plurality of peripheral withdrawal points, which between them define actual detection lines that extend across a measurement space to propagate energy signals from the entry points to the withdrawal points, at least one signal generator coupled to the entry points to generate the energy signals, and at least one signal detector coupled to the withdrawal points to generate the output signal, the method comprising: processing the output signal to generate a set of data samples, wherein the data samples are indicative of detected energy for at least a subset of the actual detection lines; processing the set of data samples to generate a set of matched samples, wherein the matched samples are indicative of estimated detected energy for fictitious detection lines that have a location in the measurement space that matches a standard geometry for tomographic reconstruction; and processing the set of matched samples by tomographic reconstruction to generate data indicative of a distribution of an energy-related parameter within at least part of the measurement space.

Claims (23)

1. A method of enabling touch determination based on an output signal from atouch-sensitive apparatus (100), the touch-sensitive apparatus (100) comprising a panel(4) configured to conduct signals from a plurality of peripheral incoupling points to aplurality of peripheral outcoupling points, thereby defining actual detection lines (D)that extend across a surface portion (1) of the panel (4) between pairs of incoupling andoutcoupling points, at least one signal generator (2) coupled to the incoupling points togenerate the signals, and at least one signal detector (3) coupled to the outcouplingpoints to generate the output signal, the method comprising: processing (40) the output signal to generate a set of data samples, Wherein thedata samples are indicative of detected energy for at least a subset of the actualdetection lines (D), processing (42) the set of data samples to generate a set of matched samples,Wherein the matched samples are indicative of estimated detected energy for fictitiousdetection lines that have a location on the surface portion (1) that matches a standardgeometry for tomographic reconstruction, and processing (44, 46) the set of matched samples by tomographic reconstruction togenerate data indicative of a distribution of an energy-related parameter Within at leastpart of the surface portion (1).
2. The method of claim 1, wherein the step of processing (40) the output signal comprises: generating the data samples in a two-dimensional sample space, wherein each data sample is representative of an actual detection line (D) and is defined by a signal value and two dimension values that define the location of the actual detection line (D) on the surface portion (1).
3. The method of claim 2, Wherein the step of processing (42) the set of datasamples comprises: generating estimated signal values of the matched samples atpredetermined locations in the two-dimensional sample space, Wherein thepredetermined locations correspond to the fictitious detection lines.
4. The method of claim 3, Wherein the estimated signal values are generated byinterpolation based on the signal values of the data samples.
5. The method of claim 4, Wherein each estimated signal value is generated byinterpolation of the signal values of neighboring data samples in the two-dimensionalsample space.
6. The method of claim 4 or 5, Wherein the step of processing (42) the set of datasamples further comprises: obtaining a predetermined two-dimensional interpolationfunction (IF) With nodes corresponding to the set of data samples, and calculating the 41 estimated signal values according to the interpolation function (IF) and based on thesignal values of the data samples.
7. The method of claim 6, further comprising: receiving exclusion data identifyingone or more data samples to be excluded, Wherein the step of processing (42) the datasamples comprises identifying the node corresponding to each data sample to beexcluded, re-designing the predetermined interpolation function (IF) Without each thus-identified node, and calculating the estimated signal values according to the re-designedinterpolation scheme (IF') and based on the signal values of the data samples in thenodes of the re-designed interpolation scheme (IF').
8. The method of any one of claims 3-7, wherein the matched samples are arranged as rows and/or columns in the two-dimensional sample space.
9. The method of claim 8, wherein the matched samples are arranged with equidistant spacing within each of said rows and/or columns.
10. The method of any one of claims 3-9, wherein the step of processing (44, 46) the set of matched samples comprises: applying (44) a one-dimensional high-pass filtering of the matched samples in the two-dimensional sample space to generate filtered samples, and processing (46) the filtered samples to generate a set of back projection values indicative of said distribution.
11. The method of any one of claims 2-10, Wherein the surface portion (1) definesa sampling area in the two-dimensional sample space, and Wherein, if the actualdetection lines (D) given by the geometric arrangement of incoupling and outcouplingpoints result in at least one contiguous region Without data samples Within the samplingarea, the step of processing the set of data samples comprises: obtaining a predetermined set of estimated sampling points Within the contiguousregion, and for each estimated sampling point, identifying the location of a correspondingfictitious detection line on the surface portion; identifying, for each intersection pointbetween the corresponding fictitious detection line and the actual detection lines (D)and/or between the corresponding fictitious detection line and the fictitious detectionlines for the set of matched samples, an intersection point value as the smallest signalvalue of all data samples corresponding to the actual detection lines (D) associated Withthe intersection point; and calculating a signal value of the estimated sampling point asa function of the intersection point values.
12. The method of claim 11, wherein the signal value of the estimated sampling point is given by the largest intersection point value.
13. The method of claim 11, further comprising, for each estimated sampling point: identifying a number of local maxima in the intersection point values, and 42 calculating the signal value of the estimated sampling point as a combination of thelocal maxima.
14. The method of any one of claims 2-13, Wherein the dimension valuescomprise a rotation angle of the detection line in the plane of the panel (4), and adistance of the detection line in the plane of the panel (4) from a predetermined origin.
15. The method of any one of claims 2-13, wherein the dimension valuescomprise an angular location of the incoupling or outcoupling point of the detectionline, and a rotation angle of the detection line in the plane of the panel (4).
16. The method of claim 15, Wherein the standard geometry is a fan geometry,wherein the touch surface ( 1) has a non-circular perimeter, and Wherein the angularlocation is defined by an intersection between the actual detection line (D) and afictitious circle (C) arranged to circumscribe the touch surface (1).
17. The method of any one of claims 1-15, wherein the standard geometry is oneof a parallel geometry and a fan geometry.
18. The method of any preceding claim, wherein said signals comprise one of electrical energy, light, magnetic energy, sonic energy and vibration energy.
19. The method of any preceding claim, wherein the panel (4) defines a touch surface (1) and an opposite surface (5; 6), wherein said at least one signal generator (2) is arranged to provide light inside the panel (4), such that the light propagates from the incoupling points by internal reflection between the touch surface (1) and the opposite surface (5; 6) to the outcoupling points for detection by said at least one signal detector (3), and wherein the touch-sensitive apparatus (100) is configured such that the propagating light is locally attenuated by one or more objects (7) touching the touch surface (1).
20. A computer program product comprising computer code which, when executed on a data-processing system, is adapted to carry out the method of any one of claims 1-19.
21. A device for enabling touch determination based on an output signal of a touch-sensitive apparatus (100), said touch-sensitive apparatus (100) comprising a panel (4) configured to conduct signals from a plurality of peripheral incoupling points to a plurality of peripheral outcoupling points, thereby defining actual detection lines (D) that extend across a surface portion (1) of the panel (4) between pairs of incoupling and outcoupling points, means (2, 12) for generating the signals at the incoupling points, and means (3) for generating the output signal based on detected signals at the outcoupling points, said device comprising:
means (400) for receiving the output signal;
means (402) for processing the output signal to generate a set of data samples, wherein the data samples are indicative of detected energy for at least a subset of the actual detection lines (D);
means (404) for processing the set of data samples to generate a set of matched samples, wherein the matched samples are indicative of estimated detected energy for fictitious detection lines that have a location on the surface portion (1) that matches a standard geometry for tomographic reconstruction; and
means (406, 408) for processing the set of matched samples by tomographic reconstruction to generate data indicative of a distribution of an energy-related parameter within at least part of the surface portion (1).
22. A touch-sensitive apparatus, comprising:
a panel (4) configured to conduct signals from a plurality of peripheral incoupling points to a plurality of peripheral outcoupling points, thereby defining actual detection lines (D) that extend across a surface portion (1) of the panel (4) between pairs of incoupling and outcoupling points;
means (2, 12) for generating the signals at the incoupling points;
means (3) for generating an output signal based on detected signals at the outcoupling points; and
the device (10) for enabling touch determination according to claim 21.
23. A touch-sensitive apparatus, comprising:
a panel (4) configured to conduct signals from a plurality of peripheral incoupling points to a plurality of peripheral outcoupling points, thereby defining actual detection lines (D) that extend across a surface portion (1) of the panel (4) between pairs of incoupling and outcoupling points;
at least one signal generator (2, 12) coupled to the incoupling points to generate the signals;
at least one signal detector (3) coupled to the outcoupling points to generate an output signal; and
a signal processor (10) connected to receive the output signal and configured to:
process the output signal to generate a set of data samples, wherein the data samples are indicative of detected energy for at least a subset of the actual detection lines (D),
process the set of data samples to generate a set of matched samples, wherein the matched samples are indicative of estimated detected energy for fictitious detection lines that have a location on the surface portion (1) that matches a standard geometry for tomographic reconstruction, and
process the set of matched samples by tomographic reconstruction to generate data indicative of a distribution of an energy-related parameter within at least part of the surface portion (1).
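The claims above describe the processing in functional terms. The Python sketches that follow are editorial illustrations of how selected claimed steps could be realized; they are not taken from the patent. Every concrete choice in them, such as a coordinate system centred on the predetermined origin, coordinates normalized to roughly [-1, 1], grid resolutions and all function names, is an assumption made for illustration only.

This first sketch covers claims 2-9 and 14: each actual detection line is mapped to dimension values (a rotation angle and a signed distance from the origin), and the scattered data samples are interpolated onto an equidistant grid of matched samples in the two-dimensional sample space, here with a Delaunay-based linear interpolant whose nodes are the data samples.

# Hedged sketch; helper names and grid sizes are assumptions, not from the patent.
import numpy as np
from scipy.interpolate import griddata

def line_to_phi_s(p0, p1):
    """Map an actual detection line, given by its incoupling point p0 and
    outcoupling point p1 (coordinates relative to the predetermined origin),
    to a rotation angle phi in [0, pi) and a signed distance s from the
    origin, i.e. the dimension values of claim 14."""
    d = np.asarray(p1, float) - np.asarray(p0, float)
    d = d / np.linalg.norm(d)
    n = np.array([-d[1], d[0]])                 # unit normal of the line
    s = float(np.dot(np.asarray(p0, float), n))
    phi = np.arctan2(n[1], n[0])
    if phi < 0.0:                               # keep phi in [0, pi)
        phi += np.pi
        s = -s
    return phi, s

def match_samples(endpoints, samples, n_phi=64, n_s=64, s_max=1.0):
    """Interpolate the scattered data samples onto a regular grid of matched
    samples: rows are angles, columns are equidistant distances (claims 8-9).
    The interpolation function has the data samples as its nodes (claim 3)."""
    pts = np.array([line_to_phi_s(p0, p1) for p0, p1 in endpoints])
    phi_grid = np.linspace(0.0, np.pi, n_phi, endpoint=False)
    s_grid = np.linspace(-s_max, s_max, n_s)
    PHI, S = np.meshgrid(phi_grid, s_grid, indexing="ij")
    matched = griddata(pts, np.asarray(samples, float), (PHI, S),
                       method="linear", fill_value=0.0)
    return phi_grid, s_grid, matched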
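Claim 7 excludes selected data samples (for example from detection lines known to be faulty) by re-designing the interpolation function without the corresponding nodes. With the scattered-data interpolant sketched above, this amounts to dropping the excluded samples before the interpolant is built; the index-based interface is an assumption.

def match_samples_excluding(endpoints, samples, excluded, **grid_kwargs):
    """Claim 7 sketch: re-design the interpolation function without the
    nodes of the excluded data samples, then interpolate as before.
    'excluded' is an iterable of data-sample indices to leave out."""
    excluded = set(excluded)
    keep = [i for i in range(len(samples)) if i not in excluded]
    kept_endpoints = [endpoints[i] for i in keep]
    kept_samples = [samples[i] for i in keep]
    return match_samples(kept_endpoints, kept_samples, **grid_kwargs)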
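Claim 10 applies a one-dimensional high-pass filter to the matched samples and then forms back projection values. A common way to realize this is filtered back projection: a ramp filter along the distance axis of each row of matched samples, followed by a summation over angles onto a reconstruction grid. The ramp response, the nearest-neighbour lookup and the grid extent below are illustrative choices, not requirements of the claim.

import numpy as np

def ramp_filter(matched):
    """One-dimensional high-pass filtering (step 44): multiply each row of
    matched samples by a ramp (|f|) response in the frequency domain."""
    n_s = matched.shape[1]
    ramp = np.abs(np.fft.fftfreq(n_s))
    spectrum = np.fft.fft(matched, axis=1) * ramp
    return np.real(np.fft.ifft(spectrum, axis=1))

def back_project(filtered, phi_grid, s_grid, n_pix=128, extent=1.0):
    """Back projection (step 46): for every pixel and every angle, look up
    the filtered sample at the pixel's signed distance and accumulate."""
    xs = np.linspace(-extent, extent, n_pix)
    X, Y = np.meshgrid(xs, xs, indexing="xy")
    recon = np.zeros_like(X)
    ds = s_grid[1] - s_grid[0]
    for k, phi in enumerate(phi_grid):
        s_of_pixel = X * np.cos(phi) + Y * np.sin(phi)
        idx = np.round((s_of_pixel - s_grid[0]) / ds).astype(int)
        idx = np.clip(idx, 0, len(s_grid) - 1)
        recon += filtered[k, idx]               # nearest matched-sample column
    return recon * np.pi / len(phi_grid)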
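Claims 11-13 fill contiguous regions of the sample space that lack data samples. For each estimated sampling point, the corresponding fictitious detection line is intersected with the actual detection lines; each intersection point is assigned the smallest data sample of the lines meeting there, and the estimated value is the largest of these intersection point values (claim 12) or a combination of their local maxima (claim 13). The sketch below uses plain segment geometry; the tolerance and the helper names are assumptions.

import numpy as np

def _cross2(a, b):
    """z-component of the two-dimensional cross product."""
    return a[0] * b[1] - a[1] * b[0]

def _intersection(a0, a1, b0, b1):
    """Intersection point of segments a0-a1 and b0-b1, or None."""
    a0, a1, b0, b1 = (np.asarray(p, float) for p in (a0, a1, b0, b1))
    r, s = a1 - a0, b1 - b0
    denom = _cross2(r, s)
    if abs(denom) < 1e-12:
        return None                             # parallel detection lines
    t = _cross2(b0 - a0, s) / denom
    u = _cross2(b0 - a0, r) / denom
    return a0 + t * r if (0.0 <= t <= 1.0 and 0.0 <= u <= 1.0) else None

def _dist_to_line(p, q0, q1):
    """Distance from point p to the infinite line through q0 and q1."""
    q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
    d = q1 - q0
    return abs(_cross2(d, p - q0)) / np.linalg.norm(d)

def estimate_sampling_point(fict_line, actual_lines, samples, tol=1e-3):
    """Claims 11-12 sketch. fict_line is an endpoint pair for the fictitious
    detection line of the estimated sampling point; actual_lines is a list
    of endpoint pairs, one data sample per line in 'samples'."""
    ipt_values = []
    for q0, q1 in actual_lines:
        p = _intersection(fict_line[0], fict_line[1], q0, q1)
        if p is None:
            continue
        # smallest signal value of all actual lines through this point
        assoc = [samples[j] for j, (r0, r1) in enumerate(actual_lines)
                 if _dist_to_line(p, r0, r1) < tol]
        ipt_values.append(min(assoc))
    # claim 12: largest intersection point value; claim 13 would instead
    # combine the local maxima of ipt_values ordered along the line
    return max(ipt_values) if ipt_values else 0.0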
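For a fan geometry on a non-circular (for example rectangular) touch surface, claim 16 defines the angular location of a detection line by its intersection with a fictitious circle circumscribing the touch surface. One way to compute this, assuming the circle is centred on the surface, is to extend the line beyond the incoupling point until it meets the circle and take the polar angle of that intersection.

import numpy as np

def angular_location(p_in, p_out, center, radius):
    """Claims 15-16 sketch: polar angle of the intersection between the
    detection line and a circle of the given center and radius that
    circumscribes the touch surface, taken on the incoupling side."""
    p0 = np.asarray(p_in, float) - np.asarray(center, float)
    d = np.asarray(p_out, float) - np.asarray(p_in, float)
    d = d / np.linalg.norm(d)
    # solve |p0 + t d|^2 = radius^2; the incoupling point lies inside the
    # circle, so the root with t < 0 is the intersection behind it
    b = float(np.dot(p0, d))
    c = float(np.dot(p0, p0)) - radius ** 2
    t = -b - np.sqrt(b * b - c)
    q = p0 + t * d
    return np.arctan2(q[1], q[0])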
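Finally, for the FTIR variant of claim 19 the detected energy per detection line can be converted to an attenuation-type data sample before reconstruction, since touching objects attenuate the propagating light multiplicatively along a line; a logarithm makes the attenuation additive, which suits tomographic reconstruction. The use of a touch-free reference measurement is an assumption here, as the claims only require samples indicative of detected energy. The last function chains the steps of claims 21 and 23 using the helpers sketched above.

import numpy as np

def to_data_samples(detected, reference):
    """Claim 19 / step 40 sketch: logarithmic attenuation per actual
    detection line, from detected energy and a touch-free reference."""
    t = np.asarray(detected, float) / np.asarray(reference, float)
    return -np.log(np.clip(t, 1e-6, None))

def reconstruct(detected, reference, endpoints):
    """Claims 21/23 sketch: data samples -> matched samples -> filtered
    samples -> back projection values over the surface portion."""
    samples = to_data_samples(detected, reference)
    phi_grid, s_grid, matched = match_samples(endpoints, samples)
    filtered = ramp_filter(matched)
    return back_project(filtered, phi_grid, s_grid)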
SE1050434A 2010-05-03 2010-05-03 Determination of contact through tomographic reconstruction SE535005C2 (en)

Priority Applications (17)

Application Number Priority Date Filing Date Title
SE1050434A SE535005C2 (en) 2010-05-03 2010-05-03 Determination of contact through tomographic reconstruction
TW100114596A TW201203052A (en) 2010-05-03 2011-04-27 Touch determination by tomographic reconstruction
KR1020177019739A KR101840991B1 (en) 2010-05-03 2011-04-28 Touch determination by tomographic reconstruction
JP2013509026A JP5807057B2 (en) 2010-05-03 2011-04-28 Contact determination by tomographic reconstruction
US13/695,505 US8780066B2 (en) 2010-05-03 2011-04-28 Touch determination by tomographic reconstruction
KR1020127031392A KR101760539B1 (en) 2010-05-03 2011-04-28 Touch determination by tomographic reconstruction
CA2798176A CA2798176A1 (en) 2010-05-03 2011-04-28 Touch determination by tomographic reconstruction
CN201610251052.5A CN105930002B (en) 2010-05-03 2011-04-28 Touch determination is carried out by tomographic image reconstruction
PCT/SE2011/050520 WO2011139213A1 (en) 2010-05-03 2011-04-28 Touch determination by tomographic reconstruction
EP15197923.4A EP3012721A1 (en) 2010-05-03 2011-04-28 Touch determination by tomographic reconstruction
RU2012148777/08A RU2012148777A (en) 2010-05-03 2011-04-28 DETERMINATION OF TOUCH BY TOMOGRAPHIC RECONSTRUCTION
CN201180030215.2A CN103026325B (en) 2010-05-03 2011-04-28 Touch and determine by tomography reconstruct
EP11777650.0A EP2567306B1 (en) 2010-05-03 2011-04-28 Touch determination by tomographic reconstruction
IL222797A IL222797A0 (en) 2010-05-03 2012-11-01 Touch determination by tomographic reconstruction
US14/293,257 US9547393B2 (en) 2010-05-03 2014-06-02 Touch determination by tomographic reconstruction
US15/388,457 US9996196B2 (en) 2010-05-03 2016-12-22 Touch determination by tomographic reconstruction
US15/973,717 US20180253187A1 (en) 2010-05-03 2018-05-08 Touch determination by tomographic reconstruction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
SE1050434A SE535005C2 (en) 2010-05-03 2010-05-03 Determination of contact through tomographic reconstruction

Publications (2)

Publication Number Publication Date
SE1050434A1 (en) 2011-11-04
SE535005C2 SE535005C2 (en) 2012-03-13

Family

ID=45066042

Family Applications (1)

Application Number Title Priority Date Filing Date
SE1050434A SE535005C2 (en) 2010-05-03 2010-05-03 Determination of contact through tomographic reconstruction

Country Status (1)

Country Link
SE (1) SE535005C2 (en)

Also Published As

Publication number Publication date
SE535005C2 (en) 2012-03-13
