US8587563B2 - Touch system and positioning method therefor - Google Patents
- Publication number: US8587563B2 (application US 13/115,468)
- Authority: US (United States)
- Prior art keywords: image, positions, pointer, mapping, image sensor
- Legal status: Expired - Fee Related
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/041—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
- G06F3/042—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
- G06F3/0428—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means by sensing at the edges of the touch surface the interruption of optical paths, e.g. an illumination plane, parallel to the touch surface which may be virtual
- G06F3/0416—Control or interface arrangements specially adapted for digitisers
- G06F3/0418—Control or interface arrangements specially adapted for digitisers for error correction or compensation, e.g. based on parallax, calibration or alignment
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/041—Indexing scheme relating to G06F3/041 - G06F3/045
- G06F2203/04104—Multi-touch detection in digitiser, i.e. details about the simultaneous detection of a plurality of touching locations, e.g. multiple fingers or pen and finger
Definitions
- This invention generally relates to a touch system and, more particularly, to an optical touch system and a positioning method therefor.
- FIG. 1 a shows a schematic diagram of an optical touch system
- FIG. 1 b shows a schematic diagram of image windows acquired by the two image sensors included in the touch system shown in FIG. 1 a.
- the touch system 9 includes a touch surface 90 and two image sensors 91 and 91 ′.
- the image sensors 91 and 91 ′ are configured to acquire image windows W 91 and W 91 ′ respectively looking across the touch surface 90 .
- the image windows W 91 and W 91 ′ acquired by the image sensors 91 and 91 ′ respectively include finger images I 81 and I 81 ′ of the finger 81 .
- a processing unit 92 can calculate a two-dimensional coordinate of the finger 81 with respect to the touch surface 90 according to a one-dimensional position of the finger image I 81 in the image window W 91 and a one-dimensional position of the finger image I 81 ′ in the image window W 91 ′.
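The two-ray triangulation described above can be sketched as follows. This is a minimal illustration rather than the patent's implementation: it assumes the two sensors sit at (0, 0) and (width, 0) on the mapped space, a 960-pixel image window, and a linear mapping of each one-dimensional image position onto a 90-degree field of view; the function names are hypothetical.

```python
import math

# Assumptions (illustrative, not from the patent text): sensors at
# (0, 0) and (width, 0); each 1-D image position in [0, 960] maps
# linearly onto a 90-degree field of view measured from the x-axis.
SENSOR_RES = 960.0

def pixel_to_angle(pixel, mirrored=False):
    """Map a 1-D image position to a viewing angle in radians."""
    angle = (pixel / SENSOR_RES) * (math.pi / 2)
    # The right-hand sensor looks back across the surface, so its
    # angles are mirrored about the vertical.
    return math.pi - angle if mirrored else angle

def triangulate(pixel_left, pixel_right, width=960.0):
    """Intersect the two viewing rays to get the pointer's 2-D coordinate."""
    a1 = pixel_to_angle(pixel_left)           # ray from (0, 0)
    a2 = pixel_to_angle(pixel_right, True)    # ray from (width, 0)
    # Ray equations: y = tan(a1) * x  and  y = tan(a2) * (x - width)
    t1, t2 = math.tan(a1), math.tan(a2)
    x = width * t2 / (t2 - t1)
    y = t1 * x
    return x, y
```

With both image positions at the window center (pixel 480), both viewing angles are 45 degrees and the rays meet at the center of the mapped surface.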
- one finger may block another finger or fingers with respect to some of the image sensors.
- the image sensor 91 acquires images of the fingers 81 and 82 following a route “a” and the image sensor 91 ′ acquires images of the fingers 81 and 82 following routes “b” and “c” respectively.
- the image window W 91 acquired by the image sensor 91 only includes a finger image I 81 +I 82 (i.e. a combined image of the finger images I 81 and I 82 ). Therefore, the processing unit 92 is not able to correctly calculate two-dimensional coordinates of different fingers with respect to the touch surface 90 according to the finger images I 81 ′, I 82 ′ and I 81 +I 82 .
- the present invention provides a touch system and a positioning method therefor configured to correctly obtain two-dimensional coordinates of a plurality of pointers with respect to a touch system.
- the present invention provides a positioning method for a touch system.
- the touch system includes a first image sensor and a second image sensor for acquiring image windows looking across a touch surface and containing images of two pointers operating above the touch surface.
- the positioning method includes the steps of: acquiring a first image window with the first image sensor; acquiring a second image window with the second image sensor; identifying numbers of pointer images in the first image window and the second image window; generating a two-dimensional space according to the first image window and the second image window when the first image window and the second image window contain different numbers of pointer images; connecting, on the two-dimensional space, a mapping position of the first image sensor with mapping positions of two outermost edges of the pointer image in the first image window and connecting, on the two-dimensional space, a mapping position of the second image sensor with mapping positions of two outermost edges of the pointer image in the second image window to form a quadrilateral; calculating four first internal bisectors of the quadrilateral; and connecting, on the two-dimensional space, a mapping position of the image sensor acquiring more pointer images with mapping positions of a predetermined point of the pointer images in the image window acquired by the same image sensor to intersect with the first internal bisectors thereby generating first possible positions of the pointers with respect to the touch surface.
- a pair of previous correct positions of the pointers with respect to the touch surface was determined in a previous sample time (image capture time) before the first image sensor acquires the first image window and the second image sensor acquires the second image window.
- the positioning method further includes the steps of: comparing the first possible positions with the pair of previous correct positions to obtain a pair of current correct positions.
- the touch system further includes a third image sensor for acquiring image windows looking across the touch surface and containing images of the two pointers.
- the positioning method further includes the steps of: acquiring a third image window with the third image sensor; identifying the numbers of pointer images in the first, second and third image windows; mapping the third image window to the two-dimensional space when the numbers of pointer images in two of the image windows are smaller than that in the rest image window; connecting, on the two-dimensional space, mapping positions of two image sensors acquiring fewer pointer images with mapping positions of two outermost edges of the pointer image in the image windows acquired by the same two image sensors to form a quadrilateral; calculating four second internal bisectors of the quadrilateral; connecting, on the two-dimensional space, a mapping position of the image sensor acquiring more pointer images with mapping positions of a predetermined point of the pointer images in the image window acquired by the same image sensor to intersect with the second internal bisectors thereby generating second possible positions; and comparing the first possible positions with the second possible positions to obtain a pair of current correct positions.
- the touch system further includes a third image sensor for acquiring image windows looking across the touch surface and containing images of the two pointers.
- the positioning method further includes the steps of: acquiring a third image window with the third image sensor; identifying the numbers of pointer images in the first, second and third image windows; mapping the third image window to the two-dimensional space when the numbers of pointer images in two of the image windows are larger than that in the rest image window; connecting, on the two-dimensional space, a mapping position of one of two image sensors acquiring more pointer images with mapping positions of two outermost edges of the pointer images in the image window acquired by the same image sensor and connecting, on the two-dimensional space, a mapping position of the image sensor acquiring fewer pointer images with mapping positions of two outermost edges of the pointer image in the image window acquired by the same image sensor to form a quadrilateral; calculating four third internal bisectors of the quadrilateral; connecting, on the two-dimensional space, a mapping position of one of two image sensors acquiring more pointer images with mapping positions of a predetermined point of the pointer images in the image window acquired by the same image sensor to intersect with the third internal bisectors thereby generating third possible positions; and comparing the first possible positions with the third possible positions to obtain a pair of current correct positions.
- the touch system further includes a third image sensor for acquiring image windows looking across the touch surface and containing images of the two pointers.
- the positioning method further includes the steps of: acquiring a third image window with the third image sensor; identifying the numbers of pointer images in the first, second and third image windows; mapping the third image window to the two-dimensional space when the numbers of pointer images in two of the image windows are larger than that in the rest image window; connecting, on the two-dimensional space, mapping positions of two image sensors acquiring more pointer images with mapping positions of a predetermined point of the pointer images in the image windows acquired by the same two image sensors to form a quadrilateral; defining four corners of the quadrilateral as fourth possible positions of the pointers with respect to the touch surface; and comparing the first possible positions with the fourth possible positions to obtain a pair of current correct positions.
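The "four corners of the quadrilateral" construction above can be sketched as pairwise line intersections: each of the two sensors that sees both pointers contributes two position lines through the mapped centers of its pointer images, and the four crossings are the candidate positions. The coordinates and direction vectors below are illustrative assumptions, not values from the patent.

```python
# Sketch of defining quadrilateral corners as possible positions.
# Lines are represented as (point, direction) pairs on the mapped
# two-dimensional space.

def intersect(p, d, q, e):
    """Intersection of line p + t*d with line q + s*e (2-D)."""
    cross = d[0] * e[1] - d[1] * e[0]
    if abs(cross) < 1e-12:
        return None  # parallel position lines: no crossing
    t = ((q[0] - p[0]) * e[1] - (q[1] - p[1]) * e[0]) / cross
    return (p[0] + t * d[0], p[1] + t * d[1])

def corner_candidates(sensor_a, dirs_a, sensor_b, dirs_b):
    """Four pairwise intersections = four possible pointer positions."""
    return [intersect(sensor_a, da, sensor_b, db)
            for da in dirs_a for db in dirs_b]
```

For example, with sensors mapped to (0, 0) and (960, 0) and one position line from each at 45 degrees, the first candidate falls at the center of the mapped surface.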
- the present invention further provides a positioning method for a touch system.
- the touch system includes a first image sensor, a second image sensor and a third image sensor for acquiring image windows looking across a touch surface and containing images of two pointers operating above the touch surface.
- the positioning method includes the steps of: respectively acquiring an image window with three image sensors; identifying numbers of pointer images in the image windows; generating a two-dimensional space according to the three image windows; executing the following steps when the numbers of pointer images in two of the image windows are smaller than that in the rest image window: connecting, on the two-dimensional space, mapping positions of two image sensors acquiring fewer pointer images with mapping positions of two outermost edges of the pointer image in the image windows acquired by the same two image sensors to form a quadrilateral; calculating four second internal bisectors of the quadrilateral; and connecting, on the two-dimensional space, a mapping position of the image sensor acquiring more pointer images with mapping positions of a predetermined point of the pointer images in the image window acquired by the same image sensor to intersect with the second internal bisectors thereby generating second possible positions.
- a pair of previous correct positions of the pointers with respect to the touch surface was determined in a previous sample time (image capture time) before the image sensors acquire the image windows.
- the positioning method further includes the steps of: comparing the second possible positions with the pair of previous correct positions to obtain a pair of current correct positions when the numbers of pointer images in two of the image windows are smaller than that in the rest image window; and comparing the third possible positions with the pair of previous correct positions to obtain a pair of current correct positions when the numbers of pointer images in two of the image windows are larger than that in the rest image window.
- the positioning method further includes the steps of: selecting two image sensors acquiring different numbers of pointer images; connecting, on the two-dimensional space, mapping positions of the two image sensors respectively with mapping positions of two outermost edges of the pointer images in the image windows acquired by the same two image sensors to form a quadrilateral; calculating four first internal bisectors of the quadrilateral; and connecting, on the two-dimensional space, a mapping position of one of the two image sensors acquiring more pointer images with mapping positions of a predetermined point of the pointer images in the image window acquired by the same image sensor to intersect with the first internal bisectors thereby generating first possible positions; wherein the second possible positions are compared with the first possible positions to obtain a pair of current correct positions when the numbers of pointer images in two of the image windows are smaller than that in the rest image window, and the third possible positions are compared with the first possible positions to obtain a pair of current correct positions when the numbers of pointer images in two of the image windows are larger than that in the rest image window.
- the positioning method further includes the steps of: connecting, on the two-dimensional space, mapping positions of two image sensors acquiring more pointer images with mapping positions of a predetermined point of the pointer images in the image windows acquired by the same two image sensors to form a quadrilateral; defining four corners of the quadrilateral as fourth possible positions; and comparing the third possible positions with the fourth possible positions to obtain a pair of current correct positions.
- the present invention further provides a touch system including a touch surface, at least two image sensors and a processing unit.
- a plurality of pointers are operated above the touch surface to accordingly control the touch system.
- the image sensors are configured to acquire image windows looking across the touch surface and containing images of the pointers operating above the touch surface.
- the processing unit generates a two-dimensional space according to the image windows acquired by the image sensors, obtains a quadrilateral and four internal bisectors of the quadrilateral by connecting mapping positions of the image sensors with mapping positions of two outermost edges of the pointer image in the image windows acquired by the image sensors on the two-dimensional space, and connects a mapping position of the image sensor acquiring more pointer images with mapping positions of a predetermined point of the pointer images in the image window acquired by the same image sensor to intersect with the internal bisectors thereby generating possible positions.
- the two-dimensional information, such as two-dimensional coordinates, edge lines, position lines and internal bisectors, processed by the processing unit is mapped from the one-dimensional image windows acquired by a plurality of image sensors, wherein the internal bisectors may be calculated by using vector arithmetic from the four sides of the quadrilateral.
- FIG. 1 a shows a schematic diagram of an optical touch system.
- FIG. 1 b shows a schematic diagram of image windows acquired by the image sensors included in the touch system shown in FIG. 1 a.
- FIG. 2 a shows a schematic diagram of the touch system according to the first embodiment of the present invention.
- FIG. 2 b shows a schematic diagram of image windows acquired by the image sensors included in the touch system according to the first embodiment of the present invention.
- FIG. 2 c shows a schematic diagram of the positioning method for the touch system according to the first embodiment of the present invention.
- FIG. 2 d shows a flow chart of the positioning method for the touch system according to the first embodiment of the present invention.
- FIG. 3 shows a schematic diagram of the touch system according to the second embodiment of the present invention.
- FIG. 4 a shows a schematic diagram of the positioning method for the touch system according to a first aspect of the second embodiment of the present invention.
- FIG. 4 b shows a flow chart of the positioning method for the touch system according to the first aspect of the second embodiment of the present invention.
- FIG. 4 c shows a schematic diagram of the positioning method for the touch system according to a second aspect of the second embodiment of the present invention.
- FIG. 4 d shows a flow chart of the positioning method for the touch system according to the second aspect of the second embodiment of the present invention.
- FIGS. 5 a to 5 d show schematic diagrams of the positioning method for the touch system according to a third aspect of the second embodiment of the present invention.
- FIG. 5 e shows a flow chart of the positioning method for the touch system according to the third aspect of the second embodiment of the present invention.
- FIG. 6 a shows a schematic diagram of the positioning method for the touch system according to a fourth aspect of the second embodiment of the present invention.
- FIG. 6 b shows a flow chart of the positioning method for the touch system according to the fourth aspect of the second embodiment of the present invention.
- a touch system of the present invention includes at least two image sensors.
- a positioning method for the touch system is applicable to a touch system controlled by a user (not shown) with a plurality of pointers and in the touch system one pointer blocks another pointer with respect to at least one image sensor. That is, numbers of pointer images contained in the image windows acquired by the plurality of image sensors are different from actual numbers of the pointers.
- When the numbers of pointer images contained in the image windows acquired by the plurality of image sensors are equal to the actual number of the pointers, the two-dimensional coordinate of every pointer may be traced by using other conventional methods.
- the touch system 1 includes a touch surface 10 , a first image sensor 11 , a second image sensor 11 ′ and a processing unit 12 .
- the touch surface 10 may be a white board, a touch screen or the surface of a suitable object.
- the touch surface 10 may also be configured to display the operation status, such as the motion of a cursor or a predetermined function (e.g. screen rolling, object zooming or the like).
- the touch system 1 may further include a display for displaying the operation status.
- the first image sensor 11 and the second image sensor 11 ′ may be, for example, CCD image sensors, CMOS image sensors or the like, and are configured to synchronously acquire an image window looking across the touch surface 10 within each image capture time.
- the image sensors may have the ability of blocking visible light so as to eliminate the interference from ambient light; for example, but not limited to, an optical bandpass filter may be disposed in front of the image sensors.
- the processing unit 12 is configured to process the image windows acquired by the first image sensor 11 and the second image sensor 11 ′, and to trace and to position the pointers, such as to calculate the two-dimensional coordinates of the pointers 81 and 82 with respect to the touch surface 10 .
- the pointer may be a finger, a touch pen, a rod or other suitable objects.
- locations of the first image sensor 11 and the second image sensor 11 ′ are not limited to those shown in FIG. 2 a .
- the first image sensor 11 may be disposed at the lower left corner and the second image sensor 11 ′ may be disposed at the lower right corner.
- FIG. 2 b shows an image window W 11 acquired by the first image sensor 11 and an image window W 11 ′ acquired by the second image sensor 11 ′ shown in FIG. 2 a .
- the image window W 11 contains two pointer images I 81 and I 82 respectively corresponding to the pointers 81 and 82 , and has a numerical range, such as 0 to 960, to form a one-dimensional space.
- the image window W 11 ′ contains a pointer image I′ (combined image) corresponding to the pointers 81 and 82 , and has a numerical range, such as 0 to 960, to form another one-dimensional space. It is appreciated that, the numerical range may be determined by an actual size of the touch surface 10 .
- the image window W 11 ′ in FIG. 2 b includes only one pointer image I′.
- a two-dimensional space S (as shown in FIG. 2 c ) can be mapped and the two-dimensional space S is corresponding to the touch surface 10 .
- a pair of numerical numbers of the image windows W 11 and W 11 ′ corresponds to a two-dimensional coordinate on the two-dimensional space S.
- a corresponding relationship between a pair of numerical numbers of the image windows and a two-dimensional coordinate may be determined according to the actual application.
- the positioning method of every embodiment or aspect of the present invention may be implemented by performing two-dimensional coordinate operation and vector arithmetic on the two-dimensional space S.
- FIG. 2 d shows a flow chart of the positioning method for a touch system according to the first embodiment of the present invention including the steps of: acquiring a first image window with a first image sensor (Step S 10 ); acquiring a second image window with a second image sensor (Step S 11 ); identifying numbers of pointer images in the first image window and the second image window (Step S 12 ); generating a two-dimensional space according to the first image window and the second image window when the first image window and the second image window contain different numbers of pointer images (Step S 13 ); connecting, on the two-dimensional space, a mapping position of the first image sensor with mapping positions of two outermost edges of the pointer image in the first image window to form a first edge line and a second edge line, and connecting, on the two-dimensional space, a mapping position of the second image sensor with mapping positions of two outermost edges of the pointer image in the second image window to form a third edge line and a fourth edge line (Step S 14 ); calculating four first internal bisectors of a quadrilateral formed by the four edge lines (Step S 15 ); connecting, on the two-dimensional space, a mapping position of the image sensor acquiring more pointer images with mapping positions of a predetermined point of the pointer images in the image window acquired by the same image sensor to form two first position lines (Step S 16 ); defining cross points of the first position lines and the first internal bisectors as first possible positions (Step S 17 ); and comparing the first possible positions with a pair of previous correct positions to obtain a pair of current correct positions (Step S 18 ).
- two possible positions associated with the two internal bisectors of two opposite corners of the quadrilateral may be defined as a pair of first possible positions, and each pair of the first possible positions is then compared with the pair of previous correct positions pair by pair.
- the image sensors 11 and 11 ′ respectively acquire an image window W 11 and W 11 ′ at a sample time “t”, and one of the image windows W 11 and W 11 ′ includes only one pointer image (Steps S 10 , S 11 ).
- both image windows W 11 and W 11 ′ respectively acquired by the image sensors 11 and 11 ′ at a sample time “t−1” include two pointer images. That is, one of the pointers 81 and 82 does not block the other with respect to any image sensor at the sample time “t−1”.
- the processing unit 12 processes the image windows W 11 and W 11 ′ so as to identify whether the image windows W 11 and W 11 ′ contain an identical number of pointer images (Step S 12 ).
- the processing unit 12 identifies that the first image window W 11 and the second image window W 11 ′ contain different numbers of pointer images
- the processing unit 12 generates a two-dimensional space S ( FIG. 2 c ) according to the first image window W 11 and the second image window W 11 ′.
- the first image window W 11 contains two pointer images I 81 and I 82 while the second image window W 11 ′ contains only one pointer image I′ (Step S 13 ).
- the processing unit 12 obtains positions of the pointers 81 and 82 with respect to the touch surface 10 using the positioning method of the present invention.
- the processing unit 12 now respectively maps the pointers 81 and 82 to the pointer images 81 ′ and 82 ′ on the two-dimensional space S.
- the first image sensor 11 and the second image sensor 11 ′ are respectively mapped to mapping positions (0,0) and (960,0) on the two-dimensional space S. It is appreciated that, mapping positions of the image sensors on the two-dimensional space S are determined according to locations of the image sensors disposed on the touch surface 10 .
- the processing unit 12 connects a mapping position (0,0) of the first image sensor 11 on the two-dimensional space S with mapping positions of two outermost edges E 81 and E 82 of the pointer images I 81 and I 82 in the first image window W 11 on the two-dimensional space S so as to form a first edge line L 1 and a second edge line L 2 ; and connects a mapping position (960,0) of the second image sensor 11 ′ on the two-dimensional space S with mapping positions of two outermost edges E 81 ′ and E 82 ′ of the pointer image I′ in the second image window W 11 ′ on the two-dimensional space S so as to form a third edge line L 3 and a fourth edge line L 4 (Step S 14 ).
- the processing unit 12 calculates four first internal bisectors V 1 to V 4 of a quadrilateral ADBC formed by the first edge line L 1 to the fourth edge line L 4 , wherein the first internal bisector V 1 may be obtained by using the vectors AD and AC. Similarly, the internal bisectors V 2 to V 4 may be obtained in the same way (Step S 15 ).
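The vector arithmetic for an internal bisector can be sketched as follows: at a corner of the quadrilateral, the bisector direction is the sum of the unit vectors along the two sides meeting at that corner. The coordinates below are illustrative, not values from the patent.

```python
import math

def unit(v):
    """Normalize a 2-D vector to unit length."""
    n = math.hypot(v[0], v[1])
    return (v[0] / n, v[1] / n)

def internal_bisector(corner, side_end1, side_end2):
    """Internal bisector at `corner` of the two sides toward the given
    endpoints, returned as a (point, direction) pair.  The sum of the two
    unit side vectors bisects the angle between them."""
    u = unit((side_end1[0] - corner[0], side_end1[1] - corner[1]))
    w = unit((side_end2[0] - corner[0], side_end2[1] - corner[1]))
    return corner, (u[0] + w[0], u[1] + w[1])
```

At a right-angle corner with sides along the axes, for instance, the bisector direction comes out along the diagonal regardless of the side lengths.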
- the processing unit 12 connects a mapping position (0,0) of the image sensor acquiring more pointer images (i.e. the first image sensor 11 herein) on the two-dimensional space S with mapping positions (i.e. centers C 81 and C 82 of the pointer images) of a predetermined point (e.g. center point or center of gravity) of the pointer images in the first image window W 11 on the two-dimensional space S to form two first position lines PL 1 and PL 2 (Step S 16 ).
- the processing unit 12 defines four cross points of the first position lines PL 1 , PL 2 and the first internal bisectors V 1 to V 4 as four first possible positions P 1 to P 4 ; wherein two first possible positions associated with the two internal bisectors of two opposite corners of the quadrilateral ADBC may be defined as a pair of first possible positions.
- P 1 and P 2 may be defined as a pair of first possible positions
- P 3 and P 4 may be defined as another pair of first possible positions (Step S 17 ).
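Intersecting the position lines with the internal bisectors to obtain the cross points can be sketched as below. Lines are represented as (point, direction) pairs, which is an assumed representation; in practice only the intersections falling inside the quadrilateral would be kept as the four first possible positions.

```python
def intersect(p, d, q, e):
    """Intersection of line p + t*d with line q + s*e (2-D), or None."""
    cross = d[0] * e[1] - d[1] * e[0]
    if abs(cross) < 1e-12:
        return None  # parallel: no cross point
    t = ((q[0] - p[0]) * e[1] - (q[1] - p[1]) * e[0]) / cross
    return (p[0] + t * d[0], p[1] + t * d[1])

def cross_points(position_lines, bisectors):
    """All cross points of the position lines with the internal
    bisectors; a filtering step (not shown) would keep only those
    inside the quadrilateral."""
    pts = []
    for p, d in position_lines:
        for q, e in bisectors:
            hit = intersect(p, d, q, e)
            if hit is not None:
                pts.append(hit)
    return pts
```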
- the processing unit 12 compares the first possible positions P 1 to P 4 with a pair of previous correct positions determined in a previous sample time “t−1” of the first image sensor 11 and the second image sensor 11 ′ so as to determine a pair of current correct positions (Step S 18 ).
- the characteristic such as a distance, a moving direction, a moving speed or the like of the pair of previous correct positions and two pairs of first possible positions P 1 , P 2 and P 3 , P 4 may be respectively compared.
- the pair of previous correct positions has a shortest distance, a closest moving direction or a closest moving speed with one pair of the first possible positions
- the pair of first possible positions is identified as the current correct positions, such as P 3 and P 4 herein.
- the four first possible positions P 1 to P 4 may be respectively compared with the pair of previous correct positions to obtain two current correct positions.
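The comparison with the pair of previous correct positions, using the distance criterion, can be sketched as follows; the moving-direction and moving-speed criteria would be analogous. The scoring below (smallest total displacement over both assignments) is an assumed rule for illustration, not the patent's exact formula.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def pick_current_pair(candidate_pairs, previous_pair):
    """Return the candidate pair closest to the previous correct pair."""
    def cost(pair):
        # Try both assignments of the two candidates to the two
        # previous positions and keep the cheaper one.
        direct = dist(pair[0], previous_pair[0]) + dist(pair[1], previous_pair[1])
        swapped = dist(pair[0], previous_pair[1]) + dist(pair[1], previous_pair[0])
        return min(direct, swapped)
    return min(candidate_pairs, key=cost)
```

With two candidate pairs such as (P1, P2) and (P3, P4), the pair whose points moved least since the previous sample time is selected as the current correct positions.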
- Step S 15 may be performed in Step S 14 .
- FIG. 3 shows a schematic diagram of the touch system 1 ′ according to the second embodiment of the present invention including a touch surface 10 , a first image sensor 11 , a second image sensor 11 ′, a third image sensor 11 ′′ and a processing unit 12 .
- the touch system 1 ′ includes three image sensors in this embodiment.
- the processing unit 12 processes the image windows acquired by the image sensors to accordingly generate a two-dimensional space.
- the positioning method of the present invention is implemented by performing coordinate operation and vector arithmetic on the two-dimensional space. It is appreciated that, locations of the first image sensor 11 , the second image sensor 11 ′ and the third image sensor 11 ′′ are not limited to those shown in FIG. 3 .
- the third image sensor 11 ′′ may also be disposed at the lower left corner.
- FIG. 4 a shows a schematic diagram of the positioning method for a touch system according to a first aspect of the second embodiment of the present invention, in which the processing unit 12 generates a two-dimensional space S according to the image windows acquired by all image sensors and four corners of the two-dimensional space S are assumed as (0,0), (X,0), (0,Y) and (X,Y).
- This aspect is applied to the case that numbers of pointer images in the image windows acquired by two image sensors of the touch system 1 ′ are smaller than that acquired by the rest image sensor.
- the image windows acquired by the first image sensor 11 and the second image sensor 11 ′ include only one pointer image and the image window acquired by the third image sensor 11 ′′ includes two pointer images.
- This aspect is configured to obtain two pairs of possible positions or a pair of current correct positions.
- FIG. 4 b shows a flow chart of the positioning method according to the present aspect including the steps of: respectively acquiring an image window with three image sensors (Step S 21 ); identifying numbers of pointer images in the image windows (Step S 22 ); generating a two-dimensional space according to the three image windows when the numbers of pointer images in two of the image windows are smaller than that in the rest image window (Step S 23 ); connecting mapping positions of two image sensors acquiring fewer pointer images with mapping positions of two outermost edges of the pointer images in the image windows acquired by the same two image sensors on the two-dimensional space to form four edge lines (Step S 24 ); calculating four second internal bisectors of a quadrilateral formed by the edge lines (Step S 25 ); connecting a mapping position of the image sensor acquiring more pointer images with mapping positions of a predetermined point of the pointer images in the image window acquired by the same image sensor on the two-dimensional space to form two second position lines (Step S 26 ); defining cross points of the second position lines and the second internal bisectors as four second possible positions (Step S 27 ); and comparing the second possible positions with other possible positions or with a pair of previous correct positions to obtain a pair of current correct positions (Step S 28 ).
- the image sensors 11 , 11 ′ and 11 ′′ respectively acquire an image window at a sample time “t”, and two of the acquired image windows contain only one pointer image (Step S 21 ).
- the image windows respectively acquired by the image sensors 11 , 11 ′ and 11 ′′ at a sample time “t−1” all include two pointer images. That is, one of the pointers 81 and 82 does not block the other with respect to any image sensor at the sample time “t−1”.
- the processing unit 12 identifies numbers of pointer images in the image windows (Step S 22 ).
- When the processing unit 12 identifies that the numbers of pointer images in two of the image windows are smaller than that in the rest image window, the processing unit 12 generates a two-dimensional space S according to the three image windows. For example, the image windows acquired by the first image sensor 11 and the second image sensor 11 ′ contain only one pointer image while the image window acquired by the third image sensor 11 ′′ contains two pointer images (Step S 23 ).
- the processing unit 12 now maps the pointers 81 and 82 to the pointer images 81 ′ and 82 ′ on the two-dimensional space S.
- mapping positions of the image sensors on the two-dimensional space S are determined according to locations of the image sensors disposed on the touch surface 10 .
- the processing unit 12 connects mapping positions (0,0) and (X,0) of two image sensors acquiring fewer pointer images (i.e. the first image sensor 11 and second image sensor 11 ′ herein) respectively with mapping positions of two outermost edges of the pointer image in the image windows acquired by the same two image sensors on the two-dimensional space S to form four edge lines L 1 to L 4 (Step S 24 ).
- the processing unit 12 calculates four second internal bisectors V 1 to V 4 of a quadrilateral ADBC formed by the first edge line L 1 to the fourth edge line L 4 (Step S 25 ).
- the processing unit 12 connects a mapping position (X,Y) of the image sensor acquiring more pointer images (i.e. the third image sensor 11 ′′ herein) with mapping positions of a predetermined point of the pointer images in the image window acquired by the same image sensor on the two-dimensional space S to form two second position lines PL 1 and PL 2 (Step S 26 ).
- the processing unit 12 defines four cross points of the second position lines PL 1 , PL 2 and the second internal bisectors V 1 to V 4 as four second possible positions P 1 to P 4 (Step S 27 ).
- the processing unit 12 may obtain a pair of current correct positions by comparing the second possible positions P 1 to P 4 obtained in this aspect with other possible positions, which will be obtained in other aspects hereinafter; or by comparing the second possible positions P 1 to P 4 with a pair of previous correct positions (as illustrated in the first embodiment) determined in a previous sample time “t−1” of the first image sensor 11 to the third image sensor 11 ′′ (Step S 28 ). For example, the two of the second possible positions P 1 to P 4 having the shortest distances, or the closest moving directions or moving speeds, with respect to the pair of previous correct positions are identified as the pair of current correct positions, such as P 1 and P 2 herein.
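The comparison in Step S 28 can be sketched as follows, using only the shortest-distance criterion (moving direction and speed could be weighed in similarly); all candidate coordinates here are hypothetical:

```python
import math
from itertools import permutations

def pick_current_positions(candidates, previous_pair):
    """Among the possible positions, choose the ordered pair whose total
    distance to the pair of previous correct positions is smallest."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    return min(permutations(candidates, 2),
               key=lambda pair: dist(pair[0], previous_pair[0])
                              + dist(pair[1], previous_pair[1]))

# Hypothetical second possible positions P1..P4 and previous correct pair:
P = [(0.30, 0.40), (0.60, 0.50), (0.30, 0.50), (0.60, 0.40)]
prev = ((0.31, 0.41), (0.59, 0.52))
print(pick_current_positions(P, prev))  # → ((0.3, 0.4), (0.6, 0.5))
```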
- FIG. 4 c shows a schematic diagram of the positioning method for a touch system 1 ′ according to a second aspect of the second embodiment of the present invention.
- This aspect is also applied to the case that numbers of pointer images in the image windows acquired by two image sensors of the touch system 1 ′ are smaller than that acquired by the rest image sensor.
- This aspect is configured to obtain two pairs of possible positions or a pair of current correct positions.
- FIG. 4 d shows a flow chart of the positioning method according to the present aspect, including the steps of: respectively acquiring an image window with three image sensors (Step S 31 ); identifying the numbers of pointer images in the image windows (Step S 32 ); generating a two-dimensional space according to the three image windows when the numbers of pointer images in two of the image windows are smaller than that in the rest image window (Step S 33 ); connecting a mapping position of the image sensor acquiring more pointer images with mapping positions of two outermost edges of the pointer images in the image window acquired by the same image sensor on the two-dimensional space to form two edge lines, and connecting a mapping position of one of the two image sensors acquiring fewer pointer images with mapping positions of two outermost edges of the pointer image in the image window acquired by the same image sensor on the two-dimensional space to form another two edge lines (Step S 34 ); calculating four internal bisectors of a quadrilateral formed by the four edge lines (Step S 35 ); connecting a mapping position of the image sensor acquiring more pointer images with mapping positions of a predetermined point of the pointer images in the image window acquired by the same image sensor on the two-dimensional space to form two position lines (Step S 36 ); defining cross points of the position lines and the internal bisectors as possible positions (Step S 37 ); and comparing the possible positions with a pair of previous correct positions to obtain a pair of current correct positions (Step S 38 ).
- the image sensors 11 , 11 ′ and 11 ′′ respectively acquire an image window at a sample time “t”, and two of the acquired image windows contain only one pointer image (Step S 31 ).
- image windows respectively acquired by the image sensors 11 , 11 ′ and 11 ′′ at a sample time “t−1” all include two pointer images.
- the processing unit 12 identifies numbers of pointer images in the image windows (Step S 32 ). When the numbers of pointer images in two of the image windows are identified to be smaller than that in the rest image window, the processing unit 12 generates a two-dimensional space S according to the three image windows (Step S 33 ), wherein the pointers 81 and 82 are respectively mapped to the pointer images 81 ′ and 82 ′ on the two-dimensional space S.
- the processing unit 12 obtains possible positions (Steps S 34 to S 37 ) or a pair of current correct positions (Step S 38 ) according to the image sensor acquiring more pointer images (i.e. the third image sensor 11 ′′ herein) and one of the two image sensors acquiring fewer pointer images (i.e. the first image sensor 11 or the second image sensor 11 ′ herein) by using the method illustrated in the first embodiment; details thereof were already illustrated in the first embodiment and thus are not repeated herein.
- the processing unit 12 may compare the possible positions obtained according to a current frame with the first possible positions of the first embodiment or the second possible positions of the first aspect of the second embodiment to obtain a pair of current correct positions, for example by comparing the shortest distances between those possible positions. Or the processing unit 12 may compare the possible positions obtained in this aspect with a pair of previous correct positions determined in a previous sample time “t−1” of the first image sensor 11 to the third image sensor 11 ′′ to obtain a pair of current correct positions.
- FIGS. 5 a to 5 d show schematic diagrams of the positioning method for a touch system 1 ′ according to a third aspect of the second embodiment of the present invention, in which the processing unit 12 generates a two-dimensional space S according to the image windows acquired by all image sensors and four corners of the two-dimensional space S are assumed as (0,0), (X,0), (0,Y) and (X,Y).
- This aspect is applied to the case that numbers of pointer images in the image windows acquired by two image sensors of the touch system 1 ′ are larger than that acquired by the rest image sensor.
- the image windows acquired by the first image sensor 11 and the third image sensor 11 ′′ contain two pointer images while the image window acquired by the second image sensor 11 ′ contains only one pointer image.
- This aspect is configured to obtain two pairs of possible positions or a pair of current correct positions.
- FIG. 5 e shows a flow chart of the positioning method according to the present aspect, including the steps of: respectively acquiring an image window with three image sensors (Step S 41 ); identifying the numbers of pointer images in the image windows (Step S 42 ); generating a two-dimensional space according to the three image windows when the numbers of pointer images in two of the image windows are larger than that in the rest image window (Step S 43 ); connecting a mapping position of one of the two image sensors acquiring more pointer images with mapping positions of two outermost edges of the pointer images in the image window acquired by the same image sensor on the two-dimensional space to form two edge lines, and connecting a mapping position of the image sensor acquiring fewer pointer images with mapping positions of two outermost edges of the pointer image in the image window acquired by the same image sensor on the two-dimensional space to form another two edge lines (Step S 44 ); calculating four third internal bisectors of a quadrilateral formed by the four edge lines (Step S 45 ); connecting a mapping position of one of the two image sensors acquiring more pointer images with mapping positions of a predetermined point of the pointer images in the image window acquired by the same image sensor on the two-dimensional space to form two third position lines (Step S 46 ); defining cross points of the third position lines and the third internal bisectors as third possible positions (Step S 47 ); and comparing the third possible positions with a pair of previous correct positions to obtain a pair of current correct positions (Step S 48 ).
- the image sensors 11 , 11 ′ and 11 ′′ respectively acquire an image window at a sample time “t”, and one of the acquired image windows contains only one pointer image (Step S 41 ).
- image windows respectively acquired by the image sensors 11 , 11 ′ and 11 ′′ at a sample time “t−1” all include two pointer images.
- the processing unit 12 identifies numbers of pointer images in the image windows (Step S 42 ).
- When the processing unit 12 identifies that the numbers of pointer images in two of the image windows are larger than that in the rest image window, it generates a two-dimensional space S according to the three image windows (Step S 43 ).
- the processing unit 12 respectively maps the pointers 81 and 82 to the pointer images 81 ′ and 82 ′ on the two-dimensional space S.
- the first image sensor 11 , the second image sensor 11 ′ and the third image sensor 11 ′′ are respectively mapped to mapping positions (0,0), (X,0) and (X,Y) on the two-dimensional space S.
- the processing unit 12 connects a mapping position (0,0) or (X,Y) of one of the two image sensors acquiring more pointer images (i.e. the first image sensor 11 in FIGS. 5 a and 5 b ; the third image sensor 11 ′′ in FIGS. 5 c and 5 d ) with mapping positions of two outermost edges of the pointer images in the image window acquired by the same image sensor on the two-dimensional space S to form two edge lines L 1 and L 2 , and connects a mapping position (X,0) of the image sensor acquiring fewer pointer images (i.e. the second image sensor 11 ′ herein) with mapping positions of two outermost edges of the pointer image in the image window acquired by the same image sensor on the two-dimensional space S to form another two edge lines L 3 and L 4 (Step S 44 ).
- the processing unit 12 calculates four third internal bisectors V 1 to V 4 of a quadrilateral ADBC formed by the four edge lines L 1 to L 4 (Step S 45 ).
- the processing unit 12 connects a mapping position (0,0) or (X,Y) of one of the two image sensors acquiring more pointer images (i.e. the third image sensor 11 ′′ in FIGS. 5 a and 5 d ; the first image sensor 11 in FIGS. 5 b and 5 c ) with mapping positions of a predetermined point of the pointer images in the image window acquired by the same image sensor on the two-dimensional space S to form two third position lines PL 1 and PL 2 (Step S 46 ).
- Four cross points P 1 to P 4 of the third position lines PL 1 , PL 2 and the third internal bisectors V 1 to V 4 are defined as third possible positions (Step S 47 ).
- the processing unit 12 may compare the third possible positions P 1 to P 4 with the first possible positions of the first embodiment, the second possible positions of the first aspect of the second embodiment or the possible positions of the second aspect of the second embodiment to obtain a pair of current correct positions. Or the processing unit 12 may compare the third possible positions with a pair of previous correct positions (as illustrated in the first embodiment) determined in a previous sample time “t−1” of the first image sensor 11 to the third image sensor 11 ′′ so as to obtain a pair of current correct positions (Step S 48 ). It is appreciated that this aspect may also obtain two pairs of possible positions according to two image sensors acquiring different numbers of pointer images (e.g. the first image sensor 11 and the second image sensor 11 ′, or the second image sensor 11 ′ and the third image sensor 11 ′′); details thereof were already illustrated in the first embodiment and thus are not repeated herein.
- FIG. 6 a shows a schematic diagram of the positioning method for a touch system 1 ′ according to a fourth aspect of the second embodiment of the present invention, in which the processing unit 12 generates a two-dimensional space S according to the image windows acquired by all image sensors and four corners of the two-dimensional space S are assumed as (0,0), (X,0), (0,Y) and (X,Y).
- This aspect is applied to the case that numbers of pointer images in the image windows acquired by two image sensors of the touch system 1 ′ are larger than that acquired by the rest image sensor.
- the image windows acquired by the first image sensor 11 and the second image sensor 11 ′ contain two pointer images while the image window acquired by the third image sensor 11 ′′ contains only one pointer image.
- This aspect is configured to obtain two pairs of possible positions or a pair of current correct positions.
- FIG. 6 b shows a flow chart of the positioning method according to the present aspect including the steps of: respectively acquiring an image window with three image sensors (Step S 51 ); identifying numbers of pointer images in the image windows (Step S 52 ); generating a two-dimensional space according to the three image windows when the numbers of pointer images in two of the image windows are larger than that in the rest image window (Step S 53 ); connecting mapping positions of two image sensors acquiring more pointer images with mapping positions of a predetermined point of the pointer images in the image windows acquired by the same two image sensors on the two-dimensional space to form a quadrilateral (Step S 54 ); defining four corners of the quadrilateral as fourth possible positions (Step S 55 ); and comparing the fourth possible positions with a pair of previous correct positions to obtain a pair of current correct positions (Step S 56 ), wherein the pair of previous correct positions is determined in a previous sample time of the first image sensor to the third image sensor.
- the image sensors 11 , 11 ′ and 11 ′′ respectively acquire an image window at a sample time “t”, and one of the acquired image windows contains only one pointer image (Step S 51 ).
- image windows respectively acquired by the image sensors 11 , 11 ′ and 11 ′′ at a sample time “t−1” all include two pointer images.
- the processing unit 12 identifies numbers of pointer images in the image windows (Step S 52 ).
- When the numbers of pointer images in two of the image windows are identified to be larger than that in the rest image window, the processing unit 12 generates a two-dimensional space S according to the three image windows (Step S 53 ), wherein the pointers 81 and 82 are respectively mapped to the pointer images 81 ′ and 82 ′ on the two-dimensional space S; and the first image sensor 11 , the second image sensor 11 ′ and the third image sensor 11 ′′ are respectively mapped to mapping positions (0,0), (X,0) and (X,Y) on the two-dimensional space S.
- the processing unit 12 connects mapping positions (0,0) and (X,0) of the two image sensors acquiring more pointer images (i.e. the first image sensor 11 and the second image sensor 11 ′) respectively with mapping positions of a predetermined point (e.g. the center point or the center of gravity) of the pointer images in the image windows acquired by the same two image sensors on the two-dimensional space S to form a quadrilateral ADBC (Step S 54 ).
- Four corners of the quadrilateral ADBC are defined as fourth possible positions (Step S 55 ).
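Steps S 54 and S 55 can be sketched as follows: the position lines from the two sensors seeing both pointers are intersected pairwise, yielding the two true touch positions plus two "ghost" intersections as the four corners. The helper functions and the numeric center coordinates are illustrative assumptions:

```python
def line_through(p, q):
    """Implicit line a*x + b*y = c through points p and q."""
    a, b = q[1] - p[1], p[0] - q[0]
    return a, b, a * p[0] + b * p[1]

def intersect(l1, l2):
    """Cross point of two implicit lines (assumed non-parallel)."""
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    d = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / d, (a1 * c2 - a2 * c1) / d)

# Mapping positions of the two sensors seeing both pointers (X = 1):
S1, S2 = (0.0, 0.0), (1.0, 0.0)
# Hypothetical mapped center points of the two pointer images per sensor:
lines_s1 = [line_through(S1, c) for c in [(0.30, 0.40), (0.60, 0.50)]]
lines_s2 = [line_through(S2, c) for c in [(0.30, 0.40), (0.60, 0.50)]]
# Four corners: the two true positions plus two "ghost" intersections.
corners = [intersect(l1, l2) for l1 in lines_s1 for l2 in lines_s2]
```

The subsequent comparison against previous correct positions is what rejects the two ghost corners.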
- the processing unit 12 may compare the fourth possible positions P 1 to P 4 obtained in this aspect with the possible positions obtained in the first embodiment or in every aspect of the second embodiment to obtain a pair of current correct positions. Or the processing unit 12 may compare the fourth possible positions obtained in this aspect with a pair of previous correct positions (as illustrated in the first embodiment) determined in a previous sample time “t−1” of the first image sensor 11 to the third image sensor 11 ′′ so as to obtain a pair of current correct positions (Step S 56 ). It is appreciated that this aspect may also obtain two pairs of possible positions according to two image sensors acquiring different numbers of pointer images (e.g. the second image sensor 11 ′ and the third image sensor 11 ′′); details thereof were already illustrated in the first embodiment and thus are not repeated herein.
- the positioning method for a touch system of the present invention may obtain a pair of current correct positions by comparing two pairs of possible positions in a current frame with a pair of previous correct positions in a previous frame; or obtain a pair of current correct positions by comparing two pairs of possible positions in a current frame respectively obtained from different embodiments or aspects described above.
- the previous frame is an effective frame previous to the current frame. For example, if an immediately previous frame of the current frame has a poor image quality so that it is identified as an invalid frame, the previous frame of the current frame may be the second or the nth frame previous to the current frame.
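One way to select such an effective previous frame is to scan backwards past invalid frames, as in this hypothetical sketch (the quality check is a stand-in for whatever validity test the system applies):

```python
def effective_previous_frame(frames, current_index, is_valid):
    """Walk backwards from the frame before `current_index` and return
    the nearest frame judged valid, or None if no valid frame exists."""
    for i in range(current_index - 1, -1, -1):
        if is_valid(frames[i]):
            return frames[i]
    return None

# `frames` here are dicts with a hypothetical 'quality' score:
frames = [{'id': 0, 'quality': 0.9},
          {'id': 1, 'quality': 0.2},   # poor image quality: invalid
          {'id': 2, 'quality': 0.8}]
prev = effective_previous_frame(frames, 2, lambda f: f['quality'] > 0.5)
print(prev['id'])  # → 0
```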
- the positioning method of the present invention is also applicable to the positioning of more than two pointers.
- the present invention is not limited to comparing a pair of possible positions at the same time, and every possible position may be sequentially and separately compared so as to obtain the current correct positions.
- four possible positions (P 1 , P 2 , P 3 , P 4 ) obtained in any embodiment or aspect above may be respectively compared with another four possible positions (P 1 ′, P 2 ′, P 3 ′, P 4 ′) obtained in another embodiment or aspect.
- positions, moving speeds and/or moving directions of the four possible positions (P 1 , P 2 , P 3 , P 4 ) obtained in any embodiment or aspect above may be compared with those of a pair of previous correct positions so as to obtain a pair of current correct positions.
- the present invention further provides a touch system ( FIGS. 2 a and 3 ) and a positioning method therefor ( FIGS. 2 d , 4 b , 4 d , 5 e and 6 b ) that can correctly trace and position two-dimensional coordinates of a plurality of pointers with respect to a touch system.
Abstract
The present invention provides a positioning method for a touch system that obtains a pair of current correct positions according to the following steps: obtaining two pairs of possible positions from a current frame and comparing them with a pair of previous correct positions obtained from a previous frame; or comparing pairs of possible positions, obtained from the current frame, with each other. The present invention further provides a touch system.
Description
This application claims the priority benefit of Taiwan Patent Application Serial Number 099119224, filed on Jun. 14, 2010, the full disclosure of which is incorporated herein by reference.
1. Field of the Invention
This invention generally relates to a touch system and, more particularly, to an optical touch system and a positioning method therefor.
2. Description of the Related Art
Referring to FIGS. 1 a and 1 b, FIG. 1 a shows a schematic diagram of an optical touch system and FIG. 1 b shows a schematic diagram of the image windows acquired by the two image sensors included in the touch system shown in FIG. 1 a.
The touch system 9 includes a touch surface 90 and two image sensors 91 and 91′. The image sensors 91 and 91′ are configured to acquire image windows W91 and W91′ respectively looking across the touch surface 90. When a finger 81 is hovering above or touches the touch surface 90, the image windows W91 and W91′ acquired by the image sensors 91 and 91′ respectively include finger images I81 and I81′ of the finger 81. A processing unit 92 can calculate a two-dimensional coordinate of the finger 81 with respect to the touch surface 90 according to a one-dimensional position of the finger image I81 in the image window W91 and a one-dimensional position of the finger image I81′ in the image window W91′.
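The two-sensor triangulation described above can be sketched as follows, assuming a simplified linear pixel-to-angle model, sensors mapped to (0, 0) and (X, 0), and a 90-degree field of view; these numeric assumptions are illustrative and not taken from the patent:

```python
import math

def pixel_to_angle(pixel, width, fov=math.pi / 2):
    """Map a 1-D pixel position in an image window to a viewing angle,
    assuming a linear pixel-to-angle relationship (an idealization)."""
    return pixel / width * fov

def triangulate(px0, px1, width, X=1.0):
    """Intersect the ray from a sensor at (0, 0) with the ray from a
    sensor at (X, 0) to recover the pointer's 2-D coordinate."""
    a0 = pixel_to_angle(px0, width)            # measured from the x-axis
    a1 = math.pi - pixel_to_angle(px1, width)  # sensor 1 looks back across
    # Ray 0: y = tan(a0) * x ;  Ray 1: y = tan(a1) * (x - X)
    t0, t1 = math.tan(a0), math.tan(a1)
    x = t1 * X / (t1 - t0)
    return (x, t0 * x)
```

With both pointer images at the pixel center of a 640-pixel window, the rays meet at roughly the middle of the touch surface.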
However, when a plurality of fingers hover above or touch the touch surface 90 simultaneously, one finger may block another finger or fingers with respect to some of the image sensors. For example, in FIG. 1 a, when two fingers 81 and 82 are hovering above or touch the touch surface 90, the image sensor 91 acquires images of the fingers 81 and 82 along a route “a” and the image sensor 91′ acquires images of the fingers 81 and 82 along routes “b” and “c” respectively. To the image sensor 91, as the finger 81 blocks the finger 82, the image window W91 acquired by the image sensor 91 only includes a combined finger image I81+I82 (i.e. a merged image of the finger images I81 and I82). Therefore, the processing unit 92 is not able to correctly calculate the two-dimensional coordinates of the different fingers with respect to the touch surface 90 according to the finger images I81′, I82′ and I81+I82.
Accordingly, it is necessary to provide a positioning method for an optical touch system that can correctly position a plurality of pointers.
The present invention provides a touch system and a positioning method therefor configured to correctly obtain two-dimensional coordinates of a plurality of pointers with respect to a touch system.
The present invention provides a positioning method for a touch system. The touch system includes a first image sensor and a second image sensor for acquiring image windows looking across a touch surface and containing images of two pointers operating above the touch surface. The positioning method includes the steps of: acquiring a first image window with the first image sensor; acquiring a second image window with the second image sensor; identifying numbers of pointer images in the first image window and the second image window; generating a two-dimensional space according to the first image window and the second image window when the first image window and the second image window contain different numbers of pointer images; connecting, on the two-dimensional space, a mapping position of the first image sensor with mapping positions of two outermost edges of the pointer image in the first image window and connecting, on the two-dimensional space, a mapping position of the second image sensor with mapping positions of two outermost edges of the pointer image in the second image window to form a quadrilateral; calculating four first internal bisectors of the quadrilateral; and connecting, on the two-dimensional space, a mapping position of the image sensor acquiring more pointer images with mapping positions of a predetermined point of the pointer images in the image window acquired by the same image sensor to intersect with the first internal bisectors thereby generating first possible positions.
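The whole procedure of this embodiment can be sketched in Python as below. All geometry helpers and numeric edge/center coordinates are illustrative assumptions (the patent specifies the construction, not concrete values), and only cross points falling inside the quadrilateral are kept here as a simplification:

```python
import math

def line(p, q):
    """Implicit line a*x + b*y = c through points p and q."""
    a, b = q[1] - p[1], p[0] - q[0]
    return a, b, a * p[0] + b * p[1]

def intersect(l1, l2):
    """Cross point of two implicit lines, or None if parallel."""
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    d = a1 * b2 - a2 * b1
    if abs(d) < 1e-12:
        return None
    return ((c1 * b2 - c2 * b1) / d, (a1 * c2 - a2 * c1) / d)

def bisector_at(vertex, p_prev, p_next):
    """Internal bisector at `vertex`: direction is the sum of the unit
    vectors along the two adjacent sides, returned as an implicit line."""
    def unit(q):
        dx, dy = q[0] - vertex[0], q[1] - vertex[1]
        n = math.hypot(dx, dy)
        return dx / n, dy / n
    (ux, uy), (vx, vy) = unit(p_prev), unit(p_next)
    return line(vertex, (vertex[0] + ux + vx, vertex[1] + uy + vy))

def inside(p, poly):
    """True if p lies inside (or on the border of) a convex polygon."""
    cs = []
    for i in range(len(poly)):
        q, r = poly[i], poly[(i + 1) % len(poly)]
        cs.append((r[0] - q[0]) * (p[1] - q[1]) - (r[1] - q[1]) * (p[0] - q[0]))
    return all(c >= -1e-9 for c in cs) or all(c <= 1e-9 for c in cs)

# Sensor mapping positions on the two-dimensional space (X = Y = 1):
S1, S2 = (0.0, 0.0), (1.0, 0.0)
# Hypothetical mapped outermost-edge positions of the pointer images:
L1, L2 = line(S1, (0.30, 0.55)), line(S1, (0.65, 0.45))  # first sensor
L3, L4 = line(S2, (0.25, 0.45)), line(S2, (0.60, 0.55))  # second sensor

# Quadrilateral formed by the four edge lines, corners in cyclic order:
A, D = intersect(L1, L3), intersect(L1, L4)
B, C = intersect(L2, L4), intersect(L2, L3)
quad = [A, D, B, C]
bisectors = [bisector_at(quad[i], quad[i - 1], quad[(i + 1) % 4])
             for i in range(4)]

# Position lines from the sensor seeing two pointer images, through a
# predetermined point (e.g. the center) of each image (hypothetical):
PL1, PL2 = line(S2, (0.35, 0.50)), line(S2, (0.55, 0.50))

# First possible positions: cross points of the position lines and the
# internal bisectors that fall inside the quadrilateral.
possible = [p for pl in (PL1, PL2) for v in bisectors
            if (p := intersect(pl, v)) is not None and inside(p, quad)]
```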
In another aspect, a pair of previous correct positions of the pointers with respect to the touch surface was determined in a previous sample time (image capture time) before the first image sensor acquires the first image window and the second image sensor acquires the second image window. The positioning method further includes the steps of: comparing the first possible positions with the pair of previous correct positions to obtain a pair of current correct positions.
In another aspect, the touch system further includes a third image sensor for acquiring image windows looking across the touch surface and containing images of the two pointers. The positioning method further includes the steps of: acquiring a third image window with the third image sensor; identifying the numbers of pointer images in the first, second and third image windows; mapping the third image window to the two-dimensional space when the numbers of pointer images in two of the image windows are smaller than that in the rest image window; connecting, on the two-dimensional space, mapping positions of two image sensors acquiring fewer pointer images with mapping positions of two outermost edges of the pointer image in the image windows acquired by the same two image sensors to form a quadrilateral; calculating four second internal bisectors of the quadrilateral; connecting, on the two-dimensional space, a mapping position of the image sensor acquiring more pointer images with mapping positions of a predetermined point of the pointer images in the image window acquired by the same image sensor to intersect with the second internal bisectors thereby generating second possible positions; and comparing the first possible positions with the second possible positions to obtain a pair of current correct positions.
In another aspect, the touch system further includes a third image sensor for acquiring image windows looking across the touch surface and containing images of the two pointers. The positioning method further includes the steps of: acquiring a third image window with the third image sensor; identifying the numbers of pointer images in the first, second and third image windows; mapping the third image window to the two-dimensional space when the numbers of pointer images in two of the image windows are larger than that in the rest image window; connecting, on the two-dimensional space, a mapping position of one of two image sensors acquiring more pointer images with mapping positions of two outermost edges of the pointer images in the image window acquired by the same image sensor and connecting, on the two-dimensional space, a mapping position of the image sensor acquiring fewer pointer images with mapping positions of two outermost edges of the pointer image in the image window acquired by the same image sensor to form a quadrilateral; calculating four third internal bisectors of the quadrilateral; connecting, on the two-dimensional space, a mapping position of one of two image sensors acquiring more pointer images with mapping positions of a predetermined point of the pointer images in the image window acquired by the same image sensor to intersect with the third internal bisectors thereby generating third possible positions; and comparing the first possible positions with the third possible positions to obtain a pair of current correct positions.
In another aspect, the touch system further includes a third image sensor for acquiring image windows looking across the touch surface and containing images of the two pointers. The positioning method further includes the steps of: acquiring a third image window with the third image sensor; identifying the numbers of pointer images in the first, second and third image windows; mapping the third image window to the two-dimensional space when the numbers of pointer images in two of the image windows are larger than that in the rest image window; connecting, on the two-dimensional space, mapping positions of two image sensors acquiring more pointer images with mapping positions of a predetermined point of the pointer images in the image windows acquired by the same two image sensors to form a quadrilateral; defining four corners of the quadrilateral as fourth possible positions of the pointers with respect to the touch surface; and comparing the first possible positions with the fourth possible positions to obtain a pair of current correct positions.
The present invention further provides a positioning method for a touch system. The touch system includes a first image sensor, a second image sensor and a third image sensor for acquiring image windows looking across a touch surface and containing images of two pointers operating above the touch surface. The positioning method includes the steps of: respectively acquiring an image window with three image sensors; identifying numbers of pointer images in the image windows; generating a two-dimensional space according to the three image windows; executing the following steps when the numbers of pointer images in two of the image windows are smaller than that in the rest image window: connecting, on the two-dimensional space, mapping positions of two image sensors acquiring fewer pointer images with mapping positions of two outermost edges of the pointer image in the image windows acquired by the same two image sensors to form a quadrilateral; calculating four second internal bisectors of the quadrilateral; and connecting, on the two-dimensional space, a mapping position of the image sensor acquiring more pointer images with mapping positions of a predetermined point of the pointer images in the image window acquired by the same image sensor to intersect with the second internal bisectors thereby generating second possible positions; and executing the following steps when the numbers of pointer images in two of the image windows are larger than that in the rest image window: connecting, on the two-dimensional space, a mapping position of one of two image sensors acquiring more pointer images with mapping positions of two outermost edges of the pointer images in the image window acquired by the same image sensor and connecting, on the two-dimensional space, a mapping position of the image sensor acquiring fewer pointer images with mapping positions of two outermost edges of the pointer image in the image window acquired by the same image sensor to form a quadrilateral;
calculating four third internal bisectors of the quadrilateral; and connecting, on the two-dimensional space, a mapping position of one of two image sensors acquiring more pointer images with mapping positions of a predetermined point of the pointer images in the image window acquired by the same image sensor to intersect with the third internal bisectors thereby generating third possible positions.
In another aspect, a pair of previous correct positions of the pointers with respect to the touch surface was determined in a previous sample time (image capture time) before the image sensors acquire the image windows. The positioning method further includes the steps of: comparing the second possible positions with the pair of previous correct positions to obtain a pair of current correct positions when the numbers of pointer images in two of the image windows are smaller than that in the rest image window; and comparing the third possible positions with the pair of previous correct positions to obtain a pair of current correct positions when the numbers of pointer images in two of the image windows are larger than that in the rest image window.
In another aspect, the positioning method further includes the steps of: selecting two image sensors acquiring different numbers of pointer images; connecting, on the two-dimensional space, mapping positions of the two image sensors respectively with mapping positions of two outermost edges of the pointer images in the image windows acquired by the same two image sensors to form a quadrilateral; calculating four first internal bisectors of the quadrilateral; and connecting, on the two-dimensional space, a mapping position of the one of the two image sensors acquiring more pointer images with mapping positions of a predetermined point of the pointer images in the image window acquired by the same image sensor to intersect with the first internal bisectors thereby generating first possible positions, wherein the second possible positions are compared with the first possible positions to obtain a pair of current correct positions when the numbers of pointer images in two of the image windows are smaller than that in the rest image window, and the third possible positions are compared with the first possible positions to obtain a pair of current correct positions when the numbers of pointer images in two of the image windows are larger than that in the rest image window.
In another aspect, when the numbers of pointer images in two of the image windows are larger than that in the rest image window, the positioning method further includes the steps of: connecting, on the two-dimensional space, mapping positions of two image sensors acquiring more pointer images with mapping positions of a predetermined point of the pointer images in the image windows acquired by the same two image sensors to form a quadrilateral; defining four corners of the quadrilateral as fourth possible positions; and comparing the third possible positions with the fourth possible positions to obtain a pair of current correct positions.
The present invention further provides a touch system including a touch surface, at least two image sensors and a processing unit. A plurality of pointers are operated above the touch surface to accordingly control the touch system. The image sensors are configured to acquire image windows looking across the touch surface and containing images of the pointers operating above the touch surface. The processing unit generates a two-dimensional space according to the image windows acquired by the image sensors, obtains a quadrilateral and four internal bisectors of the quadrilateral by connecting mapping positions of the image sensors with mapping positions of two outermost edges of the pointer image in the image windows acquired by the image sensors on the two-dimensional space, and connects a mapping position of the image sensor acquiring more pointer images with mapping positions of a predetermined point of the pointer images in the image window acquired by the same image sensor to intersect with the internal bisectors thereby generating possible positions.
In the touch system and positioning method therefor of the present invention, when the number of pointer images in the image window acquired by at least one image sensor is equal to an actual number of the pointers, correct positions of all pointers can still be calculated even though the number of pointer images in the image window acquired by the rest image sensor is smaller than the actual number of the pointers.
In the touch system and positioning method therefor of the present invention, two-dimensional information, such as two-dimensional coordinates, edge lines, position lines and internal bisectors, processed by the processing unit is mapped from the one-dimensional image windows acquired by a plurality of image sensors, wherein the internal bisectors may be calculated by using vector arithmetic from the four sides of the quadrilateral.
Other objects, advantages, and novel features of the present invention will become more apparent from the following detailed description when taken in conjunction with the accompanying drawings.
It should be noted that, wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts.
In addition, only a part of components are shown in the drawings and the components that are not directly pertinent to the present invention are omitted.
A touch system of the present invention includes at least two image sensors. A positioning method for the touch system is applicable to a touch system controlled by a user (not shown) with a plurality of pointers, in which one pointer blocks another pointer with respect to at least one image sensor. That is, the numbers of pointer images contained in the image windows acquired by the plurality of image sensors are different from the actual numbers of the pointers. In addition, when the numbers of pointer images contained in the image windows acquired by the plurality of image sensors are equal to the actual numbers of the pointers, the two-dimensional coordinate of every pointer may be traced by using other conventional methods.
Please refer to FIG. 2 a, which shows a schematic diagram of the touch system according to the first embodiment of the present invention. The touch system 1 includes a touch surface 10, a first image sensor 11, a second image sensor 11′ and a processing unit 12. The touch surface 10 may be a white board, a touch screen or the surface of a suitable object. When the touch surface 10 is a touch screen, the touch surface 10 may also be configured to display the operation status, such as the motion of a cursor or a predetermined function (e.g. screen rolling, object zooming or the like). In addition, the touch system 1 may further include a display for displaying the operation status.
The first image sensor 11 and the second image sensor 11′ may be, for example, CCD image sensors, CMOS image sensors or the like, and are configured to synchronously acquire an image window looking across the touch surface 10 within each image capture time. The image sensors may have the ability to block visible light so as to eliminate interference from ambient light; for example, but not limited to, an optical bandpass filter may be disposed in front of the image sensors. The processing unit 12 is configured to process the image windows acquired by the first image sensor 11 and the second image sensor 11′, and to trace and position the pointers, such as to calculate the two-dimensional coordinates of the pointers 81 and 82 with respect to the touch surface 10. In this invention, the pointer may be a finger, a touch pen, a rod or another suitable object. It is appreciated that the locations of the first image sensor 11 and the second image sensor 11′ are not limited to those shown in FIG. 2 a. For example, the first image sensor 11 may be disposed at the lower left corner and the second image sensor 11′ may be disposed at the lower right corner.
Please refer to FIG. 2 b, which shows an image window W11 acquired by the first image sensor 11 and an image window W11′ acquired by the second image sensor 11′ shown in FIG. 2 a. The image window W11 contains two pointer images I81 and I82 respectively corresponding to the pointers 81 and 82, and has a numerical range, such as 0 to 960, to form a one-dimensional space. The image window W11′ contains a pointer image I′ (a combined image) corresponding to the pointers 81 and 82, and has a numerical range, such as 0 to 960, to form another one-dimensional space. It is appreciated that the numerical range may be determined by an actual size of the touch surface 10.
From the viewpoint of the second image sensor 11′, the pointer 81 blocks the pointer 82, so the image window W11′ in FIG. 2 b includes only one pointer image I′. According to the one-dimensional numerical ranges of the image windows W11 and W11′, a two-dimensional space S (as shown in FIG. 2 c) can be mapped, and the two-dimensional space S corresponds to the touch surface 10. In other words, a pair of numerical values of the image windows W11 and W11′ corresponds to a two-dimensional coordinate on the two-dimensional space S. For example, (W11,W11′)=(0,0) corresponds to the upper left corner of the two-dimensional space S and (W11,W11′)=(960,960) corresponds to the lower right corner of the two-dimensional space S, but the present invention is not limited thereto. The corresponding relationship between a pair of numerical values of the image windows and a two-dimensional coordinate may be determined according to the actual application.
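For illustration only, the recovery of a two-dimensional coordinate from a pair of one-dimensional window coordinates can be sketched as a simple triangulation. The sketch below assumes each window value maps linearly to a viewing angle across a 90-degree field of view; the function names and the linear-angle assumption are illustrative and not taken from the disclosure, which leaves the exact correspondence to the actual application:

```python
import math

def window_to_angle(w, w_max=960.0, fov=math.pi / 2):
    """Map a 1-D window coordinate onto a viewing angle, assuming the
    numerical range [0, w_max] spans the sensor's field of view linearly."""
    return (w / w_max) * fov

def triangulate(w1, w2, width=960.0):
    """Intersect the rays cast from two corner-mounted sensors.

    Sensor 1 sits at (0, 0) and sweeps from the +x axis toward +y;
    sensor 2 sits at (width, 0) and sweeps from the -x axis toward +y.
    """
    t1 = math.tan(window_to_angle(w1))  # slope of the ray from sensor 1
    t2 = math.tan(window_to_angle(w2))  # slope of the ray from sensor 2
    # Solve y = x*t1 and y = (width - x)*t2 for the common point.
    x = width * t2 / (t1 + t2)
    y = x * t1
    return (x, y)
```

A pointer seen at the middle of both windows, for instance, resolves to the center of the touch surface.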
The positioning method of every embodiment or aspect of the present invention may be implemented by performing two-dimensional coordinate operation and vector arithmetic on the two-dimensional space S.
Referring to FIGS. 2 a to 2 d, FIG. 2 d shows a flow chart of the positioning method for a touch system according to the first embodiment of the present invention, including the steps of: acquiring a first image window with a first image sensor (Step S10); acquiring a second image window with a second image sensor (Step S11); identifying numbers of pointer images in the first image window and the second image window (Step S12); generating a two-dimensional space according to the first image window and the second image window when the first image window and the second image window contain different numbers of pointer images (Step S13); connecting, on the two-dimensional space, a mapping position of the first image sensor with mapping positions of two outermost edges of the pointer image in the first image window to form a first edge line and a second edge line, and connecting, on the two-dimensional space, a mapping position of the second image sensor with mapping positions of two outermost edges of the pointer image in the second image window to form a third edge line and a fourth edge line (Step S14); calculating four first internal bisectors of a quadrilateral formed by the first edge line to the fourth edge line (Step S15); connecting, on the two-dimensional space, a mapping position of the image sensor acquiring more pointer images with mapping positions of a predetermined point of the pointer images in the image window acquired by the same image sensor to form two first position lines (Step S16); defining cross points of the first position lines and the first internal bisectors as first possible positions (Step S17); and comparing the first possible positions with a pair of previous correct positions to obtain a pair of current correct positions (Step S18), wherein the pair of previous correct positions is determined in a previous image capture time (sample time) of the first image sensor and the second image sensor.
In addition, before comparing the first possible positions and a pair of previous correct positions, two possible positions associated with the two internal bisectors of two opposite corners of the quadrilateral may be defined as a pair of first possible positions, and each pair of the first possible positions is then compared with the pair of previous correct positions pair by pair.
The image sensors 11 and 11′ respectively acquire an image window W11 and W11′ at a sample time “t”, and one of the image windows W11 and W11′ includes only one pointer image (Steps S10, S11). In the meantime, it is assumed that both image windows W11 and W11′ respectively acquired by the image sensors 11 and 11′ at a sample time “t−1” include two pointer images. That is, one of the pointers 81 and 82 does not block the other with respect to any image sensor at the sample time “t−1”.
Referring to FIGS. 2 a to 2 c, the processing unit 12 processes the image windows W11 and W11′ so as to identify whether the image windows W11 and W11′ contain an identical number of pointer images (Step S12). When the processing unit 12 identifies that the first image window W11 and the second image window W11′ contain different numbers of pointer images, the processing unit 12 generates a two-dimensional space S (FIG. 2 c) according to the first image window W11 and the second image window W11′. For example, the first image window W11 contains two pointer images I81 and I82 while the second image window W11′ contains only one pointer image I′ (Step S13). Next, the processing unit 12 obtains positions of the pointers 81 and 82 with respect to the touch surface 10 using the positioning method of the present invention. The processing unit 12 now respectively maps the pointers 81 and 82 to the pointer images 81′ and 82′ on the two-dimensional space S. In addition, the first image sensor 11 and the second image sensor 11′ are respectively mapped to mapping positions (0,0) and (960,0) on the two-dimensional space S. It is appreciated that the mapping positions of the image sensors on the two-dimensional space S are determined according to the locations of the image sensors disposed on the touch surface 10.
Next, the processing unit 12 connects a mapping position (0,0) of the first image sensor 11 on the two-dimensional space S with mapping positions of two outermost edges E81 and E82 of the pointer images I81 and I82 in the first image window W11 on the two-dimensional space S so as to form a first edge line L1 and a second edge line L2; and connects a mapping position (960,0) of the second image sensor 11′ on the two-dimensional space S with mapping positions of two outermost edges E81′ and E82′ of the pointer image I′ in the second image window W11′ on the two-dimensional space S so as to form a third edge line L3 and a fourth edge line L4 (Step S14). The processing unit 12 then calculates four first internal bisectors V1 to V4 of a quadrilateral ADBC formed by the first edge line L1 to the fourth edge line L4, wherein the first internal bisector V1 may be obtained by using the vectors AD and AC. Similarly, the internal bisectors V2 to V4 may be obtained in the same way (Step S15).
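The bisector computation of Step S15 can be sketched with elementary vector arithmetic: the sum of the two unit vectors along the sides leaving a corner (e.g. along AD and AC at corner A) points along the internal bisector at that corner. A minimal sketch, with illustrative function names:

```python
import math

def normalize(v):
    """Scale a 2-D vector to unit length."""
    n = math.hypot(v[0], v[1])
    return (v[0] / n, v[1] / n)

def internal_bisector(corner, p, q):
    """Direction of the internal bisector at `corner` of the angle
    formed by the rays corner->p and corner->q (cf. vectors AD and AC)."""
    u = normalize((p[0] - corner[0], p[1] - corner[1]))
    w = normalize((q[0] - corner[0], q[1] - corner[1]))
    # The sum of two unit vectors bisects the angle between them.
    return normalize((u[0] + w[0], u[1] + w[1]))
```

Applying this at each of the four corners of the quadrilateral ADBC yields the four internal bisectors V1 to V4.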
Next, the processing unit 12 connects a mapping position (0,0) of the image sensor acquiring more pointer images (i.e. the first image sensor 11 herein) on the two-dimensional space S with mapping positions (i.e. centers C81 and C82 of the pointer images) of a predetermined point (e.g. center point or center of gravity) of the pointer images in the first image window W11 on the two-dimensional space S to form two first position lines PL1 and PL2 (Step S16). Then, the processing unit 12 defines four cross points of the first position lines PL1, PL2 and the first internal bisectors V1 to V4 as four first possible positions P1 to P4, wherein two first possible positions associated with the two internal bisectors of two opposite corners of the quadrilateral ADBC may be defined as a pair of first possible positions. For example, P1 and P2 may be defined as a pair of first possible positions and P3 and P4 may be defined as another pair of first possible positions (Step S17). Finally, the processing unit 12 compares the first possible positions P1 to P4 with a pair of previous correct positions determined in a previous sample time "t−1" of the first image sensor 11 and the second image sensor 11′ so as to determine a pair of current correct positions (Step S18). For example, in an embodiment, a characteristic such as a distance, a moving direction, a moving speed or the like of the pair of previous correct positions and the two pairs of first possible positions P1, P2 and P3, P4 may be respectively compared. When the pair of previous correct positions has a shortest distance, a closest moving direction or a closest moving speed with respect to one pair of the first possible positions, that pair of first possible positions is identified as the current correct positions, such as P3 and P4 herein. In another embodiment, the four first possible positions P1 to P4 may be respectively compared with the pair of previous correct positions to obtain two current correct positions.
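The cross points of Step S17 can be obtained with a standard line-line intersection, each line being given by a point and a direction vector (e.g. a position line by the sensor's mapping position and the direction toward a pointer-image center, a bisector by its corner and its bisector direction). A minimal sketch, with illustrative names:

```python
def intersect(p0, d0, p1, d1):
    """Intersection of two lines p + t*d; returns None if parallel."""
    det = d0[0] * d1[1] - d0[1] * d1[0]
    if abs(det) < 1e-12:
        return None
    # Solve p0 + t*d0 = p1 + s*d1 for t by Cramer's rule.
    t = ((p1[0] - p0[0]) * d1[1] - (p1[1] - p0[1]) * d1[0]) / det
    return (p0[0] + t * d0[0], p0[1] + t * d0[1])
```

Intersecting each of the two position lines with the appropriate internal bisectors then yields the four possible positions P1 to P4.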
It is appreciated that, some steps shown in FIG. 2 d may be combined together and the steps shown therein are only for illustrating the implementation of the positioning method of the present invention rather than limitations to the present invention. For example, the process of obtaining the quadrilateral in Step S15 may be performed in Step S14.
Please refer to FIG. 3, which shows a schematic diagram of the touch system 1′ according to the second embodiment of the present invention, including a touch surface 10, a first image sensor 11, a second image sensor 11′, a third image sensor 11″ and a processing unit 12. The difference between this embodiment and the first embodiment is that the touch system 1′ includes three image sensors. Similarly, the processing unit 12 processes the image windows acquired by the image sensors to accordingly generate a two-dimensional space. The positioning method of the present invention is implemented by performing coordinate operation and vector arithmetic on the two-dimensional space. It is appreciated that the locations of the first image sensor 11, the second image sensor 11′ and the third image sensor 11″ are not limited to those shown in FIG. 3. For example, the third image sensor 11″ may also be disposed at the lower left corner.
Please refer to FIG. 4 a, which shows a schematic diagram of the positioning method for a touch system according to a first aspect of the second embodiment of the present invention, in which the processing unit 12 generates a two-dimensional space S according to the image windows acquired by all image sensors, and four corners of the two-dimensional space S are assumed as (0,0), (X,0), (0,Y) and (X,Y). This aspect applies to the case in which the numbers of pointer images in the image windows acquired by two image sensors of the touch system 1′ are smaller than that acquired by the rest image sensor. For example, the image windows acquired by the first image sensor 11 and the second image sensor 11′ include only one pointer image and the image window acquired by the third image sensor 11″ includes two pointer images. This aspect is configured to obtain two pairs of possible positions or a pair of current correct positions.
Referring to FIGS. 4 a and 4 b, the image sensors 11, 11′ and 11″ respectively acquire an image window at a sample time "t", and two of the acquired image windows contain only one pointer image (Step S21). In the meantime, it is assumed that the image windows respectively acquired by the image sensors 11, 11′ and 11″ at a sample time "t−1" all include two pointer images. That is, one of the pointers 81 and 82 does not block the other with respect to any image sensor at the sample time "t−1".
The processing unit 12 identifies numbers of pointer images in the image windows (Step S22). When the processing unit 12 identifies that the numbers of pointer images in two of the image windows are smaller than that in the rest image window, the processing unit 12 generates a two-dimensional space S according to the three image windows. For example, the image windows acquired by the first image sensor 11 and the second image sensor 11′ contain only one pointer image while the image window acquired by the third image sensor 11″ contains two pointer images (Step S23). The processing unit 12 now maps the pointers 81 and 82 to the pointer images 81′ and 82′ on the two-dimensional space S. In addition, the first image sensor 11, the second image sensor 11′ and the third image sensor 11″ are respectively mapped to mapping positions (0,0), (X,0) and (X,Y) on the two-dimensional space S. Similarly, the mapping positions of the image sensors on the two-dimensional space S are determined according to the locations of the image sensors disposed on the touch surface 10.
Next, the processing unit 12 connects mapping positions (0,0) and (X,0) of two image sensors acquiring fewer pointer images (i.e. the first image sensor 11 and second image sensor 11′ herein) respectively with mapping positions of two outermost edges of the pointer image in the image windows acquired by the same two image sensors on the two-dimensional space S to form four edge lines L1 to L4 (Step S24). The processing unit 12 calculates four second internal bisectors V1 to V4 of a quadrilateral ADBC formed by the first edge line L1 to the fourth edge line L4 (Step S25). The processing unit 12 connects a mapping position (X,Y) of the image sensor acquiring more pointer images (i.e. the third image sensor 11″) with mapping positions C81 and C82 of a predetermined point (e.g. center point or center of gravity) of the pointer images in the image window acquired by the same image sensor on the two-dimensional space S to form two second position lines PL1 and PL2 (Step S26). The processing unit 12 defines four cross points of the second position lines PL1, PL2 and the second internal bisectors V1 to V4 as four second possible positions P1 to P4 (Step S27).
In this aspect, the processing unit 12 may obtain a pair of current correct positions by comparing the second possible positions P1 to P4 obtained in this aspect with other possible positions, which will be obtained in other aspects hereinafter; or by comparing the second possible positions P1 to P4 with a pair of previous correct positions (as illustrated in the first embodiment) determined in a previous sample time “t−1” of the first image sensor 11 to the third image sensor 11″ (Step S28). For example, two of the second possible positions P1 to P4 having shortest distances, closest moving directions or moving speeds with respect to the pair of previous correct positions are identified as the pair of current correct positions, such as P1 and P2 herein.
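The shortest-distance criterion of Step S28 can be sketched as follows, matching each candidate pair against the pair of previous correct positions under the better of the two possible assignments. This is an illustrative sketch covering only the distance criterion; the disclosure equally allows moving direction or moving speed, and the names are not from the disclosure:

```python
import math

def dist(a, b):
    """Euclidean distance between two 2-D points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def pick_correct_pair(candidate_pairs, previous_pair):
    """Choose the candidate pair whose summed distance to the pair of
    previous correct positions is smallest."""
    def cost(pair):
        (a, b), (p, q) = pair, previous_pair
        # Try both assignments of candidates to previous positions.
        return min(dist(a, p) + dist(b, q), dist(a, q) + dist(b, p))
    return min(candidate_pairs, key=cost)
```

Pointers move little between consecutive sample times, so the pair closest to the previously determined positions is taken as the current correct positions.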
Please refer to FIG. 4 c, which shows a schematic diagram of the positioning method for a touch system 1′ according to a second aspect of the second embodiment of the present invention. This aspect also applies to the case in which the numbers of pointer images in the image windows acquired by two image sensors of the touch system 1′ are smaller than that acquired by the rest image sensor. This aspect is configured to obtain two pairs of possible positions or a pair of current correct positions.
Referring to FIGS. 4 c and 4 d together, the image sensors 11, 11′ and 11″ respectively acquire an image window at a sample time "t", and two of the acquired image windows contain only one pointer image (Step S31). In the meantime, it is assumed that the image windows respectively acquired by the image sensors 11, 11′ and 11″ at a sample time "t−1" all include two pointer images.
The processing unit 12 identifies numbers of pointer images in the image windows (Step S32). When the numbers of pointer images in two of the image windows are identified to be smaller than that in the rest image window, the processing unit 12 generates a two-dimensional space S according to the three image windows (Step S33), wherein the pointers 81 and 82 are respectively mapped to the pointer images 81′ and 82′ on the two-dimensional space S.
Next, the processing unit 12 obtains possible positions (Steps S34 to S37) or a pair of current correct positions (Step S38) according to the image sensor acquiring more pointer images (i.e. the third image sensor 11″ herein) and one of the two image sensors acquiring fewer pointer images (i.e. the first image sensor 11 or the second image sensor 11′ herein) by using the method illustrated in the first embodiment; details thereof were already illustrated in the first embodiment and thus will not be repeated herein.
In this aspect, the processing unit 12 may compare the possible positions obtained according to a current frame with the first possible positions of the first embodiment or the second possible positions of the first aspect of the second embodiment to obtain a pair of current correct positions, such as comparing shortest distances between those possible positions. Or the processing unit 12 may compare the possible positions obtained in this aspect with a pair of previous correct positions determined in a previous sample time “t−1” of the first image sensor 11 to third image sensor 11″ to obtain a pair of current correct positions.
Please refer to FIGS. 5 a to 5 d, which show schematic diagrams of the positioning method for a touch system 1′ according to a third aspect of the second embodiment of the present invention, in which the processing unit 12 generates a two-dimensional space S according to the image windows acquired by all image sensors, and four corners of the two-dimensional space S are assumed as (0,0), (X,0), (0,Y) and (X,Y). This aspect applies to the case in which the numbers of pointer images in the image windows acquired by two image sensors of the touch system 1′ are larger than that acquired by the rest image sensor. For example, the image windows acquired by the first image sensor 11 and the third image sensor 11″ contain two pointer images while the image window acquired by the second image sensor 11′ contains only one pointer image. This aspect is configured to obtain two pairs of possible positions or a pair of current correct positions.
Referring to FIGS. 5 a and 5 e together, the image sensors 11, 11′ and 11″ respectively acquire an image window at a sample time "t", and one of the acquired image windows contains only one pointer image (Step S41). In the meantime, it is assumed that the image windows respectively acquired by the image sensors 11, 11′ and 11″ at a sample time "t−1" all include two pointer images.
The processing unit 12 identifies numbers of pointer images in the image windows (Step S42). When the processing unit 12 identifies that the numbers of pointer images in two of the image windows are larger than that in the rest image window, the processing unit 12 generates a two-dimensional space S according to the three image windows (Step S43). The processing unit 12 respectively maps the pointers 81 and 82 to the pointer images 81′ and 82′ on the two-dimensional space S. In addition, the first image sensor 11, the second image sensor 11′ and the third image sensor 11″ are respectively mapped to mapping positions (0,0), (X,0) and (X,Y) on the two-dimensional space S.
Next, the processing unit 12 connects a mapping position (0,0) or (X,Y) of one of two image sensors acquiring more pointer images (i.e. the first image sensor 11 in FIGS. 5 a and 5 b; the third image sensor 11″ in FIGS. 5 c and 5 d) with mapping positions of two outermost edges of the pointer images in the image window acquired by the same image sensor on the two-dimensional space S to form two edge lines L1 and L2, and connects a mapping position (X,0) of the image sensor acquiring fewer pointer images (i.e. the second image sensor 11′) with mapping positions of two outermost edges of the pointer image in the image window acquired by the same image sensor on the two-dimensional space S to form another two edge lines L3 and L4 (Step S44). Then, the processing unit 12 calculates four third internal bisectors V1 to V4 of a quadrilateral ADBC formed by the four edge lines L1 to L4 (Step S45). The processing unit 12 connects a mapping position (0,0) or (X,Y) of one of two image sensors acquiring more pointer images (i.e. the third image sensor 11″ in FIGS. 5 a and 5 d; the first image sensor 11 in FIGS. 5 b and 5 c) with mapping positions C81 and C82 of a predetermined point (e.g. center point or center of gravity) of the pointer images in the image window acquired by the same image sensor on the two-dimensional space S to form two third position lines PL1 and PL2 (Step S46). Four cross points P1 to P4 of the third position lines PL1, PL2 and the third internal bisectors V1 to V4 are defined as third possible positions (Step S47).
In this aspect, the processing unit 12 may compare the third possible positions P1 to P4 with the first possible positions of the first embodiment, the second possible positions of the first aspect of the second embodiment or the possible positions of the second aspect of the second embodiment to obtain a pair of current correct positions. Or the processing unit 12 may compare the third possible positions with a pair of previous correct positions (as illustrated in the first embodiment) determined in a previous sample time “t−1” of the first image sensor 11 to the third image sensor 11″ so as to obtain a pair of current correct positions (Step S48). It is appreciated that, this aspect may also obtain two pairs of possible positions according to two image sensors acquiring different numbers of pointer images (e.g. the first image sensor 11 and the second image sensor 11′, or the second image sensor 11′ and the third image sensor 11″), and details thereof were already illustrated in the first embodiment and thus will not be repeated herein.
Please refer to FIG. 6 a, which shows a schematic diagram of the positioning method for a touch system 1′ according to a fourth aspect of the second embodiment of the present invention, in which the processing unit 12 generates a two-dimensional space S according to the image windows acquired by all image sensors, and four corners of the two-dimensional space S are assumed as (0,0), (X,0), (0,Y) and (X,Y). This aspect applies to the case in which the numbers of pointer images in the image windows acquired by two image sensors of the touch system 1′ are larger than that acquired by the rest image sensor. For example, the image windows acquired by the first image sensor 11 and the second image sensor 11′ contain two pointer images while the image window acquired by the third image sensor 11″ contains only one pointer image. This aspect is configured to obtain two pairs of possible positions or a pair of current correct positions.
Referring to FIGS. 6 a and 6 b together, the image sensors 11, 11′ and 11″ respectively acquire an image window at a sample time "t", and one of the acquired image windows contains only one pointer image (Step S51). In the meantime, it is assumed that the image windows respectively acquired by the image sensors 11, 11′ and 11″ at a sample time "t−1" all include two pointer images.
The processing unit 12 identifies numbers of pointer images in the image windows (Step S52). When the numbers of pointer images in two of the image windows are identified to be larger than that in the rest image window, the processing unit 12 generates a two-dimensional space S according to the three image windows (Step S53), wherein the pointers 81 and 82 are respectively mapped to the pointer images 81′ and 82′ on the two-dimensional space S; and the first image sensor 11, the second image sensor 11′ and the third image sensor 11″ are respectively mapped to mapping positions (0,0), (X,0) and (X,Y) on the two-dimensional space S.
Next, the processing unit 12 connects mapping positions (0,0) and (X,0) of two image sensors acquiring more pointer images (i.e. the first image sensor 11 and the second image sensor 11′) respectively with mapping positions of a predetermined point (e.g. center point or center of gravity) of the pointer images in the image windows acquired by the same two image sensors on the two-dimensional space S to form a quadrilateral ADBC (Step S54). Four corners of the quadrilateral ADBC are defined as fourth possible positions (Step S55).
In this aspect, the processing unit 12 may compare the fourth possible positions P1 to P4 obtained in this aspect with the possible positions obtained in the first embodiment or in every aspect of the second embodiment to obtain a pair of current correct positions. Or the processing unit 12 may compare the fourth possible positions obtained in this aspect with a pair of previous correct positions (as illustrated in the first embodiment) determined in a previous sample time “t−1” of the first image sensor 11 to the third image sensor 11″ so as to obtain a pair of current correct positions (Step S56). It is appreciated that, this aspect may also obtain two pairs of possible positions according to two image sensors acquiring different numbers of pointer images (e.g. the second image sensor 11′ and the third image sensor 11″), and details thereof were already illustrated in the first embodiment and thus will not be repeated herein.
In summary, the positioning method for a touch system of the present invention may obtain a pair of current correct positions by comparing two pairs of possible positions in a current frame with a pair of previous correct positions in a previous frame, or by comparing two pairs of possible positions in a current frame respectively obtained from the different embodiments or aspects described above. In the present invention, the previous frame is an effective frame previous to the current frame. For example, if the immediately previous frame of the current frame has a poor image quality such that it is identified as an invalid frame, the previous frame of the current frame may be the second or the nth frame previous to the current frame.
In addition, although two pointers are used for illustration in the above embodiments and aspects, the positioning method of the present invention is also applicable to the positioning of more than two pointers.
In addition, in the comparison of possible positions, the present invention is not limited to comparing a pair of possible positions at a time; every possible position may be sequentially and separately compared so as to obtain current correct positions. For example, four possible positions (P1, P2, P3, P4) obtained in any embodiment or aspect above may be respectively compared with another four possible positions (P1′, P2′, P3′, P4′) obtained in another embodiment or aspect. Or the positions, moving speeds and/or moving directions of four possible positions (P1, P2, P3, P4) obtained in any embodiment or aspect above may be compared with those of a pair of previous correct positions so as to obtain two or a pair of current correct positions.
As mentioned above, since conventional touch systems are unable to correctly position a plurality of pointers, the present invention further provides a touch system (FIGS. 2 a and 3) and a positioning method therefor (FIGS. 2 d, 4 b, 4 d, 5 e and 6 b) that can correctly trace and position two-dimensional coordinates of a plurality of pointers with respect to a touch surface.
Although the invention has been explained in relation to its preferred embodiment, this description is not intended to limit the invention. It is to be understood that many other possible modifications and variations can be made by those skilled in the art without departing from the spirit and scope of the invention as hereinafter claimed.
Claims (20)
1. A positioning method for a touch system, the touch system comprising a first image sensor and a second image sensor for acquiring image windows looking across a touch surface and containing images of two pointers operating above the touch surface, the positioning method comprising the steps of:
acquiring a first image window with the first image sensor;
acquiring a second image window with the second image sensor;
identifying numbers of pointer images in the first image window and the second image window;
generating a two-dimensional space according to the first image window and the second image window when the first image window and the second image window contain different numbers of pointer images;
connecting, on the two-dimensional space, a mapping position of the first image sensor with mapping positions of two outermost edges of the pointer image in the first image window and connecting, on the two-dimensional space, a mapping position of the second image sensor with mapping positions of two outermost edges of the pointer image in the second image window to form a quadrilateral;
calculating four first internal bisectors of the quadrilateral; and
connecting, on the two-dimensional space, a mapping position of the image sensor acquiring more pointer images with mapping positions of a predetermined point of the pointer images in the image window acquired by the same image sensor to intersect with the first internal bisectors thereby generating first possible positions.
2. The positioning method as claimed in claim 1 , further comprising:
defining two first possible positions associated with the two first internal bisectors of two opposite corners of the quadrilateral as a pair of first possible positions.
3. The positioning method as claimed in claim 1 , wherein a pair of previous correct positions of the pointers with respect to the touch surface was determined in a previous sample time before the first image sensor acquires the first image window and the second image sensor acquires the second image window, and the positioning method further comprises:
comparing the first possible positions with the pair of previous correct positions to obtain a pair of current correct positions.
4. The positioning method as claimed in claim 3 , wherein the comparison process compares a distance, a moving direction and/or a moving speed of the first possible positions with those of the pair of previous correct positions.
5. The positioning method as claimed in claim 1 , wherein the touch system further comprises a third image sensor for acquiring image windows looking across the touch surface and containing images of the two pointers, and the positioning method further comprises:
acquiring a third image window with the third image sensor;
identifying the numbers of pointer images in the first, second and third image windows;
mapping the third image window to the two-dimensional space when the numbers of pointer images in two of the image windows are smaller than that in the remaining image window;
connecting, on the two-dimensional space, mapping positions of two image sensors acquiring fewer pointer images with mapping positions of two outermost edges of the pointer image in the image windows acquired by the same two image sensors to form a quadrilateral;
calculating four second internal bisectors of the quadrilateral;
connecting, on the two-dimensional space, a mapping position of the image sensor acquiring more pointer images with mapping positions of a predetermined point of the pointer images in the image window acquired by the same image sensor to intersect with the second internal bisectors thereby generating second possible positions; and
comparing the first possible positions with the second possible positions to obtain a pair of current correct positions.
6. The positioning method as claimed in claim 5 , wherein the predetermined point is a center point or a center of weight of the pointer image.
7. The positioning method as claimed in claim 1 , wherein the touch system further comprises a third image sensor for acquiring image windows looking across the touch surface and containing images of the two pointers, and the positioning method further comprises:
acquiring a third image window with the third image sensor;
identifying the numbers of pointer images in the first, second and third image windows;
mapping the third image window to the two-dimensional space when the numbers of pointer images in two of the image windows are larger than that in the remaining image window;
connecting, on the two-dimensional space, a mapping position of one of two image sensors acquiring more pointer images with mapping positions of two outermost edges of the pointer images in the image window acquired by the same image sensor and connecting, on the two-dimensional space, a mapping position of the image sensor acquiring fewer pointer images with mapping positions of two outermost edges of the pointer image in the image window acquired by the same image sensor to form a quadrilateral;
calculating four third internal bisectors of the quadrilateral;
connecting, on the two-dimensional space, a mapping position of one of two image sensors acquiring more pointer images with mapping positions of a predetermined point of the pointer images in the image window acquired by the same image sensor to intersect with the third internal bisectors thereby generating third possible positions; and
comparing the first possible positions with the third possible positions to obtain a pair of current correct positions.
8. The positioning method as claimed in claim 7 , wherein the predetermined point is a center point or a center of weight of the pointer image.
9. The positioning method as claimed in claim 1 , wherein the touch system further comprises a third image sensor for acquiring image windows looking across the touch surface and containing images of the two pointers, and the positioning method further comprises:
acquiring a third image window with the third image sensor;
identifying the numbers of pointer images in the first, second and third image windows;
mapping the third image window to the two-dimensional space when the numbers of pointer images in two of the image windows are larger than that in the remaining image window;
connecting, on the two-dimensional space, mapping positions of two image sensors acquiring more pointer images with mapping positions of a predetermined point of the pointer images in the image windows acquired by the same two image sensors to form a quadrilateral;
defining four corners of the quadrilateral as fourth possible positions; and
comparing the first possible positions with the fourth possible positions to obtain a pair of current correct positions.
10. The positioning method as claimed in claim 9 , wherein the predetermined point is a center point or a center of weight of the pointer image.
11. The positioning method as claimed in claim 1 , wherein the predetermined point is a center point or a center of weight of the pointer image.
12. A positioning method for a touch system, the touch system comprising a first image sensor, a second image sensor and a third image sensor for acquiring image windows looking across a touch surface and containing images of two pointers operating above the touch surface, the positioning method comprising the steps of:
respectively acquiring an image window with each of the three image sensors;
identifying numbers of pointer images in the image windows;
generating a two-dimensional space according to the three image windows;
executing the following steps when the numbers of pointer images in two of the image windows are smaller than that in the remaining image window:
connecting, on the two-dimensional space, mapping positions of two image sensors acquiring fewer pointer images with mapping positions of two outermost edges of the pointer image in the image windows acquired by the same two image sensors to form a quadrilateral;
calculating four second internal bisectors of the quadrilateral; and
connecting, on the two-dimensional space, a mapping position of the image sensor acquiring more pointer images with mapping positions of a predetermined point of the pointer images in the image window acquired by the same image sensor to intersect with the second internal bisectors thereby generating second possible positions; and
executing the following steps when the numbers of pointer images in two of the image windows are larger than that in the remaining image window:
connecting, on the two-dimensional space, a mapping position of one of two image sensors acquiring more pointer images with mapping positions of two outermost edges of the pointer images in the image window acquired by the same image sensor and connecting, on the two-dimensional space, a mapping position of the image sensor acquiring fewer pointer images with mapping positions of two outermost edges of the pointer image in the image window acquired by the same image sensor to form a quadrilateral;
calculating four third internal bisectors of the quadrilateral; and
connecting, on the two-dimensional space, a mapping position of one of two image sensors acquiring more pointer images with mapping positions of a predetermined point of the pointer images in the image window acquired by the same image sensor to intersect with the third internal bisectors thereby generating third possible positions.
13. The positioning method as claimed in claim 12 , wherein a pair of previous correct positions of the pointers with respect to the touch surface was determined in a previous sample time before the image sensors acquire the image windows, and the positioning method further comprises:
comparing the second possible positions with the pair of previous correct positions to obtain a pair of current correct positions when the numbers of pointer images in two of the image windows are smaller than that in the remaining image window; and
comparing the third possible positions with the pair of previous correct positions to obtain a pair of current correct positions when the numbers of pointer images in two of the image windows are larger than that in the remaining image window.
14. The positioning method as claimed in claim 12 , further comprising:
selecting two image sensors acquiring different numbers of pointer images;
connecting, on the two-dimensional space, mapping positions of the two image sensors respectively with mapping positions of two outermost edges of the pointer images in the image windows acquired by the same two image sensors to form a quadrilateral;
calculating four first internal bisectors of the quadrilateral;
connecting, on the two-dimensional space, a mapping position of one of the two image sensors acquiring more pointer images with mapping positions of a predetermined point of the pointer images in the image window acquired by the same image sensor to intersect with the first internal bisectors thereby generating first possible positions.
15. The positioning method as claimed in claim 14 , further comprising:
comparing the second possible positions with the first possible positions to obtain a pair of current correct positions when the numbers of pointer images in two of the image windows are smaller than that in the remaining image window; and
comparing the third possible positions with the first possible positions to obtain a pair of current correct positions when the numbers of pointer images in two of the image windows are larger than that in the remaining image window.
16. The positioning method as claimed in claim 14 , wherein the predetermined point is a center point or a center of weight of the pointer image.
17. The positioning method as claimed in claim 12 , wherein when the numbers of pointer images in two of the image windows are larger than that in the remaining image window, the positioning method further comprises:
connecting, on the two-dimensional space, mapping positions of two image sensors acquiring more pointer images with mapping positions of a predetermined point of the pointer images in the image windows acquired by the same two image sensors to form a quadrilateral;
defining four corners of the quadrilateral as fourth possible positions; and
comparing the third possible positions with the fourth possible positions to obtain a pair of current correct positions.
18. The positioning method as claimed in claim 17 , wherein the predetermined point is a center point or a center of weight of the pointer image.
19. The positioning method as claimed in claim 12 , wherein the predetermined point is a center point or a center of weight of the pointer image.
20. A touch system, comprising:
a touch surface, wherein a plurality of pointers are operated above the touch surface to accordingly control the touch system;
at least two image sensors configured to acquire image windows looking across the touch surface and containing images of the pointers operating above the touch surface; and
a processing unit generating a two-dimensional space according to the image windows acquired by the image sensors, obtaining a quadrilateral and four internal bisectors of the quadrilateral by connecting, on the two-dimensional space, mapping positions of the image sensors with mapping positions of two outermost edges of the pointer image in the image windows acquired by the image sensors, and connecting a mapping position of the image sensor acquiring more pointer images with mapping positions of a predetermined point of the pointer images in the image window acquired by the same image sensor to intersect with the internal bisectors thereby generating possible positions.
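Purely as an illustration of the geometry recited in the claims above, the two primitive operations, computing an internal angle bisector at a corner of the quadrilateral and intersecting it with a ray cast from an image sensor's mapping position through a pointer image's predetermined point, could be sketched as below. This is a hedged sketch assuming a plain 2-D vector representation; the function names are hypothetical and are not taken from the patent.

```python
import math

def bisector_direction(vertex, prev_pt, next_pt):
    """Unit direction of the internal angle bisector at `vertex`,
    where `prev_pt` and `next_pt` are the adjacent quadrilateral
    corners (the bisector halves the angle between the two edges)."""
    def unit(p, q):
        dx, dy = q[0] - p[0], q[1] - p[1]
        n = math.hypot(dx, dy)
        return (dx / n, dy / n)
    u = unit(vertex, prev_pt)
    v = unit(vertex, next_pt)
    bx, by = u[0] + v[0], u[1] + v[1]   # sum of unit edge vectors
    n = math.hypot(bx, by)
    return (bx / n, by / n)

def ray_intersection(p, d, q, e):
    """Intersect rays p + t*d and q + s*e (t, s >= 0); returns the
    intersection point, or None if the rays are parallel or diverge."""
    denom = d[0] * e[1] - d[1] * e[0]   # 2-D cross product d x e
    if abs(denom) < 1e-12:
        return None
    qp = (q[0] - p[0], q[1] - p[1])
    t = (qp[0] * e[1] - qp[1] * e[0]) / denom
    s = (qp[0] * d[1] - qp[1] * d[0]) / denom
    if t < 0 or s < 0:
        return None
    return (p[0] + t * d[0], p[1] + t * d[1])
```

In the claimed construction, each of the four bisectors is such a ray anchored at a quadrilateral corner; intersecting them with the ray from the sensor that sees more pointer images yields the candidate (possible) positions.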
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
TW099119224A TWI408587B (en) | 2010-06-14 | 2010-06-14 | Touch system and positioning method therefor |
TW099119224 | 2010-06-14 | ||
TW99119224A | 2010-06-14 |
Publications (2)
Publication Number | Publication Date |
---|---|
US20110304590A1 (en) | 2011-12-15
US8587563B2 (en) | 2013-11-19
Family
ID=45095871
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/115,468 Expired - Fee Related US8587563B2 (en) | 2010-06-14 | 2011-05-25 | Touch system and positioning method therefor |
Country Status (2)
Country | Link |
---|---|
US (1) | US8587563B2 (en) |
TW (1) | TWI408587B (en) |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9965094B2 (en) | 2011-01-24 | 2018-05-08 | Microsoft Technology Licensing, Llc | Contact geometry tests |
US8988087B2 (en) | 2011-01-24 | 2015-03-24 | Microsoft Technology Licensing, Llc | Touchscreen testing |
US8982061B2 (en) * | 2011-02-12 | 2015-03-17 | Microsoft Technology Licensing, Llc | Angular contact geometry |
US9542092B2 (en) | 2011-02-12 | 2017-01-10 | Microsoft Technology Licensing, Llc | Prediction-based touch contact tracking |
US8773377B2 (en) | 2011-03-04 | 2014-07-08 | Microsoft Corporation | Multi-pass touch contact tracking |
US8913019B2 (en) | 2011-07-14 | 2014-12-16 | Microsoft Corporation | Multi-finger detection and component resolution |
US9378389B2 (en) | 2011-09-09 | 2016-06-28 | Microsoft Technology Licensing, Llc | Shared item account selection |
US9785281B2 (en) | 2011-11-09 | 2017-10-10 | Microsoft Technology Licensing, Llc. | Acoustic touch sensitive testing |
US8914254B2 (en) | 2012-01-31 | 2014-12-16 | Microsoft Corporation | Latency measurement |
KR101898979B1 (en) * | 2012-02-16 | 2018-09-17 | 삼성디스플레이 주식회사 | Method of operating a touch panel, touch panel and display device |
US9317147B2 (en) | 2012-10-24 | 2016-04-19 | Microsoft Technology Licensing, Llc. | Input testing tool |
CN103761012B (en) * | 2013-08-27 | 2016-07-13 | 合肥工业大学 | A Fast Algorithm Applicable to Large Size Infrared Touch Screen |
US10698536B2 (en) * | 2015-07-08 | 2020-06-30 | Wistron Corporation | Method of detecting touch position and touch apparatus thereof |
CN105511788B (en) * | 2015-12-08 | 2019-04-30 | 惠州Tcl移动通信有限公司 | A kind of the picture amplification display method and system of mobile terminal |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5933138A (en) * | 1996-04-30 | 1999-08-03 | Driskell; Stanley W. | Method to assess the physical effort to acquire physical targets |
US20100328243A1 (en) * | 2009-06-30 | 2010-12-30 | E-Pin Optical Industry Co., Ltd | Mems scanning touch panel and coordinate dection method thereof |
US20110080363A1 (en) * | 2009-10-06 | 2011-04-07 | Pixart Imaging Inc. | Touch-control system and touch-sensing method thereof |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002236541A (en) * | 2001-02-09 | 2002-08-23 | Ricoh Co Ltd | Position detecting device, touch panel using the same, portable device, and shape detecting device |
TWI362608B (en) * | 2008-04-01 | 2012-04-21 | Silitek Electronic Guangzhou | Touch panel module and method for determining position of touch point on touch panel |
TW201005606A (en) * | 2008-06-23 | 2010-02-01 | Flatfrog Lab Ab | Detecting the locations of a plurality of objects on a touch surface |
- 2010-06-14: TW application TW099119224A granted as TWI408587B (not active; IP right cessation)
- 2011-05-25: US application US13/115,468 granted as US8587563B2 (not active; Expired - Fee Related)
Also Published As
Publication number | Publication date |
---|---|
TWI408587B (en) | 2013-09-11 |
TW201145118A (en) | 2011-12-16 |
US20110304590A1 (en) | 2011-12-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8587563B2 (en) | Touch system and positioning method therefor | |
KR102702585B1 (en) | Electronic apparatus and Method for controlling the display apparatus thereof | |
US11308347B2 (en) | Method of determining a similarity transformation between first and second coordinates of 3D features | |
JP5054008B2 (en) | Method and circuit for tracking and real-time detection of multiple observer eyes | |
US20200011668A1 (en) | Simultaneous location and mapping (slam) using dual event cameras | |
US12014459B2 (en) | Image processing device, image processing method, and program for forming an accurate three-dimensional map | |
CN109388233B (en) | Transparent display device and control method thereof | |
JP5291605B2 (en) | Camera posture estimation apparatus and camera posture estimation program | |
US9639212B2 (en) | Information processor, processing method, and projection system | |
CN104081307A (en) | Image processing apparatus, image processing method, and program | |
KR102169309B1 (en) | Information processing apparatus and method of controlling the same | |
JP2017106959A (en) | Projection apparatus, projection method, and computer program for projection | |
CN107204044B (en) | Picture display method based on virtual reality and related equipment | |
CN112657176A (en) | Binocular projection man-machine interaction method combined with portrait behavior information | |
CN114706489B (en) | Virtual method, device, equipment and storage medium of input equipment | |
JP2019149119A (en) | Image processing device, image processing method, and program | |
JP2022132063A (en) | Pose determination method and device for augmented reality providing device | |
US11380071B2 (en) | Augmented reality system and display method for anchoring virtual object thereof | |
CN111754571A (en) | Gesture recognition method and device and storage medium thereof | |
US9489077B2 (en) | Optical touch panel system, optical sensing module, and operation method thereof | |
EP3309713B1 (en) | Method and device for interacting with virtual objects | |
CN113793349A (en) | Target detection method and apparatus, computer-readable storage medium, and electronic device | |
JP4221330B2 (en) | Interface method, apparatus, and program | |
CN102298458A (en) | Touch system and positioning method thereof | |
US11789543B2 (en) | Information processing apparatus and information processing method |
Legal Events
Date | Code | Title | Description
---|---|---|---
| AS | Assignment | Owner name: PIXART IMAGING INC., TAIWAN. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: SU, TZUNG MIN; LIN, CHIH HSIN; REEL/FRAME: 026344/0761. Effective date: 20110310
| REMI | Maintenance fee reminder mailed |
| LAPS | Lapse for failure to pay maintenance fees | Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.)
| STCH | Information on status: patent discontinuation | Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362
| FP | Lapsed due to failure to pay maintenance fee | Effective date: 20171119