CN102033656B - Gesture identification method and interaction system using same - Google Patents


Publication number
CN102033656B
Authority
CN
China
Prior art keywords
shadow
image
gesture identification
gesture
image form
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN2009101760720A
Other languages
Chinese (zh)
Other versions
CN102033656A (en)
Inventor
陈信嘉
苏宗敏
吕志宏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Pixart Imaging Inc
Original Assignee
Pixart Imaging Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pixart Imaging Inc filed Critical Pixart Imaging Inc
Priority to CN201210345585.1A priority Critical patent/CN102999158B/en
Priority to CN2009101760720A priority patent/CN102033656B/en
Publication of CN102033656A publication Critical patent/CN102033656A/en
Application granted granted Critical
Publication of CN102033656B publication Critical patent/CN102033656B/en

Landscapes

  • Position Input By Displaying (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a gesture identification method and an interaction system using the same. The method comprises the following steps: continuously capturing image windows with an image sensor; extracting the shielding shadow information of at least one pointer from the image windows; when the shielding shadow information indicates a single pointer, calculating the position coordinate of the pointer relative to the interaction system from the positions of the shielding shadows in the image windows; and when the shielding shadow information indicates a plurality of pointers, performing gesture identification according to the relative relationship of the shielding shadows across successive image windows. The invention also provides the interaction system. Because the gesture identification method and the interaction system using the same do not need to compute a contact-point coordinate for each pointer individually, gesture identification can be carried out correctly even when the pointers occlude one another with respect to the image sensor.

Description

Gesture identification method and interaction system using the same
Technical field
The present invention relates to an interaction system, and more particularly to a gesture identification method and an interaction system using the method.
Background technology
Please refer to Fig. 1, which shows an existing touch system 9. The touch system 9 comprises a touch surface 90 and at least two cameras 91, 92 whose fields of view cover the whole touch surface 90. When a user touches the touch surface 90 with a finger, the cameras 91, 92 capture image windows containing the shadows of the fingertip. A processing unit then calculates the two-dimensional coordinate at which the finger touches the touch surface 90 from the positions of the fingertip shadows in the image windows, and controls a display to perform a corresponding action according to the change of that coordinate.
However, the touch system 9 operates by computing the two-dimensional touch coordinate from the position of the fingertip shadow in each image window. When the user touches the touch surface 90 with a plurality of fingers, the fingers may occlude one another with respect to camera 92, so the shadows of all fingertips do not necessarily appear in the image window captured by camera 92.
For example, in Fig. 1 the user touches the touch surface 90 with fingers 81 and 82. Camera 91 captures an image window W91 containing the shadows I81 and I82 of both fingers. However, because fingers 81 and 82 occlude each other with respect to camera 92, the image window W92 captured by camera 92 contains only one shadow. When the processing unit calculates the two-dimensional touch coordinates from the image windows W91 and W92, it therefore cannot obtain correct coordinates, which leads to misoperation.
To solve this problem, two additional cameras 93 and 94 can be installed at the other two corners of the touch surface 90 to capture two further image windows W93 and W94; the processing unit can then calculate the two-dimensional touch coordinates of fingers 81 and 82 separately from the image windows W91 and W93. However, this solution increases system cost.
Summary of the invention
In view of the above, an object of the present invention is to provide a gesture identification method and an interaction system using the method, so as to solve the problems of the existing touch system described above. The method performs gesture identification according to the relative relationship of the shielding shadows in the successive image windows captured by an image sensor, and thereby solves the problem that contact-point coordinates cannot be calculated correctly when pointers occlude one another.
The present invention provides a gesture identification method for an interaction system. The interaction system comprises an image sensor, a reflecting element and at least one light source, and the image sensor captures image windows containing the shadows formed when at least one pointer blocks the light source and/or the reflecting element. The gesture identification method comprises the following steps: capturing an image window with the image sensor; extracting the shielding shadow information from the image window; determining from the shielding shadow information whether a plurality of pointers is present; and, when a plurality of pointers is determined to be present, performing gesture identification according to the relative relationship of the shadows in successive image windows.
In one embodiment of the gesture identification method of the present invention, the shielding shadow information comprises an average shadow count, an average shadow spacing and/or a maximum shadow spacing.
In one embodiment of the gesture identification method of the present invention, the step of performing gesture identification according to the relative relationship of the shadows in successive image windows further comprises the following steps: comparing one of the average shadow count and the average shadow spacing with a preset threshold; performing up, down, left, right, zoom-in or zoom-out gesture identification when the average shadow count or the average shadow spacing is greater than the preset threshold; performing rotation gesture identification when the average shadow count or the average shadow spacing is smaller than the preset threshold; and updating the picture of an image display according to the identified gesture.
The present invention further provides an interaction system comprising a light-emitting unit, an image sensor and a processing unit. The image sensor continuously captures image windows containing the shadows formed when at least one pointer blocks the light-emitting unit. The processing unit performs gesture identification according to the relative relationship of the shadows in the successive image windows captured by the image sensor.
In one embodiment of the interaction system of the present invention, the light-emitting unit is an active light source or a passive light source. When the light-emitting unit is a passive light source, it comprises a mirror surface and the interaction system further comprises at least one active light source.
The present invention further provides a gesture identification method for an interaction system comprising a light-emitting unit and an image sensor, the image sensor capturing image windows containing the shadows formed when a plurality of pointers blocks the light-emitting unit. The gesture identification method comprises the following steps: capturing image windows continuously with the image sensor; and performing gesture identification according to the relative relationship of the plurality of shadows in the successive image windows.
In one embodiment of the gesture identification method of the present invention, the relative relationship of the shadows comprises the change of the average shadow spacing, the change of the maximum shadow spacing and the direction of displacement of the shadows.
According to the gesture identification method of the present invention and the interaction system using the method, in a first mode the interaction system controls the motion of a cursor according to the change of the two-dimensional coordinate of a pointer; in a second mode the interaction system updates the picture of a display according to the relative relationship of the shadows of a plurality of pointers, for example by scrolling the picture (scroll), zooming an object in or out (zoom in/out), rotating an object (rotation), switching pictures or showing a menu. Because the gesture identification method and the interaction system using the method do not need to calculate a contact-point coordinate for each of the plurality of pointers, gesture identification can be performed correctly even when the pointers occlude one another with respect to the image sensor.
Description of drawings
Fig. 1 is a schematic view of an existing touch system;
Fig. 2a is a perspective view of an interaction system according to an embodiment of the invention;
Fig. 2b is a schematic operational view of the interaction system of the first embodiment of the invention;
Fig. 3a is a schematic view of cursor control performed with the interaction system of the first embodiment of the invention;
Fig. 3b is a schematic view of the image window captured by the image sensor of Fig. 3a;
Fig. 4a is a flowchart of the gesture identification method of the interaction system according to an embodiment of the invention;
Fig. 4b is a flowchart of the second mode in Fig. 4a;
Figs. 5a~5d are schematic views of identifying right/left/down/up gestures in the gesture identification method of the interaction system of the first embodiment of the invention;
Figs. 5e~5f are schematic views of identifying zoom-in/zoom-out gestures in the gesture identification method of the interaction system of the first embodiment of the invention;
Figs. 5g~5h are schematic views of identifying rotation gestures in the gesture identification method of the interaction system of the first embodiment of the invention;
Fig. 6a is a schematic operational view of the interaction system of the second embodiment of the invention;
Figs. 6b~6c are schematic views of the image windows captured by the image sensor of Fig. 6a;
Figs. 7a~7b are schematic views of identifying right/left gestures in the gesture identification method of the interaction system of the second embodiment of the invention;
Figs. 7c~7d are schematic views of identifying zoom-in/zoom-out gestures in the gesture identification method of the interaction system of the second embodiment of the invention; and
Figs. 7e~7f are schematic views of identifying rotation gestures in the gesture identification method of the interaction system of the second embodiment of the invention.
The main element symbols are described as follows:
10, 10' interaction system
100 panel
100a first side of the panel
100b second side of the panel
100c third side of the panel
100d fourth side of the panel
100d' fourth mirror image
100s surface of the panel
11 light-emitting unit
11a mirror surface
121 first light source
121' second mirror image
122 second light source
122' third mirror image
13, 13' image sensor
14 processing unit
15 image display
150 display screen
151 cursor
20, 20', 20" image windows
IS virtual-image space
RS real-image space
T81, T contact point of the pointer
T81', T' contact point of the first mirror image
A81 angle between the contact point and the third side
A81' angle between the first mirror image and the third side
R81 first sensing path
R81' second sensing path
I81, I82 first shadows
I81' second shadow
I1, I2 first shadows
I1', I2' second shadows
I81", I82" shadows
G1 first shadow group
G2 second shadow group
C center line
Sav average shadow spacing
8 user
81, 82 fingers
9 touch system
90 touch surface
91~94 cameras
W91~W94 image windows
S1~S5 steps
Embodiment
In order to make the above and other objects, features and advantages of the present invention more apparent, embodiments are described in detail below with reference to the accompanying drawings. It should be noted that, in the description of the present invention, identical members are denoted by identical symbols.
Please refer to Figs. 2a and 2b together. Fig. 2a is a perspective view of the interaction system 10 of an embodiment of the invention, and Fig. 2b is a schematic operational view of the interaction system 10 of the first embodiment of the invention. The interaction system 10 comprises a panel 100, a light-emitting unit 11, a first light source 121, a second light source 122, an image sensor 13, a processing unit 14 and an image display 15.
The panel 100 comprises a first side 100a, a second side 100b, a third side 100c, a fourth side 100d and a surface 100s. Embodiments of the panel 100 include a whiteboard and a touch screen.
The light-emitting unit 11 is arranged on the surface 100s at the first side 100a of the panel 100. The light-emitting unit 11 may be an active light source or a passive light source. When the light-emitting unit 11 is an active light source, it emits light by itself and is preferably a linear light source. When the light-emitting unit 11 is a passive light source, it reflects the light emitted by other light sources (for example the first light source 121 and the second light source 122); in this case the light-emitting unit 11 comprises a mirror surface 11a facing the third side 100c of the panel, and the mirror surface 11a may be formed of any suitable material. The first light source 121 is arranged on the surface 100s at the second side 100b of the panel and preferably emits light toward the fourth side 100d of the panel. The second light source 122 is arranged on the surface 100s at the third side 100c of the panel and preferably emits light toward the first side 100a of the panel. The first light source 121 and the second light source 122 are preferably active light sources, for example linear light sources, but are not limited thereto.
As shown in Fig. 2b, when the light-emitting unit 11 is a passive light source (for example a reflecting element), the first light source 121 forms a second mirror image 121' with respect to the mirror surface 11a, the second light source 122 forms a third mirror image 122' with respect to the mirror surface 11a, and the fourth side 100d of the panel forms a fourth mirror image 100d' with respect to the mirror surface 11a. The light-emitting unit 11, the first light source 121, the second light source 122 and the fourth side 100d of the panel together define a real-image space RS; the light-emitting unit 11, the second mirror image 121', the third mirror image 122' and the fourth mirror image 100d' together define a virtual-image space IS.
The image sensor 13 is arranged at a corner of the panel 100; in this embodiment it is arranged at the corner where the third side 100c and the fourth side 100d of the panel meet. The field of view VA of the image sensor 13 covers at least the real-image space RS and the virtual-image space IS, so that the image sensor captures image windows containing the real-image space RS, the virtual-image space IS and the shadow of a pointer, for example a finger 81, located in the real-image space RS. In one embodiment the image sensor 13 comprises a lens (or a lens set) for adjusting its field of view VA so that it can capture a complete image of the real-image space RS and the virtual-image space IS. Embodiments of the image sensor 13 include, but are not limited to, a CCD image sensor and a CMOS image sensor.
The processing unit 14 is coupled to the image sensor 13 and processes the images captured by the image sensor 13 to identify one or more pointers. When only one pointer is identified, the processing unit calculates the two-dimensional coordinate at which the pointer touches the panel surface 100s from the positions of the shadows of the pointer in the image window. When a plurality of pointers is identified, the processing unit 14 performs gesture identification according to the relative relationship of the shadows of the pointers in the image windows, and controls the image display to update its picture according to the identified gesture; the detailed calculation is described later.
The image display 15 is coupled to the processing unit 14, and a cursor 151 can be shown on the display screen 150 of the image display 15, as shown in Fig. 2b. The processing unit 14 controls the motion of the cursor 151 on the display screen 150 according to the change of the calculated two-dimensional coordinate at which the pointer touches the panel surface 100s, or updates the picture of the display screen 150 according to the relative relationship of a plurality of shadows in the image windows captured by the image sensor 13, for example by scrolling the picture, zooming an object, rotating an object, switching pictures or showing a menu.
For clarity of illustration, the panel 100 is drawn separately from the image display 15 in Figs. 2a and 2b, but this is not a limitation of the invention; in other embodiments the panel 100 may be integrated on the display screen 150 of the image display 15. Furthermore, when the panel 100 is a touch screen, the display screen 150 of the image display 15 may itself serve as the panel 100, and the light-emitting unit 11, the first light source 121, the second light source 122 and the image sensor 13 are then arranged on the surface of the display screen 150.
It should be understood that, although in Figs. 2a and 2b the panel 100 is shown as rectangular and the light-emitting unit 11, the first light source 121 and the second light source 122 are shown arranged orthogonally on three sides of the panel 100, this is only one embodiment of the invention and is not a limitation. In other embodiments the panel 100 may have other shapes, and the light-emitting unit 11, the first light source 121, the second light source 122 and the image sensor 13 may be arranged on the panel 100 in other spatial relationships. The spirit of the invention is to capture image windows with the image sensor 13, to perform gesture identification according to the displacement of the shadows in the image windows and the relative relationship between the shadows, and to update the picture of the image display according to the identified gesture.
The first embodiment
Please refer to Figs. 3a and 3b. Fig. 3a shows cursor control performed with the interaction system 10 of the first embodiment of the invention; Fig. 3b shows the image window 20 captured by the image sensor 13 in Fig. 3a. As shown, when the tip of a pointer, for example a finger 81, touches the panel surface 100s inside the real-image space RS at a contact point denoted T81, the pointer forms a first mirror image with respect to the mirror surface 11a of the light-emitting unit 11 (a reflecting element in this embodiment) inside the virtual-image space IS, with its contact point denoted T81'. The image sensor 13 captures the image of the pointer tip along a first sensing path R81 to form a first shadow I81 in the image window 20, and captures the image of the tip of the first mirror image along a second sensing path R81' to form a second shadow I81' in the image window 20, as shown in Fig. 3b. In this embodiment the processing unit 14 stores in advance the relationship between the one-dimensional position of a shadow in the image window 20 and the angle between the corresponding sensing path and the third side 100c of the panel. Therefore, when the image sensor 13 captures the images of the pointer tip and of its first mirror image to form the image window 20, the processing unit 14 obtains a first angle A81 and a second angle A81' from the one-dimensional positions of the shadows in the image window 20. The processing unit 14 then obtains the two-dimensional coordinate of the contact point T81 at which the pointer touches the panel surface 100s by trigonometry.
For example, in one embodiment a rectangular coordinate system is defined on the panel surface 100s, with the third side 100c as the X axis, the fourth side 100d as the Y axis and the position of the image sensor 13 as the origin. The coordinate of the contact point T81 in this coordinate system can then be expressed as (distance from the fourth side 100d, distance from the third side 100c). In addition, the processing unit 14 stores in advance the distance D1 between the first side 100a and the third side 100c of the panel. The processing unit 14 can thereby obtain the two-dimensional coordinate of the contact point T81 at which the pointer 81 touches the panel surface 100s as follows: (a) the processing unit 14 obtains the first angle A81 between the first sensing path R81 and the third side 100c of the panel, and the second angle A81' between the second sensing path R81' and the third side 100c of the panel; (b) the distance D2 between the contact point T81 of the pointer 81 and the fourth side 100d of the panel is obtained from the equation D2 = 2D1/(tanA81 + tanA81'); (c) the y coordinate of the contact point T81 is obtained as D2 × tanA81. The two-dimensional coordinate of the contact point T81 is therefore (D2, D2 × tanA81).
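The triangulation of steps (a)~(c) can be sketched as follows; this is a minimal illustration of the equation D2 = 2D1/(tanA81 + tanA81') from the text, and the function and parameter names are illustrative, not from the patent.

```python
import math

def touch_point(a1_deg, a2_deg, d1):
    """Recover the 2-D touch coordinate from the two shadow angles.

    a1_deg: first angle A81 between the first sensing path and the
            third side of the panel, in degrees.
    a2_deg: second angle A81' between the second sensing path and the
            third side of the panel, in degrees.
    d1:     distance D1 between the first side (mirror) and third side.
    Returns (x, y) with the image sensor at the origin, the third side
    as the X axis and the fourth side as the Y axis.
    """
    t1 = math.tan(math.radians(a1_deg))
    t2 = math.tan(math.radians(a2_deg))
    x = 2.0 * d1 / (t1 + t2)   # D2 = 2*D1 / (tanA81 + tanA81')
    y = x * t1                 # y = D2 * tanA81
    return (x, y)
```

For instance, with both angles at 45 degrees and D1 = 10, the contact point evaluates to approximately (10, 10), consistent with the pointer lying on the diagonal through the sensor.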
As shown in Figs. 3a and 3b, the interaction system 10 of the first embodiment of the invention operates in two modes. When the processing unit 14 determines from the image window 20 captured by the image sensor 13 that only one pointer touches the panel surface 100s, the interaction system 10 is controlled to work in the first mode. In the first mode the image sensor 13 captures images continuously at a sampling frequency, and the processing unit 14 calculates the two-dimensional coordinate of the contact point T81 at which the pointer 81 touches the panel surface 100s from the one-dimensional positions of the shadows of the pointer 81 in the image window 20, and controls the motion of the cursor 151 on the image display 15 according to the change of that coordinate. For example, when the pointer 81 moves toward the fourth side 100d of the panel, the contact point T81' of the first mirror image simultaneously moves toward the fourth mirror image 100d'. At this moment the shadow I81 corresponding to the pointer and the shadow I81' corresponding to the first mirror image both move toward the left side of the image window 20. The processing unit 14 then calculates the coordinate of the contact point T81 from the positions of the shadows I81 and I81' in each image window 20, and, according to the change of that coordinate, controls the cursor 151 on the image display 15 to move toward the left of the display screen 150. It should be understood that the relationships between the moving direction of the pointer, the moving directions of the shadows I81 and I81' in the image window 20 and the moving direction of the cursor 151 are not limited to those disclosed in this embodiment; depending on how the software processes the images, the shadows I81 and I81' and the cursor 151 may move in the direction opposite to that of the pointer.
When the processing unit 14 determines from the image window 20 captured by the image sensor 13 that a plurality of pointers touches the panel surface 100s, the interaction system 10 is controlled to work in the second mode. In the second mode the processing unit 14 no longer calculates the two-dimensional coordinate of each contact point from each image window 20 one by one; instead it judges the gesture only from the relative relationship of the plurality of shadows in the image windows 20, and updates the picture of the display screen 150 of the image display 15 according to the judged gesture, for example by scrolling the picture, zooming an object in or out, rotating an object, switching pictures or showing a menu.
Please refer to Fig. 4a, which shows a flowchart of the gesture identification method of the present invention. The method comprises the following steps: capturing an image window with the image sensor (step S1); extracting the shielding shadow information from the image window (step S2); determining from the shielding shadow information whether a plurality of pointers is present (step S3); if not, entering the first mode (step S4); if so, entering the second mode (step S5).
Please refer to Fig. 4b, which shows an embodiment of the second mode of step S5 in Fig. 4a, in which the shielding shadow information comprises an average shadow count, an average shadow spacing and a maximum shadow spacing. The second mode comprises the following steps: judging whether one of the average shadow count and the average shadow spacing is greater than a preset threshold (step S51); if so, performing rotation gesture identification according to the relative relationship of the shadows in successive image windows (step S52); if not, performing up/down/left/right/zoom-in/zoom-out gesture identification according to the relative relationship of the shadows in successive image windows (step S53); and updating the picture of the image display according to the identified gesture (step S54). It should be understood that Fig. 4b may instead be configured to perform rotation gesture identification when the average shadow count is smaller than the preset threshold and translation gesture identification when it is greater, or to perform rotation gesture identification when the average shadow spacing is smaller than the preset threshold and translation gesture identification when it is greater.
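The branch of step S51 can be sketched as below. This is a minimal sketch, assuming that either statistic exceeding the threshold selects rotation identification, as in one configuration of Fig. 4b described in the text (the text notes the comparison may also be inverted); the names are illustrative, not from the patent.

```python
def second_mode_branch(avg_shadow_count, avg_spacing, threshold):
    """Step S51: compare the shadow statistics against the preset
    threshold to select the gesture family of the second mode."""
    if avg_shadow_count > threshold or avg_spacing > threshold:
        return "rotation"          # step S52: rotation identification
    return "translation_or_zoom"   # step S53: up/down/left/right/zoom
```

With the example threshold of 6 mentioned later in the text, eight shadows would route to rotation identification and four shadows to translation/zoom identification.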
In another embodiment, the second mode comprises only one step: performing rotation gesture identification according to the relative relationship of the shadows in successive image windows. In yet another embodiment, the second mode comprises only one step: performing up/down/left/right/zoom-in/zoom-out gesture identification according to the relative relationship of the shadows in successive image windows. That is, the second mode of the interaction system may perform only one of rotation gesture identification and up/down/left/right/zoom-in/zoom-out gesture identification.
Please refer to Figs. 3a~4b together. When the interaction system 10 of the first embodiment of the invention performs gesture identification, the image sensor 13 first captures an image to form an image window 20, which contains at least one shadow I81 corresponding to the pointer contact point T81 and at least one shadow I81' corresponding to the contact point T81' of the first mirror image (step S1). Next, the processing unit 14 extracts the shielding shadow information from the image window 20, for example the average shadow count, the average shadow spacing and the maximum shadow spacing, for use in the subsequent steps (step S2). The processing unit 14 then judges from the extracted shielding shadow information whether a plurality of pointers is present in the image window 20 (step S3). Each pointer produces at most two shadows in the image window 20, so when more than two shadows appear in the image window 20, a plurality of pointers is present.
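The decision of step S3 follows directly from the at-most-two-shadows rule above and can be sketched as follows; the function name and return labels are illustrative, not from the patent.

```python
def detect_mode(num_shadows):
    """Step S3: each pointer casts at most two shadows in the image
    window (one for the pointer itself, one for its mirror image),
    so more than two shadows implies a plurality of pointers."""
    return "second_mode" if num_shadows > 2 else "first_mode"
```

Two shadows (one pointer plus its mirror image) therefore keep the system in the first mode, while three or more shadows switch it to the second mode.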
When only one pointer is judged to be present, as shown in Figs. 3a and 3b, the processing unit 14 controls the interaction system 10 to enter the first mode (step S4). In the first mode the processing unit 14 calculates the two-dimensional coordinate of the contact point (for example T81) at which the pointer touches the panel surface 100s from the one-dimensional positions of the shadows (for example I81 and I81') in the image window 20 captured by the image sensor 13, and controls the motion of the cursor 151 on the image display 15 according to the change of that coordinate.
When the processing unit 14 determines from the shielding shadow information that a plurality of pointers touches the panel surface 100s, as shown in Figs. 5a to 5h, it controls the interaction system 10 to enter the second mode (step S5). In the second mode the processing unit 14 performs gesture identification according to the relative relationship between the shadows in the image windows 20, and controls the image display 15 to update the picture shown on the display screen 150 according to the identified gesture, for example by scrolling the picture, zooming an object or window in or out, rotating an object, switching pictures or showing a menu.
Embodiments of the second mode are now described with reference to Figs. 5a to 5h, in which the light-emitting unit 11 is taken as a passive light source by way of example. It should be understood that Figs. 5a to 5h are only exemplary and are not intended to limit the present invention.
Frame-scrolling gesture: referring to Figs. 5a to 5d, when the processing unit 14 determines from the shielding-shadow information in the image window 20 captured by the image sensor 13 that a plurality of contact points (e.g. T1 and T2) are present, the second mode is entered. The processing unit 14 then determines whether the average shadow number in the image window 20 is greater than a preset threshold, e.g. 6, or whether the average shadow spacing Sav is greater than a preset threshold (step S51). When the average shadow number is not greater than the preset threshold, or the average shadow spacing Sav is not greater than the preset threshold, translation gesture recognition is performed (step S53).
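A minimal sketch of the step-S51 decision, assuming the count threshold of 6 given in the text; the spacing threshold is an invented value, since the disclosure does not fix one:

```python
# Hypothetical step-S51 dispatch: compare the shadow statistics against preset
# thresholds to pick a recognizer. spacing_threshold is an invented example.
def choose_recognizer(avg_shadow_number, avg_shadow_spacing,
                      number_threshold=6, spacing_threshold=40.0):
    if avg_shadow_number > number_threshold or avg_shadow_spacing > spacing_threshold:
        return "rotation"            # step S52
    return "translation-or-zoom"     # step S53
```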
In translation gesture recognition, the shielding shadows are first grouped, for example using the center line C of the image window 20 as the grouping criterion, to distinguish a first shadow group G1 from a second shadow group G2, wherein the first shadow group G1 may be the real-image or the virtual-image shadow group and the second shadow group G2 is correspondingly the other.
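The grouping step might look like the following sketch, where each shadow is represented by its center coordinate in the window; the names are assumptions:

```python
# Hypothetical grouping about the center line C of the image window. Which
# group holds the real-image shadows and which the mirror-image shadows is
# deliberately left unresolved, as in the text.
def split_by_centerline(shadow_centers, window_width):
    mid = window_width / 2.0
    group1 = [c for c in shadow_centers if c < mid]   # first shadow group G1
    group2 = [c for c in shadow_centers if c >= mid]  # second shadow group G2
    return group1, group2
```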
For example, in Figs. 5a to 5d, when the average shadow number in the image window 20 is not greater than the preset threshold, or the average shadow spacing Sav is not greater than the preset threshold, the processing unit 14 performs up/down/left/right gesture recognition (step S53). In Fig. 5a, the processing unit 14 recognizes that both the first shadow group G1 and the second shadow group G2 in the image window 20 move rightward, and therefore judges that the user is scrolling the frame rightward (or leftward), so the display screen 150 of the image display 15 is controlled to update the frame accordingly (step S54).
Likewise, in Fig. 5b, the processing unit 14 recognizes that both the first shadow group G1 and the second shadow group G2 in the image window 20 move leftward, and therefore judges that the user is scrolling the frame leftward (or rightward), so the display screen 150 of the image display 15 is controlled to update the frame accordingly (step S54).
In Fig. 5c, the processing unit 14 recognizes that the average shadow spacing between the first shadow group G1 and the second shadow group G2 in the image window 20 gradually increases, and therefore judges that the user is scrolling the frame downward (or upward), so the display screen 150 is controlled to update the frame accordingly (step S54).
In Fig. 5d, the processing unit 14 recognizes that the average shadow spacing between the first shadow group G1 and the second shadow group G2 in the image window 20 gradually decreases, and therefore judges that the user is scrolling the frame upward (or downward), so the display screen 150 is controlled to update the frame accordingly (step S54).
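The four cases of Figs. 5a to 5d can be summarized as one classifier over the shadow-group centers of two consecutive image windows; this is a hedged sketch with invented names and tolerance, and the slash-labelled results reflect the real/mirror ambiguity noted in the text:

```python
def _mean(xs):
    return sum(xs) / len(xs)

# Hypothetical translation-gesture classifier (step S53): compare the two
# shadow groups' centers between a previous and a current image window.
def classify_scroll(prev_g1, prev_g2, curr_g1, curr_g2, tol=1.0):
    d1 = _mean(curr_g1) - _mean(prev_g1)
    d2 = _mean(curr_g2) - _mean(prev_g2)
    gap_prev = abs(_mean(prev_g2) - _mean(prev_g1))
    gap_curr = abs(_mean(curr_g2) - _mean(curr_g1))
    if d1 > tol and d2 > tol:        # Fig. 5a: both groups move rightward
        return "scroll right/left"
    if d1 < -tol and d2 < -tol:      # Fig. 5b: both groups move leftward
        return "scroll left/right"
    if gap_curr > gap_prev + tol:    # Fig. 5c: group spacing grows
        return "scroll down/up"
    if gap_curr < gap_prev - tol:    # Fig. 5d: group spacing shrinks
        return "scroll up/down"
    return "none"
```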
In another embodiment, when the processing unit 14 determines from the shielding-shadow information in the image window 20 captured by the image sensor 13 that a plurality of contact points are present, translation gesture recognition is performed directly (step S53) without performing step S51.
Object zoom gesture: before the zooming step, the user first forms a single contact point on the panel surface 100s to enter the first mode, in which the cursor 151 is moved onto the object O, as shown in Fig. 3a. The user then forms a plurality of contact points on the panel surface 100s, as in Figs. 5e to 5f. When the processing unit 14 determines from the shielding-shadow information in the image window 20 captured by the image sensor 13 that a plurality of contact points (e.g. T1, T2) are present, the second mode is entered.
The processing unit 14 then determines whether either the average shadow number or the average shadow spacing Sav in the image window 20 is greater than the preset threshold (step S51). When the average shadow number is not greater than the preset threshold, or the average shadow spacing Sav is not greater than the preset threshold, the shielding shadows are first grouped, for example using the center line C of the image window 20 as the grouping criterion, to distinguish a first shadow group G1 from a second shadow group G2.
For example, in Figs. 5e to 5f, when the average shadow number in the image window 20 is not greater than the preset threshold, or the average shadow spacing Sav is not greater than the preset threshold, the processing unit 14 performs zoom-in/zoom-out gesture recognition (step S53). In Fig. 5e, the processing unit 14 recognizes that the average shadow spacing between the first shadow group G1 and the second shadow group G2 in the image window 20 remains substantially unchanged while the maximum shadow spacing increases, and therefore judges that the user is zooming the object in (or out), so the display screen 150 of the image display 15 is controlled to update the frame accordingly (step S54).
In Fig. 5f, the processing unit 14 recognizes that the average shadow spacing between the first shadow group G1 and the second shadow group G2 in the image window 20 remains substantially unchanged while the maximum shadow spacing decreases, and therefore judges that the user is zooming the object out (or in), so the display screen 150 of the image display 15 is controlled to update the frame accordingly (step S54).
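A sketch of the zoom decision of Figs. 5e and 5f over the spacing statistics of two consecutive windows; the tolerance value is an assumption:

```python
# Hypothetical zoom classifier: a zoom gesture shows a roughly unchanged
# average shadow spacing while the maximum shadow spacing grows or shrinks.
def classify_zoom(prev_avg, prev_max, curr_avg, curr_max, tol=1.0):
    if abs(curr_avg - prev_avg) > tol:
        return "none"              # average spacing changed: not a zoom
    if curr_max > prev_max + tol:
        return "zoom in/out"       # Fig. 5e: maximum spacing increases
    if curr_max < prev_max - tol:
        return "zoom out/in"       # Fig. 5f: maximum spacing decreases
    return "none"
```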
In another embodiment, when the processing unit 14 determines from the shielding-shadow information in the image window 20 captured by the image sensor 13 that a plurality of contact points are present, zoom-in/zoom-out gesture recognition is performed directly (step S53) without performing step S51.
In addition, it is possible to enter the second mode directly without first entering the first mode before zoom gesture recognition; for example, when the panel 100 is a touch panel, the user can point at the object directly, so the second mode can be entered directly for zoom gesture recognition.
Object rotation gesture: before the rotation step, the user first forms a single contact point on the panel surface 100s to enter the first mode, in which the cursor 151 is moved onto the object O, as shown in Fig. 3a. The user then forms a plurality of contact points T on the panel surface 100s, as shown in Figs. 5g to 5h; when the processing unit 14 determines from the shielding-shadow information in the image window 20 captured by the image sensor 13 that a plurality of contact points T are present, the second mode is entered.
The processing unit 14 then determines whether either the average shadow number or the average shadow spacing Sav in the image window 20 is greater than the preset threshold (step S51). When the average shadow number or the average shadow spacing Sav is greater than the preset threshold, the shielding shadows are not grouped, and the direction of rotation is judged directly from the numbers of shielding shadows displaced toward the two sides of the image window 20.
For example, in Fig. 5g, the processing unit 14 recognizes that the number of shielding shadows moving rightward in the image window 20 is greater than the number moving leftward, and therefore judges that the user is rotating the object clockwise (or counterclockwise), so the image display 15 is controlled to update the frame accordingly (step S54).
In Fig. 5h, the processing unit 14 recognizes that the number of shielding shadows moving leftward in the image window 20 is greater than the number moving rightward, and therefore judges that the user is rotating the object counterclockwise (or clockwise), so the image display 15 is controlled to update the frame accordingly (step S54).
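The rotation decision of Figs. 5g and 5h counts the shadows displaced toward each side of the window; a hypothetical sketch that pairs shadows by order between two consecutive windows, with the slash labels reflecting that which sense maps to clockwise depends on the system geometry:

```python
# Hypothetical rotation classifier: the majority displacement direction of
# the shielding shadows between consecutive windows gives the rotation sense.
def classify_rotation(prev_centers, curr_centers, tol=0.5):
    moved_right = sum(1 for p, c in zip(prev_centers, curr_centers) if c - p > tol)
    moved_left = sum(1 for p, c in zip(prev_centers, curr_centers) if p - c > tol)
    if moved_right > moved_left:     # Fig. 5g
        return "rotate cw/ccw"
    if moved_left > moved_right:     # Fig. 5h
        return "rotate ccw/cw"
    return "none"
```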
In another embodiment, when the processing unit 14 determines from the shielding-shadow information in the image window 20 captured by the image sensor 13 that a plurality of contact points are present, rotation gesture recognition is performed directly (step S52) without performing step S51.
In addition, it is possible to enter the second mode directly without first entering the first mode before rotation gesture recognition; for example, when the panel 100 is a touch panel, the user can point at the object directly, so the second mode can be entered directly for rotation gesture recognition.
Frame-switching gesture: the user directly forms a plurality of contact points T on the panel surface 100s, as shown in Figs. 5g to 5h; when the processing unit 14 determines from the shielding-shadow information in the image window 20 captured by the image sensor 13 that a plurality of contact points T are present, the second mode is entered directly.
The processing unit 14 determines directly, from the numbers of shielding shadows displaced toward the two sides of the image window 20, whether to switch frames. For example, in Figs. 5g and 5h, the processing unit 14 recognizes that the number of shielding shadows moving rightward (or leftward) in the image window 20 is greater than the number moving leftward (or rightward), and therefore judges that the user is performing a frame-switching gesture, so the image display 15 is controlled to perform the corresponding frame-switching function.
Menu-display gesture: the user directly forms a plurality of contact points T on the panel surface 100s, as shown in Figs. 5g to 5h; when the processing unit 14 determines from the shielding-shadow information in the image window 20 captured by the image sensor 13 that a plurality of contact points T are present, the second mode is entered directly.
The processing unit 14 determines directly, from the numbers of shielding shadows displaced toward the two sides of the image window 20, whether to display a menu. For example, in Fig. 5g the processing unit 14 recognizes that the number of shielding shadows moving rightward in the image window 20 is greater than the number moving leftward, and in Fig. 5h that the number moving leftward is greater than the number moving rightward; in either case it judges that the user is performing a menu-display gesture, so the image display 15 is controlled to display the corresponding menu.
The second embodiment
Referring to Figs. 6a to 6c, Fig. 6a is an operational schematic of the interaction system 10' of the second embodiment of the invention, and Figs. 6b and 6c are schematic views of the image windows 20' and 20'' captured by the image sensors 13 and 13' of Fig. 6a, respectively. In this embodiment, the interaction system 10' comprises a light-emitting unit 11, a first light source 121, a second light source 122 and image sensors 13 and 13'. The light-emitting unit 11 is an active light source and preferably emits light toward the third side 100c of the panel. The light-emitting unit 11, the first light source 121 and the second light source 122 are disposed on the first side 100a, the second side 100b and the fourth side 100d of the panel, respectively. Accordingly, the image window 20' captured by the image sensor 13 contains only the shielding shadows I81 and I82 of the referent tips, and the image window 20'' captured by the image sensor 13' contains only the shielding shadows I81'' and I82'' of the referent tips.
In this embodiment, the processing unit 14 performs gesture recognition according to the correlations of the plurality of shielding shadows in the image windows 20' and 20'' captured by the image sensors 13 and 13'.
Frame-scrolling gesture: referring to Figs. 7a to 7b, when the processing unit 14 determines from the shielding-shadow information in the image windows 20' and 20'' captured by the image sensors 13 and 13' that a plurality of contact points (e.g. T1 and T2) are present, the second mode is entered. The processing unit 14 then determines whether the average shadow number in the image windows 20' and 20'' is greater than a preset threshold, or whether the average shadow spacing in the image windows 20' and 20'' is greater than a preset threshold (step S51).
When the average shadow number in the image windows 20' and 20'' is not greater than the preset threshold, or the average shadow spacing is not greater than the preset threshold, the processing unit 14 performs scrolling gesture recognition (step S53). For example, in Figs. 7a and 7b, the processing unit 14 recognizes that all the shielding shadows in the image windows 20' and 20'' move rightward or leftward, respectively, and therefore judges that the user is performing a frame-scrolling gesture, so the display screen 150 of the image display 15 is controlled to update the frame accordingly (step S54).
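With two image sensors, the same decision can require agreement between both windows; a hypothetical sketch, where the direction labels are sensor-space and their mapping to the on-screen scrolling direction is fixed by the system geometry:

```python
# Hypothetical two-sensor variant: each sensor contributes its own image
# window, and a scroll is reported only when the shadows in every window
# shift coherently in the same direction.
def two_sensor_scroll(prev_windows, curr_windows, tol=1.0):
    def shift(prev_centers, curr_centers):
        d = sum(curr_centers) / len(curr_centers) - sum(prev_centers) / len(prev_centers)
        return 1 if d > tol else (-1 if d < -tol else 0)
    shifts = [shift(p, c) for p, c in zip(prev_windows, curr_windows)]
    if all(s == 1 for s in shifts):
        return "scroll-right"
    if all(s == -1 for s in shifts):
        return "scroll-left"
    return "none"
```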
In another embodiment, when the processing unit 14 determines from the shielding-shadow information in the image windows 20' and 20'' captured by the image sensors 13 and 13' that a plurality of contact points are present, translation gesture recognition is performed directly (step S53) without performing step S51.
Object zoom gesture: before the zooming step, the user first moves the cursor onto the object to be zoomed. The user then forms a plurality of contact points (e.g. T1 and T2) on the panel surface 100s, as shown in Figs. 7c to 7d. When the processing unit 14 determines from the shielding-shadow information in the image windows 20' and 20'' captured by the image sensors 13 and 13' that a plurality of contact points are present, the second mode is entered.
The processing unit 14 then determines whether the average shadow number in the image windows 20' and 20'' is greater than a preset threshold, or whether the average shadow spacing is greater than a preset threshold (step S51). When the average shadow number is not greater than the preset threshold, or the average shadow spacing is not greater than the preset threshold, the processing unit 14 performs zoom-in/zoom-out gesture recognition (step S53). For example, in Figs. 7c and 7d, the processing unit 14 recognizes that the average spacing of the shielding shadows in the image windows 20' and 20'' increases or decreases, respectively, and therefore judges that the user is zooming the object in or out, so the image display 15 is controlled to update the frame accordingly (step S54).
In another embodiment, when the processing unit 14 determines from the shielding-shadow information in the image windows 20' and 20'' captured by the image sensors 13 and 13' that a plurality of contact points are present, zoom-in/zoom-out gesture recognition is performed directly (step S53) without performing step S51.
In addition, it is possible to enter the second mode directly without first entering the first mode before zoom gesture recognition; for example, when the panel 100 is a touch panel, the user can point at the object directly, so the second mode can be entered directly for zoom gesture recognition.
Object rotation gesture: before the rotation step, the user first moves the cursor onto the object to be rotated. The user then forms a plurality of contact points T on the panel surface 100s, as shown in Figs. 7e to 7f; when the processing unit 14 determines from the shielding-shadow information in the image windows 20' and 20'' captured by the image sensors 13 and 13' that a plurality of contact points T are present, the second mode is entered.
The processing unit 14 then determines whether the average shadow number in the image windows 20' and 20'' is greater than a preset threshold, or whether the average shadow spacing is greater than a preset threshold (step S51). When the average shadow number or the average shadow spacing is greater than the preset threshold, the direction of rotation is judged from the numbers of shielding shadows moving toward the two sides of the image windows 20' and 20''.
For example, in Fig. 7e the processing unit 14 recognizes that the number of shielding shadows moving rightward in each of the image windows 20' and 20'' is greater than the number moving leftward, and in Fig. 7f that the number moving leftward is greater than the number moving rightward; it therefore judges that the user is rotating the object clockwise or counterclockwise, so the image display 15 is controlled to update the frame accordingly (step S54).
In another embodiment, when the processing unit 14 determines from the shielding-shadow information in the image windows 20' and 20'' captured by the image sensors 13 and 13' that a plurality of contact points are present, rotation gesture recognition is performed directly (step S52) without performing step S51.
In addition, it is possible to enter the second mode directly without first entering the first mode before rotation gesture recognition.
Frame-switching or menu-display gesture: the user directly forms a plurality of contact points T on the panel surface 100s, as shown in Figs. 7e to 7f; when the processing unit 14 determines from the shielding-shadow information in the image windows 20' and 20'' captured by the image sensors 13 and 13' that a plurality of contact points T are present, the second mode is entered directly.
The processing unit 14 determines directly, from the numbers of shielding shadows displaced toward the two sides of the image windows 20' and 20'', whether to switch frames or display a menu. For example, in Figs. 7e and 7f, the processing unit 14 recognizes that the number of shielding shadows moving rightward (or leftward) in the image windows 20' and 20'' is greater than the number moving leftward (or rightward), and therefore judges that the user is performing a frame-switching or menu-display gesture, so the image display 15 is controlled to switch frames or display the menu accordingly.
It should be understood that the control functions corresponding to the correlations of the shielding shadows in the second mode are not limited to those disclosed in Figs. 5a to 5h and Figs. 7a to 7f. The spirit of the invention is to perform gesture recognition according to the correlations of the shielding shadows in the image windows without calculating the position coordinate of each contact point individually, thereby avoiding the situation in which the contact-point coordinates cannot be calculated because the referents shield one another.
As mentioned above, because conventional touch systems perform gesture recognition from changes of the two-dimensional coordinates of contact points, the contact-point coordinates easily become incalculable when the referents shield one another. The invention instead uses the correlations of the shielding shadows recognizable in the image windows as the basis of gesture recognition, so gesture recognition can be performed correctly with only one image sensor, which also reduces system cost.
Although the invention has been disclosed through the above embodiments, they are not intended to limit the invention; any person skilled in the art may make various changes and modifications without departing from the spirit and scope of the invention. The scope of protection of the invention should therefore be determined by the appended claims.

Claims (9)

1. A gesture recognition method of an interaction system, the interaction system comprising an image sensor, a reflecting element and at least one light source, the image sensor being configured to capture an image window containing shielding shadows formed by at least one referent blocking the light source and/or the reflecting element, the gesture recognition method comprising the steps of:
capturing an image window with the image sensor;
extracting the shielding-shadow information in the image window, wherein the shielding-shadow information comprises an average shadow number, an average shadow spacing and a maximum shadow spacing;
determining from the shielding-shadow information whether a plurality of referents are present; and
when a plurality of referents are determined to be present, performing gesture recognition only according to the correlations of the shielding shadows in successive image windows, without calculating the two-dimensional position coordinate of each referent from the image windows one by one, wherein the step of performing gesture recognition according to the correlations of the shielding shadows in successive image windows further comprises the steps of:
comparing one of the average shadow number and the average shadow spacing with a preset threshold;
performing up, down, left, right, zoom-in or zoom-out gesture recognition when the average shadow number or the average shadow spacing is smaller than the preset threshold;
performing rotation gesture recognition when the average shadow number or the average shadow spacing is greater than the preset threshold; and
updating a display frame of an image display according to the recognized gesture;
wherein, when the gesture is recognized as up, down, left or right, the display frame is updated by scrolling; when the gesture is recognized as zoom-in or zoom-out, the display frame is updated by zooming an object in or out; when the gesture is recognized as rotation, the display frame is updated by rotating an object; and when the gesture is recognized as unequal numbers of shielding shadows moving leftward and rightward, the display frame is updated by switching frames or displaying a menu.
2. The gesture recognition method according to claim 1, further comprising the steps of:
when a single referent is determined to be present, calculating the position coordinate of the referent with respect to the interaction system from the positions of the shielding shadows in the image window; and
controlling the movement of a cursor on an image display according to the change of the position coordinate of the referent in successive image windows.
3. The gesture recognition method according to claim 1, wherein, when the average shadow number or the average shadow spacing is smaller than the preset threshold, the gesture recognition method further comprises the steps of:
grouping the shielding shadows about the center line of the image window into a real-image shadow group and a virtual-image shadow group;
wherein left or right gesture recognition is performed when the real-image shadow group and the virtual-image shadow group in the image window are displaced in the same direction; up or down gesture recognition is performed when the average shadow spacing between the real-image shadow group and the virtual-image shadow group in the image window changes; and zoom-in or zoom-out gesture recognition is performed when the maximum shadow spacing between the real-image shadow group and the virtual-image shadow group in the image window changes.
4. The gesture recognition method according to claim 1, wherein, in the step of performing rotation gesture recognition, the direction of rotation is recognized as the direction in which the larger number of shielding shadows are displaced in the same way in successive image windows.
5. The gesture recognition method according to claim 1, wherein the step of performing gesture recognition according to the correlations of the shielding shadows in successive image windows is performing up, down, left, right, zoom-in or zoom-out gesture recognition.
6. The gesture recognition method according to claim 1, wherein the step of performing gesture recognition according to the correlations of the shielding shadows in successive image windows is performing rotation gesture recognition.
7. An interaction system, comprising:
a light-emitting unit, which is a passive light source comprising a mirror surface;
at least one active light source;
an image sensor for successively capturing image windows containing shielding shadows formed by at least one referent blocking the light-emitting unit, each image window containing real-image shielding shadows and virtual-image shielding shadows;
an image display; and
a processing unit coupled to the image display, the processing unit calculating, when only one referent is found in the image window captured by the image sensor, the two-dimensional position coordinate of the referent from the one-dimensional positions of its shielding shadows in the image window; and, when a plurality of referents are found in the image window captured by the image sensor, performing gesture recognition only according to the correlations of the shielding shadows in the successive image windows captured by the image sensor, without calculating the two-dimensional position coordinate of each referent from the image windows one by one, wherein the processing unit performs the gesture recognition according to the correlations between the real-image shielding shadows and the virtual-image shielding shadows in the successive image windows, and updates the display frame of the image display according to the recognized gesture.
8. A gesture recognition method of an interaction system, the interaction system comprising a light-emitting unit and an image sensor, the light-emitting unit being a passive light source with a mirror surface, the image sensor being configured to capture an image window containing shielding shadows formed by a plurality of referents blocking the light-emitting unit, the image window containing real-image shielding shadows and virtual-image shielding shadows, the gesture recognition method comprising the steps of:
successively capturing image windows with the image sensor;
grouping the shielding shadows about the center line of the image window into a real-image shadow group and a virtual-image shadow group; and
performing gesture recognition only according to the correlations of the plurality of shielding shadows in successive image windows, without calculating the two-dimensional position coordinate of each referent from the image windows one by one, wherein the correlations of the shielding shadows comprise changes of the average shadow spacing and of the maximum shadow spacing between the real-image shielding shadows and the virtual-image shielding shadows, and their direction of displacement.
9. The gesture recognition method according to claim 8, wherein the correlations of the shielding shadows comprise the change of the average shadow spacing of the shielding shadows, the change of the maximum shadow spacing and the direction of displacement, and the gesture recognition method further comprises the step of updating a display frame of an image display according to the recognized gesture.
CN2009101760720A 2009-09-28 2009-09-28 Gesture identification method and interaction system using same Expired - Fee Related CN102033656B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201210345585.1A CN102999158B (en) 2009-09-28 2009-09-28 The gesture identification of interaction systems and interaction systems
CN2009101760720A CN102033656B (en) 2009-09-28 2009-09-28 Gesture identification method and interaction system using same

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2009101760720A CN102033656B (en) 2009-09-28 2009-09-28 Gesture identification method and interaction system using same

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN201210345585.1A Division CN102999158B (en) 2009-09-28 2009-09-28 The gesture identification of interaction systems and interaction systems

Publications (2)

Publication Number Publication Date
CN102033656A CN102033656A (en) 2011-04-27
CN102033656B true CN102033656B (en) 2013-01-09

Family

ID=43886626

Family Applications (2)

Application Number Title Priority Date Filing Date
CN2009101760720A Expired - Fee Related CN102033656B (en) 2009-09-28 2009-09-28 Gesture identification method and interaction system using same
CN201210345585.1A Expired - Fee Related CN102999158B (en) 2009-09-28 2009-09-28 The gesture identification of interaction systems and interaction systems

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201210345585.1A Expired - Fee Related CN102999158B (en) 2009-09-28 2009-09-28 The gesture identification of interaction systems and interaction systems

Country Status (1)

Country Link
CN (2) CN102033656B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI435251B (en) * 2011-08-09 2014-04-21 Wistron Corp Method and system for estimating the tendency of pressure change on a touch panel
CN102968177B (en) * 2011-08-31 2015-10-28 敦宏科技股份有限公司 Gesture method for sensing
TWI448918B (en) 2011-09-09 2014-08-11 Pixart Imaging Inc Optical panel touch system
CN103019457A (en) * 2011-09-23 2013-04-03 原相科技股份有限公司 Optical touch system
CN103425227B (en) * 2012-05-17 2016-01-13 原相科技股份有限公司 The sensing module of tool electricity-saving function and method thereof
TWI502413B (en) * 2013-10-07 2015-10-01 Wistron Corp Optical touch device and gesture detecting method thereof

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101482772A (en) * 2008-01-07 2009-07-15 纬创资通股份有限公司 Electronic device and its operation method
CN101498985A (en) * 2008-01-30 2009-08-05 义隆电子股份有限公司 Touch control panel for multi-object operation and its use method

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0736603A (en) * 1993-07-16 1995-02-07 Wacom Co Ltd Two-dimensional position detector
JP2003173237A (en) * 2001-09-28 2003-06-20 Ricoh Co Ltd Information input-output system, program and storage medium
CN100478862C (en) * 2005-10-05 2009-04-15 索尼株式会社 Display apparatus and display method
US20090044988A1 (en) * 2007-08-17 2009-02-19 Egalax_Empia Technology Inc. Device and method for determining function represented by continuous relative motion between/among multitouch inputs on signal shielding-based position acquisition type touch panel


Also Published As

Publication number Publication date
CN102999158B (en) 2015-12-02
CN102999158A (en) 2013-03-27
CN102033656A (en) 2011-04-27

Similar Documents

Publication Publication Date Title
TWI412975B (en) Gesture recognition method and interactive system using the same
TWI501121B (en) Gesture recognition method and touch system incorporating the same
US8436832B2 (en) Multi-touch system and driving method thereof
JP5412227B2 (en) Video display device and display control method thereof
US7411575B2 (en) Gesture recognition method and touch system incorporating the same
CN102033656B (en) Gesture identification method and interaction system using same
TWI393037B (en) Optical touch displaying device and operating method thereof
JP2006209563A (en) Interface device
CN102341814A (en) Gesture recognition method and interactive input system employing same
CN103150020A (en) Three-dimensional finger control operation method and system
CN102880304A (en) Character inputting method and device for portable device
CN111527468A (en) Air-to-air interaction method, device and equipment
CN103609093A (en) Interactive mobile phone
US11023050B2 (en) Display control device, display control method, and computer program
CN101989150A (en) Gesture recognition method and touch system using same
CN102033657B (en) Touch system, method for sensing height of referent and method for sensing coordinates of referent
WO2021004413A1 (en) Handheld input device and blanking control method and apparatus for indication icon of handheld input device
US9489077B2 (en) Optical touch panel system, optical sensing module, and operation method thereof
WO2014181587A1 (en) Portable terminal device
Ebrahimpour-Komleh et al. Design of an interactive whiteboard system using computer vision techniques
TW201516850A (en) Electric apparatus
JP2010237979A (en) Information terminal equipment
JP2009140314A (en) Display device
JP2007206777A (en) Information input system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20130109

Termination date: 20200928