CN102999158B - Gesture recognition method of an interactive system and interactive system - Google Patents
- Publication number
- CN102999158B (application CN201210345585.1A)
- Authority
- CN
- China
- Prior art keywords
- shadow
- image window
- image
- gesture
- gesture recognition
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Position Input By Displaying (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The invention provides a gesture recognition method for an interactive system, and an interactive system. The interactive system comprises an image sensor, a mirror surface and at least one light source; the image sensor captures image windows containing the shadows formed by at least one pointer blocking the light source and/or the mirror surface. The gesture recognition method comprises the following steps: capturing an image window with the image sensor; extracting the shadow information in the image window; entering a first mode when the image window is judged to contain two shadows, and calculating the position coordinate of the pointer relative to the interactive system according to the positions of the shadows in the image window; and entering a second mode when the image window is judged to contain more than two shadows, and performing gesture recognition according to the interrelation of the shadows in successive image windows.
Description
This application is a divisional application of the invention patent application filed on September 28, 2009, with application number 200910176072.0 and entitled "Gesture recognition method and interactive system using the same".
Technical field
The present invention relates to an interactive system, and more particularly to a gesture recognition method and an interactive system using the method.
Background art
Please refer to Fig. 1, which shows an existing touch system 9. The touch system 9 comprises a touch surface 90 and at least two cameras 91, 92, whose fields of view cover the whole touch surface 90. When a user touches the touch surface 90 with a finger, the cameras 91, 92 capture image windows containing the shadow of the fingertip. A processing unit can then calculate the two-dimensional coordinate at which the finger touches the touch surface 90 according to the positions of the fingertip shadows in the image windows, and control a display to perform a corresponding action as that coordinate changes.
However, the touch system 9 works by calculating the two-dimensional coordinates of the touch points from the positions of the fingertip shadows in each image window. When the user touches the touch surface 90 with several fingers, the fingers may occlude one another with respect to camera 92, so the image window captured by camera 92 does not necessarily contain the shadows of all the fingertips.
For example, in Fig. 1 the user touches the touch surface 90 with fingers 81 and 82. Camera 91 captures an image window W91 containing the shadows I81 and I82 of fingers 81 and 82; however, because fingers 81 and 82 occlude each other with respect to camera 92, the image window W92 captured by camera 92 contains only one shadow. When the processing unit calculates the two-dimensional coordinates at which the fingers touch the touch surface 90 from the image windows W91 and W92, it may therefore fail to calculate the coordinates correctly, causing erroneous operation.
To solve this problem, two additional cameras 93 and 94 can be arranged at two other corners of the touch surface 90 to capture two further image windows W93 and W94; the processing unit can then calculate the two-dimensional coordinates at which fingers 81 and 82 touch the touch surface 90 from the image windows W91 and W93, respectively. However, this solution increases the system cost.
Summary of the invention
The invention provides a gesture recognition method for an interactive system. The interactive system comprises an image sensor, a mirror surface and at least one light source; the image sensor captures image windows containing the shadows formed by at least one pointer blocking the light source and/or the mirror surface. The gesture recognition method comprises the following steps: capturing an image window with the image sensor; extracting the shadow information in the image window; entering a first mode when the image window is judged to contain two shadows, and calculating the position coordinate of the pointer relative to the interactive system according to the positions of the shadows in the image window; and entering a second mode when the image window is judged to contain more than two shadows, and performing gesture recognition according to the interrelation of the shadows in successive image windows.
The invention also provides an interactive system comprising: a mirror surface; an image sensor for continuously capturing image windows containing the shadows formed by at least one pointer blocking the mirror surface; and a processing unit that enters a first mode when the image window captured by the image sensor is judged to contain two shadows, calculating the two-dimensional coordinate of the pointer from the one-dimensional positions of its shadows in the image window, and enters a second mode when the image window captured by the image sensor is judged to contain more than two shadows, performing gesture recognition according to the interrelation of the shadows in successive image windows.
In view of this, the object of the invention is to propose a gesture recognition method and an interactive system using the method that solve the problems of the existing touch system described above. Because gesture recognition is performed according to the interrelation of the shadows in the successive image windows captured by the image sensor, the method avoids the problem that touch-point coordinates cannot be calculated correctly when the pointers occlude one another.
The invention proposes a gesture recognition method for an interactive system comprising an image sensor, a reflecting element and at least one light source, wherein the image sensor captures image windows containing the shadows formed by at least one pointer blocking the light source and/or the reflecting element. The gesture recognition method comprises the following steps: capturing an image window with the image sensor; extracting the shadow information in the image window; determining whether multiple pointers are present according to the shadow information; and, when multiple pointers are judged to be present, performing gesture recognition according to the interrelation of the shadows in successive image windows.
In one embodiment of the gesture recognition method of the invention, the shadow information comprises an average shadow number, an average shadow spacing and/or a maximum shadow spacing.
In one embodiment of the gesture recognition method of the invention, the step of performing gesture recognition according to the interrelation of the shadows in successive image windows further comprises the following steps: comparing one of the average shadow number and the average shadow spacing with a predetermined threshold; performing rotation gesture recognition when the average shadow number or the average shadow spacing is greater than the predetermined threshold; performing up, down, left, right, zoom-in or zoom-out gesture recognition when the average shadow number or the average shadow spacing is smaller than the predetermined threshold; and updating the display picture of an image display according to the recognized gesture.
The invention further proposes an interactive system comprising a light-emitting unit, an image sensor and a processing unit. The image sensor continuously captures image windows containing the shadows formed by at least one pointer blocking the light-emitting unit. The processing unit performs gesture recognition according to the interrelation of the shadows in the successive image windows captured by the image sensor.
In one embodiment of the interactive system of the invention, the light-emitting unit is an active light source or a passive light source. When the light-emitting unit is a passive light source, it comprises a mirror surface and the interactive system further comprises at least one active light source.
The invention further proposes a gesture recognition method for an interactive system comprising a light-emitting unit and an image sensor, wherein the image sensor captures image windows containing the shadows formed by multiple pointers blocking the light-emitting unit. The gesture recognition method comprises the following steps: continuously capturing image windows with the image sensor; and performing gesture recognition according to the interrelation of the multiple shadows in successive image windows.
In one embodiment of the gesture recognition method of the invention, the interrelation of the shadows comprises the change of the average shadow spacing, the change of the maximum shadow spacing and the direction of displacement of the shadows.
According to the gesture recognition method of the invention and the interactive system using the method, in the first mode the interactive system controls the motion of a cursor according to the change of the two-dimensional coordinate of a pointer; in the second mode the interactive system updates the display picture of a display according to the interrelation of the shadows of multiple pointers, for example scrolling the display picture (scroll), zooming an object in or out (zoom in/out), rotating an object (rotation), switching pictures or showing a menu. Because the gesture recognition method of the invention and the interactive system using it do not need to calculate the touch-point coordinates of the multiple pointers individually, gesture recognition can be performed correctly even when the pointers occlude one another with respect to the image sensor.
Brief description of the drawings
Fig. 1 shows a schematic diagram of an existing touch system;
Fig. 2a shows a perspective view of the interactive system of an embodiment of the invention;
Fig. 2b shows an operational schematic diagram of the interactive system of the first embodiment of the invention;
Fig. 3a shows a schematic diagram of cursor manipulation using the interactive system of the first embodiment of the invention;
Fig. 3b shows a schematic diagram of the image window captured by the image sensor of Fig. 3a;
Fig. 4a shows a flowchart of the gesture recognition method of the interactive system of an embodiment of the invention;
Fig. 4b shows a flowchart of the second mode in Fig. 4a;
Figs. 5a~5d show schematic diagrams of recognizing right/left/down/up gestures in the gesture recognition method of the interactive system of the first embodiment of the invention;
Figs. 5e~5f show schematic diagrams of recognizing zoom-in/zoom-out gestures in the gesture recognition method of the interactive system of the first embodiment of the invention;
Figs. 5g~5h show schematic diagrams of recognizing rotation gestures in the gesture recognition method of the interactive system of the first embodiment of the invention;
Fig. 6a shows an operational schematic diagram of the interactive system of the second embodiment of the invention;
Figs. 6b~6c show schematic diagrams of the image windows captured by the image sensors of Fig. 6a;
Figs. 7a~7b show schematic diagrams of recognizing right/left gestures in the gesture recognition method of the interactive system of the second embodiment of the invention;
Figs. 7c~7d show schematic diagrams of recognizing zoom-in/zoom-out gestures in the gesture recognition method of the interactive system of the second embodiment of the invention; and
Figs. 7e~7f show schematic diagrams of recognizing rotation gestures in the gesture recognition method of the interactive system of the second embodiment of the invention.
Description of reference numerals
10, 10' interactive system; 100 panel
100a first side of the panel; 100b second side of the panel
100c third side of the panel; 100d fourth side of the panel
100d' fourth mirror image; 100s surface of the panel
11 light-emitting unit; 11a mirror surface
121 first light source; 121' second mirror image
122 second light source; 122' third mirror image
13, 13' image sensor; 14 processing unit
15 image display; 150 display screen
151 cursor; 20, 20', 20'' image windows
IS virtual image space; RS real image space
T81, T pointer contact points; T81', T' contact points of the first mirror image
A81 angle between the contact point and the third side; A81' angle between the first mirror image and the third side
R81 first sensing path; R81' second sensing path
I81, I82 first shadows; I81' second shadow
I1, I2 first shadows; I1', I2' second shadows
I81'', I82'' shadows; G1 first shadow group
G2 second shadow group; C center line
Sav average shadow spacing; 8 user
81, 82 fingers; 9 touch system
91~94 cameras; 90 touch surface
W91~W94 image windows; S1~S5 steps
Detailed description of the embodiments
To make the above and other objects, features and advantages of the invention more apparent, a detailed description is given below with reference to the accompanying drawings. It should be noted that throughout the description of the invention identical components are denoted by identical reference symbols.
Please refer to Fig. 2a and Fig. 2b. Fig. 2a shows a perspective view of the interactive system 10 of an embodiment of the invention, and Fig. 2b shows an operational schematic diagram of the interactive system 10 of the first embodiment of the invention. The interactive system 10 comprises a panel 100, a light-emitting unit 11, a first light source 121, a second light source 122, an image sensor 13, a processing unit 14 and an image display 15.
The panel 100 comprises a first side 100a, a second side 100b, a third side 100c, a fourth side 100d and a surface 100s. Embodiments of the panel 100 include a whiteboard or a touch screen.
The light-emitting unit 11 is arranged on the surface 100s along the first side 100a of the panel 100. The light-emitting unit 11 can be an active light source or a passive light source. When it is an active light source it emits light itself, and it is preferably a linear light source. When it is a passive light source it reflects the light emitted by other light sources (for example the first light source 121 and the second light source 122); in this case the light-emitting unit 11 comprises a mirror surface 11a facing the third side 100c of the panel, and the mirror surface 11a can be formed from any suitable material. The first light source 121 is arranged on the surface 100s along the second side 100b of the panel and preferably emits light toward the fourth side 100d; the second light source 122 is arranged on the surface 100s along the third side 100c of the panel and preferably emits light toward the first side 100a. The first light source 121 and the second light source 122 are preferably active light sources, for example linear light sources, but are not limited thereto.
Please refer to Fig. 2b. When the light-emitting unit 11 is a passive light source (for example a reflecting element), the first light source 121 forms a second mirror image 121' with respect to the mirror surface 11a, the second light source 122 forms a third mirror image 122' with respect to the mirror surface 11a, and the fourth side 100d of the panel forms a fourth mirror image 100d' with respect to the mirror surface 11a. The light-emitting unit 11, the first light source 121, the second light source 122 and the fourth side 100d of the panel together define a real image space RS; the light-emitting unit 11, the second mirror image 121', the third mirror image 122' and the fourth mirror image 100d' together define a virtual image space IS.
The image sensor 13 is arranged at a corner of the panel 100; in this embodiment it is arranged at the corner where the third side 100c meets the fourth side 100d. The field of view VA of the image sensor 13 covers at least the real image space RS and the virtual image space IS, so as to capture image windows containing the real image space RS, the virtual image space IS and the shadow of a pointer, for example a finger 81, located in the real image space RS. In one embodiment the image sensor 13 comprises a lens (or lens set) for adjusting its field of view VA so that it can capture a complete image of the real image space RS and the virtual image space IS. Embodiments of the image sensor 13 include, but are not limited to, a CCD image sensor and a CMOS image sensor.
The processing unit 14 is coupled to the image sensor 13 and processes the images it captures to recognize one or more pointers. When only one pointer is recognized, the processing unit calculates the two-dimensional coordinate at which the pointer touches the panel surface 100s according to the positions of the pointer's shadows in the image window. When multiple pointers are recognized, the processing unit 14 performs gesture recognition according to the interrelation of the pointers' shadows in the image windows and controls the image display according to the recognized gesture to update its display picture; the detailed calculation is explained later.
The image display 15 is coupled to the processing unit 14, and its display screen 150 can show a cursor 151, as shown in Fig. 2b. The processing unit 14 either controls the motion of the cursor 151 on the display screen 150 according to the change of the calculated two-dimensional coordinate at which the pointer touches the panel surface 100s, or updates the display picture of the display screen 150 according to the interrelation of the multiple shadows in the image windows captured by the image sensor 13, for example scrolling the display picture, zooming an object, rotating an object, switching pictures or showing a menu.
For clarity, in Fig. 2a and Fig. 2b the panel 100 is shown separate from the image display 15, but this is not intended to limit the invention; in other embodiments the panel 100 may be integrated on the display screen 150 of the image display 15. Moreover, when the panel 100 is a touch screen, the display screen 150 of the image display 15 can itself serve as the panel 100, with the light-emitting unit 11, the first light source 121, the second light source 122 and the image sensor 13 arranged on the surface of the display screen 150.
It is understood that although in Fig. 2a and Fig. 2b the panel 100 is shown as a rectangle and the light-emitting unit 11, the first light source 121 and the second light source 122 are shown arranged orthogonally on three sides of the panel 100, this is only one embodiment of the invention and is not intended to limit it. In other embodiments the panel 100 can be made in other shapes, and the light-emitting unit 11, the first light source 121, the second light source 122 and the image sensor 13 can be arranged on the panel 100 in other spatial relations. The spirit of the invention is to capture image windows with the image sensor 13, perform gesture recognition according to the displacement of the shadows in the image windows and the interrelation of the shadows with one another, and update the display picture of the image display according to the recognized gesture.
First embodiment
Please refer to Fig. 3a and Fig. 3b. Fig. 3a shows a schematic diagram of cursor manipulation using the interactive system 10 of the first embodiment of the invention; Fig. 3b shows a schematic diagram of the image window 20 captured by the image sensor 13 in Fig. 3a. As shown, when a pointer, for example the tip of finger 81, touches the panel surface 100s in the real image space RS at a contact point denoted T81, the pointer forms a first mirror image in the virtual image space IS with respect to the mirror surface 11a of the light-emitting unit 11 (a reflecting element in this embodiment), with a contact point denoted T81'. The image sensor 13 captures the image of the pointer tip along a first sensing path R81, forming a first shadow I81 in the image window 20, and captures the image of the tip of the first mirror image along a second sensing path R81', forming a second shadow I81' in the image window 20, as shown in Fig. 3b. In this embodiment the processing unit 14 stores in advance the relation between the one-dimensional position of a shadow in the image window 20 and the angle between the corresponding sensing path and the third side 100c of the panel. Therefore, when the image sensor 13 captures the images of the pointer tip and its first mirror image to form the image window 20, the processing unit 14 can obtain a first angle A81 and a second angle A81' from the one-dimensional positions of the shadows in the image window 20. Then, using trigonometry, the processing unit 14 can obtain the two-dimensional coordinate of the contact point T81 at which the pointer touches the panel surface 100s.
For example, in one embodiment a rectangular coordinate system is defined on the panel surface 100s, with the third side 100c as its X axis, the fourth side 100d as its Y axis and the position of the image sensor 13 as its origin. The coordinate of the contact point T81 in this coordinate system can then be expressed as (distance from the fourth side 100d, distance from the third side 100c). In addition, the distance D1 between the first side 100a and the third side 100c of the panel is stored in the processing unit 14 in advance. The processing unit 14 can then obtain the two-dimensional coordinate of the contact point T81 at which the pointer 81 touches the panel surface 100s by the following steps: (a) the processing unit 14 obtains the first angle A81 between the first sensing path R81 and the third side 100c of the panel, and the second angle A81' between the second sensing path R81' and the third side 100c of the panel; (b) from the equation D2 = 2×D1/(tan A81 + tan A81') it obtains the distance D2 between the contact point T81 of the pointer 81 and the fourth side 100d of the panel; (c) from D2 × tan A81 it obtains the Y coordinate of the contact point T81. The two-dimensional coordinate of the contact point T81 is therefore (D2, D2 × tan A81).
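The triangulation above is straightforward to express in code. The following Python sketch is purely illustrative and not part of the patent text; the function names and the linear pixel-to-angle mapping are assumptions made for the example, since the patent only states that the position-to-angle relation is stored in the processing unit 14.

```python
import math

def shadow_to_angle(pixel_x, sensor_width, fov_degrees):
    # Map a shadow's one-dimensional pixel position in the image window to the
    # angle between its sensing path and the third side 100c. A linear mapping
    # is assumed here for illustration.
    return math.radians(pixel_x / sensor_width * fov_degrees)

def contact_point(a81, a81_prime, d1):
    # Steps (a)-(c): D2 = 2*D1 / (tan A81 + tan A81'), Y = D2 * tan A81.
    d2 = 2.0 * d1 / (math.tan(a81) + math.tan(a81_prime))
    return (d2, d2 * math.tan(a81))

# Example with assumed values: mirror surface 60 cm from the third side,
# shadows at pixels 210 and 345 of a 640-pixel window with a 90-degree field of view.
a81 = shadow_to_angle(210, 640, 90)
a81_prime = shadow_to_angle(345, 640, 90)
print(contact_point(a81, a81_prime, 60.0))  # -> roughly (70.7, 40.0)
```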
Please refer to Fig. 3a and Fig. 3b. The operation of the interactive system 10 of the first embodiment of the invention comprises two modes. When the processing unit 14 judges from the image window 20 captured by the image sensor 13 that only one pointer touches the panel surface 100s, it controls the interactive system 10 to work in the first mode. In the first mode the image sensor 13 captures images continuously at a sampling frequency, and the processing unit 14 calculates the two-dimensional coordinate of the contact point T81 at which the pointer 81 touches the panel surface 100s from the one-dimensional positions of the shadows of the pointer 81 in the image window 20, and controls the motion of the cursor 151 on the image display 15 according to the change of that coordinate. For example, when the pointer 81 moves toward the fourth side 100d of the panel, the contact point T81' of the first mirror image simultaneously moves toward the fourth mirror image 100d'. The shadow I81 corresponding to the pointer and the shadow I81' corresponding to the first mirror image then both move toward the left of the image window 20. The processing unit 14 calculates the two-dimensional coordinate of the contact point T81 from the positions of the shadows I81 and I81' in each image window 20, and according to the change of this coordinate controls the cursor 151 on the image display 15 to move toward the left of the display screen 150. It is understood that the relation among the moving direction of the pointer, the moving direction of the shadows I81 and I81' in the image window 20 and the moving direction of the cursor 151 is not limited to that disclosed in this embodiment; depending on how the software processes the images, the shadows I81 and I81' and the cursor 151 may move opposite to the pointer.
When the processing unit 14 judges from the image window 20 captured by the image sensor 13 that multiple pointers touch the panel surface 100s, it controls the interactive system 10 to work in the second mode. In the second mode the processing unit 14 no longer calculates the two-dimensional coordinate of each contact point from each image window 20 one by one; it judges the gesture only from the interrelation of the multiple shadows in the image windows 20, and updates the display picture of the display screen 150 of the image display 15 according to the judged gesture, for example scrolling the picture, zooming an object in or out, rotating an object, switching pictures or showing a menu.
Please refer to Fig. 4a, which shows a flowchart of the gesture recognition method of the invention. The method comprises the following steps: capturing an image window with the image sensor (step S1); extracting the shadow information in the image window (step S2); determining whether multiple pointers are present according to the shadow information (step S3); if not, entering the first mode (step S4); if so, entering the second mode (step S5).
Please refer to Fig. 4b, which shows an embodiment of the second mode of step S5 in Fig. 4a, in which the shadow information comprises the average shadow number, the average shadow spacing and the maximum shadow spacing. The second mode comprises the following steps: judging whether one of the average shadow number and the average shadow spacing is greater than a predetermined threshold (step S51); if so, performing rotation gesture recognition according to the interrelation of the shadows in successive image windows (step S52); if not, performing up/down/left/right/zoom-in/zoom-out gesture recognition according to the interrelation of the shadows in successive image windows (step S53); and updating the display picture of the image display according to the recognized gesture (step S54). It is understood that Fig. 4b can instead be configured to perform rotation gesture recognition when the average shadow number is smaller than the predetermined threshold and translation gesture recognition when it is greater, or to perform rotation gesture recognition when the average shadow spacing is smaller than the predetermined threshold and translation gesture recognition when it is greater.

In another embodiment the second mode may comprise only one step: performing rotation gesture recognition according to the interrelation of the shadows in successive image windows. In yet another embodiment the second mode may comprise only one step: performing up/down/left/right/zoom-in/zoom-out gesture recognition according to the interrelation of the shadows in successive image windows. That is, the second mode of the interactive system may perform only rotation gesture recognition or only up/down/left/right/zoom-in/zoom-out gesture recognition.
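As an informal illustration of the flow of Figs. 4a and 4b, the Python sketch below dispatches between the two modes and, inside the second mode, between rotation and translation/zoom recognition. The threshold values and the feature extraction are assumptions of this sketch; the description only suggests a value such as 6 for the average shadow number, and the spacing threshold in pixels is ours.

```python
COUNT_THRESHOLD = 6      # suggested in the description for the shadow number
SPACING_THRESHOLD = 80.0 # assumed pixel value for the average shadow spacing

def average_spacing(shadows):
    # Average spacing between adjacent one-dimensional shadow positions (step S2 feature).
    xs = sorted(shadows)
    return (xs[-1] - xs[0]) / (len(xs) - 1) if len(xs) > 1 else 0.0

def select_mode(shadows):
    # Steps S3-S5: one pointer yields at most two shadows (one real, one
    # mirror image), so more than two shadows means multiple pointers.
    if len(shadows) <= 2:
        return "first mode (step S4): compute the 2-D coordinate"
    # Step S51: compare one of the features with its threshold.
    if len(shadows) > COUNT_THRESHOLD or average_spacing(shadows) > SPACING_THRESHOLD:
        return "rotation gesture recognition (step S52)"
    return "translation/zoom gesture recognition (step S53)"

print(select_mode([210, 345]))              # one pointer -> first mode
print(select_mode([100, 120, 140, 160]))    # spacing ~20  -> step S53
print(select_mode([0, 100, 200, 300, 400])) # spacing 100  -> step S52
```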
Please refer to Figs. 3a~4b. When gesture recognition is performed with the interactive system 10 of the first embodiment of the invention, the image sensor 13 first captures an image to form the image window 20, which contains at least one shadow I81 corresponding to the pointer contact point T81 and at least one shadow I81' corresponding to the contact point T81' of the first mirror image (step S1). Next, the processing unit 14 extracts the shadow information in the image window 20, for example the average shadow number, the average shadow spacing and the maximum shadow spacing, for use in the subsequent steps (step S2). The processing unit 14 then judges from the extracted shadow information whether there are multiple pointers in the image window 20 (step S3); since each pointer can produce at most two shadows in the image window 20, more than two shadows in the image window 20 indicate multiple pointers.
When only one pointer is judged to be present, as shown in Fig. 3a and Fig. 3b, the processing unit 14 controls the interactive system 10 to enter the first mode (step S4). In the first mode the processing unit 14 calculates the two-dimensional coordinate of the contact point (for example T81) at which the pointer touches the panel surface 100s from the one-dimensional positions of the shadows (for example I81 and I81') in the image window 20 captured by the image sensor 13, and controls the motion of the cursor 151 on the image display 15 according to the change of that coordinate.
When the processing unit 14 judges from the shadow information that multiple pointers touch the panel surface 100s, as shown in Figs. 5a to 5h, it controls the interactive system 10 to enter the second mode (step S5). In the second mode the processing unit 14 performs gesture recognition according to the interrelation of the shadows in the image window 20, and controls the image display 15 to update the picture on its display screen 150 according to the recognized gesture, for example scrolling the picture, zooming an object or window in or out, rotating an object, switching pictures or showing a menu.
Please refer to Figs. 5a to 5h, which illustrate embodiments of the second mode; in this illustration the light-emitting unit 11 is described as a passive light source. It is understood that Figs. 5a to 5h are only exemplary and are not intended to limit the invention.
Picture scrolling gesture: please refer to Figs. 5a~5d. When the processing unit 14 judges from the shadow information in the image window 20 captured by the image sensor 13 that multiple contact points (for example T1 and T2) are present, it enters the second mode. The processing unit 14 then judges whether the average shadow number in the image window 20 is greater than a predetermined threshold, for example 6, or whether the average shadow spacing Sav is greater than a predetermined threshold (step S51). When neither the average shadow number nor the average shadow spacing Sav is greater than its predetermined threshold, translation gesture recognition is performed (step S53).
For translation gesture recognition the shadows are first grouped, for example using the center line C of the image window 20 as the grouping criterion, to distinguish a first shadow group G1 and a second shadow group G2, wherein the first shadow group G1 may be the real-image shadow group or the virtual-image shadow group, and the second shadow group G2 may be the virtual-image shadow group or the real-image shadow group.
For example, in Figs. 5a~5d the average shadow number in the image window 20 is not greater than the predetermined threshold, or the average shadow spacing Sav is not greater than the predetermined threshold, so the processing unit 14 performs up/down/left/right gesture recognition (step S53). In Fig. 5a the processing unit 14 recognizes that the first shadow group G1 and the second shadow group G2 in the image window 20 both move rightward, and therefore judges that the user is performing a gesture of scrolling the display picture rightward/leftward; it accordingly controls the display screen 150 of the image display 15 to update the display picture (step S54).
Similarly, in Fig. 5b the processing unit 14 recognizes that the first shadow group G1 and the second shadow group G2 in the image window 20 both move leftward, and therefore judges that the user is performing a gesture of scrolling the display picture leftward/rightward; it accordingly controls the display screen 150 of the image display 15 to update the display picture (step S54).
In Fig. 5c the processing unit 14 recognizes that the average shadow spacing between the first shadow group G1 and the second shadow group G2 in the image window 20 increases gradually, and therefore judges that the user is performing a gesture of scrolling the display picture downward/upward; it accordingly controls the display screen 150 to update the display picture (step S54).
In Fig. 5d the processing unit 14 recognizes that the average shadow spacing between the first shadow group G1 and the second shadow group G2 in the image window 20 decreases gradually, and therefore judges that the user is performing a gesture of scrolling the display picture upward/downward; it accordingly controls the display screen 150 to update the display picture (step S54).
In another embodiment, when the processing unit 14 judges from the shadow information in the image window 20 captured by the image sensor 13 that multiple contact points are present, it performs translation gesture recognition directly (step S53) without performing step S51.
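As a hedged illustration of the grouping-and-displacement test of Figs. 5a~5d, the following Python sketch compares two consecutive image windows; the pixel threshold min_shift and the comparison of group means are assumptions of this sketch, not details given by the patent.

```python
def split_groups(shadows, center):
    # Group one-dimensional shadow positions by the center line C of the image
    # window: first shadow group G1 on one side, second shadow group G2 on the
    # other (real-image versus virtual-image shadows).
    return ([x for x in shadows if x < center],
            [x for x in shadows if x >= center])

def mean(xs):
    return sum(xs) / len(xs)

def recognize_translation(prev, curr, center, min_shift=5.0):
    # Step S53 over two consecutive image windows: both groups shifting the same
    # way reads as a left/right scroll (Figs. 5a-5b); the spacing between the
    # groups growing or shrinking reads as a down/up scroll (Figs. 5c-5d).
    pg1, pg2 = split_groups(prev, center)
    cg1, cg2 = split_groups(curr, center)
    d1, d2 = mean(cg1) - mean(pg1), mean(cg2) - mean(pg2)
    spacing_change = (mean(cg2) - mean(cg1)) - (mean(pg2) - mean(pg1))
    if d1 > min_shift and d2 > min_shift:
        return "scroll right/left (Fig. 5a)"
    if d1 < -min_shift and d2 < -min_shift:
        return "scroll left/right (Fig. 5b)"
    if spacing_change > min_shift:
        return "scroll down/up (Fig. 5c)"
    if spacing_change < -min_shift:
        return "scroll up/down (Fig. 5d)"
    return "no gesture"

# Two pointers; shadow positions (assumed pixels) in consecutive image windows.
print(recognize_translation([100, 140, 420, 460], [120, 160, 440, 480], center=320))
```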
Object zoom gesture: before an object is zoomed, the user first forms a single contact point on the panel surface 100s to enter the first mode and, in the first mode, moves the cursor 151 onto an object O, as shown in Fig. 3a. The user then forms multiple contact points on the panel surface 100s, as in Figs. 5e~5f. When the processing unit 14 judges from the shadow information in the image window 20 captured by the image sensor 13 that multiple contact points (for example T1, T2) are present, it enters the second mode.
The processing unit 14 then judges whether one of the average shadow number and the average shadow spacing Sav in the image window 20 is greater than its predetermined threshold (step S51). When the average shadow number is not greater than the predetermined threshold or the average shadow spacing Sav is not greater than the predetermined threshold, the shadows are first grouped, for example using the center line C of the image window 20 as the grouping criterion, to distinguish the first shadow group G1 and the second shadow group G2.
For example, in Figs. 5e~5f the average shadow number in the image window 20 is not greater than the predetermined threshold, or the average shadow spacing Sav is not greater than the predetermined threshold, so the processing unit 14 performs zoom-in/zoom-out gesture recognition (step S53). In Fig. 5e the processing unit 14 recognizes that the average shadow spacing between the first shadow group G1 and the second shadow group G2 in the image window 20 remains substantially unchanged while the maximum shadow spacing increases, and therefore judges that the user is performing an object zoom-in/zoom-out gesture; it accordingly controls the display screen 150 of the image display 15 to update the display picture (step S54).
In Fig. 5f the processing unit 14 recognizes that the average shadow spacing between the first shadow group G1 and the second shadow group G2 in the image window 20 remains substantially unchanged while the maximum shadow spacing decreases, and therefore judges that the user is performing an object zoom-out/zoom-in gesture; it accordingly controls the display screen 150 of the image display 15 to update the display picture (step S54).
In another embodiment, when the processing unit 14 judges from the shadow information in the image window 20 captured by the image sensor 13 that multiple contact points are present, it performs zoom-in/zoom-out gesture recognition directly (step S53) without performing step S51.
Moreover, the second mode may be entered directly without first entering the first mode before zoom-in/zoom-out gesture recognition; for example, when the panel 100 is a touch panel the user can point at the object directly, so the second mode can be entered directly for zoom-in/zoom-out gesture recognition.
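A minimal sketch of the zoom test of Figs. 5e~5f follows, under the same assumptions as the translation sketch above: the average spacing between the two shadow groups stays roughly constant while the overall (maximum) shadow spacing grows or shrinks. The tolerance value is an assumption.

```python
def recognize_zoom(prev, curr, center, tol=5.0):
    # Zoom test of Figs. 5e-5f: the average spacing between the two shadow
    # groups stays roughly constant while the maximum shadow spacing changes.
    def group_means(shadows):
        g1 = [x for x in shadows if x < center]   # one side of center line C
        g2 = [x for x in shadows if x >= center]  # the other side
        return sum(g1) / len(g1), sum(g2) / len(g2)
    p1, p2 = group_means(prev)
    c1, c2 = group_means(curr)
    avg_change = (c2 - c1) - (p2 - p1)
    max_change = (max(curr) - min(curr)) - (max(prev) - min(prev))
    if abs(avg_change) <= tol and max_change > tol:
        return "zoom in/out (Fig. 5e: maximum spacing increases)"
    if abs(avg_change) <= tol and max_change < -tol:
        return "zoom out/in (Fig. 5f: maximum spacing decreases)"
    return "no zoom gesture"

# Pointers spreading apart (assumed pixel positions in consecutive windows):
print(recognize_zoom([140, 180, 420, 460], [120, 200, 400, 480], center=320))
```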
Object rotation gesture: before an object is rotated, the user first forms a single contact point on the panel surface 100s to enter the first mode and, in the first mode, moves the cursor 151 onto an object O, as shown in Fig. 3a. The user then forms multiple contact points T on the panel surface 100s, as shown in Figs. 5g~5h; when the processing unit 14 judges from the shadow information in the image window 20 captured by the image sensor 13 that multiple contact points T are present, it enters the second mode.
The processing unit 14 then judges whether one of the average shadow number and the average shadow spacing Sav in the image window 20 is greater than its predetermined threshold (step S51). When the average shadow number is greater than the predetermined threshold or the average shadow spacing Sav is greater than the predetermined threshold, the shadows are not grouped; the direction of rotation is judged directly from the numbers of shadows displacing toward the two sides of the image window 20.
For example, in Fig. 5g the processing unit 14 recognizes that the number of shadows moving rightward in the image window 20 is greater than the number of shadows moving leftward, and therefore judges that the user is performing a clockwise/counterclockwise object rotation gesture; it accordingly controls the image display 15 to update the display picture (step S54).
In Fig. 5h the processing unit 14 recognizes that the number of shadows moving leftward in the image window 20 is greater than the number of shadows moving rightward, and therefore judges that the user is performing a counterclockwise/clockwise object rotation gesture; it accordingly controls the image display 15 to update the display picture (step S54).
In another embodiment, when the processing unit 14 judges from the shadow information in the image window 20 captured by the image sensor 13 that multiple contact points are present, it performs rotation gesture recognition directly (step S52) without performing step S51.
Moreover, the second mode may be entered directly without first entering the first mode before rotation gesture recognition; for example, when the panel 100 is a touch panel the user can point at the object directly, so the second mode can be entered directly for rotation gesture recognition.
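The rotation test of Figs. 5g~5h only compares counts of shadows displacing in each direction. The sketch below pairs shadows between two consecutive image windows by sorted order, which presumes that no shadow appears or vanishes between frames; that tracking assumption is ours, not the patent's.

```python
def recognize_rotation(prev, curr, min_shift=2.0):
    # Rotation test of Figs. 5g-5h: more shadows moving rightward than leftward
    # -> clockwise/counterclockwise; the reverse -> the other direction.
    moves = [c - p for p, c in zip(sorted(prev), sorted(curr))]
    right = sum(1 for m in moves if m > min_shift)
    left = sum(1 for m in moves if m < -min_shift)
    if right > left:
        return "rotate clockwise/counterclockwise (Fig. 5g)"
    if left > right:
        return "rotate counterclockwise/clockwise (Fig. 5h)"
    return "no rotation"

# Most shadows drift rightward between windows (assumed pixel positions).
print(recognize_rotation([100, 150, 300, 420, 470], [112, 160, 298, 432, 481]))
```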
Picture switching gesture: the user directly forms multiple contact points T on the panel surface 100s, as shown in Figs. 5g~5h; when the processing unit 14 judges from the shadow information in the image window 20 captured by the image sensor 13 that multiple contact points T are present, it enters the second mode directly.

The processing unit 14 determines directly from the numbers of shadows displacing toward the two sides of the image window 20 whether to switch the picture. For example, in Fig. 5g and Fig. 5h the processing unit 14 recognizes that the number of shadows moving rightward or leftward in the image window 20 is greater than the number moving the other way, and therefore judges that the user is performing a picture switching gesture; it accordingly controls the image display 15 to perform the corresponding picture switching function.
Menu display gesture: the user directly forms multiple contact points T on the panel surface 100s, as shown in Figs. 5g~5h; when the processing unit 14 judges from the shadow information in the image window 20 captured by the image sensor 13 that multiple contact points T are present, it enters the second mode directly.

The processing unit 14 determines directly from the numbers of shadows displacing toward the two sides of the image window 20 whether to show a menu. For example, in Fig. 5g the processing unit 14 recognizes that the number of shadows moving rightward in the image window 20 is greater than the number moving leftward, and in Fig. 5h that the number of shadows moving leftward is greater than the number moving rightward; it therefore judges that the user is performing a menu display gesture and controls the image display 15 to show the corresponding menu.
Second embodiment
Please refer to Figs. 6a to 6c. Fig. 6a shows an operational schematic diagram of the interactive system 10' of the second embodiment of the invention, and Fig. 6b and Fig. 6c show schematic diagrams of the image windows 20' and 20'' captured by the image sensors 13 and 13' of Fig. 6a, respectively. In this embodiment the interactive system 10' comprises the light-emitting unit 11, the first light source 121, the second light source 122 and the image sensors 13 and 13'. The light-emitting unit 11 is an active light source, preferably emitting light toward the third side 100c of the panel. The light-emitting unit 11, the first light source 121 and the second light source 122 are arranged on the first side 100a, the second side 100b and the fourth side 100d of the panel, respectively. Therefore the image window 20' captured by the image sensor 13 contains only the first shadows I81 and I82 of the pointer tips, and the image window 20'' captured by the image sensor 13' contains only the shadows I81'' and I82'' of the pointer tips.
In this embodiment the processing unit 14 performs gesture recognition according to the interrelation of the multiple shadows in the image windows 20' and 20'' captured by the image sensors 13 and 13'.
Picture scrolling gesture: please refer to Figs. 7a~7b. When the processing unit 14 judges from the shadow information in the image windows 20' and 20'' captured by the image sensors 13 and 13' that multiple contact points (for example T1 and T2) are present, it enters the second mode. The processing unit 14 then judges whether the average shadow number in the image windows 20' and 20'' is greater than a predetermined threshold, or whether the average shadow spacing in the image windows 20' and 20'' is greater than a predetermined threshold (step S51).
When the average shadow number in the image windows 20' and 20'' is not greater than the predetermined threshold, or the average shadow spacing is not greater than the predetermined threshold, the processing unit 14 performs left/right gesture recognition (step S53). For example, in Fig. 7a and Fig. 7b the processing unit 14 recognizes that the shadows in the image windows 20' and 20'' all move rightward or all move leftward, and therefore judges that the user is performing a gesture of scrolling the display picture downward/upward; it accordingly controls the display screen 150 of the image display 15 to update the display picture (step S54).
In another embodiment, when the processing unit 14 judges from the shadow information in the image windows 20' and 20'' captured by the image sensors 13 and 13' that multiple contact points are present, it performs translation gesture recognition directly (step S53) without performing step S51.
Object zoom gesture: before an object is zoomed, the user first controls the cursor to move onto the object to be zoomed. The user then forms multiple contact points (for example T1 and T2) on the panel surface 100s, as shown in Figs. 7c~7d. When the processing unit 14 judges from the shadow information in the image windows 20' and 20'' captured by the image sensors 13 and 13' that multiple contact points are present, it enters the second mode.
The processing unit 14 then judges whether the average shadow number in the image windows 20' and 20'' is greater than the predetermined threshold, or whether the average shadow spacing in the image windows 20' and 20'' is greater than the predetermined threshold (step S51). When the average shadow number is not greater than the predetermined threshold or the average shadow spacing is not greater than the predetermined threshold, the processing unit 14 performs zoom-in/zoom-out gesture recognition (step S53). For example, in Figs. 7c and 7d the processing unit 14 recognizes that the average spacing of the shadows in the image windows 20' and 20'' increases or decreases, and therefore judges that the user is performing an object zoom-in/zoom-out gesture; it accordingly controls the image display 15 to update the display picture (step S54).
In another embodiment, when the processing unit 14 judges from the shadow information in the image windows 20' and 20'' captured by the image sensors 13 and 13' that multiple contact points are present, it performs zoom-in/zoom-out gesture recognition directly (step S53) without performing step S51.
Moreover, the second mode may be entered directly without first entering the first mode before zoom-in/zoom-out gesture recognition; for example, when the panel 100 is a touch panel the user can point at the object directly, so the second mode can be entered directly for zoom-in/zoom-out gesture recognition.
Object rotation gesture: before an object is rotated, the user first controls the cursor to move onto the object to be rotated. The user then forms multiple contact points T on the panel surface 100s, as shown in Figs. 7e~7f; when the processing unit 14 judges from the shadow information in the image windows 20' and 20'' captured by the image sensors 13 and 13' that multiple contact points T are present, it enters the second mode.
The processing unit 14 then judges whether the average shadow number in the image windows 20' and 20'' is greater than the predetermined threshold, or whether the average shadow spacing in the image windows 20' and 20'' is greater than the predetermined threshold (step S51). When the average shadow number is greater than the predetermined threshold or the average shadow spacing is greater than the predetermined threshold, the direction of rotation is judged from the numbers of shadows moving toward the two sides of the image windows 20' and 20''.
For example, in Fig. 7e the processing unit 14 recognizes that the number of shadows moving rightward in each of the image windows 20' and 20'' is greater than the number moving leftward, and in Fig. 7f that the number of shadows moving leftward is greater than the number moving rightward; it therefore judges that the user is performing a clockwise/counterclockwise object rotation gesture and accordingly controls the image display 15 to update the display picture (step S54).
In another embodiment, when the processing unit 14 judges from the shadow information in the image windows 20' and 20'' captured by the image sensors 13 and 13' that multiple contact points are present, it performs rotation gesture recognition directly (step S52) without performing step S51.
Moreover, the second mode may be entered directly without first entering the first mode before rotation gesture recognition.
Picture switching or menu display gesture: the user directly forms multiple contact points T on the panel surface 100s, as shown in Figs. 7e~7f; when the processing unit 14 judges from the shadow information in the image windows 20' and 20'' captured by the image sensors 13 and 13' that multiple contact points T are present, it enters the second mode directly.

The processing unit 14 determines directly from the numbers of shadows displacing toward the two sides of the image windows 20' and 20'' whether to switch the picture or to show a menu. For example, in Fig. 7e and Fig. 7f the processing unit 14 recognizes that the number of shadows moving rightward or leftward in the image windows 20' and 20'' is greater than the number moving the other way, and therefore judges that the user is performing a picture switching or menu display gesture; it accordingly controls the image display 15 to switch the picture or show the menu.
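In the second embodiment the same tests are simply applied to both image windows and combined. The following Python sketch is an informal illustration of that combination; requiring the two windows to agree before reporting a gesture is our assumption, since the patent does not specify a conflict rule.

```python
def dominant_direction(window_prev, window_curr, min_shift=2.0):
    # Return 'right', 'left' or None for the dominant shadow displacement in
    # one image window (shadows paired by sorted order, as assumed earlier).
    moves = [c - p for p, c in zip(sorted(window_prev), sorted(window_curr))]
    right = sum(1 for m in moves if m > min_shift)
    left = sum(1 for m in moves if m < -min_shift)
    if right > left:
        return "right"
    if left > right:
        return "left"
    return None

def recognize_two_windows(prev20a, curr20a, prev20b, curr20b):
    # Combine the image windows 20' and 20'' of Figs. 7e-7f: both windows must
    # show the same dominant direction before a gesture is reported.
    a = dominant_direction(prev20a, curr20a)
    b = dominant_direction(prev20b, curr20b)
    if a is not None and a == b:
        return f"rotate / switch picture / show menu (shadows moving {a})"
    return "no gesture"

print(recognize_two_windows([100, 300, 500], [110, 312, 509],
                            [150, 330, 520], [162, 341, 531]))
```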
It is understood that the control functions corresponding to the interrelations of the shadows in the second mode described above are not limited to what is disclosed in Figs. 5a~5h and Figs. 7a~7f. The spirit of the invention is to perform gesture recognition according to the interrelation of the shadows in the image windows without calculating the position coordinate of each contact point one by one, thereby avoiding the situation in which touch-point coordinates cannot be calculated because the pointers occlude one another.
As described above, because existing touch systems perform gesture recognition through changes of the two-dimensional coordinates of the contact points, they easily fail to calculate the contact-point coordinates correctly when the pointers occlude one another. The invention uses the interrelation of the shadows in the captured image windows as the basis of gesture recognition, so gesture recognition can be performed correctly with only one image sensor, which reduces the system cost.
Although the invention has been disclosed by the above embodiments, they are not intended to limit the invention. Any person skilled in the art to which the invention pertains can make various changes and modifications without departing from the spirit and scope of the invention. The protection scope of the invention is therefore defined by the appended claims.
Claims (14)
1. A gesture recognition method of an interactive system, the interactive system comprising an image sensor, a mirror surface and at least one light source, the image sensor being used for capturing image windows containing at least one shadow corresponding to a pointer contact point and at least one shadow corresponding to a mirror-image contact point, the gesture recognition method comprising the following steps:
capturing an image window with the image sensor;
extracting the shadow information in the image window;
entering a first mode when the image window is judged to contain two shadows, and calculating the position coordinate of the pointer relative to the interactive system according to the positions of the shadows in the image window; and
entering a second mode when the image window is judged to contain more than two shadows, and performing gesture recognition according to the interrelation of the shadows in successive image windows,
wherein the second mode comprises:
comparing one of an average shadow number and an average shadow spacing with a predetermined threshold; and
performing up, down, left, right, zoom-in or zoom-out gesture recognition when the average shadow number or the average shadow spacing is smaller than the predetermined threshold.
2. The gesture recognition method according to claim 1, wherein the first mode further comprises the following step:
controlling the motion of a cursor on an image display according to the change of the position coordinate of the pointer in successive image windows.
3. The gesture recognition method according to claim 1, wherein the second mode further comprises the following steps:
performing rotation gesture recognition when the average shadow number or the average shadow spacing is greater than the predetermined threshold; and
updating the display picture of an image display according to the recognized gesture;
wherein, when the gesture is recognized as up, down, left or right, the display picture is updated by scrolling; when the gesture is recognized as zoom in or zoom out, the display picture is updated by zooming an object in or out; when the gesture is recognized as a rotation gesture, the display picture is updated by rotating an object; and when the numbers of shadows moving leftward and rightward are recognized as different, the display picture is updated by switching pictures or showing a menu.
4. The gesture recognition method according to claim 1, wherein, when the average shadow number or the average shadow spacing is smaller than the predetermined threshold, the gesture recognition method further comprises the following step:
dividing the shadows into a real-image shadow group and a virtual-image shadow group according to the center line of the image window;
wherein left or right gesture recognition is performed when the real-image shadow group and the virtual-image shadow group in the image window displace in the same direction; up or down gesture recognition is performed when the average shadow spacing between the real-image shadow group and the virtual-image shadow group in the image window changes; and zoom-in or zoom-out gesture recognition is performed when the maximum shadow spacing between the real-image shadow group and the virtual-image shadow group in the image window changes.
5. The gesture recognition method according to claim 3, wherein, in the step of performing rotation gesture recognition, the direction of rotation is recognized as the direction in which the greater number of shadows displace in the same direction in successive image windows.
6. An interactive system, comprising:
At least one active light source;
A mirror;
An image sensor for continuously capturing image windows containing at least one shadow corresponding to a pointer contact point and at least one shadow corresponding to a mirror-image contact point; and
A processing unit, which enters a first mode when the image window captured by the image sensor is determined to contain two shadows, and calculates a two-dimensional position coordinate of the pointer according to the one-dimensional positions of the shadows within the image window; and which enters a second mode when the image window captured by the image sensor is determined to contain more than two shadows, and performs gesture recognition according to the relationship between the shadows in successive image windows,
Wherein the image windows captured by the image sensor contain real-image shadows and virtual-image shadows, and the processing unit performs gesture recognition according to the relationship between the real-image shadows and the virtual-image shadows in successive image windows.
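In the first mode of claim 6, two one-dimensional shadow positions, the pointer's own shadow and that of its mirror image, suffice to fix a two-dimensional coordinate. A minimal triangulation sketch under an assumed geometry follows; the constant `H`, the coordinate frame and the conversion of shadow positions into angles are all illustrative assumptions, not the patented method.

```python
import math

# A minimal triangulation sketch for the first mode, assuming (not from the
# patent) that the image sensor sits at the origin, the mirror lies along the
# line y = H, and each 1-D shadow position has already been converted into a
# viewing angle in radians. The mirror image of a pointer at (x, y) then
# appears at (x, 2H - y), so two angles determine the 2-D coordinate.

H = 50.0  # hypothetical sensor-to-mirror distance

def pointer_coordinate(angle_real, angle_mirror):
    """angle_real: angle to the pointer's own shadow;
    angle_mirror: angle to the shadow of its mirror image."""
    t1, t2 = math.tan(angle_real), math.tan(angle_mirror)
    x = 2 * H / (t1 + t2)  # from tan(a1) = y/x and tan(a2) = (2H - y)/x
    y = x * t1
    return x, y

# A pointer at (30, 20): the formulas recover its coordinate.
a1 = math.atan2(20.0, 30.0)
a2 = math.atan2(2 * H - 20.0, 30.0)
print(pointer_coordinate(a1, a2))  # -> approximately (30.0, 20.0)
```

Under this reading, the mirror plays the role of a second camera: the virtual image supplies the second viewing angle that a single sensor could not otherwise provide.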
7. The interactive system according to claim 6, further comprising an image display coupled to the processing unit, wherein the processing unit also updates a display frame of the image display according to the recognized gesture.
8. The interactive system according to claim 6, wherein the processing unit performs gesture recognition only according to the relationship between the shadows in the successive image windows captured by the image sensor, without calculating the two-dimensional position coordinate of each pointer from the image windows.
9. A gesture recognition method of an interactive system, the interactive system comprising an image sensor, a mirror and at least one light source, the image sensor being configured to capture image windows containing at least one shadow corresponding to a pointer contact point and at least one shadow corresponding to a mirror-image contact point, the gesture recognition method comprising the following steps:
Capturing an image window with the image sensor;
Extracting the shadow information of blocked light in the image window;
Entering a first mode when the image window is determined to contain two shadows, and calculating position coordinates of a pointer relative to the interactive system according to the positions of the shadows in the image window; and
Entering a second mode when the image window is determined to contain more than two shadows, and performing gesture recognition according to the relationship between the shadows in successive image windows,
Wherein the second mode comprises:
Comparing one of an average shadow number and an average shadow spacing with a predetermined threshold; and
When the average shadow number or the average shadow spacing is larger than the predetermined threshold, performing rotation gesture recognition.
10. The gesture recognition method according to claim 9, wherein the first mode further comprises the following step:
Controlling the motion of a cursor on an image display according to changes of the position coordinates of the pointer between successive image windows.
11. The gesture recognition method according to claim 9, wherein the second mode further comprises:
Updating a display frame of an image display according to the recognized gesture;
Wherein, when the gesture is recognized as a rotation gesture, the display frame is updated by rotating an object.
12. The gesture recognition method according to claim 9, wherein, in the step of performing rotation gesture recognition, the sense of rotation is recognized from the direction in which the larger number of shadows is displaced in common between successive image windows.
13. A gesture recognition method of an interactive system, the interactive system comprising an image sensor, a mirror and at least one light source, the image sensor being configured to capture image windows containing at least one shadow corresponding to a pointer contact point and at least one shadow corresponding to a mirror-image contact point, the gesture recognition method comprising the following steps:
Capturing an image window with the image sensor;
Extracting the shadow information of blocked light in the image window;
Entering a first mode when the image window is determined to contain two shadows, and calculating position coordinates of a pointer relative to the interactive system according to the positions of the shadows in the image window; and
Entering a second mode when the image window is determined to contain more than two shadows, and performing gesture recognition according to the relationship between the shadows in successive image windows,
Wherein the second mode comprises:
Comparing one of an average shadow number and an average shadow spacing with a predetermined threshold; and
When the average shadow number or the average shadow spacing is smaller than the predetermined threshold, dividing the shadows into a real-image shadow group and a virtual-image shadow group, and performing gesture recognition according to the relationship between the real-image shadows and the virtual-image shadows in successive image windows.
14. The gesture recognition method according to claim 13, wherein, when the real-image shadow group and the virtual-image shadow group in the image window are displaced in the same direction, left or right gesture recognition is performed; when the average shadow spacing between the real-image shadow group and the virtual-image shadow group changes, up or down gesture recognition is performed; and when the maximum shadow spacing between the real-image shadow group and the virtual-image shadow group changes, zoom-in or zoom-out gesture recognition is performed.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2009101760720A CN102033656B (en) | 2009-09-28 | 2009-09-28 | Gesture identification method and interaction system using same |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2009101760720A Division CN102033656B (en) | 2009-09-28 | 2009-09-28 | Gesture identification method and interaction system using same |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102999158A CN102999158A (en) | 2013-03-27 |
CN102999158B (en) | 2015-12-02
Family
ID=43886626
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2009101760720A Expired - Fee Related CN102033656B (en) | 2009-09-28 | 2009-09-28 | Gesture identification method and interaction system using same |
CN201210345585.1A Expired - Fee Related CN102999158B (en) | 2009-09-28 | 2009-09-28 | The gesture identification of interaction systems and interaction systems |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2009101760720A Expired - Fee Related CN102033656B (en) | 2009-09-28 | 2009-09-28 | Gesture identification method and interaction system using same |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN102033656B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI435251B (en) * | 2011-08-09 | 2014-04-21 | Wistron Corp | Method and system for estimating the tendency of pressure change on a touch panel |
CN102968177B (en) * | 2011-08-31 | 2015-10-28 | 敦宏科技股份有限公司 | Gesture method for sensing |
TWI448918B (en) | 2011-09-09 | 2014-08-11 | Pixart Imaging Inc | Optical panel touch system |
CN103019457A (en) * | 2011-09-23 | 2013-04-03 | 原相科技股份有限公司 | Optical touch system |
CN103425227B (en) * | 2012-05-17 | 2016-01-13 | 原相科技股份有限公司 | The sensing module of tool electricity-saving function and method thereof |
TWI502413B (en) * | 2013-10-07 | 2015-10-01 | Wistron Corp | Optical touch device and gesture detecting method thereof |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1945515A (en) * | 2005-10-05 | 2007-04-11 | 索尼株式会社 | Display apparatus and display method |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0736603A (en) * | 1993-07-16 | 1995-02-07 | Wacom Co Ltd | Two-dimensional position detector |
JP2003173237A (en) * | 2001-09-28 | 2003-06-20 | Ricoh Co Ltd | Information input-output system, program and storage medium |
US20090044988A1 (en) * | 2007-08-17 | 2009-02-19 | Egalax_Empia Technology Inc. | Device and method for determining function represented by continuous relative motion between/among multitouch inputs on signal shielding-based position acquisition type touch panel |
CN101482772B (en) * | 2008-01-07 | 2011-02-09 | 纬创资通股份有限公司 | Electronic device and its operation method |
CN101498985B (en) * | 2008-01-30 | 2012-05-30 | 义隆电子股份有限公司 | Touch control panel for multi-object operation and its use method |
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1945515A (en) * | 2005-10-05 | 2007-04-11 | 索尼株式会社 | Display apparatus and display method |
Also Published As
Publication number | Publication date |
---|---|
CN102033656A (en) | 2011-04-27 |
CN102033656B (en) | 2013-01-09 |
CN102999158A (en) | 2013-03-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
TWI412975B (en) | Gesture recognition method and interactive system using the same | |
TWI501121B (en) | Gesture recognition method and touch system incorporating the same | |
TWI393037B (en) | Optical touch displaying device and operating method thereof | |
US8436832B2 (en) | Multi-touch system and driving method thereof | |
CA2481396C (en) | Gesture recognition method and touch system incorporating the same | |
US8693732B2 (en) | Computer vision gesture based control of a device | |
JP5167523B2 (en) | Operation input device, operation determination method, and program | |
CN102999158B (en) | The gesture identification of interaction systems and interaction systems | |
CN102341814A (en) | Gesture recognition method and interactive input system employing same | |
US20140053115A1 (en) | Computer vision gesture based control of a device | |
JP2006209563A (en) | Interface device | |
CN103150020A (en) | Three-dimensional finger control operation method and system | |
CN102880304A (en) | Character inputting method and device for portable device | |
JP2012238293A (en) | Input device | |
KR20120136719A (en) | The method of pointing and controlling objects on screen at long range using 3d positions of eyes and hands | |
CN109964202B (en) | Display control apparatus, display control method, and computer-readable storage medium | |
KR101233793B1 (en) | Virtual mouse driving method using hand motion recognition | |
CN101989150A (en) | Gesture recognition method and touch system using same | |
CN102033657B (en) | Touch system, method for sensing height of referent and method for sensing coordinates of referent | |
WO2021004413A1 (en) | Handheld input device and blanking control method and apparatus for indication icon of handheld input device | |
JP6075193B2 (en) | Mobile terminal device | |
JP5118663B2 (en) | Information terminal equipment | |
CN104102332A (en) | Display equipment and control system and method thereof | |
Ebrahimpour-Komleh et al. | Design of an interactive whiteboard system using computer vision techniques |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | | Granted publication date: 20151202; Termination date: 20200928 |