
CN102279670B - Gesture-based human-machine interface - Google Patents

Gesture-based human-machine interface

Info

Publication number
CN102279670B
CN102279670B
Authority
CN
Grant status
Grant
Patent type
Application number
CN 201110115071
Other languages
Chinese (zh)
Other versions
CN102279670A
Inventor
D·L·S·吉蒙兹
N·P·奥兹
P·S·塔皮亚
D·E·卡姆皮罗
Original Assignee
The Boeing Company (波音公司)
Filing date
Publication date
Grant date


Abstract

The invention relates to a gesture-based human-machine interface, for example a graphical user interface for controlling a program executing on a computer. A user's gestures are monitored, and a response is provided based on the detected gesture. An object is used to point at information displayed on a screen. The information displayed on the screen is modified in response both to a determination of where the object is pointing and to a determination of the object's distance from the screen.

Description

Gesture-based human-machine interface

TECHNICAL FIELD

[0001] The invention relates to a gesture-based human-machine interface, for example a graphical user interface that can be used to control a program executing on a computer. Although suitable for many types of programs, it is of particular interest for programs that control the flight of one or more unmanned aerial vehicles.

BACKGROUND

[0002] Human-machine interfaces have changed greatly over the past decades. Even in the narrower field of computer control, interfaces have evolved from the command line to graphical user interfaces that require a mouse or similar pointing device to select icons displayed to the user.

[0003] More recently, touch-screen devices have become popular. Touch-screen devices that allow multi-touch input are particularly advantageous, as they open up the possibility of gesture-based control. Apple's iPhone(TM) is a good example, in which touch can be used to select items, scroll up or down, zoom in or out, and rotate items. Touch screens nonetheless have drawbacks: for example, the screens tend to have slower response times, poorer accuracy and poorer reliability, and frequent use of a touch screen leads to an accumulation of residue and dust that degrades performance further.

[0004] Systems that avoid some of the problems of touch-screen devices by avoiding contact with the screen have been proposed. Instead, a user's gestures are monitored and a response is provided based on the detected gesture. For example, systems that monitor a user's hands have been proposed, such that gestures made with the hands are used to select, scroll, zoom, rotate and so on, much like existing systems that rely on touching a screen.

SUMMARY

[0005] Against this background, the invention resides in a method of using a computer system through a gesture-based human-machine interface. The method comprises pointing with an object at information displayed on a screen of the computer system, and capturing the scene in front of the screen with at least two cameras. A processor is used to analyse the scene captured by the cameras to identify the object, to determine where on the screen the object is pointing, and to determine the distance of the object from the screen. The processor then modifies the information displayed on the screen in response to the determination of where the object is pointing and of the object's distance from the screen.

[0006] In this way, the disadvantages of touch screens can be avoided. Furthermore, information about how far the object is from the screen can be exploited, and this information can be used in different ways. For example, the information displayed on the screen may be changed by magnifying the part of the screen at which the object is pointing, with the magnification depending on the determined distance from the screen to the object. Thus, pointing close to the screen may provide greater magnification than pointing from further away. Limits may be set so that pointing from beyond a certain distance produces a constant magnification, while the magnification reaches its maximum at a set distance from the screen. How the magnification varies between these distances can be controlled: for example, it may vary linearly or exponentially.

[0007] The method may include tracking the movement of the object to determine where on the screen it is pointing. The method may also include determining the longitudinal extension of the object to determine where on the screen it is pointing. These two optional features may be used as alternatives, or they may be used to reinforce one another.

[0008] By determining the distance of the object from the screen over a period of time, the speed at which the object is moving can be determined. This speed can be used as a further control of the information displayed on the screen. A rapid movement of the object towards the screen may be interpreted differently from a gradual movement towards the screen. For example, a gradual movement may be interpreted as a single click, while a rapid movement may be interpreted as a double click.
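The paragraph above can be sketched in code. This is an illustrative reconstruction, not code from the patent: the sampling scheme and the 0.5 m/s threshold separating a "gradual" from a "rapid" approach are assumptions made for the example.

```python
# Illustrative sketch (thresholds assumed, not from the patent): classify a
# press gesture from the change in object-to-screen distance over time.

FAST_APPROACH = 0.5   # metres per second; assumed example threshold

def classify_press(distances, dt):
    """distances: object-to-screen distances sampled every dt seconds.
    Returns 'double-click' for a rapid approach, 'click' for a gradual one,
    or None if the object is not approaching the screen."""
    speed = (distances[0] - distances[-1]) / (dt * (len(distances) - 1))
    if speed <= 0:
        return None
    return "double-click" if speed > FAST_APPROACH else "click"
```

A gradual approach (here 0.2 m/s) yields a click, while the same distance covered five times faster yields a double click.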

[0009] Optionally, the method may include pointing with two objects at information displayed on the screen of the computer system, and capturing the scene in front of the screen with at least two cameras. The processor may be used to analyse the scene captured by the cameras to identify the objects, to determine where on the screen each object is pointing, and to determine the distance of each object from the screen. The processor then modifies the information displayed on the screen in response to the determination of where the objects are pointing and of their distances from the screen. This allows further functionality. For example, the pair of objects may be used to interact independently with different controls on the screen, say to adjust a volume control and to zoom in on an area. The two objects may also be used together. An image displayed on the screen may be manipulated with the objects, for example by rotating it. For instance, moving the left-hand object towards the screen while moving the right-hand object away from it may cause the image to rotate clockwise about a vertical axis, and moving the upper object towards the screen while moving the lower object away from it may cause the image to rotate about a horizontal axis; other rotations are possible depending on the relative alignment of the objects and the relative movement between them.

[0010] Many different objects may be used to point at the screen. For example, the object may be a user's hand. Preferably, the object may be an extended finger of the user's hand. The fingertip may then be the point used to determine the distance from the screen, while the extension of the finger may be used to determine where on the screen the user is pointing.

[0011] The invention also resides in a computer system comprising a gesture-based human-machine interface. The computer system comprises: (a) a screen operable to display information, (b) at least two cameras arranged to capture the scene in front of the screen, and (c) a processor. The processor is arranged to receive the images provided by the cameras and to analyse them to identify an object pointing at information displayed on the screen. The processor is further arranged to determine where on the screen the object is pointing and the distance of the object from the screen, and to modify the information displayed on the screen in response to those determinations.

[0012] Optionally, the processor is arranged to track the movement of the object to determine where on the screen it is pointing. Additionally or alternatively, the processor may be arranged to determine the longitudinal extension of the object to determine where on the screen it is pointing. The processor may be arranged to change the information displayed on the screen by magnifying the part of the screen at which the object is pointing, with the magnification depending on the determined distance from the screen to the object. As with the method of the invention described above, two objects may be used to modify the information displayed on the screen. The object may be a user's hand, for example an extended finger of the user's hand.

BRIEF DESCRIPTION OF THE DRAWINGS

[0013] So that the invention may be more readily understood, reference is made, by way of example only, to the accompanying drawings, in which:

[0014] Figure 1 is a simplified perspective view of a system including a human-machine interface according to an embodiment of the invention, the system comprising two side-by-side screens and four cameras;

[0015] Figure 2 is a perspective view of a screen from the user's point of view, showing the user selecting a button displayed on the screen by pointing at it;

[0016] Figures 3a to 3d are schematic plan views of systems including a human-machine interface according to embodiments of the invention, each system comprising a screen and one or more cameras, showing how the cameras' fields of view combine;

[0017] Figures 3e to 3h are schematic front views of the systems shown in Figures 3a to 3d, with Figures 3e, 3f, 3g and 3h corresponding to Figures 3a, 3b, 3c and 3d respectively;

[0018] Figure 4 is a schematic diagram of a system including a human-machine interface according to an embodiment of the invention; and

[0019] Figures 5a to 5c are simplified front views of a screen, showing a zoom facility provided by an embodiment of the human-machine interface of the invention.

DETAILED DESCRIPTION

[0020] The figures show a computer system 10 that includes a gesture-based human-machine interface. The computer system 10 includes one or more screens 12 driven to display information. The display of information can be controlled by a user making gestures with his or her hand 14 in front of the screens 12. These gestures are recorded by four cameras 16 arranged around the screens 12. The images captured by the cameras 16 are analysed to determine the three-dimensional position of the user's hand 14 and to track its movement. The movement of the hand 14 is interpreted by the computer system 10, for example to recognise the selection of an icon displayed on the screen 12 or a request to magnify a region displayed on the screen 12. The computer system 10 changes the information displayed on the screens 12 in response to these gestures. Figure 2 shows an example in which the user moves an index finger 18 forward towards a button 20 displayed on the screen 12. This movement mimics the user pressing the button 20, and the computer system 10 interprets it as the user selecting the button 20. This may cause the computer system 10 to display a new image on the screen 12.

[0021] Although any number of screens 12 may be used, Figures 1 and 2 show a computer system 10 using two side-by-side screens 12. The user's arm 22 is shown schematically in front of the screens 12. Movement of the arm 22 is captured by four cameras 16 arranged at the four outer corners of the screens 12 and directed towards their centre. Thus, as the user's hand 14 moves in front of the screens 12, the cameras capture its movement. Using four cameras 16 makes it possible to build a three-dimensional map of the space in front of the screens 12, so that the position of an object (for example the tip of the user's finger 18) can be determined in an x, y, z coordinate system. These coordinate axes are indicated in Figures 1 and 3. Spatial information along all three of the x, y and z axes can be used in the human-machine interface.
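Determining a 3-D position from multiple camera views, as described above, is a triangulation problem. The sketch below is an illustrative reconstruction, not code from the patent: it assumes calibrated cameras, each contributing a sight ray towards the fingertip, and uses the standard midpoint-of-skew-rays construction (with noisy measurements two rays rarely intersect exactly).

```python
# Illustrative sketch: locate a point by triangulating two camera sight rays.
# Each camera i contributes a ray  p_i + t_i * d_i ; we return the midpoint
# of the shortest segment joining the two (possibly skew) rays.

def closest_point_between_rays(p1, d1, p2, d2):
    """Midpoint of the common perpendicular of two 3-D rays."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    r = [x - y for x, y in zip(p1, p2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b            # zero only if the rays are parallel
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    q1 = [p + t1 * v for p, v in zip(p1, d1)]   # closest point on ray 1
    q2 = [p + t2 * v for p, v in zip(p2, d2)]   # closest point on ray 2
    return [(x + y) / 2 for x, y in zip(q1, q2)]
```

With four cameras, any pair whose fields of view contain the fingertip suffices, and the extra pairs can be averaged for robustness.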

[0022] Figures 3a to 3h illustrate how the fields of view 24 of the cameras 16 combine to provide a volume of space within which the computer system 10 can determine the position of an object. The cameras 16 are identical and have the same field of view 24. Figures 3a and 3e are respectively a plan view and a front view of a single screen 12, showing only a single camera 16 (for clarity, the cameras 16 are shown schematically as dots); they best illustrate the field of view 24 obtained from each camera 16. Figures 3b and 3f are respectively a plan view and a front view of the same screen 12, this time showing two cameras 16 arranged on the right-hand edge of the screen 12; these figures show how the fields of view 24 of the two cameras 16 combine. Figures 3c and 3g are a plan view and a front view of the screen 12 showing all four cameras 16 and their fields of view 24. The position of an object in front of the screen 12 can be determined provided the object is captured within the fields of view 24 of at least two cameras 16. Thus, wherever the fields of view 24 overlap in Figures 3c and 3g, the position of an object can be determined. A useful core region 26, within which the position of an object can be determined, is shown in Figures 3d and 3h.

[0023] Figure 4 shows the computer system 10 in more detail. The computer system 10 has a computer 40 as its hub. The computer 40 may comprise many different parts, for example a main processor 42, memory including the programs stored within it, drivers for peripherals such as the screens 12, and cards for operating peripherals such as the screens 12.

[0024] As can be seen, feeds 44 connect the four cameras 16 to an image processor 46. The image processor 46 may be part of the main processor 42, or it may be provided as a separate processor. In either form, the image processor 46 receives the images from the cameras 16 and processes them using commonly available software to improve their quality: for example, brightness, contrast and sharpness may be improved to produce higher-quality images. The processed images are passed to the main processor 42. Image-analysis software stored in memory is retrieved and run by the main processor 42 to analyse the processed images and determine where on the screen 12 the user is pointing. It will be appreciated that such image-analysis software is commonly available.
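The kind of pre-processing attributed to the image processor 46 can be illustrated as follows. This is a hypothetical sketch, not from the patent: real systems would use a library such as OpenCV, but pure Python keeps the arithmetic explicit. The gain/bias values and the 3x3 kernel are assumed examples.

```python
# Illustrative sketch (names and constants hypothetical): simple brightness /
# contrast adjustment followed by a 3x3 sharpening pass, on a grayscale image
# represented as a list of rows of 0..255 pixel values.

def adjust(img, gain=1.2, bias=10):
    """Per-pixel linear contrast (gain) and brightness (bias), clipped to 0..255."""
    return [[min(255, max(0, int(p * gain + bias))) for p in row] for row in img]

def sharpen(img):
    """Apply the classic 3x3 sharpening kernel (centre 5, cross -1) to interior pixels."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            v = (5 * img[y][x]
                 - img[y - 1][x] - img[y + 1][x]
                 - img[y][x - 1] - img[y][x + 1])
            out[y][x] = min(255, max(0, v))
    return out
```

On a uniform region the sharpening kernel leaves values unchanged (5c minus 4c equals c), while edges are exaggerated, which helps the later finger-shape recognition.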

[0025] Once the main processor 42 has determined where the user is pointing on the screen, it determines whether the image presented on the screen 12 needs to change. If it determines that a change is needed, the main processor 42 generates the signals necessary to cause the required change in the information displayed on the screen 12. These signals are passed to a screen driver/card 48, which provides the signals supplied to the screen 12.

[0026] As shown in Figure 4, the computer 40 may include an input device 50 for receiving touch-screen input from the screen 12, that is, allowing the user to select an icon displayed on the screen 12 by touching the screen 12. Providing this feature may be useful in some circumstances. For example, a critical selection may require the user to touch the screen 12 as a further step, to ensure that the user is certain they want to make that selection. This could be used for a button that causes an emergency shutdown of the system: such an action is clearly an extreme measure, and requiring the user to touch the screen 12 can reflect this. The input device 50 is provided for this purpose.

[0027] As mentioned above, the main processor 42 takes the processed images provided by the image processor 46 and analyses them to determine whether the user is pointing at the screen 12. This may be done with conventional image-recognition techniques, for example using software that recognises the shape associated with a hand 14 having an index finger 18 extended towards a screen 12. The main processor 42 then determines where on the screen 12 the finger 18 is pointing. The main processor 42 may perform this function for one hand, or for as many hands as is considered appropriate: for example, it may make the determination for every hand pointing at the screen. The following description uses the example of a single finger 18; as will become apparent, the method may be repeated for however many fingers 18 are expected, or are determined, to be pointing at the screen 12.

[0028] The determination by the main processor 42 of where on the screen 12 the finger 18 is pointing may be made in different ways.

[0029] In one embodiment, the main processor 42 identifies the position of the tip of the index finger 18 in the x, y, z coordinate system. This may be done by triangulation from the images captured by the four cameras 16. Once the position of the fingertip of the index finger 18 has been identified from one set of four images, the next set of four images may be processed in the same way to determine the next position of the fingertip. In this way, the fingertip of the index finger 18 can be tracked and, if its motion continues, its movement can be extrapolated forward in time to determine where it will touch the screen 12.
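The forward extrapolation described above can be sketched as follows. This is an illustrative reconstruction, not from the patent: it takes the screen as the plane z = 0 and assumes roughly constant fingertip velocity between frames, so two successive tracked positions determine the predicted touch point.

```python
# Illustrative sketch: predict where a tracked fingertip will meet the screen
# (taken as the plane z = 0) by linear extrapolation of its last two positions.

def predict_touch_point(prev, curr):
    """prev, curr: (x, y, z) fingertip positions from consecutive frames.
    Returns the (x, y) screen point the finger is heading for, or None if
    the finger is not moving towards the screen."""
    vx, vy, vz = (c - p for c, p in zip(curr, prev))
    if vz >= 0:           # moving away from (or parallel to) the screen
        return None
    t = -curr[2] / vz     # frames until z reaches 0
    return (curr[0] + vx * t, curr[1] + vy * t)
```

In practice the estimate would be refreshed on every new set of camera images, so the prediction converges as the finger approaches the screen.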

[0030] In an alternative embodiment, the images are analysed to determine the extension of the index finger 18 and the direction in which the finger 18 is pointing. Of course, this technique may be combined with the embodiment described above, for example to identify when the finger 18 moves along the direction in which it is pointing, as this may be interpreted as the finger 18 "pressing" an object displayed on the screen 12.

[0031] Figures 5a to 5c show an embodiment of a zoom facility provided in accordance with the invention. A single screen 12 is provided, flanked by four cameras 16, each as already described. The cameras 16 and the screen 12 are connected to a computer system 10 which, as previously described, operates to provide a gesture-based human-machine interface.

[0032] In the example shown in Figures 5a to 5c, the screen 12 displays a map 80 and associated information. The top of the screen 12 carries header information 82, and a column of four selectable buttons 84 is provided at the left-hand edge of the screen 12. The buttons 84 may carry text 86 to indicate new information screens that may be selected, or changes that may be made to the information displayed on the map 80. The map 80 occupies most of the screen 12 and is offset towards the bottom right of the screen 12. The map 80 shows an aircraft 88 as a dot with an arrow indicating its current direction of flight. Information identifying the aircraft 88 may also be displayed next to the dot, as shown at 90. Further information is provided in a row of boxes 92 along the bottom edge of the map 80.

[0033] The user may want to zoom in on, say, an aircraft of interest 88 on the map 80, for example to show in more detail the geographical information displayed on the map 80. To do this, the user may point at one of the buttons 84 to select a zoom mode, and may then point at the aircraft of interest 88 on the map 80. As shown in Figures 5b and 5c, this causes the region at which the user is pointing to be displayed at greater magnification in a circle 94. The circle 94 is shown overlaid on the background map 80. As is known in the art, the edge of the zoomed circle 94 and the background map 80 may be blended if desired. To adjust the magnification factor, the user simply moves his or her index finger 18 towards or away from the screen 12 (that is, in the z direction). Moving the index finger 18 towards the screen produces greater magnification.

[0034] Thus, the x, y position of the user's finger 18 is used to determine the region of the map 80 to be magnified, and the z position of the finger 18 is used to determine the magnification. Upper and lower limits of z position may be set to correspond to lower and upper magnification factors. For example, the magnification may be set to 1 while the user's fingertip 18 is at least a certain distance (say, 30 cm) from the screen 12. Similarly, a minimum separation from the screen (say, 5 cm) may be set to correspond to the maximum magnification, so that the magnification does not increase further if the user's finger 18 comes closer to the screen 12 than 5 cm. How the magnification varies between these distances can be chosen as desired: for example, it may vary linearly with distance, or it may follow some other relationship such as an exponential one.
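The distance-to-magnification mapping above can be sketched as follows. The 30 cm and 5 cm limits are taken from the example in the text; the maximum magnification of 8 and the particular exponential curve are assumptions made for the illustration.

```python
# Illustrative sketch: map the fingertip's z distance to a zoom factor, with
# a choice of linear or exponential interpolation between the two limits.
# Z_FAR/Z_NEAR follow the text's example; MAG_MAX is an assumed value.

import math

Z_FAR, Z_NEAR = 0.30, 0.05        # metres: unit zoom beyond 30 cm, max at 5 cm
MAG_MIN, MAG_MAX = 1.0, 8.0

def magnification(z, mode="linear"):
    z = min(Z_FAR, max(Z_NEAR, z))            # clamp to the active range
    f = (Z_FAR - z) / (Z_FAR - Z_NEAR)        # 0 at 30 cm .. 1 at 5 cm
    if mode == "exponential":
        f = (math.exp(f) - 1) / (math.e - 1)  # same endpoints, convex curve
    return MAG_MIN + f * (MAG_MAX - MAG_MIN)
```

Both modes agree at the two limits; between them the exponential curve stays gentler at arm's length and steepens as the finger nears the screen.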

[0035] Figures 5b and 5c reflect the situation in which the user, starting from the position of Figure 5b, moves their index finger 18 closer to the screen 12 while still pointing at the aircraft of interest 88, so that the magnification increases, as shown in Figure 5c. As well as moving towards the screen 12, the user may move their finger 18 laterally, in which case the magnification will increase and the magnified region will move to follow the lateral movement of the finger 18.

[0036] As those skilled in the art will appreciate, modifications may be made to the embodiments described above without departing from the scope of the invention, which is defined by the appended claims.

[0037] For example, the number of screens 12 may vary freely from one to any number, and the type of screen 12 may also vary. The screen 12 may be a flat screen such as a plasma, LCD or OLED screen, or it may be a cathode-ray tube, or merely a surface onto which an image is projected. Where multiple screens 12 are used, they need not be of a common type. The type of camera 16 used may also vary, although CCD cameras are preferred. The cameras 16 may operate with visible light, but other wavelengths of electromagnetic radiation may be used: for example, infrared cameras may be used in low-light conditions.

[0038] The software may be arranged to monitor any object and to determine what that object selects from the screen 12, for example the user's finger 18 described above. Alternatively, a pointing device such as a wand or stick may be used.

[0039] The invention may be used to navigate menus arranged in a tree structure very efficiently. For example, the finger 18 may point at a button or menu option presented on the screen 12 to produce a new display of information on the screen 12. The user may then move their finger 18 to point at another button or menu option to produce a further new display of information on the screen 12, and so on. Thus, merely by moving the finger 18 so that it points at different parts of the screen 12, the user can navigate a tree menu structure very quickly.

[0040] The position of the user's finger 18 may be determined continuously, for example by tracking the fingertip of the finger 18. This enables the speed of movement of the finger 18 to be determined, and this speed may then be used to control the information on the screen 12. For example, the speed of movement towards the screen 12 may be used so that a gradual movement produces a different response from a rapid movement. Lateral movement may also be used, with different speeds producing different results. For example, a slow lateral movement may cause an object displayed on the screen 12 to move around within the screen, so that a slow movement from left to right moves the object from a central position to a position at the right-hand edge of the screen 12. In contrast, a rapid movement may cause the object to be removed from the screen 12: a rapid movement from left to right may cause the object to fly off the right-hand edge of the screen 12.

[0041] As mentioned above, the main processor 42 may monitor more than one object like the user's finger 18. This enables multiple objects to be used to control the information on the screen 12. A pair of objects may be used to interact with different controls on the screen independently, for example to select a new item and to change the type of information displayed in association with the selected item. Two objects may also be used together: an image displayed on the screen may be manipulated with two hands 14, and an object displayed on the screen 12 may be rotated. For example, the user may place their hands at the same height, with a finger 18 of each hand pointing at the left-hand and right-hand edges of the object displayed on the screen 12. Moving the left hand 14 towards the screen 12 while moving the right hand 14 away from the screen 12 causes the object to rotate clockwise about a vertical axis. If one hand 14 is placed above the other, the object may be rotated about a horizontal axis. The axis of rotation may be defined as corresponding to the line between the fingertips of the fingers 18.
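The two-handed rotation gesture above can be sketched in code. This is an illustrative reconstruction, not from the patent: the axis is taken perpendicular, in the screen plane, to the line between the fingertips, and the sense and rate of rotation come from the difference in the fingertips' z velocities; the gain and sign convention are assumptions.

```python
# Illustrative sketch: derive a rotation gesture from two tracked fingertips.
# Convention assumed here: negative dz means moving towards the screen, and
# left-in / right-out gives a positive (clockwise-about-vertical) rate.

import math

def rotation_from_fingertips(left, right, left_dz, right_dz, gain=1.0):
    """left, right: (x, y) fingertip positions in the screen plane.
    left_dz, right_dz: fingertip velocities along z.
    Returns (axis_unit_vector_2d, signed_angular_rate), or None if degenerate."""
    ax, ay = right[0] - left[0], right[1] - left[1]
    length = math.hypot(ax, ay)
    if length == 0:
        return None
    # Axis: perpendicular, within the screen plane, to the fingertip line.
    axis = (-ay / length, ax / length)
    rate = gain * (right_dz - left_dz) / length
    return axis, rate
```

With the hands at the same height the axis comes out vertical; with one hand above the other it comes out horizontal, matching the two cases in the paragraph above.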

Claims (17)

1. A method of using a computer system through a gesture-based human-machine interface, the method comprising: pointing at a position on a screen of the computer system with a pointing object, wherein pointing includes maintaining a distance between the object and the screen, the distance being positive, and wherein the screen has outer corners and a center; capturing images of the space in front of the screen with at least four cameras, the cameras being mounted at the outer corners of the screen and directed toward the center of the screen, wherein capturing images with the at least four cameras allows a three-dimensional map of the space in front of the screen, including the object, to be constructed; analyzing the images with a processor to identify the object, to determine the position on the screen at which the object is pointing, and to determine the distance between the object and the screen; and modifying the information displayed on the screen in response to determining both the position on the screen at which the object is pointing and the distance between the object and the screen; wherein modifying the information displayed on the screen includes magnifying the information displayed on the part of the screen at which the object is pointing by a magnification factor that depends on the determined distance between the object and the screen, wherein the determined distance lies between a maximum and a minimum, and wherein the magnification ratio varies exponentially as the determined distance varies between the maximum and the minimum.
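Claim 1 requires only that the magnification ratio vary exponentially between a minimum and a maximum distance; it does not fix the mapping itself. A minimal sketch of one such mapping, with all distances (in metres) and the maximum zoom chosen purely for illustration, not taken from the patent:

```python
def magnification(distance, d_min=0.05, d_max=0.50, zoom_max=8.0):
    """Map the fingertip-to-screen distance to a zoom factor.

    The factor varies exponentially from 1x (at d_max, the farthest
    distance) to zoom_max (at d_min, the closest), as claim 1 requires.
    d_min, d_max and zoom_max are hypothetical example values.
    """
    d = min(max(distance, d_min), d_max)   # clamp into [d_min, d_max]
    t = (d_max - d) / (d_max - d_min)      # 0 at d_max, 1 at d_min
    return zoom_max ** t                   # exponential interpolation

print(magnification(0.50))  # farthest distance -> 1.0
print(magnification(0.05))  # closest distance  -> 8.0
```

Because the interpolation is exponential rather than linear, equal decrements of distance multiply the zoom by a constant ratio, which keeps the zoom change perceptually smooth as the finger approaches the screen.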
2. The method of claim 1, wherein determining the position on the screen at which the object is pointing includes tracking movement of the object.
3. The method of claim 1, wherein determining the position on the screen at which the object is pointing includes determining the extension of the object toward the screen.
4. The method of claim 1, further comprising: analyzing the images with the processor to determine the speed at which the object is moving toward or away from the screen; and modifying the information displayed on the screen in response to determining the speed at which the object is moving toward or away from the screen.
5. The method of claim 1, further comprising: analyzing the images with the processor to determine the speed of movement of the object; and modifying the information displayed on the screen differently for different speeds of movement of the object.
6. The method of claim 5, further comprising: analyzing the images with the processor to determine the speed of lateral movement of the object in front of the screen; and modifying the information displayed on the screen in response to determining the speed of movement of the object.
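Claims 4 to 6 distinguish movement toward or away from the screen from lateral movement in front of it, but do not specify how either speed is obtained. A hedged sketch using a simple finite difference over the last two tracked fingertip positions, with the screen taken as the plane z = 0 (a frame the patent does not fix):

```python
def speed_from_track(positions, timestamps):
    """Estimate approach and lateral speeds from tracked 3D positions.

    positions:  list of (x, y, z) fingertip points, one per camera frame,
                with z the positive distance to the screen plane.
    timestamps: matching capture times in seconds.
    Returns (approach_speed, lateral_speed): approach is the rate at which
    the distance to the screen shrinks (positive when moving toward it);
    lateral is the speed parallel to the screen plane. Illustrative only.
    """
    (x0, y0, z0), (x1, y1, z1) = positions[-2], positions[-1]
    dt = timestamps[-1] - timestamps[-2]
    approach = (z0 - z1) / dt
    lateral = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt
    return approach, lateral
```

In practice a real tracker would smooth over more than two frames to suppress camera noise; the two-frame difference is the minimal form of the idea.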
7. The method of claim 1, further comprising: pointing at positions on the screen of the computer system with two pointing objects; analyzing the images with the processor to identify the two objects, to determine the positions on the screen at which the two objects are pointing, and to determine the distances of the two objects from the screen; and modifying the information displayed on the screen in response to determining both the positions on the screen at which the two objects are pointing and the distances of the two objects from the screen.
8. The method of claim 1, wherein the object is an extended finger of a user's hand.
9. An apparatus comprising a gesture-based human-machine interface, the apparatus comprising: a screen operable to display information, wherein the screen has outer corners and a center; at least four cameras mounted at the outer corners of the screen and directed toward the center of the screen to capture images of the space in front of the screen, wherein capturing images with the at least four cameras allows a three-dimensional map of the space in front of the screen to be constructed; and a processor arranged to receive the images, to analyze the images to identify an object pointing at a position on the screen, to determine the position on the screen relative to the object and the at least four cameras, and to determine the distance of the object from the screen, the screen serving as the reference plane for determining the position on the screen at which the object is pointing, the distance being positive, and to modify the information displayed on the screen in response to determining both the position on the screen at which the object is pointing and the distance of the object from the screen; wherein modifying the information displayed on the screen includes magnifying the information displayed on the part of the screen at which the object is pointing by a magnification factor that depends on the determined distance between the object and the screen, wherein the determined distance lies between a maximum and a minimum, and wherein the magnification ratio varies exponentially as the determined distance varies between the maximum and the minimum.
10. The apparatus of claim 9, wherein the processor is arranged to analyze the images to track movement of the object in order to determine the position on the screen at which the object is pointing.
11. The apparatus of claim 9, wherein the processor is arranged to analyze the images to determine the extension of the object toward the screen in order to determine the position on the screen at which the object is pointing.
12. The apparatus of claim 9, wherein the processor is further arranged to determine the speed of movement of the object and to modify the information displayed on the screen in response to determining the speed of movement.
13. The apparatus of claim 9, wherein the object is an extended finger of a user's hand.
14. A gesture-based human-machine interface for a computer system, the interface being stored on a storage medium in the computer system and comprising: a first module configured to enable a screen to operate to display information, wherein the screen has outer corners and a center; a second module configured to send instructions to at least four cameras, the at least four cameras being mounted at the outer corners of the screen and directed toward the center of the screen to capture images of the space in front of the screen; and a third module configured to control a processor, the processor being arranged to receive the images, to analyze the images to identify an object pointing at a position on the screen, to determine the position on the screen relative to the object and the at least four cameras, to determine the distance of the object from the screen, and to determine the speed of movement of the object, the screen serving as the reference plane for determining the position on the screen at which the object is pointing, the distance being positive, and to modify the information displayed on the screen in response to determining the position on the screen at which the object is pointing, the distance of the object from the screen, and the speed of movement; wherein modifying the information displayed on the screen includes magnifying the information displayed on the part of the screen at which the object is pointing by a magnification factor that depends on the determined distance between the object and the screen, wherein the determined distance lies between a maximum and a minimum, and wherein the magnification ratio varies exponentially as the determined distance varies between the maximum and the minimum.
15. The interface of claim 14, wherein the object includes an extended finger used to determine where on the screen a user is pointing.
16. The interface of claim 14, wherein the processor is arranged to analyze the images to determine the extension of the object toward the screen in order to determine the position on the screen at which the object is pointing.
17. The interface of claim 14, wherein the processor is arranged to analyze the images to track movement of the object in order to determine the position on the screen at which the object is pointing.
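Several claims (3, 11 and 16) determine the pointed-at position from the extension of the object toward the screen. Once the corner cameras have yielded two 3D points along the finger, the screen position follows from extending that ray to the screen plane. A minimal sketch, taking the screen as the reference plane z = 0 with z the positive distance in front of it (coordinates and units are hypothetical; the patent does not fix a frame):

```python
def pointed_position(knuckle, fingertip):
    """Extend the finger's 3D direction until it meets the screen plane.

    knuckle, fingertip: (x, y, z) points reconstructed from the corner
    cameras, z > 0 in front of the screen. Returns the (x, y) screen
    position at which the finger points and the fingertip-to-screen
    distance (the positive distance of the claims).
    """
    dx = fingertip[0] - knuckle[0]
    dy = fingertip[1] - knuckle[1]
    dz = fingertip[2] - knuckle[2]
    if dz >= 0:
        raise ValueError("finger is not pointing toward the screen")
    t = -fingertip[2] / dz               # ray parameter where z reaches 0
    x = fingertip[0] + t * dx
    y = fingertip[1] + t * dy
    return (x, y), fingertip[2]
```

The same two reconstructed points also give the distance input for the exponential magnification of claim 1, so one tracking pass can drive both the cursor position and the zoom level.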
CN 201110115071 2010-06-09 2011-04-28 Gesture-based human-machine interface CN102279670B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20100382168 EP2395413A1 (en) 2010-06-09 2010-06-09 Gesture-based human machine interface
EP10382168.2 2010-06-09

Publications (2)

Publication Number Publication Date
CN102279670A true CN102279670A (en) 2011-12-14
CN102279670B true CN102279670B (en) 2016-12-14

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101040242A (en) * 2004-10-15 2007-09-19 皇家飞利浦电子股份有限公司 System for 3D rendering applications using hands
US7348963B2 (en) * 2002-05-28 2008-03-25 Reactrix Systems, Inc. Interactive video display system
CN101636207A (en) * 2007-03-20 2010-01-27 科乐美数码娱乐株式会社 Game device, progress control method, information recording medium, and program

Similar Documents

Publication Publication Date Title
US7956847B2 (en) Gestures for controlling, manipulating, and editing of media files using touch sensitive devices
US7231609B2 (en) System and method for accessing remote screen content
US20120102438A1 (en) Display system and method of displaying based on device interactions
US20130283213A1 (en) Enhanced virtual touchpad
US20120056837A1 (en) Motion control touch screen method and apparatus
US20140201666A1 (en) Dynamic, free-space user interactions for machine control
US20120054671A1 (en) Multi-touch interface gestures for keyboard and/or mouse inputs
US20100100849A1 (en) User interface systems and methods
US20110041098A1 (en) Manipulation of 3-dimensional graphical objects or view in a multi-touch display
US20110234503A1 (en) Multi-Touch Marking Menus and Directional Chording Gestures
US20090315740A1 (en) Enhanced Character Input Using Recognized Gestures
US7924271B2 (en) Detecting gestures on multi-event sensitive devices
US20120194561A1 (en) Remote control of computer devices
US20110227947A1 (en) Multi-Touch User Interface Interaction
US20130014052A1 (en) Zoom-based gesture user interface
US20110234492A1 (en) Gesture processing
US20090217211A1 (en) Enhanced input using recognized gestures
US20130044053A1 (en) Combining Explicit Select Gestures And Timeclick In A Non-Tactile Three Dimensional User Interface
US20110234491A1 (en) Apparatus and method for proximity based input
US20100328351A1 (en) User interface
US20140168062A1 (en) Systems and methods for triggering actions based on touch-free gesture detection
US20120327125A1 (en) System and method for close-range movement tracking
US20080165255A1 (en) Gestures for devices having one or more touch sensitive surfaces
US20120268369A1 (en) Depth Camera-Based Relative Gesture Detection
US20130057469A1 (en) Gesture recognition device, method, program, and computer-readable medium upon which program is stored