
Man-machine interactive system and real-time gesture tracking processing method for same

Info

Publication number: CN102426480A
Authority: CN
Grant status: Application
Application number: CN 201110342972
Prior art keywords: gesture, gesture action, gesture tracking, time gesture, hand profile
Other languages: Chinese (zh)
Inventors: 刘远民, 陈大炜
Original Assignee: 康佳集团股份有限公司


Abstract

The invention discloses a human-computer interaction system and a real-time gesture tracking processing method for the same. The method comprises the following steps: obtaining image information of the user side; performing gesture detection with a gesture detection unit to separate the gesture from the background, and automatically determining, through a vision algorithm, a small rectangular frame surrounding the hand as the region of interest in the image information; calculating the hand contour state of each frame in the video sequence with a gesture tracking unit; performing a validity check on the hand actions according to the calculated hand contour states and determining the gesture action completed by the user; and generating a corresponding gesture action control instruction according to the determined gesture action, with a three-dimensional user interface giving corresponding feedback according to that instruction. By sensing all or part of the user's gesture actions, the system and method achieve accurate gesture tracking and thus provide a real-time, robust solution for an effective gesture-based human-machine interface built on an ordinary vision sensor.

Description

A human-computer interaction system and a real-time gesture tracking processing method for the same

TECHNICAL FIELD

[0001] The present invention relates to the field of human-computer interaction technology, and in particular to a human-computer interaction system and a real-time gesture tracking processing method for the same.

BACKGROUND

[0002] Human-computer interaction is currently one of the fastest-growing areas of user-interface research, and countries attach great importance to it. Among its national key technologies, the United States lists the human-machine interface as one of six key information technologies, alongside software and computing; among key US defense technologies, the human-machine interface is not only an important part of software technology but also one of eleven key technologies alongside computer and software technology. The European Community's strategic programme for research and development in information technology (ESPRIT) also set up a dedicated user-interface technology project, including a multi-channel human-machine interface (Multi-Modal Interface for Man-Machine Interface). Maintaining a lead in this field is essential to intelligent computer systems as a whole.

[0003] About 80% of the information humans acquire comes from vision; from the standpoint of cognitive psychology, human-computer interaction based on machine vision is therefore an important approach to the problem. Gestures are a very natural and intuitive interaction channel, so research on gesture detection, tracking, and recognition not only helps realize natural human-computer interaction but also helps robots acquire skills by imitating a user's demonstrated actions.

[0004] Because gestures are diverse, ambiguous, and vary across time and space, and because the human hand is a complex deformable body and vision itself is an ill-posed problem, vision-based gesture recognition is a challenging, multidisciplinary research topic.

[0005] There are currently three main approaches to gesture-based human-computer interaction. The first, represented by MIT, uses data gloves, data suits, and similar devices to track hand and body motion. The second, represented by Microsoft's motion-sensing games, uses a depth camera and an RGB camera to track hand and body positions. Both are costly, which makes them unsuitable for wide deployment by enterprises, especially highly competitive home-appliance makers. The third is the well-known HandVu, which uses an ordinary camera; it is low-cost and offers good real-time performance, but it is heavily affected by the environment during tracking and cannot adequately handle tracking failures caused by lighting and complex backgrounds.

[0006] The Kinect motion-sensing system Microsoft launched in 2010 is popular with consumers for its natural and intuitive interaction. It uses two cameras (a depth camera and an RGB camera), which favors multi-sensor information fusion and therefore yields high gesture detection and tracking accuracy, but at high cost. By contrast, a real-time gesture detector and tracker based on an ordinary single camera has a strong cost advantage, yet its hand tracking and detection accuracy falls short, mainly because: (1) the hand is not a rigid body and may deform to varying degrees while moving; (2) lighting conditions change and interfere; and (3) there is no confidence metric for the tracked target, so tracking failures caused by the system locking onto some other target are hard to resolve.

[0007] The prior art therefore remains to be improved and developed.

SUMMARY OF THE INVENTION

[0008] In view of the above drawbacks of the prior art, the technical problem to be solved by the present invention is to provide a human-computer interaction system and a real-time gesture tracking processing method for the same. The invention solves the problem of inaccurate tracking and detection of non-rigid targets such as the hand under an ordinary single camera, as well as tracking failures caused by lighting and complex backgrounds. Using computer vision and image processing techniques, it achieves automatic hand detection, tracking, and gesture recognition that is real-time, robust, and easy to implement and operate, enabling computer users to interact with a computer more naturally, intuitively, and intelligently through hand gestures.

[0009] The technical solution adopted by the present invention to solve the technical problem is as follows:

A real-time gesture tracking processing method for a human-computer interaction system, comprising the steps of:

A. acquiring image information of the user side and performing corresponding image denoising and enhancement;

B. performing hand detection on the processed image information with a gesture detection unit to separate the gesture from the background, and automatically determining, through a vision algorithm, a small rectangular frame surrounding the hand as the region of interest in the image information;

C. completing, with a gesture tracking unit, sub-pixel tracking of gesture feature points within the region of interest of the image information, and calculating the hand contour state of each frame in the video sequence;

D. performing a validity check on the hand actions according to the calculated hand contour states, and performing gesture recognition to classify the trajectory of a predefined gesture completed by the user, thereby determining the gesture action the user performed;

E. generating a corresponding gesture action control instruction according to the determined gesture action, and sending the instruction to the three-dimensional user interface;

F. the three-dimensional user interface giving corresponding feedback according to the gesture action control instruction.
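The six steps A-F amount to one per-frame processing loop. The sketch below is illustrative only: every callable parameter (`detect`, `track`, `classify`, `ui_apply`) is a hypothetical stand-in for the unit the method names, not an actual implementation.

```python
# Hypothetical sketch of the per-frame A-F loop described above.

def denoise_and_enhance(frame):
    """Placeholder for step A preprocessing (a real system would filter here)."""
    return frame

def process_frame(frame, detect, track, classify, ui_apply):
    """Run one frame through steps A-F; return the recognized gesture or None."""
    frame = denoise_and_enhance(frame)   # step A: image denoising/enhancement
    roi = detect(frame)                  # step B: hand ROI from the detector
    if roi is None:                      # no hand found in this frame
        return None
    state = track(frame, roi)            # step C: per-frame hand contour state
    gesture = classify(state)            # step D: validity check + recognition
    if gesture is not None:
        ui_apply({"command": gesture})   # steps E-F: instruction to the 3D UI
    return gesture
```

A caller would supply the detector, tracker, and recognizer described in the rest of the document and receive the recognized gesture (or `None`) per frame.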

[0010] In the real-time gesture tracking processing method, before step A the method further comprises: a. a stereoscopic image display unit displaying a three-dimensional stereoscopic image and a three-dimensional graphical user interface.

[0011] In the real-time gesture tracking processing method, step A specifically comprises:

A1. a video image acquisition unit acquiring depth image information of the user's environment;

A2. an image processing unit denoising the image information acquired by the video image acquisition unit and enhancing the target.

[0012] In the real-time gesture tracking processing method, the hand contour state in step C includes position, rotation angle, scale, and the length and angle of each finger.

[0013] In the real-time gesture tracking processing method, step D further comprises: a gesture action is judged to have started when, among 20 consecutive frames of hand detection results, more than 12 frames detect the hand at the same position.

[0014] In the real-time gesture tracking processing method, the gesture actions in step D include: move left, move right, move up, and move down.

[0015] In the real-time gesture tracking processing method, generating the corresponding gesture action control instruction according to the determined gesture action in step E comprises:

E1. recognizing a stationary hand position as a click command and generating the corresponding click control instruction;
E2. recognizing rapid leftward, rightward, upward, or downward movement of the hand position as left, right, up, or down commands, and generating the corresponding move-left, move-right, move-up, or move-down control instructions;

E3. recognizing a waving hand as a close action and generating the corresponding close control instruction.
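As a rough illustration of E1-E3, per-gesture hand displacement can be mapped to the five commands. The function name and the pixel threshold below are assumptions for the sketch, not values from the patent:

```python
# Illustrative E1-E3 mapping from tracked hand motion to UI commands.
# The speed threshold (pixels per gesture) is a made-up value.

def motion_to_command(dx, dy, is_wave=False, speed_threshold=30):
    """Map a hand displacement (dx, dy) in pixels to a control command."""
    if is_wave:
        return "close"                        # E3: wave -> close
    if abs(dx) < speed_threshold and abs(dy) < speed_threshold:
        return "click"                        # E1: stationary hand -> click
    if abs(dx) >= abs(dy):                    # E2: dominant axis decides
        return "move_right" if dx > 0 else "move_left"
    return "move_down" if dy > 0 else "move_up"
```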

[0016] A human-computer interaction system, comprising:

a video image acquisition unit for acquiring depth image information of the user's environment;

an image processing unit for denoising the image information acquired by the video image acquisition unit and enhancing the target;

a gesture detection unit for performing hand detection on the processed image information, separating the gesture from the background, and automatically determining, through a vision algorithm, a small rectangular frame surrounding the hand as the region of interest in the image information;

a gesture tracking unit for completing sub-pixel tracking of gesture feature points within the region of interest of the image information, and calculating the hand contour state of each frame in the video sequence;

a gesture validity detection and gesture action confirmation unit for performing a validity check on the hand actions according to the calculated hand contour states, and performing gesture recognition to classify the trajectory of a predefined gesture completed by the user, thereby determining the gesture action the user performed;

a gesture command generation unit for generating a corresponding gesture action control instruction according to the determined gesture action and sending the instruction to the three-dimensional user interface;

a stereoscopic image display unit for displaying a three-dimensional stereoscopic image and a three-dimensional graphical user interface, and for giving corresponding feedback according to the gesture action control instruction.

[0017] In the human-computer interaction system, the video image acquisition unit is a camera.

[0018] In the human-computer interaction system, the hand contour state includes position, rotation angle, scale, and the length and angle of each finger;

the gesture command generation unit further comprises:

a first generation module for recognizing a stationary hand position as a click command and generating the corresponding click control instruction;

a second generation module for recognizing rapid leftward, rightward, upward, or downward movement of the hand position as left, right, up, or down commands, and generating the corresponding move-left, move-right, move-up, or move-down control instructions;

a third generation module for recognizing a waving hand as a close action and generating the corresponding close control instruction.

[0019] In the human-computer interaction system and real-time gesture tracking processing method provided by the present invention, an image sensing and processing unit on a three-dimensional stereoscopic display device senses all or part of the user's gesture actions and tracks gestures accurately, thereby providing a real-time gesture tracking solution for an effective gesture-based human-machine interface built on an ordinary vision sensor. Using computer vision and image processing techniques, the invention achieves automatic hand detection, tracking, and gesture recognition that is real-time, robust, and easy to implement and operate, enabling computer users to interact with a computer more naturally, intuitively, and intelligently through hand gestures. It can be applied to smart home appliances, human-computer interaction, virtual reality platforms, and similar fields, including interaction with smart TVs and other smart appliances, various motion-sensing games, and various virtual-reality platform products; the invention therefore also has significant economic and application value.

BRIEF DESCRIPTION OF THE DRAWINGS

[0020] FIG. 1 is a flowchart of the real-time gesture tracking processing method of the human-computer interaction system according to an embodiment of the present invention.

[0021] FIG. 2 is a diagram of the cascade structure of the hand classifier according to an embodiment of the present invention.

[0022] FIG. 3 is a functional block diagram of the human-computer interaction system according to an embodiment of the present invention.

[0023] FIG. 4 is a schematic diagram of the hardware structure of the human-computer interaction system according to an embodiment of the present invention.

DETAILED DESCRIPTION

[0024] To make the objectives, technical solutions, and advantages of the present invention clearer, the human-computer interaction system and real-time gesture tracking processing method provided by the present invention are described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here merely explain the present invention and are not intended to limit it.

[0025] The hardware required in the embodiment of the present invention, shown in FIG. 4, is a computer 300 and a video image capture device 400. The real-time gesture tracking processing method for a human-computer interaction system provided by the embodiment, shown in FIG. 1, comprises the steps of:

Step S110: the stereoscopic image display unit displays a three-dimensional stereoscopic image and a three-dimensional graphical user interface.

[0026] For example, a three-dimensional stereoscopic image and a three-dimensional graphical user interface supporting human-computer interaction are displayed on the screen of the computer 300.

[0027] Step S120: image information of the user side is acquired and corresponding image denoising and enhancement are performed.

[0028] For example, when human-computer interaction is required, a video image acquisition unit (such as a camera) acquires depth image information of the environment of the user (500 in FIG. 4); the image processing unit then denoises the acquired image information and enhances the target, providing an effective basis for the subsequent gesture detection and tracking. The method then proceeds to step S130.

[0029] Step S130: hand detection is performed on the processed image information by the gesture detection unit, the gesture is separated from the background, and a small rectangular frame surrounding the hand is automatically determined as the region of interest in the image information through a vision algorithm.

[0030] This step separates the gesture from the background, which facilitates feature-point extraction for target tracking; setting the region of interest helps guarantee the real-time requirements of the system.

[0031] The hand detection in this embodiment of the present invention uses histogram of oriented gradients (HOG) features and is realized through a statistical learning method based on AdaBoost.

[0032] The statistical learning method used to learn hand patterns is the AdaBoost algorithm, a mature algorithm applied extremely widely in face detection. By repeatedly invoking a weak learner on the training samples that are hard to learn, it achieves high generalization accuracy. The main procedure of AdaBoost is: first, a training sample set is given; the algorithm then iterates over this set, in each round training a weak classifier on the selected features, computing the error rate of that hypothesis, and adjusting the weight of each example for the next round according to the error rate; several weak classifiers are then cascaded into a strong classifier. The final classifier is a cascade of similar strong classifiers, and its classification ability increases with the number of strong classifiers in the cascade structure. As shown in FIG. 2, 1, 2, ..., M are the cascaded strong classifiers; T indicates that a candidate region is accepted by a strong classifier (i.e., regarded as a hand region), and F indicates that a candidate region is rejected by a strong classifier and excluded (i.e., regarded as a non-hand region). Only a candidate region accepted by all strong classifiers is considered a true hand region; if any single strong classifier rejects it, it is considered a non-hand region.
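The accept/reject logic of the cascade (the T/F paths of FIG. 2) can be sketched as follows. The toy weak classifiers and thresholds in the usage are illustrative, not the trained HOG-based hand detector:

```python
# Sketch of cascade evaluation: a candidate region counts as a hand only if
# every strong classifier (stage) in the cascade accepts it.

def strong_classify(features, weak_classifiers, threshold):
    """AdaBoost strong classifier: weighted vote of weak classifiers.
    weak_classifiers is a list of (alpha, h) pairs, h(features) -> 0 or 1."""
    score = sum(alpha * h(features) for alpha, h in weak_classifiers)
    return score >= threshold

def cascade_accepts(features, cascade):
    """Return True only if all stages accept (the T path in FIG. 2)."""
    return all(strong_classify(features, weaks, thr) for weaks, thr in cascade)
```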

[0033] Step S140: through the gesture tracking unit, sub-pixel tracking of gesture feature points is completed within the region of interest of the image information, and the hand contour state of each frame in the video sequence is calculated;

gesture tracking provides the information for the subsequent gesture analysis, where the hand contour state includes position, rotation angle, scale, and the length and angle of each finger.

[0034] "Sub-pixel" can be explained as follows. On the imaging plane of an area-array camera, the pixel is the smallest unit; a certain CMOS imaging chip, for example, has a pixel pitch of 5.2 microns. When the camera captures a scene, the continuous image of the physical world is discretized, and each pixel on the imaging plane represents only the color of its neighborhood. How far "its neighborhood" extends is hard to define. The 5.2-micron gap between two pixels can be regarded macroscopically as contiguous, but microscopically infinitely many smaller positions exist between them; these smaller positions are what we call "sub-pixels." Sub-pixels do exist in principle, but no sensor is fine enough to measure them directly in hardware, so software computes them approximately.
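One common software approximation consistent with this explanation is parabolic interpolation: fit a parabola through a discrete peak sample and its two neighbours, and take the vertex as the sub-pixel location. This is a generic technique, not necessarily the exact one used in the patent:

```python
# Sub-pixel refinement of a discrete peak by 3-point parabolic interpolation.

def subpixel_peak(values, i):
    """Refine the integer peak index i of a 1-D response to sub-pixel accuracy.
    Returns i + offset, where offset is the vertex of the parabola fitted
    through values[i-1], values[i], values[i+1]."""
    left, center, right = values[i - 1], values[i], values[i + 1]
    denom = left - 2.0 * center + right
    if denom == 0:                 # flat neighbourhood: no refinement possible
        return float(i)
    offset = 0.5 * (left - right) / denom
    return i + offset
```

A symmetric peak stays at the integer index, while an asymmetric one shifts toward the larger neighbour by a fraction of a pixel.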

[0035] Step S150: a validity check is performed on the hand actions according to the calculated hand contour states, and gesture recognition classifies the trajectory of a predefined gesture completed by the user, determining the gesture action the user performed. In this embodiment, the gesture actions include: move left, move right, move up, and move down. A gesture action is judged to have started when, among 20 consecutive frames of hand detection results, more than 12 frames detect the hand at the same position.
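The start-of-gesture rule (more than 12 of 20 consecutive detections at the same position) can be checked with a sliding window. The pixel tolerance below is a made-up value; the patent does not specify how "same position" is measured:

```python
# Sliding-window check of the start-of-gesture rule described above.

from collections import deque

def make_start_detector(window=20, required=13, tol=10):
    """Return a feed(pos) callable; pos is an (x, y) detection or None.
    feed returns True once >12 of the last `window` detections lie within
    `tol` pixels (per axis, an assumed tolerance) of the latest position."""
    recent = deque(maxlen=window)

    def feed(pos):
        recent.append(pos)
        if len(recent) < window:        # not enough history yet
            return False
        anchor = recent[-1]
        if anchor is None:              # no hand in the current frame
            return False
        hits = sum(1 for p in recent
                   if p is not None
                   and abs(p[0] - anchor[0]) <= tol
                   and abs(p[1] - anchor[1]) <= tol)
        return hits >= required
    return feed
```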

[0036] The gesture recognition in this embodiment of the present invention is implemented with a hidden Markov model, and comprises the steps of:

Step 151: the gesture trajectory obtained from contour tracking is preprocessed to remove dense points, yielding a preprocessed trajectory;
Step 152: direction-coded features are extracted from the preprocessed trajectory, and the features are normalized;
Step 153: the forward algorithm computes the probability of the features from step 152 under each gesture model, and the model with the largest probability is taken as the recognition result.
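Step 153's forward recursion for a discrete HMM can be sketched as below. The tiny two-state models in the usage are illustrative stand-ins, not trained gesture models:

```python
# Forward algorithm for a discrete-emission HMM, as used in step 153:
# compute P(obs | model) for each gesture model and pick the argmax.

def forward_probability(obs, pi, A, B):
    """P(obs | HMM) via the forward recursion.
    pi[i]: initial state probability, A[i][j]: transition probability,
    B[i][k]: probability of emitting symbol k in state i."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
    return sum(alpha)

def recognize(obs, models):
    """Return the name of the gesture model with the largest likelihood."""
    return max(models, key=lambda name: forward_probability(obs, *models[name]))
```

In a real system the observation sequence would be the normalized direction codes from step 152, and each named model would be trained on example trajectories of one gesture.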

[0037] The hand contour tracking of the present invention combines conditional probability density propagation with a heuristic scanning technique. The contour tracking algorithm proceeds as follows:

Step 51: the conditional probability density propagation (Condensation) algorithm tracks the translation, rotation, and scaling motion components of the contour, yielding a number of candidate contours whose finger state components are not yet determined;

Step 52: for each candidate contour whose translation, rotation, and scaling components have been determined, the length and angle of each finger are adjusted step by step to obtain the finger motion state components of each contour, producing final candidate contours with all state components determined;

Step 53: one contour is produced from all the final candidate contours as the tracking result.
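Step 51's Condensation update over the (translation, rotation, scale) state can be sketched as one predict-measure-resample cycle over a weighted sample set. The likelihood function and noise level here are stand-ins for the contour measurement model the patent would use:

```python
# Minimal Condensation-style (particle filter) step for stage 51.

import random

def condensation_step(particles, weights, likelihood, noise=1.0):
    """One predict-measure-resample cycle.
    particles: list of state tuples, e.g. (tx, ty, rotation, scale);
    weights: matching list of non-negative weights;
    likelihood: stand-in for how well a state's contour fits the image."""
    # resample proportionally to the current weights
    chosen = random.choices(particles, weights=weights, k=len(particles))
    # predict: diffuse every state component with Gaussian noise
    predicted = [tuple(x + random.gauss(0, noise) for x in p) for p in chosen]
    # measure: re-weight each sample by its measurement likelihood
    new_weights = [likelihood(p) for p in predicted]
    total = sum(new_weights) or 1.0
    return predicted, [w / total for w in new_weights]
```

After each cycle the weighted sample set approximates the posterior over the motion components; the candidate contours of step 51 correspond to the highest-weight samples.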

[0038] Step S160: a corresponding gesture action control instruction is generated according to the determined gesture action and sent to the three-dimensional user interface;

in this step, generating the corresponding gesture action control instruction according to the determined gesture action comprises:
E1. recognizing a stationary hand position as a click command and generating the corresponding click control instruction;
E2. recognizing rapid leftward, rightward, upward, or downward movement of the hand position as left, right, up, or down commands, and generating the corresponding move-left, move-right, move-up, or move-down control instructions;

E3. recognizing a waving hand as a close action and generating the corresponding close control instruction.

[0039] Step S170: the three-dimensional user interface gives corresponding feedback according to the gesture action control instruction. For example, under the user's gesture control, the three-dimensional user interface shown by the stereoscopic image display unit performs and displays the corresponding action.

[0040] As can be seen from the above, this embodiment senses all or part of the user's gesture actions and tracks gestures accurately, thereby providing a real-time, robust solution for an effective gesture-based human-machine interface built on an ordinary vision sensor.

Based on the above embodiment, an embodiment of the present invention further provides a human-computer interaction system, shown in FIG. 3, mainly comprising: a video image acquisition unit 210 for acquiring depth image information of the user's environment, as in step S120 above, wherein the video image acquisition unit is a camera.

[0041] an image processing unit 220 for denoising the image information acquired by the video image acquisition unit and enhancing the target, as in step S120 above.

[0042] a gesture detection unit 230 for performing hand detection on the processed image information, separating the gesture from the background, and automatically determining, through a vision algorithm, a small rectangular frame surrounding the hand as the region of interest in the image information, as in step S130 above.

[0043] a gesture tracking unit 240 for completing sub-pixel tracking of gesture feature points within the region of interest of the image information and calculating the hand contour state of each frame in the video sequence, as in step S140 above, wherein the hand contour state includes position, rotation angle, scale, and the length and angle of each finger.

[0044] a gesture validity detection and gesture action confirmation unit 250 for performing a validity check on the hand actions according to the calculated hand contour states and performing gesture recognition to classify the trajectory of a predefined gesture completed by the user, determining the gesture action the user performed, as in step S150 above.

[0045] a gesture command generation unit 260 for generating a corresponding gesture action control instruction according to the determined gesture action and sending the instruction to the three-dimensional user interface, as in step S160 above.

[0046] a stereoscopic image display unit 270 for displaying a three-dimensional stereoscopic image and a three-dimensional graphical user interface, and for giving corresponding feedback according to the gesture action control instruction, as in step S170 above.

[0047] wherein the gesture command generation unit further comprises:

a first generation module for recognizing a stationary hand position as a click command and generating the corresponding click control instruction;

a second generation module for recognizing rapid leftward, rightward, upward, or downward movement of the hand position as left, right, up, or down commands, and generating the corresponding move-left, move-right, move-up, or move-down control instructions;

a third generation module for recognizing a waving hand as a close action and generating the corresponding close control instruction.

[0048] In summary, in the human-computer interaction system and real-time gesture tracking processing method provided by the present invention, an image sensing and processing unit on a three-dimensional stereoscopic display device senses all or part of the user's gesture actions and tracks gestures accurately, thereby providing a real-time gesture tracking solution for an effective gesture-based human-machine interface built on an ordinary vision sensor. Using computer vision and image processing techniques, the invention achieves automatic hand detection, tracking, and gesture recognition that is real-time, robust, and easy to implement and operate, enabling computer users to interact with a computer more naturally, intuitively, and intelligently through hand gestures. It can be applied to smart home appliances, human-computer interaction, virtual reality platforms, and similar fields, including interaction with smart TVs and other smart appliances, various motion-sensing games, and various virtual-reality platform products; the invention therefore also has significant economic and application value.

[0049] It should be understood that the application of the present invention is not limited to the above examples. Those of ordinary skill in the art may make improvements or modifications in light of the above description, and all such improvements and modifications shall fall within the protection scope of the appended claims of the present invention.

Claims (10)

1. A real-time gesture tracking processing method for a human-computer interaction system, comprising the steps of: A. acquiring image information of the user side and performing corresponding image noise reduction and enhancement; B. performing hand detection on the processed image information by a gesture detection unit to separate the gesture from the background, and automatically determining, by a vision algorithm, a small rectangular frame enclosing the hand in the image information as a region of interest; C. performing, by a gesture tracking unit, sub-pixel tracking of gesture feature points in the region of interest of the image information, and calculating the hand contour state of each frame in the video sequence; D. performing validity detection of the hand action according to the calculated hand contour state, performing gesture recognition to classify the trajectory of a predefined gesture completed by the user, and determining the gesture action completed by the user; E. generating a corresponding gesture action control instruction according to the determined gesture action, and sending the gesture action control instruction to a three-dimensional user interface; F. the three-dimensional user interface making corresponding feedback according to the gesture action control instruction.
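Step B of claim 1 determines the smallest rectangle enclosing the hand as the region of interest. A minimal sketch, assuming the gesture detection unit has already produced a binary hand/background segmentation mask, is the bounding box of the mask's foreground pixels; the function name `hand_roi` is an illustrative placeholder, not an API defined by the patent.

```python
import numpy as np

def hand_roi(mask):
    """Return the smallest axis-aligned rectangle (x, y, w, h) enclosing the
    foreground pixels of a binary hand mask, or None if no hand was segmented.
    """
    ys, xs = np.nonzero(mask)          # row/column indices of hand pixels
    if xs.size == 0:
        return None                    # no hand detected in this frame
    x0, x1 = xs.min(), xs.max()
    y0, y1 = ys.min(), ys.max()
    return (int(x0), int(y0), int(x1 - x0 + 1), int(y1 - y0 + 1))
```

Restricting the subsequent feature-point tracking (step C) to this rectangle is what keeps the per-frame cost low enough for real-time operation.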
2. The real-time gesture tracking processing method for a human-computer interaction system according to claim 1, wherein, before step A, the method further comprises: a. a stereoscopic image display unit displaying a three-dimensional stereoscopic image and a three-dimensional graphical user interface.
3. The real-time gesture tracking processing method for a human-computer interaction system according to claim 1, wherein step A specifically comprises: A1. a video image acquisition unit acquiring depth image information of the environment in which the user is located; A2. an image processing unit performing denoising and target enhancement on the image information acquired by the video image acquisition unit.
4. The real-time gesture tracking processing method for a human-computer interaction system according to claim 1, wherein the hand contour state in step C comprises: position, rotation angle, scale, and the length and angle of each finger.
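The hand contour state enumerated in claim 4 can be carried as a small per-frame record. A minimal sketch follows; the class and field names are illustrative, since the patent does not fix a data layout.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class HandContourState:
    """Per-frame hand contour state (claim 4): position, rotation angle,
    scale, and per-finger length/angle."""
    x: float = 0.0                 # hand position in image coordinates
    y: float = 0.0
    rotation_deg: float = 0.0      # in-plane rotation angle of the hand
    scale: float = 1.0             # scale factor relative to a reference contour
    finger_lengths: List[float] = field(default_factory=list)  # one entry per finger
    finger_angles: List[float] = field(default_factory=list)   # one entry per finger
```

Each frame of the video sequence yields one such record, and step D classifies the sequence of records into a gesture trajectory.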
5. The real-time gesture tracking processing method for a human-computer interaction system according to claim 1, wherein step D further comprises: judging that a gesture action has started when, among the hand detection results of 20 consecutive frames, more than 12 frames detect the hand at the same position.
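Claim 5's start-of-gesture rule (more than 12 of the last 20 detections at the same position) can be sketched directly. The pixel tolerance `tol` for "same position" is an assumed parameter; the patent does not give a value.

```python
import math

def gesture_started(positions, window=20, required=12, tol=15.0):
    """Claim-5 heuristic: a gesture is judged to have started when, in the
    last `window` hand detections, more than `required` report the hand at
    (approximately) the same position, within `tol` pixels.
    """
    recent = list(positions)[-window:]
    if len(recent) < window:
        return False                       # not enough history yet
    ax, ay = recent[-1]                    # anchor on the latest detection
    near = sum(1 for (x, y) in recent
               if math.hypot(x - ax, y - ay) <= tol)
    return near > required
```

Requiring a supermajority of the window rather than every frame makes the trigger robust to occasional detection dropouts.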
6. The real-time gesture tracking processing method for a human-computer interaction system according to claim 1, wherein the gesture actions in step D comprise: moving left, moving right, moving up, and moving down.
7. The real-time gesture tracking processing method for a human-computer interaction system according to claim 1, wherein generating a corresponding gesture action control instruction according to the determined gesture action in step E comprises: E1. recognizing a stationary gesture position as a click command and generating a corresponding click control instruction; E2. recognizing rapid leftward, rightward, upward and downward movements of the gesture position as left, right, up and down commands, and generating corresponding left, right, up and down control instructions; E3. recognizing a waving gesture as a close action and generating a corresponding close control instruction.
8. A human-computer interaction system, comprising: a video image acquisition unit for acquiring depth image information of the environment in which the user is located; an image processing unit for performing denoising and target enhancement on the image information acquired by the video image acquisition unit; a gesture detection unit for performing hand detection on the processed image information to separate the gesture from the background, and automatically determining, by a vision algorithm, a small rectangular frame enclosing the hand in the image information as a region of interest; a gesture tracking unit for performing sub-pixel tracking of gesture feature points in the region of interest of the image information and calculating the hand contour state of each frame in the video sequence; a gesture validity detection and gesture action confirmation unit for performing validity detection of the hand action according to the calculated hand contour state, performing gesture recognition to classify the trajectory of a predefined gesture completed by the user, and determining the gesture action completed by the user; a gesture instruction control command generation unit for generating a corresponding gesture action control instruction according to the determined gesture action and sending the gesture action control instruction to a three-dimensional user interface; and a stereoscopic image display unit for displaying a three-dimensional stereoscopic image and a three-dimensional graphical user interface, and for making corresponding feedback according to the gesture action control instruction.
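The units of claim 8 form a fixed per-frame pipeline: acquisition, enhancement, detection, tracking, recognition, command generation, display feedback. A minimal sketch of that wiring follows, with each unit injected as a callable; the class and parameter names mirror the claim but are illustrative, not a real API.

```python
class GestureInteractionSystem:
    """Hypothetical wiring of the claim-8 units; each unit is a callable."""

    def __init__(self, acquire, enhance, detect, track,
                 recognize, make_command, display):
        self.acquire, self.enhance = acquire, enhance       # acquisition + image processing units
        self.detect, self.track = detect, track             # gesture detection + tracking units
        self.recognize = recognize                          # validity detection + confirmation unit
        self.make_command = make_command                    # command generation unit
        self.display = display                              # stereoscopic display unit

    def step(self):
        """Process one frame; return the issued command, or None."""
        frame = self.enhance(self.acquire())
        roi = self.detect(frame)
        if roi is None:
            return None                                     # no hand in this frame
        state = self.track(frame, roi)
        gesture = self.recognize(state)
        if gesture is None:
            return None                                     # no confirmed gesture yet
        command = self.make_command(gesture)
        self.display(command)                               # 3D UI feedback
        return command
```

Wiring the units as callables keeps each stage independently testable, which matches the claim's decomposition into separate units.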
9. The human-computer interaction system according to claim 8, wherein the video image acquisition unit is a camera.
10. The human-computer interaction system according to claim 8, wherein the hand contour state comprises: position, rotation angle, scale, and the length and angle of each finger; and the gesture instruction control command generation unit further comprises: a first generation module for recognizing a stationary gesture position as a click command and generating a corresponding click control instruction; a second generation module for recognizing rapid leftward, rightward, upward and downward movements of the gesture position as left, right, up and down commands, and generating corresponding left, right, up and down control instructions; and a third generation module for recognizing a waving gesture as a close action and generating a corresponding close control instruction.

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1999034327A2 (en) * 1997-12-23 1999-07-08 Koninklijke Philips Electronics N.V. System and method for permitting three-dimensional navigation through a virtual reality environment using camera-based gesture inputs
CN101689244A (en) * 2007-05-04 2010-03-31 格斯图尔泰克股份有限公司 Camera-based user input for compact devices
US20100166258A1 (en) * 2008-12-30 2010-07-01 Xiujuan Chai Method, apparatus and computer program product for providing hand segmentation for gesture analysis
CN102117117A (en) * 2010-01-06 2011-07-06 致伸科技股份有限公司 System and method for control through identifying user posture by image extraction device
CN102081918A (en) * 2010-09-28 2011-06-01 北京大学深圳研究生院 Video image display control method and video image display device
CN102789568A (en) * 2012-07-13 2012-11-21 浙江捷尚视觉科技有限公司 Gesture identification method based on depth information

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103389793A (en) * 2012-05-07 2013-11-13 深圳泰山在线科技有限公司 Human-computer interaction method and human-computer interaction system
CN102693084A (en) * 2012-05-08 2012-09-26 上海鼎为软件技术有限公司 Mobile terminal and method for response operation of mobile terminal
CN102693084B (en) * 2012-05-08 2016-08-03 上海鼎为电子科技(集团)有限公司 Mobile terminal and method for response operation of mobile terminal
CN102722249A (en) * 2012-06-05 2012-10-10 上海鼎为软件技术有限公司 Manipulating method, manipulating device and electronic device
CN102722249B (en) * 2012-06-05 2016-03-30 上海鼎为电子科技(集团)有限公司 Manipulating method, manipulating device and electronic device
CN102769802A (en) * 2012-06-11 2012-11-07 西安交通大学 Man-machine interactive system and man-machine interactive method of smart television
CN102799263A (en) * 2012-06-19 2012-11-28 深圳大学 Posture recognition method and posture recognition control system
CN102799263B (en) * 2012-06-19 2016-05-25 深圳大学 Posture recognition method and posture recognition control system
CN102789568A (en) * 2012-07-13 2012-11-21 浙江捷尚视觉科技有限公司 Gesture identification method based on depth information
CN102789568B (en) * 2012-07-13 2015-03-25 浙江捷尚视觉科技股份有限公司 Gesture identification method based on depth information
CN103777744A (en) * 2012-10-23 2014-05-07 中国移动通信集团公司 Method and device for achieving input control and mobile terminal
CN102982557A (en) * 2012-11-06 2013-03-20 桂林电子科技大学 Method for processing space hand signal gesture command based on depth camera
CN102982557B (en) * 2012-11-06 2015-03-25 桂林电子科技大学 Method for processing space hand signal gesture command based on depth camera
CN102945078A (en) * 2012-11-13 2013-02-27 深圳先进技术研究院 Human-computer interaction equipment and human-computer interaction method
CN102981742A (en) * 2012-11-28 2013-03-20 无锡市爱福瑞科技发展有限公司 Gesture interaction system based on computer visions
CN103136541A (en) * 2013-03-20 2013-06-05 上海交通大学 Double-hand three-dimensional non-contact type dynamic gesture identification method based on depth camera
CN103136541B (en) * 2013-03-20 2015-10-14 上海交通大学 Double-hand three-dimensional non-contact type dynamic gesture identification method based on depth camera
CN104143075A (en) * 2013-05-08 2014-11-12 光宝科技股份有限公司 Gesture judging method applied to electronic device
WO2014183262A1 (en) * 2013-05-14 2014-11-20 Empire Technology Development Llc Detection of user gestures
CN103399629A (en) * 2013-06-29 2013-11-20 华为技术有限公司 Method and device for capturing gesture displaying coordinates
CN103713738A (en) * 2013-12-17 2014-04-09 武汉拓宝电子系统有限公司 Man-machine interaction method based on visual tracking and gesture recognition
CN103713738B (en) * 2013-12-17 2016-06-29 武汉拓宝科技股份有限公司 Man-machine interaction method based on visual tracking and gesture recognition
CN103869986A (en) * 2014-04-02 2014-06-18 中国电影器材有限责任公司 Dynamic data generating method based on KINECT
CN104331149A (en) * 2014-09-29 2015-02-04 联想(北京)有限公司 Control method, control device and electronic equipment
CN104281265A (en) * 2014-10-14 2015-01-14 京东方科技集团股份有限公司 Application program control method, application program control device and electronic equipment
WO2016058303A1 (en) * 2014-10-14 2016-04-21 京东方科技集团股份有限公司 Application control method and apparatus and electronic device

Similar Documents

Publication Publication Date Title
Han et al. Enhanced computer vision with microsoft kinect sensor: A review
Li Hand gesture recognition using Kinect
Sato et al. Real-time input of 3D pose and gestures of a user's hand and its applications for HCI
Suarez et al. Hand gesture recognition with depth images: A review
Oka et al. Real-time fingertip tracking and gesture recognition
US20110289456A1 (en) Gestures And Gesture Modifiers For Manipulating A User-Interface
Kollorz et al. Gesture recognition with a time-of-flight camera
US6624833B1 (en) Gesture-based input interface system with shadow detection
US20130055150A1 (en) Visual feedback for tactile and non-tactile user interfaces
Argyros et al. Vision-based interpretation of hand gestures for remote control of a computer mouse
US20110304632A1 (en) Interacting with user interface via avatar
US20120309532A1 (en) System for finger recognition and tracking
Kim et al. Simultaneous gesture segmentation and recognition based on forward spotting accumulative HMMs
US20120062736A1 (en) Hand and indicating-point positioning method and hand gesture determining method used in human-computer interaction system
US8166421B2 (en) Three-dimensional user interface
Kehl et al. Real-time pointing gesture recognition for an immersive environment
US20120204133A1 (en) Gesture-Based User Interface
US20120202569A1 (en) Three-Dimensional User Interface for Game Applications
CN101393497A (en) Multi-point touch method based on binocular stereo vision
Shin et al. Gesture recognition using Bezier curves for visualization navigation from registered 3-D data
Varona et al. Hands-free vision-based interface for computer accessibility
Hsieh et al. A real time hand gesture recognition system using motion history image
Du et al. A virtual keyboard based on true-3D optical ranging
Hasan et al. Static hand gesture recognition using neural networks
Je et al. Hand gesture recognition to understand musical conducting action

Legal Events

Date Code Title Description
C06 Publication
C10 Request of examination as to substance