CN103294996A - 3D gesture recognition method - Google Patents

3D gesture recognition method

Info

Publication number
CN103294996A
CN103294996A CN201310168123A
Authority
CN
Grant status
Application
Patent type
Prior art keywords
finger
gesture
step
image
obtained
Prior art date
Application number
CN 201310168123
Other languages
Chinese (zh)
Other versions
CN103294996B (en)
Inventor
程洪 (Cheng Hong)
代仲君 (Dai Zhongjun)
Original Assignee
电子科技大学 (University of Electronic Science and Technology of China)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date


Abstract

The invention relates to the fields of computer vision and human-computer interaction, and in particular to a 3D gesture recognition method. The method comprises the following steps: an RGB image and a depth image are acquired from an image input device and used as training samples; the hand is detected and segmented using an adaptive dynamic depth threshold; the arm is removed from the hand image by image morphology operations to obtain the center position of the palm; the gesture contour is obtained by edge extraction and described with a time-series curve; the fingertip positions and finger connection points of the gesture are obtained by image morphology operations, the time-series curve is segmented at the corresponding positions, and the per-finger curve segments are combined in specific ways to obtain a finger descriptor. The method enables natural interaction with a computer, effectively extends traditional human-computer interaction, and greatly increases the recognition rate, which can reach 99%.

Description

A 3D Gesture Recognition Method

Technical Field

[0001] The present invention relates to the fields of computer vision and human-computer interaction, and in particular to a 3D gesture recognition method.

Background Art

[0002] With the development of computer technology, human-computer interaction has become an indispensable part of daily life, but most interaction is still two-dimensional, based on mice, keyboards, handheld devices, and window interfaces; making interaction more natural has become a hot research topic in recent years. Gestures are one of the primary means of human communication, with a history that predates even spoken language, so using gestures for human-computer interaction is friendlier, more convenient, simpler, and more intuitive, and is a natural extension of traditional interaction. To recognize gestures one must first sense them; three kinds of gesture-sensing devices exist: handheld sensing devices, such as Microsoft's digital gesture ring and infrared gesture-sensing systems; touch-based sensing devices, such as the iPhone; and vision-based sensing devices, such as TOF cameras and the Kinect.

Summary of the Invention

[0003] The object of the present invention is to provide a 3D gesture recognition method that solves the problems that existing human-computer interaction methods are not convenient and fast enough and cannot work in complex environments.

[0004] To solve the above technical problem, the present invention adopts the following technical solution:

[0005] A 3D gesture recognition method comprising the following steps:

[0006] Step 1: acquire an RGB image and a depth image from an image input device as training samples;

[0007] Step 2: detect and segment the hand by setting an adaptive dynamic depth threshold;

[0008] Step 3: remove the arm by image morphology operations and obtain the center position of the palm;

[0009] Step 4: obtain the gesture contour by edge extraction and describe it with a time-series curve;

[0010] Step 5: obtain the fingertip positions and the finger connection points of the gesture by image morphology operations, segment the time-series curve at the corresponding positions, and combine the per-finger curve segments in specific ways to obtain a finger-descriptor feature vector;

[0011] Step 6: represent each class of the gesture training samples as a finger-descriptor feature matrix;

[0012] Step 7: acquire an RGB image and a depth image in real time through the image input device, and perform Steps 2 to 5 to represent the gesture input in real time as a finger-descriptor feature vector;

[0013] Step 8: perform image-to-class dynamic time warping classification between the finger-descriptor feature vector obtained in Step 7 and the feature matrices obtained in Step 6, and finally obtain the gesture recognition result.

[0014] In a further aspect, in Step 2, the adaptive dynamic threshold is set by assuming that the hand is the foremost object: specifically, the hand is separated from the background by binarization with the adaptive dynamic threshold, yielding a binary gesture image.

[0015] In a further aspect, in Step 3, the arm is removed by image morphology operations and the palm center is obtained: specifically, the arm contour is detected, a local-minimum point on it is selected as the wrist position, the part below the wrist is removed, the remaining image is eroded and dilated to remove the fingers and obtain the palm, and the geometric center of the palm is computed.
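The erosion and dilation referred to above are standard binary morphology operations. As an illustration only — not the patent's implementation — here is a minimal pure-Python sketch with a 3×3 structuring element, treating pixels outside the image as 0:

```python
def erode(img):
    """3x3 binary erosion: a pixel stays 1 only if it and all its
    neighbours are 1 (out-of-bounds neighbours count as 0)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = int(all(
                0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
    return out

def dilate(img):
    """3x3 binary dilation: a pixel becomes 1 if any neighbour is 1."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = int(any(
                0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
    return out
```

Erosion followed by dilation (a morphological opening) removes thin structures such as fingers while preserving the palm blob, which is what this step relies on to isolate the palm.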

[0016] In a further aspect, in Step 4, the gesture contour is obtained by edge extraction: specifically, a circle centered at the geometric center of the palm with the radius of the palm's circumscribed circle is drawn to cover the palm in the binary image, yielding a binary gesture image; edges are then extracted from the binary gesture image to obtain the gesture contour.

[0017] In a further aspect, in Step 4, the gesture contour is described with a time-series curve: specifically, shape-related information contained in the data is obtained by computing the angle and distance of each contour vertex and plotting them as a time-series curve, realizing shape-feature extraction. For example, let the palm center be P0, the initial contour point be P1, and the contour vertices be Pi (i = 2, ..., n). The angle ∠P1P0Pi = <P0P1, P0Pi> (i = 2, ..., n), normalized by 360°, serves as the abscissa of the time-series curve; the Euclidean distance |P0Pi| between each contour vertex and the palm center serves as the ordinate, normalized with the formula |P0Pi| = |P0Pi| / max{|P0P1|, |P0P2|, ..., |P0Pn|}. The result is a time-series curve describing the gesture contour.
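The angle-and-distance construction above can be sketched as follows. This is an illustrative reading of the formulas, not code from the patent; the function name and the (angle, distance) tuple layout are my own:

```python
import math

def contour_time_series(center, contour):
    """Angle/distance time-series curve for a gesture contour.

    center  -- (x, y) palm center P0
    contour -- contour vertices [P1, P2, ..., Pn]; P1 is the start point
    Returns a list of (normalized_angle, normalized_distance) pairs:
    the angle <P0P1, P0Pi> normalized by 360 degrees, and |P0Pi|
    normalized by the maximum distance.
    """
    cx, cy = center
    x1, y1 = contour[0]
    base = math.atan2(y1 - cy, x1 - cx)           # direction of P0 -> P1
    dists = [math.hypot(x - cx, y - cy) for x, y in contour]
    dmax = max(dists)
    curve = []
    for (x, y), d in zip(contour, dists):
        ang = (math.atan2(y - cy, x - cx) - base) % (2 * math.pi)
        curve.append((ang / (2 * math.pi),        # 360-degree normalization
                      d / dmax))                  # distance normalization
    return curve
```

With a square contour around the origin, the curve climbs from (0, 0.5) through (0.25, 1.0) and so on, tracing the normalized shape signature.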

[0018] In a further aspect, in Step 5, the fingertip positions and the finger connection points of the gesture are obtained by image morphology operations, and the time-series curve is segmented: specifically, a polygon approximation of the gesture contour is computed, and the convex and concave points of the polygon are detected; convex points are fingertips, and concave points are finger connection points. The detected convex and concave points are then filtered to obtain the fingertip and finger-connection positions, which are mapped to the corresponding positions on the time-series curve to segment it. For example, the polygon approximation can be obtained with the OpenCV <approxPolyDP> function, and the convex and concave points can be detected with the OpenCV <convexHull> and <cvConvexityDefects> functions; the detected points are filtered by discarding all points whose y coordinate lies below the palm center, realizing the segmentation into fingers.
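The patent names OpenCV's approxPolyDP, convexHull, and cvConvexityDefects for this step. As a self-contained stand-in — a simplification that finds convex-hull vertices as fingertip candidates and applies the y-coordinate filter, without the polygon approximation or convexity-defect extraction — one might write:

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull; returns hull vertices CCW."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def fingertip_candidates(contour, palm_center):
    """Hull vertices above the palm center, mimicking the patent's
    y-filter (in image coordinates y grows downward, so 'above'
    means a smaller y value)."""
    cy = palm_center[1]
    return [p for p in convex_hull(contour) if p[1] < cy]
```

In a real pipeline the hull would be computed on the approximated polygon, and convexity defects (the valleys between hull points) would give the finger connection points.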

[0019] In a further aspect, in Step 5, the per-finger time-series curve segments are combined in specific ways to obtain the finger descriptor: specifically, the finger curve segments obtained by segmenting the curve are combined to form the finger-descriptor feature vector. The finger-descriptor feature vector is f = [f1, f2, ..., fs, ..., fS],

[0020] where the values of s are

[0021] s = {1, 2, ..., k, 12, 23, ..., (k-1)k, 123, 234, ..., (k-2)(k-1)k, ..., 123...k},

[0022] k ∈ {1, 2, ..., K}, where K is the maximum number of fingers contained in any gesture.
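If s is read as ranging over all contiguous runs of finger indices — single fingers 1..k, adjacent pairs 12..(k-1)k, up to the full run 123...k — the combination set can be enumerated as follows, giving k(k+1)/2 combinations for k fingers (this enumeration is an interpretation of the formula above):

```python
def finger_combinations(k):
    """All contiguous finger runs for a k-finger gesture:
    {1, 2, ..., k, 12, 23, ..., (k-1)k, ..., 123...k}."""
    combos = []
    for length in range(1, k + 1):              # run length 1 .. k
        for start in range(1, k - length + 2):  # leftmost finger of the run
            combos.append(tuple(range(start, start + length)))
    return combos
```

For a three-finger gesture this yields (1), (2), (3), (1,2), (2,3), (1,2,3) — six combinations, matching k(k+1)/2 = 6.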

[0023] In a further aspect, in Step 6, each class of the gesture training samples is represented as a finger-descriptor feature matrix composed of the finger-descriptor feature vectors of all training samples of that class: Gc = [fc,1, fc,2, ..., fc,Nc]ᵀ, where c denotes a gesture class, Gc is an Nc × Mc matrix, fc,n is the n-th training sample of class c, Nc is the number of training samples, and Mc is the total number of finger descriptors of gesture class c, which varies with c.

[0024] In a further aspect, in Step 7, an RGB image and a depth image are acquired in real time through the image input device, and Steps 2 to 5 are performed to represent the gesture input in real time as a finger-descriptor feature vector ftest = [f1', f2', ..., fs', ..., fS'].

[0025] In a further aspect, in Step 8, the image-to-class dynamic time warping classification is performed: specifically, image-to-class dynamic time warping is computed between the test sample and the training samples of each class to obtain the similarity between the test sample and each class; the class with the greatest similarity, i.e., the shortest image-to-class warping path, is selected as the gesture type of the test sample.

[0026] Specifically, dynamic time warping is computed between the test data and the training samples:

[0027] I2C-DTW(Gc, ftest) = Σs min(n = 1, ..., Nc) DTW(fc,n,s, fs')

[0028] where DTW(fc,n,s, fs') denotes the shortest warping path between fc,n,s and fs', and fc,n,s is one finger combination in fc,n. Finally, the smallest I2C-DTW(Gc, ftest) is selected as the gesture type corresponding to the test data, i.e., the gesture recognition result.
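A minimal sketch of image-to-class DTW under the reading above: for each finger-combination segment of the test sample, take the best-matching training segment within the class and sum the distances. The sequence representation and the absolute-difference local cost are assumptions for illustration, not details from the patent:

```python
def dtw(a, b):
    """Classic dynamic-time-warping distance between two sequences,
    with absolute difference as the local cost."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def i2c_dtw(class_segments, test_segments):
    """Image-to-class DTW: for each test segment, take the minimum DTW
    distance to any training segment of the class, then sum over the
    test segments."""
    return sum(min(dtw(train, t) for train in class_segments)
               for t in test_segments)
```

Classification would compute i2c_dtw for every gesture class and pick the class with the smallest total, i.e., the shortest image-to-class warping path.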

[0029] Compared with the prior art, the beneficial effects of the present invention are:

[0030] The present invention lets the user interact with the computer through gesture information, using the contour of the user's hand to supplement traditional keyboard-and-mouse interaction and to enrich human-computer interaction. It acquires images containing the user's hand in real time through nothing more than a Kinect, analyzes the hand information in the computer, and converts the analysis results into application control commands, realizing natural interaction with the computer. It effectively extends traditional human-computer interaction and greatly improves the recognition rate, reaching up to 99%.

Brief Description of the Drawings

[0031] Figure 1 is a schematic diagram of the overall flow of a 3D gesture recognition method of the present invention, together with examples.

[0032] Figure 2 is a schematic diagram of hand detection and segmentation, gesture contour extraction, and feature extraction in a 3D gesture recognition method of the present invention.

[0033] Figure 3 shows the finger-descriptor feature matrix of one training sample in an embodiment of a 3D gesture recognition method of the present invention.

[0034] Figure 4 shows the finger-descriptor feature vector of one test data item in an embodiment of a 3D gesture recognition method of the present invention.

Detailed Description

[0035] To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to illustrate the present invention and are not intended to limit it.

[0036] Figure 1 shows an embodiment of a 3D gesture recognition method of the present invention; the middle of Figure 1 is a block diagram, and the two sides show concrete examples. A 3D gesture recognition method comprises the following steps:

[0037] Step 1: acquire an RGB image and a depth image from an image input device as training samples;

[0038] Step 2: detect and segment the hand by setting an adaptive dynamic depth threshold;

[0039] Step 3: remove the arm by image morphology operations and obtain the center position of the palm;

[0040] Step 4: obtain the gesture contour by edge extraction and describe it with a time-series curve;

[0041] Step 5: obtain the fingertip positions and the finger connection points of the gesture by image morphology operations, segment the time-series curve at the corresponding positions, and combine the per-finger curve segments in specific ways to obtain a finger-descriptor feature vector;

[0042] Step 6: represent each class of the gesture training samples as a finger-descriptor feature matrix;

[0043] Step 7: acquire an RGB image and a depth image in real time through the image input device, and perform Steps 2 to 5 to represent the gesture input in real time as a finger-descriptor feature vector;

[0044] Step 8: perform image-to-class dynamic time warping (Image-to-Class Dynamic Time Warping, I2C-DTW) classification between the finger-descriptor feature vector obtained in Step 7 and the feature matrices obtained in Step 6, and finally obtain the gesture recognition result.

[0045] According to a preferred embodiment of a 3D gesture recognition method of the present invention, in Step 2, the adaptive dynamic threshold is set by assuming that the hand is the foremost object: specifically, the hand is separated from the background by binarization with the adaptive dynamic threshold, yielding the binary gesture image shown in Figure 2(a). The dynamic threshold is obtained as follows: face detection on the color image yields the face coordinates, which are mapped onto the depth image to obtain the average depth of the face; the person's body is thereby separated from the background, and the Otsu adaptive threshold segmentation algorithm is then applied.
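Otsu's adaptive threshold, named in this embodiment, chooses the gray level that maximizes the between-class variance of the two resulting groups. A minimal histogram-based sketch, operating on a flat list of gray values rather than an image for brevity:

```python
def otsu_threshold(values, levels=256):
    """Otsu's method: return the threshold t that maximizes the
    between-class variance w0*w1*(m0-m1)^2, where w0/w1 are the pixel
    counts and m0/m1 the mean gray values of the two groups."""
    hist = [0] * levels
    for v in values:
        hist[v] += 1
    total = len(values)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0, sum0 = 0, 0.0
    for t in range(levels):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0                      # mean of values <= t
        m1 = (total_sum - sum0) / w1        # mean of values > t
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

In the pipeline described here, the input would be the depth values of the body region isolated via the face depth, and the returned threshold would separate the hand (nearest surface) from the rest of the body.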

[0046] According to another preferred embodiment of a 3D gesture recognition method of the present invention, in Step 3, the arm is removed by image morphology operations and the palm center is obtained: specifically, as shown in Figure 2(b), the arm contour is detected, a local-minimum point on it is selected as the wrist position, the part below the wrist is removed, the remaining image is eroded and dilated to remove the fingers and obtain the palm, and the geometric center of the palm is computed.

[0047] According to another preferred embodiment of a 3D gesture recognition method of the present invention, in Step 4, the gesture contour is obtained by edge extraction: specifically, a circle centered at the geometric center of the palm with the radius of the palm's circumscribed circle is drawn to cover the palm in the binary image, yielding the binary gesture image shown in Figure 2(c); edges are extracted from it to obtain the gesture contour shown in Figure 2(d).

[0048] According to another preferred embodiment of a 3D gesture recognition method of the present invention, in Step 4, the gesture contour is described with a time-series curve: specifically, shape-related information contained in the data is obtained by computing the angle and distance of each contour vertex and plotting them as a time-series curve, realizing shape-feature extraction. For example, let the palm center be P0, the initial contour point be P1, and the contour vertices be Pi (i = 2, ..., n). The angle ∠P1P0Pi = <P0P1, P0Pi> (i = 2, ..., n), normalized by 360°, serves as the abscissa of the time-series curve; the Euclidean distance |P0Pi| between each contour vertex and the palm center serves as the ordinate, normalized with the formula |P0Pi| = |P0Pi| / max{|P0P1|, |P0P2|, ..., |P0Pn|}. The result is a time-series curve describing the gesture contour, as shown in Figure 2(e). The time-series curve is converted from the gesture contour to capture the shape-related information contained in the data, realizing shape-feature extraction.

[0049] According to another preferred embodiment of a 3D gesture recognition method of the present invention, in Step 5, the fingertip positions and the finger connection points of the gesture are obtained by image morphology operations, and the time-series curve is segmented: specifically, a polygon approximation of the gesture contour is computed, and the convex and concave points of the polygon are detected; convex points are fingertips, and concave points are finger connection points. The detected convex and concave points are then filtered to obtain the fingertip and finger-connection positions, which are mapped to the corresponding positions on the time-series curve to segment it. For example, the polygon approximation can be obtained with the OpenCV <approxPolyDP> function, and the convex and concave points can be detected with the OpenCV <convexHull> and <cvConvexityDefects> functions; the detected points are filtered by discarding all points whose y coordinate lies below the palm center, realizing the segmentation into fingers.

[0050] According to another preferred embodiment of a 3D gesture recognition method of the present invention, in Step 5, the per-finger time-series curve segments are combined in specific ways to obtain the finger descriptor: specifically, the finger curve segments obtained by segmenting the curve are combined to form the finger-descriptor feature vector f = [f1, f2, ..., fs, ..., fS],

[0051] where the values of s are

[0052] s = {1, 2, ..., k, 12, 23, ..., (k-1)k, 123, 234, ..., (k-2)(k-1)k, ..., 123...k},

[0053] k ∈ {1, 2, ..., K}, where K is the maximum number of fingers contained in any gesture.

[0054] According to another preferred embodiment of a 3D gesture recognition method of the present invention, in Step 6, each class of the gesture training samples is represented as a finger-descriptor feature matrix composed of the finger-descriptor feature vectors of all training samples of that class:

Gc = [fc,1, fc,2, ..., fc,Nc]ᵀ,

where c denotes a gesture class, Gc is an Nc × Mc matrix, fc,n is the n-th training sample of class c, Nc is the number of training samples, and Mc is the total number of finger descriptors of gesture class c, which varies with c.

[0055] According to another preferred embodiment of a 3D gesture recognition method of the present invention, in Step 7, an RGB image and a depth image are acquired in real time through the image input device, and Steps 2 to 5 are performed to represent the gesture input in real time as a finger-descriptor feature vector ftest = [f1', f2', ..., fs', ..., fS'], as shown in Figure 4.

[0056] According to another preferred embodiment of a 3D gesture recognition method of the present invention, in Step 8, the image-to-class dynamic time warping classification is performed: specifically, image-to-class dynamic time warping is computed between the test sample and the training samples of each class to obtain the similarity between the test sample and each class; the class with the greatest similarity, i.e., the shortest image-to-class warping path, is selected as the gesture type of the test sample.

[0057] Specifically, dynamic time warping is computed between the test data and the training samples:

[0058] I2C-DTW(Gc, ftest) = Σs min(n = 1, ..., Nc) DTW(fc,n,s, fs')

[0059] where DTW(fc,n,s, fs') denotes the shortest warping path between fc,n,s and fs', and fc,n,s is one finger combination in fc,n. Finally, the smallest I2C-DTW(Gc, ftest) is selected as the gesture type corresponding to the test data, i.e., the gesture recognition result.

[0060] In addition, the dynamic threshold is computed as follows: detect the k points with the smallest gray values di (i = 1, 2, ..., k) in the depth image, removing noise points with gray values of 0 to 10; in this patent k = 100. The dynamic threshold is then computed from these k gray values together with an offset P, where P is an experimentally estimated value that the user can change according to the circumstances.
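Reading paragraph [0060] as "mean of the k smallest non-noise depth values plus an offset P" — the exact formula image is not reproduced in this text, so combining the mean with P by addition is an assumption — a sketch:

```python
def dynamic_depth_threshold(depth_values, k=100, noise_max=10, p=0.0):
    """Dynamic threshold per one reading of paragraph [0060]: discard
    noise points with gray values in 0..noise_max, take the mean of the
    k smallest remaining depth (gray) values, and add the offset p.
    The '+ p' combination is an assumption, not the patent's formula."""
    clean = sorted(v for v in depth_values if v > noise_max)
    nearest = clean[:k]                      # the k nearest (smallest) depths
    return sum(nearest) / len(nearest) + p
```

Pixels darker (nearer) than the returned threshold would then be kept as the hand region during binarization.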

[0061] In each of the above embodiments, the RGB image may be a color image, and the depth image may be a grayscale image representing the distance of objects from the image input device: the larger the gray value, the greater the distance, with the exact relationship between gray value and distance depending on the specific image input device.

[0062] The present invention comprises hand detection and hand recognition techniques and a complete system that combines the two; both techniques and the combined system achieve real-time, stable performance against natural, complex backgrounds. Hand detection combines depth and color information and exploits the particular properties of the human hand to accurately detect the hand and wrist, segmenting the hand to obtain its position in the image. The image-to-class dynamic time warping algorithm creatively fuses image-to-class matching with dynamic time warping and can accurately recover the state of the gesture in every frame, including the number of fingers, their positions, and the fingertip angles. Combining the two techniques yields a gesture-based human-computer interaction system. The present invention can be widely applied to smart homes, health care, education, computer games, and more.

Claims (10)

  1. 1.一种3D手势识别方法,其特征在于包括以下步骤: 步骤一,从图像输入设备获取RGB图像和深度图像作为训练样本; 步骤二,通过设定自适应动态深度阈值进行人手的检测和分割; 步骤三,通过图像形态学操作去掉手臂,获取手掌中心位置; 步骤四,通过边缘提取得到手势轮廓,并利用时间序列曲线描述手势轮廓; 步骤五,通过图像形态学操作获得手势中指尖位置及手指连接点的位置,在时间序列曲线的对应位置上进行曲线的分割,并对手指的时间序列曲线进行特定的组合得到手指描述器特征向量; 步骤六,将手势训练样本中的每一类分别表示成一个手指描述器特征矩阵; 步骤七,通过图像输入设备实时获取RGB图像和深度图像,执行步骤二〜步骤五将实时输入的手势表不成一个手指描述器特征向量; 步骤八,将步骤七得到的手指描述器特征向量 A 3D gesture recognition method, comprising the following steps: First, the image input apparatus acquires an image from the RGB image and depth as the training samples; two step, performed by manually setting the detection and adaptive dynamic segmentation depth threshold ; step three, the image is removed by morphological operations arm, palm obtain the center position; step 4 gesture profile obtained by edge extraction, and the time series profile curve describes the gesture; step five, the fingertip position is obtained by the image gesture morphological operations and connection point position of the finger, in the corresponding position on a time series profile curve divided, and the time series profile of the finger is obtained by combining the finger-specific descriptor feature vectors; step 6 gesture training samples in each category were expressed as a finger descriptor feature matrix; step seven, acquired by the image input device real-time RGB images and depth images, step two to step five real-time input gesture table is not a finger descriptor feature vectors; step 8 step seven finger descriptor feature vectors obtained ,与步骤六中得到的特征矩阵进行图像到类的动态时间规整分类处理,最后得到手势识别结果。 , And the image feature matrix obtained in step six to dynamic time warping class classification processing, the gesture recognition result finally obtained.
  2. 2.根据权利要求1所述的一种3D手势识别方法,其特征在于:所述步骤二中,通过假设手为最前方的物体,设定自适应动态阈值:具体是利用二值化的方法通过自适应动态阈值分隔人手和背景,得到二值化的手势图。 2. According to a 3D gesture recognition method according to claim 1, wherein: said step two, the most object in front, the adaptive dynamic threshold is set by assuming the hand: in particular a method using binarization bACKGROUND manpower and separated by an adaptive dynamic threshold, to obtain binarized gesture FIG.
  3. 3.根据权利要求1所述的一种3D手势识别方法,其特征在于:所述步骤三中,通过图像形态学操作去掉手臂,获取手掌中心位置,具体是检测出手臂轮廓,选择其上的极小值点作为手腕位置,去掉手腕以下的部分,对剩下的图像作腐蚀、膨胀去掉手指得到手掌,计算手掌几何中心。 3. According to a 3D gesture recognition method according to claim 1, wherein: said step three, the image is removed by morphological operations arm, palm obtain the center position, detected specifically arm profile, on which the selection minimum point as the position of the wrist, the wrist portion is removed, the remaining images as erosion, dilation finger is removed to give the palm, palm calculated geometric center. ` `
  4. 4.根据权利要求1所述的一种3D手势识别方法,其特征在于:所述步骤四中,通过边缘提取得到手势轮廓,具体是以手掌几何中心位置,手掌外接圆为半径画圆,覆盖二值图中的手掌,得到手势二值图,对手势二值图提取边缘,得到手势轮廓图。 4. According to a 3D gesture recognition method according to claim 1, wherein: said step four, the gesture obtained by extracting an edge profile, in particular in a central location, hand geometry, palm circumcircle circle radius, covering binary image of the palm, the gesture obtained binary image, extracting an edge of a binary image gesture, the gesture profile obtained.
  5. 5.根据权利要求1到4任意一项所述的一种3D手势识别方法,其特征在于:所述步骤四中,利用时间序列曲线描述手势轮廓,具体是通过对轮廓顶点的角度和距离的计算获得数据中蕴含的与形状相关的有用信息,并描绘成时间序列曲线,实现形状特征的提取。 5. According to a 3D gesture recognition method of any one of claims 1 to 4, wherein: in said Step 4 using the time series profile curve describes gestures, in particular the angle and distance of the contour vertices calculated to obtain useful information about the shape contains data and graph depicting time-series, the extracted shape feature implemented.
  6. 6.根据权利要求5所述的一种3D手势识别方法,其特征在于:所述步骤五中,通过图像形态学操作获得手势中指尖位置及手指连接点的位置,实现对时间序列曲线的分割,具体是通过对手势轮廓进行多边形估计,再检测出多边形的凹凸点,凸点为指尖,凹点为手指连接点,最后对检测到的凹凸点进行滤波,得到指尖位置及手指连接点位置,将这些位置对应到时间序列曲线上,实现对曲线的分割。 A 3D gesture recognition method as claimed in claim 5, wherein: said fifth step, obtaining the fingertip position and gesture connection point position of the finger by an image morphological operation, to segment the time series profile , in particular by a polygonal contour estimation gesture, then the point detected irregular polygon, bump fingertip, finger pits connection point, last point of the detected irregularities filtering, to obtain the position of the fingertip and finger connection points position, these positions corresponding to the time series profile to achieve segmentation of the curve.
  7. The 3D gesture recognition method according to claim 6, wherein in said step five, the time-series curves of the fingers are combined in a specific way to obtain the finger descriptor; specifically, the finger curve segments obtained by segmenting the curve are combined in a specific way to obtain the finger descriptor feature vector.
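The patent does not detail the "specific combination" of claim 7; one plausible sketch is to resample every finger segment to a fixed length and concatenate the results, zero-padding to a fixed finger count so each gesture yields a vector of identical dimension (`pts_per_finger` and `max_fingers` are our assumptions):

```python
import numpy as np

def finger_descriptor(segments, pts_per_finger=9, max_fingers=5):
    """Build a fixed-length finger descriptor: resample each finger's
    curve segment to pts_per_finger values, concatenate, and zero-pad
    up to max_fingers fingers (hypothetical scheme, not the patent's)."""
    desc = np.zeros(max_fingers * pts_per_finger)
    for k, seg in enumerate(segments[:max_fingers]):
        seg = np.asarray(seg, dtype=float)
        xs = np.linspace(0, len(seg) - 1, pts_per_finger)
        desc[k * pts_per_finger:(k + 1) * pts_per_finger] = np.interp(
            xs, np.arange(len(seg)), seg)
    return desc

# Two finger segments of unequal length become one 45-D vector.
d = finger_descriptor([[0.5, 1.0, 0.5], [0.5, 0.8, 1.0, 0.8, 0.5]],
                      pts_per_finger=9)
```

A fixed-length vector is what makes the per-class feature matrices of claim 8 possible: every training sample of a class contributes one row of the same width.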
  8. The 3D gesture recognition method according to claim 7, wherein in said step six, each class of gesture training samples is represented as a finger descriptor feature matrix, the finger descriptor feature matrix being composed of the finger descriptor feature vectors of all training samples of that class.
  9. The 3D gesture recognition method according to claim 8, wherein in said step seven, an RGB image and a depth image are acquired in real time from the image input device, and steps two to five are executed to represent the gesture input in real time as a finger descriptor feature vector.
  10. The 3D gesture recognition method according to claim 9, wherein in said step eight, image-to-class dynamic time warping classification is performed; specifically, image-to-class dynamic time warping is computed between the test sample and the training samples to obtain the similarity between the test sample and each class of training samples, and the class with the highest similarity, i.e., the class with the shortest dynamic time warping path, is selected as the gesture type of the test sample.
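The image-to-class DTW classification of claim 10 can be sketched with a classic DTW recurrence; for simplicity this sketch scores a class by the shortest DTW path from the test descriptor to any of that class's training descriptors (a simplification of the patent's scheme; the toy descriptors below are hypothetical):

```python
import math

def dtw(a, b):
    """Classic dynamic time warping distance between two 1-D sequences,
    with absolute difference as the local cost."""
    n, m = len(a), len(b)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def classify(test, classes):
    """Image-to-class nearest-neighbour DTW: the distance to a class is
    the shortest DTW path to any of its training descriptors; the class
    with the smallest distance (highest similarity) wins."""
    return min(classes, key=lambda c: min(dtw(test, t) for t in classes[c]))

# Hypothetical training descriptors for two gesture classes.
classes = {
    "fist": [[0.3, 0.3, 0.3, 0.3]],
    "point": [[0.3, 1.0, 0.3, 0.3], [0.3, 0.9, 0.3, 0.3]],
}
label = classify([0.3, 0.95, 0.35, 0.3], classes)
```

DTW's elastic alignment is what makes the descriptor robust to fingers of slightly different lengths and positions, which rigid Euclidean matching would penalise.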
CN 201310168123 2013-05-09 2013-05-09 3D gesture recognition method CN103294996B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201310168123 CN103294996B (en) 2013-05-09 2013-05-09 3D gesture recognition method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201310168123 CN103294996B (en) 2013-05-09 2013-05-09 3D gesture recognition method

Publications (2)

Publication Number Publication Date
CN103294996A 2013-09-11
CN103294996B 2016-04-27

Family

ID=49095827

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201310168123 CN103294996B (en) 2013-05-09 2013-05-09 3D gesture recognition method

Country Status (1)

Country Link
CN (1) CN103294996B (en)

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103679213A (en) * 2013-12-13 2014-03-26 电子科技大学 3D gesture recognition method
CN104268138A (en) * 2014-05-15 2015-01-07 西安工业大学 Method for capturing human motion by aid of fused depth images and three-dimensional models
CN104333794A (en) * 2014-11-18 2015-02-04 电子科技大学 Channel selection method based on depth gestures
CN104375631A (en) * 2013-10-22 2015-02-25 安徽寰智信息科技股份有限公司 Non-contact interaction method based on mobile terminal
CN104699238A (en) * 2013-12-10 2015-06-10 现代自动车株式会社 System and method for gesture recognition of vehicle
CN104714637A (en) * 2013-12-16 2015-06-17 纬创资通股份有限公司 Method, apparatus and computer readable medium for polygon gesture detection and interaction
CN104750242A (en) * 2013-12-31 2015-07-01 现代自动车株式会社 Apparatus and method for recognizing user's gesture for carrying out operation of vehicle
CN104766055A (en) * 2015-03-26 2015-07-08 济南大学 Method for removing wrist image in gesture recognition
CN104899600A (en) * 2015-05-28 2015-09-09 北京工业大学 Depth map based hand feature point detection method
CN104978012A (en) * 2014-04-03 2015-10-14 华为技术有限公司 Pointing interactive method, device and system
WO2015112194A3 (en) * 2014-01-22 2015-11-05 Lsi Corporation Image processor comprising gesture recognition system with static hand pose recognition based on dynamic warping
WO2015197026A1 (en) * 2014-06-27 2015-12-30 华为技术有限公司 Method, apparatus and terminal for acquiring sign data of target object
CN105320937A (en) * 2015-09-25 2016-02-10 北京理工大学 Kinect based traffic police gesture recognition method
CN105654103A (en) * 2014-11-12 2016-06-08 联想(北京)有限公司 Image identification method and electronic equipment
CN105739702A (en) * 2016-01-29 2016-07-06 电子科技大学 Multi-posture fingertip tracking method for natural man-machine interaction
CN106446911A (en) * 2016-09-13 2017-02-22 李志刚 Hand recognition method based on image edge line curvature and distance features
WO2017113736A1 (en) * 2015-12-27 2017-07-06 乐视控股(北京)有限公司 Method of distinguishing finger from wrist, and device for same
CN107292904B (en) * 2016-03-31 2018-06-15 北京市商汤科技开发有限公司 Palm tracking method and system based on depth images

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050271279A1 (en) * 2004-05-14 2005-12-08 Honda Motor Co., Ltd. Sign based human-machine interaction
CN102769802A (en) * 2012-06-11 2012-11-07 西安交通大学 Man-machine interactive system and man-machine interactive method of smart television
CN102945079A (en) * 2012-11-16 2013-02-27 武汉大学 Intelligent recognition and control-based stereographic projection system and method
CN102968178A (en) * 2012-11-07 2013-03-13 电子科技大学 Gesture-based PPT (Power Point) control system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050271279A1 (en) * 2004-05-14 2005-12-08 Honda Motor Co., Ltd. Sign based human-machine interaction
CN102769802A (en) * 2012-06-11 2012-11-07 西安交通大学 Man-machine interactive system and man-machine interactive method of smart television
CN102968178A (en) * 2012-11-07 2013-03-13 电子科技大学 Gesture-based PPT (Power Point) control system
CN102945079A (en) * 2012-11-16 2013-02-27 武汉大学 Intelligent recognition and control-based stereographic projection system and method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
李文生 et al.: "A human-computer interaction technology framework based on multi-point gesture recognition", Computer Engineering and Design *
贾建军: "Research on vision-based gesture recognition technology", China Masters' Theses Full-text Database (electronic journal) *

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104375631A (en) * 2013-10-22 2015-02-25 安徽寰智信息科技股份有限公司 Non-contact interaction method based on mobile terminal
CN104699238A (en) * 2013-12-10 2015-06-10 现代自动车株式会社 System and method for gesture recognition of vehicle
CN103679213A (en) * 2013-12-13 2014-03-26 电子科技大学 3D gesture recognition method
CN104714637B (en) * 2013-12-16 2017-09-01 纬创资通股份有限公司 Polygon gesture detection and interaction method, apparatus and computer program product
CN104714637A (en) * 2013-12-16 2015-06-17 纬创资通股份有限公司 Method, apparatus and computer readable medium for polygon gesture detection and interaction
CN104750242A (en) * 2013-12-31 2015-07-01 现代自动车株式会社 Apparatus and method for recognizing user's gesture for carrying out operation of vehicle
WO2015112194A3 (en) * 2014-01-22 2015-11-05 Lsi Corporation Image processor comprising gesture recognition system with static hand pose recognition based on dynamic warping
CN104978012A (en) * 2014-04-03 2015-10-14 华为技术有限公司 Pointing interactive method, device and system
CN104978012B (en) * 2014-04-03 2018-03-16 华为技术有限公司 Pointing interaction method, apparatus and system
CN104268138B (en) * 2014-05-15 2017-08-15 西安工业大学 Human motion capture method fusing depth images and three-dimensional models
CN104268138A (en) * 2014-05-15 2015-01-07 西安工业大学 Method for capturing human motion by aid of fused depth images and three-dimensional models
US9984461B2 (en) 2014-06-27 2018-05-29 Huawei Technologies Co., Ltd. Method, apparatus, and terminal for obtaining vital sign data of target object
WO2015197026A1 (en) * 2014-06-27 2015-12-30 华为技术有限公司 Method, apparatus and terminal for acquiring sign data of target object
CN105654103A (en) * 2014-11-12 2016-06-08 联想(北京)有限公司 Image identification method and electronic equipment
CN104333794A (en) * 2014-11-18 2015-02-04 电子科技大学 Channel selection method based on depth gestures
CN104766055A (en) * 2015-03-26 2015-07-08 济南大学 Method for removing wrist image in gesture recognition
CN104899600A (en) * 2015-05-28 2015-09-09 北京工业大学 Depth map based hand feature point detection method
CN104899600B (en) * 2015-05-28 2018-07-17 北京工业大学 Depth map based hand feature point detection method
CN105320937B (en) * 2015-09-25 2018-08-14 北京理工大学 Kinect based traffic police gesture recognition method
CN105320937A (en) * 2015-09-25 2016-02-10 北京理工大学 Kinect based traffic police gesture recognition method
WO2017113736A1 (en) * 2015-12-27 2017-07-06 乐视控股(北京)有限公司 Method of distinguishing finger from wrist, and device for same
CN105739702A (en) * 2016-01-29 2016-07-06 电子科技大学 Multi-posture fingertip tracking method for natural man-machine interaction
CN107292904B (en) * 2016-03-31 2018-06-15 北京市商汤科技开发有限公司 Palm tracking method and system based on depth images
CN106446911A (en) * 2016-09-13 2017-02-22 李志刚 Hand recognition method based on image edge line curvature and distance features
CN106446911B (en) * 2016-09-13 2018-09-18 李志刚 Hand recognition method based on image edge line curvature and distance features

Also Published As

Publication number Publication date Type
CN103294996B (en) 2016-04-27 grant

Similar Documents

Publication Publication Date Title
Argyros et al. Vision-based interpretation of hand gestures for remote control of a computer mouse
Keskin et al. Real time hand tracking and 3d gesture recognition for interactive interfaces using hmm
Li Hand gesture recognition using Kinect
Cheng et al. Survey on 3D Hand Gesture Recognition.
Al-Rahayfeh et al. Eye tracking and head movement detection: A state-of-art survey
Licsár et al. User-adaptive hand gesture recognition system with interactive training
US20120062736A1 (en) Hand and indicating-point positioning method and hand gesture determining method used in human-computer interaction system
Dominio et al. Combining multiple depth-based descriptors for hand gesture recognition
US20140306877A1 (en) Gesture Based Interface System and Method
US20130343601A1 (en) Gesture based human interfaces
Hasan et al. RETRACTED ARTICLE: Static hand gesture recognition using neural networks
US20120113241A1 (en) Fingertip tracking for touchless user interface
KR20100032699A (en) The system controled a action of the display device, based a gesture information recognition of the user
CN102426480A (en) Man-machine interactive system and real-time gesture tracking processing method for same
JPH08211979A (en) Hand shake input device and method
Conseil et al. Comparison of Fourier descriptors and Hu moments for hand posture recognition.
CN101901052A (en) Target control method based on mutual reference of both hands
CN101561710A (en) Man-machine interaction method based on estimation of human face posture
US20120280897A1 (en) Attribute State Classification
CN102270035A Apparatus and method for selecting and operating a target in a non-touch manner
CN103226387A (en) Video fingertip positioning method based on Kinect
CN102063618A (en) Dynamic gesture identification method in interactive system
CN101344816A (en) Human-machine interaction method and device based on sight tracing and gesture discriminating
CN102467657A (en) Gesture recognizing system and method
US20140258942A1 (en) Interaction of multiple perceptual sensing inputs

Legal Events

Date Code Title Description
C06 Publication
C10 Entry into substantive examination
C14 Grant of patent or utility model