
Man-machine interactive system and real-time gesture tracking processing method for same

Info

Publication number
CN102426480A
Authority
CN
Grant status
Application
Patent type
Prior art keywords
gesture
method
tracking
action
user
Prior art date
Application number
CN 201110342972
Other languages
Chinese (zh)
Inventor
刘远民
陈大炜
Original Assignee
康佳集团股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date

Abstract

The invention discloses a human-machine interaction system and a real-time gesture tracking processing method for the same. The method comprises the steps of: acquiring image information of the user side; performing hand detection with a gesture detection unit to separate the gesture from the background, and automatically determining, by a vision algorithm, a small rectangular box enclosing the hand in the image information as the region of interest; computing, with a gesture tracking unit, the hand contour state of each frame in the video sequence; checking the validity of the hand action according to the computed hand contour state and determining the gesture action completed by the user; and generating a corresponding gesture action control instruction according to the determined gesture action, with the three-dimensional user interface making corresponding feedback according to that instruction. By sensing all or part of the user's gesture actions, the system and method track the gesture accurately, providing a real-time, robust solution for an effective gesture-based human-machine interface using an ordinary vision sensor.

Description

Human-machine interaction system and real-time gesture tracking processing method for the same

TECHNICAL FIELD

[0001] The present invention relates to the field of human-computer interaction, and in particular to a human-machine interaction system and a real-time gesture tracking processing method for the same.

BACKGROUND

[0002] Human-computer interaction technology is one of the fastest-growing areas of user interface research, and countries around the world attach great importance to it. Among the national critical technologies of the United States, the human-machine interface is listed as one of six key information technologies, alongside software and computers. Among US defense critical technologies, the human-machine interface is not only an important part of software technology but also one of eleven key technologies, on a par with computer and software technology. The European Community's European Strategic Programme on Research in Information Technology (ESPRIT) also established a dedicated user interface technology project, including the Multi-Modal Interface for Man-Machine Interaction. Maintaining a lead in this field is essential to intelligent computer systems as a whole.

[0003] About 80% of the information humans acquire comes from vision; therefore, studying machine-vision-based human-computer interaction from the perspective of cognitive psychology is an important approach to the interaction problem. Gestures are a very natural and intuitive interaction channel, so research on gesture detection, tracking, and recognition not only helps realize natural human-computer interaction but also helps robots acquire skills by imitating a user's demonstrated actions.

[0004] Because gestures are diverse, ambiguous, and vary in time and space, and because the human hand is a complex deformable body and vision itself is ill-posed, vision-based gesture recognition is a multidisciplinary and challenging research topic.

[0005] There are currently three main approaches to gesture-based human-computer interaction. The first, represented by MIT, uses devices such as data gloves and data suits to track hand and body motion. The second, represented by Microsoft's motion-sensing games, uses a depth camera and an RGB camera to track hand and body position. Both of these approaches are costly and therefore unsuitable for wide deployment by manufacturers, especially in the highly competitive home appliance industry. The third is the well-known HandVu, which works with an ordinary camera and offers low cost and good real-time performance, but its tracking is strongly affected by the environment, and it cannot adequately handle tracking failures caused by illumination changes and complex backgrounds.

[0006] The Kinect motion-sensing game system launched by Microsoft in 2010 is popular with consumers because of its natural and intuitive interaction. Its dual cameras (a depth camera and an RGB camera) facilitate multi-sensor information fusion and thus achieve high gesture detection and tracking accuracy, but at high cost. In contrast, a real-time gesture detector and tracker based on an ordinary single camera has a strong cost advantage, but its hand tracking and detection accuracy is limited. The main reasons are: (1) the hand is not a rigid body and may deform to varying degrees during motion; (2) illumination conditions affect and change the hand's appearance; (3) there is no confidence metric for target tracking, so the tracking failure that occurs when the system locks onto another target is hard to resolve.

[0007] The prior art therefore remains to be improved and developed.

SUMMARY

[0008] In view of the above drawbacks of the prior art, the technical problem the present invention solves is to provide a human-machine interaction system and a real-time gesture tracking processing method for the same. The invention addresses the inaccurate tracking and detection of non-rigid targets such as the hand under an ordinary single camera, as well as the tracking failures caused by illumination changes and complex backgrounds. Using computer vision and image processing techniques, it achieves automatic hand detection, tracking, and gesture recognition that is real-time, robust, and easy to implement and operate, enabling computer users to interact with a computer through hand gestures in a more natural, intuitive, and intelligent way.

[0009] The technical solution adopted by the present invention to solve the above technical problem is as follows:

A real-time gesture tracking processing method for a human-machine interaction system, comprising the steps of:

A. acquiring image information of the user side and performing corresponding image denoising and enhancement;

B. performing hand detection on the processed image information by a gesture detection unit to separate the gesture from the background, and automatically determining, by a vision algorithm, a small rectangular box enclosing the hand in the image information as the region of interest;

C. performing, by a gesture tracking unit, sub-pixel tracking of gesture feature points within the region of interest of the image information, and computing the hand contour state of each frame in the video sequence;

D. checking the validity of the hand action according to the computed hand contour state, performing gesture recognition to classify the trajectory of a predefined gesture completed by the user, and determining the gesture action the user performed;

E. generating a corresponding gesture action control instruction according to the determined gesture action, and sending the instruction to the three-dimensional user interface;

F. making corresponding feedback by the three-dimensional user interface according to the gesture action control instruction.
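Steps A–F form one per-frame processing loop. The following sketch shows that control flow only; the unit interfaces (`detector`, `tracker`, `recognizer`, `ui`) and helper names are hypothetical stand-ins for the units described above, not an implementation from the patent.

```python
def denoise_and_enhance(frame):
    # Step A: placeholder preprocessing; a real system would filter noise
    # and enhance the target region here.
    return frame

def make_command(gesture):
    # Step E: wrap a recognized gesture as a control instruction.
    return {"action": gesture}

class Pipeline:
    """Per-frame gesture pipeline (steps A-F); units are injected callables."""

    def __init__(self, detector, tracker, recognizer, ui):
        self.detector, self.tracker = detector, tracker
        self.recognizer, self.ui = recognizer, ui

    def process_frame(self, frame):
        image = denoise_and_enhance(frame)       # step A
        roi = self.detector(image)               # step B: box enclosing the hand
        if roi is None:
            return None                          # no hand in this frame
        state = self.tracker(image, roi)         # step C: hand contour state
        gesture = self.recognizer(state)         # step D: validity + recognition
        if gesture is None:
            return None                          # no completed gesture yet
        command = make_command(gesture)          # step E
        self.ui(command)                         # step F: UI feedback
        return command
```

A usage example would wire in real detection and tracking units; here even trivial lambdas exercise the flow.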

[0010] In the real-time gesture tracking processing method of the human-machine interaction system, before step A the method further comprises: a. displaying, by a stereoscopic image display unit, a three-dimensional image and a three-dimensional graphical user interface.

[0011] In the real-time gesture tracking processing method of the human-machine interaction system, step A specifically comprises: A1. acquiring, by a video image acquisition unit, depth image information of the user's environment;

A2. performing, by an image processing unit, denoising and target enhancement on the image information acquired by the video image acquisition unit.

[0012] In the real-time gesture tracking processing method of the human-machine interaction system, the hand contour state in step C comprises the position, rotation angle, scale, and the length and angle of each finger.

[0013] In the real-time gesture tracking processing method of the human-machine interaction system, step D further comprises: a gesture action is judged to have started when, among the hand detection results of 20 consecutive frames, more than 12 frames detect the hand at the same position.

[0014] In the real-time gesture tracking processing method of the human-machine interaction system, the gesture actions in step D comprise: move left, move right, move up, and move down.

[0015] In the real-time gesture tracking processing method of the human-machine interaction system, generating the corresponding gesture action control instruction according to the determined gesture action in step E comprises:

E1. recognizing a stationary gesture position as a click command and generating the corresponding click control instruction; E2. recognizing fast leftward, rightward, upward, and downward movements of the gesture position as the left, right, up, and down commands, and generating the corresponding move-left, move-right, move-up, and move-down control instructions;

E3. recognizing a waving gesture as the close action and generating the corresponding close control instruction.

[0016] A human-machine interaction system, comprising:

a video image acquisition unit for acquiring depth image information of the user's environment;

an image processing unit for performing denoising and target enhancement on the image information acquired by the video image acquisition unit; a gesture detection unit for performing hand detection on the processed image information, separating the gesture from the background, and automatically determining, by a vision algorithm, a small rectangular box enclosing the hand in the image information as the region of interest;

a gesture tracking unit for performing sub-pixel tracking of gesture feature points within the region of interest of the image information and computing the hand contour state of each frame in the video sequence;

a gesture validity detection and gesture action confirmation unit for checking the validity of the hand action according to the computed hand contour state, performing gesture recognition to classify the trajectory of a predefined gesture completed by the user, and determining the gesture action the user performed;

a gesture control instruction generation unit for generating a corresponding gesture action control instruction according to the determined gesture action and sending the instruction to the three-dimensional user interface;

a stereoscopic image display unit for displaying a three-dimensional image and a three-dimensional graphical user interface, and for making corresponding feedback according to the gesture action control instruction.

[0017] In the human-machine interaction system, the video image acquisition unit is a camera.

[0018] In the human-machine interaction system, the hand contour state comprises the position, rotation angle, scale, and the length and angle of each finger;

the gesture control instruction generation unit further comprises:

a first generation module for recognizing a stationary gesture position as a click command and generating the corresponding click control instruction;

a second generation module for recognizing fast leftward, rightward, upward, and downward movements of the gesture position as the left, right, up, and down commands, and generating the corresponding move-left, move-right, move-up, and move-down control instructions;

a third generation module for recognizing a waving gesture as the close action and generating the corresponding close control instruction.

[0019] In the human-machine interaction system and real-time gesture tracking processing method provided by the present invention, an image sensing and processing unit on a three-dimensional display device senses all or part of a user's gesture actions and tracks the gesture accurately, thereby providing a real-time gesture tracking solution for an effective gesture-based human-machine interface using an ordinary vision sensor. Using computer vision and image processing techniques, the invention achieves automatic hand detection, tracking, and gesture recognition that is real-time, robust, and easy to implement and operate, enabling computer users to interact with a computer through hand gestures in a more natural, intuitive, and intelligent way. It can be applied to smart appliances, human-computer interaction, and virtual reality platforms, including the interaction of smart TVs and other smart home appliance products, motion-sensing games of all kinds, and virtual-reality platform products; the invention therefore also has significant economic and application value.

BRIEF DESCRIPTION OF THE DRAWINGS

[0020] FIG. 1 is a flowchart of the real-time gesture tracking processing method of the human-machine interaction system according to an embodiment of the present invention.

[0021] FIG. 2 is a diagram of the cascaded structure of the hand classifier according to an embodiment of the present invention.

[0022] FIG. 3 is a functional block diagram of the human-machine interaction system according to an embodiment of the present invention.

[0023] FIG. 4 is a schematic diagram of the hardware structure of the human-machine interaction system according to an embodiment of the present invention.

DETAILED DESCRIPTION

[0024] To make the objectives, technical solutions, and advantages of the present invention clearer, the human-machine interaction system and real-time gesture tracking processing method provided by the invention are described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to illustrate the present invention and are not intended to limit it.

[0025] The hardware required in the embodiment of the present invention is shown in FIG. 4: a computer 300 and a video image capture device 400. The real-time gesture tracking processing method of the human-machine interaction system provided by the embodiment, shown in FIG. 1, comprises the steps of:

Step S110: the stereoscopic image display unit displays a three-dimensional image and a three-dimensional graphical user interface.

[0026] For example, a three-dimensional image and a three-dimensional graphical user interface supporting human-computer interaction are displayed on the screen of the computer 300 of the human-machine interaction system.

[0027] Step S120: image information of the user side is acquired, and corresponding image denoising and enhancement are performed.

[0028] For example, when human-computer interaction is needed, a video image acquisition unit (such as a camera) acquires depth image information of the environment of the user (500 in FIG. 4); the image processing unit then performs denoising and target enhancement on the acquired image information, providing a sound basis for the subsequent gesture detection and tracking. The method then proceeds to step S130.

[0029] Step S130: hand detection is performed on the processed image information by the gesture detection unit to separate the gesture from the background, and a small rectangular box enclosing the hand is automatically determined in the image information as the region of interest by a vision algorithm.

[0030] This step separates the gesture from the background, which facilitates feature point extraction during target tracking, and sets the region of interest, which helps guarantee the real-time requirements of the system.

[0031] The hand detection in the embodiment of the present invention uses histogram of oriented gradients (HOG) features and is implemented by a statistical learning method based on Adaboost.

[0032] The statistical learning method used to learn the hand pattern is the Adaboost algorithm. Adaboost is a mature algorithm used extremely widely in face detection; by repeatedly invoking a weak learner, it keeps learning the training samples that are hard to learn, thereby achieving high generalization accuracy. The main procedure of Adaboost is: first, a training sample set is given; the sample set is then iterated over, each iteration training a weak classifier on the selected feature; the error rate of that hypothesis is computed, and the weight of each example is updated according to the error rate before entering the next iteration; several weak classifiers are cascaded into a strong classifier. The final classifier is a cascade of a series of similar strong classifiers, and its discriminative power increases with the number of strong classifiers in the cascade, as shown in FIG. 2, where 1, 2, ..., M are the cascaded strong classifiers, T indicates that a candidate region is accepted by a strong classifier (i.e., considered a hand region), and F indicates that a candidate region is rejected by a strong classifier and excluded (i.e., considered a non-hand region). Only a candidate region accepted by all strong classifiers is considered a true hand region; if any strong classifier rejects it, it is considered a non-hand region.
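The decision logic just described can be sketched as follows. The stage functions and feature names below are arbitrary placeholders (a real system would use Adaboost-trained strong classifiers over HOG features); only the weighted-vote and cascade rules are illustrated.

```python
def strong_classify(region, weak_learners, alphas, threshold=0.0):
    """A strong classifier as Adaboost builds it: a weighted vote of weak
    learners, each weak learner voting +1 (hand) or -1 (non-hand)."""
    score = sum(a * (1 if h(region) else -1)
                for h, a in zip(weak_learners, alphas))
    return score >= threshold

def cascade_accepts(region, stages):
    """The cascade rule of FIG. 2: a candidate region is a hand region only
    if every strong classifier accepts it (T at every stage); a single
    rejection (F) excludes it as a non-hand region."""
    for stage in stages:
        if not stage(region):
            return False
    return True
```

Because early stages reject most non-hand regions cheaply, the cascade keeps per-frame detection fast, which matches the real-time goal stated above.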

[0033] Step S140: sub-pixel tracking of gesture feature points is performed by the gesture tracking unit within the region of interest of the image information, and the hand contour state of each frame in the video sequence is computed.

Gesture tracking provides the information for the subsequent gesture analysis, where the hand contour state comprises the position, rotation angle, scale, and the length and angle of each finger.

[0034] The term sub-pixel is explained as follows: on the imaging plane of an area-scan camera, the pixel is the smallest unit. A certain CMOS imaging chip, for example, has a pixel pitch of 5.2 micrometers. When the camera captures a scene, the continuous image of the physical world is discretized, and each pixel on the imaging plane represents only the color in its vicinity. How near is "in its vicinity"? That is hard to pin down: two pixels are 5.2 micrometers apart, which macroscopically can be regarded as adjoining, but microscopically there are infinitely many smaller positions between them. These smaller positions are what we call "sub-pixels". Sub-pixels do in fact exist; there simply is no sensor fine enough in hardware to measure them directly, so they are computed approximately in software.
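A common way such sub-pixel positions are "computed approximately in software" is to fit a parabola through a discrete peak (e.g., of a matching or corner-response score) and its two neighbors; the fitted vertex gives a fractional offset that refines the integer pixel location. This is a generic illustration of the idea under that assumption, not the specific refinement method of the patent.

```python
def subpixel_peak_offset(f_minus, f0, f_plus):
    """Fractional offset of the vertex of the parabola through three
    neighboring samples (at x = -1, 0, +1) around a local maximum.
    For a true interior peak the result lies in [-0.5, 0.5] pixels."""
    denom = f_minus - 2.0 * f0 + f_plus
    if denom == 0.0:
        return 0.0  # flat neighborhood: keep the integer location
    return 0.5 * (f_minus - f_plus) / denom
```

For samples drawn from an exact parabola the recovered offset is exact; for real response curves it is an approximation that improves as the peak is better resolved.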

[0035] Step S150: the validity of the hand action is checked according to the computed hand contour state, gesture recognition is performed to classify the trajectory of a predefined gesture completed by the user, and the gesture action the user performed is determined. In this embodiment, the gesture actions include: move left, move right, move up, and move down. A gesture action is judged to have started when, among the hand detection results of 20 consecutive frames, more than 12 frames detect the hand at the same position.

[0036] The gesture recognition in this embodiment of the present invention is implemented by a hidden Markov model. The gesture recognition steps of the invention comprise:

Step 151: the gesture trajectory obtained from contour tracking is preprocessed to remove dense points, yielding a preprocessed trajectory; Step 152: direction-code features are extracted from the preprocessed trajectory and normalized; Step 153: the forward algorithm is used to compute the probability of the features from step 152 under each gesture model, and the model with the maximum probability is taken as the recognition result.
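Steps 152 and 153 can be illustrated with an 8-direction chain code and the standard HMM forward algorithm. The direction convention (0 = right, 2 = up, with y increasing upward) and the toy model parameters in the usage below are assumptions for illustration; real models would be trained per gesture class on recorded trajectories.

```python
import math

def direction_codes(traj):
    """Step 152: quantize each trajectory segment into one of 8 direction
    codes (0 = right, 2 = up, 4 = left, 6 = down, odd codes diagonal)."""
    codes = []
    for (x0, y0), (x1, y1) in zip(traj, traj[1:]):
        ang = math.atan2(y1 - y0, x1 - x0) % (2 * math.pi)
        codes.append(int((ang + math.pi / 8) // (math.pi / 4)) % 8)
    return codes

def forward_prob(obs, pi, A, B):
    """Step 153: probability of the observation sequence under one HMM
    (pi: initial probs, A: transitions, B: per-state emission probs)."""
    n = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [B[s][o] * sum(alpha[t] * A[t][s] for t in range(n))
                 for s in range(n)]
    return sum(alpha)

def recognize(traj, models):
    """Classify a trajectory as the gesture model of maximum probability."""
    obs = direction_codes(traj)
    return max(models, key=lambda name: forward_prob(obs, *models[name]))
```

For long sequences a real implementation would work in log space to avoid underflow; the plain product suffices for short gesture trajectories.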

[0037] The hand contour tracking of the present invention is implemented by combining conditional probability density propagation with a heuristic scanning technique. The steps of the contour tracking algorithm are as follows:

Step 51: the conditional probability density propagation (Condensation) algorithm is used to track the translation, rotation, and scaling motion components of the contour, yielding a number of candidate contours whose finger state components are not yet determined;

Step 52: for each candidate contour with determined translation, rotation, and scaling components, the length and angle of each finger are adjusted step by step to obtain the finger motion state components of each contour, producing final candidate contours with all state components determined;

Step 53: one contour is produced from all final candidate contours as the tracking result.
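Condensation (step 51) is a particle filter: the candidate contour states are resampled in proportion to their weights, diffused by a motion model, and reweighted against the measurement. The one-dimensional sketch below tracks only a scalar translation component with assumed Gaussian noise parameters; a real tracker would carry translation, rotation, and scale together, and step 53 corresponds to the weighted-mean estimate.

```python
import math
import random

def condensation_step(particles, z, rng, motion_sigma=0.2, meas_sigma=0.5):
    """One Condensation iteration on scalar translation states.
    `particles` is a list of (state, weight) pairs; `z` is the measurement."""
    states, weights = zip(*particles)
    # 1. factored sampling: resample states in proportion to their weights
    sampled = rng.choices(states, weights=weights, k=len(states))
    new = []
    for s in sampled:
        s = s + rng.gauss(0.0, motion_sigma)                    # 2. predict
        w = math.exp(-((s - z) ** 2) / (2 * meas_sigma ** 2))   # 3. measure
        new.append((s, w))
    total = sum(w for _, w in new) or 1.0
    return [(s, w / total) for s, w in new]                     # normalize

def estimate(particles):
    """Tracking output (cf. step 53): the weighted mean state."""
    return sum(s * w for s, w in particles)
```

Starting from particles spread uniformly over the search range, a few iterations concentrate the set around the measured translation.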

[0038] Step S160: a corresponding gesture action control instruction is generated according to the determined gesture action, and the instruction is sent to the three-dimensional user interface.

In this step, generating the corresponding gesture action control instruction according to the determined gesture action comprises: E1. recognizing a stationary gesture position as a click command and generating the corresponding click control instruction; E2. recognizing fast leftward, rightward, upward, and downward movements of the gesture position as the left, right, up, and down commands, and generating the corresponding move-left, move-right, move-up, and move-down control instructions;

E3. recognizing a waving gesture as the close action and generating the corresponding close control instruction.
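Rules E1–E3 amount to a classifier over the tracked gesture's displacement and speed. The thresholds and the waving test (counting sign changes of horizontal velocity) below are assumed values for illustration; the text does not specify them.

```python
def gesture_to_command(dx, dy, speed, sign_changes=0,
                       move_thresh=30.0, speed_thresh=200.0, wave_changes=3):
    """Map a tracked hand displacement (pixels) and speed (pixels/s) to a
    control instruction per rules E1-E3; all thresholds are illustrative."""
    if sign_changes >= wave_changes:
        return "close"                        # E3: waving -> close
    if abs(dx) < move_thresh and abs(dy) < move_thresh:
        return "click"                        # E1: stationary hand -> click
    if speed >= speed_thresh:                 # E2: fast directional movement
        if abs(dx) >= abs(dy):
            return "left" if dx < 0 else "right"
        return "up" if dy < 0 else "down"     # image y axis points down
    return None                               # slow drift: no command
```

The returned string would be wrapped as the control instruction sent to the three-dimensional user interface in step S160.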

[0039] Step S170: the three-dimensional user interface makes corresponding feedback according to the gesture action control instruction. For example, under the user's gesture action control, the three-dimensional user interface shown by the stereoscopic image display unit performs the corresponding action display.

[0040] As can be seen from the above, this embodiment of the present invention senses all or part of a user's gesture actions and tracks the gesture accurately, thereby providing a real-time, robust solution for an effective gesture-based human-machine interface using an ordinary vision sensor.

Based on the above embodiment, an embodiment of the present invention further provides a human-machine interaction system, as shown in FIG. 3, mainly comprising: a video image acquisition unit 210 for acquiring depth image information of the user's environment, as described in step S120 above, wherein the video image acquisition unit is a camera;

[0041] an image processing unit 220 for performing denoising and target enhancement on the image information acquired by the video image acquisition unit, as described in step S120 above;

[0042] a gesture detection unit 230 for performing hand detection on the processed image information, separating the gesture from the background, and automatically determining, by a vision algorithm, a small rectangular box enclosing the hand in the image information as the region of interest, as described in step S130 above;

[0043] a gesture tracking unit 240 for performing sub-pixel tracking of gesture feature points within the region of interest of the image information and computing the hand contour state of each frame in the video sequence, as described in step S140 above, wherein the hand contour state comprises the position, rotation angle, scale, and the length and angle of each finger;

[0044] a gesture validity detection and gesture action confirmation unit 250 for checking the validity of the hand action according to the computed hand contour state, performing gesture recognition to classify the trajectory of a predefined gesture completed by the user, and determining the gesture action the user performed, as described in step S150 above;

[0045] a gesture control instruction generation unit 260 for generating a corresponding gesture action control instruction according to the determined gesture action and sending the instruction to the three-dimensional user interface, as described in step S160 above;

[0046] a stereoscopic image display unit 270 for displaying a three-dimensional image and a three-dimensional graphical user interface, and for making corresponding feedback according to the gesture action control instruction, as described in step S170 above.

[0047] The gesture control instruction generation unit further comprises:

a first generation module for recognizing a stationary gesture position as a click command and generating the corresponding click control instruction;

a second generation module for recognizing fast leftward, rightward, upward, and downward movements of the gesture position as the left, right, up, and down commands, and generating the corresponding move-left, move-right, move-up, and move-down control instructions;

a third generation module for recognizing a waving gesture as the close action and generating the corresponding close control instruction.

[0048] In summary, in the human-machine interaction system and real-time gesture tracking processing method provided by the present invention, an image sensing and processing unit on a three-dimensional stereoscopic display device senses all or part of the user's gesture actions and tracks the gesture accurately, thereby providing a real-time, robust solution for an effective gesture-based human-machine interface built on an ordinary vision sensor. Using computer vision and image processing techniques, the present invention achieves automatic hand detection, tracking, and gesture recognition; it is real-time, robust, and easy to implement and operate, and enables computer users to interact with the computer through hand postures in a more natural, intuitive, and intelligent way. It can be applied to smart appliances, human-computer interaction, and virtual reality platforms, including human-computer interaction for smart TVs and other smart home appliances, motion-sensing games, and various virtual-reality platform products; the present invention therefore also has significant economic and application value.

[0049] It should be understood that the application of the present invention is not limited to the above examples; those of ordinary skill in the art may make improvements or modifications based on the above description, and all such improvements and modifications shall fall within the scope of protection of the appended claims of the present invention.

Claims (10)

1. A real-time gesture tracking processing method for a human-machine interaction system, characterized by comprising the steps of: A. acquiring image information from the user side and performing corresponding image denoising and enhancement; B. performing hand detection on the processed image information with a gesture detection unit to separate the gesture from the background, and automatically determining, by a vision algorithm, a small rectangle enclosing the hand in the image information as the region of interest; C. performing, with a gesture tracking unit, sub-pixel tracking of gesture feature points within the region of interest, and computing the hand-contour state for each frame of the video sequence; D. checking the validity of the hand action based on the computed hand-contour state, performing gesture recognition to classify the trajectory of a predefined gesture completed by the user, and determining the gesture action performed; E. generating the corresponding gesture control command according to the determined gesture action and sending the command to the three-dimensional user interface; F. the three-dimensional user interface providing the corresponding feedback according to the gesture control command.
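Step B, separating the hand from the background and taking the smallest enclosing rectangle as the region of interest, can be sketched as follows. This assumes a depth camera (as in claim 3) and substitutes a simple depth-band threshold for the patent's unspecified vision algorithm; the depth band and margin are illustrative assumptions.

```python
import numpy as np

def hand_roi(depth: np.ndarray, near: int, far: int, margin: int = 4):
    """Return the smallest rectangle (x0, y0, x1, y1) enclosing pixels
    whose depth falls in [near, far], padded by `margin` pixels, or
    None when no pixel falls in the band (no hand detected)."""
    mask = (depth >= near) & (depth <= far)   # crude hand/background separation
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    h, w = depth.shape
    return (max(0, int(xs.min()) - margin), max(0, int(ys.min()) - margin),
            min(w - 1, int(xs.max()) + margin), min(h - 1, int(ys.max()) + margin))
```

Restricting the tracker of step C to this rectangle is what keeps the per-frame feature-point tracking cheap enough to run in real time.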
2. The real-time gesture tracking processing method for a human-machine interaction system according to claim 1, characterized in that before step A the method further comprises: a. a stereoscopic display unit displaying a three-dimensional image and a three-dimensional graphical user interface.
3. The real-time gesture tracking processing method for a human-machine interaction system according to claim 1, characterized in that step A specifically comprises: A1. a video image acquisition unit acquiring depth-image information of the environment in which the user is located; A2. an image processing unit denoising the image information acquired by the video image acquisition unit and enhancing the target.
4. The real-time gesture tracking processing method for a human-machine interaction system according to claim 1, characterized in that the hand-contour state in step C comprises the position, rotation angle, scale, and the length and angle of each finger.
5. The real-time gesture tracking processing method for a human-machine interaction system according to claim 1, characterized in that step D further comprises: the criterion for deciding that a gesture action has started is that, among the hand detection results of 20 consecutive frames, more than 12 frames detect the hand at the same position.
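The 12-of-20-frames start criterion of claim 5 can be sketched directly. The claim does not say how "the same position" is measured; anchoring on the newest detection and using a fixed pixel radius are illustrative assumptions.

```python
def gesture_started(positions, radius=15.0, window=20, required=12):
    """Claim 5's start criterion as a sketch: among the last `window`
    consecutive hand detections, at least `required` must place the hand
    at approximately the same position.

    `positions` is a sequence of (x, y) hand detections, newest last.
    The pixel radius that counts as "the same position" is an assumption.
    """
    recent = list(positions)[-window:]
    if len(recent) < window:
        return False                      # not enough detection history yet
    ax, ay = recent[-1]                   # anchor on the newest detection
    near = sum(1 for (x, y) in recent
               if ((x - ax) ** 2 + (y - ay) ** 2) ** 0.5 <= radius)
    return near >= required
```

Only once this predicate fires would the system begin classifying the subsequent trajectory as one of the predefined gestures.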
6. The real-time gesture tracking processing method for a human-machine interaction system according to claim 1, characterized in that the gesture actions in step D include: move left, move right, move up, and move down.
7. The real-time gesture tracking processing method for a human-machine interaction system according to claim 1, characterized in that generating the corresponding gesture control command according to the determined gesture action in step E comprises: E1. recognizing a stationary hand position as a click action and generating the corresponding click control command; E2. recognizing rapid leftward, rightward, upward, and downward movements of the hand as the four commands left, right, up, and down, and generating the corresponding move-left, move-right, move-up, and move-down control commands; E3. recognizing a waving motion as a close action and generating the corresponding close control command.
8. A human-machine interaction system, characterized by comprising: a video image acquisition unit for acquiring depth-image information of the environment in which the user is located; an image processing unit for denoising the image information acquired by the video image acquisition unit and enhancing the target; a gesture detection unit for performing hand detection on the processed image information, separating the gesture from the background, and automatically determining, by a vision algorithm, a small rectangle enclosing the hand in the image information as the region of interest; a gesture tracking unit for performing sub-pixel tracking of gesture feature points within the region of interest and computing the hand-contour state for each frame of the video sequence; a gesture-validity detection and gesture-action confirmation unit for checking the validity of the hand action based on the computed hand-contour state, performing gesture recognition to classify the trajectory of a predefined gesture completed by the user, and determining the gesture action performed; a gesture control-command generation unit for generating the corresponding gesture control command according to the determined gesture action and sending the command to the three-dimensional user interface; and a stereoscopic display unit for displaying a three-dimensional image and a three-dimensional graphical user interface and providing the corresponding feedback according to the gesture control command.
9. The human-machine interaction system according to claim 8, characterized in that the video image acquisition unit is a camera.
10. The human-machine interaction system according to claim 8, characterized in that the hand-contour state comprises the position, rotation angle, scale, and the length and angle of each finger; and the gesture control-command generation unit further comprises: a first generation module for recognizing a stationary hand position as a click action and generating the corresponding click control command; a second generation module for recognizing rapid leftward, rightward, upward, and downward movements of the hand as the four commands left, right, up, and down, and generating the corresponding move-left, move-right, move-up, and move-down control commands; and a third generation module for recognizing a waving motion as a close action and generating the corresponding close control command.
CN 201110342972 2011-11-03 2011-11-03 Man-machine interactive system and real-time gesture tracking processing method for same CN102426480A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110342972 CN102426480A (en) 2011-11-03 2011-11-03 Man-machine interactive system and real-time gesture tracking processing method for same

Publications (1)

Publication Number Publication Date
CN102426480A (en) 2012-04-25

Family

ID=45960476

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110342972 CN102426480A (en) 2011-11-03 2011-11-03 Man-machine interactive system and real-time gesture tracking processing method for same

Country Status (1)

Country Link
CN (1) CN102426480A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1999034327A2 (en) * 1997-12-23 1999-07-08 Koninklijke Philips Electronics N.V. System and method for permitting three-dimensional navigation through a virtual reality environment using camera-based gesture inputs
CN101689244A (en) * 2007-05-04 2010-03-31 格斯图尔泰克股份有限公司 Camera-based user input for compact devices
US20100166258A1 (en) * 2008-12-30 2010-07-01 Xiujuan Chai Method, apparatus and computer program product for providing hand segmentation for gesture analysis
CN102081918A (en) * 2010-09-28 2011-06-01 北京大学深圳研究生院 Video image display control method and video image display device
CN102117117A (en) * 2010-01-06 2011-07-06 致伸科技股份有限公司 System and method for control through identifying user posture by image extraction device
CN102789568A (en) * 2012-07-13 2012-11-21 浙江捷尚视觉科技有限公司 Gesture identification method based on depth information


Cited By (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103389793A (en) * 2012-05-07 2013-11-13 深圳泰山在线科技有限公司 Human-computer interaction method and human-computer interaction system
CN102693084B (en) * 2012-05-08 2016-08-03 上海鼎为电子科技(集团)有限公司 A mobile terminal and a method of operation in response to
CN102693084A (en) * 2012-05-08 2012-09-26 上海鼎为软件技术有限公司 Mobile terminal and method for response operation of mobile terminal
CN102722249A (en) * 2012-06-05 2012-10-10 上海鼎为软件技术有限公司 Manipulating method, manipulating device and electronic device
CN102722249B (en) * 2012-06-05 2016-03-30 上海鼎为电子科技(集团)有限公司 Control method, control device and electronic device
CN102769802A (en) * 2012-06-11 2012-11-07 西安交通大学 Man-machine interactive system and man-machine interactive method of smart television
CN102799263A (en) * 2012-06-19 2012-11-28 深圳大学 Posture recognition method and posture recognition control system
CN102799263B (en) * 2012-06-19 2016-05-25 深圳大学 A gesture recognition method and gesture recognition control system
CN104662561A (en) * 2012-06-27 2015-05-27 若威尔士有限公司 Skin-based user recognition
CN102789568A (en) * 2012-07-13 2012-11-21 浙江捷尚视觉科技有限公司 Gesture identification method based on depth information
CN102789568B (en) * 2012-07-13 2015-03-25 浙江捷尚视觉科技股份有限公司 Gesture identification method based on depth information
CN103777744A (en) * 2012-10-23 2014-05-07 中国移动通信集团公司 Method and device for achieving input control and mobile terminal
CN102982557A (en) * 2012-11-06 2013-03-20 桂林电子科技大学 Method for processing space hand signal gesture command based on depth camera
CN102982557B (en) * 2012-11-06 2015-03-25 桂林电子科技大学 Method for processing space hand signal gesture command based on depth camera
CN102945078A (en) * 2012-11-13 2013-02-27 深圳先进技术研究院 Human-computer interaction equipment and human-computer interaction method
CN102981742A (en) * 2012-11-28 2013-03-20 无锡市爱福瑞科技发展有限公司 Gesture interaction system based on computer visions
CN103139627A (en) * 2013-02-07 2013-06-05 上海集成电路研发中心有限公司 Intelligent television and gesture control method thereof
CN103136541A (en) * 2013-03-20 2013-06-05 上海交通大学 Double-hand three-dimensional non-contact type dynamic gesture identification method based on depth camera
CN103136541B (en) * 2013-03-20 2015-10-14 上海交通大学 Based on the three-dimensional non-contact hands depth camera dynamic gesture recognition method
CN103227962A (en) * 2013-03-29 2013-07-31 上海集成电路研发中心有限公司 Method capable of identifying distance of line formed by image sensors
CN104143075A (en) * 2013-05-08 2014-11-12 光宝科技股份有限公司 Gesture judging method applied to electronic device
WO2014183262A1 (en) * 2013-05-14 2014-11-20 Empire Technology Development Llc Detection of user gestures
US9740295B2 (en) 2013-05-14 2017-08-22 Empire Technology Development Llc Detection of user gestures
CN103399629A (en) * 2013-06-29 2013-11-20 华为技术有限公司 Method and device for capturing gesture displaying coordinates
CN103713738A (en) * 2013-12-17 2014-04-09 武汉拓宝电子系统有限公司 Man-machine interaction method based on visual tracking and gesture recognition
CN103713738B (en) * 2013-12-17 2016-06-29 武汉拓宝科技股份有限公司 Human-computer interaction method for visual tracking and gesture recognition based
CN103869986A (en) * 2014-04-02 2014-06-18 中国电影器材有限责任公司 Dynamic data generating method based on KINECT
CN104978014A (en) * 2014-04-11 2015-10-14 维沃移动通信有限公司 Method for quickly calling application program or system function, and mobile terminal thereof
CN104252231A (en) * 2014-09-23 2014-12-31 河南省辉耀网络技术有限公司 Camera based motion sensing recognition system and method
CN104331149A (en) * 2014-09-29 2015-02-04 联想(北京)有限公司 Control method, control device and electronic equipment
WO2016058303A1 (en) * 2014-10-14 2016-04-21 京东方科技集团股份有限公司 Application control method and apparatus and electronic device
CN104281265B (en) * 2014-10-14 2017-06-16 京东方科技集团股份有限公司 A method of controlling an application, device, and electronic apparatus
KR20160060003A (en) * 2014-10-14 2016-05-27 보에 테크놀로지 그룹 컴퍼니 리미티드 A method, a device, and an electronic equipment for controlling an Application Program
KR101718837B1 (en) 2014-10-14 2017-03-22 보에 테크놀로지 그룹 컴퍼니 리미티드 A method, a device, and an electronic equipment for controlling an Application Program
CN104281265A (en) * 2014-10-14 2015-01-14 京东方科技集团股份有限公司 Application program control method, application program control device and electronic equipment
CN104915011A (en) * 2015-06-28 2015-09-16 合肥金诺数码科技股份有限公司 Open environment gesture interaction game system
CN105045399A (en) * 2015-09-07 2015-11-11 哈尔滨市一舍科技有限公司 Electronic device with 3D camera assembly
CN105068662A (en) * 2015-09-07 2015-11-18 哈尔滨市一舍科技有限公司 Electronic device used for man-machine interaction
WO2017075932A1 (en) * 2015-11-02 2017-05-11 深圳奥比中光科技有限公司 Gesture-based control method and system based on three-dimensional displaying
WO2017161496A1 (en) * 2016-03-22 2017-09-28 广东虚拟现实科技有限公司 Fringe set searching method, device and system

Similar Documents

Publication Publication Date Title
Li Hand gesture recognition using Kinect
Oka et al. Real-time fingertip tracking and gesture recognition
Argyros et al. Vision-based interpretation of hand gestures for remote control of a computer mouse
US8166421B2 (en) Three-dimensional user interface
Kim et al. Simultaneous gesture segmentation and recognition based on forward spotting accumulative HMMs
US20110221974A1 (en) System and method for hand gesture recognition for remote control of an internet protocol tv
US20120062736A1 (en) Hand and indicating-point positioning method and hand gesture determining method used in human-computer interaction system
US20120204133A1 (en) Gesture-Based User Interface
US20120202569A1 (en) Three-Dimensional User Interface for Game Applications
Sugano et al. Appearance-based gaze estimation using visual saliency
CN101393599A (en) Game role control method based on human face expression
US20130009865A1 (en) User-centric three-dimensional interactive control environment
US20140375547A1 (en) Touch free user interface
US20120121185A1 (en) Calibrating Vision Systems
US20130069867A1 (en) Information processing apparatus and method and program
Park et al. Real-time 3D pointing gesture recognition for mobile robots with cascade HMM and particle filter
US20130077820A1 (en) Machine learning gesture detection
US20130249786A1 (en) Gesture-based control system
CN102184021A (en) Television man-machine interaction method based on handwriting input and fingertip mouse
CN102854983A (en) Man-machine interaction method based on gesture recognition
WO2013008236A1 (en) System and method for computer vision based hand gesture identification
Lee et al. Hand region extraction and gesture recognition from video stream with complex background through entropy analysis
WO2012081012A1 (en) Computer vision based hand identification
WO2012164562A1 (en) Computer vision based control of a device using machine learning
CN103366610A (en) Augmented-reality-based three-dimensional interactive learning system and method

Legal Events

Date Code Title Description
C06 Publication
C10 Request of examination as to substance