CN103207674B - Electronic presentation system based on somatosensory technology - Google Patents

Electronic presentation system based on somatosensory technology

Info

Publication number
CN103207674B
CN103207674B CN201310091358A
Authority
CN
Grant status
Grant
Patent type
Prior art keywords
gesture
presenter
image
presentation
computer
Prior art date
Application number
CN 201310091358
Other languages
Chinese (zh)
Other versions
CN103207674A (en)
Inventor
吴俊敏
黄治峰
姜文斌
Original Assignee
苏州展科光电科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Grant date

Links

Abstract

The invention discloses an electronic presentation system based on somatosensory (motion-sensing) technology, comprising a screen for displaying content to the audience in real time and a projection device for projecting the presenter's presentation content onto the screen. The system is characterized in that it further comprises a somatosensory device for capturing the presenter's movements and an information processing device connected to the projection device. The somatosensory device transmits images of the presenter's movements to the information processing device, which determines the presenter's presentation commands and presentation content from those images and projects the content onto the screen in real time through the projection device. The system organically combines multimedia technology, graphics and image processing, and machine vision to replace the blackboard, greatly enriching the effect of live presentations.

Description

Electronic presentation system based on somatosensory technology

Technical Field

[0001] The present invention belongs to the technical field of somatosensory (motion-sensing) devices, and in particular relates to an electronic presentation system based on somatosensory technology and a corresponding electronic presentation method.

Background Art

[0002] Presentations are indispensable in classrooms and offices. The blackboard used to be the principal presentation tool, but with the development of multimedia technology it can no longer meet the requirements of multimedia presentation, prompting the emergence of its replacement, the electronic whiteboard.

[0003] At present, electronic whiteboard applications rely mainly on touch-screen technology. A touch screen is usually placed over a display device; the presenter operates the whiteboard by hand or with a stylus, and the whiteboard senses the movement of the hand or stylus to carry out the operation. However, this approach requires a large, high-resolution touch screen and display, placing high demands on equipment and incurring considerable cost. Moreover, the presenter can operate the whiteboard only through the touch screen and cannot move away from the touch device. Because every operation must be performed by touch, the presenter's style and body movements are constrained to some extent, compromising the effectiveness of the presentation. The present invention arises from this problem.

Summary of the Invention

[0004] The object of the present invention is to provide an electronic presentation system based on somatosensory technology, solving the prior-art problem that presentation systems can be operated only through a touch screen, which restricts the presenter's style and body movements and impairs the effectiveness of the presentation.

[0005] To solve these problems of the prior art, the present invention provides the following technical solution:

[0006] An electronic presentation system based on somatosensory technology, comprising a screen for displaying content to the audience in real time and a projection device for projecting the presenter's presentation content onto the screen, characterized in that the system further comprises a somatosensory device for capturing the presenter's movements and an information processing device connected to the projection device; the somatosensory device transmits images of the presenter's movements to the information processing device, which determines the presenter's presentation commands and presentation content from those images and projects the content onto the screen in real time through the projection device.

[0007] Preferably, the somatosensory device is a Kinect device; the Kinect device captures video images of the presenter through the Kinect camera and transmits them to the information processing device over a USB data cable.

[0008] Preferably, the information processing device is a computer connected to the projection device by a video output cable; the computer receives the video images transmitted from the Kinect device and processes them to obtain the presenter's presentation commands and presentation content.

[0009] Preferably, the projection device is a projector.

[0010] Preferably, the screen is placed opposite the projection device and matched to the projection device's projection range.

[0011] Another object of the present invention is to provide an electronic presentation method using the electronic presentation system described above, characterized in that the method comprises the following steps:

[0012] (1) Capture the presenter's movements with the somatosensory device;

[0013] (2) The information processing device determines the presenter's presentation commands and presentation content from the images of the presenter's movements, and projects the content onto the screen in real time through the projection device.

[0014] Preferably, in the method the presentation commands are presenter gesture commands determined from the images of the presenter's movements, and the content the user wants presented in real time is determined from those gesture commands.

[0015] The present invention overcomes the shortcomings of the traditional electronic whiteboard by providing a smaller, cheaper whiteboard product with a better presentation effect. With the aid of the Kinect motion-sensing camera, the presenter stands in front of the camera and operates the electronic whiteboard with gestures to carry out the presentation.

[0016] Specifically, the somatosensory electronic presentation system consists of a Kinect camera, a computer, a projector, and a screen. The Kinect camera captures video images containing depth information and transmits them to the computer over a USB data cable. The computer receives the video images and, using the OpenNI natural-user-interface development kit, parses the video images and the depth information they contain to track the position of the user's hand. A hand segmentation and gesture recognition algorithm then segments the hand image and obtains the gesture. The computer responds to control gestures by generating commands that control a brush drawing on the electronic whiteboard, and drives the projector to project the image onto the screen.

[0017] The projector projects the computer-generated image in real time. The screen displays the image to the audience in real time.

[0018] The present invention effectively increases the diversity of multimedia presentations: it organically combines multimedia technology, graphics and image processing, and machine vision to replace the blackboard, greatly enriching the effect of live presentations.

[0019] Compared with prior-art solutions, the advantages of the present invention are:

[0020] Compared with prior-art touch-screen whiteboards, the Kinect device used by the present invention is small and inexpensive.

[0021] Compared with prior-art touch-screen whiteboards, the computer, projector, and screen used by the present invention are already widely installed in presentation venues such as classrooms and auditoriums, so no additional purchase or installation is needed. This makes the system better suited to current presentation settings.

[0022] The Kinect device used by the present invention lets the presenter operate the electronic whiteboard with gestures, enriching the presenter's movements to some extent, and the presenter does not need to leave the audience to operate a touch screen. In terms of presentation effect, the present invention delivers a better live experience than a touch-screen whiteboard.

Brief Description of the Drawings

[0023] The present invention is further described below with reference to the accompanying drawing and embodiments:

[0024] FIG. 1 is a schematic structural diagram of the Kinect-based electronic presentation system, showing the Kinect device 1, computer 2, projector 3, screen 4, USB cable 5, video cable 6, and user (presenter) 7.

Detailed Description

[0025] The above solution is further illustrated below with specific embodiments. It should be understood that these embodiments are intended to illustrate the present invention, not to limit its scope. The conditions used in the embodiments may be further adjusted according to the conditions of a particular manufacturer; unspecified conditions are normally those of routine experiments.

[0026] Embodiment

[0027] As shown in FIG. 1, the Kinect-based electronic presentation system (electronic whiteboard) of this embodiment comprises: Kinect device 1, computer 2, projector 3, and screen 4.

[0028] The Kinect device 1 captures video images containing depth information and is connected to the computer 2 through the USB cable 5. The Kinect device 1 is placed facing the presenter, and the presenter's range of activity is restricted to the Kinect device's working volume: a horizontal field of view of less than 57°, a vertical field of view of less than 43°, and a sensing depth range of 1.2 m to 3.5 m.
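
The working-volume constraint above is easy to express as a predicate. The sketch below is illustrative, not from the patent: the function name and the convention that the Kinect sits at the origin looking along +z are our assumptions. It checks a 3-D point against the 57° horizontal field of view, 43° vertical field of view, and 1.2–3.5 m depth range:

```python
import math

def in_kinect_working_volume(x, y, z,
                             h_fov_deg=57.0, v_fov_deg=43.0,
                             z_min=1.2, z_max=3.5):
    """Return True if the point (x, y, z) in metres, with the Kinect at the
    origin looking along +z, lies inside the sensor's usable volume:
    within the horizontal/vertical field of view and the depth range."""
    if not (z_min <= z <= z_max):
        return False
    # Angular offset from the optical axis in each direction.
    h_angle = math.degrees(math.atan2(abs(x), z))
    v_angle = math.degrees(math.atan2(abs(y), z))
    return h_angle <= h_fov_deg / 2 and v_angle <= v_fov_deg / 2

# A presenter standing 2 m away, slightly off-centre, is inside the volume;
# one standing 0.8 m away is too close for the depth sensor.
print(in_kinect_working_volume(0.3, 0.1, 2.0))  # True
print(in_kinect_working_volume(0.0, 0.0, 0.8))  # False
```

A tracking loop could use such a check to warn the presenter when they drift out of the sensor's range.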

[0029] The computer 2 receives the video images transmitted from the Kinect device 1 over the USB cable 5, processes them with the OpenNI development kit to obtain gestures, and generates the corresponding commands. It then drives the projector 3, connected by the video output cable 6, to project the picture in real time. The projector 3 projects the picture in real time and is connected to the computer 2 by the video output cable. The screen 4, placed in front of the projector 3, displays the picture to the audience.

[0030] The video image processing proceeds as follows:

[0031] 1. Segment the palm image with a region segmentation algorithm based on fuzzy connectivity. Region connectivity is defined with fuzzy logic: the affinity between adjacent pixels takes a value between 0 and 1 according to the difference between their depth values. A sequence of adjacent pixels forms a path; the connectivity of a path is the minimum affinity between adjacent pixels along the path. The connectivity between any two pixels is defined as the maximum connectivity over all paths between them, and can be computed for any pair of pixels recursively with a dynamic programming algorithm.
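
The fuzzy-connectivity computation can be sketched as a widest-path dynamic programme. The affinity formula below (1 minus the scaled depth difference, clipped to [0, 1]) is an assumption — the patent only requires an affinity in [0, 1] that falls with the depth difference — and the Dijkstra-style max-min propagation is one standard way to realise the recursive computation the text describes:

```python
import heapq

def fuzzy_connectivity(depth, seed, scale=0.5):
    """Connectivity of every pixel to the seed on a 2-D depth map.

    Affinity between 4-neighbours is 1 - min(1, |d1 - d2| / scale), so it
    lies in [0, 1] and decreases as the depth difference grows (the exact
    form is an assumption). A path's strength is the minimum affinity
    along it; connectivity is the maximum strength over all paths,
    computed with a Dijkstra-style dynamic programme (max-min in place
    of the usual min-sum)."""
    rows, cols = len(depth), len(depth[0])
    conn = [[0.0] * cols for _ in range(rows)]
    sr, sc = seed
    conn[sr][sc] = 1.0
    heap = [(-1.0, sr, sc)]               # max-heap via negated strengths
    while heap:
        neg, r, c = heapq.heappop(heap)
        strength = -neg
        if strength < conn[r][c]:
            continue                      # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                aff = 1.0 - min(1.0, abs(depth[r][c] - depth[nr][nc]) / scale)
                s = min(strength, aff)    # path strength = weakest link
                if s > conn[nr][nc]:
                    conn[nr][nc] = s
                    heapq.heappush(heap, (-s, nr, nc))
    return conn

# Toy depth map: a "hand" region at depth ~1.0 m beside background at ~2.0 m.
depth = [[1.0, 1.0, 2.0],
         [1.0, 1.1, 2.0],
         [2.0, 2.0, 2.0]]
conn = fuzzy_connectivity(depth, seed=(0, 0))
# Thresholding the connectivity map (step 4) keeps only the hand region.
mask = [[1 if v > 0.5 else 0 for v in row] for row in conn]
```

On the toy map the thresholded mask keeps exactly the four "hand" pixels and drops the background, which is the segmentation step 4 below performs on real depth frames.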

[0032] 2. Process the video images containing depth information and, using the human-body recognition support in the OpenNI library, obtain the three-dimensional position of the palm node of the body.

[0033] 3. Compute the connectivity between every pixel in the image and the palm: define the palm position as the seed point and construct the connectivity value between every point in the image and the seed point.

[0034] 4. Applying a suitable threshold to the connectivity values segments out the palm image.

[0035] The gesture recognition process comprises the following steps:

[0036] 1. Compute the Hu invariant moments of the palm shape and use them as features to recognize gestures. The Hu moments capture the general characteristics of the palm shape and are invariant to translation and rotation, making them a fairly conventional feature for gesture recognition.
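
As a sketch of how such features behave, the pure-Python snippet below computes just the first two of Hu's seven invariants from a binary palm mask (a production system would normally call a library routine such as OpenCV's `HuMoments`) and demonstrates their translation invariance; the helper names are ours:

```python
def hu_first_two(mask):
    """First two Hu invariant moments of a binary mask (pure-Python sketch).

    Hu's invariants are built from normalised central moments eta_pq,
    which are invariant to translation and scale; the combinations
    h1 = eta20 + eta02 and h2 = (eta20 - eta02)^2 + 4*eta11^2 are in
    addition rotation-invariant, which is what makes them suitable as
    shape features for gesture recognition."""
    pts = [(x, y) for y, row in enumerate(mask)
                  for x, v in enumerate(row) if v]
    n = len(pts)
    cx = sum(x for x, _ in pts) / n          # centroid
    cy = sum(y for _, y in pts) / n

    def eta(p, q):                           # normalised central moment
        mu = sum((x - cx) ** p * (y - cy) ** q for x, y in pts)
        return mu / n ** (1 + (p + q) / 2)

    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return h1, h2

def blob(top, left, h, w, size=20):
    """A size x size frame containing an h x w rectangular blob."""
    return [[1 if top <= r < top + h and left <= c < left + w else 0
             for c in range(size)] for r in range(size)]

# Translation invariance: the same blob at two positions in the frame
# yields identical invariants.
print(hu_first_two(blob(2, 3, 5, 8)) == hu_first_two(blob(10, 6, 5, 8)))  # True
```

Because the invariants depend only on the shape, not its position, template images of the same gesture match wherever the hand appears in the frame.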

[0037] 2. Extract template gestures from the video stream. With the support of the OpenNI library, video images containing depth information can be recorded and saved. Gestures are automatically extracted from the video stream and then sorted purposefully to produce a template gesture library. In this example, the action scenarios commonly used in presentations were recorded, including playing slides, writing text, drawing images, and moving the cursor. The gestures extracted from these video streams were then classified and stored as the template gesture library, comprising about 10 gestures with about 150 template images each.

[0038] 3. Train a gesture classifier with the random forest algorithm. Using the machine learning module of the image processing library OpenCV, a gesture classifier is generated by training the random forest algorithm on the images in the template gesture library.

[0039] 4. Use the trained gesture classifier to recognize newly input palm images and test the classifier's accuracy. Testing is repeated and some parameters adjusted until the accuracy reaches 70% or more.
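
The acceptance criterion in this step is a plain accuracy measurement over held-out labelled gesture images. A minimal sketch, with hypothetical gesture labels standing in for real classifier output (the label strings and counts are illustrative, not from the patent):

```python
def accuracy(predicted, actual):
    """Fraction of gesture predictions that match the ground-truth labels."""
    assert len(predicted) == len(actual) and actual
    hits = sum(1 for p, a in zip(predicted, actual) if p == a)
    return hits / len(actual)

# Hypothetical held-out test run: 8 of 10 gestures recognised correctly.
truth = ["pen_down", "pen_up", "erase", "update", "save",
         "pen_down", "pen_up", "erase", "update", "save"]
preds = ["pen_down", "pen_up", "erase", "update", "save",
         "pen_down", "erase", "erase", "update", "pen_up"]
acc = accuracy(preds, truth)
print(acc >= 0.70)  # True -- this run meets the 70% criterion
```

In the tuning loop described above, a run falling below 0.70 would trigger another round of parameter adjustment and retraining.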

[0040] 5. Use the trained classifier to recognize gestures.

[0041] In this embodiment, the drawing function comprises the following steps:

[0042] 1. Generate gesture response commands for input gestures. The drawing example involves the following five gestures: "pen down", "pen up", "erase", "update", and "save". Each gesture is recognized by the pre-trained gesture classifier. For each recognition result, the computer generates the corresponding response command. For example, upon recognizing the "pen down" gesture, the computer starts recording the palm's positions over time until it recognizes the "pen up" gesture, producing a completed path.

[0043] 2. A drawing operation begins with the "pen down" gesture and ends with the "pen up" gesture. Upon recognizing "pen down", the computer starts recording the palm's positions over time until it recognizes "pen up", producing a completed trajectory. The computer smooths the trajectory to reduce jitter caused by the human hand and to beautify the drawn stroke; exponential smoothing allows the trajectory to be smoothed in real time. After recognizing "pen up", the computer draws the smoothed trajectory on the screen, completing one drawing operation.
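
The real-time smoothing described here can be sketched with the standard exponential-smoothing recurrence; the smoothing factor below is an illustrative choice, not a value given in the patent:

```python
def smooth_trajectory(points, alpha=0.4):
    """Exponentially smooth a stroke in real time.

    Each incoming palm position is blended with the previous smoothed
    position: s_t = alpha * p_t + (1 - alpha) * s_{t-1}. A lower alpha
    means heavier smoothing (more jitter removed, at the cost of lag);
    alpha = 0.4 is only an illustrative default."""
    if not points:
        return []
    sx, sy = points[0]
    out = [(sx, sy)]
    for x, y in points[1:]:
        sx = alpha * x + (1 - alpha) * sx
        sy = alpha * y + (1 - alpha) * sy
        out.append((sx, sy))
    return out

# A jittery horizontal stroke: the smoothed y values stay far closer to 0.
raw = [(0, 0), (1, 3), (2, -3), (3, 3), (4, -3)]
smooth = smooth_trajectory(raw)
assert max(abs(y) for _, y in smooth) < max(abs(y) for _, y in raw)
```

Because each output point depends only on the newest input and the previous output, the filter runs incrementally while the stroke is still being drawn, matching the "real time" requirement in the text.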

[0044] 3. An erase operation begins with the "erase" gesture and ends with the "pen up" gesture. Upon recognizing "erase", the computer starts recording the palm's positions over time and erases the pixels at those positions in the image in real time, until it recognizes the "pen up" gesture.

[0045] 4. An update operation is carried out after the "update" gesture: upon recognizing "update", the computer clears the graphics already drawn on the screen.

[0046] 5. A save operation is carried out after the "save" gesture: upon recognizing "save", the computer saves the graphics already drawn on the screen to the file directory.
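
Taken together, paragraphs [0042]–[0046] describe a small state machine mapping recognised gestures to whiteboard commands. The sketch below models it with a hypothetical controller class: the class and method names, the stroke-list representation of the drawing surface, and the crude erase proximity test are all our assumptions, not the patent's implementation:

```python
class WhiteboardController:
    """Minimal sketch of the gesture-to-command state machine above."""

    def __init__(self):
        self.strokes = []        # finished strokes on the "screen"
        self.current = None      # stroke being recorded, or None
        self.mode = "idle"       # "idle", "draw" or "erase"
        self.saved = []          # snapshots written by "save"

    def on_gesture(self, gesture):
        if gesture == "pen_down":
            self.mode, self.current = "draw", []
        elif gesture == "erase":
            self.mode, self.current = "erase", []
        elif gesture == "pen_up":            # ends both draw and erase
            if self.mode == "draw" and self.current:
                self.strokes.append(self.current)
            self.mode, self.current = "idle", None
        elif gesture == "update":            # clear everything drawn so far
            self.strokes = []
        elif gesture == "save":              # snapshot the current drawing
            self.saved.append(list(self.strokes))

    def on_palm_move(self, pos):
        if self.mode == "draw":
            self.current.append(pos)
        elif self.mode == "erase":
            # Drop strokes passing near the palm (crude proximity test).
            self.strokes = [s for s in self.strokes
                            if all(abs(x - pos[0]) + abs(y - pos[1]) > 1
                                   for x, y in s)]

wb = WhiteboardController()
wb.on_gesture("pen_down")
for p in [(0, 0), (1, 0), (2, 0)]:
    wb.on_palm_move(p)
wb.on_gesture("pen_up")          # one finished stroke on screen
wb.on_gesture("save")            # snapshot holds that stroke
wb.on_gesture("update")          # screen cleared, snapshot untouched
print(len(wb.saved[0]), len(wb.strokes))  # 1 0
```

In the real system the classifier of paragraph [0040] would feed `on_gesture` and the tracked palm position would feed `on_palm_move` on every frame.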

[0047] The present invention may be implemented in software, firmware, hardware, or a combination thereof. It may be included in an article having a computer-usable medium, which contains, for example, computer-readable program code means or logic (e.g., instructions, code, commands) to provide and exploit the capabilities of the present invention. The article of manufacture may be sold as part of a computer system or separately. All such variations are considered part of the claimed invention.

[0048] The above examples only illustrate the technical concept and features of the present invention; their purpose is to enable those familiar with the art to understand the content of the present invention and implement it accordingly, and they do not limit the scope of protection of the present invention. All equivalent transformations or modifications made according to the spirit of the present invention shall fall within its scope of protection.

Claims (4)

  1. An electronic presentation system based on somatosensory technology, comprising a screen for displaying content to the audience in real time and a projection device for projecting the presenter's presentation content onto the screen, characterized in that the system further comprises a somatosensory device for capturing the presenter's movements and an information processing device connected to the projection device; the somatosensory device transmits images of the presenter's movements to the information processing device, and the information processing device determines the presenter's presentation commands and presentation content from the images of the presenter's movements and projects the presentation content onto the screen in real time through the projection device; the somatosensory device is a Kinect device, which captures video images of the presenter through the Kinect camera and transmits them to the information processing device over a USB data cable; the information processing device is a computer connected to the projection device by a video output cable, which receives the video images transmitted from the Kinect device and processes them to obtain the presenter's presentation commands and presentation content; the computer uses the OpenNI natural-user-interface development kit to parse the obtained video images and the depth information they contain and to track the position of the user's hand, segments the hand image and obtains the gesture with a hand segmentation and gesture recognition algorithm, responds to control gestures by generating control commands that drive a brush to draw, and controls the projection device to project the image onto the screen; the projection device is a projector.
  2. The electronic presentation system based on somatosensory technology according to claim 1, characterized in that the screen is placed opposite the projection device and matched to the projection device's projection range.
  3. An electronic presentation method using the electronic presentation system according to claim 1, characterized in that the method comprises the following steps: (1) capturing the presenter's movements with the somatosensory device; (2) the information processing device determining the presenter's presentation commands and presentation content from the images of the presenter's movements, and projecting the content onto the screen in real time through the projection device; the step in which the information processing device determines the presenter's presentation commands and presentation content comprises video image processing, gesture recognition, and drawing;
the video image processing proceeds as follows: 1) segment the palm image with a region segmentation algorithm based on fuzzy connectivity: region connectivity is defined with fuzzy logic, the affinity between adjacent pixels taking a value between 0 and 1 according to the difference between their depth values; a sequence of adjacent pixels forms a path, the connectivity of a path is the minimum affinity between adjacent pixels along the path, the connectivity between any two pixels is defined as the maximum connectivity over all paths between them, and the connectivity between any two pixels can be computed recursively with a dynamic programming algorithm; 2) process the video images containing depth information and, using the human-body recognition support in the OpenNI library, obtain the three-dimensional position of the palm node of the body; 3) compute the connectivity between every pixel in the image and the palm: define the palm position as the seed point and construct the connectivity value between every point in the image and the seed point; 4) applying a suitable threshold to the connectivity values segments out the palm image;
the gesture recognition process comprises the following steps: 1) compute the Hu invariant moments of the palm shape and use them as features to recognize gestures: the Hu moments capture the general characteristics of the palm shape and are invariant to translation and rotation, making them a fairly conventional feature for gesture recognition; 2) extract template gestures from the video stream: with the support of the OpenNI library, video images containing depth information can be recorded and saved; gestures are automatically extracted from the video stream and then sorted purposefully to produce a template gesture library; in this example, the action scenarios commonly used in presentations were recorded, including playing slides, writing text, drawing images, and moving the cursor; the gestures extracted from these video streams were then classified and stored as the template gesture library, comprising 10 gestures with 150 template images each; 3) train a gesture classifier with the random forest algorithm: using the machine learning module of the image processing library OpenCV, a gesture classifier is generated by training the random forest algorithm on the images in the template gesture library; 4) use the trained gesture classifier to recognize newly input palm images and test the classifier's accuracy; testing is repeated and some parameters adjusted until the accuracy reaches 70% or more; 5) use the trained classifier to recognize gestures;
the drawing function comprises the following steps: 1) generate gesture response commands for input gestures: the drawing example involves the following five gestures: "pen down", "pen up", "erase", "update", and "save"; each gesture is recognized by the pre-trained gesture classifier; for each recognition result, the computer generates the corresponding response command; upon recognizing the "pen down" gesture, the computer starts recording the palm's positions over time until it recognizes the "pen up" gesture, producing a completed path; 2) a drawing operation begins with the "pen down" gesture and ends with the "pen up" gesture: upon recognizing "pen down", the computer starts recording the palm's positions over time until it recognizes "pen up", producing a completed trajectory; the computer smooths the trajectory to reduce jitter caused by the human hand and to beautify the drawn stroke; exponential smoothing allows the trajectory to be smoothed in real time; after recognizing "pen up", the computer draws the smoothed trajectory on the screen, completing one drawing operation; 3) an erase operation begins with the "erase" gesture and ends with the "pen up" gesture: upon recognizing "erase", the computer starts recording the palm's positions over time and erases the pixels at those positions in the image in real time, until it recognizes the "pen up" gesture; 4) an update operation is carried out after the "update" gesture: upon recognizing "update", the computer clears the graphics already drawn on the screen; 5) a save operation is carried out after the "save" gesture: upon recognizing "save", the computer saves the graphics already drawn on the screen to the file directory.
  4. The electronic presentation method according to claim 3, characterized in that in the method the presentation commands are presenter gesture commands determined from the images of the presenter's movements, and the content the user wants presented in real time is determined from those gesture commands.
CN 201310091358 2013-03-21 2013-03-21 Electronic presentation system based on somatosensory technology CN103207674B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201310091358 CN103207674B (en) 2013-03-21 2013-03-21 Electronic presentation system based on somatosensory technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201310091358 CN103207674B (en) 2013-03-21 2013-03-21 Electronic presentation system based on somatosensory technology

Publications (2)

Publication Number Publication Date
CN103207674A true CN103207674A (en) 2013-07-17
CN103207674B true CN103207674B (en) 2016-06-22

Family

ID=48754923

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201310091358 CN103207674B (en) 2013-03-21 2013-03-21 Electronic presentation system based on somatosensory technology

Country Status (1)

Country Link
CN (1) CN103207674B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103747196B (en) * 2013-12-31 2017-08-01 北京理工大学 A projection method of sensor-based Kinect
CN104978010A (en) * 2014-04-03 2015-10-14 冠捷投资有限公司 Three-dimensional space handwriting trajectory acquisition method
CN103941866B (en) * 2014-04-08 2017-02-15 河海大学常州校区 The gesture recognition method based on the three-dimensional depth image Kinect
US9185062B1 (en) * 2014-05-31 2015-11-10 Apple Inc. Message user interfaces for capture and transmittal of media and location content
CN104731334A (en) * 2015-03-26 2015-06-24 广东工业大学 Spatial gesture interactive type maritime silk road dynamic history GIS and implementation method
CN104966422A (en) * 2015-07-15 2015-10-07 滁州市状元郎电子科技有限公司 Remote sensing spatial information teaching experiment system
US10003938B2 (en) 2015-08-14 2018-06-19 Apple Inc. Easy location sharing

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101694694A (en) * 2009-10-22 2010-04-14 上海交通大学; Finger identification method used in interactive demonstration system
CN102063231A (en) * 2011-01-13 2011-05-18 中科芯集成电路股份有限公司 Non-contact electronic whiteboard system and detection method based on image detection
CN102520793A (en) * 2011-11-30 2012-06-27 苏州奇可思信息科技有限公司 Gesture identification-based conference presentation interaction method
CN102801924A (en) * 2012-07-20 2012-11-28 合肥工业大学 Television program host interaction system based on Kinect

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120278712A1 (en) * 2011-04-27 2012-11-01 Microsoft Corporation Multi-input gestures in hierarchical regions


Also Published As

Publication number Publication date Type
CN103207674A (en) 2013-07-17 application

Similar Documents

Publication Publication Date Title
Zhang et al. Visual panel: virtual mouse, keyboard and 3D controller with an ordinary piece of paper
US8259163B2 (en) Display with built in 3D sensing
US20090077504A1 (en) Processing of Gesture-Based User Interactions
US20130249944A1 (en) Apparatus and method of augmented reality interaction
US6594616B2 (en) System and method for providing a mobile input device
Gorodnichy et al. Nouse 'use your nose as a mouse' perceptual vision technology for hands-free games and interfaces
US20110267265A1 (en) Spatial-input-based cursor projection systems and methods
US20130055143A1 (en) Method for manipulating a graphical user interface and interactive input system employing the same
US20110243380A1 (en) Computing device interface
Yeo et al. Hand tracking and gesture recognition system for human-computer interaction using low-cost hardware
US20120062736A1 (en) Hand and indicating-point positioning method and hand gesture determining method used in human-computer interaction system
US20060092178A1 (en) Method and system for communicating through shared media
US20110197263A1 (en) Systems and methods for providing a spatial-input-based multi-user shared display experience
CN103246351A (en) User interaction system and method
US20120121185A1 (en) Calibrating Vision Systems
Wagner et al. The social signal interpretation (SSI) framework: multimodal signal processing and recognition in real-time
US20110154233A1 (en) Projected display to enhance computer device use
US20130162532A1 (en) Method and system for gesture-based human-machine interaction and computer-readable medium thereof
US8644467B2 (en) Video conferencing system, method, and computer program storage device
CN102339125A (en) Information equipment and control method and system thereof
US20120229590A1 (en) Video conferencing with shared drawing
Clark et al. An interactive augmented reality coloring book
CN102854983A (en) Man-machine interaction method based on gesture recognition
US20120306734A1 (en) Gesture Recognition Techniques
CN104159032A (en) Method and device of adjusting facial beautification effect in camera photographing in real time

Legal Events

Date Code Title Description
C06 Publication
C10 Entry into substantive examination
C14 Grant of patent or utility model
TR01