CN104360729A - Multi-interaction method and device based on Kinect and Unity3D
- Publication number
- CN104360729A CN104360729A CN201410381549.XA CN201410381549A CN104360729A CN 104360729 A CN104360729 A CN 104360729A CN 201410381549 A CN201410381549 A CN 201410381549A CN 104360729 A CN104360729 A CN 104360729A
- Authority
- CN
- China
- Prior art keywords
- kinect
- unity3d
- registration
- coordinate system
- model
- Prior art date
- 2014-08-05
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention relates to a multi-interaction method based on Kinect and Unity3D, comprising: S1: adjusting the camera parameters in Unity3D to match the effective detection range of the Kinect; S2: using the Kinect to determine the user's coordinates and the ground plane equation; S3: determining the virtual model coordinates according to relative positions and registering the virtual model; S4: designing interaction gestures and voice commands; S5: determining the model displacement, animation, and multimedia effects controlled by Unity3D; S6: fusing and displaying the frame captured by the camera in Unity3D and the image captured by the Kinect camera. The invention uses the Kinect's support for speech recognition and its human-skeleton tracking to add trigger modes for the three-dimensional registration of virtual models, provides users with more ways to interact through the recognition of body movements, improves the user experience, and uses the Unity3D engine to handle the model pose automatically, greatly simplifying the steps required for three-dimensional registration. The invention also discloses a multi-interaction device based on Kinect and Unity3D.
Description
Technical Field
The invention relates to the technical field of computer augmented reality, and in particular to a multi-interaction method and device based on Kinect and Unity3D.
Background Art
Augmented reality (AR) was first proposed in the 1990s and is now widely used in fields such as medicine, education, industry, and commerce. A commonly cited definition of augmented reality was given in 1997 by Ronald Azuma of the University of North Carolina and comprises three main properties: it combines the real and the virtual, it is interactive in real time, and it is registered in 3D. The technology superimposes a virtual scene on the real scene on screen and allows participants to interact with the virtual scene. The typical implementation process of augmented reality is: 1) acquire a scene image through an image capture device; 2) recognize and track a marker image or text in the scene and compute its deformation to obtain its displacement and rotation matrix; 3) register the position information of the corresponding virtual model in three-dimensional space according to the marker's position and rotation matrix; 4) fuse the virtual model with the real scene and display the result on screen.
However, the commonly used techniques have the following defects: 1) the interaction mode is monotonous: registration of a virtual model can only be triggered by a marker image or text, after registration the model can only be translated and rotated, and the model can only follow the movement of the marker, so there are few interaction modes and many restrictions; 2) the three-dimensional registration algorithm is cumbersome: the position and pose of the model must be determined in the feature-point coordinate system, then converted to the camera coordinate system, and finally the virtual model and the real scene are fused and displayed according to the screen coordinates of the display. The existing techniques thus require many calculation steps in the three-dimensional registration stage, and the operation is neither simple nor automated.
Summary of the Invention
The technical problem to be solved by the present invention, in view of the deficiencies of the prior art, is how to use the Kinect's support for speech recognition and its human-skeleton tracking to add trigger modes for the three-dimensional registration of virtual models and to provide users with more ways to interact through the recognition of body movements, improving the user experience, and how to use the Unity3D engine to handle the model pose automatically, greatly simplifying the steps required for three-dimensional registration.
To this end, the present invention proposes a multi-interaction method based on Kinect and Unity3D, comprising:
S1: adjusting the camera parameters in Unity3D to match the effective detection range of the Kinect;
S2: using the Kinect to determine the user's coordinates and the ground plane equation;
S3: determining the virtual model coordinates according to relative positions and registering the virtual model;
S4: designing interaction gestures and voice commands;
S5: determining the model displacement, animation, and multimedia effects controlled by Unity3D;
S6: fusing and displaying the frame captured by the camera in Unity3D and the image captured by the Kinect camera.
Further, step S1 includes: placing the Kinect at a preset position in the real scene and adjusting the real scene so that it lies within the Kinect's effective detection range.
Further, step S1 includes: adjusting the camera's Field of View and Clipping Planes parameters in Unity3D.
Further, step S2 includes:
S21: using the SkeletonFrame.FloorClipPlane property to determine the plane equation representing the ground, where the plane equation in the Kinect coordinate system is Ax+By+Cz+D=0, with plane normal vector (A,B,C), and the plane equation in the Unity3D coordinate system is y+E=0, with plane normal vector (0,1,0);
S22: rotating (A,B,C) until it coincides with (0,1,0), completing the registration of the Kinect coordinate system with the Unity3D coordinate system.
Further, the registration of the Kinect coordinate system with the Unity3D coordinate system includes: when any point (k1, k2, k3) in the Kinect coordinate system is converted to the Unity3D coordinate system, it must be rotated about the X axis by -arctan(B/C) and about the Z axis by arctan(A/B); the coordinates after rotation are (k1cosα - (k2cosβ - k3sinβ)sinα, k1sinα + (k2cosβ - k3sinβ)cosα, k2sinβ + k3cosβ), where α = arctan(A/B) and β = -arctan(B/C).
Further, step S6 includes:
S61: performing sampling or interpolation operations on the two images;
S62: traversing the two processed images and comparing the depth values of the points in the two images corresponding to each pixel of the destination image;
S63: setting the color value of the corresponding point in the destination image to the color value of the pixel with the smaller depth value.
Further, the registration of the virtual model may also be triggered by the user moving to a special position, which triggers a default model.
Further, the registration of the virtual model may also be triggered by the user's voice, which triggers the registration of the corresponding model.
To this end, the present invention also proposes a multi-interaction device based on Kinect and Unity3D, comprising:
an adjustment module for adjusting the camera parameters in Unity3D to match the effective detection range of the Kinect;
a coordinate and ground equation determination module for using the Kinect to determine the user's coordinates and the ground plane equation;
a virtual model registration module for determining the virtual model coordinates according to relative positions and registering the virtual model;
a design module for designing interaction gestures and voice commands;
an effect determination module for determining the model displacement, animation, and multimedia effects controlled by Unity3D;
an image fusion module for fusing and displaying the frame captured by the camera in Unity3D and the image captured by the Kinect camera.
The multi-interaction method based on Kinect and Unity3D disclosed by the present invention first simplifies the conversion between the real scene coordinate system and the virtual scene coordinate system by setting the position and properties of the camera in Unity3D. Second, it uses the Kinect to obtain the user's corresponding coordinates in Unity and the plane equation representing the ground, so that the three-dimensional registration coordinates can be determined from the relative position of the virtual model to be registered with respect to the ground and the user; this registration trigger mechanism is more flexible, as registration can be triggered when the user moves to a specific position or through the speech recognition module. Third, it enriches the interaction modes available after model registration: the model can be operated and interacted with through body movements and voice. Finally, it uses the Transform component and the Mecanim animation system in Unity3D to simplify the implementation of model displacement changes and animation effects. The invention also discloses a multi-interaction device based on Kinect and Unity3D.
Brief Description of the Drawings
The features and advantages of the present invention will be more clearly understood by reference to the accompanying drawings, which are schematic and should not be construed as limiting the invention in any way. In the drawings:
Fig. 1 is a flow chart of the steps of a multi-interaction method based on Kinect and Unity3D in an embodiment of the present invention;
Fig. 2 is a structural diagram of a multi-interaction device based on Kinect and Unity3D in an embodiment of the present invention.
Detailed Description of the Embodiments
Embodiments of the present invention are described in detail below with reference to the accompanying drawings.
As shown in Fig. 1, the present invention provides a multi-interaction method based on Kinect and Unity3D, comprising the following specific steps:
Step S1: adjust the camera parameters in Unity3D to match the effective detection range of the Kinect. Specifically, place the Kinect at a preset position in the real scene and adjust the real scene so that it lies within the Kinect's effective detection range, where the effective range is 1.2-3.6 meters from the camera, 57 degrees horizontally, and 43 degrees vertically.
Further, in the coordinate system of the data returned by the Kinect, the origin is the Kinect sensor itself, so the camera in Unity3D is placed at the coordinate origin to simplify the calculation of virtual model coordinates during three-dimensional registration. Adjust the camera's Field of View and Clipping Planes parameters in Unity3D, setting them to the same values as the Kinect's effective range.
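As an illustration, a minimal Unity3D (C#) sketch of this step might look as follows; the component name KinectCameraSetup is hypothetical, and the numeric values are the effective-range figures quoted above:

```csharp
using UnityEngine;

// Minimal sketch of step S1: place the Unity3D camera at the origin (the
// Kinect sensor position) and match its frustum to the Kinect's effective
// detection range (1.2-3.6 m, 57 deg horizontal, 43 deg vertical).
public class KinectCameraSetup : MonoBehaviour
{
    void Start()
    {
        Camera cam = GetComponent<Camera>();
        transform.position = Vector3.zero;   // Kinect data is relative to the sensor
        cam.fieldOfView = 43f;               // Unity's field of view is the vertical angle
        // Derive the aspect ratio from the 57 deg horizontal / 43 deg vertical frustum.
        cam.aspect = Mathf.Tan(57f * 0.5f * Mathf.Deg2Rad)
                   / Mathf.Tan(43f * 0.5f * Mathf.Deg2Rad);
        cam.nearClipPlane = 1.2f;            // near edge of the effective range
        cam.farClipPlane  = 3.6f;            // far edge of the effective range
    }
}
```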
Step S2: use the Kinect to determine the user's coordinates and the ground plane equation.
Specifically, use the SkeletonFrame.FloorClipPlane property to determine the plane equation representing the ground, where the plane equation in the Kinect coordinate system is Ax+By+Cz+D=0, with plane normal vector (A,B,C), and the plane equation in the Unity3D coordinate system is y+E=0, with plane normal vector (0,1,0). Rotate (A,B,C) until it coincides with (0,1,0) to complete the registration of the Kinect coordinate system with the Unity3D coordinate system.
Further, the registration of the Kinect coordinate system with the Unity3D coordinate system includes: when any point (k1, k2, k3) in the Kinect coordinate system is converted to the Unity3D coordinate system, it must be rotated about the X axis by -arctan(B/C) and about the Z axis by arctan(A/B); the coordinates after rotation are (k1cosα - (k2cosβ - k3sinβ)sinα, k1sinα + (k2cosβ - k3sinβ)cosα, k2sinβ + k3cosβ), where α = arctan(A/B) and β = -arctan(B/C).
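A minimal C# sketch of this conversion, assuming (A, B, C) has already been read from SkeletonFrame.FloorClipPlane and the Kinect point has been unpacked into a Vector3, might look like this (the class and method names are hypothetical):

```csharp
using UnityEngine;

// Minimal sketch of the coordinate registration above: a Kinect-space point
// (k1, k2, k3) is rotated about the X axis by beta = -arctan(B/C) and then
// about the Z axis by alpha = arctan(A/B), where (A, B, C) is the ground
// plane normal reported by the Kinect.
public static class CoordinateRegistration
{
    public static Vector3 KinectToUnity(Vector3 k, float A, float B, float C)
    {
        float alpha = Mathf.Atan(A / B);   // rotation angle about the Z axis
        float beta  = -Mathf.Atan(B / C);  // rotation angle about the X axis

        // Rotate about X first: y and z change, x is unchanged.
        float y1 = k.y * Mathf.Cos(beta) - k.z * Mathf.Sin(beta);
        float z1 = k.y * Mathf.Sin(beta) + k.z * Mathf.Cos(beta);

        // Then rotate about Z: x and y change, z is unchanged.
        return new Vector3(
            k.x * Mathf.Cos(alpha) - y1 * Mathf.Sin(alpha),
            k.x * Mathf.Sin(alpha) + y1 * Mathf.Cos(alpha),
            z1);
    }
}
```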
Step S3: determine the virtual model coordinates according to relative positions and register the virtual model.
Specifically, using the conversion formula above from the Kinect coordinate system to the Unity3D coordinate system, convert the SkeletonPoint returned by the Kinect SDK API to the Unity3D coordinate system. In the Unity3D coordinate system, position the model at the required coordinates according to the ground height, the user's coordinates, and the relative position between the virtual model and the user. Alternatively, select a default model for registration, or add words related to a model to the speech vocabulary for registration, where several words correspond to one model and are recognized by the Kinect Speech module. When the user speaks a word that exists in the vocabulary, the three-dimensional model corresponding to that word is registered in the scene.
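The following is a minimal Unity3D (C#) sketch of the speech-triggered registration path; the vocabulary contents, the prefab field, and the OnWordRecognized entry point are hypothetical, and in practice the recognized word would be delivered by the Kinect Speech module:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Minimal sketch of step S3: a vocabulary maps several recognized words to
// one prefab, and the model is registered at a position derived from the
// user's Unity-space coordinates and the ground height.
public class ModelRegistrar : MonoBehaviour
{
    public GameObject treePrefab;   // example prefab, an assumption
    public float groundY = 0f;      // ground height after coordinate registration
    private Dictionary<string, GameObject> vocabulary;

    void Start()
    {
        // Several words may correspond to the same model.
        vocabulary = new Dictionary<string, GameObject> {
            { "tree",  treePrefab },
            { "plant", treePrefab },
        };
    }

    // Called with a recognized word and the user's position in Unity coordinates.
    public void OnWordRecognized(string word, Vector3 userPosition)
    {
        GameObject prefab;
        if (vocabulary.TryGetValue(word, out prefab))
        {
            // Register the model one meter in front of the user, on the ground.
            Vector3 spawn = userPosition + Vector3.forward * 1.0f;
            spawn.y = groundY;
            Instantiate(prefab, spawn, Quaternion.identity);
        }
    }
}
```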
Step S4: design the interaction gestures and voice commands.
Specifically, design the interaction gestures and determine the set of body movements for each operation. For example: hovering an arm indicates selecting an object or clicking a button; moving an arm indicates moving the mouse or translating the model; moving the two hands apart or together indicates scaling the model; rotating the two hands as if holding a ball indicates rotating the model. Simple interactions are also implemented through voice, for example showing or hiding a model and playing or pausing multimedia.
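As one concrete example, a minimal C# sketch of the two-hands scaling gesture might look like this, assuming the hand joint positions have already been converted to Unity coordinates (all names are hypothetical):

```csharp
using UnityEngine;

// Minimal sketch of one gesture from step S4: moving the two hands apart
// enlarges the model and moving them together shrinks it.
public class ScaleGesture : MonoBehaviour
{
    public Transform model;                 // the model being manipulated
    private float previousDistance = -1f;   // distance from the previous frame

    // Called every frame with the left and right hand joint positions.
    public void OnHandsTracked(Vector3 leftHand, Vector3 rightHand)
    {
        float distance = Vector3.Distance(leftHand, rightHand);
        if (previousDistance > 0f)
        {
            // Scale the model by the ratio of the current to previous distance.
            float factor = distance / previousDistance;
            model.localScale *= factor;
        }
        previousDistance = distance;
    }
}
```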
Step S5: determine the model displacement, animation, and multimedia effects controlled by Unity3D.
Specifically, operate on the model according to the user's body movements. Use the Transform component of the GameObject in the Unity3D SDK to translate, rotate, and scale the model; use the Mecanim animation system to make the model perform designed interactive actions such as following, running alongside, and guiding the user; and use the Audio component and the Movie Textures component to control multimedia effects.
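A minimal C# sketch of these controls might look as follows; the trigger name "Follow" and the public fields are hypothetical, and an Animator Controller with a matching trigger is assumed:

```csharp
using UnityEngine;

// Minimal sketch of step S5: Transform handles displacement, the Mecanim
// Animator drives designed actions such as "follow", and an AudioSource
// controls a multimedia effect.
public class ModelController : MonoBehaviour
{
    public Animator animator;        // Mecanim animation system
    public AudioSource narration;    // Audio component for multimedia

    public void TranslateModel(Vector3 delta)
    {
        transform.Translate(delta);              // displacement via Transform
    }

    public void RotateModel(float degrees)
    {
        transform.Rotate(Vector3.up, degrees);   // rotation via Transform
    }

    public void StartFollowing()
    {
        animator.SetTrigger("Follow");           // designed interactive action
    }

    public void ToggleNarration()
    {
        if (narration.isPlaying) narration.Pause();  // voice command: pause
        else narration.Play();                       // voice command: play
    }
}
```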
Step S6: fuse and display the frame captured by the camera in Unity3D and the image captured by the Kinect camera.
Specifically, step S6 further includes:
Step S61: perform sampling or interpolation operations on the two images to scale them to the size of the destination image.
Step S62: traverse the two processed images and compare the depth values of the points in the two images corresponding to each pixel of the destination image.
Step S63: set the color value of the corresponding point in the destination image to the color value of the pixel with the smaller depth value.
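A minimal C# sketch of the fusion rule in steps S62-S63 might look like this, assuming both images have already been scaled to the destination size (S61) and per-pixel depth arrays are available for both the Unity render and the Kinect image (all names are hypothetical):

```csharp
using UnityEngine;

// Minimal sketch of depth-based fusion: each destination pixel takes the
// color of whichever source pixel is nearer to the camera, i.e. has the
// smaller depth value.
public static class DepthFusion
{
    public static Color32[] Fuse(Color32[] unityColor, float[] unityDepth,
                                 Color32[] kinectColor, float[] kinectDepth)
    {
        var result = new Color32[unityColor.Length];
        for (int i = 0; i < result.Length; i++)
        {
            // The smaller depth value means the point is closer and is shown.
            result[i] = unityDepth[i] <= kinectDepth[i] ? unityColor[i]
                                                        : kinectColor[i];
        }
        return result;
    }
}
```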
The multi-interaction method based on Kinect and Unity3D disclosed by the present invention is an augmented reality technique whose three-dimensional registration is simple to operate. It uses the Kinect's support for speech recognition and its human-skeleton tracking to add trigger modes for the three-dimensional registration of virtual models, provides users with more ways to interact through the recognition of body movements, and improves the user experience; it uses the Unity3D engine to handle the model pose automatically, greatly simplifying the steps required for three-dimensional registration. In short, by combining a somatosensory interaction device with a three-dimensional game engine, the invention simplifies the three-dimensional registration process, adds registration trigger modes, enriches the ways users can interact, and improves the operating experience.
As shown in Fig. 2, the present invention provides a multi-interaction device 10 based on Kinect and Unity3D, comprising: an adjustment module 101, a coordinate and ground equation determination module 102, a virtual model registration module 103, a design module 104, an effect determination module 105, and an image fusion module 106.
Specifically, the adjustment module 101 adjusts the camera parameters in Unity3D to match the effective detection range of the Kinect; the coordinate and ground equation determination module 102 uses the Kinect to determine the user's coordinates and the ground plane equation; the virtual model registration module 103 determines the virtual model coordinates according to relative positions and registers the virtual model; the design module 104 designs the interaction gestures and voice commands; the effect determination module 105 determines the model displacement, animation, and multimedia effects controlled by Unity3D; and the image fusion module 106 fuses and displays the frame captured by the camera in Unity3D and the image captured by the Kinect camera.
The multi-interaction method based on Kinect and Unity3D disclosed by the present invention first simplifies the conversion between the real scene coordinate system and the virtual scene coordinate system by setting the position and properties of the camera in Unity3D. Second, it uses the Kinect to obtain the user's corresponding coordinates in Unity and the plane equation representing the ground, so that the three-dimensional registration coordinates can be determined from the relative position of the virtual model to be registered with respect to the ground and the user; this registration trigger mechanism is more flexible, as registration can be triggered when the user moves to a specific position or through the speech recognition module. Third, it enriches the interaction modes available after model registration: the model can be operated and interacted with through body movements and voice. Finally, it uses the Transform component and the Mecanim animation system in Unity3D to simplify the implementation of model displacement changes and animation effects. The invention also discloses a multi-interaction device based on Kinect and Unity3D.
The above embodiments are intended only to illustrate the present invention and not to limit it. Those of ordinary skill in the relevant technical field can make various changes and modifications without departing from the spirit and scope of the present invention, so all equivalent technical solutions also fall within the scope of the present invention, whose patent protection scope shall be defined by the claims.
Although the embodiments of the present invention have been described with reference to the accompanying drawings, those skilled in the art can make various modifications and variations without departing from the spirit and scope of the present invention, and such modifications and variations all fall within the scope defined by the appended claims.
Claims (9)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410381549.XA CN104360729B (en) | 2014-08-05 | 2014-08-05 | Multi-interaction method and device based on Kinect and Unity3D |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104360729A (en) | 2015-02-18 |
CN104360729B (en) | 2017-10-10 |
Family
ID=52527997
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410381549.XA Active CN104360729B (en) | Multi-interaction method and device based on Kinect and Unity3D | 2014-08-05 | 2014-08-05 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104360729B (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110169927A1 (en) * | 2010-01-13 | 2011-07-14 | Coco Studios | Content Presentation in a Three Dimensional Environment |
CN103181157A (en) * | 2011-07-28 | 2013-06-26 | 三星电子株式会社 | Plane-characteristic-based markerless augmented reality system and method for operating same |
CN103049618A (en) * | 2012-12-30 | 2013-04-17 | 江南大学 | Intelligent home displaying method on basis of Kinect |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106125903B (en) * | 2016-04-24 | 2021-11-16 | 林云帆 | Multi-person interaction system and method |
CN106125903A (en) * | 2016-04-24 | 2016-11-16 | 林云帆 | Many people interactive system and method |
CN106791478A (en) * | 2016-12-15 | 2017-05-31 | 山东数字人科技股份有限公司 | A kind of three-dimensional data real-time volume display systems |
CN107330978A (en) * | 2017-06-26 | 2017-11-07 | 山东大学 | The augmented reality modeling experiencing system and method mapped based on position |
CN107551551A (en) * | 2017-08-09 | 2018-01-09 | 广东欧珀移动通信有限公司 | Game effect construction method and device |
CN107861714A (en) * | 2017-10-26 | 2018-03-30 | 天津科技大学 | The development approach and system of car show application based on IntelRealSense |
CN108096836A (en) * | 2017-12-20 | 2018-06-01 | 深圳市百恩互动娱乐有限公司 | A kind of method that true man's real scene shooting makes game |
CN109089017A (en) * | 2018-09-05 | 2018-12-25 | 宁波梅霖文化科技有限公司 | Magic virtual bench |
CN109782911A (en) * | 2018-12-30 | 2019-05-21 | 广州嘉影软件有限公司 | Double method for catching and system based on virtual reality |
CN109782911B (en) * | 2018-12-30 | 2022-02-08 | 广州嘉影软件有限公司 | Whole body motion capture method and system based on virtual reality |
CN110728739A (en) * | 2019-09-30 | 2020-01-24 | 杭州师范大学 | A method of virtual human control and interaction based on video stream |
CN110728739B (en) * | 2019-09-30 | 2023-04-14 | 杭州师范大学 | A Virtual Human Control and Interaction Method Based on Video Stream |
CN113709537A (en) * | 2020-05-21 | 2021-11-26 | 云米互联科技(广东)有限公司 | User interaction method based on 5G television, 5G television and readable storage medium |
CN113709537B (en) * | 2020-05-21 | 2023-06-13 | 云米互联科技(广东)有限公司 | User interaction method based on 5G television, 5G television and readable storage medium |
CN111913577A (en) * | 2020-07-31 | 2020-11-10 | 武汉木子弓数字科技有限公司 | Three-dimensional space interaction method based on Kinect |
Also Published As
Publication number | Publication date |
---|---|
CN104360729B (en) | 2017-10-10 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| C06 | Publication | |
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| ASS | Succession or assignment of patent right | Owner name: BEIJING RESEARCH CENTER OF INTELLIGENT EQUIPMENT FOR AGRICULTURE; Owner name: BEIJING AGRICULTURE INFORMATION TECHNOLOGY RESEARCH CENTER; Free format text: FORMER OWNER: BEIJING AGRICULTURE INFORMATION TECHNOLOGY RESEARCH CENTER; Effective date: 20150804 |
| C41 | Transfer of patent application or patent right or utility model | |
| TA01 | Transfer of patent application right | Effective date of registration: 20150804; Address after: Block 318b, No. 11 building, 100097 Beijing City, Haidian District agricultural A shuguangyuanzhong Road; Applicant after: Beijing Research Center of Intelligent Equipment for Agriculture; Applicant after: Beijing Research Center for Information Technology in Agriculture; Address before: Block 318b, No. 11 building, 100097 Beijing City, Haidian District agricultural A shuguangyuanzhong Road; Applicant before: Beijing Research Center for Information Technology in Agriculture |
| GR01 | Patent grant | |