CN104360729B - Multi-interaction method and device based on Kinect and Unity3D - Google Patents

Multi-interaction method and device based on Kinect and Unity3D

Info

Publication number
CN104360729B
Authority
CN
China
Prior art keywords
kinect
unity3d
model
registration
camera
Prior art date
Legal status
Active
Application number
CN201410381549.XA
Other languages
Chinese (zh)
Other versions
CN104360729A (en)
Inventor
王虓
郭新宇
吴升
温维亮
王传宇
Current Assignee
Beijing Research Center for Information Technology in Agriculture
Beijing Research Center of Intelligent Equipment for Agriculture
Original Assignee
Beijing Research Center for Information Technology in Agriculture
Beijing Research Center of Intelligent Equipment for Agriculture
Priority date
Filing date
Publication date
Application filed by Beijing Research Center for Information Technology in Agriculture, Beijing Research Center of Intelligent Equipment for Agriculture filed Critical Beijing Research Center for Information Technology in Agriculture
Priority to CN201410381549.XA priority Critical patent/CN104360729B/en
Publication of CN104360729A publication Critical patent/CN104360729A/en
Application granted granted Critical
Publication of CN104360729B publication Critical patent/CN104360729B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to a multi-interaction method based on Kinect and Unity3D, comprising: S1: adjusting the camera parameters in Unity3D to match the effective detection range of the Kinect; S2: using the Kinect to determine the user coordinates and the ground plane equation; S3: determining the virtual model coordinates from relative positions and registering the virtual model; S4: designing interaction gestures and voice commands; S5: determining the displacement animations and multimedia effects of the model controlled by Unity3D; S6: fusing and displaying the image rendered by the camera in Unity3D with the image captured by the Kinect camera. The invention uses Kinect's speech-recognition support and human-skeleton tracking to add trigger modes for the three-dimensional registration of virtual models, provides users with more interaction modes through body-movement recognition to improve the user experience, and uses the Unity3D engine to process model poses automatically, greatly simplifying the steps required for three-dimensional registration. The invention also discloses a multi-interaction device based on Kinect and Unity3D.

Description

Multi-interaction method and device based on Kinect and Unity3D

Technical Field

The invention relates to the technical field of computer augmented reality, and in particular to a multi-interaction method and device based on Kinect and Unity3D.

Background Art

Augmented Reality (AR) was first proposed in the 1990s and is now widely used in many fields such as medicine, education, industry, and commerce. A fairly general definition of augmented reality was proposed in 1997 by Ronald Azuma of the University of North Carolina and includes three main aspects: combining the real and the virtual (Combines real and virtual), real-time interaction (Interactive in real time), and three-dimensional registration (Registered in 3D). The technology superimposes a virtual scene on the real scene on the screen and allows participants to interact with the virtual scene. The current implementation flow of augmented reality is generally: 1) acquire the scene image through an image-acquisition device; 2) identify and track a calibration image or text in the scene, and compute its deformation to obtain its translation and rotation matrix; 3) register the position information of the corresponding virtual model in three-dimensional space according to the position and rotation matrix of the calibration image; 4) fuse the virtual model with the real scene and display it on the screen.

However, the currently common techniques have the following defects: 1) the interaction mode is limited: registration of the virtual model can only be triggered by a calibration image or text, after registration the model can only be translated and rotated, and the model can only follow the movement of the calibration object, so there are few interaction modes and many restrictions; 2) the three-dimensional registration algorithm is cumbersome: the position and pose of the model must be determined in the feature-point coordinate system, then converted to the camera coordinate system, and finally the virtual model and the real scene must be fused and displayed according to the screen coordinates of the display. The existing techniques therefore require many computation steps in the three-dimensional registration stage, and the operation is not simple and automated enough.

Summary of the Invention

The technical problem to be solved by the present invention, in view of the deficiencies of the prior art, is how to use Kinect's speech-recognition support and human-skeleton tracking to add trigger modes for the three-dimensional registration of virtual models, providing users with more interaction modes through body-movement recognition and improving the user experience, and how to use the Unity3D engine to process model poses automatically, greatly simplifying the steps required for three-dimensional registration.

To this end, the present invention proposes a multi-interaction method based on Kinect and Unity3D, comprising:

S1: adjusting the camera parameters in Unity3D to match the effective detection range of the Kinect;

S2: using the Kinect to determine the user coordinates and the ground plane equation;

S3: determining the virtual model coordinates from relative positions, and registering the virtual model;

S4: designing interaction gestures and voice commands;

S5: determining the displacement animations and multimedia effects of the model controlled by Unity3D;

S6: fusing and displaying the image rendered by the camera in Unity3D with the image captured by the Kinect camera.

Further, step S1 further comprises: placing the Kinect at a preset position in the real scene, and adjusting the real scene so that it lies within the effective detection range of the Kinect.

Further, step S1 further comprises: adjusting the Field of View and Clipping Planes parameters of the camera in Unity3D.

Further, step S2 further comprises:

S21: using the SkeletonFrame.FloorClipPlane function to determine the plane equation representing the ground, wherein the plane equation in the Kinect coordinate system is Ax+By+Cz+D=0, with (A, B, C) being the plane normal vector, and the plane equation in the Unity3D coordinate system is y+E=0, with (0, 1, 0) being the plane normal vector;

S22: rotating (A, B, C) to coincide with (0, 1, 0), completing the registration of the Kinect coordinate system with the Unity3D coordinate system.

Further, the registration of the Kinect coordinate system with the Unity3D coordinate system further comprises: when an arbitrary point (k1, k2, k3) in the Kinect coordinate system is converted to the Unity3D coordinate system, it must be rotated about the X axis by -arctan(B/C) and about the Z axis by arctan(A/B), and the rotated coordinates are (k1 cos α - (k2 cos β - k3 sin β) sin α, k1 sin α + (k2 cos β - k3 sin β) cos α, k2 sin β + k3 cos β), where α = arctan(A/B) and β = -arctan(B/C).

Further, step S6 further comprises:

S61: performing sampling or interpolation on the two images;

S62: traversing the two processed images and comparing the depth values of the points in the two images corresponding to each pixel of the destination image;

S63: setting the color value of the corresponding point in the destination image to the color value of the pixel with the smaller depth value.

Further, the registration of the virtual model may also be triggered by the user moving to a special position, which triggers a default model.

Further, the registration of the virtual model may also be triggered by the user's voice, which triggers registration of the corresponding model.

To this end, the present invention also proposes a multi-interaction device based on Kinect and Unity3D, comprising:

an adjustment module, configured to adjust the camera parameters in Unity3D to match the effective detection range of the Kinect;

a coordinate and ground-equation determination module, configured to use the Kinect to determine the user coordinates and the ground plane equation;

a virtual model registration module, configured to determine the virtual model coordinates from relative positions and register the virtual model;

a design module, configured to design interaction gestures and voice commands;

an effect determination module, configured to determine the displacement animations and multimedia effects of the model controlled by Unity3D;

an image fusion module, configured to fuse and display the image rendered by the camera in Unity3D with the image captured by the Kinect camera.

The multi-interaction method based on Kinect and Unity3D disclosed by the present invention first simplifies the conversion between the real-scene coordinate system and the virtual-scene coordinate system by setting the position and properties of the camera in Unity3D. Second, it uses the Kinect to obtain the user's corresponding coordinates in Unity as well as the plane equation representing the ground, after which the three-dimensional registration coordinates can be determined from the relative position of the virtual model to be registered with respect to the ground and the user; the mechanism for triggering registration is thus more flexible, as registration can be triggered when the user moves to a specific position or by the speech-recognition module. Third, the interaction modes after model registration are enriched, so the model can be operated and interacted with through body movements and voice. Finally, the Transform component and the Mecanim animation system in Unity3D are used to simplify the implementation of model displacement changes and animation effects. The invention also discloses a multi-interaction device based on Kinect and Unity3D.

Brief Description of the Drawings

The features and advantages of the present invention will be more clearly understood with reference to the accompanying drawings, which are schematic and should not be construed as limiting the invention in any way. In the drawings:

Fig. 1 is a flow chart of the steps of a multi-interaction method based on Kinect and Unity3D in an embodiment of the present invention;

Fig. 2 is a structural diagram of a multi-interaction device based on Kinect and Unity3D in an embodiment of the present invention.

Detailed Description

Embodiments of the present invention are described in detail below with reference to the accompanying drawings.

As shown in Fig. 1, the present invention provides a multi-interaction method based on Kinect and Unity3D, comprising the following specific steps:

Step S1: adjust the camera parameters in Unity3D to match the effective detection range of the Kinect. Specifically, place the Kinect at a preset position in the real scene and adjust the real scene so that it lies within the effective detection range of the Kinect, where the effective range is 1.2-3.6 meters from the camera, 57 degrees horizontally, and 43 degrees vertically.

Further, in the coordinate system of the data returned by the Kinect, the origin is the Kinect sensor itself, so the camera in Unity3D is placed at the coordinate origin to simplify computing virtual model coordinates during three-dimensional registration. Adjust the Field of View and Clipping Planes parameters of the camera in Unity3D, setting these parameters to the same values as the effective range of the Kinect.
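
A minimal Unity C# sketch of this camera setup is given below. It is illustrative only (the script and class names are not part of the invention); the field-of-view and clipping values are taken from the Kinect ranges stated above.

    using UnityEngine;

    // Illustrative sketch: configure the Unity3D camera so its frustum
    // matches the Kinect's effective range (1.2-3.6 m, 57° x 43°).
    public class KinectCameraSetup : MonoBehaviour
    {
        void Start()
        {
            Camera cam = GetComponent<Camera>();

            // Place the virtual camera at the origin, which is also the
            // origin of the Kinect coordinate system (the sensor itself).
            transform.position = Vector3.zero;
            transform.rotation = Quaternion.identity;

            // Unity's fieldOfView is the vertical FOV in degrees; the
            // Kinect covers roughly 43 degrees vertically.
            cam.fieldOfView = 43f;

            // Derive the aspect ratio so the horizontal FOV is 57 degrees.
            cam.aspect = Mathf.Tan(57f * 0.5f * Mathf.Deg2Rad) /
                         Mathf.Tan(43f * 0.5f * Mathf.Deg2Rad);

            // Clipping planes bound the Kinect's reliable depth range.
            cam.nearClipPlane = 1.2f;
            cam.farClipPlane = 3.6f;
        }
    }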

Step S2: use the Kinect to determine the user coordinates and the ground plane equation.

Specifically, use the SkeletonFrame.FloorClipPlane function to determine the plane equation representing the ground, where the plane equation in the Kinect coordinate system is Ax+By+Cz+D=0, with (A, B, C) being the plane normal vector, and the plane equation in the Unity3D coordinate system is y+E=0, with (0, 1, 0) being the plane normal vector; rotate (A, B, C) to coincide with (0, 1, 0) to complete the registration of the Kinect coordinate system with the Unity3D coordinate system.

Further, the registration of the Kinect coordinate system with the Unity3D coordinate system further comprises: when an arbitrary point (k1, k2, k3) in the Kinect coordinate system is converted to the Unity3D coordinate system, it must be rotated about the X axis by -arctan(B/C) and about the Z axis by arctan(A/B), and the rotated coordinates are (k1 cos α - (k2 cos β - k3 sin β) sin α, k1 sin α + (k2 cos β - k3 sin β) cos α, k2 sin β + k3 cos β), where α = arctan(A/B) and β = -arctan(B/C).
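
Assuming (A, B, C) has been read from SkeletonFrame.FloorClipPlane, this conversion can be sketched in Unity C# as follows (a sketch of the formula above, not the patented implementation; the class and method names are illustrative):

    using UnityEngine;

    // Illustrative sketch: convert a point from the Kinect coordinate
    // system to the Unity3D coordinate system by rotating about the
    // X axis by beta = -arctan(B/C) and about the Z axis by
    // alpha = arctan(A/B), per the formula in the description.
    public static class KinectToUnity
    {
        public static Vector3 Convert(Vector3 k, float A, float B, float C)
        {
            float alpha = Mathf.Atan2(A, B);   // arctan(A/B)
            float beta  = -Mathf.Atan2(B, C);  // -arctan(B/C)

            // First rotate about the X axis by beta...
            float y1 = k.y * Mathf.Cos(beta) - k.z * Mathf.Sin(beta);
            float z1 = k.y * Mathf.Sin(beta) + k.z * Mathf.Cos(beta);

            // ...then about the Z axis by alpha, yielding coordinates in
            // the Unity3D frame, where the ground normal is (0, 1, 0).
            return new Vector3(
                k.x * Mathf.Cos(alpha) - y1 * Mathf.Sin(alpha),
                k.x * Mathf.Sin(alpha) + y1 * Mathf.Cos(alpha),
                z1);
        }
    }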

Step S3: determine the virtual model coordinates from relative positions, and register the virtual model.

Specifically, using the conversion formula from the Kinect coordinate system to the Unity3D coordinate system given in the steps above, convert the SkeletonPoint skeleton positions returned by the Kinect SDK API to the Unity3D coordinate system. In the Unity3D coordinate system, position the model at the required coordinates according to the ground height, the user coordinates, and the relative position of the virtual model with respect to the user; alternatively, select a default model for registration, or add words related to a model to the speech library for registration, where several words may correspond to one model and are recognized by the Kinect Speech module. When the user speaks a word that exists in the speech library, the three-dimensional model corresponding to that word is registered in the scene.
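
A sketch of the speech-triggered registration might look as follows, assuming the speech module delivers the recognized word and the user's coordinates have already been converted to the Unity3D frame (the dictionary contents, prefab, and method names are all illustrative assumptions):

    using System.Collections.Generic;
    using UnityEngine;

    // Illustrative sketch: register (instantiate) a virtual model when
    // the speech module recognizes a word, placing it relative to the user.
    public class ModelRegistrar : MonoBehaviour
    {
        public GameObject treePrefab;  // example model, illustrative only

        // Several words may correspond to one model.
        private Dictionary<string, GameObject> speechLibrary;

        void Start()
        {
            speechLibrary = new Dictionary<string, GameObject>
            {
                { "tree", treePrefab },
                { "plant", treePrefab },
            };
        }

        // Called by the speech-recognition module with the recognized
        // word, the user's Unity coordinates, and the ground height.
        public void OnWordRecognized(string word, Vector3 userPos, float groundY)
        {
            GameObject prefab;
            if (speechLibrary.TryGetValue(word, out prefab))
            {
                // Place the model one meter in front of the user,
                // standing on the ground plane y = groundY.
                Vector3 pos = userPos + new Vector3(0f, 0f, 1f);
                pos.y = groundY;
                Instantiate(prefab, pos, Quaternion.identity);
            }
        }
    }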

Step S4: design the interaction gestures and voice commands.

Specifically, design the interaction gestures and determine the set of body movements for each operation. For example: hovering the arm indicates selecting an object or clicking a button; moving the arm indicates moving the mouse or translating the model; moving the two hands apart or together indicates scaling the model; rotating the two hands as if holding a ball indicates rotating the model. Simple interactions are also implemented through voice, for example showing or hiding the model, and playing or pausing multimedia.
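
As one example, the two-hand scaling gesture can be detected from the change in distance between the hand joints, along the following lines (a sketch; extraction of the hand joints from the Kinect skeleton stream is assumed to happen elsewhere, and the names are illustrative):

    using UnityEngine;

    // Illustrative sketch: scale the registered model according to the
    // change in distance between the user's two hands.
    public class ScaleGesture : MonoBehaviour
    {
        public Transform model;           // the registered virtual model
        private float lastHandDistance = -1f;

        // Called once per skeleton frame with both hand positions
        // already converted to Unity3D coordinates.
        public void OnHandsUpdated(Vector3 leftHand, Vector3 rightHand)
        {
            float distance = Vector3.Distance(leftHand, rightHand);
            if (lastHandDistance > 0f)
            {
                // Hands moving apart enlarge the model; hands moving
                // together shrink it.
                model.localScale *= distance / lastHandDistance;
            }
            lastHandDistance = distance;
        }
    }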

Step S5: determine the displacement animations and multimedia effects of the model controlled by Unity3D.

Specifically, operate on the model according to the user's body movements. Use the Transform component of the GameObject in the Unity3D SDK to translate, rotate, and scale the model; use the Mecanim animation system to make the model perform designed interactive actions toward the user, such as following, running, and guiding; and use the Audio and MovieTexture components to control multimedia effects.
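
The following sketch indicates how these Unity3D components might be driven (the animator trigger name and the method names are illustrative assumptions, not part of the invention):

    using UnityEngine;

    // Illustrative sketch: drive model displacement with the Transform
    // component and trigger Mecanim states in response to interactions.
    public class ModelController : MonoBehaviour
    {
        public Transform model;
        public Animator animator;        // Mecanim animator on the model
        public AudioSource audioSource;  // multimedia playback

        // Translate the model by a delta derived from arm movement.
        public void TranslateModel(Vector3 delta)
        {
            model.Translate(delta, Space.World);
        }

        // Rotate the model from the "holding a ball" two-hand gesture.
        public void RotateModel(float degreesAroundY)
        {
            model.Rotate(0f, degreesAroundY, 0f, Space.World);
        }

        // Make the model follow the user via a Mecanim state, e.g. a
        // "Follow" trigger defined in the animator controller.
        public void StartFollowing()
        {
            animator.SetTrigger("Follow");
        }

        // Voice commands: play or pause the multimedia effect.
        public void PlayAudio()  { audioSource.Play(); }
        public void PauseAudio() { audioSource.Pause(); }
    }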

Step S6: fuse and display the image rendered by the camera in Unity3D with the image captured by the Kinect camera.

Specifically, step S6 further comprises:

Step S61: perform sampling or interpolation on the two images to scale them to the size of the destination image.

Step S62: traverse the two processed images and compare the depth values of the points in the two images corresponding to each pixel of the destination image;

Step S63: set the color value of the corresponding point in the destination image to the color value of the pixel with the smaller depth value.
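
A per-pixel sketch of this depth-based fusion follows, assuming both images have already been scaled to the destination size and that a depth value per pixel is available for each source (the array layout and names are illustrative):

    using UnityEngine;

    // Illustrative sketch: fuse the Unity3D render and the Kinect camera
    // image by keeping, at each pixel, the color of the nearer source.
    public static class DepthFusion
    {
        public static Color32[] Fuse(
            Color32[] unityColor, float[] unityDepth,
            Color32[] kinectColor, float[] kinectDepth)
        {
            var result = new Color32[unityColor.Length];
            for (int i = 0; i < result.Length; i++)
            {
                // The smaller depth value is closer to the camera, so
                // that pixel occludes the corresponding pixel of the
                // other source.
                result[i] = unityDepth[i] <= kinectDepth[i]
                    ? unityColor[i]
                    : kinectColor[i];
            }
            return result;
        }
    }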

The multi-interaction method based on Kinect and Unity3D disclosed by the present invention is an augmented reality technique with a simple three-dimensional registration operation. It uses Kinect's speech-recognition support and human-skeleton tracking to add trigger modes for the three-dimensional registration of virtual models, provides users with more interaction modes through body-movement recognition, and improves the user experience; it uses the Unity3D engine to process model poses automatically, greatly simplifying the steps required for three-dimensional registration. In other words, by combining a somatosensory interaction device with a three-dimensional game engine, it simplifies the three-dimensional registration process, adds trigger modes for three-dimensional registration, enriches the user's interaction channels, and improves the user's operating experience.

As shown in Fig. 2, the present invention provides a multi-interaction device 10 based on Kinect and Unity3D, comprising: an adjustment module 101, a coordinate and ground-equation determination module 102, a virtual model registration module 103, a design module 104, an effect determination module 105, and an image fusion module 106.

Specifically, the adjustment module 101 is configured to adjust the camera parameters in Unity3D to match the effective detection range of the Kinect; the coordinate and ground-equation determination module 102 is configured to use the Kinect to determine the user coordinates and the ground plane equation; the virtual model registration module 103 is configured to determine the virtual model coordinates from relative positions and register the virtual model; the design module 104 is configured to design interaction gestures and voice commands; the effect determination module 105 is configured to determine the displacement animations and multimedia effects of the model controlled by Unity3D; and the image fusion module 106 is configured to fuse and display the image rendered by the camera in Unity3D with the image captured by the Kinect camera.

The multi-interaction method based on Kinect and Unity3D disclosed by the present invention first simplifies the conversion between the real-scene coordinate system and the virtual-scene coordinate system by setting the position and properties of the camera in Unity3D. Second, it uses the Kinect to obtain the user's corresponding coordinates in Unity as well as the plane equation representing the ground, after which the three-dimensional registration coordinates can be determined from the relative position of the virtual model to be registered with respect to the ground and the user; the mechanism for triggering registration is thus more flexible, as registration can be triggered when the user moves to a specific position or by the speech-recognition module. Third, the interaction modes after model registration are enriched, so the model can be operated and interacted with through body movements and voice. Finally, the Transform component and the Mecanim animation system in Unity3D are used to simplify the implementation of model displacement changes and animation effects. The invention also discloses a multi-interaction device based on Kinect and Unity3D.

The above embodiments are only intended to illustrate the present invention and not to limit it. Those of ordinary skill in the relevant art can make various changes and modifications without departing from the spirit and scope of the present invention; therefore, all equivalent technical solutions also fall within the scope of the present invention, and the scope of patent protection of the present invention shall be defined by the claims.

Although the embodiments of the present invention have been described with reference to the accompanying drawings, those skilled in the art can make various modifications and variations without departing from the spirit and scope of the present invention, and such modifications and variations all fall within the scope defined by the appended claims.

Claims (9)

1. A multi-interaction method based on Kinect and Unity3D, characterized by comprising the following specific steps:
S1: adjusting the camera parameters in Unity3D to match the effective detection range of the Kinect;
S2: using the Kinect to determine the user coordinates and the ground plane equation;
S3: determining the virtual model coordinates from relative positions, and registering the virtual model;
S4: designing interaction gestures and voice commands;
S5: determining the displacement animations and multimedia effects of the model controlled by Unity3D;
S6: fusing and displaying the image rendered by the camera in Unity3D with the image captured by the Kinect camera.
2. The method according to claim 1, characterized in that step S1 further comprises: placing the Kinect at a preset position in the real scene, and adjusting the real scene so that it lies within the effective detection range of the Kinect.
3. The method according to claim 1, characterized in that step S1 further comprises: adjusting the Field of View and Clipping Planes parameters of the camera in Unity3D.
4. The method according to claim 1, characterized in that step S2 further comprises:
S21: using the SkeletonFrame.FloorClipPlane function to determine the plane equation representing the ground, wherein the plane equation in the Kinect coordinate system is Ax+By+Cz+D=0, with (A, B, C) being the plane normal vector of the plane equation, and the plane equation in the Unity3D coordinate system is y+E=0, with (0, 1, 0) being the plane normal vector of the plane equation;
S22: rotating (A, B, C) to coincide with (0, 1, 0), completing the registration of the Kinect coordinate system with the Unity3D coordinate system.
5. The method according to claim 4, characterized in that the registration of the Kinect coordinate system with the Unity3D coordinate system further comprises: when an arbitrary point (k1, k2, k3) in the Kinect coordinate system is converted to the Unity3D coordinate system, it is rotated about the X axis by -arctan(B/C) and about the Z axis by arctan(A/B), and the rotated coordinates are (k1 cos α - (k2 cos β - k3 sin β) sin α, k1 sin α + (k2 cos β - k3 sin β) cos α, k2 sin β + k3 cos β), where α = arctan(A/B) and β = -arctan(B/C).
6. The method according to claim 1, characterized in that step S6 further comprises:
S61: performing sampling or interpolation on the two images;
S62: traversing the two processed images and comparing the depth values of the points in the two images corresponding to each pixel of the destination image;
S63: setting the color value of the corresponding point in the destination image to the color value of the pixel with the smaller depth value.
7. The method according to claim 1, characterized in that the registration of the virtual model can also be triggered by the user moving to a special position, which triggers a default model.
8. The method according to claim 1, characterized in that the registration of the virtual model can also be triggered by the user's voice, which triggers registration of the corresponding model.
9. A multi-interaction device based on Kinect and Unity3D, characterized by comprising:
an adjustment module, configured to adjust the camera parameters in Unity3D to match the effective detection range of the Kinect;
a coordinate and ground-equation determination module, configured to use the Kinect to determine the user coordinates and the ground plane equation;
a virtual model registration module, configured to determine the virtual model coordinates from relative positions and register the virtual model;
a design module, configured to design interaction gestures and voice commands;
an effect determination module, configured to determine the displacement animations and multimedia effects of the model controlled by Unity3D;
an image fusion module, configured to fuse and display the image rendered by the camera in Unity3D with the image captured by the Kinect camera.
CN201410381549.XA 2014-08-05 2014-08-05 Multi-interaction method and device based on Kinect and Unity3D Active CN104360729B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410381549.XA CN104360729B (en) Multi-interaction method and device based on Kinect and Unity3D

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410381549.XA CN104360729B (en) Multi-interaction method and device based on Kinect and Unity3D

Publications (2)

Publication Number Publication Date
CN104360729A CN104360729A (en) 2015-02-18
CN104360729B true CN104360729B (en) 2017-10-10

Family

ID=52527997

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410381549.XA Active CN104360729B (en) Multi-interaction method and device based on Kinect and Unity3D

Country Status (1)

Country Link
CN (1) CN104360729B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106125903B (en) * 2016-04-24 2021-11-16 林云帆 Multi-person interaction system and method
CN106791478A (en) * 2016-12-15 2017-05-31 山东数字人科技股份有限公司 A kind of three-dimensional data real-time volume display systems
CN107330978B (en) * 2017-06-26 2020-05-22 山东大学 Augmented reality modeling experience system and method based on location mapping
CN107551551B (en) * 2017-08-09 2021-03-26 Oppo广东移动通信有限公司 Game effect construction method and device
CN107861714B (en) * 2017-10-26 2021-03-02 天津科技大学 Development method and system of automobile display application based on Intel RealSense
CN108096836B (en) * 2017-12-20 2021-05-04 深圳市百恩互动娱乐有限公司 Method for making game by real-person real shooting
CN109089017A (en) * 2018-09-05 2018-12-25 宁波梅霖文化科技有限公司 Magic virtual bench
CN109782911B (en) * 2018-12-30 2022-02-08 广州嘉影软件有限公司 Whole body motion capture method and system based on virtual reality
CN110728739B (en) * 2019-09-30 2023-04-14 杭州师范大学 A Virtual Human Control and Interaction Method Based on Video Stream
CN113709537B (en) * 2020-05-21 2023-06-13 云米互联科技(广东)有限公司 User interaction method based on 5G television, 5G television and readable storage medium
CN111913577A (en) * 2020-07-31 2020-11-10 武汉木子弓数字科技有限公司 Three-dimensional space interaction method based on Kinect

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103049618A (en) * 2012-12-30 2013-04-17 江南大学 Intelligent home displaying method on basis of Kinect
CN103181157A (en) * 2011-07-28 2013-06-26 三星电子株式会社 Plane-characteristic-based markerless augmented reality system and method for operating same

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110169927A1 (en) * 2010-01-13 2011-07-14 Coco Studios Content Presentation in a Three Dimensional Environment

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103181157A (en) * 2011-07-28 2013-06-26 三星电子株式会社 Plane-characteristic-based markerless augmented reality system and method for operating same
CN103049618A (en) * 2012-12-30 2013-04-17 江南大学 Intelligent home displaying method on basis of Kinect

Also Published As

Publication number Publication date
CN104360729A (en) 2015-02-18

Similar Documents

Publication Publication Date Title
CN104360729B (en) Multi-interaction method and device based on Kinect and Unity3D
US10181222B2 (en) Method and device for augmented reality display of real physical model
US9256986B2 (en) Automated guidance when taking a photograph, using virtual objects overlaid on an image
JP7337104B2 (en) Model animation multi-plane interaction method, apparatus, device and storage medium by augmented reality
CN102638653B (en) Automatic face tracing method on basis of Kinect
CN104781849B (en) Monocular vision positions the fast initialization with building figure (SLAM) simultaneously
US9268410B2 (en) Image processing device, image processing method, and program
TWI505709B (en) System and method for determining individualized depth information in augmented reality scene
US20120212405A1 (en) System and method for presenting virtual and augmented reality scenes to a user
WO2017134886A1 (en) Information processing device, information processing method, and recording medium
JP6456347B2 (en) INSITU generation of plane-specific feature targets
CN109584295A (en) The method, apparatus and system of automatic marking are carried out to target object in image
US20240290299A1 (en) Systems, methods, and media for displaying interactive augmented reality presentations
US20170358120A1 (en) Texture mapping with render-baked animation
KR20130108643A (en) Systems and methods for a gaze and gesture interface
CN115956259A (en) Generating an underlying real dataset for a virtual reality experience
CN106325509A (en) Three-dimensional gesture recognition method and system
CN103530881A (en) Outdoor augmented reality mark-point-free tracking registration method applicable to mobile terminal
US20250014263A1 (en) Systems And Methods For Generating Stabilized Images Of A Real Environment In Artificial Reality
CN108830944B (en) Optical perspective three-dimensional near-to-eye display system and display method
CN107368314A (en) Course Design of Manufacture teaching auxiliary system and development approach based on mobile AR
WO2020253716A1 (en) Image generation method and device
US11961195B2 (en) Method and device for sketch-based placement of virtual objects
Afif et al. Orientation control for indoor virtual landmarks based on hybrid-based markerless augmented reality
CN103700128B (en) Mobile equipment and enhanced display method thereof

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
ASS Succession or assignment of patent right

Owner name: BEIJING RESEARCH CENTER OF INTELLIGENT EQUIPMENT FOR AGRICULTURE

Free format text: FORMER OWNER: BEIJING AGRICULTURE INFORMATION TECHNOLOGY RESEARCH CENTER

Effective date: 20150804

Owner name: BEIJING AGRICULTURE INFORMATION TECHNOLOGY RESEARCH CENTER

Effective date: 20150804

C41 Transfer of patent application or patent right or utility model
TA01 Transfer of patent application right

Effective date of registration: 20150804

Address after: Block 318b, No. 11 building, 100097 Beijing City, Haidian District agricultural A shuguangyuanzhong Road

Applicant after: Beijing Research Center of Intelligent Equipment for Agriculture

Applicant after: Beijing Research Center for Information Technology in Agriculture

Address before: Block 318b, No. 11 building, 100097 Beijing City, Haidian District agricultural A shuguangyuanzhong Road

Applicant before: Beijing Research Center for Information Technology in Agriculture

GR01 Patent grant