WO2016107230A1 - System and method for reproducing objects in a 3D scene - Google Patents

System and method for reproducing objects in a 3D scene

Info

Publication number
WO2016107230A1
Authority
WO
WIPO (PCT)
Prior art keywords
shape
information
real time
real
changes
Prior art date
Application number
PCT/CN2015/090529
Other languages
English (en)
French (fr)
Inventor
姜茂山
张向军
周宏伟
Original Assignee
青岛歌尔声学科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 青岛歌尔声学科技有限公司
Priority to US15/313,446 (now US9842434B2)
Priority to JP2017509026A (JP2017534940A)
Publication of WO2016107230A1
Priority to US15/808,151 (now US10482670B2)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/28Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/593Depth or shape recovery from multiple images from stereo images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/285Analysis of motion using a sequence of stereo image pairs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • G06T2207/10021Stereoscopic video; Stereoscopic image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30241Trajectory

Definitions

  • The present invention relates to the field of virtual reality technologies, and in particular to a system and method for reproducing objects in a 3D scene.
  • The present invention provides a system and method for reproducing an object in a 3D scene, to solve the problem that the prior art cannot truly reproduce an object in a 3D scene.
  • In one aspect, the present invention provides a system for inputting an object in a 3D scene, comprising: an object collection unit, an object recognition unit, an object tracking unit, and an object projection unit;
  • the object collection unit is configured to collect at least two channels of video stream data of the object to be displayed, simultaneously and in real time, from different angles;
  • the object recognition unit is configured to identify a real-time changing object shape from the at least two channels of video stream data;
  • the object tracking unit is configured to obtain a corresponding object motion trajectory according to the real-time changing object shape;
  • the object projection unit is configured to process the real-time changing object shape and the corresponding object motion trajectory into a 3D image superimposed and projected in a 3D scene in real time.
  • Preferably, the object recognition unit comprises:
  • a sampling module, configured to sample each of the at least two channels of video stream data separately, obtaining video image data for each sample;
  • a contour extraction module, configured to determine whether the video image data contains an object and, if so, to binarize the video image data and extract object contour information;
  • a shape recognition module, configured to identify, in a preset object model database, the object shape corresponding to the object contour information;
  • a shape synthesis module, configured to synthesize the object shapes recognized from each sample of each channel of video stream data, obtaining the real-time changing object shape.
  • Preferably, the object tracking unit comprises:
  • a position information acquisition module, configured to obtain relative spatial position information of the real-time changing object shape;
  • a contact information acquisition module, configured to obtain change information of the contacts on the real-time changing object shape according to contacts determined on that shape, the contacts being feature key points identifying the object;
  • a motion trajectory acquisition module, configured to obtain the corresponding object motion trajectory from a preset motion trajectory database according to the relative spatial position information and the change information of the contacts.
  • Preferably, the position information acquisition module is specifically configured to:
  • obtain the relative spatial position information of the object according to angle information of the object's shape change and distance information of the object.
  • Preferably, the object projection unit is further configured to process the real-time changing object shape and the corresponding object motion trajectory into a 3D image, and to project the 3D image into the 3D scene using a split-screen technique.
  • In another aspect, the present invention provides a method of inputting an object in a 3D scene, comprising:
  • processing the real-time changing object shape and the corresponding object motion trajectory into a 3D image superimposed and projected in a 3D scene in real time.
  • Identifying the real-time changing object shape from the at least two channels of video stream data includes:
  • synthesizing the object shapes recognized from each sample of each channel of video stream data, obtaining the real-time changing object shape.
  • Obtaining the corresponding object motion trajectory according to the real-time changing object shape comprises:
  • obtaining the corresponding object motion trajectory from a preset motion trajectory database according to the relative spatial position information and the change information of the contacts.
  • Obtaining the relative spatial position information of the real-time changing object shape comprises:
  • obtaining the relative spatial position information of the object according to angle information of the object's shape change and distance information of the object.
  • Processing the real-time changing object shape and the corresponding object motion trajectory into a 3D image superimposed and projected in the 3D scene in real time comprises:
  • projecting the 3D image into the 3D scene using a split-screen technique.
  • The embodiments of the present invention provide a system and method for reproducing an object in a 3D scene.
  • The object collection unit of the system collects at least two channels of video stream data of the object to be displayed, simultaneously and in real time, from different angles.
  • The object recognition unit identifies an object shape carrying complete object information from the at least two channels of video stream data; the object tracking unit obtains the object motion trajectory corresponding to the real-time changing object shape; and the object projection unit processes the real-time changing object shape and the corresponding object motion trajectory into a 3D image superimposed and projected in the 3D scene in real time, thereby achieving the purpose of displaying the real object in the 3D scene.
  • Compared with the prior art, the present invention does not need to redraw the object to be displayed from object models in a database; it can display the captured object image directly, improving the user experience.
  • FIG. 1 is a schematic structural diagram of a system for reproducing an object in a 3D scene according to an embodiment of the present invention;
  • FIG. 2 is a schematic diagram of the technical flow of reproducing an object in a virtual reality head-mounted device according to an embodiment of the present invention;
  • FIG. 3 is a schematic flowchart of a method for reproducing an object in a 3D scene according to an embodiment of the present invention.
  • The overall idea of the invention is to capture an object simultaneously and in real time with at least two cameras from different angles, recognize the object shape from the video stream data collected by each camera, obtain the corresponding object motion trajectory from the recognized object shape, and process the real-time changing object shape and the corresponding motion trajectory into a 3D image superimposed and projected in the 3D scene in real time, thereby reproducing the real object in the 3D scene.
  • FIG. 1 is a schematic structural diagram of a system for reproducing an object in a 3D scene according to an embodiment of the present invention.
  • the system includes: an object collection unit 11, an object recognition unit 12, an object tracking unit 13, and an object projection unit 14.
  • The object collection unit 11 is configured to collect at least two channels of video stream data of the object to be displayed, simultaneously and in real time, from different angles.
  • The object collection unit 11 may capture the object to be displayed in real time from different angles through multiple cameras, thereby obtaining multiple channels of video stream data. In practical applications, a suitable number of cameras can be selected, according to the system's data processing performance and accuracy requirements, to collect the corresponding number of video stream channels. It should be noted that the cameras in the object collection unit 11 may be ordinary white-light cameras or infrared cameras; this embodiment places no particular limitation on the object collection unit.
  • The object recognition unit 12 is configured to identify a real-time changing object shape from the at least two channels of video stream data.
  • The object tracking unit 13 is configured to obtain a corresponding object motion trajectory according to the real-time changing object shape.
  • The object projection unit 14 is configured to process the real-time changing object shape and the corresponding object motion trajectory into a 3D image superimposed and projected in the 3D scene in real time.
  • In practical applications, the object projection unit 14 processes the real-time changing object shape and the corresponding object motion trajectory into a 3D image and projects the 3D image into the 3D scene using a split-screen technique. That is, a main display screen shows the 3D scene, while the real-time changing object shape and corresponding motion trajectory, processed into a 3D image, are shown on another display screen; through relevant optical principles, what is presented to the human eye is a 3D scene containing the object's shape and displacement.
  • The object collection unit of this embodiment collects at least two channels of video stream data of the object in real time; the object recognition unit identifies an object shape carrying complete object information from the at least two channels of video stream data; and after the object tracking unit obtains the corresponding object motion trajectory,
  • the object projection unit processes the real-time changing object shape and the corresponding motion trajectory into a 3D image superimposed and projected in the 3D scene in real time, thereby achieving the purpose of reproducing the real object in the 3D scene.
  • Preferably, the object recognition unit 12 in the embodiment shown in FIG. 1 above comprises: a sampling module, a contour extraction module, a shape recognition module, and a shape synthesis module.
  • The sampling module is configured to sample each of the at least two channels of video stream data separately, obtaining video image data for each sample.
  • The contour extraction module is configured to determine whether the video image data contains an object and, if so, to binarize the video image data and extract the object contour information.
  • The shape recognition module is configured to identify, in a preset object model database, the object shape corresponding to the object contour information.
  • Illustratively, the object model database stores various object models, which may be body parts with vital signs, such as human hands or heads, or mechanical, electronic, or other devices.
  • The shape recognition module can then match the object contour information against the various object models in the object model database to obtain the corresponding object shape.
  • The shape synthesis module is configured to synthesize the object shapes recognized after each sample of each channel of video stream data, obtaining the real-time changing object shape.
  • Since each channel of video stream data captures only part of the object, this embodiment uses the shape synthesis module to synthesize the object shapes recognized after each sample of each channel, obtaining an object shape carrying more information.
  • As described above, the recognition unit identifies the corresponding object shape from the object contour information in each channel of video stream data and synthesizes the objects recognized in the multiple channels, obtaining an object shape containing all the information of the object, thereby enhancing the realism of the objects reproduced in the 3D scene and improving the user experience.
  • Preferably, the object tracking unit in the preferred embodiment shown in FIG. 1 above includes: a position information acquisition module, a contact information acquisition module, and a motion trajectory acquisition module.
  • The position information acquisition module is configured to obtain relative spatial position information of the real-time changing object shape.
  • When multiple cameras photograph the object from different angles at the same moment, the ray from each camera forms an angle with the object; if the object moves or changes, these angles may change, and the changes appear as spatial position changes in the video stream image data. The present technical solution acquires the relative spatial position information of the real-time changing object shape based on this objective fact.
  • The present invention schematically illustrates two ways of acquiring the relative spatial position information of the real-time changing object shape.
  • The first way of obtaining the relative spatial position information of the object shape is as follows:
  • the position information acquisition module obtains angle information of the object's shape change from the video image information of the at least two video data streams collected by the object collection unit, derives the distance information of the object from that angle information, and combines the angle information of the shape change with the distance information of the object to obtain the relative spatial position information of the object.
  • The second way of obtaining the relative spatial position information of the object shape is as follows:
  • the position information acquisition module obtains angle information of the object's shape change from the video image information of the at least two video data streams collected by the object collection unit, senses the distance information of the object in real time through a distance sensor, and combines the angle information of the shape change with the distance information of the object to obtain the relative spatial position information of the object.
  • The first scheme requires no additional sensors: the relative spatial position information of the object shape can be obtained from the information carried by the video stream data alone, but it must be implemented with sophisticated algorithms, which increases the computational complexity of the system.
  • The second scheme senses the object's distance change in real time through a distance sensor and obtains relatively accurate relative spatial position information with a simple algorithm. In practice, a suitable scheme can be chosen according to the specific design requirements.
  • The contact information acquisition module is configured to obtain change information of the contacts on the real-time changing object shape according to contacts determined on that shape, the contacts being feature key points identifying the object.
  • The contacts in this module are feature key points identifying the object, preferably the joint points involved in the object's motion, which allows the real-time changing object shape to be determined more accurately.
  • This technical solution places no particular limitation on the number of contacts on the object shape or on how they are arranged; during the design process, they can be designed specifically by weighing requirements such as system accuracy and the system's data processing capability.
  • The motion trajectory acquisition module is configured to obtain the corresponding object motion trajectory from a preset motion trajectory database according to the relative spatial position information and the change information of the contacts.
  • To describe the beneficial effects of this technical solution in more detail, reproducing an object in a virtual reality head-mounted device is now taken as an example.
  • The virtual reality head-mounted device includes: a display screen for displaying the 3D virtual reality scene, and the system for reproducing an object in a 3D scene of the above technical solution, where the object collection unit of that system consists of a front camera and a bottom camera arranged on the virtual reality head-mounted device.
  • The working principle of reproducing the object in the virtual reality head-mounted device is: the object is captured in real time by the front camera and the bottom camera simultaneously, yielding two channels of video stream data, and the object shape is identified from the two channels of video stream data;
  • the corresponding object displacement is obtained from the real-time changing object shape, and the object displacement is processed into a 3D image superimposed and projected in the 3D virtual reality scene in real time.
  • S201: Sample each of the two channels of video stream data at the current moment to obtain the corresponding video images.
  • S202: Determine whether there is an object in the video images. If yes, go to step S203; if not, acquire the video stream data of the next moment.
  • S203: Binarize the video image data and extract the object contour information.
  • S205: Synthesize the object shapes identified from the samples of the two video streams, obtaining an object shape containing more object information.
  • In this embodiment, the system for reproducing an object in a 3D scene is applied in a virtual reality head-mounted device, which can reproduce the object in the device and display changes in the object's shape and displacement in real time, thereby improving the user experience.
  • FIG. 3 is a flowchart of a method for reproducing an object in a 3D scene according to an embodiment of the present disclosure, the method comprising:
  • S300: Collect at least two channels of video stream data of the object to be displayed, simultaneously and in real time, from different angles.
  • The object shapes recognized from each sample of each channel of video stream data are synthesized, obtaining the real-time changing object shape.
  • The corresponding object motion trajectory is obtained from a preset motion trajectory database.
  • Obtaining the relative spatial position information of the real-time changing object shape includes:
  • obtaining the relative spatial position information of the object according to angle information of the object's shape change and distance information of the object.
  • S303: Process the real-time changing object shape and the corresponding object motion trajectory into a 3D image superimposed and projected in the 3D scene in real time.
  • Preferably, the 3D image is projected into the 3D scene using a split-screen technique.
  • The embodiments of the present invention disclose a system and method for reproducing an object in a 3D scene.
  • The object collection unit of the system collects at least two channels of video stream data simultaneously and in real time from different angles; the object recognition unit identifies an object shape carrying complete object information from the at least two channels of video stream data; the object tracking unit obtains the object motion trajectory corresponding to the real-time changing object shape; and the object projection unit processes the real-time changing object shape and the corresponding object motion trajectory into a 3D image superimposed and projected in the 3D scene in real time, thereby achieving the purpose of displaying the real object in the 3D scene.
  • Compared with the prior art, the present invention does not need to redraw the object to be displayed from object models in a database; it can display the captured object image directly, improving the user experience.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Geometry (AREA)
  • Computer Hardware Design (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention discloses a system and method for reproducing an object in a 3D scene. The system includes: an object collection unit for collecting at least two channels of video stream data of the object to be displayed, simultaneously and in real time, from different angles; an object recognition unit for identifying a real-time changing object shape from the at least two channels of video stream data; an object tracking unit for obtaining a corresponding object motion trajectory according to the real-time changing object shape; and an object projection unit for processing the real-time changing object shape and the corresponding object motion trajectory into a 3D image superimposed and projected in a 3D scene in real time. The technical solution of the present invention can reproduce an object in a 3D scene, achieving the purpose of displaying a real object in the 3D scene.

Description

System and method for reproducing objects in a 3D scene
Technical Field
The present invention relates to the field of virtual reality technologies, and in particular to a system and method for reproducing objects in a 3D scene.
Background of the Invention
Virtual reality technology will develop into a new breakthrough that changes the way people live. At present, how to interact with targets in the virtual world remains a huge challenge facing virtual reality technology, so virtual reality still has a long way to go before truly entering the consumer market.
The various existing virtual reality devices still obstruct the communication between the user and the virtual world; for example, changes in an object's shape and displacement cannot be displayed in real time in a 3D scene, so the object cannot be truly reproduced.
Summary of the Invention
The present invention provides a system and method for reproducing an object in a 3D scene, to solve the problem that the prior art cannot truly reproduce an object in a 3D scene.
To achieve the above object, the technical solution of the present invention is realized as follows:
In one aspect, the present invention provides a system for inputting an object in a 3D scene, comprising: an object collection unit, an object recognition unit, an object tracking unit, and an object projection unit;
the object collection unit is configured to collect at least two channels of video stream data of the object to be displayed, simultaneously and in real time, from different angles;
the object recognition unit is configured to identify a real-time changing object shape from the at least two channels of video stream data;
the object tracking unit is configured to obtain a corresponding object motion trajectory according to the real-time changing object shape;
the object projection unit is configured to process the real-time changing object shape and the corresponding object motion trajectory into a 3D image superimposed and projected in a 3D scene in real time.
Preferably, the object recognition unit includes:
a sampling module, configured to sample each of the at least two channels of video stream data separately, obtaining video image data for each sample;
a contour extraction module, configured to determine whether the video image data contains an object and, if so, to binarize the video image data and extract object contour information;
a shape recognition module, configured to identify, in a preset object model database, the object shape corresponding to the object contour information;
a shape synthesis module, configured to synthesize the object shapes recognized from each sample of each channel of video stream data, obtaining the real-time changing object shape.
Preferably, the object tracking unit includes:
a position information acquisition module, configured to obtain relative spatial position information of the real-time changing object shape;
a contact information acquisition module, configured to obtain change information of the contacts on the real-time changing object shape according to contacts determined on that shape, the contacts being feature key points identifying the object;
a motion trajectory acquisition module, configured to obtain the corresponding object motion trajectory from a preset motion trajectory database according to the relative spatial position information and the change information of the contacts.
Preferably, the position information acquisition module is specifically configured to:
obtain angle information of the object's shape change from the video image information of the at least two video data streams;
obtain distance information of the object according to the angle information of the shape change, or sense the distance information of the object in real time through a distance sensor;
obtain relative spatial position information of the object according to the angle information of the shape change and the distance information of the object.
Preferably, the object projection unit is further configured to process the real-time changing object shape and the corresponding object motion trajectory into a 3D image, and to project the 3D image into the 3D scene using a split-screen technique.
In another aspect, the present invention provides a method of inputting an object in a 3D scene, comprising:
collecting at least two channels of video stream data of the object to be displayed, simultaneously and in real time, from different angles;
identifying a real-time changing object shape from the at least two channels of video stream data;
obtaining a corresponding object motion trajectory according to the real-time changing object shape;
processing the real-time changing object shape and the corresponding object motion trajectory into a 3D image superimposed and projected in a 3D scene in real time.
Preferably, identifying the real-time changing object shape from the at least two channels of video stream data includes:
sampling each of the at least two channels of video stream data separately, obtaining video image data for each sample;
determining whether the video image data contains an object and, if so, binarizing the video image data and extracting object contour information;
identifying, in a preset object model database, the object shape corresponding to the object contour information;
synthesizing the object shapes recognized from each sample of each channel of video stream data, obtaining the real-time changing object shape.
Preferably, obtaining the corresponding object motion trajectory according to the real-time changing object shape includes:
obtaining relative spatial position information of the real-time changing object shape;
obtaining change information of the contacts on the real-time changing object shape according to contacts determined on that shape, the contacts being feature key points identifying the object;
obtaining the corresponding object motion trajectory from a preset motion trajectory database according to the relative spatial position information and the change information of the contacts.
Preferably, obtaining the relative spatial position information of the real-time changing object shape includes:
obtaining angle information of the object's shape change from the video image information of the at least two video data streams;
obtaining distance information of the object according to the angle information of the shape change, or sensing the distance information of the object in real time through a distance sensor;
obtaining relative spatial position information of the object according to the angle information of the shape change and the distance information of the object.
Preferably, processing the real-time changing object shape and the corresponding object motion trajectory into a 3D image superimposed and projected in the 3D scene in real time includes:
processing the real-time changing object shape and the corresponding object motion trajectory into a 3D image;
projecting the 3D image into the 3D scene using a split-screen technique.
The beneficial effects of the embodiments of the present invention are as follows: the embodiments disclose a system and method for reproducing an object in a 3D scene, in which the object collection unit of the system collects at least two channels of video stream data of the object to be displayed, simultaneously and in real time, from different angles; the object recognition unit identifies an object shape carrying complete object information from the at least two channels of video stream data; the object tracking unit obtains the object motion trajectory corresponding to the real-time changing object shape; and the object projection unit processes the real-time changing object shape and the corresponding motion trajectory into a 3D image superimposed and projected in the 3D scene in real time, thereby achieving the purpose of displaying the real object in the 3D scene. Compared with the prior art, the present invention does not need to redraw the object to be displayed from object models in a database; it can display the captured object image directly, improving the user experience.
Brief Description of the Drawings
The accompanying drawings are provided for a further understanding of the present invention and constitute part of the specification; together with the embodiments of the present invention, they serve to explain the present invention and do not limit it. In the drawings:
FIG. 1 is a schematic structural diagram of a system for reproducing an object in a 3D scene according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the technical flow of reproducing an object in a virtual reality head-mounted device according to an embodiment of the present invention;
FIG. 3 is a schematic flowchart of a method for reproducing an object in a 3D scene according to an embodiment of the present invention.
Detailed Description of the Embodiments
To make the objects, technical solutions, and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
The overall idea of the present invention is to capture an object simultaneously and in real time with at least two cameras from different angles, recognize the object shape from the video stream data collected by each camera, obtain the corresponding object motion trajectory from the recognized object shape, and process the real-time changing object shape and the corresponding motion trajectory into a 3D image superimposed and projected in the 3D scene in real time, thereby reproducing the real object in the 3D scene.
FIG. 1 is a schematic structural diagram of a system for reproducing an object in a 3D scene according to an embodiment of the present invention. The system includes: an object collection unit 11, an object recognition unit 12, an object tracking unit 13, and an object projection unit 14.
The object collection unit 11 is configured to collect at least two channels of video stream data of the object to be displayed, simultaneously and in real time, from different angles.
The object collection unit 11 may capture the object to be displayed in real time from different angles through multiple cameras, thereby obtaining multiple channels of video stream data. In practical applications, a suitable number of cameras can be selected, according to the system's data processing performance and accuracy requirements, to collect the corresponding number of video stream channels. It should be noted that the cameras in the object collection unit 11 may be ordinary white-light cameras or infrared cameras; this embodiment places no particular limitation on the object collection unit.
The object recognition unit 12 is configured to identify a real-time changing object shape from the at least two channels of video stream data.
The object tracking unit 13 is configured to obtain a corresponding object motion trajectory according to the real-time changing object shape.
The object projection unit 14 is configured to process the real-time changing object shape and the corresponding object motion trajectory into a 3D image superimposed and projected in the 3D scene in real time.
In practical applications, the object projection unit 14 processes the real-time changing object shape and the corresponding object motion trajectory into a 3D image and projects the 3D image into the 3D scene using a split-screen technique. That is, a main display screen shows the 3D scene, while the real-time changing object shape and corresponding motion trajectory, processed into a 3D image, are shown on another display screen; through relevant optical principles, what is presented to the human eye is a 3D scene containing the object's shape and displacement.
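A minimal sketch of how such a split-screen composition might be realized, assuming the per-eye scene renderings and the object image and mask are already available as NumPy arrays; the disparity value and side-by-side layout are illustrative assumptions rather than details from the patent:

    import numpy as np

    def overlay(eye_view, obj_img, obj_mask, disparity):
        """Superimpose the captured object onto one eye's scene rendering.
        Shifting the overlay horizontally by a per-eye disparity places the
        object at an apparent depth once the two views are fused optically."""
        img = np.roll(obj_img, disparity, axis=1)
        mask = np.roll(obj_mask, disparity, axis=1)
        out = eye_view.copy()
        out[mask] = img[mask]
        return out

    def compose_split_screen(scene_left, scene_right, obj_img, obj_mask, d=4):
        # Opposite shifts for the two eyes create the stereo depth cue.
        left = overlay(scene_left, obj_img, obj_mask, -d)
        right = overlay(scene_right, obj_img, obj_mask, +d)
        return np.hstack([left, right])  # side-by-side frame for the headset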
The object collection unit of this embodiment collects at least two channels of video stream data of the object in real time; the object recognition unit identifies an object shape carrying complete object information from the at least two channels; after the object tracking unit obtains the corresponding object motion trajectory, the object projection unit processes the real-time changing object shape and the corresponding motion trajectory into a 3D image superimposed and projected in the 3D scene in real time, thereby achieving the purpose of reproducing the real object in the 3D scene.
Preferably, the object recognition unit 12 in the embodiment shown in FIG. 1 above includes: a sampling module, a contour extraction module, a shape recognition module, and a shape synthesis module.
The sampling module is configured to sample each of the at least two channels of video stream data separately, obtaining video image data for each sample.
The contour extraction module is configured to determine whether the video image data contains an object and, if so, to binarize the video image data and extract the object contour information.
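A sketch of this contour extraction step using OpenCV; the threshold and minimum-area values are illustrative assumptions, since the patent leaves the binarization parameters open:

    import cv2

    def extract_object_contour(frame, thresh=60, min_area=500.0):
        """Binarize one sampled video image and extract the object contour."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        # "Contains an object" is treated here as: some contour is large enough.
        contours = [c for c in contours if cv2.contourArea(c) >= min_area]
        if not contours:
            return None  # no object in this frame; move on to the next sample
        return max(contours, key=cv2.contourArea)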
The shape recognition module is configured to identify, in a preset object model database, the object shape corresponding to the object contour information.
Illustratively, the object model database stores various object models, which may be body parts with vital signs, such as human hands or heads, or mechanical, electronic, or other devices; the shape recognition module can then match the object contour information against the various object models in the database to obtain the corresponding object shape.
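The patent does not specify the matching algorithm; one plausible realization compares the extracted contour against each stored model contour via Hu-moment invariants, as sketched below:

    import cv2

    def recognize_shape(contour, model_db):
        """Find the model shape whose contour best matches the extracted one.
        model_db: {shape_name: model_contour}. matchShapes compares Hu-moment
        invariants, so a smaller score means a closer match."""
        best_name, best_score = None, float("inf")
        for name, model in model_db.items():
            score = cv2.matchShapes(contour, model, cv2.CONTOURS_MATCH_I1, 0.0)
            if score < best_score:
                best_name, best_score = name, score
        return best_name, best_score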
The shape synthesis module is configured to synthesize the object shapes recognized after each sample of each channel of video stream data, obtaining the real-time changing object shape.
In practical applications, since each channel of video stream data captures only part of the object, the complete object cannot be obtained from a single channel at any one moment; this embodiment therefore uses the shape synthesis module to synthesize the object shapes recognized after each sample of each channel, obtaining an object shape carrying more information.
As described above, the recognition unit identifies the corresponding object shape from the object contour information in each channel of video stream data and synthesizes the objects already recognized in the multiple channels, obtaining an object shape containing all the information of the object, thereby enhancing the realism of the object reproduced in the 3D scene and improving the user experience.
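The synthesis algorithm itself is not specified in the patent. One classical way to fuse silhouettes captured from different angles is shape-from-silhouette carving, sketched here under the assumption of calibrated cameras with known projection functions:

    import numpy as np

    def synthesize_shape(masks, project_fns, grid):
        """Fuse per-view object silhouettes into one volumetric shape estimate.

        masks: list of H x W boolean silhouettes, one per camera view.
        project_fns: per-camera functions mapping (N, 3) world points to
                     (N, 2) pixel coordinates (calibration is assumed).
        grid: (N, 3) array of candidate 3D points (a voxel grid).
        A point survives only if it projects inside the silhouette in every
        view, giving a visual-hull approximation of the object.
        """
        keep = np.ones(len(grid), dtype=bool)
        for mask, project in zip(masks, project_fns):
            uv = np.round(project(grid)).astype(int)
            h, w = mask.shape
            inside = ((uv[:, 0] >= 0) & (uv[:, 0] < w) &
                      (uv[:, 1] >= 0) & (uv[:, 1] < h))
            keep &= inside
            keep[inside] &= mask[uv[inside, 1], uv[inside, 0]]
        return grid[keep]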
Preferably, the object tracking unit in the preferred embodiment shown in FIG. 1 above includes: a position information acquisition module, a contact information acquisition module, and a motion trajectory acquisition module.
The position information acquisition module is configured to obtain relative spatial position information of the real-time changing object shape.
When multiple cameras photograph the object from different angles at the same moment, the ray from each camera forms an angle with the object; if the object moves or changes, the angle formed between each camera's ray and the object may change, and these angle changes appear as spatial position changes in the video stream image data. The present technical solution therefore acquires the relative spatial position information of the real-time changing object shape based on this objective fact.
Specifically, the present invention schematically illustrates two ways of acquiring the relative spatial position information of the real-time changing object shape. The first way of obtaining the relative spatial position information of the object shape is as follows:
the position information acquisition module obtains angle information of the object's shape change from the video image information of the at least two video data streams collected by the object collection unit, derives the distance information of the object from that angle information, and combines the angle information of the shape change with the distance information of the object to obtain the relative spatial position information of the object.
The second way of obtaining the relative spatial position information of the object shape is as follows:
the position information acquisition module obtains angle information of the object's shape change from the video image information of the at least two video data streams collected by the object collection unit, senses the distance information of the object in real time through a distance sensor, and combines the angle information of the shape change with the distance information of the object to obtain the relative spatial position information of the object.
Both schemes improve the accuracy of the obtained relative spatial position information by combining the angle information of the object's shape change with real-time distance information of the object. The first scheme requires no additional sensors: the relative spatial position information of the object shape can be obtained from the information carried by the video stream data alone, but it must be implemented with sophisticated algorithms, which increases the computational complexity of the system. The second scheme senses the object's distance change in real time through a distance sensor and obtains relatively accurate relative spatial position information with a simple algorithm. In practice, a suitable scheme can be chosen according to the specific design requirements.
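The first scheme essentially amounts to triangulating distance from the viewing angles of two cameras with a known baseline, while the second substitutes a sensor reading. A hedged sketch follows; the geometry assumes two parallel cameras and ignores lens calibration, and all parameter values are illustrative:

    import numpy as np

    def distance_from_angles(angle_a, angle_b, baseline):
        """Scheme 1: triangulate distance from the two cameras' viewing angles.
        angle_a, angle_b: angles (radians) between each camera's optical axis
        and its ray to the object; the cameras are `baseline` metres apart."""
        denom = np.tan(angle_a) + np.tan(angle_b)
        if abs(denom) < 1e-9:
            raise ValueError("rays are parallel; distance is unobservable")
        return baseline / denom

    def relative_position(angle_a, angle_b, baseline=0.1, sensor_distance=None):
        """Combine angle information with distance information. Supplying a
        sensor_distance reading corresponds to scheme 2 (distance sensor)."""
        d = (sensor_distance if sensor_distance is not None
             else distance_from_angles(angle_a, angle_b, baseline))
        x = d * np.tan(angle_a) - baseline / 2.0  # lateral offset from centre
        return np.array([x, d])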
The contact information acquisition module is configured to obtain change information of the contacts on the real-time changing object shape according to contacts determined on that shape, the contacts being feature key points identifying the object.
It should be noted that the contacts in this module are feature key points identifying the object, preferably the joint points involved in the object's motion, which allows the real-time changing object shape to be determined more accurately. This technical solution places no particular limitation on the number of contacts on the object shape or on how they are arranged; during the design process, they can be designed specifically by weighing requirements such as system accuracy and the system's data processing capability.
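How the change information of the contacts is measured is left open by the patent; one common choice is to track the key points between consecutive frames with pyramidal Lucas-Kanade optical flow, as sketched below:

    import cv2
    import numpy as np

    def track_contacts(prev_gray, curr_gray, contacts):
        """Track contact points (e.g. joint key points) across two frames and
        return their new positions plus per-contact displacement vectors."""
        pts = contacts.reshape(-1, 1, 2).astype(np.float32)
        new_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                         pts, None)
        new_pts = new_pts.reshape(-1, 2)
        ok = status.reshape(-1) == 1  # keep only successfully tracked points
        return new_pts[ok], (new_pts - contacts.astype(np.float32))[ok]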
The motion trajectory acquisition module is configured to obtain the corresponding object motion trajectory from a preset motion trajectory database according to the relative spatial position information and the change information of the contacts.
To describe the beneficial effects of this technical solution in more detail, reproducing an object in a virtual reality head-mounted device is now taken as an example.
The virtual reality head-mounted device includes: a display screen for displaying the 3D virtual reality scene, and the system for reproducing an object in a 3D scene of the above technical solution, where the object collection unit of that system consists of a front camera and a bottom camera arranged on the virtual reality head-mounted device.
The working principle of reproducing the object in the virtual reality head-mounted device is as follows: the object is captured in real time by the front camera and the bottom camera simultaneously, yielding two channels of video stream data; the object shape is identified from the two channels of video stream data; the corresponding object displacement is obtained from the real-time changing object shape; and the object displacement is processed into a 3D image superimposed and projected in the 3D virtual reality scene in real time.
The technical flow of obtaining the object displacement from the video stream data and reproducing that displacement in the 3D virtual reality head-mounted device is shown in FIG. 2:
S200: Acquire the two channels of video stream data collected simultaneously and in real time from different angles by the front camera and the bottom camera.
S201: Sample each of the two channels of video stream data at the current moment to obtain the corresponding video images.
S202: Determine whether there is an object in the video images. If yes, go to step S203; if not, acquire the video stream data of the next moment.
S203: Binarize the video image data and extract the object contour information.
S204: Identify the current object shape from the object contour information according to the preset object models.
S205: Synthesize the object shapes identified after sampling the two channels of video stream data, obtaining an object shape containing more object information.
S206: Acquire the spatial position change information of the object.
S207: Obtain the corresponding displacement of the real-time changing object shape according to the change information of the object contacts and the spatial position change information, using an HMM (Hidden Markov Model) dynamic object recognition method.
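S207 names an HMM-based dynamic recognition method without further detail. A hedged sketch of trajectory matching using the hmmlearn package (a third-party library assumed here purely for illustration): one Gaussian HMM per trajectory class is trained offline, and the class with the highest log-likelihood for the observed sequence wins.

    import numpy as np
    from hmmlearn import hmm  # assumed library; the patent names no implementation

    def build_trajectory_db(training_sequences, n_states=5):
        """Train one Gaussian HMM per motion-trajectory class.
        training_sequences: {name: list of (T_i, D) observation arrays}, where
        each observation row concatenates the contact-change information and
        the relative spatial position at one sample time."""
        db = {}
        for name, seqs in training_sequences.items():
            model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag")
            model.fit(np.vstack(seqs), lengths=[len(s) for s in seqs])
            db[name] = model
        return db

    def match_trajectory(observations, db):
        """Return the trajectory class whose HMM best explains the observations."""
        return max(db, key=lambda name: db[name].score(observations))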
S208: Superimpose and project the above object displacement into the 3D virtual reality scene.
In this embodiment, the system for reproducing an object in a 3D scene is applied in a virtual reality head-mounted device; through this system the object can be reproduced in the device, and the virtual reality head-mounted device displays changes in the object's shape and displacement in real time, thereby improving the user experience.
FIG. 3 is a flowchart of a method for reproducing an object in a 3D scene according to an embodiment of the present invention, the method comprising:
S300: Collect at least two channels of video stream data of the object to be displayed, simultaneously and in real time, from different angles.
S301: Identify the real-time changing object shape from the at least two channels of video stream data.
Specifically:
sample each of the at least two channels of video stream data separately, obtaining video image data for each sample;
determine whether the video image data contains an object and, if so, binarize the video image data and extract the object contour information;
identify, in a preset object model database, the object shape corresponding to the object contour information;
synthesize the object shapes recognized from each sample of each channel of video stream data, obtaining the real-time changing object shape.
S302: Obtain the corresponding object motion trajectory according to the real-time changing object shape.
Specifically: obtain relative spatial position information of the real-time changing object shape;
obtain change information of the contacts on the real-time changing object shape according to contacts determined on that shape, the contacts being feature key points identifying the object;
obtain the corresponding object motion trajectory from a preset motion trajectory database according to the relative spatial position information and the change information of the contacts.
Obtaining the relative spatial position information of the real-time changing object shape includes:
obtaining angle information of the object's shape change from the video image information of the at least two video data streams;
obtaining distance information of the object according to the angle information of the shape change, or sensing the distance information of the object in real time through a distance sensor;
obtaining relative spatial position information of the object according to the angle information of the shape change and the distance information of the object.
S303: Process the real-time changing object shape and the corresponding object motion trajectory into a 3D image superimposed and projected in the 3D scene in real time.
Preferably, the 3D image is projected into the 3D scene using a split-screen technique.
In summary, the embodiments of the present invention disclose a system and method for reproducing an object in a 3D scene. The object collection unit of the system collects at least two channels of video stream data of the object to be displayed, simultaneously and in real time, from different angles; the object recognition unit identifies an object shape carrying complete object information from the at least two channels of video stream data; the object tracking unit obtains the object motion trajectory corresponding to the real-time changing object shape; and the object projection unit processes the real-time changing object shape and the corresponding motion trajectory into a 3D image superimposed and projected in the 3D scene in real time, thereby achieving the purpose of displaying the real object in the 3D scene. Compared with the prior art, the present invention does not need to redraw the object to be displayed from object models in a database; it can display the captured object image directly, improving the user experience.
The above are only preferred embodiments of the present invention and are not intended to limit the protection scope of the present invention. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.

Claims (10)

  1. A system for reproducing an object in a 3D scene, characterized by comprising: an object collection unit, an object recognition unit, an object tracking unit, and an object projection unit;
    the object collection unit is configured to collect at least two channels of video stream data of the object to be displayed, simultaneously and in real time, from different angles;
    the object recognition unit is configured to identify a real-time changing object shape from the at least two channels of video stream data;
    the object tracking unit is configured to obtain a corresponding object motion trajectory according to the real-time changing object shape;
    the object projection unit is configured to process the real-time changing object shape and the corresponding object motion trajectory into a 3D image superimposed and projected in a 3D scene in real time.
  2. The system according to claim 1, characterized in that the object recognition unit comprises:
    a sampling module, configured to sample each of the at least two channels of video stream data separately, obtaining video image data for each sample;
    a contour extraction module, configured to determine whether the video image data contains an object and, if so, to binarize the video image data and extract object contour information;
    a shape recognition module, configured to identify, in a preset object model database, the object shape corresponding to the object contour information;
    a shape synthesis module, configured to synthesize the object shapes recognized from each sample of each channel of video stream data, obtaining the real-time changing object shape.
  3. The system according to claim 1, characterized in that the object tracking unit comprises:
    a position information acquisition module, configured to obtain relative spatial position information of the real-time changing object shape;
    a contact information acquisition module, configured to obtain change information of the contacts on the real-time changing object shape according to contacts determined on that shape, the contacts being feature key points identifying the object;
    a motion trajectory acquisition module, configured to obtain the corresponding object motion trajectory from a preset motion trajectory database according to the relative spatial position information and the change information of the contacts.
  4. The system according to claim 3, characterized in that the position information acquisition module is specifically configured to:
    obtain angle information of the object's shape change from the video image information of the at least two video data streams;
    obtain distance information of the object according to the angle information of the shape change, or sense the distance information of the object in real time through a distance sensor;
    obtain relative spatial position information of the object according to the angle information of the shape change and the distance information of the object.
  5. The system according to claim 1, characterized in that the object projection unit is further configured to process the real-time changing object shape and the corresponding object motion trajectory into a 3D image, and to project the 3D image into the 3D scene using a split-screen technique.
  6. A method of inputting an object in a 3D scene, characterized by comprising:
    collecting at least two channels of video stream data of the object to be displayed, simultaneously and in real time, from different angles;
    identifying a real-time changing object shape from the at least two channels of video stream data;
    obtaining a corresponding object motion trajectory according to the real-time changing object shape;
    processing the real-time changing object shape and the corresponding object motion trajectory into a 3D image superimposed and projected in a 3D scene in real time.
  7. The method according to claim 6, characterized in that identifying the real-time changing object shape from the at least two channels of video stream data comprises:
    sampling each of the at least two channels of video stream data separately, obtaining video image data for each sample;
    determining whether the video image data contains an object and, if so, binarizing the video image data and extracting object contour information;
    identifying, in a preset object model database, the object shape corresponding to the object contour information;
    synthesizing the object shapes recognized from each sample of each channel of video stream data, obtaining the real-time changing object shape.
  8. The method according to claim 6, characterized in that obtaining the corresponding object motion trajectory according to the real-time changing object shape comprises:
    obtaining relative spatial position information of the real-time changing object shape;
    obtaining change information of the contacts on the real-time changing object shape according to contacts determined on that shape, the contacts being feature key points identifying the object;
    obtaining the corresponding object motion trajectory from a preset motion trajectory database according to the relative spatial position information and the change information of the contacts.
  9. The method according to claim 8, characterized in that obtaining the relative spatial position information of the real-time changing object shape comprises:
    obtaining angle information of the object's shape change from the video image information of the at least two video data streams;
    obtaining distance information of the object according to the angle information of the shape change, or sensing the distance information of the object in real time through a distance sensor;
    obtaining relative spatial position information of the object according to the angle information of the shape change and the distance information of the object.
  10. The method according to claim 6, characterized in that processing the real-time changing object shape and the corresponding object motion trajectory into a 3D image superimposed and projected in the 3D scene in real time comprises:
    processing the real-time changing object shape and the corresponding object motion trajectory into a 3D image;
    projecting the 3D image into the 3D scene using a split-screen technique.
PCT/CN2015/090529 2014-12-30 2015-09-24 System and method for reproducing objects in a 3D scene WO2016107230A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US15/313,446 US9842434B2 (en) 2014-12-30 2015-09-24 System and method for reproducing objects in 3D scene
JP2017509026A JP2017534940A (ja) 2014-12-30 2015-09-24 3dシーンでオブジェクトを再現するシステム及び方法
US15/808,151 US10482670B2 (en) 2014-12-30 2017-11-09 Method for reproducing object in 3D scene and virtual reality head-mounted device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410842257.1 2014-12-30
CN201410842257.1A CN104571511B (zh) 2014-12-30 2014-12-30 System and method for reproducing objects in a 3D scene

Related Child Applications (5)

Application Number Title Priority Date Filing Date
US15/313,472 Continuation-In-Part US10466798B2 (en) 2014-12-30 2015-09-24 System and method for inputting gestures in 3D scene
US15/313,472 A-371-Of-International US10466798B2 (en) 2014-12-30 2015-09-24 System and method for inputting gestures in 3D scene
PCT/CN2015/090531 Continuation-In-Part WO2016107231A1 (zh) 2014-12-30 2015-09-24 System and method for inputting gestures in a 3D scene
US15/313,446 A-371-Of-International US9842434B2 (en) 2014-12-30 2015-09-24 System and method for reproducing objects in 3D scene
US15/808,151 Continuation-In-Part US10482670B2 (en) 2014-12-30 2017-11-09 Method for reproducing object in 3D scene and virtual reality head-mounted device

Publications (1)

Publication Number Publication Date
WO2016107230A1 true WO2016107230A1 (zh) 2016-07-07

Family

ID=53087789

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/090529 WO2016107230A1 (zh) 2014-12-30 2015-09-24 System and method for reproducing objects in a 3D scene

Country Status (4)

Country Link
US (1) US9842434B2 (zh)
JP (1) JP2017534940A (zh)
CN (1) CN104571511B (zh)
WO (1) WO2016107230A1 (zh)


Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10482670B2 (en) 2014-12-30 2019-11-19 Qingdao Goertek Technology Co., Ltd. Method for reproducing object in 3D scene and virtual reality head-mounted device
CN104571511B (zh) 2014-12-30 2018-04-27 青岛歌尔声学科技有限公司 System and method for reproducing objects in a 3D scene
US10471304B2 (en) 2016-03-08 2019-11-12 Sportsmedia Technology Corporation Systems and methods for integrated automated sports data collection and analytics platform
CN106843532A (zh) 2017-02-08 2017-06-13 北京小鸟看看科技有限公司 Method and apparatus for implementing a virtual reality scene
EP3425907B1 (en) * 2017-07-03 2022-01-05 Vestel Elektronik Sanayi ve Ticaret A.S. Display device and method for rendering a three-dimensional image
DE102017211518A1 (de) * 2017-07-06 2019-01-10 Bayerische Motoren Werke Aktiengesellschaft Method for generating a virtual environment for a user in a vehicle, corresponding virtual reality system, and vehicle
US10832055B2 (en) * 2018-01-31 2020-11-10 Sportsmedia Technology Corporation Systems and methods for providing video presentation and video analytics for live sporting events
CN109407826B (zh) * 2018-08-31 2020-04-07 百度在线网络技术(北京)有限公司 Ball sport simulation method and apparatus, storage medium, and electronic device
CN110211661B (zh) * 2019-06-05 2021-05-28 山东大学 Mixed-reality-based hand function training system and data processing method
CN110276841B (zh) * 2019-06-27 2023-11-24 北京小米移动软件有限公司 Motion trajectory determination method and apparatus applied to augmented reality devices, and terminal
CN110930805A (zh) * 2019-12-20 2020-03-27 国网湖北省电力公司咸宁供电公司 Three-dimensional simulation system for a substation
CN111010561A (zh) * 2019-12-20 2020-04-14 上海沃咨信息科技有限公司 Virtual reality projection system based on VR technology


Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0926318A (ja) * 1995-07-13 1997-01-28 Seikosha Co Ltd Distance measuring device
JP3660492B2 (ja) * 1998-01-27 2005-06-15 株式会社東芝 Object detection device
CN1304931C (zh) * 2005-01-27 2007-03-14 北京理工大学 Head-mounted stereo vision gesture recognition device
US8755569B2 (en) * 2009-05-29 2014-06-17 University Of Central Florida Research Foundation, Inc. Methods for recognizing pose and action of articulated objects with collection of planes in motion
US8832574B2 (en) * 2009-06-30 2014-09-09 Nokia Corporation Apparatus and associated methods
CN101742348A (zh) * 2010-01-04 2010-06-16 中国电信股份有限公司 Rendering method and system
US8631355B2 (en) * 2010-01-08 2014-01-14 Microsoft Corporation Assigning gesture dictionaries
US8751215B2 (en) * 2010-06-04 2014-06-10 Microsoft Corporation Machine based sign language interpreter
CN102156859B (zh) * 2011-04-21 2012-10-03 刘津甦 Method for sensing hand posture and spatial position
US20140307920A1 (en) * 2013-04-12 2014-10-16 David Holz Systems and methods for tracking occluded objects in three-dimensional space
JP5833526B2 (ja) * 2012-10-19 2015-12-16 日本電信電話株式会社 Video communication system and video communication method
DE102012111304A1 (de) * 2012-11-22 2014-05-22 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Device, method, and computer program for reconstructing a motion of an object
JP6132659B2 (ja) * 2013-02-27 2017-05-24 シャープ株式会社 Surrounding environment recognition device, autonomous mobile system using the same, and surrounding environment recognition method
CN103914152B (zh) * 2014-04-11 2017-06-09 周光磊 Recognition method and system for multi-point touch and captured gesture motion in three-dimensional space
CN103927016B (zh) * 2014-04-24 2017-01-11 西北工业大学 Real-time three-dimensional two-hand gesture recognition method and system based on binocular vision
US20150379770A1 (en) * 2014-06-27 2015-12-31 David C. Haley, JR. Digital action in response to object interaction

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1747559A (zh) * 2005-07-29 2006-03-15 北京大学 Three-dimensional geometric modeling system and method
CN102722249A (zh) * 2012-06-05 2012-10-10 上海鼎为软件技术有限公司 Control method, control device, and electronic device
CN104571511A (zh) * 2014-12-30 2015-04-29 青岛歌尔声学科技有限公司 System and method for reproducing objects in a 3D scene
CN204463031U (zh) * 2014-12-30 2015-07-08 青岛歌尔声学科技有限公司 System for reproducing objects in a 3D scene, and virtual reality head-mounted device

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108334818A (zh) * 2017-01-17 2018-07-27 罗伯特·博世有限公司 Method and device for recognizing objects in a vehicle
CN108334818B (zh) 2017-01-17 2023-06-27 罗伯特·博世有限公司 Method and device for recognizing objects in a vehicle

Also Published As

Publication number Publication date
US20170098331A1 (en) 2017-04-06
CN104571511A (zh) 2015-04-29
US9842434B2 (en) 2017-12-12
CN104571511B (zh) 2018-04-27
JP2017534940A (ja) 2017-11-24

Similar Documents

Publication Publication Date Title
WO2016107230A1 (zh) 2016-07-07 System and method for reproducing objects in a 3D scene
WO2016107231A1 (zh) 2016-07-07 System and method for inputting gestures in a 3D scene
WO2019066563A1 (en) DETERMINATION AND FOLLOW-UP OF CAMERA INSTALLATION
CN111327788B (zh) Camera group synchronization method, temperature measurement method, apparatus, and electronic system
WO2012091326A2 (ko) Three-dimensional real-time street view system using unique identification information
US20150193970A1 (en) Video playing method and system based on augmented reality technology and mobile terminal
WO2018048000A1 (ko) Apparatus and method for three-dimensional image analysis based on a single camera, and computer-readable medium storing a program for three-dimensional image analysis
WO2012124852A1 (ko) Stereo camera apparatus capable of tracking the path of an object in a surveillance area, and surveillance system and method using the same
WO2012173373A2 (ko) Three-dimensional apparatus and three-dimensional game apparatus using virtual touch
WO2012005387A1 (ko) Method and system for monitoring object movement over a wide area using multiple cameras and an object tracking algorithm
WO2013100239A1 (ko) Image processing method and apparatus for a stereo vision system
WO2016155284A1 (zh) Information collection method for a terminal, and terminal
CN206105869U (zh) Rapid robot teaching device
CN104717426A (zh) Multi-camera video synchronization apparatus and method based on an external sensor
WO2014133251A1 (ko) Matching point extraction system using feature points of LSH-algorithm data query results, and method therefor
WO2019156543A2 (ko) Method for determining a representative image of a video, and electronic device for processing the method
WO2013014872A1 (ja) Image conversion device, camera, video system, image conversion method, and recording medium recording a program
WO2015196878A1 (zh) Television virtual touch method and system
CN103006332B (zh) Scalpel tracking method and device, and digital stereoscopic microscope system
WO2012034469A1 (zh) Gesture-based human-computer interaction method and system, and computer storage medium
WO2019098421A1 (ko) Object reconstruction apparatus using motion information, and object reconstruction method using the same
WO2012133962A1 (ko) Apparatus and method for recognizing three-dimensional motion using a stereo camera
WO2019083073A1 (ko) Traffic information providing method and device, and computer program stored in a medium for executing the method
WO2012074174A1 (ko) Augmented reality implementation system using unique identification information
WO2022182096A1 (en) Real-time limb motion tracking

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15874915

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 15313446

Country of ref document: US

ENP Entry into the national phase

Ref document number: 2017509026

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15874915

Country of ref document: EP

Kind code of ref document: A1