WO2018170678A1 - Head-mounted display device and gesture motion recognition method thereof - Google Patents

Head-mounted display device and gesture motion recognition method thereof

Info

Publication number
WO2018170678A1
Authority
WO
WIPO (PCT)
Prior art keywords
hand
user
image
central processing
display device
Prior art date
Application number
PCT/CN2017/077291
Other languages
English (en)
French (fr)
Inventor
廖建强
Original Assignee
廖建强
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 廖建强 filed Critical 廖建强
Priority to PCT/CN2017/077291 priority Critical patent/WO2018170678A1/zh
Publication of WO2018170678A1 publication Critical patent/WO2018170678A1/zh

Definitions

  • the present invention relates to the field of human-computer interaction technologies, and in particular, to a head-mounted display device and a gesture recognition method thereof.
  • head-mounted display devices are widely used in virtual reality (Virtual Reality) technology; such a device displays a stereoscopic image by means of parallax, and a user can view the stereoscopic image through the head-mounted display device.
  • the stereoscopic image can also be blended into the real environment in which the user is located, so that the user gets the feeling of being immersed in a virtual environment.
  • typically, a head-mounted display device only lets the user visually view stereoscopic virtual images.
  • in order to realize interaction control between the user and the head-mounted display device, the user often needs to wear a sensing glove; the sensing glove can recognize the user's gestures and so heighten the user's sense of reality when viewing images, but such gloves are often expensive and inconvenient to carry.
  • as head-mounted display devices become widespread, users desire to realize human-machine interaction control with the head-mounted display device without needing a sensing glove, thereby improving the user experience of the device.
  • the technical problem to be solved by the present invention is that the head-mounted display devices of the prior art require the user to wear a sensing glove for gesture recognition, which greatly hinders improvement of the human-computer interaction performance of such devices. The present invention therefore provides an apparatus and method that recognize the user's gesture motions through the binocular vision detection function of the head-mounted display device, enabling the user to perform human-computer interaction control with the head-mounted display device without wearing a sensing glove.
  • an embodiment of the present invention provides a head mounted display device, which includes a display module, a binocular camera module, and a central processing module, wherein:
  • the display module is configured to display a stereoscopic image
  • the binocular camera module is configured to acquire real space environment information and an image about a hand
  • the central processing module is configured to control the display module and the binocular camera module, and is capable of establishing a corresponding virtual coordinate system according to the real space environment information and constructing a structural model of the hand based on the image of the hand, wherein the structural model is associated with the virtual coordinate system;
  • the display module is capable of switching between a video display mode and a see-through display mode, wherein the see-through display mode blends a real space environment image captured by the binocular camera module into the video stream image;
  • the central processing module constructs a structural model of the hand by extracting feature point information in an image of the hand;
  • the central processing module is capable of mapping the amount of movement or pose of the hand in the real space environment into the structural model.
  • an embodiment of the present invention further provides a gesture motion recognition method for a head mounted display device, where the head mounted display device includes a display module, a binocular camera module, and a central processing module, and the gesture motion recognition method includes:
  • the central processing module acquires an image of the user's hand and information about the real space environment captured by the binocular camera module;
  • the central processing module acquires a structural model of the user's hand according to the image of the user's hand, and establishes a virtual coordinate system about the real space environment according to the information of the real space environment;
  • the central processing module acquires, in real time, the movement amount or pose of the user's hand in the real space environment as detected by the binocular camera module, and maps the movement amount or pose into the structural model;
  • the image of the user's hand captured by the binocular camera module includes an image of any one of the palm, the back of the hand, or a finger, or a combination thereof;
  • the central processing module further acquires an image of the user's hand against the real space environment, serving as the initial positioning of the user's hand in the real space environment;
  • the central processing module obtains the structural model about the user's hand in the virtual coordinate system by extracting feature points of the user's hand image;
  • the feature points are extracted based on a texture distribution or a skin color distribution of the user's hand;
  • the central processing module can also map the structural model into a virtual space about a stereoscopic image displayed by the display module to implement human-computer interaction control of the stereoscopic image.
  • through the above technical solution, the present invention provides a head-mounted display device and a gesture recognition method thereof.
  • without requiring the user to wear a gesture-sensing glove, the device and method locate the user's hand and recognize its gesture motions by means of binocular vision detection, and the head mounted display device can respond synchronously to the recognized gesture motions, thereby improving the display realism and human-computer interaction performance of the head mounted display device.
  • FIG. 1 is a schematic structural diagram of a head-mounted display device according to an embodiment of the present invention.
  • FIG. 2 is a schematic flow chart of a method for recognizing a gesture of a head mounted display device according to an embodiment of the present invention.
  • FIG. 1 is a schematic structural diagram of a head-mounted display device according to an embodiment of the present invention.
  • the head-mounted display device includes a display module, a binocular camera module, and a central processing module;
  • the display module displays a stereoscopic image.
  • specifically, the display module can be a head-mounted stereoscopic display, which enables a user to view a stereoscopic image through parallax technology.
  • the head-mounted stereoscopic display can be, but is not limited to, a liquid crystal shutter stereoscopic display, a polarized stereoscopic display, or a wavelength-division (color-difference) stereoscopic display.
  • the binocular camera module is disposed on the front of the head mounted display device and photographs the real space environment in which the device is located; the central processing module controls the image display state of the display module, receives the images captured by the binocular camera module, and performs image computation.
  • specifically, the central processing module can control the head mounted display device to realize different display modes, such as a video display mode and a see-through display mode.
  • in the video display mode, the central processing module inputs a video stream to the display module, and the display module displays a stereoscopic image of that video stream to the user; in this mode the user can only view the stereoscopic image of the video stream and cannot view the real space environment.
  • in the see-through display mode, the central processing module receives the real space environment image captured by the binocular camera module and blends it into the video stream image, so the user can view the real space environment while viewing the stereoscopic video stream image, which improves the on-site realism of viewing the stereoscopic image.
  • FIG. 2 is a schematic flowchart of a method for recognizing a gesture of a head-mounted display device according to an embodiment of the present invention.
  • the gesture recognition method includes:
  • the central processing module acquires an image of the user's hand and information about the real space environment captured by the binocular camera module.
  • the central processing module sends a control instruction to the binocular camera module, and the binocular camera module, according to that instruction, acquires an image of the user's hand or information about the real space environment in which the user is located.
  • the image of the user's hand may capture different parts of the hand, such as any one of the palm, the back of the hand, or a finger, or a combination thereof; the information about the real space environment may include image information photographed at different angles or full-range video stream information of the environment.
  • in addition, to facilitate the initial positioning of the user's hand in the real space environment, the binocular camera module also needs to acquire an image of the user's hand against the real space environment.
  • the central processing module acquires a structural model of the user's hand according to the image about the user's hand, and establishes a virtual coordinate system about the real space environment according to the information of the real space environment.
  • after receiving the image of the user's hand, the central processing module performs image analysis on it to construct three-dimensional information about the hand; a corresponding image parsing function modularly segments the image of the hand and extracts feature points from the segmented image parts, and the feature points may be formed from information such as the texture distribution or skin color distribution of the user's hand.
  • when the extracted feature points describe the hand's texture distribution, the parsing function characterizes the texture in three-dimensional form by constructing a three-dimensional model of the hand; when the extracted feature points describe the hand's skin color distribution, the parsing function converts the color image of the hand into a grayscale map, so that each feature point is characterized by its corresponding gray value.
  • from the feature point information, the central processing module converts the user's hand into a basic model consisting of points and lines, which serves as the structural model of the user's hand in the real space environment, or in the virtual coordinate system of that environment.
  • after receiving the information about the real space environment, the central processing module establishes a virtual coordinate system for it; the coordinate system may be an O-xyz three-dimensional rectangular coordinate system or an r-θ polar coordinate system, and its origin is determined from the initial positioning of the user's hand in the real space environment obtained in step S01. In addition, to suit the different display modes of the head mounted display device, the coordinate system may also be established based on the virtual space of the video stream displayed by the display module.
  • the central processing module acquires, in real time, the movement amount or pose of the user's hand in the real space environment as detected by the binocular camera module, and maps the movement amount or pose into the structural model.
  • the binocular camera module captures images of the user's hand in real time during operation; based on the principle of binocular parallax, the central processing module forms the motion trajectory of the user's hand in real environment space and calculates the hand's movement amount and pose.
  • at the same time, the central processing module integrates the structural model of the user's hand into the virtual coordinate system and converts the calculated movement amount and pose into the corresponding movement amount and pose of the structural model in the virtual coordinate system, completing the mapping of movement amount and pose between the user's physical hand and the structural model, and thereby recognizing the user's hand gestures.
  • since the positional relationship between the user's hand and the real space environment is equivalent to that between the hand's structural model and the virtual coordinate system, the central processing module can further, based on this equivalence, add a human-computer interaction virtual template to the stereoscopic image displayed by the head-mounted display device.
  • the user can then perform human-computer interaction control with the hand in the real space environment, thereby improving the human-computer interaction performance of the head-mounted display device.
  • the gesture recognition method of the above head-mounted display device maps the motion of the user's hand into the virtual space in which the user views images, by means of binocular vision detection.
  • this mapping uses the principle of binocular vision detection to obtain the motion pose of the user's hand in the real space environment, and realizes the complete mapping transformation of that pose into the virtual coordinate system through the equivalent relationships between the established hand structural model and the virtual coordinate system, and between the user's hand and the real space environment.
  • freed from the constraint of the sensing glove, the above gesture recognition method enables the user to achieve human-computer interaction control of the head mounted display device in any situation.
  • all or part of the processes of the above method embodiments may be implemented by a computer program instructing related hardware; the program may be stored in a computer-readable storage medium, and the storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), or a random access memory (RAM).

Abstract

A head-mounted display device and a gesture motion recognition method thereof. Based on the principle of binocular vision detection, the device and method use a binocular camera module arranged on the head-mounted display device to capture the motion pose of the user's hand in the real space environment, and map that motion pose onto a previously constructed structural model of the user's hand and into a virtual spatial coordinate system, thereby recognizing the user's hand gestures and transforming those gestures into the virtual coordinate system. This enables the user to perform human-computer interaction control with the head-mounted display device without wearing a sensing glove.

Description

A head-mounted display device and gesture motion recognition method thereof
Technical Field
The present invention relates to the field of human-computer interaction technologies, and in particular to a head-mounted display device and a gesture recognition method thereof.
Background Art
At present, head-mounted display devices (HMD) are widely used in virtual reality (Virtual Reality) technology. Such a device displays stereoscopic images by means of parallax, so a user can view stereoscopic images through the head-mounted display device, and those images can further be blended into the real environment in which the user is located, giving the user the feeling of being immersed in a virtual environment. Typically, a head-mounted display device only lets the user visually view stereoscopic virtual images; to realize interaction control between the user and the device, the user often has to wear a sensing glove that can recognize the user's gesture motions and thereby heighten the sense of reality while viewing images, but such gloves are often expensive and inconvenient to carry. As head-mounted display devices become more widespread, users want to achieve human-computer interaction control with the device without needing a sensing glove, so as to improve the user experience of head-mounted display devices.
Summary of the Invention
In view of the above defects of the prior art, the technical problem to be solved by the present invention is that prior-art head-mounted display devices all require the user to wear a sensing glove to recognize gesture motions, which creates great difficulty for improving the human-computer interaction performance of head-mounted display devices. The present invention therefore provides an apparatus and a method that can recognize the user's gesture motions using only the binocular vision detection function of the head-mounted display device, so that the user can perform human-computer interaction control with the head-mounted display device without wearing a sensing glove.
To solve the above technical problem, an embodiment of the present invention provides a head-mounted display device comprising a display module, a binocular camera module, and a central processing module, characterized in that:
the display module is configured to display stereoscopic images;
the binocular camera module is configured to acquire real space environment information and images of the hand;
the central processing module is configured to control the display module and the binocular camera module, and is able to establish a corresponding virtual coordinate system from the real space environment information and to construct a structural model of the hand from the images of the hand, while associating the structural model with the virtual coordinate system;
further, the display module can switch between a video display mode and a see-through display mode, wherein the see-through display mode blends the real space environment image captured by the binocular camera module into the video stream image;
further, the central processing module constructs the structural model of the hand by extracting feature point information from the images of the hand;
further, the central processing module can map the movement amount or pose of the hand in the real space environment onto the structural model.
Correspondingly, an embodiment of the present invention further provides a gesture motion recognition method for a head-mounted display device, the head-mounted display device comprising a display module, a binocular camera module, and a central processing module, the gesture motion recognition method comprising:
S01: the central processing module acquires images of the user's hand and information about the real space environment captured by the binocular camera module;
S02: the central processing module obtains a structural model of the user's hand from the images of the user's hand, and establishes a virtual coordinate system for the real space environment from the information about that environment;
S03: the central processing module acquires, in real time, the movement amount or pose of the user's hand in the real space environment as detected by the binocular camera module, and maps that movement amount or pose onto the structural model;
further, the images of the user's hand captured by the binocular camera module include images of any one of the palm, the back of the hand, or a finger, or a combination thereof;
further, the central processing module also acquires an image of the user's hand against the real space environment, which serves as the initial positioning of the user's hand in that environment;
further, the central processing module obtains the structural model of the user's hand in the virtual coordinate system by extracting feature points from the images of the user's hand;
further, the feature points are extracted based on the texture distribution or the skin color distribution of the user's hand;
further, the central processing module can also map the structural model into the virtual space of the stereoscopic image displayed by the display module, so as to realize human-computer interaction control over that stereoscopic image.
Through the above technical solution, the present invention provides a head-mounted display device and a gesture motion recognition method thereof. Without requiring the user to wear a gesture-sensing glove, the device and method locate the user's hand and recognize its gesture motions by means of binocular vision detection, and the head-mounted display device can respond synchronously to the recognized gesture motions, thereby improving the display realism and the human-computer interaction performance of the head-mounted display device.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and a person of ordinary skill in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic structural diagram of a head-mounted display device according to an embodiment of the present invention;
FIG. 2 is a schematic flowchart of a gesture motion recognition method for a head-mounted display device according to an embodiment of the present invention.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Referring to FIG. 1, a schematic structural diagram of a head-mounted display device according to an embodiment of the present invention: in this embodiment, the head-mounted display device includes a display module, a binocular camera module, and a central processing module. The display module displays stereoscopic images; specifically, it may be a head-mounted stereoscopic display that lets the user view stereoscopic images through parallax technology, such as, but not limited to, a liquid crystal shutter stereoscopic display, a polarized stereoscopic display, or a wavelength-division (color-difference) stereoscopic display. The binocular camera module is arranged on the front of the head-mounted display device and photographs the real space environment in which the device is located. The central processing module controls the image display state of the display module, receives the images captured by the binocular camera module, and performs image computation.
Specifically, the central processing module can control the head-mounted display device to realize different display modes, such as a video display mode and a see-through display mode. In the video display mode, the central processing module feeds a video stream to the display module, which shows the user a stereoscopic image of that video stream; in this mode the user can only see the stereoscopic image of the video stream and cannot see the real space environment in which the user is located. In the see-through display mode, the central processing module receives the real space environment image captured by the binocular camera module and blends it into the video stream image, so that the user can see the real space environment while watching the stereoscopic video stream, which improves the on-site realism of viewing stereoscopic images.
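The description does not fix a particular compositing algorithm for the see-through mode, so the following is only a minimal sketch of the idea, assuming OpenCV-style BGR frames and a simple alpha blend; the function name and the `alpha` weight are illustrative assumptions:

```python
import cv2
import numpy as np

def composite_see_through(video_frame: np.ndarray,
                          env_frame: np.ndarray,
                          alpha: float = 0.6) -> np.ndarray:
    """Blend the real-environment image captured by the binocular camera
    into the video-stream image (hypothetical alpha blend)."""
    # Resize the environment frame to the video stream's resolution.
    h, w = video_frame.shape[:2]
    env = cv2.resize(env_frame, (w, h))
    # Weighted blend: alpha controls how strongly the video stream dominates.
    return cv2.addWeighted(video_frame, alpha, env, 1.0 - alpha, 0.0)
```

In practice a head-mounted device would render one such composite per eye, but the per-eye plumbing is omitted here.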
Referring to FIG. 2, a schematic flowchart of a gesture motion recognition method for a head-mounted display device according to an embodiment of the present invention: in this embodiment, the gesture motion recognition method includes:
S01: the central processing module acquires images of the user's hand and information about the real space environment captured by the binocular camera module.
Specifically, the central processing module issues a control instruction to the binocular camera module, which, according to that instruction, acquires images of the user's hand or information about the real space environment in which the user is located. Acquiring images of the user's hand may mean acquiring images of different parts of the hand, such as any one of the palm, the back of the hand, or a finger, or a combination thereof; acquiring information about the real space environment may include photographing the environment from different angles or obtaining full-range video stream information of the environment. In addition, to facilitate the initial positioning of the user's hand in the real space environment, the binocular camera module also needs to acquire an image of the user's hand against that environment.
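As a rough illustration of this capture step, the sketch below grabs an approximately synchronized frame pair from two cameras standing in for the binocular module. The device indices and the grab/retrieve pattern are assumptions about the hardware setup, not details taken from the patent:

```python
import cv2

# Hypothetical device indices for the left and right cameras of the
# binocular module; real indices depend on the hardware.
left_cam = cv2.VideoCapture(0)
right_cam = cv2.VideoCapture(1)

def grab_stereo_pair():
    """Grab an (approximately) synchronized left/right frame pair."""
    # Call grab() on both devices first to minimize the time skew
    # between the two views, then retrieve() to decode the frames.
    if not (left_cam.grab() and right_cam.grab()):
        return None
    ok_left, left = left_cam.retrieve()
    ok_right, right = right_cam.retrieve()
    return (left, right) if ok_left and ok_right else None
```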
S02: the central processing module obtains a structural model of the user's hand from the images of the user's hand, and establishes a virtual coordinate system for the real space environment from the information about that environment.
Specifically, after receiving the images of the user's hand, the central processing module performs image analysis on them to construct three-dimensional information about the hand. The central processing module uses a corresponding image parsing function to modularly segment the image of the hand, and the parsing function extracts feature points from the segmented image parts; the feature points may be formed from information such as the texture distribution or skin color distribution of the user's hand. For example, when the extracted feature points describe the hand's texture distribution, the parsing function characterizes that distribution in three-dimensional form by constructing a three-dimensional model of the hand; when the extracted feature points describe the hand's skin color distribution, the parsing function converts the color image of the hand into a grayscale distribution map, so that each feature point is characterized by its corresponding gray value. From the feature point information, the central processing module converts the user's hand into a basic model consisting of points and lines, which serves as the structural model of the user's hand in the real space environment, or in the virtual coordinate system of that environment.
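A minimal sketch of the skin-color branch of this step, assuming OpenCV, a commonly used YCrCb skin-color range, and a generic corner detector; the threshold values, the choice of detector, and the function name are all illustrative assumptions, since the patent does not fix any particular segmentation or feature-extraction algorithm:

```python
import cv2
import numpy as np

def extract_hand_features(bgr_image: np.ndarray, max_points: int = 32):
    """Segment the hand by a skin-color threshold, convert to grayscale,
    and return feature points characterized by their gray values."""
    # Segment skin-colored pixels in YCrCb space (illustrative range).
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    # Grayscale map restricted to the hand region.
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    hand_gray = cv2.bitwise_and(gray, gray, mask=mask)
    # Extract feature points from the hand region.
    corners = cv2.goodFeaturesToTrack(hand_gray, maxCorners=max_points,
                                      qualityLevel=0.01, minDistance=8)
    if corners is None:
        return []
    points = corners.reshape(-1, 2).astype(int)
    # Each feature point is (x, y, gray value at that point).
    return [(int(x), int(y), int(hand_gray[y, x])) for x, y in points]
```

The returned (x, y, gray) triples could then be linked by line segments, for example fingertips to palm center, to form the points-and-lines base model described above.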
After receiving the information about the real space environment, the central processing module establishes a virtual coordinate system for it. The coordinate system may be an O-xyz three-dimensional rectangular coordinate system or an r-θ polar coordinate system, and its origin is determined from the initial positioning of the user's hand in the real space environment obtained in step S01. In addition, to suit the different display modes of the head-mounted display device, the coordinate system may also be established on the basis of the virtual space of the video stream displayed by the display module.
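To make the coordinate-system choice concrete, here is a small sketch that anchors the virtual frame at the hand's initial position, with the additional assumption, left open by the patent, that the virtual axes are aligned with the world axes; an r-θ conversion is included for the polar variant:

```python
import numpy as np

def to_virtual_coords(point_world: np.ndarray,
                      hand_origin_world: np.ndarray) -> np.ndarray:
    """Express a world-space point in a virtual coordinate system whose
    origin is the hand's initial position (axes assumed world-aligned)."""
    return point_world - hand_origin_world

def to_polar(p: np.ndarray):
    """r-theta representation of a point in the x-y plane, for the
    polar-coordinate variant mentioned in the text."""
    r = float(np.hypot(p[0], p[1]))
    theta = float(np.arctan2(p[1], p[0]))
    return r, theta
```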
S03: the central processing module acquires, in real time, the movement amount or pose of the user's hand in the real space environment as detected by the binocular camera module, and maps that movement amount or pose onto the structural model.
Specifically, during operation the binocular camera module captures images of the user's hand in real time; based on the principle of binocular parallax, the central processing module forms the motion trajectory of the user's hand in real environment space and calculates the hand's movement amount and pose. At the same time, the central processing module integrates the structural model of the user's hand into the virtual coordinate system and converts the calculated movement amount and pose into the corresponding movement amount and pose of the structural model in the virtual coordinate system, thus completing the mapping of movement amount and pose between the user's physical hand and the structural model and thereby recognizing the user's hand gestures.
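The binocular-parallax geometry behind this step can be sketched as follows for rectified cameras, using the classic relation Z = f * B / d with focal length f in pixels, baseline B in meters, and disparity d = u_left - u_right; the names, and the assumption that the cameras are calibrated and rectified, are illustrative rather than taken from the patent:

```python
import numpy as np

def triangulate(u_left: float, u_right: float, v: float,
                focal_px: float, baseline_m: float,
                cx: float, cy: float) -> np.ndarray:
    """Depth from binocular parallax: Z = f * B / d (rectified pair),
    returning the 3D point in the left camera's frame."""
    d = u_left - u_right
    if d <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    z = focal_px * baseline_m / d
    x = (u_left - cx) * z / focal_px
    y = (v - cy) * z / focal_px
    return np.array([x, y, z])

def movement(prev_pos: np.ndarray, cur_pos: np.ndarray) -> np.ndarray:
    """Movement amount of a tracked hand point between two frames; this
    displacement is what gets applied to the structural model."""
    return cur_pos - prev_pos
```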
Furthermore, since the positional relationship between the user's hand and the real space environment is equivalent to the positional relationship between the hand's structural model and the virtual coordinate system, motion of the user's hand in real space maps completely onto the pose changes of the structural model in the virtual coordinate system. Based on this equivalence, the central processing module can therefore add a human-computer interaction virtual template to the stereoscopic image displayed by the head-mounted display device, allowing the user to perform human-computer interaction control simply with the hand in the real space environment, which improves the human-computer interaction performance of the head-mounted display device.
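As one hypothetical way the virtual interaction template could consume the mapped hand pose, the sketch below hit-tests a fingertip of the structural model, already expressed in the virtual coordinate system, against a spherical virtual button; the button geometry and the 3 cm threshold are invented for the example:

```python
import numpy as np

def fingertip_hits_button(fingertip_virtual: np.ndarray,
                          button_center: np.ndarray,
                          button_radius: float = 0.03) -> bool:
    """True when the mapped fingertip comes within the button's radius,
    which would trigger the corresponding interaction action."""
    dist = float(np.linalg.norm(fingertip_virtual - button_center))
    return dist <= button_radius

# Example: a button 30 cm in front of the origin of the virtual frame.
if fingertip_hits_button(np.array([0.01, 0.0, 0.29]),
                         np.array([0.0, 0.0, 0.30])):
    print("virtual button pressed")
```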
It can be seen that the gesture motion recognition method of the above head-mounted display device maps the motion of the user's hand into the virtual space in which the user views images, by means of binocular vision detection. In essence, this mapping uses the principle of binocular vision detection to obtain the motion pose of the user's hand in the real space environment, and then realizes the complete mapping transformation of that pose into the virtual coordinate system by exploiting the mutually equivalent relationships between the established hand structural model and the virtual coordinate system, and between the user's hand and the real space environment. Freed from the constraint of the sensing glove, this gesture motion recognition method enables the user to perform human-computer interaction control with the head-mounted display device in any situation.
A person of ordinary skill in the art will understand that all or part of the processes of the above method embodiments may be implemented by a computer program instructing related hardware; the program may be stored in a computer-readable storage medium and, when executed, may include the processes of the above method embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), or a random access memory (RAM).
What is disclosed above is merely a preferred embodiment of the present invention and certainly cannot be taken to limit the scope of its claims. A person of ordinary skill in the art will understand all or part of the processes for implementing the above embodiment, and equivalent changes made according to the claims of the present invention still fall within the scope covered by the invention.

Claims (10)

  1. A head-mounted display device, the head-mounted display device comprising a display module, a binocular camera module, and a central processing module, characterized in that:
    the display module is configured to display stereoscopic images;
    the binocular camera module is configured to acquire real space environment information and images of the hand;
    the central processing module is configured to control the display module and the binocular camera module, and is able to establish a corresponding virtual coordinate system from the real space environment information and to construct a structural model of the hand from the images of the hand, while associating the structural model with the virtual coordinate system.
  2. The head-mounted display device according to claim 1, characterized in that the display module can switch between a video display mode and a see-through display mode, wherein the see-through display mode blends the real space environment image captured by the binocular camera module into the video stream image.
  3. The head-mounted display device according to claim 1, characterized in that the central processing module constructs the structural model of the hand by extracting feature point information from the images of the hand.
  4. The head-mounted display device according to claim 1, characterized in that the central processing module can map the movement amount or pose of the hand in the real space environment onto the structural model.
  5. A gesture motion recognition method for a head-mounted display device, the head-mounted display device comprising a display module, a binocular camera module, and a central processing module, the gesture motion recognition method comprising:
    S01: the central processing module acquires images of the user's hand and information about the real space environment captured by the binocular camera module;
    S02: the central processing module obtains a structural model of the user's hand from the images of the user's hand, and establishes a virtual coordinate system for the real space environment from the information about that environment;
    S03: the central processing module acquires, in real time, the movement amount or pose of the user's hand in the real space environment as detected by the binocular camera module, and maps that movement amount or pose onto the structural model.
  6. The gesture motion recognition method according to claim 5, wherein in step S01 the images of the user's hand captured by the binocular camera module include images of any one of the palm, the back of the hand, or a finger, or a combination thereof.
  7. The gesture motion recognition method according to claim 5, wherein in step S01 the central processing module also acquires an image of the user's hand against the real space environment, which serves as the initial positioning of the user's hand in that environment.
  8. The gesture motion recognition method according to claim 5, wherein in step S02 the central processing module obtains the structural model of the user's hand in the virtual coordinate system by extracting feature points from the images of the user's hand.
  9. The gesture motion recognition method according to claim 8, wherein the feature points are extracted based on the texture distribution or the skin color distribution of the user's hand.
  10. The gesture motion recognition method according to claim 5, wherein the central processing module can also map the structural model into the virtual space of the stereoscopic image displayed by the display module, so as to realize human-computer interaction control over that stereoscopic image.
PCT/CN2017/077291 2017-03-20 2017-03-20 Head-mounted display device and gesture motion recognition method thereof WO2018170678A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/077291 WO2018170678A1 (zh) 2017-03-20 2017-03-20 Head-mounted display device and gesture motion recognition method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/077291 WO2018170678A1 (zh) 2017-03-20 2017-03-20 Head-mounted display device and gesture motion recognition method thereof

Publications (1)

Publication Number Publication Date
WO2018170678A1 true WO2018170678A1 (zh) 2018-09-27

Family

ID=63583973

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/077291 WO2018170678A1 (zh) 2017-03-20 2017-03-20 Head-mounted display device and gesture motion recognition method thereof

Country Status (1)

Country Link
WO (1) WO2018170678A1 (zh)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120320092A1 (en) * 2011-06-14 2012-12-20 Electronics And Telecommunications Research Institute Method and apparatus for exhibiting mixed reality based on print medium
US20160239080A1 (en) * 2015-02-13 2016-08-18 Leap Motion, Inc. Systems and methods of creating a realistic grab experience in virtual reality/augmented reality environments
CN106291930A * 2015-06-24 2017-01-04 联发科技股份有限公司 Head-mounted display
CN105096382A * 2015-07-09 2015-11-25 浙江宇视科技有限公司 Method and apparatus for associating real object information in video surveillance images
CN105528082A * 2016-01-08 2016-04-27 北京暴风魔镜科技有限公司 Three-dimensional space and gesture recognition and tracking interaction method, apparatus and system
CN106293099A * 2016-08-19 2017-01-04 北京暴风魔镜科技有限公司 Gesture recognition method and system
CN106504073A * 2016-11-09 2017-03-15 大连文森特软件科技有限公司 House-for-sale inspection and decoration-plan bidding system based on AR virtual reality technology

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110688965A * 2019-09-30 2020-01-14 北京航空航天大学青岛研究院 IPT simulation training gesture recognition method based on binocular vision
CN110688965B * 2019-09-30 2023-07-21 北京航空航天大学青岛研究院 IPT simulation training gesture recognition method based on binocular vision
CN113784105A * 2021-09-10 2021-12-10 上海曼恒数字技术股份有限公司 Information processing method and system for an immersive VR terminal

Similar Documents

Publication Publication Date Title
US20210131790A1 (en) Information processing apparatus, information processing method, and recording medium
JP6057396B2 2017-01-11 Three-dimensional user interface device and three-dimensional operation processing method
US9651782B2 (en) Wearable tracking device
CN106705837B 2019-12-06 Gesture-based object measurement method and apparatus
JP5936155B2 2016-06-15 Three-dimensional user interface device and three-dimensional operation method
TW202119199A 2021-05-16 Virtual keyboard
US10587868B2 (en) Virtual reality system using mixed reality and implementation method thereof
US10037614B2 (en) Minimizing variations in camera height to estimate distance to objects
US11755122B2 (en) Hand gesture-based emojis
US20130063560A1 (en) Combined stereo camera and stereo display interaction
KR20170031733A (ko) 디스플레이를 위한 캡처된 이미지의 시각을 조정하는 기술들
JP7026825B2 2022-02-28 Image processing method and apparatus, electronic device, and storage medium
US11288871B2 (en) Web-based remote assistance system with context and content-aware 3D hand gesture visualization
WO2019069536A1 2019-04-11 Information processing apparatus, information processing method, and recording medium
US20200341284A1 (en) Information processing apparatus, information processing method, and recording medium
US11520409B2 (en) Head mounted display device and operating method thereof
WO2018146922A1 2018-08-16 Information processing apparatus, information processing method, and program
WO2018170678A1 2018-09-27 Head-mounted display device and gesture motion recognition method thereof
US20200211275A1 (en) Information processing device, information processing method, and recording medium
US11275434B2 (en) Information processing apparatus, information processing method, and storage medium
CN111179341B 2022-05-17 Registration method for an augmented reality device and a mobile robot
JP2018063567A 2018-04-19 Image processing apparatus, image processing method, and program
JP2015184986A 2015-10-22 Mixed reality sharing apparatus
WO2017147826A1 2017-09-08 Image processing method and apparatus for a smart device
KR20200120467A 2020-10-21 HMD device and operating method thereof

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17902523

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17902523

Country of ref document: EP

Kind code of ref document: A1