WO2021168922A1 - Method and apparatus for realizing human-computer interaction through a head-mounted VR display device - Google Patents

Method and apparatus for realizing human-computer interaction through a head-mounted VR display device

Info

Publication number
WO2021168922A1
WO2021168922A1 (PCT/CN2020/079454)
Authority
WO
WIPO (PCT)
Prior art keywords
head
display device
user
computer interaction
human
Prior art date
Application number
PCT/CN2020/079454
Other languages
English (en)
French (fr)
Inventor
刘奎斌
Original Assignee
上海唯二网络科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海唯二网络科技有限公司
Publication of WO2021168922A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012Head tracking input arrangements

Definitions

  • the present invention relates to the field of virtual reality technology, in particular to a method and device for realizing human-computer interaction through a head-mounted VR display device.
  • virtual reality is a computer simulation system that can create and let users experience a virtual world: it uses a computer to generate a simulated environment and immerses users in that environment. Virtual reality technology takes data from real life and, through electronic signals generated by computer technology, combines them with various output devices to transform them into phenomena that people can perceive. These phenomena can be real objects in reality, or substances invisible to the naked eye, expressed through three-dimensional models. Because these phenomena are not what we can see directly, but a real-world simulation produced by computer technology, this is called virtual reality.
  • a common VR product is the head-mounted VR display device.
  • VR scenes or other devices are usually controlled by the posture of the head-mounted VR display device. For example, through corresponding head movements, the display interface of the VR scene can be moved in different directions: when the user's head turns to the left and the user nods, it means that the user wants to move the display interface of the VR scene toward the lower left.
  • the attitude angle of a drone's gimbal is controlled by the attitude angle of the head-mounted VR display device.
  • the angle by which the user shakes the head from left to right corresponds to the heading angle of the gimbal, and the angle of nodding up and down corresponds to the pitch angle of the gimbal.
  • the operation control of the dialog box still needs to be realized through external physical devices.
  • the external physical devices include handles and remote controls.
  • the user manually operates the buttons, joysticks, touchpads, etc. on the handle or remote control to input abstract control instructions to the VR system.
  • the VR system is operated through external physical devices such as a handle or remote control to realize human-computer interaction. Not only is this inconvenient for the user, but the handle and remote-control standards of different VR devices are not uniform, making it impossible or difficult to fully adapt to them all within a single VR application.
  • the embodiments of the present invention provide a method and device for realizing human-computer interaction through a head-mounted VR display device, which are used to solve the above-mentioned technical problems in the prior art.
  • an embodiment of the present invention provides a method for realizing human-computer interaction through a head-mounted VR display device, including: when a human-computer interaction interface is displayed in the VR scene, acquiring the current posture data of the head-mounted VR display device; determining the current motion state of the user's head based on the current posture data; determining a combined action of the user's head based on that motion state; and matching abstract control instructions according to the user's combined action to respond to the human-computer interaction interface displayed in the VR scene.
  • the determining the current movement state of the user's head based on the current posture data specifically includes:
  • the current movement state of the user's head is determined.
  • the type of the motion state includes at least any one of left turn, right turn, left swing, right swing, head down, and head up.
  • the determining the combined action of the user's head based on the current movement state of the user's head specifically includes:
  • the type of the predefined combination action includes at least any one of nodding, shaking head, left-clicking, and right-clicking.
  • the posture data includes at least any one of an orientation, a horizontal angle, and a vertical inclination angle.
  • the acquiring the current posture data of the head-mounted VR display device specifically includes:
  • an embodiment of the present invention provides an apparatus for realizing human-computer interaction through a head-mounted VR display device, including:
  • the acquisition module is used to acquire the current posture data of the head-mounted VR display device when the human-computer interaction interface is displayed in the VR scene;
  • the motion state recognition module is used to determine the current motion state of the user's head based on the current posture data
  • the combined action recognition module is used to determine the combined action of the user's head based on the current movement state of the user's head;
  • the interaction module is used to match abstract control instructions according to the combined actions of the user to respond to the human-computer interaction interface displayed in the VR scene.
  • an embodiment of the present invention provides an electronic device, including: a memory, a processor, and a computer program stored in the memory and runnable on the processor; when the processor executes the computer program, the steps of the method provided in the first aspect are implemented.
  • an embodiment of the present invention provides a non-transitory computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps of the method provided in the first aspect are implemented.
  • the method and device for realizing human-computer interaction through a head-mounted VR display device provided by the embodiments of the present invention realize human-computer interaction purely in software, without hardware modification; VR applications no longer need to adapt to the physical control devices of different VR devices, which reduces the difficulty of VR application adaptation and broadens the range of VR devices a VR application can be adapted to.
  • FIG. 1 is a schematic diagram of a method for realizing human-computer interaction through a head-mounted VR display device according to an embodiment of the present invention
  • Figure 2 is a logic flow chart of motion state recognition provided by an embodiment of the present invention.
  • Figure 3 is a logic flow chart of combined action recognition provided by an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of an apparatus for realizing human-computer interaction through a head-mounted VR display device according to an embodiment of the present invention
  • FIG. 5 is a schematic structural diagram of an electronic device provided by an embodiment of the present invention.
  • FIG. 1 is a schematic diagram of a method for implementing human-computer interaction through a head-mounted VR display device according to an embodiment of the present invention.
  • an embodiment of the present invention provides a method for implementing human-computer interaction through a head-mounted VR display device.
  • the executive body is a device that realizes human-computer interaction through a head-mounted VR display device. The method includes:
  • Step S101 Acquire current posture data of the head-mounted VR display device when the human-computer interaction interface is displayed in the VR scene.
  • the current posture data of the head-mounted VR display device is acquired.
  • the posture data of the VR device can be collected, including orientation, horizontal angle, and vertical inclination angle.
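As an illustration, the three posture components could be carried in a small record type. This is only a sketch: the field names `yaw`, `roll`, and `pitch` (standing for orientation, horizontal angle, and vertical inclination) are assumptions for readability, not names exposed by any particular VR device API.

```python
from dataclasses import dataclass

@dataclass
class PostureSample:
    """One posture reading from the headset (field names are illustrative)."""
    yaw: float    # orientation (heading), in degrees
    roll: float   # horizontal angle (side-to-side tilt), in degrees
    pitch: float  # vertical inclination angle, in degrees

# A single sample as it might be polled from the device API.
sample = PostureSample(yaw=12.0, roll=-1.5, pitch=4.0)
```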
  • the posture data is provided by the application program interface API of the VR device.
  • Step S102 Determine the current movement state of the user's head based on the current posture data.
  • the current motion state of the user's head is determined based on the current posture data.
  • the types of motion states include at least any one of left turn, right turn, left swing, right swing, head down, and head up.
  • Step S103 Determine the combined action of the user's head based on the current movement state of the user's head.
  • the combined movement of the user's head is determined based on the current movement state of the user's head.
  • based on the collected motion records, the predefined actions they may combine into are analyzed: the collected movement records are traversed, the current record and the following n records are taken, and the predefined actions are compared against them one by one; if one matches, an action is recorded.
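A minimal sketch of this traversal, assuming motion states arrive as a list of strings and each predefined action is a fixed sequence of states. The state names, action names, and the choice to consume matched records (so one head movement is not counted twice) are illustrative assumptions, not details taken from the patent:

```python
# Predefined combined actions as sequences of motion states (illustrative).
COMBOS = {
    ("down", "up"): "nod",
    ("left_turn", "right_turn", "left_turn"): "shake_head",
    ("right_turn", "left_turn", "right_turn"): "shake_head",
    ("left_swing", "right_swing"): "left_click",
    ("right_swing", "left_swing"): "right_click",
}

def match_actions(records):
    """Traverse the motion-state records; at each position, compare every
    predefined sequence against the current record and the records that
    follow it. On a match, record one action and skip past the matched run."""
    actions, i = [], 0
    while i < len(records):
        for seq, name in COMBOS.items():
            if tuple(records[i:i + len(seq)]) == seq:
                actions.append(name)
                i += len(seq)
                break
        else:
            i += 1  # no predefined action starts here; move on
    return actions
```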
  • the types of predefined combined actions include at least any one of nodding, shaking head, left-clicking, and right-clicking.
  • Step S104 Match the abstract control instruction according to the combined action of the user to respond to the human-computer interaction interface displayed in the VR scene.
  • the abstract control instruction is matched according to the combined action of the user to respond to the human-computer interaction interface displayed in the VR scene.
  • the method for realizing human-computer interaction through a head-mounted VR display device realizes human-computer interaction purely in software, without hardware modification.
  • VR applications no longer need to adapt to the physical control devices of different VR devices, which reduces the difficulty of VR application adaptation and broadens the range of VR devices a VR application can be adapted to.
  • the determining the current movement state of the user's head based on the current posture data specifically includes:
  • the current movement state of the user's head is determined.
  • FIG. 2 is a logic flowchart of motion state recognition provided by an embodiment of the present invention. As shown in FIG. 2, in an embodiment of the present invention, the specific steps of determining the current motion state of the user's head based on the current posture data are as follows:
  • the current movement state of the user's head is determined.
  • the method for realizing human-computer interaction through a head-mounted VR display device realizes human-computer interaction purely in software, without hardware modification.
  • VR applications no longer need to adapt to the physical control devices of different VR devices, which reduces the difficulty of VR application adaptation and broadens the range of VR devices a VR application can be adapted to.
  • the type of the motion state includes at least any one of left turn, right turn, left swing, right swing, head down, and head up.
  • the types of motion states include at least any one of left turn, right turn, left swing, right swing, head down, and head up.
  • the method for realizing human-computer interaction through a head-mounted VR display device realizes human-computer interaction purely in software, without hardware modification.
  • VR applications no longer need to adapt to the physical control devices of different VR devices, which reduces the difficulty of VR application adaptation and broadens the range of VR devices a VR application can be adapted to.
  • the determining the combined action of the user's head based on the current movement state of the user's head specifically includes:
  • FIG. 3 is a logic flowchart of combined action recognition provided by an embodiment of the present invention. As shown in FIG. 3, in the embodiment of the present invention, the specific steps of determining the combined action of the user's head based on the current motion state of the user's head are as follows:
  • the predefined combined action is the combination of the motion state records.
  • the method for realizing human-computer interaction through a head-mounted VR display device realizes human-computer interaction purely in software, without hardware modification.
  • VR applications no longer need to adapt to the physical control devices of different VR devices, which reduces the difficulty of VR application adaptation and broadens the range of VR devices a VR application can be adapted to.
  • the type of the predefined combination action includes at least any one of nodding, shaking head, left-clicking, and right-clicking.
  • the types of predefined combined actions include at least any one of nodding, shaking head, left-clicking, and right-clicking.
  • the method for realizing human-computer interaction through a head-mounted VR display device realizes human-computer interaction purely in software, without hardware modification.
  • VR applications no longer need to adapt to the physical control devices of different VR devices, which reduces the difficulty of VR application adaptation and broadens the range of VR devices a VR application can be adapted to.
  • the posture data includes at least any one of an orientation, a horizontal angle, and a vertical inclination angle.
  • the posture data includes at least any one of an orientation, a horizontal angle, and a vertical inclination angle.
  • the method for realizing human-computer interaction through a head-mounted VR display device realizes human-computer interaction purely in software, without hardware modification.
  • VR applications no longer need to adapt to the physical control devices of different VR devices, which reduces the difficulty of VR application adaptation and broadens the range of VR devices a VR application can be adapted to.
  • the acquiring the current posture data of the head-mounted VR display device specifically includes:
  • the specific method for obtaining the current posture data of the head-mounted VR display device may be to obtain the current posture data through the application program interface API of the head-mounted VR display device.
  • the method for realizing human-computer interaction through a head-mounted VR display device realizes human-computer interaction purely in software, without hardware modification.
  • VR applications no longer need to adapt to the physical control devices of different VR devices, which reduces the difficulty of VR application adaptation and broadens the range of VR devices a VR application can be adapted to.
  • FIG. 4 is a schematic diagram of an apparatus for realizing human-computer interaction through a head-mounted VR display device provided by an embodiment of the present invention.
  • an embodiment of the present invention provides an apparatus for realizing human-computer interaction through a head-mounted VR display device, including an acquisition module 401, a motion state recognition module 402, a combined action recognition module 403, and an interaction module 404, in which:
  • the obtaining module 401 is used to obtain the current posture data of the head-mounted VR display device when the human-computer interaction interface is displayed in the VR scene; the movement state recognition module 402 is used to determine the current movement state of the user's head based on the current posture data;
  • the combined action recognition module 403 is used to determine the combined actions of the user's head based on the current movement state of the user's head;
  • the interaction module 404 is used to match abstract control instructions according to the user's combined actions to respond to the human-computer interaction interface displayed in the VR scene.
  • the embodiment of the present invention provides a device for realizing human-computer interaction through a head-mounted VR display device, which is used to execute the method described in any of the above embodiments; the specific steps by which this device executes the method are the same as in the corresponding embodiments above and will not be repeated here.
  • the device for realizing human-computer interaction through a head-mounted VR display device provided by the embodiment of the present invention realizes human-computer interaction purely in software, without hardware modification; VR applications no longer need to adapt to the physical control devices of different VR devices, which reduces the difficulty of VR application adaptation and broadens the range of VR devices a VR application can be adapted to.
  • FIG. 5 is a schematic structural diagram of an electronic device provided by an embodiment of the present invention.
  • the electronic device includes: a processor 501, a communication interface 502, a memory 503, and a communication bus 504, where the processor 501, the communication interface 502, and the memory 503 communicate with each other through the communication bus 504.
  • the processor 501 can call the logic instructions in the memory 503 to execute the following methods:
  • when the human-computer interaction interface is displayed in the VR scene, the current posture data of the head-mounted VR display device is acquired; the current motion state of the user's head is determined based on the current posture data; the combined action of the user's head is determined based on that motion state; and abstract control instructions are matched to respond to the human-computer interaction interface displayed in the VR scene.
  • the above-mentioned logic instructions in the memory may be implemented in the form of a software functional unit and, when sold or used as an independent product, may be stored in a computer-readable storage medium.
  • the technical solution of the present invention, in essence, or the part that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present invention.
  • the aforementioned storage media include any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
  • an embodiment of the present invention provides a computer program product; the computer program product includes a computer program stored on a non-transitory computer-readable storage medium, and the computer program includes program instructions which, when executed by a computer, enable the computer to execute the steps in the foregoing method embodiments, for example including: matching abstract control instructions to respond to the human-computer interaction interface displayed in the VR scene.
  • an embodiment of the present invention provides a non-transitory computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps in the foregoing method embodiments are implemented, for example including: matching abstract control instructions to respond to the human-computer interaction interface displayed in the VR scene.
  • the device embodiments described above are merely illustrative.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units.
  • Some or all of the modules can be selected according to actual needs to achieve the objectives of the solutions of the embodiments. Those of ordinary skill in the art can understand and implement it without creative work.
  • each implementation manner can be implemented by means of software plus a necessary general hardware platform, and of course, it can also be implemented by hardware.
  • the above technical solution, in essence, or the part that contributes to the prior art, can be embodied in the form of a software product; the computer software product can be stored in a computer-readable storage medium, such as ROM/RAM, a magnetic disk, or an optical disc, and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in each embodiment or in certain parts of an embodiment.

Abstract

Embodiments of the present invention provide a method and apparatus for realizing human-computer interaction through a head-mounted VR display device, including: when a human-computer interaction interface is displayed in a VR scene, acquiring the current posture data of the head-mounted VR display device; determining the current motion state of the user's head based on the current posture data; determining a combined action of the user's head based on the current motion state of the user's head; and matching an abstract control instruction according to the user's combined action, so as to respond to the human-computer interaction interface displayed in the VR scene. The method and apparatus provided by the embodiments of the present invention realize human-computer interaction purely in software, without hardware modification; VR applications no longer need to adapt to the physical control devices of each different VR device, which reduces the difficulty of VR application adaptation and broadens the range of VR devices a VR application can be adapted to.

Description

Method and apparatus for realizing human-computer interaction through a head-mounted VR display device
Technical Field
The present invention relates to the field of virtual reality technology, and in particular to a method and apparatus for realizing human-computer interaction through a head-mounted VR display device.
Background Art
Virtual reality, as the name suggests, combines the virtual with the real. In theory, virtual reality (VR) technology is a computer simulation system that can create and let users experience a virtual world: it uses a computer to generate a simulated environment and immerses the user in it. VR technology takes data from real life and, through electronic signals generated by computer technology, combines them with various output devices to transform them into phenomena that people can perceive. These phenomena can be real objects in reality, or substances invisible to the naked eye, expressed through three-dimensional models. Because these phenomena are not what we can see directly, but a real-world simulation produced by computer technology, this is called virtual reality.
A common VR product is the head-mounted VR display device. In the prior art, VR scenes or other devices are usually controlled by the posture of the head-mounted VR display device. For example, through corresponding head movements, the VR scene can be moved in different directions: when the user's head turns left and the user nods, it means the user wants to move the display interface of the VR scene toward the lower left. As another example, the attitude angle of a drone's gimbal can be controlled by the attitude angle of the head-mounted VR display device: the angle by which the user shakes the head left and right corresponds to the gimbal's heading angle, and the angle of nodding up and down corresponds to the gimbal's pitch angle. Human-computer interaction within the VR scene, however, such as the operation and control of a dialog box, still needs to be realized through external physical devices, including handles and remote controls: the user manually operates the buttons, joysticks, touchpads, etc. on the handle or remote control to input abstract control instructions to the VR system.
However, since a head-mounted VR display device must cover both eyes, operating the VR system through an external physical device such as a handle or remote control to realize human-computer interaction is not only inconvenient for the user; in addition, the handle and remote-control standards of different VR devices are not uniform, making it impossible, or at least difficult, to fully adapt to all of them within a single VR application.
Summary of the Invention
Embodiments of the present invention provide a method and apparatus for realizing human-computer interaction through a head-mounted VR display device, which are used to solve the above technical problems in the prior art.
To solve the above technical problems, in a first aspect, an embodiment of the present invention provides a method for realizing human-computer interaction through a head-mounted VR display device, including:
when a human-computer interaction interface is displayed in the VR scene, acquiring the current posture data of the head-mounted VR display device;
determining the current motion state of the user's head based on the current posture data;
determining a combined action of the user's head based on the current motion state of the user's head;
matching an abstract control instruction according to the user's combined action, so as to respond to the human-computer interaction interface displayed in the VR scene.
Further, determining the current motion state of the user's head based on the current posture data specifically includes:
comparing the current posture data with the previously collected posture data to determine a comparison result;
determining the current motion state of the user's head according to the relationship between the comparison result and a preset threshold.
Further, the types of motion state include at least any one of turning left, turning right, swinging left, swinging right, lowering the head, and raising the head.
Further, determining the combined action of the user's head based on the current motion state of the user's head specifically includes:
traversing the currently collected motion state record and several subsequent motion state records;
comparing, record by record, whether a predefined combined action matches; if it matches, recording one combined action.
Further, the types of predefined combined action include at least any one of nodding, shaking the head, left-side click, and right-side click.
Further, the posture data includes at least any one of an orientation, a horizontal angle, and a vertical inclination angle.
Further, acquiring the current posture data of the head-mounted VR display device specifically includes:
acquiring the current posture data through the application program interface of the head-mounted VR display device.
In a second aspect, an embodiment of the present invention provides an apparatus for realizing human-computer interaction through a head-mounted VR display device, including:
an acquisition module, configured to acquire the current posture data of the head-mounted VR display device when a human-computer interaction interface is displayed in the VR scene;
a motion state recognition module, configured to determine the current motion state of the user's head based on the current posture data;
a combined action recognition module, configured to determine a combined action of the user's head based on the current motion state of the user's head;
an interaction module, configured to match an abstract control instruction according to the user's combined action, so as to respond to the human-computer interaction interface displayed in the VR scene.
In a third aspect, an embodiment of the present invention provides an electronic device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor; when the processor executes the computer program, the steps of the method provided in the first aspect are implemented.
In a fourth aspect, an embodiment of the present invention provides a non-transitory computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the method provided in the first aspect are implemented.
The method and apparatus for realizing human-computer interaction through a head-mounted VR display device provided by the embodiments of the present invention realize human-computer interaction purely in software, without hardware modification; VR applications no longer need to adapt to the physical control devices of each different VR device, which reduces the difficulty of VR application adaptation and broadens the range of VR devices a VR application can be adapted to.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of a method for realizing human-computer interaction through a head-mounted VR display device according to an embodiment of the present invention;
FIG. 2 is a logic flowchart of motion state recognition according to an embodiment of the present invention;
FIG. 3 is a logic flowchart of combined action recognition according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an apparatus for realizing human-computer interaction through a head-mounted VR display device according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description of the Embodiments
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
FIG. 1 is a schematic diagram of a method for realizing human-computer interaction through a head-mounted VR display device according to an embodiment of the present invention. As shown in FIG. 1, an embodiment of the present invention provides such a method, whose executive body is an apparatus for realizing human-computer interaction through a head-mounted VR display device. The method includes:
Step S101: when a human-computer interaction interface is displayed in the VR scene, acquire the current posture data of the head-mounted VR display device.
Specifically, when a human-computer interaction interface is displayed in the VR scene, the current posture data of the head-mounted VR display device is acquired.
For example, the posture data of the VR device can be collected, including the orientation, horizontal angle, and vertical inclination angle, where the posture data is provided by the application program interface (API) of the VR device.
Step S102: determine the current motion state of the user's head based on the current posture data.
Specifically, after the current posture data of the head-mounted VR display device is obtained, the current motion state of the user's head is determined based on it.
For example, the current data is compared with the previously collected data; if the change in some posture datum exceeds a threshold, the change is recorded as one piece of motion state data. The types of motion state include at least any one of turning left, turning right, swinging left, swinging right, lowering the head, and raising the head.
Step S103: determine a combined action of the user's head based on the current motion state of the user's head.
Specifically, after the current motion state of the user's head is determined, a combined action of the user's head is determined based on it.
For example, based on the collected motion state records, the predefined actions they may combine into are analyzed: the collected motion records are traversed, the current record and the following n records are taken, and the predefined actions are compared one by one against them; if one matches, an action is recorded. The types of predefined combined action include at least any one of nodding, shaking the head, left-side click, and right-side click.
Step S104: match an abstract control instruction according to the user's combined action, so as to respond to the human-computer interaction interface displayed in the VR scene.
Specifically, after the combined action of the user's head is recognized, an abstract control instruction is matched according to it, so as to respond to the human-computer interaction interface displayed in the VR scene.
For example, the mapping between combined actions and abstract control instructions is shown in Table 1: the combined action "nod" represents the abstract control instruction "confirm", the combined action "shake head" represents the abstract control instruction "cancel", and so on.
Table 1: Mapping between combined actions and abstract control instructions
Combined action     Abstract control instruction
Nod                 Confirm
Shake head          Cancel
Left-side click     Select the left option
Right-side click    Select the right option
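The mapping in Table 1 amounts to a simple lookup from recognized combined action to abstract control instruction. A minimal sketch, where the identifier names are illustrative choices rather than names from the patent:

```python
# Table 1 as a lookup table: combined action -> abstract control instruction.
COMMANDS = {
    "nod": "confirm",
    "shake_head": "cancel",
    "left_click": "select_left_option",
    "right_click": "select_right_option",
}

def to_instruction(action):
    """Map a recognized combined action to an abstract control
    instruction; unrecognized actions yield no instruction (None)."""
    return COMMANDS.get(action)
```

The interaction interface would then dispatch on the returned instruction, e.g. treating "confirm" as pressing the dialog's OK button.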
The method for realizing human-computer interaction through a head-mounted VR display device provided by the embodiments of the present invention realizes human-computer interaction purely in software, without hardware modification; VR applications no longer need to adapt to the physical control devices of each different VR device, which reduces the difficulty of VR application adaptation and broadens the range of VR devices a VR application can be adapted to.
Based on any of the above embodiments, further, determining the current motion state of the user's head based on the current posture data specifically includes:
comparing the current posture data with the previously collected posture data to determine a comparison result;
determining the current motion state of the user's head according to the relationship between the comparison result and a preset threshold.
Specifically, FIG. 2 is a logic flowchart of motion state recognition according to an embodiment of the present invention. As shown in FIG. 2, in this embodiment, the specific steps of determining the current motion state of the user's head based on the current posture data are as follows:
First, compare the current posture data with the previously collected posture data to determine a comparison result.
Then, determine the current motion state of the user's head according to the relationship between the comparison result and the preset threshold.
For example:
① If the orientation has changed compared with the previous sample, a left turn or a right turn is recorded.
② If the horizontal angle has changed compared with the previous sample, a left swing or a right swing is recorded.
③ If the vertical inclination angle has changed compared with the previous sample, a lowered head or a raised head is recorded.
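The three comparison rules above can be sketched as a per-axis threshold check against the previous sample. This is a hedged illustration: the patent specifies neither the threshold value, the sign conventions, nor the field names, so the 5-degree threshold, the `yaw`/`roll`/`pitch` keys (orientation, horizontal angle, vertical inclination), and the direction assignments here are all assumptions.

```python
THRESHOLD = 5.0  # degrees; illustrative preset threshold

def classify(prev, curr, threshold=THRESHOLD):
    """Compare the current posture sample with the previous one and emit
    one motion-state record per axis whose change exceeds the threshold."""
    states = []
    d_yaw = curr["yaw"] - prev["yaw"]
    if abs(d_yaw) > threshold:        # rule 1: orientation changed
        states.append("left_turn" if d_yaw > 0 else "right_turn")
    d_roll = curr["roll"] - prev["roll"]
    if abs(d_roll) > threshold:       # rule 2: horizontal angle changed
        states.append("left_swing" if d_roll > 0 else "right_swing")
    d_pitch = curr["pitch"] - prev["pitch"]
    if abs(d_pitch) > threshold:      # rule 3: vertical inclination changed
        states.append("down" if d_pitch > 0 else "up")
    return states
```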
The method for realizing human-computer interaction through a head-mounted VR display device provided by the embodiments of the present invention realizes human-computer interaction purely in software, without hardware modification; VR applications no longer need to adapt to the physical control devices of each different VR device, which reduces the difficulty of VR application adaptation and broadens the range of VR devices a VR application can be adapted to.
Based on any of the above embodiments, further, the types of motion state include at least any one of turning left, turning right, swinging left, swinging right, lowering the head, and raising the head.
Specifically, in the embodiment of the present invention, the types of motion state include at least any one of turning left, turning right, swinging left, swinging right, lowering the head, and raising the head.
The method for realizing human-computer interaction through a head-mounted VR display device provided by the embodiments of the present invention realizes human-computer interaction purely in software, without hardware modification; VR applications no longer need to adapt to the physical control devices of each different VR device, which reduces the difficulty of VR application adaptation and broadens the range of VR devices a VR application can be adapted to.
Based on any of the above embodiments, further, determining the combined action of the user's head based on the current motion state of the user's head specifically includes:
traversing the currently collected motion state record and several subsequent motion state records;
comparing, record by record, whether a predefined combined action matches; if it matches, recording one combined action.
Specifically, FIG. 3 is a logic flowchart of combined action recognition according to an embodiment of the present invention. As shown in FIG. 3, in this embodiment, the specific steps of determining the combined action of the user's head based on the current motion state of the user's head are as follows:
First, obtain the currently collected motion state record and several subsequent motion state records, and analyze the predefined actions they may combine into. A predefined combined action is a combination of motion state records. For example:
① Definition: nod = lower head + raise head
② Definition: shake head A = turn left + turn right + turn left
③ Definition: shake head B = turn right + turn left + turn right
④ Definition: left-side click = swing left + swing right
⑤ Definition: right-side click = swing right + swing left
Then, traverse the currently collected motion state record and the several subsequent motion state records, comparing record by record whether a predefined combined action matches; if it matches, record one combined action.
The method for realizing human-computer interaction through a head-mounted VR display device provided by the embodiments of the present invention realizes human-computer interaction purely in software, without hardware modification; VR applications no longer need to adapt to the physical control devices of each different VR device, which reduces the difficulty of VR application adaptation and broadens the range of VR devices a VR application can be adapted to.
Based on any of the above embodiments, further, the types of predefined combined action include at least any one of nodding, shaking the head, left-side click, and right-side click.
Specifically, in the embodiment of the present invention, the types of predefined combined action include at least any one of nodding, shaking the head, left-side click, and right-side click.
The method for realizing human-computer interaction through a head-mounted VR display device provided by the embodiments of the present invention realizes human-computer interaction purely in software, without hardware modification; VR applications no longer need to adapt to the physical control devices of each different VR device, which reduces the difficulty of VR application adaptation and broadens the range of VR devices a VR application can be adapted to.
Based on any of the above embodiments, further, the posture data includes at least any one of an orientation, a horizontal angle, and a vertical inclination angle.
Specifically, in the embodiment of the present invention, the posture data includes at least any one of an orientation, a horizontal angle, and a vertical inclination angle.
The method for realizing human-computer interaction through a head-mounted VR display device provided by the embodiments of the present invention realizes human-computer interaction purely in software, without hardware modification; VR applications no longer need to adapt to the physical control devices of each different VR device, which reduces the difficulty of VR application adaptation and broadens the range of VR devices a VR application can be adapted to.
Based on any of the above embodiments, further, acquiring the current posture data of the head-mounted VR display device specifically includes:
acquiring the current posture data through the application program interface of the head-mounted VR display device.
Specifically, in the embodiment of the present invention, the current posture data can be acquired through the application program interface (API) of the head-mounted VR display device.
The method for realizing human-computer interaction through a head-mounted VR display device provided by the embodiments of the present invention realizes human-computer interaction purely in software, without hardware modification; VR applications no longer need to adapt to the physical control devices of each different VR device, which reduces the difficulty of VR application adaptation and broadens the range of VR devices a VR application can be adapted to.
Based on any of the above embodiments, FIG. 4 is a schematic diagram of an apparatus for realizing human-computer interaction through a head-mounted VR display device according to an embodiment of the present invention. As shown in FIG. 4, an embodiment of the present invention provides such an apparatus, including an acquisition module 401, a motion state recognition module 402, a combined action recognition module 403, and an interaction module 404, wherein:
the acquisition module 401 is configured to acquire the current posture data of the head-mounted VR display device when a human-computer interaction interface is displayed in the VR scene; the motion state recognition module 402 is configured to determine the current motion state of the user's head based on the current posture data; the combined action recognition module 403 is configured to determine a combined action of the user's head based on the current motion state of the user's head; and the interaction module 404 is configured to match an abstract control instruction according to the user's combined action, so as to respond to the human-computer interaction interface displayed in the VR scene.
The embodiment of the present invention provides an apparatus for realizing human-computer interaction through a head-mounted VR display device, which is used to execute the method described in any of the above embodiments; the specific steps by which the apparatus provided in this embodiment executes the method are the same as in the corresponding embodiments above, and are not repeated here.
The apparatus for realizing human-computer interaction through a head-mounted VR display device provided by the embodiment of the present invention realizes human-computer interaction purely in software, without hardware modification; VR applications no longer need to adapt to the physical control devices of each different VR device, which reduces the difficulty of VR application adaptation and broadens the range of VR devices a VR application can be adapted to.
FIG. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. As shown in FIG. 5, the electronic device includes: a processor 501, a communications interface 502, a memory 503, and a communication bus 504, where the processor 501, the communications interface 502, and the memory 503 communicate with each other through the communication bus 504. The processor 501 can call the logic instructions in the memory 503 to execute the following method:
when a human-computer interaction interface is displayed in the VR scene, acquiring the current posture data of the head-mounted VR display device;
determining the current motion state of the user's head based on the current posture data;
determining a combined action of the user's head based on the current motion state of the user's head;
matching an abstract control instruction according to the user's combined action, so as to respond to the human-computer interaction interface displayed in the VR scene.
In addition, the above logic instructions in the memory may be implemented in the form of a software functional unit and, when sold or used as an independent product, may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present invention, in essence, or the part that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present invention. The aforementioned storage media include any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Further, an embodiment of the present invention provides a computer program product; the computer program product includes a computer program stored on a non-transitory computer-readable storage medium, and the computer program includes program instructions which, when executed by a computer, enable the computer to execute the steps in the above method embodiments, for example including:
when a human-computer interaction interface is displayed in the VR scene, acquiring the current posture data of the head-mounted VR display device;
determining the current motion state of the user's head based on the current posture data;
determining a combined action of the user's head based on the current motion state of the user's head;
matching an abstract control instruction according to the user's combined action, so as to respond to the human-computer interaction interface displayed in the VR scene.
Further, an embodiment of the present invention provides a non-transitory computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps in the above method embodiments are implemented, for example including:
when a human-computer interaction interface is displayed in the VR scene, acquiring the current posture data of the head-mounted VR display device;
determining the current motion state of the user's head based on the current posture data;
determining a combined action of the user's head based on the current motion state of the user's head;
matching an abstract control instruction according to the user's combined action, so as to respond to the human-computer interaction interface displayed in the VR scene.
The apparatus embodiments described above are merely illustrative; the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, i.e., they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solution of this embodiment. Those of ordinary skill in the art can understand and implement this without creative effort.
Through the description of the above implementations, those skilled in the art can clearly understand that each implementation can be realized by means of software plus a necessary general hardware platform, and of course also by hardware. Based on such an understanding, the above technical solution, in essence, or the part that contributes to the prior art, can be embodied in the form of a software product; the computer software product can be stored in a computer-readable storage medium, such as ROM/RAM, a magnetic disk, or an optical disc, and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in each embodiment or in certain parts of an embodiment.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

  1. A method for realizing human-computer interaction through a head-mounted VR display device, characterized by including:
    when a human-computer interaction interface is displayed in the VR scene, acquiring the current posture data of the head-mounted VR display device;
    determining the current motion state of the user's head based on the current posture data;
    determining a combined action of the user's head based on the current motion state of the user's head;
    matching an abstract control instruction according to the user's combined action, so as to respond to the human-computer interaction interface displayed in the VR scene.
  2. The method for realizing human-computer interaction through a head-mounted VR display device according to claim 1, characterized in that determining the current motion state of the user's head based on the current posture data specifically includes:
    comparing the current posture data with the previously collected posture data to determine a comparison result;
    determining the current motion state of the user's head according to the relationship between the comparison result and a preset threshold.
  3. The method for realizing human-computer interaction through a head-mounted VR display device according to claim 2, characterized in that the types of motion state include at least any one of turning left, turning right, swinging left, swinging right, lowering the head, and raising the head.
  4. The method for realizing human-computer interaction through a head-mounted VR display device according to claim 1, characterized in that determining the combined action of the user's head based on the current motion state of the user's head specifically includes:
    traversing the currently collected motion state record and several subsequent motion state records;
    comparing, record by record, whether a predefined combined action matches; if it matches, recording one combined action.
  5. The method for realizing human-computer interaction through a head-mounted VR display device according to claim 4, characterized in that the types of predefined combined action include at least any one of nodding, shaking the head, left-side click, and right-side click.
  6. The method for realizing human-computer interaction through a head-mounted VR display device according to any one of claims 1-5, characterized in that the posture data includes at least any one of an orientation, a horizontal angle, and a vertical inclination angle.
  7. The method for realizing human-computer interaction through a head-mounted VR display device according to any one of claims 1-5, characterized in that acquiring the current posture data of the head-mounted VR display device specifically includes:
    acquiring the current posture data through the application program interface of the head-mounted VR display device.
  8. An apparatus for realizing human-computer interaction through a head-mounted VR display device, characterized by including:
    an acquisition module, configured to acquire the current posture data of the head-mounted VR display device when a human-computer interaction interface is displayed in the VR scene;
    a motion state recognition module, configured to determine the current motion state of the user's head based on the current posture data;
    a combined action recognition module, configured to determine a combined action of the user's head based on the current motion state of the user's head;
    an interaction module, configured to match an abstract control instruction according to the user's combined action, so as to respond to the human-computer interaction interface displayed in the VR scene.
  9. An electronic device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, characterized in that when the processor executes the computer program, the steps of the method for realizing human-computer interaction through a head-mounted VR display device according to any one of claims 1 to 7 are implemented.
  10. A non-transitory computer-readable storage medium on which a computer program is stored, characterized in that when the computer program is executed by a processor, the steps of the method for realizing human-computer interaction through a head-mounted VR display device according to any one of claims 1 to 7 are implemented.
PCT/CN2020/079454 2020-02-25 2020-03-16 Method and apparatus for realizing human-computer interaction through a head-mounted VR display device WO2021168922A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010115656.3 2020-02-25
CN202010115656.3A CN111338476A (zh) 2020-02-25 Method and apparatus for realizing human-computer interaction through a head-mounted VR display device

Publications (1)

Publication Number Publication Date
WO2021168922A1 true WO2021168922A1 (zh) 2021-09-02

Family

ID=71181802

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/079454 WO2021168922A1 (zh) 2020-02-25 2020-03-16 通过头戴式vr显示设备实现人机交互的方法及装置

Country Status (2)

Country Link
CN (1) CN111338476A (zh)
WO (1) WO2021168922A1 (zh)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105867613A (zh) * 2016-03-21 2016-08-17 乐视致新电子科技(天津)有限公司 基于虚拟现实系统的头控交互方法及装置
CN106200954A (zh) * 2016-07-06 2016-12-07 捷开通讯(深圳)有限公司 虚拟现实系统和虚拟现实眼镜的控制方法
CN106557170A (zh) * 2016-11-25 2017-04-05 三星电子(中国)研发中心 对虚拟现实设备上的图像进行缩放的方法及装置
CN106873767A (zh) * 2016-12-30 2017-06-20 深圳超多维科技有限公司 一种虚拟现实应用的运行控制方法和装置
CN107290972A (zh) * 2017-07-05 2017-10-24 三星电子(中国)研发中心 设备控制方法和装置
CN107357432A (zh) * 2017-07-18 2017-11-17 歌尔科技有限公司 基于vr的交互方法及装置
CN108572719A (zh) * 2017-03-13 2018-09-25 北京杜朗自动化系统技术有限公司 利用体态识别的智能头盔控制方法及系统
US20200051418A1 (en) * 2017-01-11 2020-02-13 Universal Entertainment Corporation Controlling electronic device alerts by operating head mounted display
CN110806797A (zh) * 2018-07-20 2020-02-18 北京君正集成电路股份有限公司 一种基于头部运动对游戏进行控制的方法和装置

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106200899A * 2016-06-24 2016-12-07 北京奇思信息技术有限公司 Method and system for controlling virtual reality interaction according to user head movements
CN107885318A * 2016-09-29 2018-04-06 西门子公司 Virtual environment interaction method, apparatus, system, and computer-readable medium
CN108268123A * 2016-12-30 2018-07-10 成都虚拟世界科技有限公司 Instruction recognition method and apparatus based on a head-mounted display device
CN107807446B * 2017-11-13 2020-11-10 歌尔光学科技有限公司 Head-mounted display device adjustment method and head-mounted display device


Also Published As

Publication number Publication date
CN111338476A (zh) 2020-06-26

Similar Documents

Publication Publication Date Title
US11587300B2 (en) Method and apparatus for generating three-dimensional virtual image, and storage medium
CN113240778B (zh) 虚拟形象的生成方法、装置、电子设备和存储介质
CN108040285A (zh) 视频直播画面调整方法、计算机设备及存储介质
JP2019535055A (ja) ジェスチャに基づく操作の実施
EP3519926A1 (en) Method and system for gesture-based interactions
CN111586459B (zh) 一种控制视频播放的方法、装置、电子设备及存储介质
CN110795569B (zh) 知识图谱的向量表示生成方法、装置及设备
JP7268071B2 (ja) バーチャルアバターの生成方法及び生成装置
US20180196930A1 (en) System, method and computer program product for stateful instruction-based dynamic man-machine interactions for humanness validation
CN111507111B (zh) 语义表示模型的预训练方法、装置、电子设备及存储介质
EP3605369A1 (en) Method for determining emotional threshold and artificial intelligence device
CN111695698A (zh) 用于模型蒸馏的方法、装置、电子设备及可读存储介质
WO2019144346A1 (zh) 虚拟场景中的对象处理方法、设备及存储介质
CN114222076B (zh) 一种换脸视频生成方法、装置、设备以及存储介质
TW202232284A (zh) 用於在虛擬實境環境中三維人類姿勢之模擬控制
WO2021168922A1 (zh) Method and apparatus for realizing human-computer interaction through a head-mounted VR display device
CN114399424A (zh) 模型训练方法及相关设备
CN112714337A (zh) 视频处理方法、装置、电子设备和存储介质
CN109445573A (zh) 一种用于虚拟化身形象互动的方法与装置
CN112381927A (zh) 图像生成的方法、装置、设备以及存储介质
CN111523467A (zh) 人脸跟踪方法和装置
CN116894880A (zh) 一种文生图模型的训练方法、模型、装置及电子设备
CN115393514A (zh) 三维重建模型的训练方法、三维重建方法、装置、设备
KR20200078816A Facility education and training system using virtual reality based on pre-built 3D models
CN114115533A (zh) 智能交互方法和装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20920915

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20920915

Country of ref document: EP

Kind code of ref document: A1
