WO2017076224A1 - User interaction method and system based on virtual reality - Google Patents

User interaction method and system based on virtual reality Download PDF

Info

Publication number
WO2017076224A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
virtual reality
information
action
virtual
Prior art date
Application number
PCT/CN2016/103733
Other languages
French (fr)
Chinese (zh)
Inventor
郑少华
黎剑辉
张圳
朱一伟
罗海彬
湛浩
Original Assignee
丰唐物联技术(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 丰唐物联技术(深圳)有限公司 filed Critical 丰唐物联技术(深圳)有限公司
Publication of WO2017076224A1 publication Critical patent/WO2017076224A1/en

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Definitions

  • the present invention relates to the field of virtual reality technologies, and in particular, to a user interaction method and system based on virtual reality.
  • Virtual reality technology is a computer simulation system that can create and experience a virtual world. It uses a computer to generate a simulated environment: an interactive three-dimensional dynamic scene fusing multi-source information, together with a simulation of entity behavior, so that the user can genuinely feel and be immersed in that environment. Therefore, when a user plays games or performs activity drills based on virtual reality technology, the image displayed on the screen is required to be more stereoscopic, more vivid, and more realistic.
  • An embodiment of the present invention discloses a virtual reality-based user interaction method, including the following steps: [0006] acquiring action data detected by sensors worn at different positions on a user's body;
  • the virtual reality based user interaction method further includes:
  • the environmental information includes:
  • the sensors worn at different positions on the user's body include:
  • the acquired action currently performed by the user is fed back to a corresponding virtual character in the virtual reality;
  • according to the drawn motion trajectory information, the corresponding virtual character in the virtual reality is controlled to perform the corresponding operation following that motion trajectory information.
  • the present invention also discloses a virtual reality-based user interaction system, the system including a plurality of sensors worn at different positions on the user's body;
  • the virtual reality based user interaction system further includes:
  • an obtaining module configured to acquire motion data detected by sensors worn at different positions on the user's body
  • the parsing module is configured to parse the acquired action data according to the set position of the sensor on the user's body, and obtain an action currently performed by the user;
  • the control module is configured to feed back the currently performed action of the acquired user to a corresponding virtual character in the virtual reality, and control the corresponding virtual character in the virtual reality to perform a corresponding operation.
  • the virtual reality-based user interaction system further includes:
  • the interaction module is configured to obtain environment information of a scenario in which the virtual character is currently located in the virtual reality, and feed the environment information to the user, so that the user can perceive the environment information.
  • the environmental information includes:
  • the sensors worn at different positions on the user's body include:
  • control module comprises:
  • a drawing unit configured to draw a motion trajectory corresponding to the currently performed action of the user according to the obtained action currently performed by the user, and obtain motion trajectory information after the drawing;
  • the control unit is configured to control, according to the drawn motion trajectory information, a corresponding virtual character in the virtual reality to perform a corresponding operation according to the motion trajectory information.
  • The virtual reality-based user interaction method and system of the present invention can achieve the following beneficial effects: action data detected by sensors worn at different positions on the user's body is acquired; the acquired action data is parsed according to the sensors' set positions on the user's body, obtaining the action currently performed by the user; that action is fed back to the corresponding virtual character in the virtual reality, and the virtual character is controlled to perform the corresponding operation. Because the sensors worn by the user detect and capture the user's limb movements, the user can interact with the virtual character in the virtual scene, which improves the realism of the virtual reality.
  • FIG. 1 is a schematic flowchart of an embodiment of a virtual reality-based user interaction method according to the present invention
  • FIG. 2 is a schematic flowchart of an embodiment of step S30 in the embodiment of FIG. 1 of the virtual reality-based user interaction method according to the present invention;
  • FIG. 3 is a schematic flowchart diagram of another embodiment of a virtual reality-based user interaction method according to the present invention.
  • FIG. 4 is a block diagram of an embodiment of a virtual reality-based user interaction system according to the present invention.
  • FIG. 5 is a block diagram of an embodiment of the control module 80 in the embodiment of FIG. 4 in the virtual reality-based user interaction system of the present invention
  • FIG. 6 is a block diagram of another embodiment of a virtual reality based user interaction system of the present invention.
  • the present invention provides a virtual reality-based user interaction method and system for detecting and acquiring a user's physical motion through a sensor worn by a user, thereby enabling a user to interact with a virtual character in a virtual scene. Improve the realism of virtual reality.
  • FIG. 1 is a schematic flowchart of an implementation manner of a virtual reality-based user interaction method according to the present invention. As shown in FIG. 1, the virtual reality-based user interaction method of the present invention may be implemented as the following steps S10-S30:
  • Step S10: Acquire motion data detected by sensors worn at different positions on the user's body.
  • In advance, the user wears corresponding sensors at different positions on the body.
  • The sensors located at different parts of the user's body detect corresponding data, from which the action currently made by the user can be known.
  • the sensors worn at different locations on the user's body include: sensors respectively disposed on the user's hands, feet, and waist.
  • Sensors are respectively provided on the user's hands, feet, and waist, so that motion data of the user's hands, feet, and torso can be detected.
  • Furthermore, corresponding sensors can also be set on the user's head and/or neck, so that the motion data of different parts of the user's body can be detected more accurately.
  • After the sensors detect the above action data, the system directly acquires that action data from the sensors.
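As an illustrative sketch only (the patent does not specify any sensor interface or data format), the acquisition of step S10 might look as follows; the `read` callables and coordinate values are hypothetical stand-ins for real worn sensors:

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    position: str    # where the sensor is worn, e.g. "left_hand"
    xyz: tuple       # the detected coordinates (x, y, z)

def acquire_action_data(sensors):
    """Step S10: collect the latest reading from every worn sensor."""
    return [SensorReading(position=name, xyz=read()) for name, read in sensors.items()]

# Five hypothetical sensors (hands, feet, waist) returning fixed coordinates.
sensors = {
    "left_hand":  lambda: (0.4, 1.5, 0.0),
    "right_hand": lambda: (-0.4, 1.5, 0.0),
    "left_foot":  lambda: (0.2, 0.0, 0.0),
    "right_foot": lambda: (-0.2, 0.0, 0.0),
    "waist":      lambda: (0.0, 1.0, 0.0),
}
readings = acquire_action_data(sensors)
print(len(readings), readings[0].position)  # 5 left_hand
```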
  • Step S20: Parse the acquired action data according to the set positions of the sensors on the user's body, and obtain the action currently performed by the user.
  • After acquiring the motion data detected by the sensors set at different positions on the user, the system parses the motion data corresponding to each sensor according to its set position on the user's body (for example, the sensors on the left and right hands, the left and right feet, the waist, or other parts), so that the action currently performed by the user can be known.
  • Taking as an example a sensor respectively disposed on each of the user's hands, feet, and waist:
  • The sensors separately set on the user's hands, feet, and waist are regarded as five points at different set positions.
  • When the user stands normally, the original positions of the five points are acquired; when the user is upside down, the action data of the five points in the inverted position is acquired. By parsing that action data, the new positions of the five points after the user's motion are obtained, and comparing the new positions with the original positions shows that the action currently performed by the user is a handstand (inverted) action.
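The handstand example above can be sketched as a comparison of the five points' heights. This is one possible interpretation, since the patent gives no concrete algorithm; the sensor names and height values are illustrative assumptions:

```python
# Original (standing) heights of the five sensor points: hands high, feet low.
ORIGINAL_HEIGHTS = {
    "left_hand": 1.5, "right_hand": 1.5,
    "left_foot": 0.0, "right_foot": 0.0,
    "waist": 1.0,
}

def parse_action(new_heights):
    """Step S20: compare the new heights of the five points with the
    original standing pose to decide what the user is doing."""
    hands = (new_heights["left_hand"] + new_heights["right_hand"]) / 2
    feet = (new_heights["left_foot"] + new_heights["right_foot"]) / 2
    # Hands ending up below the feet means the user is upside down.
    return "inverted" if hands < feet else "standing"

# Heights sampled while the user performs a handstand.
handstand = {"left_hand": 0.0, "right_hand": 0.0,
             "left_foot": 1.8, "right_foot": 1.8, "waist": 0.9}
print(parse_action(handstand))          # inverted
print(parse_action(ORIGINAL_HEIGHTS))   # standing
```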
  • Step S30: Feed back the action currently performed by the user to the corresponding virtual character in the virtual reality, and control the corresponding virtual character in the virtual reality to perform the corresponding operation.
  • The action currently performed by the user is fed back to the corresponding virtual character in the virtual reality; for example, the system converts the action performed by the user into control commands that the corresponding virtual character in the virtual reality can recognize, thereby controlling the corresponding virtual character to perform the corresponding operation.
  • The system retrieves the corresponding sensor data and parses out the "right hook action" performed by the user, then feeds the "right hook action" back to the corresponding virtual character A in the virtual reality and controls virtual character A to perform the same "right hook action" as the user.
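The conversion from a parsed action into a control command the virtual character can recognize might be sketched as a simple lookup; the action names and command strings below are hypothetical, as the patent names no concrete command set:

```python
# Hypothetical mapping from parsed user actions to commands the
# virtual character can recognize.
COMMANDS = {
    "right_hook": "CHAR_CMD_RIGHT_HOOK",
    "inverted":   "CHAR_CMD_HANDSTAND",
}

class VirtualCharacter:
    def __init__(self, name):
        self.name = name
        self.last_command = None

    def execute(self, command):
        # The character performs the operation named by the command.
        self.last_command = command

def feed_back(action, character):
    """Step S30: convert the user's action into a recognizable control
    command and have the virtual character execute it."""
    command = COMMANDS[action]
    character.execute(command)
    return command

character_a = VirtualCharacter("A")
print(feed_back("right_hook", character_a))  # CHAR_CMD_RIGHT_HOOK
```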
  • In an embodiment, step S30, in which the acquired action currently performed by the user is fed back, may be implemented as the following steps S301-S302:
  • Step S301: According to the acquired action currently performed by the user, draw the motion track corresponding to that action, and obtain the drawn motion track information.
  • After the system parses the motion data detected by the sensors and obtains the action currently performed by the user, it draws the motion track corresponding to that action, thereby obtaining the drawn motion track information.
  • The motion track information drawn by the system can be recognized by the virtual character in the virtual reality, which directly performs the action corresponding to the motion track information.
  • Step S302: According to the drawn motion track information, control the corresponding virtual character in the virtual reality to perform the corresponding operation following that motion track information.
  • the system directly controls the corresponding virtual character in the virtual reality to perform the corresponding operation according to the motion track information.
  • The specific implementation manner includes, but is not limited to: the system directly sends the motion track information to the corresponding virtual character, and the virtual character identifies the operation corresponding to the motion track information and performs that operation.
  • the system controls the corresponding virtual character to perform an operation consistent with the action corresponding to the motion track information according to the motion track information.
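Steps S301-S302 can be sketched as first turning sampled positions into motion track information and then replaying that track on the character; the sample values and the track representation are illustrative assumptions, not part of the patent:

```python
def draw_trajectory(samples):
    """Step S301: turn successively sampled positions into motion track
    information: the points plus the per-step displacements between them."""
    steps = [(x2 - x1, y2 - y1)
             for (x1, y1), (x2, y2) in zip(samples, samples[1:])]
    return {"points": samples, "steps": steps}

def control_character(trajectory, start):
    """Step S302: replay the track's displacements on the character,
    returning its final position."""
    x, y = start
    for dx, dy in trajectory["steps"]:
        x, y = x + dx, y + dy
    return (x, y)

# A short rightward path sampled from a hand sensor (integer grid units).
samples = [(0, 0), (1, 0), (2, 1)]
trajectory = draw_trajectory(samples)
print(trajectory["steps"])                   # [(1, 0), (1, 1)]
print(control_character(trajectory, (0, 0))) # (2, 1)
```

Representing the track as relative displacements lets the character replay the same motion from its own starting position rather than the user's.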
  • the virtual reality-based user interaction method of the present invention further includes the following steps:
  • Step S40 Obtain environment information of a scenario in which the virtual character is currently located in the virtual reality, and feed the environment information to the user, so that the user can perceive the environment information.
  • The system may also obtain the environment information corresponding to the scene in which the virtual character is currently located in the virtual reality, and feed the environment information back to the user, so that the user can perceive the environment information and thereby obtain the corresponding tactile perception.
  • the environment information corresponding to the scene in which the virtual character is currently located in the virtual reality includes, but is not limited to, temperature information, humidity information, air pressure information, wind direction information, and rainfall or snowfall information.
  • For example, the system obtains the scene in which the virtual character is currently located in the virtual reality: virtual character B is in a hail scene, and the environment information corresponding to that scene is a temperature of minus 10 °C.
  • The system changes the user's current environment according to the above environment information, thereby feeding the environment information back to the user so that the user can perceive it through the environment he or she is currently in; for example, the user also feels the cold.
  • The specific way in which the system changes the user's current environment depends on the specific environment the user is in. For example, if the user is currently in a closed space, the system adjusts the user's current environment by controlling smart home devices in that space, such as an air conditioner. As another example, if the user performs the above actions while wearing functional clothing with adjustable environmental factors (such as temperature and humidity), the system adjusts those environmental factors of the clothing so that the user perceives the above environment information.
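Step S40's environment feedback might be sketched as dispatching each environment factor to an available actuator (a smart-home appliance or functional clothing); the factor names and actuator interface below are hypothetical assumptions, since the patent specifies neither:

```python
# Scene environment info for virtual character B, as in the hail example.
environment = {"scene": "hail", "temperature_c": -10, "humidity_pct": 40}

def feed_environment(env, actuators):
    """Step S40: push each environment factor to whatever device the user
    has available, so the user perceives the virtual scene's conditions."""
    applied = {}
    for factor, value in env.items():
        if factor in actuators:          # skip factors nothing can reproduce
            actuators[factor](value)
            applied[factor] = value
    return applied

settings = {}
# An air conditioner and a humidity-adjustable garment, both hypothetical.
actuators = {
    "temperature_c": lambda v: settings.__setitem__("air_conditioner_c", v),
    "humidity_pct":  lambda v: settings.__setitem__("garment_humidity", v),
}
print(feed_environment(environment, actuators))
# {'temperature_c': -10, 'humidity_pct': 40}
print(settings)
# {'air_conditioner_c': -10, 'garment_humidity': 40}
```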
  • In summary, the virtual reality-based user interaction method of the present invention acquires the action data detected by the sensors worn at different positions on the user's body; parses the acquired action data according to the sensors' set positions on the user's body to obtain the action currently performed by the user; and feeds that action back to the corresponding virtual character in the virtual reality, controlling the corresponding virtual character to perform the corresponding operation.
  • Because the sensors worn by the user detect and capture the user's limb movements, the user can interact with the virtual character in the virtual scene, with the beneficial effect of improving the realism of the virtual reality; further, since the user can also perceive the environment information of the virtual character's scene, the user experience is also improved.
  • Corresponding to the method embodiments described in FIG. 1, FIG. 2, and FIG. 3, the present invention also provides a virtual reality-based user interaction system, as shown in FIG. 4.
  • The virtual reality-based user interaction system includes a plurality of sensors 100 that are worn at different positions on the user's body, and further includes: an acquisition module 60, a parsing module 70, and a control module 80; among them:
  • the obtaining module 60 is configured to acquire motion data detected by the sensor 100 worn at different positions on the user's body;
  • In advance, the user wears corresponding sensors 100 at different positions on the body.
  • The sensors 100 located at different parts of the user's body detect corresponding data, from which the action currently made by the user can be known.
  • The sensors 100 worn at different positions on the user's body include sensors respectively disposed on the user's hands, feet, and waist.
  • Sensors are respectively provided on the user's hands, feet, and waist, so that motion data of the user's hands, feet, and torso can be detected.
  • Furthermore, corresponding sensors 100 can also be set on the user's head and/or neck, so that the motion data of different parts of the user's body is detected more accurately.
  • the acquisition module 60 directly acquires the motion data detected by the sensor 100.
  • the parsing module 70 is configured to parse the obtained action data according to the set position of the sensor 100 on the user's body, and obtain an action currently performed by the user;
  • The parsing module 70 parses the obtained motion data corresponding to the sensors 100 at their different set positions on the user's body (for example, the sensors 100 on the left and right hands, the left and right feet, the waist, or other parts), so that the action currently performed by the user can be known.
  • Taking as an example a sensor respectively disposed on each of the user's hands, feet, and waist:
  • The sensors respectively set on the user's hands, feet, and waist are regarded as five points at different set positions. When the user stands normally, the obtaining module 60 acquires the original positions of the five points;
  • when the user is upside down, the obtaining module 60 acquires the action data of the five points in the inverted position, and the parsing module 70 obtains the new positions of the five points after the user's motion by parsing that action data.
  • The new positions are compared with the original positions, and the action currently performed by the user is obtained as a handstand (inverted) action.
  • the control module 80 is configured to feed back the currently performed action of the acquired user to a corresponding virtual character in the virtual reality, and control the corresponding virtual character in the virtual reality to perform a corresponding operation.
  • The control module 80 feeds back the action currently performed by the user to the corresponding virtual character in the virtual reality; for example, the control module 80 converts the action performed by the user into a control command that the corresponding virtual character in the virtual reality can recognize, thereby controlling the corresponding virtual character to perform the corresponding operation.
  • For example, the acquisition module 60 obtains the corresponding sensor data, the parsing module 70 parses out the "right hook action" performed by the user, and the control module 80 thereby feeds the "right hook action" back to the corresponding virtual character A in the virtual reality, controlling virtual character A to perform the same "right hook action" as the user.
  • The control module 80 includes a drawing unit 801 and a control unit 802; among them:
  • the drawing unit 801 is configured to: according to the obtained action currently performed by the user, draw a motion track corresponding to the action currently performed by the user, and obtain motion track information after the drawing;
  • The drawing unit 801 draws the motion track corresponding to the action currently performed by the user according to the acquired action, thereby obtaining the drawn motion track information.
  • The motion track information drawn by the drawing unit 801 can be recognized by the virtual character in the virtual reality, which directly performs the action corresponding to the motion track information.
  • the control unit 802 is configured to control, according to the drawn motion trajectory information, a corresponding virtual character in the virtual reality to perform a corresponding operation according to the motion trajectory information.
  • the control unit 802 directly controls the corresponding virtual character in the virtual reality to perform the corresponding operation according to the motion trajectory information.
  • the specific implementation manner includes, but is not limited to, the control unit 802 directly transmitting the motion track information to the corresponding virtual character, and the virtual character identifies the operation corresponding to the motion track information, and performs the operation corresponding to the motion track information.
  • the control unit 802 controls the corresponding virtual character to perform an operation consistent with the action corresponding to the motion track information according to the motion track information.
  • the virtual reality-based user interaction system of the present invention further includes:
  • the interaction module 90 is configured to obtain environment information of a scenario in which the virtual character is currently located in the virtual reality, and feed the environment information to the user, so that the user can perceive the environment information.
  • The interaction module 90 can also obtain the environment information corresponding to the scene in which the virtual character is currently located in the virtual reality, and feed the environment information back to the user, so that the user can perceive the environment information and thereby obtain the corresponding tactile perception.
  • the environment information corresponding to the scenario in which the virtual character is currently located in the virtual reality includes, but is not limited to, temperature information, humidity information, air pressure information, wind direction information, and rainfall or snowfall information.
  • For example, the interaction module 90 obtains the scene in which the virtual character is currently located in the virtual reality: virtual character B is in a hail scene, and the environment information corresponding to that scene is a temperature of minus 10 °C.
  • The interaction module 90 changes the user's current environment according to the environment information, thereby feeding the environment information back to the user so that the user can perceive it through the environment he or she is currently in; for example, the user also feels the cold.
  • The specific way in which the interaction module 90 changes the user's current environment depends on the specific environment the user is in. For example, if the user is currently in a closed space, the interaction module 90 adjusts the user's current environment by controlling smart home devices in that space, such as an air conditioner. As another example, if the user performs the above actions while wearing functional clothing with adjustable environmental factors (such as temperature and humidity), the interaction module 90 adjusts those environmental factors of the clothing so that the user perceives the environment information.
  • In summary, the virtual reality-based user interaction system acquires the action data detected by the sensors worn at different positions on the user's body; parses the acquired action data according to the sensors' set positions on the user's body to obtain the action currently performed by the user; and feeds that action back to the corresponding virtual character in the virtual reality, controlling the corresponding virtual character to perform the corresponding operation. Because the sensors worn by the user detect and capture the user's limb movements, the user can interact with the virtual character in the virtual scene, thereby improving the realism of the virtual reality; further, since the user can also perceive the environment information of the virtual character's scene, the user experience is also improved.
  • The methods of the foregoing embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation.
  • Based on such an understanding, the technical solution of the present invention, in essence or the part that contributes to the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc), the software product including a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the various embodiments of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A user interaction method and system based on virtual reality: acquiring movement data detected by sensors worn on different positions on the body of a user (S10); on the basis of the positions of the sensors on the body of the user, parsing the acquired movement data to acquire the movement currently executed by the user (S20); feeding back the movement currently executed by the user to a corresponding virtual role in virtual reality, and controlling the corresponding virtual role in virtual reality to execute a corresponding operation (S30); the present invention has the beneficial effect of user limb movements being detected and acquired by means of sensors worn on the body of a user, enabling the user to interact with a virtual role in a virtual scene, thus improving the sense of reality of the virtual reality; furthermore, the user can perceive information about the environment in which the virtual role is located, thus improving the user experience.

Description

Description
Title of Invention: User interaction method and system based on virtual reality
Technical Field
[0001] The present invention relates to the field of virtual reality technologies, and in particular, to a user interaction method and system based on virtual reality.
Background Art
[0002] With the rapid development of virtual reality technology, more and more products using virtual reality technology are continually developed or updated. Virtual reality technology has the following basic characteristics: it is a computer simulation system that can create and experience a virtual world; it uses a computer to generate a simulated environment, an interactive three-dimensional dynamic scene fusing multi-source information together with a simulation of entity behavior, so that the user can genuinely feel and be immersed in that environment. Therefore, when a user plays games or performs activity drills based on virtual reality technology, the image displayed on the screen is required to be more stereoscopic, more vivid, and more realistic.
[0003] At present, when virtual reality technology is used for activities such as games or activity drills, there is not yet a technical solution that detects and acquires the user's limb movements through sensors worn on the user's body in order to interact with a virtual character in the virtual scene, which affects the user's sense of realism when using virtual scenes for games, activity drills, and similar activities.
Technical Problem
[0004] In view of this, it is necessary to provide a virtual reality-based user interaction method and system for detecting and acquiring the user's limb movements through sensors worn on the user's body, thereby enabling the user to interact with a virtual character in the virtual scene and improving the realism of the virtual reality.
Solution to the Problem
Technical Solution
[0005] 本发明实施例公幵了一种基于虚拟现实的用户交互方法, 包括以下步骤: [0006] 获取佩戴在用户身体上不同位置的传感器所检测到的动作数据;  [0005] An embodiment of the present invention discloses a virtual reality-based user interaction method, including the following steps: [0006] acquiring action data detected by sensors worn at different positions on a user's body;
[0007] 根据所述传感器在用户身体上的设置位置, 解析获取的所述动作数据, 获取用 户当前执行的动作; [0008] 将获取的用户当前执行的动作反馈至虚拟现实中对应的虚拟角色, 并控制虚拟 现实中对应的所述虚拟角色执行相应的操作。 [0007] according to the setting position of the sensor on the user's body, parsing the acquired action data, and acquiring an action currently performed by the user; [0008] feedback the currently performed action of the obtained user to the corresponding virtual character in the virtual reality, and control the corresponding virtual character in the virtual reality to perform a corresponding operation.
[0009] 优选地, 所述基于虚拟现实的用户交互方法还包括: [0009] Preferably, the virtual reality based user interaction method further includes:
[0010] 获取虚拟现实中所述虚拟角色当前所在场景的环境信息, 并将所述环境信息反 馈至用户, 使得用户能够感知到所述环境信息。  [0010] Obtaining environment information of a scenario in which the virtual character is currently located in the virtual reality, and feeding the environment information to the user, so that the user can perceive the environment information.
[0011] 优选地, 所述环境信息包括: [0011] Preferably, the environmental information includes:
[0012] 温度信息、 湿度信息、 气压信息、 风力风向信息以及降雨或者降雪量信息。  [0012] Temperature information, humidity information, air pressure information, wind direction information, and rainfall or snowfall information.
[0013] 优选地, 所述佩戴在用户身体上不同位置的传感器包括: [0013] Preferably, the sensors worn at different positions on the user's body include:
[0014] 在用户的双手、 双脚和腰部所分别设置的传感器。 [0014] Sensors respectively disposed on the user's hands, feet, and waist.
[0015] 优选地, 所述将获取的用户当前执行的动作反馈至虚拟现实中对应的虚拟角色 [0015] Preferably, the action that is currently performed by the acquired user is fed back to a corresponding virtual character in the virtual reality
, 并控制虚拟现实中对应的所述虚拟角色执行相应的操作, 包括: And controlling the corresponding virtual character in the virtual reality to perform corresponding operations, including:
[0016] drawing, according to the obtained action currently performed by the user, the motion trajectory corresponding to that action, to obtain the drawn motion trajectory information;
[0017] controlling, according to the drawn motion trajectory information, the corresponding virtual character in the virtual reality to perform the corresponding operation in accordance with the motion trajectory information.
[0018] Corresponding to the virtual reality-based user interaction method disclosed above, the present invention further discloses a virtual reality-based user interaction system, the system comprising a plurality of sensors worn at different positions on a user's body;
[0019] the virtual reality-based user interaction system further comprises:
[0020] an acquisition module, configured to acquire motion data detected by the sensors worn at different positions on the user's body;
[0021] a parsing module, configured to parse the acquired motion data according to the positions at which the sensors are arranged on the user's body, to obtain the action currently performed by the user;
[0022] a control module, configured to feed the obtained action currently performed by the user back to the corresponding virtual character in the virtual reality, and to control the corresponding virtual character in the virtual reality to perform the corresponding operation.
[0023] Preferably, the virtual reality-based user interaction system further comprises:
[0024] an interaction module, configured to acquire environment information of the scene in which the virtual character is currently located in the virtual reality, and to feed the environment information back to the user, so that the user can perceive the environment information. [0025] Preferably, the environment information includes:
[0026] temperature information, humidity information, air pressure information, wind force and direction information, and rainfall or snowfall information.
[0027] Preferably, the sensors worn at different positions on the user's body include:
[0028] sensors respectively arranged on the user's hands, feet, and waist.
[0029] Preferably, the control module includes:
[0030] a drawing unit, configured to draw, according to the obtained action currently performed by the user, the motion trajectory corresponding to that action, to obtain the drawn motion trajectory information;
[0031] a control unit, configured to control, according to the drawn motion trajectory information, the corresponding virtual character in the virtual reality to perform the corresponding operation in accordance with the motion trajectory information.
Advantageous Effects of the Invention
[0032] The virtual reality-based user interaction method and system of the present invention can achieve the following advantageous effects: [0033] motion data detected by sensors worn at different positions on the user's body is acquired; the acquired motion data is parsed according to the positions at which the sensors are arranged on the user's body, to obtain the action currently performed by the user; the obtained action is fed back to the corresponding virtual character in the virtual reality, and the corresponding virtual character is controlled to perform the corresponding operation. This achieves the advantageous effect that the user's limb movements are detected and acquired through the sensors worn on the user's body, enabling the user to interact with the virtual character in the virtual scene and improving the realism of the virtual reality.
Brief Description of the Drawings
[0034] FIG. 1 is a schematic flowchart of one embodiment of the virtual reality-based user interaction method of the present invention; [0035] FIG. 2 is a schematic flowchart of one embodiment of step S30 in the embodiment of FIG. 1 of the virtual reality-based user interaction method of the present invention;
[0036] FIG. 3 is a schematic flowchart of another embodiment of the virtual reality-based user interaction method of the present invention; [0037] FIG. 4 is a block diagram of one embodiment of the virtual reality-based user interaction system of the present invention;
[0038] FIG. 5 is a block diagram of one embodiment of the control module 80 in the embodiment of FIG. 4 of the virtual reality-based user interaction system of the present invention;
[0039] FIG. 6 is a block diagram of another embodiment of the virtual reality-based user interaction system of the present invention. [0040] The implementation, functional features, and advantages of the objects of the embodiments of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.

Embodiments of the Invention
[0041] The technical solutions of the present invention are further described below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are merely intended to explain the present invention and are not intended to limit the present invention.
[0042] The present invention provides a virtual reality-based user interaction method and system for detecting and acquiring a user's limb movements through sensors worn on the user's body, thereby enabling the user to interact with a virtual character in a virtual scene and improving the realism of the virtual reality.
[0043] Referring to FIG. 1, FIG. 1 is a schematic flowchart of one embodiment of the virtual reality-based user interaction method of the present invention. As shown in FIG. 1, the virtual reality-based user interaction method of the present invention may be implemented as steps S10 to S30 described below:
[0044] Step S10: acquiring motion data detected by sensors worn at different positions on the user's body;
[0045] In this embodiment of the present invention, the user wears corresponding sensors at different positions on the body in advance. When the user moves, the sensors located at different parts of the user's body can determine, from the corresponding data they detect, the action made by the user.
[0046] In a preferred embodiment, the sensors worn at different positions on the user's body include sensors respectively arranged on the user's hands, feet, and waist. For example, corresponding sensors are arranged on each of the user's hands, feet, and waist, so that motion data of the user's hands, feet, and torso can be detected. Further, corresponding sensors may also be arranged on the user's head and/or neck at the same time, so that motion data of different parts of the user's body can be detected more accurately.
[0047] After the above motion data is detected by the sensors, the system directly acquires the motion data detected by the sensors.
[0048] Step S20: parsing the acquired motion data according to the positions at which the sensors are arranged on the user's body, to obtain the action currently performed by the user;
[0049] After acquiring the motion data detected by the sensors respectively arranged at different positions on the user's body, the system parses the motion data corresponding to the sensors at those different positions, for example the sensors on the left and right hands, the left and right feet, the waist, or other parts, according to where each sensor is arranged on the user's body, and thereby determines the action currently performed by the user.
[0050] In this embodiment of the present invention, one sensor arranged on each of the user's hands, feet, and waist is taken as an example for illustration. For example, the sensors arranged on the user's two hands, two feet, and waist are treated as five points at different positions. When the user is standing normally, the original positions of the five points are acquired; when the user does a handstand, the motion data of the five points during the handstand is acquired. By parsing this motion data, the new positions of the five points after the user's movement are obtained; comparing the new positions with the original positions, it can be determined that the action currently performed by the user is a handstand.
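The five-point comparison above can be sketched as follows. This is an illustrative Python sketch only, not the patent's implementation; the point names, coordinate convention (z is height above the floor), and the `classify_posture` helper are assumptions introduced for illustration:

```python
# Illustrative sketch of the five-point posture check described above.
# Point names and the classification rule are assumptions, not the patent's code.

BODY_POINTS = ["left_hand", "right_hand", "left_foot", "right_foot", "waist"]

def classify_posture(original, current):
    """Compare current sensor positions against the standing baseline.

    `original` and `current` map each body point to an (x, y, z) tuple,
    where z is the height above the floor.
    """
    # In a handstand the hands end up below the waist and the feet above it.
    waist_z = current["waist"][2]
    hands_below = all(current[p][2] < waist_z for p in ("left_hand", "right_hand"))
    feet_above = all(current[p][2] > waist_z for p in ("left_foot", "right_foot"))
    if hands_below and feet_above:
        return "handstand"
    return "standing"

# Original positions while the user stands normally.
standing = {
    "left_hand": (-0.3, 0.0, 1.4), "right_hand": (0.3, 0.0, 1.4),
    "left_foot": (-0.2, 0.0, 0.0), "right_foot": (0.2, 0.0, 0.0),
    "waist": (0.0, 0.0, 1.0),
}
# New positions after the movement.
inverted = {
    "left_hand": (-0.3, 0.0, 0.0), "right_hand": (0.3, 0.0, 0.0),
    "left_foot": (-0.2, 0.0, 1.8), "right_foot": (0.2, 0.0, 1.8),
    "waist": (0.0, 0.0, 0.9),
}
print(classify_posture(standing, inverted))  # handstand
```

A real system would classify many more actions from the same kind of position comparison, but the sketch shows the core idea of comparing new sensor positions against a baseline.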
[0051] Step S30: feeding the obtained action currently performed by the user back to the corresponding virtual character in the virtual reality, and controlling the corresponding virtual character in the virtual reality to perform the corresponding operation.
[0052] After the system parses and obtains the action currently performed by the user, it feeds the obtained action back to the corresponding virtual character in the virtual reality. For example, the system converts the action performed by the user into a control instruction that the corresponding virtual character in the virtual reality can recognize, thereby controlling the corresponding virtual character to perform the corresponding operation.
[0053] For example, if the user performs a "right hook" action, the system acquires the corresponding sensor data and parses out the "right hook" action performed by the user, then feeds the "right hook" action back to the corresponding virtual character A in the virtual reality and controls virtual character A to perform the same "right hook" action as the user.
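The conversion of a parsed action into an avatar-recognizable control instruction can be sketched as a simple lookup. The action names, instruction strings, and the `AvatarController` class below are assumptions for illustration; they are not part of the patent:

```python
# Illustrative sketch: translate a recognized user action into a control
# instruction for the corresponding virtual character. All names are assumed.

class AvatarController:
    # Mapping from recognized actions to avatar control instructions.
    ACTION_TO_COMMAND = {
        "right_hook": "PLAY_ANIM_RIGHT_HOOK",
        "left_hook": "PLAY_ANIM_LEFT_HOOK",
        "handstand": "PLAY_ANIM_HANDSTAND",
    }

    def __init__(self, character_id):
        self.character_id = character_id
        self.sent = []  # instructions already fed back to the virtual character

    def feed_back(self, action):
        """Convert the user's action into an instruction the avatar recognizes."""
        command = self.ACTION_TO_COMMAND.get(action)
        if command is None:
            return None  # unrecognized action: no instruction is issued
        self.sent.append(command)
        return command

controller = AvatarController("virtual_character_A")
print(controller.feed_back("right_hook"))  # PLAY_ANIM_RIGHT_HOOK
```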
[0054] In a preferred embodiment, as shown in FIG. 2, in the virtual reality-based user interaction method of the present invention, in the embodiment of FIG. 1, the above "Step S30: feeding the obtained action currently performed by the user back to the corresponding virtual character in the virtual reality, and controlling the corresponding virtual character in the virtual reality to perform the corresponding operation" may be implemented as steps S301 and S302 described below:
[0055] Step S301: drawing, according to the obtained action currently performed by the user, the motion trajectory corresponding to that action, to obtain the drawn motion trajectory information;
[0056] In this embodiment of the present invention, after the system parses the motion data detected by the sensors and obtains the action currently performed by the user, it draws, according to the obtained action, the motion trajectory corresponding to that action, thereby obtaining the drawn motion trajectory information.
[0057] Further, in a preferred embodiment, the motion trajectory information drawn by the system can be recognized by the virtual character in the virtual reality, which directly performs the action corresponding to the motion trajectory information.
[0058] Step S302: controlling, according to the drawn motion trajectory information, the corresponding virtual character in the virtual reality to perform the corresponding operation in accordance with the motion trajectory information.
[0059] According to the drawn motion trajectory information, the system directly controls the corresponding virtual character in the virtual reality to perform the corresponding operation in accordance with the motion trajectory information. Specific implementations include, but are not limited to: the system directly sends the motion trajectory information to the corresponding virtual character, which recognizes the operation corresponding to the motion trajectory information and performs it; or, according to the motion trajectory information, the system controls the corresponding virtual character to perform an operation consistent with the action corresponding to the motion trajectory information.
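Steps S301 and S302 amount to recording an ordered trajectory from sensor samples and then driving the avatar along it. The sketch below is illustrative only; the sample format `(t, x, y, z)` and the `Avatar` class are assumptions, not the patent's data structures:

```python
# Illustrative sketch of steps S301-S302: draw a trajectory from raw sensor
# samples, then control the avatar to follow it. All names are assumed.

def draw_trajectory(samples):
    """Turn raw (t, x, y, z) sensor samples into a time-ordered trajectory."""
    return sorted(samples, key=lambda s: s[0])  # order by timestamp t

class Avatar:
    def __init__(self):
        self.position = None
        self.visited = []

    def follow(self, trajectory):
        """Perform the operation corresponding to the drawn trajectory."""
        for _, x, y, z in trajectory:
            self.position = (x, y, z)
            self.visited.append(self.position)

# Samples may arrive out of order; drawing the trajectory orders them.
raw = [(0.2, 0.1, 0.0, 1.2), (0.0, 0.0, 0.0, 1.0), (0.1, 0.05, 0.0, 1.1)]
trajectory = draw_trajectory(raw)
avatar = Avatar()
avatar.follow(trajectory)
print(avatar.position)  # (0.1, 0.0, 1.2)
```

In a real engine the avatar would interpolate between trajectory points on each frame rather than jumping to them, but the record-then-replay structure is the same.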
[0060] In a preferred embodiment, based on the description of the embodiments of FIG. 1 and FIG. 2, as shown in FIG. 3, the virtual reality-based user interaction method of the present invention further includes the following step:
[0061] Step S40: acquiring environment information of the scene in which the virtual character is currently located in the virtual reality, and feeding the environment information back to the user, so that the user can perceive the environment information.
[0062] In this embodiment of the present invention, the system may also acquire the environment information corresponding to the scene in which the virtual character is currently located in the virtual reality and feed that environment information back to the user, so that the user can perceive the environment information and form the corresponding tactile perception. The environment information corresponding to the scene in which the virtual character is currently located includes, but is not limited to: temperature information, humidity information, air pressure information, wind force and direction information, and rainfall or snowfall information.
[0063] For example, the system determines that the scene in which the virtual character is currently located in the virtual reality is: virtual character B is in an ice cellar. The environment information corresponding to the scene "ice cellar" is then acquired as a temperature of minus 10 °C. The system changes the environment in which the user is currently located in accordance with this environment information, thereby feeding the environment information back to the user so that the user can perceive it through the user's own current environment; for example, the user will likewise feel cold.
[0064] In this embodiment of the present invention, the specific way in which the system changes the environment in which the user is currently located is determined by the user's specific environment. For example, if the user is currently in an enclosed space, the system adjusts the user's current environment by controlling smart home devices in that space, such as an air conditioner. As another example, if the user is currently wearing clothing with adjustable environmental factors (such as temperature and humidity) while performing the above actions, the system makes the user perceive the environment information by adjusting the environmental factors (such as temperature and humidity) provided by the functional clothing.
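The environment feedback of step S40 can be sketched as a dispatch on the user's real environment. The scene data, actuator names, and dispatch rule below are assumptions made for illustration and do not appear in the patent:

```python
# Illustrative sketch of step S40: feed the virtual scene's environment
# information back through whichever actuator the user actually has.
# Scene data, actuator names, and the dispatch rule are assumed.

SCENE_ENVIRONMENT = {
    "ice_cellar": {"temperature_c": -10, "humidity_pct": 80},
}

def feed_back_environment(scene, user_context):
    """Pick an actuator based on the user's real environment and return
    the (actuator, target temperature) adjustment it should apply."""
    env = SCENE_ENVIRONMENT[scene]
    if user_context == "enclosed_space":
        # Enclosed room: drive a smart-home device such as an air conditioner.
        return ("air_conditioner", env["temperature_c"])
    if user_context == "climate_clothing":
        # Functional clothing with adjustable temperature/humidity.
        return ("clothing", env["temperature_c"])
    return ("none", None)  # no actuator available: nothing to adjust

print(feed_back_environment("ice_cellar", "enclosed_space"))
# ('air_conditioner', -10)
```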
[0065] The virtual reality-based user interaction method of the present invention acquires motion data detected by sensors worn at different positions on the user's body; parses the acquired motion data according to the positions at which the sensors are arranged on the user's body, to obtain the action currently performed by the user; feeds the obtained action back to the corresponding virtual character in the virtual reality, and controls the corresponding virtual character to perform the corresponding operation. It achieves the advantageous effect that the user's limb movements are detected and acquired through the sensors worn on the user's body, enabling the user to interact with the virtual character in the virtual scene, which improves the realism of the virtual reality. Further, since the user can also perceive the environment information of the scene in which the virtual character is located, the user experience is also improved.
[0066] Corresponding to the virtual reality-based user interaction method described in the embodiments of FIG. 1, FIG. 2, and FIG. 3, the present invention further provides a virtual reality-based user interaction system. As shown in FIG. 4, the virtual reality-based user interaction system of the present invention includes a plurality of sensors 100 worn at different positions on the user's body; the system further includes an acquisition module 60, a parsing module 70, and a control module 80, wherein:
[0067] the acquisition module 60 is configured to acquire motion data detected by the sensors 100 worn at different positions on the user's body;
[0068] In this embodiment of the present invention, the user wears corresponding sensors 100 at different positions on the body in advance. When the user moves, the sensors 100 located at different parts of the user's body can determine, from the corresponding data they detect, the action made by the user.
[0069] In a preferred embodiment, the sensors 100 worn at different positions on the user's body include sensors respectively arranged on the user's hands, feet, and waist. For example, corresponding sensors are arranged on each of the user's hands, feet, and waist, so that motion data of the user's hands, feet, and torso can be detected. Further, corresponding sensors 100 may also be arranged on the user's head and/or neck at the same time, so that motion data of different parts of the user's body can be detected more accurately.
[0070] After the above motion data is detected by the sensors 100, the acquisition module 60 directly acquires the motion data detected by the sensors 100.
[0071] the parsing module 70 is configured to parse the acquired motion data according to the positions at which the sensors 100 are arranged on the user's body, to obtain the action currently performed by the user;
[0072] After the acquisition module 60 acquires the motion data detected by the sensors 100 respectively arranged at different positions on the user's body, the parsing module 70 parses the motion data corresponding to the sensors 100 at those different positions, for example the sensors on the left and right hands, the left and right feet, the waist, or other parts, according to where each sensor 100 is arranged on the user's body, and thereby determines the action currently performed by the user.
[0073] In this embodiment of the present invention, one sensor arranged on each of the user's hands, feet, and waist is taken as an example for illustration. For example, the sensors arranged on the user's two hands, two feet, and waist are treated as five points at different positions. When the user is standing normally, the acquisition module 60 acquires the original positions of the five points; when the user does a handstand, the acquisition module 60 acquires the motion data of the five points during the handstand. By parsing this motion data, the parsing module 70 obtains the new positions of the five points after the user's movement and, by comparing the new positions with the original positions, determines that the action currently performed by the user is a handstand.
[0074] the control module 80 is configured to feed the obtained action currently performed by the user back to the corresponding virtual character in the virtual reality, and to control the corresponding virtual character in the virtual reality to perform the corresponding operation.
[0075] After the parsing module 70 parses and obtains the action currently performed by the user, the control module 80 feeds the obtained action back to the corresponding virtual character in the virtual reality. For example, the control module 80 converts the action performed by the user into a control instruction that the corresponding virtual character in the virtual reality can recognize, thereby controlling the corresponding virtual character to perform the corresponding operation.
[0076] For example, if the user performs a "right hook" action, the acquisition module 60 acquires the corresponding sensor data, the parsing module 70 parses out the "right hook" action performed by the user, and the control module 80 then feeds the "right hook" action back to the corresponding virtual character A in the virtual reality and controls virtual character A to perform the same "right hook" action as the user.
[0077] In a preferred embodiment, as shown in FIG. 5, in the virtual reality-based user interaction system of the present invention, in the embodiment of FIG. 4, the control module 80 includes a drawing unit 801 and a control unit 802, wherein:
[0078] the drawing unit 801 is configured to draw, according to the obtained action currently performed by the user, the motion trajectory corresponding to that action, to obtain the drawn motion trajectory information;
[0079] In this embodiment of the present invention, after the parsing module 70 parses the motion data detected by the sensors 100 and obtains the action currently performed by the user, the drawing unit 801 draws, according to the obtained action, the motion trajectory corresponding to that action, thereby obtaining the drawn motion trajectory information. [0080] Further, in a preferred embodiment, the motion trajectory information drawn by the drawing unit 801 can be recognized by the virtual character in the virtual reality, which directly performs the action corresponding to the motion trajectory information.
[0081] the control unit 802 is configured to control, according to the drawn motion trajectory information, the corresponding virtual character in the virtual reality to perform the corresponding operation in accordance with the motion trajectory information.
[0082] According to the motion trajectory information drawn by the drawing unit 801, the control unit 802 directly controls the corresponding virtual character in the virtual reality to perform the corresponding operation in accordance with the motion trajectory information. Specific implementations include, but are not limited to: the control unit 802 directly sends the motion trajectory information to the corresponding virtual character, which recognizes the operation corresponding to the motion trajectory information and performs it; or, according to the motion trajectory information, the control unit 802 controls the corresponding virtual character to perform an operation consistent with the action corresponding to the motion trajectory information.
[0083] In a preferred embodiment, based on the description of the embodiments of FIG. 4 and FIG. 5, as shown in FIG. 6, the virtual reality-based user interaction system of the present invention further includes:
[0084] an interaction module 90, configured to acquire environment information of the scene in which the virtual character is currently located in the virtual reality, and to feed the environment information back to the user, so that the user can perceive the environment information.
[0085] In this embodiment of the invention, the interaction module 90 may also acquire the environment information corresponding to the scene in which the virtual character is currently located in the virtual reality and feed that environment information back to the user, so that the user can perceive the environment information and form the corresponding tactile perception. The environment information corresponding to the scene in which the virtual character is currently located includes, but is not limited to: temperature information, humidity information, air pressure information, wind force and direction information, and rainfall or snowfall information.
[0086] For example, the interaction module 90 determines that the scene in which the virtual character is currently located in the virtual reality is: virtual character B is in an ice cellar. The environment information corresponding to the scene "ice cellar" is then acquired as a temperature of minus 10 °C. The interaction module 90 changes the environment in which the user is currently located in accordance with this environment information, thereby feeding the environment information back to the user so that the user can perceive it through the user's own current environment; for example, the user will likewise feel cold.
[0087] In this embodiment of the present invention, the specific way in which the interaction module 90 changes the environment in which the user is currently located is determined by the user's specific environment. For example, if the user is currently in an enclosed space, the interaction module 90 adjusts the user's current environment by controlling smart home devices in that space, such as an air conditioner. As another example, if the user is currently wearing clothing with adjustable environmental factors (such as temperature and humidity) while performing the above actions, the interaction module 90 makes the user perceive the environment information by adjusting the environmental factors (such as temperature and humidity) provided by the functional clothing.
[0088] The virtual reality-based user interaction system of the present invention acquires motion data detected by sensors worn at different positions on the user's body; parses the acquired motion data according to the positions at which the sensors are arranged on the user's body, to obtain the action currently performed by the user; feeds the obtained action back to the corresponding virtual character in the virtual reality, and controls the corresponding virtual character to perform the corresponding operation. It achieves the advantageous effect that the user's limb movements are detected and acquired through the sensors worn on the user's body, enabling the user to interact with the virtual character in the virtual scene, which improves the realism of the virtual reality. Further, since the user can also perceive the environment information of the scene in which the virtual character is located, the user experience is also improved.
[0089] It should be noted that, as used herein, the terms "comprise", "include", or any other variants thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or device that comprises a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that comprises that element.
[0090] The serial numbers of the foregoing embodiments of the present invention are for description only and do not indicate the relative merits of the embodiments.
[0091] Through the description of the foregoing embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, or of course by hardware, but in many cases the former is the preferable implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing over the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) and includes a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the various embodiments of the present invention.
The above description covers only preferred embodiments of the present invention and does not thereby limit its patent scope; any equivalent structure or equivalent process transformation made using the contents of the description and drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims

1. A virtual reality-based user interaction method, characterized by comprising the following steps:
acquiring motion data detected by sensors worn at different positions on a user's body;
parsing the acquired motion data according to each sensor's position on the user's body, to obtain an action currently performed by the user;
feeding the obtained action currently performed by the user back to a corresponding virtual character in the virtual reality, and controlling the corresponding virtual character in the virtual reality to perform a corresponding operation.
2. The method according to claim 1, characterized in that the virtual reality-based user interaction method further comprises:
acquiring environment information of the scene in which the virtual character is currently located in the virtual reality, and feeding the environment information back to the user, so that the user can perceive the environment information.
3. The method according to claim 2, characterized in that the environment information comprises:
temperature information, humidity information, air pressure information, wind force and wind direction information, and rainfall or snowfall information.
4. The method according to claim 1, characterized in that the sensors worn at different positions on the user's body comprise:
sensors respectively arranged on the user's hands, feet, and waist.
5. The method according to any one of claims 1 to 4, characterized in that feeding the obtained action currently performed by the user back to the corresponding virtual character in the virtual reality and controlling the corresponding virtual character in the virtual reality to perform the corresponding operation comprises:
drawing, according to the obtained action currently performed by the user, a motion trajectory corresponding to that action, to obtain drawn motion trajectory information;
controlling, according to the drawn motion trajectory information, the corresponding virtual character in the virtual reality to perform the corresponding operation in accordance with the motion trajectory information.
6. A virtual reality-based user interaction system, the system comprising a plurality of sensors worn at different positions on a user's body, characterized in that the virtual reality-based user interaction system further comprises:
an acquisition module, configured to acquire motion data detected by the sensors worn at different positions on the user's body;
a parsing module, configured to parse the acquired motion data according to each sensor's position on the user's body, to obtain an action currently performed by the user;
a control module, configured to feed the obtained action currently performed by the user back to a corresponding virtual character in the virtual reality, and to control the corresponding virtual character in the virtual reality to perform a corresponding operation.
7. The system according to claim 6, characterized in that the virtual reality-based user interaction system further comprises:
an interaction module, configured to acquire environment information of the scene in which the virtual character is currently located in the virtual reality, and to feed the environment information back to the user, so that the user can perceive the environment information.
8. The system according to claim 7, characterized in that the environment information comprises:
temperature information, humidity information, air pressure information, wind force and wind direction information, and rainfall or snowfall information.
9. The system according to claim 6, characterized in that the sensors worn at different positions on the user's body comprise:
sensors respectively arranged on the user's hands, feet, and waist.
10. The system according to any one of claims 6 to 9, characterized in that the control module comprises:
a drawing unit, configured to draw, according to the obtained action currently performed by the user, a motion trajectory corresponding to that action, to obtain drawn motion trajectory information;
a control unit, configured to control, according to the drawn motion trajectory information, the corresponding virtual character in the virtual reality to perform the corresponding operation in accordance with the motion trajectory information.
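Claims 5 and 10 insert a trajectory step between parsing and control: the user's action is first drawn as a motion trajectory, and the avatar is then driven along it. A minimal sketch follows; linear interpolation between successive samples is an assumption for illustration, since the claims do not specify how the trajectory is drawn.

```python
from typing import List, Tuple

Point = Tuple[float, float, float]

def draw_trajectory(samples: List[Point], steps_per_segment: int = 4) -> List[Point]:
    """Densify successive motion samples into a drawn trajectory by
    linearly interpolating within each pair of adjacent samples."""
    if len(samples) < 2:
        return list(samples)
    track: List[Point] = []
    for (x0, y0, z0), (x1, y1, z1) in zip(samples, samples[1:]):
        for i in range(steps_per_segment):
            t = i / steps_per_segment
            track.append((x0 + t * (x1 - x0),
                          y0 + t * (y1 - y0),
                          z0 + t * (z1 - z0)))
    track.append(samples[-1])
    return track

def replay(track: List[Point]) -> List[Point]:
    # Stand-in for the control unit of claim 10: the avatar would be
    # stepped through the drawn trajectory frame by frame.
    return list(track)
```

Separating drawing from replay mirrors the drawing unit / control unit split in claim 10: the same drawn trajectory can be replayed on any avatar.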
PCT/CN2016/103733 2015-11-05 2016-10-28 User interaction method and system based on virtual reality WO2017076224A1 (en)

Applications Claiming Priority (2)

CN201510745327.6, priority date 2015-11-05
CN201510745327.6A (CN106681479A), priority date 2015-11-05, filing date 2015-11-05, "User interaction method and system based on virtual reality"

Publications (1)

Publication Number Publication Date
WO2017076224A1 true WO2017076224A1 (en) 2017-05-11

Family

ID=58661676

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/103733 WO2017076224A1 (en) 2015-11-05 2016-10-28 User interaction method and system based on virtual reality

Country Status (2)

Country Link
CN (1) CN106681479A (en)
WO (1) WO2017076224A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2569603A (en) * 2017-12-21 2019-06-26 Sony Interactive Entertainment Inc Position tracking apparatus and method

Families Citing this family (12)

CN107358007B * 2017-08-14 2018-11-30 Tencent Technology (Shenzhen) Co., Ltd. Method and apparatus for controlling a smart home system, and computer-readable storage medium
CN107562195A (en) * 2017-08-17 2018-01-09 英华达(南京)科技有限公司 Man-machine interaction method and system
DE102017215074A1 (en) * 2017-08-29 2019-02-28 Siemens Healthcare Gmbh Method and selection unit for selecting a virtual object or a picture parameter value
CN109783144B (en) * 2017-11-13 2022-03-25 深圳市创客工场科技有限公司 Method and device for processing variable in interactive realization of virtual environment and storage medium
CN109783112A (en) * 2017-11-13 2019-05-21 深圳市创客工场科技有限公司 The exchange method of virtual environment and physical hardware, device and storage medium
KR102354274B1 (en) * 2017-11-17 2022-01-20 텐센트 테크놀로지(센젠) 컴퍼니 리미티드 Role play simulation method and terminal device in VR scenario
CN108334192A (en) * 2018-01-02 2018-07-27 联想(北京)有限公司 A kind of information processing method, wearing formula device and storage medium
CN110134226B (en) * 2018-02-09 2022-05-10 深圳市掌网科技股份有限公司 Auxiliary positioning device and virtual reality operation platform adopting same
CN108509050A (en) * 2018-05-04 2018-09-07 北京航空航天大学 A kind of haptic feedback devices with multiple spot independent temperature feedback
CN108734774B (en) * 2018-05-18 2022-05-31 网易(杭州)网络有限公司 Virtual limb construction method and device and human-computer interaction method
CN112000228B (en) * 2020-09-04 2024-04-05 河北大学 Method and system for controlling movement in immersive virtual reality
CN116107436A (en) * 2023-04-13 2023-05-12 北京乐开科技有限责任公司 VR virtual image interaction method and system based on mobile device

Citations (4)

CN103116857A (en) * 2013-02-01 2013-05-22 武汉百景互动科技有限责任公司 Virtual sample house wandering system based on body sense control
CN103488291A (en) * 2013-09-09 2014-01-01 北京诺亦腾科技有限公司 Immersion virtual reality system based on motion capture
CN104258555A (en) * 2014-09-10 2015-01-07 北京理工大学 RGBD vision sensing type double-fist ball hitting fitness interaction system
CN104407701A (en) * 2014-11-27 2015-03-11 曦煌科技(北京)有限公司 Individual-oriented clustering virtual reality interactive system


Cited By (3)

GB2569603A (en) * 2017-12-21 2019-06-26 Sony Interactive Entertainment Inc Position tracking apparatus and method
GB2569603B (en) * 2017-12-21 2020-04-01 Sony Interactive Entertainment Inc Position tracking apparatus and method
US11369866B2 (en) 2017-12-21 2022-06-28 Sony Interactive Entertainment Inc. Position tracking apparatus and method

Also Published As

Publication number Publication date
CN106681479A (en) 2017-05-17

Similar Documents

Publication Publication Date Title
WO2017076224A1 (en) User interaction method and system based on virtual reality
US11112856B2 (en) Transition between virtual and augmented reality
US10296086B2 (en) Dynamic gloves to convey sense of touch and movement for virtual objects in HMD rendered environments
CN105323129B Home virtual reality entertainment system
CN103246351B User interaction system and method
CN110456907A Control method, device, terminal device and storage medium for a virtual screen
KR102064795B1 (en) Posture training system and method of control thereof
CN106871333A Method, apparatus and system for scene-based air conditioner control in a virtual world
JP6672386B2 (en) Information processing device
CN106445157B (en) Method and device for adjusting picture display direction
US11278810B1 (en) Menu placement dictated by user ability and modes of feedback
CN106885352A Method, apparatus and system for controlling an air conditioner using virtual actions
CN102779000A (en) User interaction system and method
US20190204923A1 (en) Electronic device and control method thereof
JP2022520699A Object control method and apparatus, computer program, and electronic device
CN105310853A (en) Virtual reality somatosensory interaction massage chair
Karthika et al. Hololens
CN106527710A (en) Virtual reality interaction method and device
CN107413048B (en) Processing method and device in VR game process
CN207676287U Virtual reality experience system
CN106125927B (en) Image processing system and method
CN204129662U Virtual interactive scene system based on rear projection
CN107256087B (en) VR instantaneous shift control method
CN107870702B (en) User operation prompting method and device based on head-mounted display equipment
CN105630169A (en) Motion input method and device

Legal Events

121 Ep: the EPO has been informed by WIPO that EP was designated in this application (ref document number 16861492; country of ref document: EP; kind code: A1)
NENP: non-entry into the national phase (ref country code: DE)
122 Ep: PCT application non-entry into the European phase (ref document number 16861492; country of ref document: EP; kind code: A1)