WO2023174008A1 - Virtual reality-based control method and apparatus, and electronic device - Google Patents

Virtual reality-based control method and apparatus, and electronic device

Info

Publication number
WO2023174008A1
WO2023174008A1 (PCT/CN2023/077218, CN2023077218W)
Authority
WO
WIPO (PCT)
Prior art keywords
interactive component
component model
user
virtual reality
target object
Prior art date
Application number
PCT/CN2023/077218
Other languages
English (en)
French (fr)
Inventor
吴培培
李笑林
冀利悦
赵文珲
王丹阳
Original Assignee
北京字跳网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字跳网络技术有限公司
Publication of WO2023174008A1

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/20Scenes; Scene-specific elements in augmented reality scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/012Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Definitions

  • the present disclosure relates to the field of virtual reality technology, and in particular, to a control method, device and electronic device based on virtual reality.
  • virtual reality (VR)
  • buttons on such physical devices are easily damaged, which in turn affects the user's control.
  • this approach also conveys a weak sense of technology to users and degrades the user experience to some extent.
  • the present disclosure provides a control method, device and electronic device based on virtual reality.
  • the main purpose is to improve on the current approach of control through physical device buttons, which is prone to the technical problems that the buttons are easily damaged, which in turn affects the user's control, and that the approach conveys a weak sense of technology to users, degrading the user experience.
  • the present disclosure provides a virtual reality-based control method, including:
  • At least one interactive component model is displayed in the virtual reality space, and the interactive function event pre-bound to the interactive component model selected by the user is executed.
  • the present disclosure provides a virtual reality-based control device, including:
  • a monitoring module configured to monitor image information of the user captured by the camera;
  • a recognition module configured to recognize action information of the target object in the image information;
  • a display module configured to display at least one interactive component model in the virtual reality space according to the action information of the target object, and to execute the interactive function event pre-bound to the interactive component model selected by the user.
  • the present disclosure provides a computer-readable storage medium on which a computer program is stored.
  • when the computer program is executed by a processor, the virtual reality-based control method described in the first aspect is implemented.
  • the present disclosure provides an electronic device, including a storage medium, a processor, and a computer program stored on the storage medium and executable on the processor.
  • when the processor executes the computer program, the virtual reality-based control method described in the first aspect is implemented.
  • the present disclosure provides a control method, device and electronic device based on virtual reality.
  • compared with the current approach of control through physical device buttons, the present disclosure proposes an improved VR control scheme that does not rely on physical device buttons.
  • specifically, the VR device side can first monitor the image information of the user captured by the camera; then identify the action information of the target object in the image information; and then, based on the identified action information of the target object, display at least one interactive component model in the virtual reality space and execute the interactive function event pre-bound to the interactive component model selected by the user.
  • Figure 1 shows a schematic flowchart of a virtual reality-based control method provided by an embodiment of the present disclosure
  • Figure 2 shows a schematic flowchart of another virtual reality-based control method provided by an embodiment of the present disclosure
  • Figure 3 shows a schematic diagram of an example display effect of interactive component models in the form of floating balls provided by an embodiment of the present disclosure
  • Figure 4 shows a schematic diagram of an example display effect of clicking an interactive component model provided by an embodiment of the present disclosure
  • Figure 5 shows a schematic diagram of an example display effect of interactive component models in an application scenario provided by an embodiment of the present disclosure
  • Figure 6 shows a schematic diagram of an example display effect of a camera model in an application scenario provided by an embodiment of the present disclosure
  • Figure 7 shows a schematic structural diagram of a virtual reality-based control apparatus provided by an embodiment of the present disclosure.
  • This embodiment provides a virtual reality-based control method, as shown in Figure 1, which can be applied on the VR device side.
  • the method includes:
  • Step 101 Monitor image information of the user captured by the camera.
  • the camera can be connected to the VR device.
  • while the user is using the VR device, the camera can photograph the user to obtain image information.
  • the user's whole body can be photographed, or a specific part of the user can be photographed, etc.
  • the details can be preset according to actual needs.
  • the user's control instructions can be obtained through image monitoring, without the need to use physical device buttons for VR control.
  • Step 102 Identify the action information of the target object in the image information.
  • the target object can be set in advance; that is, it specifies which reference target in the image information the VR device system recognizes in order to obtain the user's VR control instructions.
  • the target object may include: the user's hands, and/or the user's legs, and/or the user's head, and/or the user's waist, and/or the user's hips, and/or the user's wearable device, etc.
  • the interactive component model may be a component model used for interaction.
  • Each of these interactive component models is pre-bound with interactive function events, and the user can realize the corresponding VR interactive function by selecting one of the interactive component models.
  • the conditions for displaying these interactive component models can be preset for the target object of image monitoring. For example, if the user's hands, and/or the user's legs, and/or the user's head, and/or the user's waist, and/or the user's hips, and/or the user's wearable device, etc., meet certain action requirements, it can be determined that the display condition is met.
  • the interactive component models can then be displayed in the virtual reality space, and the interactive function event pre-bound to the interactive component model selected by the user can be executed based on the subsequent action information of the target object, that is, the process shown in step 103 is performed.
  • Step 103 Display at least one interactive component model in the virtual reality space according to the action information of the target object, and execute the interactive function event pre-bound to the interactive component model selected by the user.
  • the three-dimensional spatial positions of these interactive component models are bound in advance to the three-dimensional spatial position of the user's own avatar; then, based on the avatar's real-time position, the current display positions of the models are determined, and the models are displayed accordingly, so that they appear in front of the user's avatar, for example as multiple interactive component models in the form of a bracelet.
  • this embodiment can continue to monitor the image information captured of the user and, by identifying the action information of the user's hands, and/or legs, and/or head, and/or waist, and/or hips, and/or wearable device in the image information, determine the target interactive component model selected by the user among the displayed models, and then execute the interactive function event pre-bound to the target interactive component model to realize the corresponding VR interactive function.
  • this embodiment proposes an improved solution for VR control without using physical device buttons.
  • the user's control instructions can be obtained through image monitoring.
  • the improvement of this embodiment can effectively mitigate the technical problem that physical device buttons are easily damaged, which in turn affects the user's control; it brings a stronger sense of technology to users and improves their VR experience.
  • this embodiment provides a specific method as shown in Figure 2, which method includes:
  • Step 201 Monitor image information of the user captured by the camera.
  • Step 202 Based on the target object in the image information, determine whether it meets the preset conditions for displaying the interactive component model.
  • step 202 may specifically include: first identifying the user gesture information in the image information; then determining whether the user gesture information matches the preset gesture information; and, if it matches, determining that the preset condition for displaying the interactive component models is met, so that at least one interactive component model can be displayed in the virtual reality space. This can also be regarded as a specific optional way of displaying the interactive component models in step 103.
  • the preset gesture information that can evoke the display of the interactive component models can be preset according to actual needs.
  • for example, a scissor-hand gesture by the user can trigger the display of the interactive component models.
  • different preset gesture information can also evoke different interactive component models for display. This way of evoking the display of the interactive component models through user gesture recognition is convenient for the user and improves the efficiency of the user's VR control.
  • determining whether the user gesture information matches the preset gesture information may include: if the amplitude by which the user's hand is raised is greater than the preset amplitude threshold (which can be preset according to actual needs), determining that the user gesture information matches the preset gesture information.
  • floating balls 1, 2, 3, 4, and 5 can include, from left to right, interactive component models such as "leave the room", "shoot", "send emoji", "send bullet comments", and "menu".
  • step 202 may specifically include: first identifying the position information of the handheld device in the image information; then determining whether the position information of the handheld device complies with the preset position change rule; and, if it complies, determining that the preset condition for displaying the interactive component models is met, so that at least one interactive component model can be displayed in the virtual reality space. This can also be regarded as another specific optional way of displaying the interactive component models in step 103.
  • the preset position change rule of the handheld device that can evoke the display of the interactive component models can be preset according to actual needs.
  • for example, drawing a circle in the air with the user's handheld device can trigger the display of the interactive component models.
  • different preset position change rules can also evoke different interactive component models for display. In this way of evoking the display through recognition of position changes of the user's handheld device, the VR device can detect the handheld device with multiple kinds of sensors, which can effectively assist in accurately determining whether the preset condition for evoking the display is met; this not only facilitates the user's control but also effectively improves the accuracy of the user's VR control.
  • determining whether the position information of the handheld device complies with the preset position change rule specifically includes: if the amplitude by which the handheld device is raised is greater than the preset amplitude threshold (which can be preset according to actual needs), determining that the position information of the handheld device complies with the preset position change rule.
  • the user lifts the handheld device to call up an interactive component model in the form of a floating ball.
  • Each floating ball represents a control function, and the user can subsequently interact through the floating-ball functions.
  • Whether the preset conditions for displaying the interactive component model are met can be comprehensively determined based on the scene where the user is located and/or the area where the user's focus is.
  • for example, only when the user is in a specific VR scene, and/or the user's focus has triggered the display of the VR controller, is it then determined, based on the target object in the image information, whether the preset condition for displaying the interactive component models is met.
  • through this comprehensive determination, users can be prevented from erroneously evoking the display of the interactive component models, ensuring the smoothness of the user's VR experience and thereby improving it.
  • Step 203 If it is determined that the preset condition for displaying the interactive component models is met, then, by identifying the target object in the image information, the virtual object model corresponding to the target object is displayed together with the interactive component models in the virtual reality space.
  • the virtual object model can display dynamic changes following the movement of the target object.
  • the movement of the target object in the image information can be mapped into the virtual reality space, so that the virtual object model of the target object can follow the movement of the target object.
  • the user's virtual hand image is displayed.
  • the virtual hand image can dynamically change and display following the hand movement information of the user's hand image.
  • a virtual handheld device image is displayed.
  • the virtual handheld device image can dynamically change and display following the device movement information of the user's handheld device image.
  • the user can refer to the virtual object model (such as the virtual hand or virtual handheld device) presented in the virtual reality space and make movements to click and select, among the displayed interactive component models, the one with the required function. This is convenient for the user and can improve the efficiency of the user's VR control.
  • step 203 may specifically include: dynamically adjusting the spatial display position of the interactive component model based on changes in the spatial position of the virtual object model, so that the interactive component model can follow the movement of the virtual object model.
  • the spatial position of the virtual object model can be bound in advance to the spatial display position of the interactive component model, so that when the spatial position of the virtual object model changes, the interactive component model can also follow the virtual object model for movement display.
  • when the user's hand moves, not only will the virtual hand in the virtual reality space follow the movement, but the displayed interactive component model will also follow the movement, making it easier for the user to locate the position of the interactive component model to be selected and then accurately click and select it.
  • step 203 may also specifically include: displaying the interactive component model within a preset range of the virtual object model. For example, while a virtual hand is displayed, interactive component models in the form of floating balls are displayed, and these floating balls can be displayed in the area near the virtual hand to facilitate the user's selection and control.
  • the interactive functions required by the user can be implemented by executing the processes shown in steps 204 to 206 .
  • Step 204 By identifying the position of the target object and mapping it into the virtual reality space, determine the spatial position of the corresponding first click mark.
  • Step 205 If the spatial position of the first click mark matches the spatial position of the target interactive component model in the interactive component model, determine that the target interactive component model is the interactive component model selected by the user.
  • Step 206 Execute the interactive function event pre-bound to the target interactive component model.
  • the user can raise the left hand to bring the user's virtual left hand mapped in the virtual reality space into the user's current field of view, thereby evoking interactive component models in the form of floating balls, and then select and click one of the interactive component models by moving the right hand.
  • on the VR device side, based on the image of the user's hands, the position of the user's right hand is identified and mapped into the virtual reality space to determine the spatial position of the corresponding click mark.
  • if the spatial position of the click mark matches the spatial position of the "send emoji" interactive component model, the user has clicked the "send emoji" function; finally, the interactive function event pre-bound to the "send emoji" interactive component model is executed, which triggers the send-emoji function and then displays the emoji panel model.
  • step 206 may optionally include: first displaying, in the virtual reality space, an option panel model corresponding to the target interactive component model; then identifying the position of the target object and mapping it into the virtual reality space to determine the spatial position of a corresponding second click mark; if the spatial position of the second click mark matches the spatial position of a target option in the option panel, determining that the target option is the option selected by the user in the option panel, and triggering execution of the corresponding event.
  • the above embodiments illustrate the specific process of how the interactive component models are evoked for display and how they are clicked and selected for use.
  • the method in this embodiment may also include: judging, based on the target object in the image information, whether the preset condition for canceling the display of the interactive component models (which can be preset according to actual needs) is met; if it is determined that this preset condition is met, canceling the display of these interactive component models in the virtual reality space.
  • the above judging of whether the preset condition for canceling the display of the interactive component models is met, based on the target object in the image information, may specifically include: determining, based on the user gesture information or handheld device position information in the image information, whether the preset condition for canceling the display is met.
  • for example, the user raises the left hand by a certain amplitude so that the user's virtual left hand mapped in the virtual reality space enters the user's current field of view, evoking the display of the interactive component models; when the user no longer needs them, the raised left hand can be lowered so that the virtual left hand moves outside the user's current field of view, and the display of these interactive component models can be canceled. When the interactive component models need to be displayed again, the user simply raises the left hand once more.
  • users can watch virtual live broadcasts and other video content. For example, after users wear VR equipment and enter the virtual concert site, they can watch the performance content as if they were at the scene.
  • the camera can be used to capture images of the user's hand or of the user's handheld device, and image recognition technology can be used to judge the user's hand gesture or the position change of the handheld device in the image. If it is determined that the user's hand or handheld device is raised by a certain amplitude so that the virtual hand or virtual handheld device mapped in the virtual reality space enters the user's current field of view, the interactive component models can be evoked in the virtual reality space. As shown in Figure 5, based on image recognition technology, the user can raise the handheld device to summon interactive component models in the form of floating balls.
  • Each floating ball represents a control function.
  • the user can interact through the floating-ball functions. As shown in Figure 5, these can specifically include interactive component models such as "leave the room", "shoot", "send emoji", "send bullet comments", and "2D live".
  • based on the subsequently monitored images, the position of the user's hand or the user's handheld device is identified and mapped into the virtual reality space to determine the spatial position of the corresponding click mark.
  • if the spatial position of the click mark matches the spatial position of a target interactive component model among the displayed interactive component models, the target interactive component model is determined to be the interactive component model selected by the user; finally, the interactive function event pre-bound to the target interactive component model is executed.
  • the user can raise the left-hand controller to evoke the display of the interactive component models in the form of floating balls, and then select and click one of the interactive components by moving the right-hand controller.
  • the position of the right-hand controller is identified and mapped into the virtual reality space to determine the spatial position of the corresponding click mark. If the spatial position of the click mark matches the spatial position of the "shoot" interactive component model, the user has clicked the "shoot" function; finally, the interactive function event pre-bound to the "shoot" interactive component model is executed, which triggers the shooting function.
  • in the virtual reality image, the scene information corresponding to the shooting range of the camera model is selected and rendered to a texture; the camera model is displayed in the virtual reality space, and the rendered texture map is placed in the preset viewfinder area of the camera model.
  • after the user clicks the "shoot" floating ball, the corresponding shooting function panel can be displayed; a camera model in the form of a selfie-stick camera is then displayed in the virtual reality space, and the framed scene is shown in the viewfinder. If the user needs to capture image information within a desired shooting range, the shooting range of the camera model can be dynamically adjusted by inputting a shooting-range adjustment instruction.
  • unlike existing screen-recording approaches, the virtual shooting method of this embodiment renders the VR scene information within the selected range to a texture in real time and then maps the texture into the viewfinder area, without relying on the sensors of a physical camera module, thus ensuring the picture quality of the captured images.
  • the VR scene content within the dynamically moving shooting range can be presented in the preset viewfinder area in real time.
  • the viewfinder display is not affected by factors such as the swing of the camera model and can closely simulate the user's real shooting experience, thereby improving the user's VR experience.
  • this embodiment proposes an improved scheme for performing VR control without resorting to physical device buttons, which can mitigate the technical problem that physical device buttons are easily damaged, which in turn affects the user's control.
  • this embodiment provides a control device based on virtual reality.
  • the device includes: a monitoring module 31, a recognition module 32, and a display module 33.
  • the monitoring module 31 is configured to monitor image information of the user captured by the camera;
  • the recognition module 32 is configured to recognize action information of the target object in the image information;
  • the display module 33 is configured to display at least one interactive component model in the virtual reality space according to the action information of the target object, and to execute the interactive function event pre-bound to the interactive component model selected by the user.
  • the target object includes: a user's hand; accordingly, the display module 33 is specifically configured to identify user gesture information in the image information; determine whether the user gesture information matches the preset gesture information; if the user gesture information matches the preset gesture information, at least one interactive component model is displayed in the virtual reality space.
  • the display module 33 is specifically configured to determine that the user gesture information matches the preset gesture information if the user's hand lifting amplitude is greater than the preset amplitude threshold.
  • the target object includes: a user's handheld device; correspondingly, the display module 33 is specifically configured to identify the handheld device position information in the image information; determine whether the handheld device position information complies with the preset position change rule; and, if it complies, display at least one interactive component model in the virtual reality space.
  • the display module 33 is specifically configured to determine that the position information of the handheld device complies with the preset position change rules if the lifting amplitude of the handheld device is greater than the preset amplitude threshold.
  • the display module 33 is specifically configured to display the virtual object model corresponding to the target object while displaying the interactive component model in the virtual reality space, wherein the virtual object model The display can be dynamically changed following the movement of the target object.
  • the display module 33 is specifically configured to dynamically adjust the spatial display position of the interactive component model based on changes in the spatial position of the virtual object model, so that the interaction The component model can follow the virtual object model for movement display.
  • the display module 33 is specifically configured to display the interactive component model within a preset range of the virtual object model.
  • the display module 33 is specifically configured to identify the position of the target object and map it into the virtual reality space to determine the spatial position of the corresponding first click mark; if the spatial position of the first click mark matches the spatial position of the target interactive component model among the interactive component models, determine that the target interactive component model is the interactive component model selected by the user; and execute the interactive function event pre-bound to the target interactive component model.
  • the display module 33 is specifically configured to display, in the virtual reality space, the option panel model corresponding to the target interactive component model; identify the position of the target object and map it into the virtual reality space to determine the spatial position of the corresponding second click mark; and, if the spatial position of the second click mark matches the spatial position of the target option in the option panel, determine that the target option is the option selected by the user in the option panel and trigger execution of the corresponding event.
  • the display module 33 is also configured to, after at least one interactive component model is displayed in the virtual reality space, determine, based on the target object in the image information, whether the preset condition for canceling the display of the interactive component model is met; if it is determined that this preset condition is met, cancel the display of the at least one interactive component model in the virtual reality space.
  • the display module 33 is specifically configured to determine whether the preset condition for canceling the display of the interactive component model is met based on the user gesture information or the handheld device position information in the image information.
  • this embodiment also provides a computer-readable storage medium on which a computer program is stored.
  • when the computer program is executed by the processor, the virtual reality-based control method shown in Figures 1 and 2 is implemented.
  • the technical solution of the present disclosure can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (such as a CD-ROM, a USB flash drive, or a removable hard disk) and includes several instructions to cause a computer device (such as a personal computer, a server, or a network device) to execute the method of each implementation scenario of the present disclosure.
  • embodiments of the present disclosure also provide an electronic device, which may specifically be a virtual reality device, such as a VR headset.
  • the device includes a storage medium and a processor; the storage medium is used to store a computer program; and the processor is used to execute the computer program to implement the above-mentioned virtual reality-based control method as shown in Figures 1 and 2.
  • the above-mentioned physical devices may also include user interfaces, network interfaces, cameras, radio frequency (Radio Frequency, RF) circuits, sensors, audio circuits, WI-FI modules, etc.
  • the user interface may include a display screen (Display), an input unit such as a keyboard (Keyboard), etc.
  • optionally, the user interface may also include a USB interface, a card reader interface, etc.
  • Optional network interfaces may include standard wired interfaces, wireless interfaces (such as WI-FI interfaces), etc.
  • the above-mentioned physical device structure does not constitute a limitation on the physical device, and may include more or fewer components, or combine certain components, or arrange different components.
  • the storage medium may also include an operating system and a network communication module.
  • the operating system is a program that manages the hardware and software resources of the above-mentioned physical devices and supports the operation of information processing programs and other software and/or programs.
  • the network communication module is used to realize communication between components within the storage medium, as well as communication with other hardware and software in the information processing physical device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure relates to a virtual reality-based control method and apparatus, and an electronic device, in the field of virtual reality technology. The method includes: first monitoring image information of a user captured by a camera; then identifying action information of a target object in the image information; and then, according to the action information of the target object, displaying at least one interactive component model in a virtual reality space and executing an interactive function event pre-bound to the interactive component model selected by the user. Applying the technical solution of the present disclosure effectively mitigates the technical problem that physical device buttons are easily damaged, which in turn affects the user's control; it brings the user a stronger sense of technology and improves the user's VR experience.

Description

Virtual Reality-Based Control Method and Apparatus, and Electronic Device
CROSS-REFERENCE TO RELATED APPLICATIONS
This application is based on and claims priority to Chinese Patent Application No. 202210263698.0, filed on March 17, 2022 and entitled "Virtual reality-based control method and apparatus, and electronic device", the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present disclosure relates to the field of virtual reality technology, and in particular to a virtual reality-based control method and apparatus, and an electronic device.
BACKGROUND
With the continuous development of social productivity and of science and technology, demand for virtual reality (VR) technology is growing across industries. VR technology has made great progress and is gradually becoming a new field of science and technology.
At present, while using a virtual reality device to experience virtual reality images, a user can perform control through physical device buttons, for example to open a settings menu, so as to carry out commonly used functional operations.
However, such physical device buttons are easily damaged, which in turn affects the user's control. Moreover, this approach conveys a weak sense of technology to the user and degrades the user experience to some extent.
SUMMARY
In view of this, the present disclosure provides a virtual reality-based control method and apparatus, and an electronic device. The main purpose is to address the technical problems of the current approach of control through physical device buttons: the buttons are easily damaged, which in turn affects the user's control, and the approach conveys a weak sense of technology, degrading the user experience.
In a first aspect, the present disclosure provides a virtual reality-based control method, including:
monitoring image information of a user captured by a camera;
identifying action information of a target object in the image information; and
displaying at least one interactive component model in a virtual reality space according to the action information of the target object, and executing an interactive function event pre-bound to the interactive component model selected by the user.
In a second aspect, the present disclosure provides a virtual reality-based control apparatus, including:
a monitoring module configured to monitor image information of a user captured by a camera;
a recognition module configured to identify action information of a target object in the image information; and
a display module configured to display at least one interactive component model in a virtual reality space according to the action information of the target object, and to execute an interactive function event pre-bound to the interactive component model selected by the user.
In a third aspect, the present disclosure provides a computer-readable storage medium on which a computer program is stored, the computer program implementing the virtual reality-based control method of the first aspect when executed by a processor.
In a fourth aspect, the present disclosure provides an electronic device including a storage medium, a processor, and a computer program stored on the storage medium and executable on the processor, the processor implementing the virtual reality-based control method of the first aspect when executing the computer program.
By means of the above technical solutions, and compared with the current approach of control through physical device buttons, the virtual reality-based control method and apparatus and the electronic device provided by the present disclosure constitute an improved scheme for VR control that does not rely on physical device buttons. Specifically, the VR device side may first monitor image information of the user captured by a camera; then identify action information of a target object in the image information; and then, according to the identified action information of the target object, display at least one interactive component model in the virtual reality space and execute the interactive function event pre-bound to the interactive component model selected by the user. Applying the technical solution of the present disclosure effectively mitigates the technical problem that physical device buttons are easily damaged, which in turn affects the user's control; it brings the user a stronger sense of technology and improves the user's VR experience.
The above description is only an overview of the technical solutions of the present disclosure. In order that the technical means of the present disclosure may be understood more clearly and implemented in accordance with the contents of the specification, and in order to make the above and other objects, features, and advantages of the present disclosure more apparent, specific embodiments of the present disclosure are set forth below.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the present disclosure.
To describe the technical solutions in the embodiments of the present disclosure or in the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, a person of ordinary skill in the art may derive other drawings from these drawings without creative effort.
Figure 1 is a schematic flowchart of a virtual reality-based control method provided by an embodiment of the present disclosure;
Figure 2 is a schematic flowchart of another virtual reality-based control method provided by an embodiment of the present disclosure;
Figure 3 is a schematic diagram of an example display effect of interactive component models in the form of floating balls provided by an embodiment of the present disclosure;
Figure 4 is a schematic diagram of an example display effect of clicking an interactive component model provided by an embodiment of the present disclosure;
Figure 5 is a schematic diagram of an example display effect of interactive component models in an application scenario provided by an embodiment of the present disclosure;
Figure 6 is a schematic diagram of an example display effect of a camera model in an application scenario provided by an embodiment of the present disclosure;
Figure 7 is a schematic structural diagram of a virtual reality-based control apparatus provided by an embodiment of the present disclosure.
DETAILED DESCRIPTION
Embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. It should be noted that, in the absence of conflict, the embodiments of the present disclosure and the features therein may be combined with one another.
To address the technical problems of the current approach of control through physical device buttons (the buttons are easily damaged, which in turn affects the user's control, and the approach conveys a weak sense of technology, degrading the user experience), this embodiment provides a virtual reality-based control method. As shown in Figure 1, the method can be applied on the VR device side and includes:
Step 101: Monitor image information of the user captured by a camera.
The camera may be connected to the VR device. While the user is using the VR device, the camera can photograph the user to obtain image information. For example, the user's whole body may be photographed, or a specific body part, which can be preset according to actual needs. In this embodiment, the user's control instructions are obtained through image monitoring, without relying on physical device buttons for VR control.
Step 102: Identify action information of a target object in the image information.
The target object may be set in advance; that is, it specifies which reference target in the image information the VR device system recognizes in order to obtain the user's VR control instructions. For example, the target object may include the user's hands, and/or legs, and/or head, and/or waist, and/or hips, and/or a device worn by the user.
For this embodiment, based on the identified action information of the target object, it can be determined whether a preset condition for displaying interactive component models is met. An interactive component model is a component model used for interaction; each such model is pre-bound with an interactive function event, so that the user can realize the corresponding VR interactive function by selecting one of them. In this embodiment, the conditions for displaying these interactive component models can be preset for the monitored target object. For example, if the user's hands, and/or legs, and/or head, and/or waist, and/or hips, and/or worn device meet certain action requirements, it is determined that the display condition is met; the interactive component models can then be displayed in the virtual reality space, and the interactive function event pre-bound to the model selected by the user can be executed according to the subsequent action information of the target object, that is, the process shown in step 103.
Step 103: Display at least one interactive component model in the virtual reality space according to the action information of the target object, and execute the interactive function event pre-bound to the interactive component model selected by the user.
For example, the three-dimensional spatial positions of these interactive component models are bound in advance to the three-dimensional spatial position of the user's own avatar; then, based on the avatar's real-time position, the current display positions of the models are determined and the models are displayed accordingly, so that they appear in front of the user's avatar, for example as multiple interactive component models arranged in the form of a bracelet.
After the interactive component models are evoked, this embodiment can continue to monitor the image information captured of the user and, by identifying the action information of the user's hands, and/or legs, and/or head, and/or waist, and/or hips, and/or worn device in the image information, determine the target interactive component model selected by the user among those displayed, and then execute the interactive function event pre-bound to that target model to realize the corresponding VR interactive function.
Compared with the current approach of control through physical device buttons, this embodiment proposes an improved scheme for VR control that does not rely on physical device buttons: the user's control instructions are obtained through image monitoring. This improvement effectively mitigates the technical problem that physical device buttons are easily damaged, which in turn affects the user's control; it brings the user a stronger sense of technology and improves the user's VR experience.
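As an illustrative aside (an editorial sketch, not part of the original disclosure), the monitor-identify-display flow of steps 101 to 103 can be organized as a simple event loop. The names `frames`, `detect_action`, and the bound callbacks below are assumed stand-ins for the device SDK's camera feed, recognition service, and UI layer.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List, Optional


@dataclass
class InteractiveComponentModel:
    name: str
    on_select: Callable[[], None]  # the pre-bound interactive function event


def control_loop(frames: Iterable[dict],
                 detect_action: Callable[[dict], Optional[str]],
                 models: List[InteractiveComponentModel]) -> None:
    """Step 101: monitor camera frames; step 102: identify the target
    object's action; step 103: show the models and fire the selected
    model's pre-bound event."""
    visible = False
    for frame in frames:                           # step 101
        action = detect_action(frame)              # step 102
        if action == "evoke" and not visible:
            visible = True                         # step 103: display models
            print("showing:", [m.name for m in models])
        elif action == "dismiss":
            visible = False
        elif visible and action and action.startswith("select:"):
            selected = action.split(":", 1)[1]
            for m in models:
                if m.name == selected:
                    m.on_select()                  # execute pre-bound event


# Toy usage with scripted "recognition" results:
models = [InteractiveComponentModel("send_emoji", lambda: print("emoji panel")),
          InteractiveComponentModel("shoot", lambda: print("camera model"))]
script = iter([{"a": "evoke"}, {"a": "select:shoot"}, {"a": "dismiss"}])
control_loop(script, lambda f: f["a"], models)
```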
Further, as a refinement and extension of the above embodiment, and to fully explain the specific implementation of this embodiment's method, this embodiment provides the specific method shown in Figure 2, which includes:
Step 201: Monitor image information of the user captured by a camera.
Step 202: Based on the target object in the image information, determine whether the preset condition for displaying the interactive component models is met.
Taking the target object including the user's hand as an example, step 202 may optionally include: first identifying user gesture information in the image information; then determining whether the user gesture information matches preset gesture information; and, if it matches, determining that the preset condition for displaying the interactive component models is met, so that at least one interactive component model can be displayed in the virtual reality space. This can also be regarded as a specific optional way of displaying the interactive component models in step 103.
In this optional embodiment, the preset gesture information that can evoke the display of the interactive component models can be set in advance according to actual needs. For example, a scissor-hand gesture by the user can trigger the display, and different preset gesture information can evoke different interactive component models. Evoking the display of the interactive component models through gesture recognition is convenient for the user and improves the efficiency of the user's VR control.
Illustratively, determining whether the user gesture information matches the preset gesture information may specifically include: if the amplitude by which the user's hand is raised is greater than a preset amplitude threshold (which can be set in advance according to actual needs), determining that the user gesture information matches the preset gesture information.
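A minimal sketch of this amplitude test, under the assumption that per-frame hand keypoints are available; the wrist-height representation and the 0.15 m threshold are illustrative choices, not values from the disclosure.

```python
def gesture_matches(wrist_heights, threshold_m=0.15):
    """Return True when the wrist has risen above its recent rest baseline
    by more than the preset amplitude threshold (step 202, gesture branch).

    wrist_heights: recent wrist heights in metres, oldest first.
    """
    if len(wrist_heights) < 2:
        return False
    baseline = min(wrist_heights)       # proxy for the resting position
    return wrist_heights[-1] - baseline > threshold_m


# A hand raised about 0.3 m above its lowest recent position matches:
assert gesture_matches([1.00, 1.05, 1.30])
assert not gesture_matches([1.00, 1.02, 1.05])
```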
For example, as shown in Figure 3, based on image recognition technology, the user can raise a hand to summon interactive component models in the form of floating balls, where each floating ball represents a control function through which the user can subsequently interact. As shown in Figure 3, floating balls 1 to 5 may include, from left to right, interactive component models such as "leave the room", "shoot", "send emoji", "send bullet comments", and "menu".
Taking the target object including the user's handheld device (such as a controller) as an example, step 202 may optionally include: first identifying handheld device position information in the image information; then determining whether the handheld device position information complies with a preset position change rule; and, if it complies, determining that the preset condition for displaying the interactive component models is met, so that at least one interactive component model can be displayed in the virtual reality space. This can also be regarded as another specific optional way of displaying the interactive component models in step 103.
In this optional embodiment, the preset position change rule of the handheld device that can evoke the display of the interactive component models can be set in advance according to actual needs. For example, drawing a circle in the air with the handheld device can trigger the display, and different preset position change rules can evoke different interactive component models. In this way of evoking the display through recognizing position changes of the handheld device, the VR device can detect the handheld device with multiple kinds of sensors, which effectively helps determine accurately whether the preset condition for evoking the display is met; this is convenient for the user and also effectively improves the accuracy of the user's VR control.
Illustratively, determining whether the handheld device position information complies with the preset position change rule specifically includes: if the amplitude by which the handheld device is raised is greater than a preset amplitude threshold (which can be set in advance according to actual needs), determining that the handheld device position information complies with the preset position change rule.
For example, similarly to the example shown in Figure 3, based on image recognition technology, the user can raise the handheld device to summon interactive component models in the form of floating balls, where each floating ball represents a control function through which the user can subsequently interact.
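To illustrate a richer position change rule than a simple lift, the following editorial sketch checks whether a recent handheld-device trajectory approximates the circle-drawing action mentioned above. The 2D sampling format and all thresholds are assumptions chosen for illustration.

```python
import math


def draws_circle(positions, min_radius=0.05, closure_tol=0.08):
    """Heuristic position-change rule: does the recent 2D trajectory of the
    handheld device sweep roughly one full, closed loop in the air?

    positions: list of (x, y) device positions in metres, oldest first.
    """
    if len(positions) < 8:
        return False
    cx = sum(p[0] for p in positions) / len(positions)
    cy = sum(p[1] for p in positions) / len(positions)
    radii = [math.hypot(x - cx, y - cy) for x, y in positions]
    if sum(radii) / len(radii) < min_radius:
        return False                  # too small to be a deliberate circle
    angles = [math.atan2(y - cy, x - cx) for x, y in positions]
    swept = sum(abs((b - a + math.pi) % (2 * math.pi) - math.pi)
                for a, b in zip(angles, angles[1:]))
    sx, sy = positions[0]
    ex, ey = positions[-1]
    closed = math.hypot(ex - sx, ey - sy) < closure_tol
    return swept > 1.8 * math.pi and closed


# A 12-point loop of radius 0.1 m satisfies the rule:
loop = [(0.1 * math.cos(t), 0.1 * math.sin(t))
        for t in (i * 2 * math.pi / 12 for i in range(13))]
assert draws_circle(loop)
```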
Further, in this embodiment, to prevent erroneous operation by the user (for example, while the user is playing a VR game, some routine in-game gestures might otherwise evoke the display of the interactive component models and disrupt the user's VR gaming experience), the scene the user is in and/or the area of the user's focus can also be taken into account to comprehensively determine whether the preset condition for displaying the interactive component models is met.
For example, only when the user is in a specific VR scene, and/or the user's focus has triggered the display of the VR controller, is it then determined, based on the target object in the image information, whether the preset condition for displaying the interactive component models is met. This comprehensive determination prevents the user from erroneously evoking the display of the interactive component models, ensures the smoothness of the user's VR experience, and thereby improves it.
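This gating can be expressed as a simple predicate evaluated before the gesture or position test runs; the scene names below are hypothetical placeholders, not identifiers from the disclosure.

```python
def should_evaluate_evoke(scene_id, controller_focused,
                          allowed_scenes=frozenset({"concert", "live_room"})):
    """Only evaluate the evoke condition in whitelisted scenes, or when the
    user's focus has already brought up the VR controller."""
    return scene_id in allowed_scenes or controller_focused


assert should_evaluate_evoke("concert", False)
assert not should_evaluate_evoke("vr_game", False)  # avoids accidental evoke
assert should_evaluate_evoke("vr_game", True)
```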
Step 203: If it is determined that the preset condition for displaying the interactive component models is met, then, by identifying the target object in the image information, display the virtual object model corresponding to the target object together with the interactive component models in the virtual reality space.
The virtual object model can change dynamically to follow the movement of the target object. In this embodiment, the movement of the target object in the image information can be mapped into the virtual reality space so that the target object's virtual object model follows it. For example, by recognizing an image of the user's hand, a virtual hand image of the user is displayed together with the interactive component models, and the virtual hand image changes dynamically to follow the hand action information in the image. As another example, by recognizing an image of the user's handheld device, a virtual handheld device image is displayed together with the interactive component models, and it changes dynamically to follow the device movement information in the image.
In this optional way, the user can refer to the virtual object model presented in the virtual reality space (such as the virtual hand or virtual handheld device) and make movements to click and select, among the displayed interactive component models, the one with the required function. This is convenient for the user and can improve the efficiency of the user's VR control.
To further facilitate user control and enhance the sense of technology, step 203 may optionally include: dynamically adjusting the spatial display positions of the interactive component models based on changes in the spatial position of the virtual object model, so that the interactive component models follow the virtual object model's movement. For this option, the spatial position of the virtual object model can be bound in advance to the spatial display positions of the interactive component models, so that when the virtual object model moves, the interactive component models move with it. For example, when the user's hand moves, not only does the virtual hand in the virtual reality space follow the movement, but the displayed interactive component models follow as well, making it easier for the user to locate, and then accurately click and select, the desired model.
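One plausible way to realize this binding, sketched here as an editorial illustration, is to anchor each floating ball to the virtual hand by a fixed offset so the group moves rigidly with the hand; the vector type and offsets are assumptions.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Vec3:
    x: float
    y: float
    z: float

    def __add__(self, other: "Vec3") -> "Vec3":
        return Vec3(self.x + other.x, self.y + other.y, self.z + other.z)


def layout_following(hand_pos: Vec3, offsets: list) -> list:
    """Re-derive each floating ball's display position from the virtual
    hand's current position plus a fixed per-ball offset (step 203)."""
    return [hand_pos + off for off in offsets]


# Five balls fanned out 10 cm apart, 5 cm above the hand:
offsets = [Vec3(0.1 * i - 0.2, 0.05, 0.0) for i in range(5)]
print(layout_following(Vec3(0.0, 1.2, -0.3), offsets))
print(layout_following(Vec3(0.1, 1.25, -0.3), offsets))  # balls follow the hand
```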
For convenience of display, step 203 may optionally include: displaying the interactive component models within a preset range of the virtual object model. For example, while the virtual hand is displayed, interactive component models in the form of floating balls can be displayed in the area near the virtual hand, which facilitates the user's selection and control.
After the user evokes the display of the interactive component models, the interactive function the user requires can be realized by performing the process shown in steps 204 to 206.
Step 204: Identify the position of the target object and map it into the virtual reality space to determine the spatial position of a corresponding first click mark.
Step 205: If the spatial position of the first click mark matches the spatial position of a target interactive component model among the interactive component models, determine that the target interactive component model is the interactive component model selected by the user.
Step 206: Execute the interactive function event pre-bound to the target interactive component model.
For example, as shown in Figure 4, the user can raise the left hand so that the user's virtual left hand mapped into the virtual reality space enters the user's current field of view, thereby evoking the interactive component models in the form of floating balls, and then move the right hand to select and click one of them. On the VR device side, based on the image of the user's hands, the position of the user's right hand is identified and mapped into the virtual reality space to determine the spatial position of the corresponding click mark. If that spatial position matches the spatial position of the "send emoji" interactive component model, the user has clicked the "send emoji" function; finally, the interactive function event pre-bound to the "send emoji" interactive component model is executed, which triggers the send-emoji function and then displays the emoji panel model.
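The matching in steps 204 and 205 amounts to a proximity test between the click mark and each displayed model. A sketch follows, assuming positions are 3D points and using an editor-chosen hit tolerance:

```python
import math


def pick_model(click_pos, model_positions, tolerance=0.06):
    """Steps 204-205: return the name of the model whose spatial position
    best matches the click mark's position, or None if nothing is hit.

    click_pos: (x, y, z) of the click mark mapped into VR space.
    model_positions: {name: (x, y, z)} of the displayed models.
    """
    best_name, best_dist = None, tolerance
    for name, pos in model_positions.items():
        dist = math.dist(click_pos, pos)
        if dist <= best_dist:
            best_name, best_dist = name, dist
    return best_name  # step 206 then fires this model's pre-bound event


balls = {"send_emoji": (0.10, 1.25, -0.30), "shoot": (0.20, 1.25, -0.30)}
assert pick_model((0.11, 1.24, -0.31), balls) == "send_emoji"
assert pick_model((0.50, 1.00, -0.30), balls) is None
```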
In practical applications, a triggered function panel may contain multiple options. To enable further option control, step 206 may optionally include: first displaying, in the virtual reality space, an option panel model corresponding to the target interactive component model; then identifying the position of the target object and mapping it into the virtual reality space to determine the spatial position of a corresponding second click mark; and, if the spatial position of the second click mark matches the spatial position of a target option in the option panel, determining that the target option is the option selected by the user in the option panel, and triggering execution of the corresponding event.
For example, as shown in Figure 4, after the "send emoji" function panel model is displayed, it contains multiple emoji options, and the user can move the right hand to select and click one of them. On the VR device side, based on the image of the user's hands, the position of the user's right hand is identified and mapped into the virtual reality space to determine the spatial position of the corresponding click mark. If that spatial position matches the spatial position of the "shocked" emoji option, the user has clicked the "shocked" option; finally, the "shocked" emoji is sent, and its image can be displayed above the head of the user's own avatar.
The above embodiments describe the specific process of how the interactive component models are evoked for display and how they are clicked and selected for use. When the models no longer need to be displayed, for the user's convenience, optionally, after at least one interactive component model is displayed in the virtual reality space, the method of this embodiment may further include: judging, based on the target object in the image information, whether a preset condition for canceling the display of the interactive component models (which can be set in advance according to actual needs) is met; and, if so, canceling the display of these interactive component models in the virtual reality space.
Illustratively, the above judging, based on the target object in the image information, of whether the preset condition for canceling the display of the interactive component models is met may specifically include: determining, based on user gesture information or handheld device position information in the image information, whether the preset condition for canceling the display is met. For example, the user raises the left hand by a certain amplitude so that the user's virtual left hand mapped into the virtual reality space enters the user's current field of view, evoking the display of the interactive component models; when the user no longer needs them, the raised left hand can be lowered so that the virtual left hand moves outside the current field of view, and the display of these models is canceled. When they need to be displayed again, the user simply raises the left hand once more.
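The enter-the-view and leave-the-view conditions can both be reduced to testing whether the virtual hand lies inside the user's current view cone; the half field of view below is an assumed value for illustration.

```python
import math


def in_current_view(hand_pos, head_pos, head_forward, half_fov_deg=50.0):
    """True when the virtual hand lies inside the user's view cone; the
    display is evoked when this becomes True and canceled when it becomes
    False (e.g. the raised left hand is lowered again)."""
    to_hand = [h - p for h, p in zip(hand_pos, head_pos)]
    d_hand = math.sqrt(sum(c * c for c in to_hand)) or 1e-9
    d_fwd = math.sqrt(sum(c * c for c in head_forward)) or 1e-9
    cos_angle = sum(a * b for a, b in zip(to_hand, head_forward)) / (d_hand * d_fwd)
    return cos_angle >= math.cos(math.radians(half_fov_deg))


head, fwd = (0.0, 1.6, 0.0), (0.0, 0.0, -1.0)
assert in_current_view((0.1, 1.5, -0.5), head, fwd)       # raised hand: show
assert not in_current_view((0.1, 0.8, -0.2), head, fwd)   # lowered hand: hide
```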
To illustrate the specific implementation of the above embodiments, the following application example of this embodiment's method is given, without limitation thereto:
At present, VR technology allows users to watch video content such as virtual live broadcasts. For example, after putting on a VR device and entering a virtual concert venue, a user can watch the performance as if actually present at the scene.
To satisfy the user's shooting needs while watching VR video, based on the method of this embodiment, the camera can be used to capture images of the user's hand or of the user's handheld device, and image recognition technology can be used to judge the hand gesture or the position change of the handheld device in the image. If it is determined that the user's hand or handheld device is raised by a certain amplitude so that the virtual hand or virtual handheld device mapped into the virtual reality space enters the user's current field of view, the interactive component models can be evoked in the virtual reality space. As shown in Figure 5, based on image recognition technology, the user can raise the handheld device to summon interactive component models in the form of floating balls, where each floating ball represents a control function through which the user can interact. As shown in Figure 5, these can specifically include interactive component models such as "leave the room", "shoot", "send emoji", "send bullet comments", and "2D live".
After the floating-ball interactive component models are summoned, based on the subsequently monitored images of the user's hand or handheld device, the position of the hand or handheld device is identified and mapped into the virtual reality space to determine the spatial position of the corresponding click mark. If the spatial position of the click mark matches the spatial position of a target interactive component model among those displayed, the target interactive component model is determined to be the interactive component model selected by the user; finally, the interactive function event pre-bound to the target interactive component model is executed.
The user can raise the left-hand controller to evoke the display of the floating-ball interactive component models, and then move the right-hand controller to select and click one of them. On the VR device side, based on the image of the controllers, the position of the right-hand controller is identified and mapped into the virtual reality space to determine the spatial position of the corresponding click mark. If that spatial position matches the spatial position of the "shoot" interactive component model, the user has clicked the "shoot" function; finally, the interactive function event pre-bound to the "shoot" interactive component model is executed, which triggers the shooting function. In the virtual reality image, the scene information corresponding to the shooting range of the camera model is selected and rendered to a texture; the camera model is displayed in the virtual reality space, and the rendered texture map is placed in the preset viewfinder area of the camera model.
As shown in Figure 6, after the user clicks the "shoot" floating ball, the corresponding shooting function panel can be displayed; a camera model in the form of a selfie-stick camera is then displayed in the virtual reality space, and the framed scene is shown in the viewfinder. If the user needs to capture image information within a desired shooting range, the shooting range of the camera model can be dynamically adjusted by inputting a shooting-range adjustment instruction.
Unlike existing screen-recording approaches, the virtual shooting method of this embodiment renders the VR scene information within the selected range to a texture in real time and then maps the texture into the viewfinder area, without relying on the sensors of a physical camera module, thus ensuring the picture quality of the captured images. Moreover, as the camera model moves, the VR scene content within the moving shooting range is presented in the preset viewfinder area in real time; the viewfinder display is not affected by factors such as the swing of the camera model and can closely simulate a real shooting experience for the user, thereby improving the user's VR experience. Compared with triggering the shooting function through physical device buttons, the scheme of this embodiment performs VR control without resorting to physical device buttons, mitigating the technical problem that such buttons are easily damaged, which in turn affects the user's control.
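The render-to-texture viewfinder can be sketched engine-agnostically as resampling the scene inside the camera model's shooting range into a small texture every frame; in this editorial sketch, `scene_color` stands in for the engine's renderer, and the resolution and ranges are illustrative.

```python
def render_to_texture(scene_color, center, size, res=(64, 48)):
    """Sample the VR scene inside the camera model's shooting range into an
    RGB texture; the result is then mapped onto the viewfinder quad.

    scene_color: callable (x, y) -> (r, g, b), a stand-in renderer.
    center, size: the shooting range's centre and extent in scene units.
    """
    (cx, cy), (sw, sh) = center, size
    w, h = res
    return [[scene_color(cx - sw / 2 + sw * (i + 0.5) / w,
                         cy - sh / 2 + sh * (j + 0.5) / h)
             for i in range(w)]
            for j in range(h)]


# Toy scene: a vertical colour gradient; re-rendering each frame as the
# camera model moves keeps the viewfinder tracking the shooting range live.
gradient = lambda x, y: (int(255 * (y % 1.0)), 64, 64)
frame0 = render_to_texture(gradient, center=(0.0, 0.5), size=(2.0, 1.5))
frame1 = render_to_texture(gradient, center=(0.3, 0.5), size=(2.0, 1.5))
print(len(frame0), len(frame0[0]))  # 48 rows x 64 columns
```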
Further, as a specific implementation of the methods shown in Figures 1 and 2, this embodiment provides a virtual reality-based control apparatus. As shown in Figure 7, the apparatus includes: a monitoring module 31, a recognition module 32, and a display module 33.
The monitoring module 31 is configured to monitor image information of the user captured by the camera;
the recognition module 32 is configured to identify action information of the target object in the image information; and
the display module 33 is configured to display at least one interactive component model in the virtual reality space according to the action information of the target object, and to execute the interactive function event pre-bound to the interactive component model selected by the user.
In a specific application scenario, optionally, the target object includes: the user's hand. Correspondingly, the display module 33 is specifically configured to identify user gesture information in the image information; determine whether the user gesture information matches preset gesture information; and, if it matches, display at least one interactive component model in the virtual reality space.
In a specific application scenario, the display module 33 is further specifically configured to determine that the user gesture information matches the preset gesture information if the amplitude by which the user's hand is raised is greater than a preset amplitude threshold.
In a specific application scenario, optionally, the target object includes: the user's handheld device. Correspondingly, the display module 33 is specifically configured to identify handheld device position information in the image information; determine whether the handheld device position information complies with a preset position change rule; and, if it complies, display at least one interactive component model in the virtual reality space.
In a specific application scenario, the display module 33 is further specifically configured to determine that the handheld device position information complies with the preset position change rule if the amplitude by which the handheld device is raised is greater than a preset amplitude threshold.
In a specific application scenario, the display module 33 is specifically configured to display, together with the interactive component model in the virtual reality space, the virtual object model corresponding to the target object, where the virtual object model can change dynamically to follow the movement of the target object.
In a specific application scenario, the display module 33 is further specifically configured to dynamically adjust the spatial display position of the interactive component model based on changes in the spatial position of the virtual object model, so that the interactive component model can follow the virtual object model's movement.
In a specific application scenario, the display module 33 is further specifically configured to display the interactive component model within a preset range of the virtual object model.
In a specific application scenario, the display module 33 is specifically configured to identify the position of the target object and map it into the virtual reality space to determine the spatial position of a corresponding first click mark; if the spatial position of the first click mark matches the spatial position of a target interactive component model among the interactive component models, determine that the target interactive component model is the interactive component model selected by the user; and execute the interactive function event pre-bound to the target interactive component model.
In a specific application scenario, the display module 33 is further specifically configured to display, in the virtual reality space, the option panel model corresponding to the target interactive component model; identify the position of the target object and map it into the virtual reality space to determine the spatial position of a corresponding second click mark; and, if the spatial position of the second click mark matches the spatial position of a target option in the option panel, determine that the target option is the option selected by the user in the option panel and trigger execution of the corresponding event.
In a specific application scenario, the display module 33 is further configured to, after at least one interactive component model is displayed in the virtual reality space, judge, based on the target object in the image information, whether the preset condition for canceling the display of the interactive component model is met; and, if so, cancel the display of the at least one interactive component model in the virtual reality space.
In a specific application scenario, the display module 33 is further specifically configured to determine, based on user gesture information or handheld device position information in the image information, whether the preset condition for canceling the display of the interactive component model is met.
It should be noted that, for other descriptions of the functional units involved in the virtual reality-based control apparatus provided in this embodiment, reference may be made to the corresponding descriptions of Figures 1 and 2, which are not repeated here.
Based on the methods shown in Figures 1 and 2, correspondingly, this embodiment further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the virtual reality-based control method shown in Figures 1 and 2.
Based on this understanding, the technical solution of the present disclosure can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (such as a CD-ROM, a USB flash drive, or a removable hard disk) and includes several instructions to cause a computer device (such as a personal computer, a server, or a network device) to execute the methods of the implementation scenarios of the present disclosure.
Based on the methods shown in Figures 1 and 2 and the virtual apparatus embodiment shown in Figure 7, to achieve the above purpose, an embodiment of the present disclosure further provides an electronic device, which may specifically be a virtual reality device such as a VR headset. The device includes a storage medium and a processor; the storage medium is used to store a computer program, and the processor is used to execute the computer program to implement the virtual reality-based control method shown in Figures 1 and 2.
Optionally, the above physical device may further include a user interface, a network interface, a camera, a radio frequency (RF) circuit, sensors, an audio circuit, a Wi-Fi module, and so on. The user interface may include a display screen (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface may also include a USB interface, a card reader interface, and the like. The network interface may optionally include a standard wired interface, a wireless interface (such as a Wi-Fi interface), and the like.
Those skilled in the art will understand that the physical device structure provided in this embodiment does not limit the physical device, which may include more or fewer components, combine certain components, or adopt a different arrangement of components.
The storage medium may further include an operating system and a network communication module. The operating system is a program that manages the hardware and software resources of the above physical device and supports the operation of the information processing program and other software and/or programs. The network communication module is used to realize communication among the components within the storage medium and communication with other hardware and software in the information processing physical device.
From the description of the above embodiments, those skilled in the art will clearly understand that the present disclosure can be implemented by software plus a necessary general-purpose hardware platform, or by hardware. Applying the solution of this embodiment effectively mitigates the technical problem that physical device buttons are easily damaged, which in turn affects the user's control; it brings the user a stronger sense of technology and improves the user's VR experience.
It should be noted that, in this document, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. In the absence of further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes the element.
The above are only specific embodiments of the present disclosure, enabling those skilled in the art to understand or implement the present disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present disclosure. Therefore, the present disclosure is not to be limited to the embodiments described herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (15)

  1. A virtual reality-based control method, comprising:
    monitoring image information of a user captured by a camera;
    identifying action information of a target object in the image information; and
    displaying at least one interactive component model in a virtual reality space according to the action information of the target object, and executing an interactive function event pre-bound to the interactive component model selected by the user.
  2. The method according to claim 1, wherein the target object comprises: a hand of the user; and
    the displaying at least one interactive component model in a virtual reality space according to the action information of the target object comprises:
    identifying user gesture information in the image information;
    determining whether the user gesture information matches preset gesture information; and
    if the user gesture information matches the preset gesture information, displaying the at least one interactive component model in the virtual reality space.
  3. The method according to claim 2, wherein the determining whether the user gesture information matches preset gesture information comprises:
    if an amplitude by which the user's hand is raised is greater than a preset amplitude threshold, determining that the user gesture information matches the preset gesture information.
  4. The method according to claim 1, wherein the target object comprises: a handheld device of the user; and
    the displaying at least one interactive component model in a virtual reality space according to the action information of the target object comprises:
    identifying handheld device position information in the image information;
    determining whether the handheld device position information complies with a preset position change rule; and
    if the handheld device position information complies with the preset position change rule, displaying the at least one interactive component model in the virtual reality space.
  5. The method according to claim 4, wherein the determining whether the handheld device position information complies with a preset position change rule comprises:
    if an amplitude by which the handheld device is raised is greater than a preset amplitude threshold, determining that the handheld device position information complies with the preset position change rule.
  6. The method according to any one of claims 2 to 5, wherein the displaying at least one interactive component model in a virtual reality space according to the action information of the target object comprises:
    while displaying the interactive component model in the virtual reality space, also displaying a virtual object model corresponding to the target object, wherein the virtual object model is capable of changing dynamically to follow the movement of the target object.
  7. The method according to claim 6, wherein the displaying, by identifying the target object in the image information, the interactive component model in the virtual reality space together with the virtual object model corresponding to the target object comprises:
    dynamically adjusting the spatial display position of the interactive component model based on changes in the spatial position of the virtual object model, so that the interactive component model is capable of following the movement of the virtual object model.
  8. The method according to claim 6, wherein the displaying, by identifying the target object in the image information, the interactive component model in the virtual reality space together with the virtual object model corresponding to the target object comprises:
    displaying the interactive component model within a preset range of the virtual object model.
  9. The method according to claim 1, wherein the displaying at least one interactive component model in a virtual reality space according to the action information of the target object, and executing an interactive function event pre-bound to the interactive component model selected by the user, comprises:
    identifying the position of the target object and mapping it into the virtual reality space to determine the spatial position of a corresponding first click mark;
    if the spatial position of the first click mark matches the spatial position of a target interactive component model among the at least one interactive component model, determining that the target interactive component model is the interactive component model selected by the user; and
    executing the interactive function event pre-bound to the target interactive component model.
  10. The method according to claim 9, wherein the executing the interactive function event pre-bound to the target interactive component model comprises:
    displaying, in the virtual reality space, an option panel model corresponding to the target interactive component model;
    identifying the position of the target object and mapping it into the virtual reality space to determine the spatial position of a corresponding second click mark; and
    if the spatial position of the second click mark matches the spatial position of a target option in the option panel, determining that the target option is the option selected by the user in the option panel, and triggering execution of the corresponding event.
  11. The method according to claim 1, wherein after the at least one interactive component model is displayed in the virtual reality space, the method further comprises:
    judging, based on the target object in the image information, whether a preset condition for canceling display of the interactive component model is met; and
    if it is determined that the preset condition for canceling display of the interactive component model is met, canceling display of the at least one interactive component model in the virtual reality space.
  12. The method according to claim 11, wherein the judging, based on the target object in the image information, whether a preset condition for canceling display of the interactive component model is met comprises:
    determining, based on user gesture information or handheld device position information in the image information, whether the preset condition for canceling display of the interactive component model is met.
  13. A virtual reality-based control apparatus, comprising:
    a monitoring module configured to monitor image information of a user captured by a camera;
    a recognition module configured to identify action information of a target object in the image information; and
    a display module configured to display at least one interactive component model in a virtual reality space according to the action information of the target object, and to execute an interactive function event pre-bound to the interactive component model selected by the user.
  14. A non-transitory computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1 to 12.
  15. An electronic device, comprising a storage medium, a processor, and a computer program stored on the storage medium and executable on the processor, wherein the processor, when executing the computer program, implements the method according to any one of claims 1 to 12.
PCT/CN2023/077218 2022-03-17 2023-02-20 Virtual reality-based control method and apparatus, and electronic device WO2023174008A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210263698.0A CN116795203A (zh) 2022-03-17 2022-03-17 Virtual reality-based control method and apparatus, and electronic device
CN202210263698.0 2022-03-17

Publications (1)

Publication Number Publication Date
WO2023174008A1 (zh)

Family

ID=88022291

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/077218 WO2023174008A1 (zh) 2022-03-17 2023-02-20 Virtual reality-based control method and apparatus, and electronic device

Country Status (2)

Country Link
CN (1) CN116795203A (zh)
WO (1) WO2023174008A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108549487A (zh) * 2018-04-23 2018-09-18 网易(杭州)网络有限公司 Virtual reality interaction method and apparatus
US20200387228A1 (en) * 2019-06-07 2020-12-10 Facebook Technologies, Llc Artificial reality system having a sliding menu
CN112463000A (zh) * 2020-11-10 2021-03-09 赵鹤茗 Interaction method, apparatus, system, electronic device, and vehicle
CN113282169A (zh) * 2021-05-08 2021-08-20 青岛小鸟看看科技有限公司 Interaction method and apparatus for head-mounted display device, and head-mounted display device


Also Published As

Publication number Publication date
CN116795203A (zh) 2023-09-22

Similar Documents

Publication Publication Date Title
  • JP5464083B2 Information processing apparatus, information processing method, and program
  • CN109952757B Method for recording video based on a virtual reality application, terminal device, and storage medium
US10318011B2 (en) Gesture-controlled augmented reality experience using a mobile communications device
  • WO2017054453A1 Information processing method, terminal, and computer storage medium
  • JP2016048541A Information processing system, information processing apparatus, and program
  • JP2013524354A Computing device interface
  • KR20200123223A Display adaptation method and apparatus for application, device, and storage medium
  • CN111045511B Gesture-based control method and terminal device
  • CN108038726A Article display method and apparatus
US10754446B2 (en) Information processing apparatus and information processing method
  • CN103139481A Camera device and camera method
EP3541066A1 (en) Electronic whiteboard, image display method, and carrier means
  • CN110568931A Interaction method, device, system, electronic device, and storage medium
  • JP2014195183A Program and communication apparatus
US20200106967A1 (en) System and method of configuring a virtual camera
  • CN113485626A Smart display device, mobile terminal, and display control method
  • WO2019242457A1 Application page display method and mobile terminal
  • CN107257506A Multi-picture special-effect loading method and apparatus
  • CN113559501A Method and apparatus for selecting virtual units in a game, storage medium, and electronic device
  • WO2023174008A1 Virtual reality-based control method and apparatus, and electronic device
  • WO2023174009A1 Virtual reality-based shooting processing method and apparatus, and electronic device
  • JP2019205514A Program, method, and information terminal device
  • JP5907184B2 Information processing apparatus, information processing method, and program
  • TWI584644B Virtual representation techniques for parts of a user's body
  • TWI737175B Object manipulation method, host device, and computer-readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23769516

Country of ref document: EP

Kind code of ref document: A1