CN117666769A - Virtual scene interaction method and device, storage medium and equipment


Info

Publication number
CN117666769A
Authority
CN
China
Prior art keywords
gesture
virtual
interaction
interactive
information
Prior art date
Legal status
Pending
Application number
CN202211059004.8A
Other languages
Chinese (zh)
Inventor
孙健航
饶小林
刘硕
Current Assignee
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd
Priority application: CN202211059004.8A
Publication of CN117666769A


Abstract

The application discloses a virtual scene interaction method, device, storage medium, and equipment, wherein the method comprises: displaying a default gesture of a virtual hand in a virtual scene; switching the virtual hand from the default gesture to an operation gesture corresponding to an interactive object in the virtual scene based at least on interaction control information and/or current scene information; and interacting with the interactive object through the operation gesture. With the method and device, interaction control events can be mapped to the virtual hand in the virtual scene, so that when the user interacts with different interactive objects, the gesture of the virtual hand displayed in the virtual scene is automatically switched to the corresponding operation gesture and the interaction is carried out through that gesture, providing a more engaging and convenient experience for the user and improving the user's immersive experience.

Description

Virtual scene interaction method and device, storage medium and equipment
Technical Field
The present application relates to the technical field of extended reality, and in particular to a virtual scene interaction method, device, storage medium, and equipment.
Background
Interactive objects in a virtual scene generally need to be controlled through an external interactive device such as a handle, so that the interactive objects respond to interaction control events from the handle or other interactive device. However, the way such interaction control events are currently presented in the virtual scene is limited, which results in a poor immersive experience for the user.
Disclosure of Invention
The virtual scene interaction method, device, storage medium, and equipment provided by the embodiments of the present application allow interaction control events to be mapped to a virtual hand in the virtual scene: when the user interacts with different interactive objects, the gesture of the virtual hand displayed in the virtual scene is automatically switched to the corresponding operation gesture, and the interaction is carried out through that gesture, providing a more engaging and convenient experience for the user and improving the user's immersive experience.
In one aspect, an embodiment of the present application provides a method for interaction of a virtual scene, where the method includes:
displaying a default gesture of the virtual hand in the virtual scene;
switching the virtual hand from the default gesture to an operation gesture corresponding to an interactive object in the virtual scene based at least on interaction control information and/or current scene information;
and interacting with the interaction object through the operation gesture.
In another aspect, an embodiment of the present application provides an interaction device for a virtual scene, where the device includes:
a display unit for displaying a default gesture of a virtual hand in a virtual scene;
a switching unit, configured to switch the virtual hand from the default gesture to an operation gesture corresponding to an interactive object in the virtual scene based at least on interaction control information and/or current scene information;
And the interaction unit is used for interacting with the interaction object through the operation gesture.
In another aspect, embodiments of the present application provide a computer readable storage medium storing a computer program adapted to be loaded by a processor to perform the method of interaction of virtual scenes as described in any of the embodiments above.
In another aspect, an embodiment of the present application provides an extended reality device, where the extended reality device includes a processor and a memory, the memory stores a computer program, and the processor is configured to execute the virtual scene interaction method of any one of the embodiments above by calling the computer program stored in the memory.
In another aspect, embodiments of the present application provide a computer program product, including a computer program, which when executed by a processor implements the method of interaction of virtual scenes as described in any of the embodiments above.
In the embodiments of the present application, the default gesture of the virtual hand is displayed in the virtual scene; the virtual hand is switched from the default gesture to an operation gesture corresponding to an interactive object in the virtual scene based at least on the interaction control information and/or the current scene information; and the interaction with the interactive object is carried out through the operation gesture. In this way, interaction control events can be mapped to the virtual hand in the virtual scene: when the user interacts with different interactive objects, the gesture of the virtual hand displayed in the virtual scene is automatically switched to the corresponding operation gesture, and the interaction is carried out through that gesture, providing a more engaging and convenient experience for the user and improving the user's immersive experience.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a virtual scene interaction method provided in an embodiment of the present application.
Fig. 2 is a schematic diagram of a first application scenario of the virtual scene interaction method provided in an embodiment of the present application.
Fig. 3 is a schematic diagram of a second application scenario of the virtual scene interaction method provided in an embodiment of the present application.
Fig. 4 is a schematic diagram of a third application scenario of the virtual scene interaction method provided in an embodiment of the present application.
Fig. 5 is a schematic diagram of a fourth application scenario of the virtual scene interaction method provided in an embodiment of the present application.
Fig. 6 is a schematic diagram of a fifth application scenario of the virtual scene interaction method provided in an embodiment of the present application.
Fig. 7 is a schematic diagram of a sixth application scenario of the virtual scene interaction method provided in an embodiment of the present application.
Fig. 8 is a schematic diagram of a seventh application scenario of the virtual scene interaction method provided in an embodiment of the present application.
Fig. 9 is a schematic diagram of an eighth application scenario of the virtual scene interaction method provided in an embodiment of the present application.
Fig. 10 is a schematic diagram of a ninth application scenario of the virtual scene interaction method provided in an embodiment of the present application.
Fig. 11 is a schematic structural diagram of a virtual scene interaction device provided in an embodiment of the present application.
Fig. 12 is a first schematic structural diagram of an extended reality device provided in an embodiment of the present application.
Fig. 13 is a second schematic structural diagram of an extended reality device provided in an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
Embodiments of the present application provide a virtual scene interaction method, apparatus, computer-readable storage medium, extended reality device, and computer program product. Specifically, the virtual scene interaction method in the embodiments of the application may be executed by the extended reality device or by a server.
The embodiments of the application can be applied to various application scenarios such as Extended Reality (XR), Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR).
First, some of the terms and terminology that appear in the description of the embodiments of the present application are explained as follows:
Extended Reality (XR) is a concept encompassing Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR). It refers to technology that creates an environment in which the virtual world is connected to the real world and with which the user can interact in real time.
Virtual Reality (VR) is a technology for creating and experiencing a virtual world. It generates a virtual environment by computation and provides multi-source simulated sensory information (at minimum visual perception, and possibly also auditory, tactile, and motion perception, and even gustatory and olfactory perception), realizing a fused, interactive three-dimensional dynamic view and simulation of entity behavior so that the user is immersed in the simulated virtual reality environment. It is applied in a variety of virtual environments such as maps, games, video, education, medical care, simulation, collaborative training, sales, manufacturing assistance, maintenance, and repair.
Augmented Reality (AR) is a technique of calculating, in real time while a camera acquires images, the camera's pose parameters in the real world (or three-dimensional world, physical world) and adding virtual elements to the acquired images according to those pose parameters. Virtual elements include, but are not limited to, images, videos, and three-dimensional models. The goal of AR technology is to superimpose the virtual world on the real world on the screen and allow interaction with it.
Mixed Reality (MR) integrates computer-created sensory input (e.g., virtual objects) with sensory input from a physical setting, or a representation thereof, into a simulated setting; in some MR settings, the computer-created sensory input may adapt to changes in the sensory input from the physical setting. In addition, some electronic systems for rendering MR settings may monitor orientation and/or position relative to the physical setting to enable virtual objects to interact with real objects (i.e., physical elements from the physical setting or representations thereof). For example, the system may monitor movement so that a virtual plant appears stationary relative to a physical building.
Augmented Virtuality (AV): an AV setting is a simulated setting in which a computer-created or virtual setting incorporates at least one sensory input from the physical setting. The one or more sensory inputs from the physical setting may be representations of at least one feature of the physical setting. For example, a virtual object may present the color of a physical element captured by one or more imaging sensors. As another example, a virtual object may exhibit characteristics consistent with actual weather conditions in the physical setting, as identified via weather-related imaging sensors and/or online weather data. In yet another example, an augmented virtuality forest may have virtual trees and structures, while an animal may have features accurately reproduced from images taken of a physical animal.
Virtual field of view: the region of the virtual environment that the user can perceive through the lenses of an extended reality device, expressed as a Field of View (FOV).
An extended reality device, i.e., a device that realizes virtual reality effects, may generally be provided in the form of glasses, a head-mounted display (HMD), or contact lenses for realizing visual perception and other forms of perception. Of course, the form of the extended reality device is not limited to these and may be further miniaturized or enlarged as needed.
The extended reality device described in the embodiments of the present application may include, but is not limited to, the following types:
A PC-side virtual reality (PCVR) device uses a PC to perform the computation and data output related to virtual reality functions; the externally connected PC-side extended reality device uses the data output by the PC to realize the virtual reality effect.
A mobile extended reality device supports mounting a mobile terminal (such as a smartphone) in various ways (such as a head-mounted display provided with a dedicated card slot). Connected to the mobile terminal by wire or wirelessly, the mobile terminal performs the computation related to virtual reality functions and outputs the data to the mobile extended reality device, for example for viewing virtual reality video through an app on the mobile terminal.
An all-in-one extended reality device has its own processor for performing the computation related to virtual reality functions, and therefore has independent virtual reality input and output capabilities; it needs no connection to a PC or mobile terminal and offers a high degree of freedom in use.
The following will describe in detail. It should be noted that the following description order of embodiments is not a limitation of the priority order of embodiments.
The embodiments of the application provide a virtual scene interaction method which can be executed by a terminal or a server, or by the terminal and the server together; the embodiments of the present application are described using an example in which the virtual scene interaction method is executed by a terminal (an extended reality device).
Referring to fig. 1 to 10, fig. 1 is a flowchart of an interaction method of a virtual scene provided in an embodiment of the present application, and fig. 2 to 10 are related application scene diagrams provided in an embodiment of the present application, where blank backgrounds in fig. 4 to 6 and 9 may be virtual space layers. The method comprises the following steps:
step 110, displaying a default gesture of the virtual hand in the virtual scene.
For example, if the virtual hand is driven by an interactive device, the default gesture may correspond to the default state of the interactive device. If the virtual hand is driven by the user's bare-hand gesture, the default gesture may correspond to a relaxed C-shaped pose of the bare hand in the real world.
In some embodiments, the virtual scene comprises any one of a two-dimensional virtual scene, a three-dimensional virtual scene.
For example, a virtual scene is displayed on the display screen of an extended reality device, and a default gesture of a virtual hand is displayed in the virtual scene. The extended reality device can be externally connected to an interactive device, through which the virtual hand in the virtual scene can be driven and controlled to switch its gesture.
For example, the interactive device may include a handle, a digital glove, a dedicated interactive device, etc.
For example, a plurality of virtual gestures may be preset and associated with interaction control events, where each interaction control event may correspond to interaction control information and/or scene information triggered by the interaction device.
For example, as shown in fig. 2, the set of virtual gestures may include a first click gesture (Click), a second click gesture (Tap), a pinch gesture (Pinch), a grab gesture (Grab), a default gesture (None), and so on. The gesture names, illustrations, and corresponding operation precisions of these virtual gestures are shown in fig. 2. For example, the operation precision corresponding to the first click gesture (Click) may be a cross-region position, indicating that the index finger and the thumb in the first click gesture are in cross contact; the operation precision corresponding to the second click gesture (Tap) may be one dot position, indicating that the index finger of the second click gesture clicks a certain point of the interactive object; the operation precision corresponding to the pinch gesture (Pinch) may be two dot positions, indicating that the index finger and the thumb of the pinch gesture touch any two points of the interactive object; the operation precision corresponding to the grab gesture (Grab) may be five dot positions, indicating that the five fingers of the grab gesture touch any five points of the interactive object; and the operation precision corresponding to the default gesture (None) is null.
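This gesture set and its operation precision can be viewed as a simple lookup structure. The following is a minimal illustrative sketch; the names Gesture and OPERATION_PRECISION are assumptions made for this example and do not appear in the original disclosure.

```python
# Minimal sketch only: the gesture set and operation precision summarized above.
# The names Gesture and OPERATION_PRECISION are hypothetical, not from the patent.
from enum import Enum

class Gesture(Enum):
    CLICK = "click"   # first click gesture: index finger and thumb in cross contact
    TAP = "tap"       # second click gesture: index finger taps one point
    PINCH = "pinch"   # index finger and thumb touch two points of the object
    GRAB = "grab"     # five fingers touch five points of the object
    NONE = "none"     # default gesture: no manipulation

# Operation precision per gesture, mirroring the table in Fig. 2.
OPERATION_PRECISION = {
    Gesture.CLICK: "cross-region position",
    Gesture.TAP: "1 dot position",
    Gesture.PINCH: "2 dot positions",
    Gesture.GRAB: "5 dot positions",
    Gesture.NONE: None,
}
```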
For example, as shown in fig. 3, a plurality of virtual gestures may be associated with interaction control events in advance to obtain an association relation between the operation gestures and the interaction control information and/or the interactive object in the scene information, where the operation gestures are the virtual gestures other than the default gesture. Depending on whether an interactive object exists in the virtual scene, the gesture of the virtual hand corresponding to the interactive device can be divided into two modes: an interaction mode without an interactive object and an interaction mode with an interactive object. The mode with an interactive object may be further subdivided into a near-field interaction mode and a far-field interaction mode. Virtual scenes can be classified into two-dimensional (2D) virtual scenes and three-dimensional (3D) virtual scenes, and interactive objects into three-dimensional virtual objects (such as 3D objects) and two-dimensional virtual interactive interfaces (such as 2D interfaces). The virtual gestures can therefore be associated with the interaction control events in advance according to whether an interactive object exists, the interaction mode, the type of virtual scene, and the type of interactive object.
For example, when there is no interactive object in the virtual scene, an initial pose of the virtual hand may be presented, represented by the default gesture (None gesture); the None state represents a state without any manipulation, simulating the C-shaped pose of a hand at its most relaxed in the real world.
For example, in an interaction control event without an interactive object, operation information in the interaction control information can be generated by clicking the Trigger key of the interactive device (such as a handle) to control the virtual hand to switch from the default gesture (None gesture) to the first click gesture (Click gesture); operation information in the interaction control information can be generated by clicking the Grip key of the interactive device (such as a handle) to control the virtual hand to switch from the default gesture (None gesture) to the grab gesture (Grab gesture). In an interaction control event without an interactive object, the interaction control information may include operation information generated by triggering an operation key of the interactive device.
For example, in an interaction control event with an interactive object, no ray cursor may be set for the near-field interaction mode. For a 2D interface, the virtual hand may be moved in response to movement information in the interaction control information generated by moving the interactive device, and when the virtual hand is detected to contact the collision detection area of the 2D interface, it is controlled to switch from the default gesture (None gesture) to the second click gesture (Tap gesture); in this case the interaction control information may include movement information generated by moving the interactive device. For a 3D object, the virtual hand may be moved in response to the movement information, and when the virtual hand is detected to contact the 3D object, it is controlled to switch from the default gesture (None gesture) to the grab gesture (Grab gesture) or pinch gesture (Pinch gesture); in response to operation information in the interaction control information generated by clicking the Grip/Trigger key of the interactive device (such as a handle), the virtual hand interacts with the 3D object displayed in the virtual scene through the grab or pinch gesture, where a 3D large object may correspond to the grab gesture (Grab gesture) and a 3D small object to the pinch gesture (Pinch gesture). In an interaction control event with an interactive object in the near-field interaction mode, the interaction control information may include movement information generated by moving the interactive device and operation information generated by triggering an operation key of the interactive device.
For example, in an interaction control event with an interactive object, a ray cursor may be set for the far-field interaction mode. For a 2D interface, the virtual hand may be moved in response to movement information in the interaction control information generated by moving the interactive device, and when the cursor point of the virtual hand's ray cursor is detected to contact the 2D interface, the virtual hand is controlled to switch from the default gesture (None gesture) to the first click gesture (Click gesture); it then interacts with interface elements in the 2D interface displayed in the virtual scene through the first click gesture in response to operation information generated by triggering an operation key of the interactive device. For a 3D object, the virtual hand may be moved in response to the movement information, and when the cursor point of the ray cursor is detected to contact the 3D object, the virtual hand is controlled to switch from the default gesture (None gesture) to the first click gesture (Click gesture); it then interacts with the 3D object displayed in the virtual scene through the first click gesture in response to operation information in the interaction control information generated by triggering an operation key of the interactive device. In an interaction control event with an interactive object in the far-field interaction mode, the interaction control information may include movement information generated by moving the interactive device and operation information generated by triggering an operation key of the interactive device.
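The associations described above can be summarized as a lookup keyed by interaction mode and interactive object type. The sketch below is only one possible organization of such a table; the key strings and the helper name select_operation_gesture are assumptions, not part of the original disclosure.

```python
# Hedged sketch of the association relation between operation gestures and
# interaction control events; names and keys are illustrative assumptions.
GESTURE_ASSOCIATION = {
    # (interaction mode, interactive object type) -> operation gesture
    ("near_field", "2d_interface"): "tap",
    ("near_field", "3d_large_object"): "grab",
    ("near_field", "3d_small_object"): "pinch",
    ("far_field", "2d_interface"): "click",
    ("far_field", "3d_object"): "click",
}

# Without any interactive object, operation keys map to gestures directly:
# Trigger -> first click gesture, Grip -> grab gesture (as described above).
KEY_GESTURES_WITHOUT_OBJECT = {"trigger": "click", "grip": "grab"}

def select_operation_gesture(mode: str, object_type: str) -> str:
    """Look up the operation gesture for the current scene information."""
    return GESTURE_ASSOCIATION.get((mode, object_type), "none")
```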
Step 120, switching the virtual hand from the default gesture to an operation gesture corresponding to the interactive object in the virtual scene based at least on the interaction control information and/or the current scene information.
For example, if the virtual hand is driven by an interactive device, interactive control information from the interactive device may be obtained.
For example, the interactive device is connected to the extended reality device in a wired or wireless manner; the user generates interaction control information by operating the interactive device and transmits it to the extended reality device.
For example, if the interactive device is a handle, the user can generate interaction control information by operating keys on the handle and transmit it to the extended reality device. If the interactive device is a digital glove, the user can generate interaction control information by wearing the digital glove and making the corresponding control gestures; the interaction control information is transmitted to the extended reality device, which maps the control gestures to gestures of the virtual hand.
For example, if the virtual hand is driven by a bare hand gesture of the user, interaction control information generated based on the bare hand gesture of the user may be acquired.
In some embodiments, the current scene information includes at least information of an interaction pattern and the interaction object; the interaction mode comprises a near field interaction mode or a far field interaction mode; the interactive object comprises a three-dimensional virtual object or a two-dimensional virtual interactive interface.
For example, the current scene information may be determined based on the interaction pattern and the current information of the interaction object.
For example, according to the interaction control information, the current scene information and the association relation between the preset operation gesture and the interaction control event, the operation gesture corresponding to the interaction object is determined, and then the displayed virtual hand is switched from the default gesture to the operation gesture corresponding to the interaction object.
In some embodiments, the operational gesture includes one or more of a tap gesture, a grab gesture, a pinch gesture, wherein the tap gesture includes a first tap gesture and a second tap gesture.
In some embodiments, before the switching the virtual hand from the default gesture to an operation gesture corresponding to an interactive object in the virtual scene based at least on the interaction control information and/or the current scene information, further comprising:
Acquiring a first position of the interaction object in the virtual scene and acquiring a second position of the virtual hand in the virtual scene;
and determining the interaction mode according to a first distance between the first position and the second position, wherein the interaction mode is a near-field interaction mode if the first distance is smaller than a first threshold value or is a far-field interaction mode if the first distance is larger than or equal to the first threshold value.
For example, the first threshold may be 20cm. In the virtual scene, the interaction mode may be divided into a far-field interaction mode and a near-field interaction mode according to a distance between the interaction object and the virtual hand. Alternatively, in the virtual scene, the interaction pattern may be divided into a far field interaction pattern and a near field interaction pattern according to a distance between the interaction object and a camera position of the virtual camera in the virtual scene.
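As a rough sketch of this mode decision (using the 20 cm value from the example above; the function and constant names are hypothetical):

```python
# Illustrative sketch: choose near-field or far-field mode from the first distance.
NEAR_FIELD_THRESHOLD_M = 0.20  # the first threshold, 20 cm in the example above

def determine_interaction_mode(object_position, hand_position) -> str:
    """Return the interaction mode from the distance between the interactive
    object's first position and the virtual hand's second position."""
    first_distance = sum((a - b) ** 2 for a, b in
                         zip(object_position, hand_position)) ** 0.5
    return "near_field" if first_distance < NEAR_FIELD_THRESHOLD_M else "far_field"
```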
In some embodiments, the switching the virtual hand from the default gesture to an operational gesture corresponding to an interactive object in the virtual scene based at least on interaction control information and/or current scene information includes:
and when the hidden ray cursor emitted directly in front of the virtual hand is detected to be hovering over the interactive object, switching the virtual hand from the default gesture to the operation gesture corresponding to the interactive object, and displaying the ray cursor.
For example, in far-field interaction mode, interaction with a two-dimensional virtual interaction interface (such as a 2D interface) and a three-dimensional virtual object (such as a 3D object) is mainly involved, and accuracy of interaction operation can be ensured by matching with a ray cursor.
In the far-field interaction mode of the virtual hand, the model of the virtual hand can be matched with a ray cursor and automatically switch different gesture postures according to the contacted interaction object and the pressed key.
As shown in fig. 4, fig. 4 is a schematic diagram of interaction between the virtual hand 20 and the two-dimensional virtual interactive interface 11 (such as a 2D interface) when the interaction mode in the current scene information is the far-field interaction mode. As in view (1) in fig. 4, in the initial state of the virtual hand 20, the default gesture (None gesture) of the virtual hand 20 is displayed. As in view (2) in fig. 4, the virtual hand 20 is moved in response to the movement information in the interaction control information; when the hidden ray cursor 30 emitted directly in front of the virtual hand 20 is detected to be hovering over the two-dimensional virtual interactive interface 11, and specifically when the cursor point 31 of the ray cursor 30 is detected to be hovering over the interaction region 111 of the two-dimensional virtual interactive interface 11, the displayed virtual hand 20 is controlled to switch from the default gesture to the operation gesture corresponding to the two-dimensional virtual interactive interface 11 (such as the first click gesture), and the ray cursor 30 emitted directly in front of the virtual hand 20 is displayed.
For example, in response to the movement information in the interaction control information, the gesture may not be switched immediately; instead, a hidden ray cursor 30 emitted directly in front of the virtual hand 20 may be generated in the background, and it may be detected whether this hidden ray cursor 30 hits the interaction region 111 of the two-dimensional virtual interactive interface 11 (such as a 2D interface). When the hidden ray cursor 30 emitted from the virtual hand 20 detects the interaction region 111, the ray cursor 30 is displayed, and the virtual hand 20 is controlled to switch smoothly from the default gesture (None gesture) to the initial pose of the first click gesture (Click gesture), which can be seen in view (2) in fig. 4.
As shown in fig. 5, fig. 5 is a schematic diagram of interaction between the virtual hand 20 and the three-dimensional virtual object 12 (3D object) when the interaction mode in the current scene information is the far-field interaction mode. As in view (a) in fig. 5, in the initial state of the virtual hand 20, the default gesture (None gesture) of the virtual hand 20 is displayed. As in view (b) in fig. 5, the virtual hand 20 is moved in response to the movement information in the interaction control information; when the hidden ray cursor 30 emitted directly in front of the virtual hand 20 is detected to be hovering over the three-dimensional virtual object 12, and specifically when the cursor point 31 of the ray cursor 30 is detected to be hovering over the three-dimensional virtual object 12, the displayed virtual hand 20 is controlled to switch from the default gesture to the operation gesture corresponding to the three-dimensional virtual object 12 (such as the first click gesture), and the ray cursor 30 emitted directly in front of the operation gesture of the virtual hand 20 is displayed. In response to the operation information generated when the user triggers an operation key of the interactive device, the three-dimensional virtual object 12 displayed in the virtual scene is clicked through the first click gesture to select it, and a selected aperture 40 is displayed around the three-dimensional virtual object 12.
Illustratively, in response to the interaction control information from the interactive device, the gesture may not be switched immediately; instead, a hidden ray cursor 30 emitted directly in front of the virtual hand 20 may be generated in the background, and it may be detected whether this hidden ray cursor 30 hits the three-dimensional virtual object 12 (3D object). When it does, the ray cursor 30 is displayed, and the virtual hand 20 is controlled to switch smoothly from the default gesture (None gesture) to the initial pose of the first click gesture (Click gesture), which can be seen in view (b) in fig. 5. For example, the three-dimensional virtual object 12 (3D object) contacted by the cursor point 31 of the ray cursor 30 may be presented in a hover (Hover) state.
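A hedged sketch of this far-field flow (hidden ray cursor, hover test, gesture switch) is given below; the object and method names such as cast_hidden_ray and switch_gesture are assumptions for illustration, not an API from the original disclosure.

```python
# Illustrative far-field update: reveal the ray cursor and switch to the first
# click gesture once its cursor point hovers over an interactive object.
def update_far_field(virtual_hand, interactive_objects):
    ray = virtual_hand.cast_hidden_ray()  # hidden ray emitted straight ahead of the hand
    hit = next((obj for obj in interactive_objects if ray.hits(obj)), None)
    if hit is not None:
        virtual_hand.show_ray_cursor(ray)
        virtual_hand.switch_gesture("click")  # default (None) -> first click gesture
        hit.set_state("hover")                # e.g. a 3D object enters the Hover state
    else:
        virtual_hand.hide_ray_cursor()
        virtual_hand.switch_gesture("none")   # back to the default gesture
```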
In some embodiments, the switching the virtual hand from the default gesture to an operational gesture corresponding to an interactive object in the virtual scene based at least on interaction control information and/or current scene information includes:
moving the virtual hand according to the movement information in the interaction control information under the condition that the interaction mode in the current scene information is the near field interaction mode, and acquiring a third position of the virtual hand in the virtual scene in the moving process in real time;
And when the second distance between the first position and the third position is detected to be smaller than a second threshold value, switching the displayed virtual hand from the default gesture to an operation gesture corresponding to the interaction object.
For example, the mobile information may be information generated by a mobile interactive device; or the movement information may be information generated in response to movement of the bare hand gesture of the user.
For example, in the near field interaction mode, interaction with a two-dimensional virtual interaction interface (such as a 2D interface) and a three-dimensional virtual object (such as a 3D object) is mainly involved, and direct interaction may be mainly performed without assistance of a ray cursor.
In the near field interaction mode of the virtual hand, the model of the virtual hand can automatically switch different gesture postures according to the contacted interaction object or the pressed key.
As shown in fig. 6, fig. 6 shows an interaction diagram between the virtual hand 20 and the interactive object 10 (such as the two-dimensional virtual interactive interface 11) in the case that the interaction mode in the current scene information is the near field interaction mode. When the virtual hand 20 is moved according to the movement information and the second distance between the first position of the interactive object 10 and the third position of the virtual hand 20 is detected to be smaller than the second threshold value, the virtual hand 20 is determined to contact the collision detection area, and the displayed virtual hand 20 is smoothly switched from the default gesture to the operation gesture corresponding to the interactive object 10.
For example, as shown in fig. 7, taking the interactive object 10 in fig. 7 as the two-dimensional virtual interactive interface 11 as an example, the collision detection area 50 may be set within a range of 8-12 cm in front of the two-dimensional virtual interactive interface 11, and the second threshold may be set to any value in the range of 8-12 cm.
For example, if the interactive object 10 is a three-dimensional virtual object 12, the collision detection area 50 may be set within a range of 8-12 cm outward from the outer surface of the three-dimensional virtual object 12, and the second threshold may be set to any value in the range of 8-12 cm.
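This collision detection can be approximated by a simple distance test against the second threshold; the helper below is an illustrative sketch, with an assumed default of 10 cm inside the 8-12 cm range.

```python
# Illustrative near-field contact test against the second threshold.
def near_field_contact(object_position, hand_position, second_threshold_m=0.10):
    """Return True when the virtual hand enters the collision detection area,
    i.e. the second distance falls below the second threshold (8-12 cm range)."""
    second_distance = sum((a - b) ** 2 for a, b in
                          zip(object_position, hand_position)) ** 0.5
    return second_distance < second_threshold_m
```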
In some embodiments, when the virtual hand is switched from the default gesture to an operation gesture corresponding to an interactive object in the virtual scene based at least on interaction control information and/or current scene information, the method further comprises:
and playing the gesture switching animation of the virtual hand from the default gesture to the operation gesture corresponding to the interactive object.
For example, when the displayed virtual hand is switched from the default gesture to the operation gesture corresponding to the interactive object, the gesture switching animation in which the virtual hand is switched from the default gesture to the operation gesture corresponding to the interactive object may be played, so as to realize smooth transition of the virtual hand switching gesture.
For example, as shown in fig. 8, taking the interactive object 10 as a UI panel (two-dimensional virtual interactive interface) as an example, a smooth transition scheme in which the virtual hand is switched from the default gesture to an operation gesture is described. Taking the second click gesture (Tap gesture) as the example operation gesture, the smooth transition of the virtual hand from the default gesture (None gesture) to the second click gesture (Tap gesture) is described as follows.
For example, in a first approach, after the virtual hand contacts the collision detection area of the UI panel, the gesture switching animation in which the virtual hand smoothly switches from the default gesture (None gesture) to the second click gesture (Tap gesture) corresponding to the interactive object may be triggered directly. For example, the duration of the gesture switching animation is about 1 s to 1.5 s.
For example, in a second approach, the movement speed of the virtual hand may be derived from the movement speed of the interactive device (such as a handle) to control the gesture switching animation from the default gesture (None gesture) to the second click gesture (Tap gesture). When the moving virtual hand enters the collision detection area within the second threshold (such as 8 cm) in front of the UI panel, the gesture switching animation starts to play, and its playback speed is positively related to the movement speed of the interactive device (such as the handle). The gesture switching animation is controlled so that it finishes playing at least within the first 20% of the collision detection area after entry.
Alternatively, in the second approach, when the movement speed of the interactive device (such as the handle) is greater than 2 m/s, the gesture switching animation may be played directly as in the first approach: after the moving virtual hand contacts the collision detection area of the UI panel, the gesture switching animation in which the virtual hand smoothly switches from the default gesture (None gesture) to the second click gesture (Tap gesture) corresponding to the interactive object is triggered directly, with a duration of about 1 s to 1.5 s.
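One way to realize this behaviour is to scale the animation duration with the controller's movement speed and fall back to fixed-duration playback above 2 m/s. The sketch below reflects that reading; the scaling rule and clamp value are assumptions, and it does not model the additional requirement that the animation finish within the first 20% of the collision detection area.

```python
# Illustrative sketch of the second approach's animation timing (assumed scaling).
def gesture_switch_animation_duration(controller_speed_mps,
                                      base_duration_s=1.25,
                                      fast_speed_mps=2.0):
    """Faster controller movement -> faster playback (shorter duration).
    Above 2 m/s the animation is simply played at its base duration (~1-1.5 s)."""
    if controller_speed_mps > fast_speed_mps:
        return base_duration_s
    # Positively correlate playback speed with controller speed, clamped so slow
    # movement does not stretch the animation indefinitely.
    scale = max(controller_speed_mps / fast_speed_mps, 0.25)
    return base_duration_s / scale
```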
Step 130, interacting with the interactive object through the operation gesture.
In some embodiments, the interacting with the interactive object through the operation gesture includes:
and under the condition that the interaction mode in the current scene information is the far-field interaction mode, responding to the interaction control information, and interacting with the interaction object through the operation gesture.
In some embodiments, if the interactive object in the current scene information is the two-dimensional virtual interactive interface, the operation gesture corresponding to the interactive object is the first click gesture;
the interaction with the interaction object through the operation gesture in response to the interaction control information comprises the following steps:
And responding to the operation information in the interaction control information, and interacting with interface elements in the two-dimensional virtual interaction interface through the first click gesture.
For example, the operation information may be information generated by triggering an operation key of the interactive apparatus; or the operation information may be information generated in response to a bare hand gesture of the user.
With continued reference to views (2) and (3) of fig. 4, the user may generate operation information in the interaction control information by clicking a Trigger key of the interaction device (such as a handle), and interact with an interface (UI) element in the two-dimensional virtual interaction interface 11 through a first click gesture of the virtual hand 20 in response to the operation information in the interaction control information, where the UI element is an element located within the interaction area 111, such as an input box in fig. 4.
Illustratively, the user may generate the operation information in the interaction control information by clicking the Trigger key of the interactive device (such as a handle) in order to interact with the UI element in the two-dimensional virtual interactive interface 11 according to the operation information, so that the virtual hand 20 performs the clicking action of the first click gesture (Click gesture) in the virtual scene; the clicking action of the first click gesture (Click gesture) can be seen in view (3) in fig. 4. For example, in response to the clicking action of the first click gesture (Click gesture), the UI element in the two-dimensional virtual interactive interface 11 may be controlled to change its background color: before the clicking action, the background color of the UI element is white; in response to the clicking action, the background color of the UI element becomes dark, such as a dark color close to the color of the two-dimensional virtual interactive interface 11. For example, if the color of the two-dimensional virtual interactive interface 11 is light blue, the darkened background color of the UI element is dark blue.
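This click feedback could be sketched as a trivial state-to-colour mapping; the colour values and function name below are purely illustrative assumptions.

```python
# Illustrative click feedback for a UI element on a light-blue panel.
def ui_element_background(panel_color: str, clicked: bool) -> str:
    if not clicked:
        return "#FFFFFF"  # white before the click action
    # Darken toward the panel colour after the click action (assumed values).
    return {"light_blue": "#1E3A8A"}.get(panel_color, "#333333")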
In some embodiments, if the interactive object is the three-dimensional virtual object, the operation gesture corresponding to the interactive object is the first click gesture;
the interaction with the interaction object through the operation gesture in response to the interaction control information comprises the following steps:
responding to trigger information in the interaction control information, clicking the three-dimensional virtual object displayed in the virtual scene through the first click gesture to select the three-dimensional virtual object, and displaying a selected aperture around the three-dimensional virtual object;
the first click gesture of the virtual hand is moved in response to position transformation information in the interaction control information to transform the position of the three-dimensional virtual object in the virtual scene by moving the first click gesture.
For example, the triggering information may be information generated by triggering an operation key of the interactive apparatus; or the trigger information may be information generated in response to a bare hand gesture of the user.
For example, the position conversion information may be information generated by clicking an operation key of the interactive apparatus and moving the interactive apparatus; or the position transformation information may be information generated in response to a bare hand gesture of the user.
With continued reference to views (b) and (c) of fig. 5, as shown in view (b), the user may select the three-dimensional virtual object 12 (3D object) through the interactive device (such as a handle) and keep holding the Trigger/Grip key of the interactive device to generate the trigger information in the interaction control information; in response to the trigger information, the three-dimensional virtual object 12 displayed in the virtual scene is clicked through the first click gesture to select it, and a selected aperture 40 is displayed around the three-dimensional virtual object 12. As shown in view (c), while the Trigger/Grip key is held, position transformation information in the interaction control information is generated by moving the interactive device (such as the handle), and the first click gesture of the virtual hand 20 is moved in response to the position transformation information so as to transform the position of the three-dimensional virtual object 12 in the virtual scene by moving the first click gesture of the virtual hand 20.
Illustratively, the user may select the three-dimensional virtual object 12 (3D object) through the interactive device (such as a handle): trigger information in the interaction control information is generated by pressing the Trigger/Grip key of the interactive device, the three-dimensional virtual object 12 is interacted with according to the trigger information, a selected aperture 40 is displayed around it, and the selected state of the three-dimensional virtual object 12 (3D object) is maintained, so that the virtual hand 20 performs the click-down action corresponding to the first click gesture. In this state, position transformation information in the interaction control information is generated by moving the interactive device (such as the handle) while the Trigger/Grip key is held, so that the virtual hand 20 moves in the virtual scene in response to the position transformation information and moves the three-dimensional virtual object 12 (3D object) together with it, thereby transforming the position of the three-dimensional virtual object 12 (3D object). When the user releases the Trigger/Grip key of the interactive device (such as the handle), release information in the interaction control information is generated; according to the release information, the virtual hand 20 performs the click-release (Click up) action corresponding to the first click gesture (Click gesture), and the displayed virtual hand 20 smoothly switches from the first click gesture (Click gesture) back to the default gesture (None gesture).
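This hold-move-release sequence amounts to a small state machine. The sketch below is one possible reading of it, with hypothetical handler and method names that are not part of the original disclosure.

```python
# Illustrative far-field 3D manipulation: press to select, move while held,
# release to drop the selection and return to the default gesture.
class FarField3DManipulation:
    def __init__(self, virtual_hand):
        self.hand = virtual_hand
        self.selected = None

    def on_trigger_or_grip_press(self, hovered_object):
        self.selected = hovered_object
        self.hand.play_action("click_down")        # click-down of the first click gesture
        hovered_object.show_selection_aperture()   # selected aperture around the object

    def on_move(self, hand_position):
        if self.selected is not None:
            self.selected.position = hand_position  # object follows the moving gesture

    def on_trigger_or_grip_release(self):
        self.hand.play_action("click_up")          # click-release action
        self.hand.switch_gesture("none")           # smooth switch back to default gesture
        self.selected = None
```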
In some embodiments, the interacting with the interactive object through the operation gesture includes:
and under the condition that the interaction mode in the current scene information is the near-field interaction mode, when the operation gesture of the virtual hand is detected to touch the interaction object, the operation gesture is used for interacting with the interaction object.
In some embodiments, if the interactive object in the current scene information is the two-dimensional virtual interactive interface, the operation gesture corresponding to the interactive object is the second click gesture;
when detecting that the operation gesture of the virtual hand touches the interactive object, the interaction between the operation gesture and the interactive object is performed, including:
and moving the virtual hand according to the movement information in the interaction control information, and interacting with interface elements in the two-dimensional virtual interaction interface through the second clicking gesture when the second clicking gesture of the virtual hand is detected to touch the two-dimensional virtual interaction interface.
As shown in fig. 6, fig. 6 is a schematic diagram of interaction between the virtual hand 20 and the two-dimensional virtual interactive interface 11 (such as a 2D UI panel) when the interaction mode in the current scene information is the near-field interaction mode. As in view (1) in fig. 6, the virtual hand 20 continues to move (such as from the collision detection area of the two-dimensional virtual interactive interface 11 toward the two-dimensional virtual interactive interface 11) according to the movement information in the interaction control information; when the second distance between the first position of the interactive object 10 and the third position of the virtual hand 20 is detected to be greater than (or equal to) the second threshold, it is determined that the virtual hand 20 has not contacted the collision detection area, and the default gesture (None gesture) of the virtual hand 20 is displayed. As in view (2) in fig. 6, when the second distance between the first position of the interactive object 10 and the third position of the virtual hand 20 is detected to be smaller than the second threshold, the displayed virtual hand 20 smoothly switches from the default gesture (None gesture) to the second click gesture (Tap gesture) corresponding to the two-dimensional virtual interactive interface 11. The UI element can then be interacted with by continuing to control, in response to the movement information, the index finger of the virtual hand 20 to touch the interaction region of the two-dimensional virtual interactive interface 11 (such as a 2D UI panel).
In some embodiments, if the interactive object in the current scene information is the three-dimensional virtual object, the operation gesture corresponding to the interactive object is one of the grabbing gesture or the pinching gesture;
when detecting that the operation gesture of the virtual hand touches the interactive object, the interaction between the operation gesture and the interactive object is performed, including:
if the volume of the three-dimensional virtual object is larger than the preset volume, the operation gesture corresponding to the interactive object is the grabbing gesture, the virtual hand is moved according to the movement information in the interactive control information, and when the grabbing gesture of the virtual hand is detected to touch the interactive object, the three-dimensional virtual object is grabbed through the grabbing gesture in response to the operation information; or alternatively
If the volume of the three-dimensional virtual object is smaller than or equal to the preset volume, the operation gesture corresponding to the interactive object is the pinching gesture, the virtual hand is moved according to the movement information, and when the pinching gesture of the virtual hand is detected to touch the interactive object, the three-dimensional virtual object is pinched through the pinching gesture in response to the operation information in the interactive control information.
For example, the operation information may be information generated by triggering an operation key of the interactive apparatus; or the operation information may be information generated in response to a bare hand gesture of the user.
For example, the mobile information may be information generated by a mobile interactive device; or the movement information may be information generated in response to movement of the bare hand gesture of the user.
As shown in fig. 9, fig. 9 is a schematic diagram of interaction between the virtual hand 20 and the three-dimensional virtual object 12 (such as a 3D object) when the interaction mode in the current scene information is the near-field interaction mode. To ensure consistency between the interactive device (e.g., handle) keys and the gestures of the virtual hand 20 when interacting with a near-field three-dimensional virtual object 12 (e.g., 3D object), the Grip (parent object) key and the Trigger (child object) key are used uniformly for the direct grabbing operation.
As in view (1) of fig. 9, the interaction control information may include movement information and operation information. The virtual hand 20 continues to move (such as from the collision detection area of the three-dimensional virtual object 12 toward the three-dimensional virtual object 12) according to the movement information in the interaction control information, and when the virtual hand 20 contacts the three-dimensional virtual object 12 (such as a 3D object), it interacts with the 3D object displayed in the virtual scene through the grab gesture (Grab gesture) or pinch gesture (Pinch gesture) in response to the operation information in the interaction control information generated by the user clicking the Grip/Trigger key of the interactive device (such as a handle). Here, since the volume of the three-dimensional virtual object 12 is greater than the preset volume, the three-dimensional virtual object 12 is determined to be a 3D large object, the operation gesture corresponding to it is the grab gesture (Grab gesture), and the virtual hand 20 is controlled to switch from the default gesture (None gesture) to the grab gesture (Grab gesture) in response to the operation information, so as to grab the three-dimensional virtual object 12 displayed in the virtual scene through the grab gesture (Grab gesture).
As in view (2) of fig. 9, the interaction control information may include movement information and operation information. The virtual hand 20 continues to move (such as from the collision detection area of the three-dimensional virtual object 12 toward the three-dimensional virtual object 12) according to the movement information in the interaction control information, and when the virtual hand 20 contacts the three-dimensional virtual object 12 (such as a 3D object), it interacts with the 3D object displayed in the virtual scene through the grab gesture (Grab gesture) or pinch gesture (Pinch gesture) in response to the operation information in the interaction control information generated by the user clicking the Grip/Trigger key of the interactive device (such as a handle). Here, since the volume of the three-dimensional virtual object 12 is smaller than or equal to the preset volume, the three-dimensional virtual object 12 is determined to be a 3D small object, the operation gesture corresponding to it is the pinch gesture (Pinch gesture), and the three-dimensional virtual object 12 displayed in the virtual scene is pinched through the pinch gesture (Pinch gesture) in response to the operation information.
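The large/small object distinction can be reduced to a volume check against the preset volume, as in the brief sketch below (the function name and parameters are assumptions).

```python
# Illustrative near-field 3D gesture selection by object volume.
def near_field_3d_gesture(object_volume: float, preset_volume: float) -> str:
    """Objects larger than the preset volume are 3D large objects (grab gesture);
    the rest are 3D small objects (pinch gesture)."""
    return "grab" if object_volume > preset_volume else "pinch"
```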
In some embodiments, the method further comprises:
if no interactive object is displayed in the virtual scene, displaying the default gesture of the virtual hand in the virtual scene;
when the interaction control information is acquired, switching the virtual hand from the default gesture to the corresponding operation gesture based on the interaction control information, so that a user can conduct gesture recognition through the displayed operation gesture.
For example, in an interaction control event without an interactive object, the interaction control information may include operation information generated by triggering an operation key of the interaction device, or the interaction control information may include operation information generated in response to a bare-hand gesture of the user.
For example, as shown in fig. 10, when there is no interactive object in the virtual scene, an initial pose of the virtual hand may be presented, which may be represented by the default gesture (None gesture), where the None state represents a state without any manipulation and simulates the C-shaped pose of a hand at its most relaxed in the real world. In the case of no interactive object, the user may press the Trigger key of the interaction device (such as a handle) to generate operation information containing trigger information, and the virtual hand is controlled to switch from the default gesture (None gesture) to the first click gesture (Click gesture); the user may also press the Grip key of the interaction device (such as a handle) to generate operation information containing grip information, and the virtual hand is controlled to switch from the default gesture (None gesture) to the grabbing gesture (Grab gesture).
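Purely as an illustrative sketch (the key names and gesture labels below are assumptions rather than part of the application), the gesture feedback in the absence of an interactive object amounts to a simple mapping from the pressed key of the interaction device to the displayed gesture:

    from typing import Optional

    # Assumed mapping from a pressed controller key to the hand gesture displayed
    # when no interactive object exists in the virtual scene.
    KEY_TO_GESTURE = {
        "trigger": "first_click",  # Trigger key -> first click gesture
        "grip": "grab",            # Grip key    -> grabbing gesture
    }

    def idle_gesture(pressed_key: Optional[str]) -> str:
        # With nothing pressed, the hand keeps the relaxed C-shaped None gesture.
        return KEY_TO_GESTURE.get(pressed_key, "none")

    print(idle_gesture(None))       # -> none
    print(idle_gesture("trigger"))  # -> first_click
    print(idle_gesture("grip"))     # -> grab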
All the above technical solutions may be combined to form an optional embodiment of the present application, which is not described here in detail.
In the embodiments of the present application, the default gesture of the virtual hand is displayed in the virtual scene; the virtual hand is switched from the default gesture to an operation gesture corresponding to an interactive object in the virtual scene based at least on the interaction control information and/or the current scene information; and the interaction with the interactive object is performed through the operation gesture. In this way, the interaction control event can be made to correspond to the virtual hand in the virtual scene; when the user interacts with different interactive objects, the gesture of the virtual hand displayed in the virtual scene can be automatically switched to the corresponding operation gesture, and the interaction with the interactive object is performed through the operation gesture, thereby providing a more interesting and more convenient experience for the user and improving the immersive experience of the user.
In order to facilitate better implementation of the virtual scene interaction method in the embodiment of the application, the embodiment of the application also provides an interaction device of the virtual scene. Referring to fig. 11, fig. 11 is a schematic structural diagram of an interaction device for virtual scenes according to an embodiment of the present application. The interaction device 200 of the virtual scene may include:
a display unit 210 for displaying a default gesture of the virtual hand in the virtual scene;
a switching unit 220, configured to switch the virtual hand from the default gesture to an operation gesture corresponding to an interactive object in the virtual scene based at least on interaction control information and/or current scene information;
And an interaction unit 230 for interacting with the interaction object through the operation gesture.
In some embodiments, the current scene information includes at least information of an interaction pattern and the interaction object; the interaction mode comprises a near field interaction mode or a far field interaction mode; the interactive object comprises a three-dimensional virtual object or a two-dimensional virtual interactive interface.
In some embodiments, the operation gesture includes one or more of a click gesture, a grabbing gesture and a pinching gesture, wherein the click gesture includes a first click gesture and a second click gesture.
In some embodiments, the switching unit 220 is further configured to, before switching the virtual hand from the default gesture to an operation gesture corresponding to an interactive object in the virtual scene based at least on the interaction control information and/or the current scene information:
acquiring a first position of the interaction object in the virtual scene and acquiring a second position of the virtual hand in the virtual scene;
and determining the interaction mode according to a first distance between the first position and the second position, wherein the interaction mode is a near-field interaction mode if the first distance is smaller than a first threshold value or is a far-field interaction mode if the first distance is larger than or equal to the first threshold value.
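As an illustrative sketch only (the threshold value and the function names are assumptions of this description), the determination of the interaction mode from the first distance can be expressed as:

    import math

    FIRST_THRESHOLD = 1.0  # assumed boundary between near field and far field, in metres

    def interaction_mode(object_position, hand_position) -> str:
        # First distance: Euclidean distance between the interactive object and the virtual hand.
        first_distance = math.dist(object_position, hand_position)
        return "near_field" if first_distance < FIRST_THRESHOLD else "far_field"

    print(interaction_mode((0.0, 1.2, 0.4), (0.1, 1.0, 0.2)))  # -> near_field
    print(interaction_mode((0.0, 1.2, 3.0), (0.1, 1.0, 0.2)))  # -> far_field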
In some embodiments, the switching unit 220 may be configured to:
and when the hidden ray cursor emitted from directly in front of the virtual hand is detected to hover over the interactive object, switching the virtual hand from the default gesture to an operation gesture corresponding to the interactive object, and displaying the ray cursor.
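A minimal sketch of such hover detection, assuming for illustration that the interactive object is approximated by a bounding sphere and that the ray direction is a unit vector (all names below are assumptions of this description):

    def ray_hovers_object(origin, direction, center, radius) -> bool:
        # Project the vector from the ray origin to the object centre onto the ray direction.
        to_center = [c - o for c, o in zip(center, origin)]
        t = sum(a * b for a, b in zip(to_center, direction))
        if t < 0:
            return False  # the object is behind the virtual hand
        closest = [o + t * d for o, d in zip(origin, direction)]
        dist_sq = sum((c - p) ** 2 for c, p in zip(center, closest))
        return dist_sq <= radius ** 2

    def far_field_hand_state(hovering: bool, operation_gesture: str) -> dict:
        # While hovering, the ray cursor becomes visible and the hand leaves the None gesture.
        return {"gesture": operation_gesture if hovering else "none", "ray_visible": hovering}

    hover = ray_hovers_object((0.0, 1.5, 0.0), (0.0, 0.0, 1.0), (0.05, 1.5, 2.0), 0.2)
    print(far_field_hand_state(hover, "first_click"))  # -> {'gesture': 'first_click', 'ray_visible': True}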
In some embodiments, the interaction unit 230 may be configured to:
and under the condition that the interaction mode in the current scene information is the far-field interaction mode, responding to the interaction control information, and interacting with the interaction object through the operation gesture.
In some embodiments, if the interactive object in the current scene information is the two-dimensional virtual interactive interface, the operation gesture corresponding to the interactive object is the first click gesture;
the interaction unit 230, when interacting with the interactive object through the operation gesture in response to the interaction control information, may be configured to:
And responding to the operation information in the interaction control information, and interacting with interface elements in the two-dimensional virtual interaction interface through the first click gesture.
In some embodiments, if the interactive object in the current scene information is the three-dimensional virtual object, the operation gesture corresponding to the interactive object is the first click gesture;
the interaction unit 230, when interacting with the interactive object through the operation gesture in response to the interaction control information, may be configured to:
clicking the three-dimensional virtual object displayed in the virtual scene through the first click gesture in response to operation information in the interaction control information so as to select the three-dimensional virtual object, and displaying a selected aperture around the three-dimensional virtual object;
the first click gesture of the virtual hand is moved in response to position transformation information in the interaction control information to transform the position of the three-dimensional virtual object in the virtual scene by moving the first click gesture.
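The far-field selection and movement behaviour described above can be sketched as follows (the class and function names are assumptions used only for illustration):

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class SelectableObject:
        position: List[float]   # position of the 3D object in the virtual scene
        selected: bool = False  # whether the selection aperture is displayed around it

    def on_far_field_click(obj: SelectableObject) -> None:
        # The first click gesture selects the object and shows the aperture around it.
        obj.selected = True

    def on_click_gesture_moved(obj: SelectableObject, hand_delta) -> None:
        # While selected, the object follows the movement of the first click gesture.
        if obj.selected:
            obj.position = [p + d for p, d in zip(obj.position, hand_delta)]

    cube = SelectableObject(position=[0.0, 1.0, 2.0])
    on_far_field_click(cube)
    on_click_gesture_moved(cube, (0.2, 0.0, -0.1))
    print(cube.position)  # -> [0.2, 1.0, 1.9]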
In some embodiments, the switching unit 220 may be configured to:
moving the virtual hand according to the movement information in the interaction control information under the condition that the interaction mode in the current scene information is the near field interaction mode, and acquiring a third position of the virtual hand in the virtual scene in the moving process in real time;
And when the second distance between the first position and the third position is detected to be smaller than a second threshold value, switching the displayed virtual hand from the default gesture to an operation gesture corresponding to the interaction object.
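A minimal sketch of this proximity-based gesture switching, assuming an illustrative second threshold value (all names below are assumptions of this description):

    import math

    SECOND_THRESHOLD = 0.15  # assumed switching distance in metres, purely illustrative

    def gesture_while_approaching(object_position, hand_position, operation_gesture: str) -> str:
        # Second distance: between the object (first position) and the moving hand (third position).
        second_distance = math.dist(object_position, hand_position)
        return operation_gesture if second_distance < SECOND_THRESHOLD else "none"

    print(gesture_while_approaching((0.0, 1.0, 0.5), (0.0, 1.0, 0.9), "grab"))   # -> none
    print(gesture_while_approaching((0.0, 1.0, 0.5), (0.0, 1.0, 0.58), "grab"))  # -> grab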
In some embodiments, the interaction unit 230 may be configured to:
and under the condition that the interaction mode in the current scene information is the near-field interaction mode, when the operation gesture of the virtual hand is detected to touch the interaction object, the operation gesture is used for interacting with the interaction object.
In some embodiments, if the interactive object in the current scene information is the two-dimensional virtual interactive interface, the operation gesture corresponding to the interactive object is the second click gesture;
the interaction unit 230 may be configured to, when detecting that an operation gesture of the virtual hand touches the interaction object, interact with the interaction object through the operation gesture:
and moving the virtual hand according to the movement information in the interaction control information, and interacting with interface elements in the two-dimensional virtual interaction interface through the second clicking gesture when the second clicking gesture of the virtual hand is detected to touch the two-dimensional virtual interaction interface.
In some embodiments, if the interactive object in the current scene information is the three-dimensional virtual object, the operation gesture corresponding to the interactive object is one of the grabbing gesture or the pinching gesture;
the interaction unit 230 may be configured to, when detecting that an operation gesture of the virtual hand touches the interaction object, interact with the interaction object through the operation gesture:
if the volume of the three-dimensional virtual object is larger than the preset volume, the operation gesture corresponding to the interactive object is the grabbing gesture, the virtual hand is moved according to the movement information in the interactive control information, and when the grabbing gesture of the virtual hand is detected to touch the interactive object, the three-dimensional virtual object is grabbed through the grabbing gesture in response to the operation information in the interactive control information; or alternatively
If the volume of the three-dimensional virtual object is smaller than or equal to the preset volume, the operation gesture corresponding to the interactive object is the pinching gesture, the virtual hand is moved according to the movement information in the interactive control information, and when the pinching gesture of the virtual hand is detected to touch the interactive object, the three-dimensional virtual object is pinched through the pinching gesture in response to the operation information in the interactive control information.
In some embodiments, the switching unit 220 is further configured to: and playing the gesture switching animation of the virtual hand from the default gesture to the operation gesture corresponding to the interactive object.
In some embodiments, the interaction device 200 of the virtual scene is further configured to: if no interactive object is displayed in the virtual scene, displaying the default gesture of the virtual hand in the virtual scene; when the interaction control information is acquired, switching the virtual hand from the default gesture to the corresponding operation gesture based on the interaction control information, so that a user can conduct gesture recognition through the displayed operation gesture.
The respective units in the interaction device 200 of the virtual scene may be implemented in whole or in part by software, hardware, or a combination thereof. The above units may be embedded in, or independent of, a processor of the augmented reality device in hardware form, or may be stored in a memory of the augmented reality device in software form, so that the processor can invoke and execute the operations corresponding to the above units.
The virtual scene interaction device 200 may be integrated in a terminal or a server that is provided with a memory and a processor and has computing capability, or the virtual scene interaction device 200 may itself be the terminal or the server.
In some embodiments, the application further provides an augmented reality device, including a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of the method embodiments described above when executing the computer program.
As shown in fig. 12, fig. 12 is a schematic structural diagram of an augmented reality device according to an embodiment of the present application, and the augmented reality device 300 may be generally provided in the form of glasses, a head mounted display (Head Mount Display, HMD), or contact lenses for realizing visual perception and other forms of perception, but the form of realization of the augmented reality device is not limited thereto, and may be further miniaturized or enlarged as required. The augmented reality device 300 may include, but is not limited to, the following:
Detection module 301: various sensors are used to detect the user's operation commands and apply them to the virtual environment, for example updating the images displayed on the display screen to follow the user's line of sight so as to realize the user's interaction with the virtual scene, for example updating the displayed content in real time based on the detected rotation direction of the user's head.
Feedback module 302: receiving data from a sensor or from an interactive device, providing real-time feedback to a user; wherein the feedback module 302 may be for displaying a graphical user interface, such as displaying a virtual environment on the graphical user interface. For example, the feedback module 302 may include a display screen or the like.
Sensor 303: on the one hand, accepts operation commands from the user and applies them to the virtual environment; on the other hand, provides the results generated after the operations to the user in the form of various kinds of feedback.
Control module 304: controls the sensors and the various input/output devices, including obtaining user data (such as motion and speech) and outputting perceptual data, such as images, vibrations, temperature and sounds, to act on the user, the virtual environment and the real world.
Modeling module 305: constructs a three-dimensional model of the virtual environment; the three-dimensional model may also include various feedback mechanisms, such as sound and haptics.
In the embodiment of the application, a virtual scene can be constructed through the modeling module 305, and a default gesture of a virtual hand is displayed in the virtual scene through the feedback module 302; the control module 304 switches the virtual hand from a default gesture to an operational gesture corresponding to an interactive object in the virtual scene based at least on the interaction control information and/or the current scene information, and interacts with the interactive object through the operational gesture.
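As an illustrative sketch of this cooperation between the modules (the class names and fields below are assumptions of this description, not the actual implementation of the modules):

    class ModelingModule:
        def build_scene(self) -> dict:
            # Constructs a minimal virtual scene containing a virtual hand in the None gesture.
            return {"objects": [], "virtual_hand": {"gesture": "none"}}

    class FeedbackModule:
        def show(self, scene: dict) -> None:
            # Stands in for rendering the scene on the display screen.
            print("virtual hand gesture:", scene["virtual_hand"]["gesture"])

    class ControlModule:
        def apply(self, scene: dict, interaction_control_info: dict, current_scene_info: dict) -> None:
            # Switches the hand to the operation gesture derived from the control event and the scene context.
            scene["virtual_hand"]["gesture"] = current_scene_info.get("operation_gesture", "none")

    scene = ModelingModule().build_scene()
    feedback, control = FeedbackModule(), ControlModule()
    feedback.show(scene)                                                  # default gesture
    control.apply(scene, {"key": "grip"}, {"operation_gesture": "grab"})
    feedback.show(scene)                                                  # switched to the grabbing gesture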
In some embodiments, as shown in fig. 13, fig. 13 is another schematic structural diagram of an augmented reality device according to an embodiment of the present application, where the augmented reality device 300 further includes a processor 310 with one or more processing cores, a memory 320 with one or more computer readable storage media, and a computer program stored on the memory 320 and executable on the processor. The processor 310 is electrically connected to the memory 320. It will be appreciated by those skilled in the art that the configuration of the augmented reality device shown in the figures does not constitute a limitation of the augmented reality device, and may include more or fewer components than shown, or may combine certain components, or may have a different arrangement of components.
The processor 310 is a control center of the augmented reality device 300, connects various parts of the entire augmented reality device 300 using various interfaces and lines, and performs various functions of the augmented reality device 300 and processes data by running or loading software programs and/or modules stored in the memory 320 and calling data stored in the memory 320, thereby performing overall monitoring of the augmented reality device 300.
In the embodiment of the present application, the processor 310 in the augmented reality device 300 loads the instructions corresponding to the processes of one or more application programs into the memory 320 according to the following steps, and the processor 310 executes the application programs stored in the memory 320, so as to implement various functions:
displaying a default gesture of the virtual hand in the virtual scene; switching the virtual hand from the default gesture to an operation gesture corresponding to an interactive object in the virtual scene based at least on interaction control information and/or current scene information; and interacting with the interaction object through the operation gesture.
The specific implementation of each operation above may be referred to the previous embodiments, and will not be described herein.
In some embodiments, the processor 310 may include a detection module 301, a control module 304, and a modeling module 305.
In some embodiments, as shown in fig. 13, the augmented reality device 300 further comprises: radio frequency circuitry 306, audio circuitry 307, and power supply 308. The processor 310 is electrically connected to the memory 320, the feedback module 302, the sensor 303, the rf circuit 306, the audio circuit 307, and the power supply 308, respectively. It will be appreciated by those skilled in the art that the augmented reality device structure shown in fig. 12 or 13 does not constitute a limitation of the augmented reality device, and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
The radio frequency circuit 306 may be configured to receive and transmit radio frequency signals so as to establish wireless communication with a network device or another augmented reality device and exchange signals with the network device or the other augmented reality device.
The audio circuit 307 may be used to provide an audio interface between the user and the augmented reality device through a speaker and a microphone. On one hand, the audio circuit 307 may convert received audio data into an electrical signal and transmit it to the speaker, and the speaker converts it into a sound signal for output; on the other hand, the microphone converts a collected sound signal into an electrical signal, which is received by the audio circuit 307 and converted into audio data, and the audio data is then output to the processor 310 for processing and sent, for example, to another augmented reality device through the radio frequency circuit 306, or output to the memory 320 for further processing. The audio circuit 307 may also include an earphone jack to provide communication between a peripheral earphone and the augmented reality device.
The power supply 308 is used to power the various components of the augmented reality device 300.
Although not shown in fig. 12 or 13, the augmented reality device 300 may further include a camera, a wireless fidelity module, a bluetooth module, an input module, etc., which are not described herein.
In some embodiments, the present application also provides a computer-readable storage medium for storing a computer program. The computer readable storage medium may be applied to an augmented reality device or a server, and the computer program causes the augmented reality device or the server to execute a corresponding flow in the interaction method of the virtual scene in the embodiments of the present application, which is not described herein for brevity.
In some embodiments, the present application also provides a computer program product comprising a computer program stored in a computer readable storage medium. The processor of the augmented reality device reads the computer program from the computer readable storage medium, and the processor executes the computer program, so that the augmented reality device executes a corresponding flow in the interaction method of the virtual scene in the embodiment of the application, which is not described herein for brevity.
The present application also provides a computer program, where the computer program is stored in a computer-readable storage medium. The processor of the augmented reality device reads the computer program from the computer-readable storage medium, and the processor executes the computer program, so that the augmented reality device executes the corresponding flow in the interaction method of the virtual scene in the embodiments of the present application, which is not repeated here for brevity.
It should be appreciated that the processor of the embodiments of the present application may be an integrated circuit chip having signal processing capability. In implementation, the steps of the above method embodiments may be completed by an integrated logic circuit of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The methods, steps and logic block diagrams disclosed in the embodiments of the present application may be implemented or performed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the methods disclosed in connection with the embodiments of the present application may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software modules may be located in a storage medium well known in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above methods in combination with its hardware.
It will be appreciated that the memory in the embodiments of the present application may be a volatile memory or a nonvolatile memory, or may include both volatile and nonvolatile memories. The nonvolatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (Random Access Memory, RAM), which is used as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (DR RAM). It should be noted that the memories of the systems and methods described herein are intended to include, but are not limited to, these and any other suitable types of memory.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution in the form of a software product stored in a storage medium, comprising several instructions for causing an augmented reality device (which may be a personal computer, a server) to perform all or part of the steps of the method described in the various embodiments of the present application. And the aforementioned storage medium includes: a usb disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk, etc.
The foregoing is merely specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes and substitutions are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (18)

1. A method of interacting with a virtual scene, the method comprising:
displaying a default gesture of the virtual hand in the virtual scene;
switching the virtual hand from the default gesture to an operation gesture corresponding to an interactive object in the virtual scene based at least on interaction control information and/or current scene information;
and interacting with the interaction object through the operation gesture.
2. The method of interaction of virtual scenes according to claim 1, wherein the current scene information includes at least information of interaction pattern and the interaction object;
the interaction mode comprises a near field interaction mode or a far field interaction mode;
the interactive object comprises a three-dimensional virtual object or a two-dimensional virtual interactive interface.
3. The method of interaction of a virtual scene according to claim 2, wherein the operation gesture comprises one or more of a click gesture, a grabbing gesture and a pinching gesture, wherein the click gesture comprises a first click gesture and a second click gesture.
4. A method of interacting with a virtual scene as recited in claim 3, further comprising, prior to said switching the virtual hand from the default gesture to an operational gesture corresponding to an interactive object in the virtual scene based at least on interaction control information and/or current scene information:
acquiring a first position of the interaction object in the virtual scene and acquiring a second position of the virtual hand in the virtual scene;
and determining the interaction mode according to a first distance between the first position and the second position, wherein the interaction mode is a near-field interaction mode if the first distance is smaller than a first threshold value or is a far-field interaction mode if the first distance is larger than or equal to the first threshold value.
5. The method of interaction of a virtual scene according to claim 4, wherein the switching the virtual hand from the default gesture to an operational gesture corresponding to an interactive object in the virtual scene based at least on interaction control information and/or current scene information comprises:
and when the hidden ray cursor emitted from directly in front of the virtual hand is detected to hover over the interactive object, switching the virtual hand from the default gesture to an operation gesture corresponding to the interactive object, and displaying the ray cursor.
6. The method for interacting with a virtual scene according to claim 4 or 5, wherein the interacting with the interactive object through the operation gesture comprises:
and under the condition that the interaction mode in the current scene information is the far-field interaction mode, responding to the interaction control information, and interacting with the interaction object through the operation gesture.
7. The method for interacting with a virtual scene according to claim 6, wherein if the interactive object in the current scene information is the two-dimensional virtual interactive interface, the operation gesture corresponding to the interactive object is the first click gesture;
the interaction with the interaction object through the operation gesture in response to the interaction control information comprises the following steps:
and responding to the operation information in the interaction control information, and interacting with interface elements in the two-dimensional virtual interaction interface through the first click gesture.
8. The method for interacting with a virtual scene according to claim 6, wherein if the interactive object in the current scene information is the three-dimensional virtual object, the operation gesture corresponding to the interactive object is the first click gesture;
The interaction with the interaction object through the operation gesture in response to the interaction control information comprises the following steps:
responding to trigger information in the interaction control information, clicking the three-dimensional virtual object displayed in the virtual scene through the first click gesture to select the three-dimensional virtual object, and displaying a selected aperture around the three-dimensional virtual object;
the first click gesture of the virtual hand is moved in response to position transformation information in the interaction control information to transform the position of the three-dimensional virtual object in the virtual scene by moving the first click gesture.
9. The method of interaction of a virtual scene according to claim 4, wherein the switching the virtual hand from the default gesture to an operational gesture corresponding to an interactive object in the virtual scene based at least on interaction control information and/or current scene information comprises:
moving the virtual hand according to the movement information in the interaction control information under the condition that the interaction mode in the current scene information is the near field interaction mode, and acquiring a third position of the virtual hand in the virtual scene in the moving process in real time;
And when the second distance between the first position and the third position is detected to be smaller than a second threshold value, switching the displayed virtual hand from the default gesture to an operation gesture corresponding to the interaction object.
10. The method for interacting with a virtual scene according to claim 9, wherein the interacting with the interactive object through the operation gesture comprises:
and under the condition that the interaction mode in the current scene information is the near-field interaction mode, when the operation gesture of the virtual hand is detected to touch the interaction object, the operation gesture is used for interacting with the interaction object.
11. The method for interacting with a virtual scene according to claim 10, wherein if the interactive object in the current scene information is the two-dimensional virtual interactive interface, the operation gesture corresponding to the interactive object is the second click gesture;
when detecting that the operation gesture of the virtual hand touches the interactive object, the interaction between the operation gesture and the interactive object is performed, including:
and moving the virtual hand according to the movement information in the interaction control information, and interacting with interface elements in the two-dimensional virtual interaction interface through the second clicking gesture when the second clicking gesture of the virtual hand is detected to touch the two-dimensional virtual interaction interface.
12. The method for interacting with a virtual scene according to claim 10, wherein if the interactive object in the current scene information is the three-dimensional virtual object, the operation gesture corresponding to the interactive object is one of the grabbing gesture or the pinching gesture;
when detecting that the operation gesture of the virtual hand touches the interactive object, the interaction between the operation gesture and the interactive object is performed, including:
if the volume of the three-dimensional virtual object is larger than the preset volume, the operation gesture corresponding to the interactive object is the grabbing gesture, the virtual hand is moved according to the movement information in the interactive control information, and when the grabbing gesture of the virtual hand is detected to touch the interactive object, the three-dimensional virtual object is grabbed through the grabbing gesture in response to the operation information in the interactive control information; or alternatively
If the volume of the three-dimensional virtual object is smaller than or equal to the preset volume, the operation gesture corresponding to the interactive object is the pinching gesture, the virtual hand is moved according to the movement information in the interactive control information, and when the pinching gesture of the virtual hand is detected to touch the interactive object, the three-dimensional virtual object is pinched through the pinching gesture in response to the operation information in the interactive control information.
13. The method of interaction of a virtual scene according to claim 1, further comprising, when said switching said virtual hand from said default gesture to an operational gesture corresponding to an interactive object in said virtual scene based at least on interaction control information and/or current scene information:
and playing the gesture switching animation of the virtual hand from the default gesture to the operation gesture corresponding to the interactive object.
14. The method of interaction of a virtual scene of claim 1, the method further comprising:
if no interactive object is displayed in the virtual scene, displaying the default gesture of the virtual hand in the virtual scene;
when the interaction control information is acquired, switching the virtual hand from the default gesture to the corresponding operation gesture based on the interaction control information, so that a user can conduct gesture recognition through the displayed operation gesture.
15. An interactive apparatus for a virtual scene, the apparatus comprising:
a display unit for displaying a default gesture of a virtual hand in a virtual scene;
a switching unit, configured to switch the virtual hand from the default gesture to an operation gesture corresponding to an interactive object in the virtual scene based at least on interaction control information and/or current scene information;
And the interaction unit is used for interacting with the interaction object through the operation gesture.
16. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program adapted to be loaded by a processor for performing the interaction method of a virtual scene according to any of the claims 1-14.
17. An augmented reality device comprising a processor and a memory, the memory having stored therein a computer program, the processor being operable to perform the method of interaction of a virtual scene as claimed in any one of claims 1 to 14 by invoking the computer program stored in the memory.
18. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the method of interaction of virtual scenes according to any of claims 1-14.

