CN117369627A - Man-machine interaction method, device, equipment and storage medium


Info

Publication number
CN117369627A
Authority
CN
China
Prior art keywords
target object
virtual reality
reality environment
presentation
presenting
Prior art date
Legal status
Pending
Application number
CN202311189142.2A
Other languages
Chinese (zh)
Inventor
汪圣杰
杨帆
秦劲启
冀利悦
Current Assignee
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd
Priority to CN202311189142.2A
Publication of CN117369627A
Legal status: Pending


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiments of the present application provide a man-machine interaction method, apparatus, device, and storage medium. The method comprises, at an electronic device in communication with a display and one or more input devices: presenting a computer-generated virtual reality environment via the display; in response to a gifting operation on any target object within the virtual reality environment, determining a birth (spawn) position of the target object within the virtual reality environment; and determining the presentation track and the gifting effect of the target object within the virtual reality environment according to the birth position and the type of the target object. The method and device achieve diversified gifting effects when target objects are gifted within the virtual reality environment, ensure that gifting target objects within the virtual reality environment is intuitive and accurate, enhance the interactive interest and user interaction atmosphere of gifting, mobilize users' enthusiasm for interaction within the virtual reality environment, and improve users' immersive experience in the virtual reality environment.

Description

Man-machine interaction method, device, equipment and storage medium
Technical Field
The embodiments of the present application relate to the technical field of data processing, and in particular to a man-machine interaction method, apparatus, device, and storage medium.
Background
Currently, the application scenarios of Extended Reality (XR) technology are growing ever wider; XR encompasses Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR). In a virtual reality scene, XR technology lets a user view various virtual live-streaming pictures immersively; for example, by wearing a Head-Mounted Display (HMD), the user can experience a lifelike live interaction scene.
Typically, a viewer may like, comment on, or give virtual gifts to a favorite anchor to enrich user interaction in the virtual reality scenario. However, the effect produced when an audience member gives the anchor a virtual gift in a virtual reality scene is monotonous and lacks interactive interest.
Disclosure of Invention
The embodiments of the present application provide a man-machine interaction method, apparatus, device, and storage medium that achieve diversified presentation effects when target objects are gifted within a virtual reality environment, enhance the interactive interest of gifting target objects within the virtual reality environment, and mobilize users' enthusiasm for interaction in the virtual reality environment.
In a first aspect, an embodiment of the present application provides a man-machine interaction method, where the method includes:
at an electronic device in communication with a display and one or more input devices:
presenting a computer-generated virtual reality environment via the display;
in response to a gifting operation on any target object within the virtual reality environment, determining a birth position of the target object within the virtual reality environment;
and determining the presentation track and the gifting effect of the target object within the virtual reality environment according to the birth position and the type of the target object.
In a second aspect, an embodiment of the present application provides a man-machine interaction apparatus, including:
at an electronic device in communication with a display and one or more input devices, there are configured:
an environment presentation module for presenting a computer-generated virtual reality environment via the display;
a birth position determining module for determining, in response to a gifting operation on any target object within the virtual reality environment, the birth position of the target object within the virtual reality environment;
and an interaction module for determining the presentation track and the gifting effect of the target object within the virtual reality environment according to the birth position and the type of the target object.
In a third aspect, an embodiment of the present application provides an electronic device, including:
a processor and a memory, wherein the memory is configured to store a computer program, and the processor is configured to call and run the computer program stored in the memory to perform the man-machine interaction method provided in the first aspect of the present application.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program that causes a computer to perform the man-machine interaction method provided in the first aspect of the present application.
In a fifth aspect, embodiments of the present application provide a computer program product comprising a computer program/instructions that cause a computer to perform the man-machine interaction method provided in the first aspect of the present application.
Through the above technical solution, a computer-generated virtual reality environment is presented via the display of the electronic device. When a gifting operation on any target object within the virtual reality environment is detected, the birth position of the target object within the virtual reality environment is determined first; the presentation track and the gifting effect of the target object within the virtual reality environment are then determined according to the birth position and the type of the target object. This achieves diversified gifting effects within the virtual reality environment, ensures that gifting a target object is intuitive and accurate, enhances the interactive interest and user interaction atmosphere of gifting, mobilizes users' enthusiasm for interaction within the virtual reality environment, and improves users' immersive experience in the virtual reality environment.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; other drawings may be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a flowchart of a man-machine interaction method provided by an embodiment of the present application;
FIG. 2 is an exemplary schematic diagram of the gifting of a target object within a virtual reality environment, provided by an embodiment of the present application;
FIG. 3 is a flowchart of another man-machine interaction method provided by an embodiment of the present application;
FIG. 4 is an exemplary schematic diagram of the presentation of multiple target objects gifted by different users within a virtual reality environment, provided by an embodiment of the present application;
FIG. 5a is an exemplary schematic diagram of a presentation effect of a target object presented along a presentation track, provided by an embodiment of the present application;
FIG. 5b is an exemplary schematic diagram of another presentation effect of a target object presented along a presentation track, provided by an embodiment of the present application;
FIG. 6a is an exemplary schematic diagram of a collision destruction effect of a target object, provided by an embodiment of the present application;
FIG. 6b is another exemplary schematic diagram of a collision destruction effect of a target object, provided by an embodiment of the present application;
FIG. 7 is an exemplary schematic diagram of a collision rebound effect of a target object, provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of a man-machine interaction apparatus provided by an embodiment of the present application;
FIG. 9 is a schematic block diagram of an electronic device provided by an embodiment of the present application.
Detailed Description
The following describes the embodiments of the present application clearly and completely with reference to the accompanying drawings. Obviously, the embodiments described are only some, not all, of the embodiments of the present application. Based on the embodiments herein, all other embodiments obtained by a person of ordinary skill in the art without inventive effort fall within the scope of protection of the present application.
It should be noted that the terms "first," "second," and the like in the description, claims, and drawings of the present application are used to distinguish similar objects and are not necessarily used to describe a particular sequence or chronological order. It is to be understood that the data so used may be interchanged where appropriate, so that the embodiments of the present application described herein can be implemented in orders other than those illustrated or described herein. Furthermore, the terms "comprise," "include," and "have," and any variations thereof, are intended to cover a non-exclusive inclusion, so that a process, method, system, article, or apparatus comprising a list of steps or elements is not necessarily limited to the steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such a process, method, article, or apparatus.
In the embodiments of the present application, words such as "exemplary" or "such as" are used to denote an example, illustration, or description; any embodiment or solution described as "exemplary" or "such as" should not be construed as preferred or advantageous over other embodiments or solutions. Rather, such words are intended to present the related concepts in a concrete fashion.
Before the specific technical solution of the present application is introduced, its application scenario is first described:
To enable gifting interaction with any target object within a virtual reality environment, the present application may, at any electronic device in communication with a display and one or more input devices, display the corresponding virtual reality environment through the display of the electronic device. The electronic device may be any Extended Reality (XR) device, including a Virtual Reality (VR) device, an Augmented Reality (AR) device, or a Mixed Reality (MR) device, which is not limited in this application.
The display may be any display screen that establishes a communication connection with the electronic device. For example, the display may be a display screen configured on a VR device, an AR device, or an MR device, which is not limited in this application.
Moreover, to enable the user's normal interaction within the virtual reality environment, the present application can initiate corresponding interaction operations on the virtual reality environment displayed by the display through the one or more input devices in communication with the electronic device, thereby supporting the user in performing each interaction within the virtual reality environment.
The one or more input devices may be any control device or information acquisition device that establishes a communication connection with the electronic device. For example, the one or more input devices may be a handle configured on a VR, AR, or MR device, an acquisition module for detecting hand operations or eye-movement information, or a voice collector for collecting the user's voice information, and the like, which is not limited in this application.
To avoid the problem that gifting a virtual object within a virtual reality environment lacks interactive interest for the user, the inventive concept of the present application is as follows: within a computer-generated virtual reality environment presented via a display, if a gifting operation on any target object is detected, the birth position of the target object within the virtual reality environment is determined first. Then, the presentation track and the gifting effect of the target object within the virtual reality environment are determined according to the birth position and the type of the target object, thereby achieving diversified gifting effects when target objects are gifted within the virtual reality environment, and enhancing the interactive interest and user interaction atmosphere of such gifting.
FIG. 1 is a flowchart of a man-machine interaction method provided by an embodiment of the present application. The method may be applied to, but is not limited to, an electronic device in communication with a display and one or more input devices, and may be performed by the man-machine interaction apparatus provided by the present application, which may be implemented in any software and/or hardware form. For example, the apparatus may be configured in an AR/VR/MR electronic device capable of simulating a virtual scene; the specific type of the electronic device is not limited in this application.
Specifically, as shown in FIG. 1, the method may include the following steps:
s110, presenting the computer-generated virtual reality environment via a display.
In the present application, to enable diverse interactions within the virtual reality environment, the display communicatively connected to the electronic device can generally be used to present a computer-generated virtual reality environment. The display may be any display screen, such as a display screen configured on a VR device, an AR device, or an MR device.
The virtual reality environment in the present application may include, but is not limited to, the following three types:
1) Purely fictional virtual scene
Purely virtual spaces of various types, each containing a variety of purely virtual objects, can be pre-built by computer to form a purely fictional virtual scene.
For example, a purely fictional bar performance space may be built; the whole space may contain not only virtual persons representing performers, but also virtual persons representing viewers, as well as virtual objects representing the various items in the bar.
2) Semi-fictional, semi-simulated scene fusing a virtual environment with the real environment
By enabling the Video See-Through (VST) function configured on the electronic device, the real environment in which the user is currently located can be photographed in real time, so that pictures of that real environment are continuously collected and presented via the display. Corresponding virtual objects can also be pre-built by computer and displayed on top of the real-environment picture shown by the display, fusing the virtual objects with the real environment. The display can then present a semi-fictional, semi-simulated scene in which the virtual and real environments are fused.
For example, the semi-fictional, semi-simulated scene in the present application may be an MR space generated by Mixed Reality (MR) technology.
3) Live interaction scene composed of a video playing field and a Unity interaction field
To let the user immersively feel the spatial and stereoscopic sense of three-dimensional live interaction within the virtual reality environment, a live interaction scene consisting of a 3D video playing field and a Unity interaction field can be specially constructed and used as the virtual reality environment in the present application.
The Unity interaction field uses the Unity engine to present the various virtual objects built for the corresponding live scene, supporting the user in controlling a virtual controller to interact with each virtual object. The video playing field presents the live video stream of whichever live room the user wants to watch, the interactive special effects generated after users perform various interactive operations on the virtual objects, and the like.
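To make the structure just described concrete, the following is a minimal Python sketch of a live interaction scene modeled as the composition of a video playing field and a Unity interaction field; all class names, field names, and the placeholder stream URL are illustrative assumptions, not anything specified by the application.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class VideoPlayingField:
    """Plays the live video stream of the room the user is watching."""
    stream_url: str


@dataclass
class UnityInteractionField:
    """Holds the interactable virtual objects built for the live scene."""
    virtual_objects: List[str] = field(default_factory=list)


@dataclass
class LiveInteractionScene:
    """A live interaction scene: 3D video playback plus an interaction layer."""
    video_field: VideoPlayingField
    interaction_field: UnityInteractionField


# Example: a scene for one live room with two gift props available.
scene = LiveInteractionScene(
    VideoPlayingField("rtmp://example.invalid/live/room-1"),  # placeholder URL
    UnityInteractionField(["bubble_gun", "rose"]),
)
```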
Once the computer-generated virtual reality environment is determined, it can be presented via the display, so that the user can enter the virtual reality environment and interact with the virtual objects within it.
S120, in response to a gifting operation on any target object within the virtual reality environment, determining the birth position of the target object within the virtual reality environment.
The target object in the present application may be a specific virtual object pre-built within the virtual reality environment that can be gifted to a corresponding user (such as an anchor).
After the virtual reality environment is presented through the display, the user is supported in performing a corresponding gifting operation on any target object so as to gift that target object into the virtual reality environment, thereby realizing gifting interaction within it. The present application can therefore detect in real time whether the user initiates a gifting operation on any target object, to judge whether that target object needs to be gifted into the virtual reality environment.
As an alternative implementation in the present application, the gifting operation on any target object within the virtual reality environment may include at least one of the following operations (a dispatch sketch follows the list):
1) Gifting by clicking on any target object within the user interaction interface.
The user interaction interface may be a spatial panel presented in front of the user within the virtual reality environment, displaying a plurality of target objects. After entering the virtual reality environment, the user can click any target object in the user interaction interface by manipulating a handle model, a hand model, or a real-hand projection presented in the virtual reality environment, thereby selecting that target object for gifting.
Therefore, when a click by the user on any target object within the user interaction interface is detected in the virtual reality environment, a gifting operation on that target object can be generated.
2) Gifting by picking up any target object within the user interaction interface and casting it into the virtual reality environment.
After entering the virtual reality environment, the user can pick up any target object from the user interaction interface by manipulating a hand model or a real-hand projection presented in the virtual reality environment, and throw the picked-up target object into the virtual reality environment to indicate that it is being gifted.
Therefore, when a pick-up-and-throw operation by the user on any target object within the user interaction interface is detected in the virtual reality environment, a gifting operation on that target object can be generated.
3) Selecting any target object to gift by means of a voice instruction.
After the user enters the virtual reality environment, the voice information uttered by the user can be collected in real time through a voice collector configured on the electronic device. Then, by performing semantic analysis on the voice information, the specific operation information contained in it is determined, and a corresponding voice instruction can be generated.
Therefore, when a voice instruction instructing that any target object be gifted into the virtual reality environment is detected, a gifting operation on that target object can be generated.
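As a rough illustration of how the three operations above can converge on a single gifting event, here is a minimal Python sketch; the type names, the trigger taxonomy, and the toy keyword-matching stand-in for semantic analysis are assumptions made for illustration, not part of the application.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class GiftTrigger(Enum):
    CLICK = auto()            # 1) click on a target object in the interaction panel
    PICK_AND_THROW = auto()   # 2) pick up and cast into the environment
    VOICE_COMMAND = auto()    # 3) spoken instruction resolved by semantic analysis


@dataclass
class GiftingOperation:
    target_object_id: str
    trigger: GiftTrigger
    initiator_id: str


def on_panel_click(object_id: str, user_id: str) -> GiftingOperation:
    return GiftingOperation(object_id, GiftTrigger.CLICK, user_id)


def on_pick_and_throw(object_id: str, user_id: str) -> GiftingOperation:
    return GiftingOperation(object_id, GiftTrigger.PICK_AND_THROW, user_id)


def on_voice_command(transcript: str, user_id: str) -> Optional[GiftingOperation]:
    # Toy stand-in for semantic analysis: look for a known gift name in the text.
    known_gifts = {"rose", "bubble", "firework"}
    for word in transcript.lower().split():
        if word in known_gifts:
            return GiftingOperation(word, GiftTrigger.VOICE_COMMAND, user_id)
    return None  # no gifting intent recognized
```

Whatever the trigger, the downstream steps (birth position, presentation track, gifting effect) can then consume the same event type.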
It can be appreciated that, to ensure diversified interactions within the virtual reality environment, multiple virtual screens, such as a control screen and a public screen, may be set up in the virtual space to let the audience perform different live-streaming functions. For example, the control screen may display information such as the anchor's details, online viewer information, a related live recommendation list, and resolution options for the current live stream, to facilitate related live operations. The public screen may display viewers' comments, likes, gifted gifts, and the like in the current live stream, so that the audience can keep track of the current live stream.
Moreover, the virtual screens, such as the control screen and the public screen, face the audience and are displayed at different positions in the virtual space. Furthermore, the position and style of any virtual screen can be adjusted to prevent it from blocking another virtual screen.
In some implementations, for the various interactions between the audience and the anchor involving the virtual objects within the virtual reality environment, a corresponding gift entry is displayed within the virtual reality environment, as shown in FIG. 2. When a viewer wants to give a virtual object to the anchor, the gift entry is triggered first, for example by selecting it with the handle cursor or by controlling the hand model to click on it. After the triggering instruction for the gift entry is detected, a corresponding gift panel is displayed in the virtual space. The gift panel may be a near gift panel surrounding the viewer or a far gift panel displayed in front of the viewer, which is not limited in this application. The gift panel can present a plurality of virtual objects serving as the various gifts a viewer can give the anchor, so that the viewer can conveniently select one of them to give. The target object in the present application can be any of the virtual gifts in the gift panel that can be gifted by the audience to the anchor.
In some implementations, when a user gives a target object to the anchor within the virtual reality environment, any target object may be selected from the gift panel with the handle cursor and trigger key and sent into the virtual reality environment. Alternatively, a handle model or hand model simulated within the virtual reality environment may be controlled through handle operations, gesture operations, and the like, to select any target object from the gift panel and send it into the virtual reality environment, thereby performing the gifting operation on that target object.
The target objects in the present application may include throwing-type gifts such as love-heart gifts, single roses, rose bouquets, and ball gifts; emitting-type gifts such as firework sticks and bubble guns; and display-type gifts that require substantial resources, such as sports cars and carnival gifts.
The viewer can then initiate the corresponding gifting operation through different manipulations, such as picking up a target object with the handle cursor and trigger key, or with a manipulated hand model or real-hand projection, and throwing it into the virtual reality environment.
It will be appreciated that, regarding the gifting operation on any target object in the virtual reality environment, if the current user wants to gift a certain target object to the anchor, whether the current user performs a gifting operation on any target object can be determined by detecting the control information triggered through the handle of the XR device, the hand model, the real-hand projection, and the like. When it is detected that the current user performs a gifting operation on any target object, information about that gifting operation can be sent to every other user in the virtual reality environment, so that the other users learn of it in a timely manner.
Therefore, the present application can determine the gifting operation on any target object in the virtual space both by detecting in real time the gifting operation performed by the current user and by receiving the gifting operation information sent by other users.
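The following sketch illustrates, under assumed names and an assumed transport primitive, how a locally detected gifting operation could be propagated to the other users just described; any pub/sub channel or game-server message would serve the same role.

```python
import json


def broadcast_gifting(op, room_members, send):
    """Send a locally detected gifting operation to every other user in the room.

    `send(user_id, payload)` stands in for whatever networking primitive the
    XR runtime actually provides (pub/sub topic, game-server channel, etc.).
    """
    payload = json.dumps({
        "type": "gift",
        "object": op.target_object_id,
        "initiator": op.initiator_id,
    })
    for user_id in room_members:
        if user_id != op.initiator_id:  # the sender has already rendered it locally
            send(user_id, payload)
```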
Then, upon detecting a gifting operation on any target object within the virtual reality environment, the present application may arrange for different target objects to begin their presentation from different locations, in order to enhance the presentation atmosphere of different target objects within the virtual reality environment. Therefore, the present application may first determine the birth position of the target object within the virtual reality environment, to indicate the starting point of the target object's presentation.
S130, determining the presentation track and the gifting effect of the target object within the virtual reality environment according to the birth position and the type of the target object.
To enhance the diversified effects of different target objects when presented in the virtual reality environment, the present application can arrange for a gifted target object to fly through the virtual reality environment, so that the target object has a corresponding presentation track within it.
Therefore, for any target object gifted within the virtual reality environment, after its birth position is determined, the present application can determine its presentation track according to the birth position and either the central position of the virtual reality environment or the position of the recipient, so that the target object can be gifted toward the central region of the virtual reality environment or toward the surroundings of a specific user (for example, the anchor), enhancing the diversified effect of gifting within the virtual reality environment.
In addition, to enhance the interactive interest of different target objects when gifted in the virtual reality environment, the present application can analyze, according to the type of the target object, the degree of interaction that needs to be achieved when the target object is gifted, and thereby determine its gifting effect, so that different types of target objects produce different gifting effects, enhancing the interactive interest and user interaction atmosphere of gifting within the virtual reality environment.
In addition, when many users gift multiple target objects within the virtual reality environment, those target objects may be presented on the same screen, so the number of target objects presented on screen simultaneously can become large. Considering the display-performance limits of XR devices, presenting too many virtual objects on screen at once within the virtual reality environment is not supported. Therefore, the present application can preset an on-screen presentation upper limit, namely the maximum number of target objects whose simultaneous on-screen presentation the virtual reality environment supports.
Therefore, throughout the live interaction within the virtual reality environment, the present application can also detect in real time the on-screen presentation count of target objects, i.e., the number of target objects displayed simultaneously within the virtual reality environment after users gift their target objects to the performer (such as the anchor). If the on-screen presentation count of target objects within the virtual reality environment exceeds the preset on-screen presentation upper limit, the presentation-ended special effects of target objects are presented within the virtual reality environment according to the priority of each target object.
That is, if the on-screen presentation count of target objects exceeds the preset on-screen presentation upper limit, too many target objects are being presented simultaneously, which affects the display performance of the XR device. Then, to ensure optimal presentation of target objects within the virtual reality environment, the present application can set a priority in advance for each type of target object. For example, earlier-gifted target objects may have a lower priority, while target objects gifted by the current user and by users in the same room may have a higher priority.
Therefore, when the on-screen presentation count of target objects exceeds the preset on-screen presentation upper limit, the present application can sort all target objects presented on the same screen according to their priority. Then, following that ordering, the presentation-ended special effects of the target objects can be presented in sequence, so that the on-screen target objects are eliminated one by one, ensuring the efficient presentation of the target objects gifted into the virtual reality environment thereafter.
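A minimal sketch of the capped on-screen presentation just described, assuming a numeric priority where a smaller value means lower priority (e.g. earlier-gifted objects); the limit value and the heap-based eviction are illustrative choices, not taken from the application.

```python
import heapq
import itertools

MAX_ON_SCREEN = 64   # preset on-screen presentation upper limit (assumed value)

_order = itertools.count()  # tie-breaker so equal-priority gifts evict oldest first
_on_screen = []             # min-heap of (priority, order, gift)


def present_gift(gift, priority, play_end_effect):
    """Track a newly presented gift; evict lowest-priority gifts over the cap."""
    heapq.heappush(_on_screen, (priority, next(_order), gift))
    while len(_on_screen) > MAX_ON_SCREEN:
        _, _, evicted = heapq.heappop(_on_screen)  # lowest priority value first
        play_end_effect(evicted)  # presentation-ended special effect, then removal
```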
According to the technical solution provided by the embodiments of the present application, a computer-generated virtual reality environment is presented via the display of the electronic device. When a gifting operation on any target object within the virtual reality environment is detected, the birth position of the target object is determined first; the presentation track and the gifting effect of the target object within the virtual reality environment are then determined according to the birth position and the type of the target object. This achieves diversified gifting effects within the virtual reality environment, ensures that gifting a target object is intuitive and accurate, enhances the interactive interest and user interaction atmosphere of gifting, mobilizes users' enthusiasm for interaction within the virtual reality environment, and improves users' immersive experience in the virtual reality environment.
As an alternative implementation, to ensure the intuitiveness and accuracy of gifting target objects to different viewers in the virtual space, the present application explains in detail the specific process of determining the presentation track and gifting effect of each target object gifted within the virtual reality environment, as well as the specific effects presented when the target object is gifted.
FIG. 3 is a flowchart of another man-machine interaction method provided in an embodiment of the present application; the method may specifically include the following steps:
s310, presenting a computer-generated virtual reality environment via a display.
S320, in response to a gifting operation on any target object within the virtual reality environment, determining the gifting initiator of the target object.
Given that multiple viewers within the virtual reality environment may gift corresponding target objects to the anchor, and in order to preserve the overall sense of interaction when target objects are gifted, the presentation effect of every viewer's gifted target object needs to be shown within the virtual reality environment. Therefore, to ensure diversified presentation effects when target objects are gifted, when a gifting operation on any target object is detected, the gifting initiator of that target object can be determined first, so that an appropriate presentation track can be determined by analyzing the motion information produced when the initiator gifts the target object.
The gifting initiator of a target object in the present application may be the current user or any other user who has entered the virtual reality environment.
In addition, the present application can determine the gifting operation on any target object in the virtual space both by detecting in real time the gifting operation performed by the current user and by receiving the gifting operation information sent by other users.
S330, determining the birth position of the target object within the virtual reality environment according to the spatial pose of the gifting initiator.
Since the live scene pictures within the virtual reality environment are rendered for the current user, in order to intuitively display the target objects gifted by different users, the present application can analyze, for each target object gifted within the virtual reality environment, the gifting operation information produced when the initiator gifts it, so as to determine the spatial pose of that target object's gifting initiator within the virtual reality environment.
Then, according to the spatial pose of the gifting initiator and the spatial pose of the current user, the relative spatial position of the initiator with respect to the current user at the moment of gifting, and the direction in which the initiator sends out the target object, can be determined. The relative spatial position of the initiator with respect to the current user can thus be taken as the birth position of the target object within the virtual reality environment.
S340, determining the presentation track of the target object within the virtual reality environment according to the birth position and the amount of motion change of the target object when gifted.
To intuitively display the target objects gifted by different viewers in the virtual space, the present application can analyze, for each gifted target object, the gifting operation information produced when the initiator gifts it in the virtual space, so as to determine both the initiator's spatial pose within the virtual space and the instantaneous amount of motion change with which the initiator drives the target object when gifting it.
After the birth position of the target object within the virtual reality environment is determined, the instantaneous movement speed and instantaneous movement direction of the target object when gifted can be derived from the instantaneous amount of motion change with which the initiator drives the target object. Then, by jointly analyzing the direction in which the initiator sends out the target object and the target object's instantaneous movement direction, the presentation direction of the target object in the virtual space can be roughly determined.
Furthermore, from the birth position, the instantaneous movement speed, and the presentation direction of the target object in the virtual space, the presentation track along which the gifted target object is presented within the virtual reality environment can be determined. As shown in FIG. 4, target objects gifted by different users to the performer (e.g., the anchor) can follow different presentation tracks within the virtual reality environment, visually indicating the gifting initiator of each target object.
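As one possible concrete reading of S340, the sketch below samples a presentation track from the birth position and the instantaneous velocity with which the initiator released the object; the ballistic motion model, the gravity constant, and the time step are assumptions, since the application does not fix a particular motion model.

```python
import numpy as np

GRAVITY = np.array([0.0, -9.8, 0.0])  # assumed; no motion model is prescribed


def presentation_track(birth_position, velocity, steps=60, dt=1 / 30):
    """Sample a simple ballistic track starting at the birth position.

    `velocity` combines the instantaneous movement speed and direction with
    which the gifting initiator released the target object.
    """
    pos = np.asarray(birth_position, dtype=float)
    vel = np.asarray(velocity, dtype=float)
    track = [pos.copy()]
    for _ in range(steps):
        vel = vel + GRAVITY * dt   # gravity bends the track downward
        pos = pos + vel * dt
        track.append(pos.copy())
    return track
```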
S350, determining the gifting effect of the target object within the virtual reality environment according to the type of the target object.
To enhance the interactive interest of different target objects when gifted in the virtual reality environment, the present application can analyze, according to the type of the target object, the degree of interaction that needs to be achieved when the target object is gifted, and thereby determine its gifting effect, so that different types of target objects produce different gifting effects, enhancing the interactive interest and user interaction atmosphere of gifting within the virtual reality environment.
The gifting effect of a target object within the virtual reality environment may include, but is not limited to, the sending effect with which the gifting initiator sends the target object into the virtual reality environment, and the response effect produced when the target object collides with something or lands after being sent into the virtual reality environment.
S360, controlling the target object to be presented along the presentation track in the virtual reality environment.
After the presentation track of the target object within the virtual reality environment is determined, in order to ensure diversified presentation effects when the target object is gifted, as shown in FIG. 4, each target object gifted by each user can be controlled to start from the starting point of its presentation track (that is, the target object's birth position) and be presented along the track, so that the virtual reality environment can show the animated effect of multiple target objects being gifted from different positions and moving roughly toward the central region of the virtual reality environment.
In some implementations, to avoid live-interaction performance degrading when too many users within the virtual reality environment watch the same live stream, the users within the virtual reality environment may be divided into different rooms, and only users in the same room are supported in interacting with each other directly within the virtual reality environment. That is, when watching a live stream in the virtual reality environment, a user can interact directly only with the virtual character models of the other users in the same room; the virtual character models of users in other rooms cannot be seen, and hence cannot be interacted with directly.
Therefore, in order to accurately represent the relationship within the virtual reality environment between a target object's gifting initiator and the current user, different presentation effects can be set for presenting the target object along its presentation track. From the above, the gifting initiator of a target object in the present application falls into three situations: the current user, a user in the same room as the current user, and a user in a different room from the current user.
Next, the specific presentation effects with which the target object is presented along its presentation track are described for each kind of gifting initiator (a combined sketch follows the two cases):
Case one: if the presentation initiator is the current user or the same-house user of the current user, presenting the virtual part of the presentation initiator at the starting point of the presentation track, and controlling the target object to start from the virtual part and present along the presentation track.
When the presentation initiator of the target object is the current user or the same-house user of the current user, a virtual character model of the presentation initiator can be presented in the virtual reality environment. Then, for each gifted target object in the virtual reality environment, after determining the presentation track of the target object in the virtual space, in order to intuitively present the effect that the target object is gifted by the gifter, as shown in fig. 5a, a virtual part of the gifter may be presented at the starting point of the presentation track of the target object (i.e. the birth position of the target object). The virtual part may be a hand model of the donor initiator. Moreover, if the target object needs to borrow props to be gifted into the virtual reality environment, then the effect of holding the virtual props that the target object was gifted by the hand model of the gifting initiator may be presented at the start point of the presentation trajectory.
Then, for a virtual part of a presentation initiator presented at a start point of a presentation track of the target object, the target object may be controlled to start from the virtual part and be presented along the presentation track, so as to intuitively represent an effect that the target object is sent into the virtual reality environment by the virtual part.
Case two: if the gifting initiator is a user in a different room from the current user, present the fade-in special effect of the target object along the presentation track within the virtual reality environment.
When the gifting initiator of the target object is a user in a different room from the current user, the initiator's virtual character model is not presented within the virtual reality environment. Then, for each gifted target object, after its presentation track within the virtual reality environment is determined, the target object can simply be controlled to be presented along the track when it is gifted into the virtual reality environment, in order to intuitively present the effect of the target object being gifted. Moreover, as shown in FIG. 5b, during this presentation the target object can change gradually from a hidden state to a displayed state starting from the starting point of the presentation track (i.e., the target object's birth position), so that the target object fades in along the presentation track, thereby rendering the fade-in special effect of the target object along the presentation track within the virtual reality environment.
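The two cases above can be summarized in one small sketch; `show_hand` and `set_pose` stand in for whatever rendering calls the XR runtime actually provides, and the fade-in ramp length is an assumed parameter.

```python
def present_along_track(gift, track, initiator_in_same_room, show_hand, set_pose):
    """Present a gift along its track, covering the two cases described above."""
    fade_frames = 0 if initiator_in_same_room else 15  # assumed fade-in ramp length
    if initiator_in_same_room:
        show_hand(track[0])  # virtual part (hand model) of the initiator at the start
    for i, pos in enumerate(track):
        # Different-room gifts ramp from hidden to fully visible (fade-in effect).
        opacity = 1.0 if fade_frames == 0 else min(1.0, i / fade_frames)
        set_pose(gift, position=pos, opacity=opacity)
```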
S370, presenting the gifting effect of the target object within the virtual reality environment according to the preset presentation duration of the target object.
To ensure that different target objects have different gifting effects within the virtual reality environment, the present application can set in advance, according to the type of each target object, the total duration for which its presentation in the virtual space is supported, as that target object's preset presentation duration. For example, the preset presentation duration may be 10 s for a bubble gift and 50 s for a toy-duck gift.
Then, while controlling the target object to be presented along its presentation track within the virtual reality environment, the present application determines the target object's presented duration in real time. By judging in real time whether the presented duration has reached the preset presentation duration, the specific presentation stage the target object is in within the virtual reality environment can be determined. In different presentation stages, different gifting effects of the target object can be presented within the virtual reality environment.
The presentation effects of a target object within the virtual reality environment may include, but are not limited to, the presentation pose of the target object as it travels along its presentation track and the target object's presentation-completion effect.
Next, the gifting effects of the target object in different presentation stages are explained (a per-frame sketch follows the two cases):
In the first case, during the presentation stage in which the target object's presented duration has not yet reached the preset presentation duration, in order to enhance the diversified presentation effect of the target object presented along its track, the present application can determine, according to the type of the target object, the presentation pose the target object adopts while presented along the presentation track.
That is, different types of target objects can be given different presentation effects within the virtual reality environment; for example, a love-heart gift may be presented facing its gifting initiator, while a ball gift may be presented spinning continuously.
Therefore, while the target object is presented along its presentation track, it can be controlled to keep adjusting its pose according to the real-time presentation pose, for example being presented into the virtual reality environment facing its gifting initiator rather than having every target object face the current user, thereby enhancing the presentation effect of the target object within the virtual reality environment.
In the second case, when the target object's presented duration reaches the preset presentation duration, its presentation within the virtual reality environment is complete; the presentation-completion effect of the target object can then be presented within the virtual reality environment, after which the target object is no longer presented.
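A per-frame sketch of S370 under assumed names: each gift type carries its preset presentation duration (the 10 s and 50 s figures come from the text above), a type-specific pose rule is applied while presenting, and the completion effect plays once the presented duration reaches the preset duration.

```python
PRESET_DURATION = {"bubble": 10.0, "toy_duck": 50.0}  # seconds, from the text above
DEFAULT_DURATION = 30.0                               # fallback (assumed value)


def tick_gift(gift, presented_elapsed, face_initiator, spin, play_completion):
    """Advance one gift by one frame; return False once its presentation ends."""
    if presented_elapsed >= PRESET_DURATION.get(gift.kind, DEFAULT_DURATION):
        play_completion(gift)   # presentation-completion effect, then removal
        return False
    # Type-specific presentation pose while travelling along the track.
    if gift.kind == "love_heart":
        face_initiator(gift)    # a love-heart gift faces its gifting initiator
    elif gift.kind == "ball":
        spin(gift)              # a ball gift keeps rotating
    return True
```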
In some implementations, because the target object may collide with an object during its presentation within the virtual reality environment, the presentation effects of the target object can also include the collision response effect after the target object collides with any object in the virtual reality environment. The object may be a purely fictional virtual object or the projection of a real object in the real environment.
Next, the gifting effects the target object presents in different collision situations are explained:
Case one: if the target object collides with any object within the virtual reality environment while its presented duration is less than the preset presentation duration, present the collision response effect of the target object within the virtual reality environment.
While the target object is being controlled to present along its presentation track within the virtual reality environment, if it collides with any object in the virtual reality environment before its presented duration reaches the preset presentation duration, the collision response effect of the target object can be presented within the virtual reality environment, in order to enhance the target object's gifting effect.
It should be noted that, in the present application, different types of target objects have different collision response effects, which enhances the collision effects of the various target objects. For example, a bubble gift may disappear after colliding with any object, whereas a ball gift may rebound after colliding with any object.
Therefore, according to their collision response effects, the present application can divide target objects into two types: first-type virtual objects and second-type virtual objects. A first-type virtual object is a target object that should disappear after a collision, such as a bubble gift or a firework gift. A second-type virtual object is a target object that does not disappear after a collision but rebounds instead, such as a ball gift, a toy-duck gift, or a love-heart gift.
Next, the collision response effects after target objects of the different types collide with any object are explained:
in the first case: and if the target object is a first type of virtual object, presenting the collision destruction effect of the target object in the virtual reality environment.
The target object is a first-class virtual object, which indicates that the target object disappears after collision with any object, and does not continue to be presented in the virtual reality environment. Therefore, if the target object collides with any object in the virtual reality environment when the presented time length is smaller than the preset presenting time length and the target object is a first type of virtual object, the present application can present the collision destruction effect of the target object in the virtual reality environment so as to cancel the presentation of the target object in the virtual reality environment.
Taking a target object as a bubble gift as an example, if a bubble gift presented in the virtual reality environment collides with any object in the virtual reality environment when the presented time period does not reach the preset presented time period, the explosion animation of the bubble gift can be played in the virtual reality environment and used as the collision destroying effect of the bubble gift, so that the presentation of the bubble gift in the virtual reality environment is canceled.
As an alternative implementation in the present application, the virtual reality environment contains a variety of virtual objects, which may include, but are not limited to, boundary elements of the virtual reality environment, interior decoration elements, and other target objects that have already been gifted. When the collision destruction effect of a target object is presented near certain virtual objects in the virtual reality environment, the realistic presentation of those virtual objects may be affected. For example, if a firework gift collides with a character object in the virtual reality environment and the firework explosion effect is presented directly around that character object, the character may visually appear to have been blown up, spoiling the realism of the character object's presentation within the virtual reality environment.
Therefore, the specific way the collision destruction effect of the target object is presented within the virtual reality environment needs to be described according to the collision object with which the target object collides:
1) If the collision object of the target object is a defined safe zone boundary within the virtual reality environment, the collision destruction effect of the target object is presented at the collision position.
In order to ensure immersive interactive experience of each user in the virtual reality environment, a safety zone can be defined in advance in the virtual reality environment according to the space size of the reality environment where the current user is located.
In some implementations, if the real environment in which the current user is located is a closed room, the defined safe zone within the virtual reality environment may be a spatial zone that the current user controls the handle ray movement of the XR device to point to and remain consistent with the closed room formed by the respective boundary lines of the closed room.
In other implementations, if the real environment in which the current user is located is an unsealed external open environment, the defined safe zone within the virtual reality environment may control handle ray movement of the XR device for the current user, a closed space region divided within the real environment. Alternatively, the defined safe area in the virtual reality environment may be a closed space area that is defined by taking the current location of the current user in the reality environment as a center point and according to a preset length.
In the present application, the defined safe zone boundary within the virtual reality environment may include the boundary surfaces that form the safe zone. For example, assuming that the safe zone is a cube region within the virtual reality environment centered on the current user, the safe zone boundary may include the six interior surfaces of that cube region.
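For illustration only, one possible representation of such a safe zone, an axis-aligned cube centered on the current user whose six interior faces form the boundary, is sketched below; the half_extent parameter and the face naming are assumptions of this sketch:

    def make_safe_zone(center, half_extent):
        """Return (min_corner, max_corner) of a cube centered on the user."""
        cx, cy, cz = center
        return ((cx - half_extent, cy - half_extent, cz - half_extent),
                (cx + half_extent, cy + half_extent, cz + half_extent))

    def hit_boundary_face(point, zone, eps=1e-3):
        """Name the boundary face, if any, that a collision point lies on."""
        mn, mx = zone
        axes = ("x", "y", "z")
        for i in range(3):
            if abs(point[i] - mn[i]) < eps:
                return f"-{axes[i]} face"
            if abs(point[i] - mx[i]) < eps:
                return f"+{axes[i]} face"
        return None  # the point is not on the safe zone boundary

    zone = make_safe_zone(center=(0.0, 1.6, 0.0), half_extent=2.0)
    print(hit_boundary_face((2.0, 1.0, 0.5), zone))  # -> "+x face"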
Therefore, if the target object is a first type of virtual object and its collision object is a defined safe zone boundary within the virtual reality environment, the target object has struck a boundary surface of the safe zone, and the safe zone boundary is not adversely affected by the collision destruction effect of the target object. Then, as shown in Fig. 6a, the present application may determine the collision position of the target object on the safe zone boundary and present the collision destruction effect of the target object directly at that collision position.
2) If the collision object of the target object is an internal scene object in the virtual reality environment, the target object is controlled to rebound from the collision position, and the collision destruction effect of the target object is presented in the virtual reality environment according to the preset rebound duration and the preset presentation duration.
In order to ensure an immersive interactive experience of a user in a virtual reality environment, various virtual objects capable of supporting different user interaction operations are generally arranged in the virtual reality environment to form internal scene objects in the virtual reality environment. The internal scene objects may include, but are not limited to, scene decoration elements, character objects, and other target objects that are presented after gifting within the virtual reality environment.
Because internal scene objects in the virtual reality environment support interactive operations by each viewer, they should not be made to bear the impact of the collision destruction effect of the target object. Therefore, if the target object is a first type of virtual object and its collision object is an internal scene object in the virtual reality environment, then in order to avoid the collision destruction effect affecting the internal scene object, the present application may first determine the collision position between the target object and the internal scene object and, as shown in Fig. 6b, control the target object to rebound from that collision position.
In addition, in order to ensure the normal presentation of the collision destruction effect of the target object, a rebound duration can be set in advance in the present application as the preset rebound duration. Then, after the target object is controlled to rebound from the collision position, the rebounded duration of the target object can be determined in real time in order to judge whether it reaches the preset rebound duration.
The present application can then judge whether the collision destruction effect of the target object needs to be presented by detecting whether the rebounded duration of the target object reaches the preset rebound duration or the presented duration of the target object reaches the preset presentation duration.
If the rebounded duration of the target object reaches the preset rebound duration before its presented duration reaches the preset presentation duration, the target object has reached the maximum duration for which post-collision rebound is supported. Conversely, if the presented duration of the target object reaches the preset presentation duration before its rebounded duration reaches the preset rebound duration, the target object has reached the maximum duration for which presentation is supported.
Therefore, when the target object satisfies either one of the above two conditions, the collision destruction effect of the target object can be presented within the virtual reality environment.
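For illustration only, the two destruction paths and the either-of-two-durations condition above may be sketched as a per-frame update; the field names, the preset values, and the reflect() helper are assumptions of this sketch:

    from dataclasses import dataclass

    @dataclass
    class FirstTypeGift:
        position: list
        velocity: list
        presented: float = 0.0       # duration presented so far
        rebounded: float = -1.0      # duration since rebound; -1 means no rebound yet
        PRESET_PRESENT: float = 5.0  # preset presentation duration
        PRESET_REBOUND: float = 1.0  # preset rebound duration

    def reflect(velocity, normal):
        """Mirror the velocity about the collision surface normal."""
        dot = sum(v * n for v, n in zip(velocity, normal))
        return [v - 2.0 * dot * n for v, n in zip(velocity, normal)]

    def on_collision(gift, collision_object, normal, destroy):
        if collision_object == "safe_zone_boundary":
            destroy(gift)            # case 1): destroy at the collision position
        else:                        # case 2): internal scene object, rebound first
            gift.velocity = reflect(gift.velocity, normal)
            gift.rebounded = 0.0     # start timing the rebound

    def update(gift, dt, destroy):
        gift.presented += dt
        if gift.rebounded >= 0.0:
            gift.rebounded += dt
        # Destroy as soon as either maximum supported duration is reached.
        if (gift.presented >= gift.PRESET_PRESENT
                or gift.rebounded >= gift.PRESET_REBOUND):
            destroy(gift)

    g = FirstTypeGift([0.0, 1.0, 0.0], [1.0, 0.0, 0.0])
    on_collision(g, "interior_decoration", [-1.0, 0.0, 0.0], print)
    update(g, 1.0, print)            # rebound duration reached, gift destroyed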
In the second case: if the target object is a second type of virtual object, a collision rebound effect of the target object is presented in the virtual reality environment.
If the target object is a second type of virtual object, it does not disappear after colliding with any object and continues to be presented in the virtual reality environment. Therefore, if the target object is a second type of virtual object and collides with any virtual object in the virtual reality environment while its presented duration is less than the preset presentation duration, the present application may present the collision rebound effect of the target object in the virtual reality environment so as to change its presentation track, allowing the target object to continue to be presented in the virtual reality environment.
It can be appreciated that the normal rebound of a second type of virtual object after a collision in the virtual reality environment is not limited by the preset rebound duration. After the collision rebound effect of the target object is presented, the target object continues to be presented until its presented duration reaches the preset presentation duration, at which point the presentation end special effect of the target object is presented in the virtual reality environment to cancel its presentation.
As an optional implementation in the present application, for the collision rebound effect of the target object, the present application can control the target object to rebound from the collision position and present the associated interaction effect of the target object in the virtual reality environment.
That is, when the target object collides with any object in the virtual reality environment, the collision position between the target object and the collision object is first determined and, as shown in Fig. 7, the target object is controlled to rebound from the collision position, thereby changing the presentation track of the target object in the virtual reality environment.
In addition, in order to enhance the collision rebound effect of the target object in the virtual reality environment, the present application sets an associated interaction effect for the target object, so that the collision rebound further triggers the associated interaction of the target object and ensures diversified interaction in the virtual reality environment.
Furthermore, after the target object is controlled to rebound from the collision position, the present application can synchronously present the associated interaction effect of the target object in the virtual reality environment during the rebound movement of the target object.
Taking a toy duck gift as an example of the target object: if a toy duck gift presented in the virtual reality environment collides with any object before its presented duration reaches the preset presentation duration, the toy duck gift can be controlled to rebound from the collision position while being synchronously controlled to dance, the dance serving as the associated interaction effect of the toy duck gift.
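For illustration only, the second-type rebound path may be sketched as follows, reusing the reflect() helper, the gift fields, and the Scene stand-in from the sketches above; the "dance" animation name mirrors the toy duck example and is purely illustrative:

    def on_collision_second_type(gift, normal, scene):
        # A second type of virtual object never disappears on impact:
        # its presentation track changes through the rebound...
        gift.velocity = reflect(gift.velocity, normal)
        # ...while its associated interaction effect (e.g. the toy duck
        # starting to dance) is presented synchronously during the rebound.
        scene.play_effect("associated_interaction:dance", gift.position)

    def update_second_type(gift, dt, scene):
        # No preset rebound duration applies; only the presentation duration.
        gift.presented += dt
        if gift.presented >= gift.PRESET_PRESENT:
            scene.play_effect("presentation_end", gift.position)
            scene.remove(gift)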
In the second scenario: if the target object does not collide with any object in the virtual reality environment when the presented duration is equal to the preset presentation duration, the presentation end special effect of the target object is presented in the virtual reality environment.
In the process of controlling the target object to be presented along the presentation track in the virtual reality environment, if the presented duration of the target object has reached the preset presentation duration without the target object colliding with any object in the virtual reality environment, the target object has completed its presentation in the virtual reality environment. Then, the presentation end effect of the target object may be presented directly in the virtual reality environment to cancel the presentation of the target object.
According to the technical solution provided by the embodiments of the present application, when a gifting operation on any target object is detected in the virtual reality environment, the gifting initiator of the target object is determined first. Then, the presentation track of the target object in the virtual reality environment is determined according to the spatial pose of the gifting initiator and the motion variation of the target object when being presented, and the gifting effect of the target object is determined according to its type, so that target objects gifted by different users present different tracks and effects in the virtual reality environment. This realizes diversified gifting effects for target objects in the virtual reality environment, ensures the intuitiveness and accuracy of gifting target objects, enhances the interactive interest and interactive atmosphere when target objects are gifted, mobilizes users' enthusiasm for interaction in the virtual reality environment, and improves users' immersive experience.
Fig. 8 is a schematic diagram of a man-machine interaction apparatus provided in an embodiment of the present application. The man-machine interaction apparatus 800 may be configured at an electronic device in communication with a display and one or more input devices, and the apparatus 800 includes:
an environment presentation module 810, for presenting a computer-generated virtual reality environment via the display;
a birth position determination module 820, for determining, in response to a gifting operation on any target object within the virtual reality environment, the birth position of the target object within the virtual reality environment;
and an interaction module 830, for determining the presentation track and the gifting effect of the target object in the virtual reality environment according to the birth position and the type of the target object.
In some implementations, the birth position determination module 820 may be specifically configured to (a sketch of one possible computation follows this list):
determining a gifting initiator of the target object;
and determining the birth position of the target object in the virtual reality environment according to the spatial pose of the gifting initiator.
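For illustration only, one plausible computation spawns the gift slightly in front of the gifting initiator; the Pose fields and the forward-offset rule are assumptions of this sketch, not a rule stated by the present application:

    from dataclasses import dataclass

    @dataclass
    class Pose:
        position: tuple  # (x, y, z) of the gifting initiator
        forward: tuple   # unit vector of the initiator's facing direction

    def birth_position(initiator, offset=0.5):
        """Place the birth position a fixed offset in front of the initiator."""
        px, py, pz = initiator.position
        fx, fy, fz = initiator.forward
        return (px + offset * fx, py + offset * fy, pz + offset * fz)

    print(birth_position(Pose((0.0, 1.6, 0.0), (0.0, 0.0, 1.0))))  # (0.0, 1.6, 0.5)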
In some implementations, the interaction module 830 may be specifically configured to:
determining a presentation track of the target object in the virtual reality environment according to the birth position and the motion variation of the target object when being presented;
and determining the gifting effect of the target object in the virtual reality environment according to the type of the target object.
In some implementations, the man-machine interaction apparatus 800 may further include:
a track presentation module, for controlling the target object to be presented along the presentation track in the virtual reality environment;
and a gifting effect presentation module, for presenting the gifting effect of the target object in the virtual reality environment according to the preset presentation duration of the target object.
In some implementations, the gifting effect includes a presentation pose of the target object as it is presented along the presentation track and a presentation end effect of the target object.
In some implementations, the track presentation module may be specifically configured to:
if the gifting initiator is the current user or a same-room user of the current user, present a virtual part of the gifting initiator at the starting point of the presentation track, and control the target object to start from the virtual part and be presented along the presentation track;
and if the gifting initiator is a different-room user of the current user, present the fade-in special effect of the target object along the presentation track in the virtual reality environment.
In some implementations, the gifting effect of the target object includes a collision response effect after the target object collides with any object within the virtual reality environment, the object including a purely fictitious virtual object or a projection of a real object from the real environment.
In some implementations, the gifting effect presentation module may include:
a collision subunit, for presenting a collision response effect of the target object in the virtual reality environment if the target object collides with any object in the virtual reality environment when the presented duration is less than the preset presentation duration;
and a presentation end subunit, for presenting the presentation end effect of the target object in the virtual reality environment if the target object does not collide with any object in the virtual reality environment when the presented duration is equal to the preset presentation duration.

In some implementations, the collision subunit may be specifically configured to:
if the target object is a first type of virtual object, presenting a collision destruction effect of the target object in the virtual reality environment;
and if the target object is a second type of virtual object, presenting a collision rebound effect of the target object in the virtual reality environment.
In some implementations, the collision subunit may be specifically configured to:
if the collision object of the target object is a defined safe zone boundary in the virtual reality environment, presenting a collision destruction effect of the target object at a collision position;
And if the collision object of the target object is an internal scene object in the virtual reality environment, controlling the target object to rebound from the collision position, and presenting the collision destruction effect of the target object in the virtual reality environment according to the preset rebound duration and the preset presentation duration.
In some implementations, the internal scene objects include scene decoration elements, character objects, and other target objects that are presented after gifting within the virtual reality environment.
In some implementations, the collision subunit may also be specifically configured to:
and controlling the target object to rebound from the collision position, and presenting the associated interaction effect of the target object in the virtual reality environment.
In some implementations, the gifting operation on any target object within the virtual reality environment includes at least one of the following (a dispatch sketch follows this list):
gifting by clicking any target object in the user interaction interface;
gifting by picking up any target object within a user interaction interface and casting the target object into the virtual reality environment;
and gifting by selecting any target object via a voice instruction.
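For illustration only, dispatching the three gifting operations above may be sketched as follows; the event shapes, the gift() callback, and the trivial voice parser are assumptions of this sketch rather than the application's actual interface:

    def parse_gift_from_utterance(utterance):
        """Trivial stand-in for a speech-command parser."""
        return utterance.rsplit(" ", 1)[-1]

    def handle_gift_event(event, gift):
        if event["type"] == "click":            # 1) click a gift in the UI
            gift(event["gift_id"])
        elif event["type"] == "pick_and_cast":  # 2) pick up and cast into the scene
            gift(event["gift_id"], velocity=event["cast_velocity"])
        elif event["type"] == "voice":          # 3) select a gift by voice command
            gift(parse_gift_from_utterance(event["utterance"]))

    handle_gift_event({"type": "voice", "utterance": "send a firework"}, print)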
In some implementations, the man-machine interaction apparatus 800 may further include:
and an on-screen presentation ending module, for presenting, according to the priority of the target object, the presentation end special effect of the target object in the virtual reality environment if the number of on-screen presentations of the target object in the virtual reality environment is greater than a preset on-screen presentation upper limit.
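For illustration only, the on-screen cap may be sketched as priority-based eviction; the numeric priority convention (a lower value is evicted first) and the end_effect callback are assumptions of this sketch:

    import heapq

    def enforce_screen_limit(on_screen, limit, end_effect):
        """on_screen: list of (priority, gift) pairs; lower value = lower priority."""
        heapq.heapify(on_screen)
        while len(on_screen) > limit:
            _, gift = heapq.heappop(on_screen)  # evict the lowest-priority gift
            end_effect(gift)                    # play its presentation end special effect

    on_screen = [(3, "firework"), (1, "bubble"), (2, "duck")]
    enforce_screen_limit(on_screen, limit=2,
                         end_effect=lambda g: print("ending", g))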
In some implementations, the virtual reality environment includes a purely fictitious virtual scene, a semi-fictitious, semi-simulated scene in which a virtual environment is fused with the real environment, and a live interaction scene composed of a video play field and a Unity interaction field.
In the embodiments of the present application, a computer-generated virtual reality environment is presented via a display of an electronic device. When a gifting operation on any target object within the virtual reality environment is detected, the birth position of the target object within the virtual reality environment is determined first, and then the presentation track and the gifting effect of the target object in the virtual reality environment are determined according to the birth position and the type of the target object. This realizes diversified gifting effects for target objects in the virtual reality environment, ensures the intuitiveness and accuracy of gifting target objects, enhances the interactive interest and user interaction atmosphere when target objects are gifted, mobilizes users' enthusiasm for interaction in the virtual reality environment, and improves users' immersive experience.
It should be understood that the apparatus embodiments may correspond to the method embodiments in the present application, and similar descriptions may refer to the method embodiments. To avoid repetition, details are not repeated here.
Specifically, the apparatus 800 shown in Fig. 8 may perform any method embodiment provided herein, and the foregoing and other operations and/or functions of the modules in the apparatus 800 shown in Fig. 8 are respectively intended to implement the corresponding flows of the method embodiments described above; for brevity, they are not repeated here.
The above method embodiments of the present application are described from the perspective of functional modules in conjunction with the accompanying drawings. It should be understood that the functional modules may be implemented in hardware, by instructions in software, or by a combination of hardware and software modules. Specifically, the steps of the method embodiments in the embodiments of the present application may be completed by integrated logic circuits of hardware in a processor and/or instructions in software form, and the steps of the methods disclosed in connection with the embodiments of the present application may be directly embodied as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in the decoding processor. Optionally, the software modules may be located in a storage medium well established in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or registers. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method embodiments in combination with its hardware.
Fig. 9 is a schematic block diagram of an electronic device provided in an embodiment of the present application.
As shown in Fig. 9, the electronic device 900 may include:
a memory 910 and a processor 920, the memory 910 being configured to store a computer program and to transfer the program code to the processor 920. In other words, the processor 920 may call and run a computer program from the memory 910 to implement the methods in the embodiments of the present application.
For example, the processor 920 may be configured to perform the above-described method embodiments according to instructions in the computer program.
In some embodiments of the present application, the processor 920 may include, but is not limited to:
a general purpose processor, digital signal processor (Digital Signal Processor, DSP), application specific integrated circuit (Application Specific Integrated Circuit, ASIC), field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like.
In some embodiments of the present application, the memory 910 includes, but is not limited to:
volatile memory and/or nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), and direct Rambus RAM (DR RAM).
In some embodiments of the present application, the computer program may be partitioned into one or more modules that are stored in the memory 910 and executed by the processor 920 to perform the methods provided herein. The one or more modules may be a series of computer program instruction segments capable of performing specified functions, the instruction segments describing the execution of the computer program by the electronic device 900.
As shown in Fig. 9, the electronic device may further include:
a transceiver 930, the transceiver 930 being connectable to the processor 920 or the memory 910.
The processor 920 may control the transceiver 930 to communicate with other devices, and in particular, may send information or data to other devices or receive information or data sent by other devices. Transceiver 930 may include a transmitter and a receiver. Transceiver 930 may further include antennas, the number of which may be one or more.
It should be appreciated that the various components in the electronic device 900 are connected by a bus system that includes a power bus, a control bus, and a status signal bus in addition to a data bus.
The present application also provides a computer storage medium having stored thereon a computer program which, when executed by a computer, enables the computer to perform the method of the above-described method embodiments.
Embodiments of the present application also provide a computer program product comprising instructions which, when executed by a computer, cause the computer to perform the method of the method embodiments described above.
When the embodiments are implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a digital video disc (DVD)), or a semiconductor medium (e.g., a solid state disk (SSD)), among others.
The foregoing is merely a specific embodiment of the present application, but the protection scope of the present application is not limited thereto. Any person skilled in the art could readily conceive of changes or substitutions within the technical scope disclosed in the present application, and such changes or substitutions shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (19)

1. A human-machine interaction method, characterized in that the method comprises:
at an electronic device in communication with a display and one or more input devices:
presenting a computer-generated virtual reality environment via the display;
determining, in response to a gifting operation on any target object within the virtual reality environment, a birth position of the target object within the virtual reality environment;
and determining the presentation track and the gifting effect of the target object in the virtual reality environment according to the birth position and the type of the target object.
2. The method of claim 1, wherein the determining the birth position of the target object within the virtual reality environment comprises:
determining a gifting initiator of the target object;
and determining the birth position of the target object in the virtual reality environment according to the spatial pose of the gifting initiator.
3. The method of claim 1, wherein the determining the presentation track and the gifting effect of the target object within the virtual reality environment based on the birth position and the type of the target object comprises:
determining a presentation track of the target object in the virtual reality environment according to the birth position and the motion variation of the target object when being presented;
and determining the gifting effect of the target object in the virtual reality environment according to the type of the target object.
4. A method according to claim 3, characterized in that the method further comprises:
controlling the target object to be presented along the presentation track in the virtual reality environment;
and presenting the gifting effect of the target object in the virtual reality environment according to the preset presentation duration of the target object.
5. The method of claim 4, wherein the gifting effect comprises a presentation pose of the target object as it is presented along the presentation track and a presentation end effect of the target object.
6. The method of claim 4, wherein controlling the presentation of the target object along the presentation track within the virtual reality environment comprises:
if the gifting initiator of the target object is the current user or a same-room user of the current user, presenting a virtual part of the gifting initiator at the starting point of the presentation track, and controlling the target object to start from the virtual part and be presented along the presentation track;
and if the gifting initiator of the target object is a different-room user of the current user, presenting the fade-in special effect of the target object along the presentation track in the virtual reality environment.
7. The method of claim 4, wherein the gifting effect of the target object comprises a collision response effect after the target object collides with any object within the virtual reality environment, the object comprising a purely fictitious virtual object or a projection of a real object from the real environment.
8. The method of claim 7, wherein the presenting the gifting effect of the target object within the virtual reality environment according to the preset presentation duration of the target object comprises:
if the target object collides with any object in the virtual reality environment when the presented duration is less than the preset presentation duration, presenting a collision response effect of the target object in the virtual reality environment;
and if the target object does not collide with any object in the virtual reality environment when the presented duration is equal to the preset presentation duration, presenting the presentation end effect of the target object in the virtual reality environment.
9. The method of claim 8, wherein the presenting the collision response effect of the target object within the virtual reality environment comprises:
if the target object is a first type of virtual object, presenting a collision destruction effect of the target object in the virtual reality environment;
and if the target object is a second type of virtual object, presenting a collision rebound effect of the target object in the virtual reality environment.
10. The method of claim 9, wherein the presenting the impact destruction effect of the target object within the virtual reality environment comprises:
if the collision object of the target object is a defined safe zone boundary in the virtual reality environment, presenting a collision destruction effect of the target object at a collision position;
and if the collision object of the target object is an internal scene object in the virtual reality environment, controlling the target object to rebound from the collision position, and presenting the collision destruction effect of the target object in the virtual reality environment according to the preset rebound duration and the preset presentation duration.
11. The method of claim 10, wherein the internal scene objects include scene decoration elements, character objects, and other target objects that are presented after gifting within the virtual reality environment.
12. The method of claim 9, wherein the presenting the impact resilience effects of the target object within the virtual reality environment comprises:
and controlling the target object to rebound from the collision position, and presenting the associated interaction effect of the target object in the virtual reality environment.
13. The method of claim 1, wherein the gifting of any target object within the virtual reality environment comprises at least one of:
gifting by clicking any target object in the user interaction interface;
gifting by picking up any target object within a user interaction interface and casting the target object into the virtual reality environment;
and gifting by selecting any target object via a voice instruction.
14. The method according to claim 1, wherein the method further comprises:
and if the number of on-screen presentations of the target object in the virtual reality environment is greater than a preset on-screen presentation upper limit, presenting the presentation end special effect of the target object in the virtual reality environment according to the priority of the target object.
15. The method of any of claims 1-14, wherein the virtual reality environment comprises a purely fictitious virtual scene, a semi-fictitious, semi-simulated scene in which a virtual environment is fused with the real environment, and a live interaction scene composed of a video play field and a Unity interaction field.
16. A human-machine interaction device, the device comprising:
at an electronic device in communication with a display and one or more input devices, there are configured:
an environment presentation module for presenting a computer-generated virtual reality environment via the display;
a birth position determining module, for determining, in response to a gifting operation on any target object within the virtual reality environment, a birth position of the target object within the virtual reality environment;
and an interaction module, for determining the presentation track and the gifting effect of the target object in the virtual reality environment according to the birth position and the type of the target object.
17. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the human-machine interaction method of any of claims 1-15 via execution of the executable instructions.
18. A computer readable storage medium having stored thereon a computer program, which when executed by a processor implements the human machine interaction method of any of claims 1-15.
19. A computer program product comprising instructions which, when run on an electronic device, cause the electronic device to perform the human-machine interaction method of any of claims 1-15.