CN117221641A - Virtual interaction method, device, equipment and storage medium - Google Patents


Info

Publication number
CN117221641A
CN117221641A (application CN202311061397.0A)
Authority
CN
China
Prior art keywords
interaction
target
virtual space
interactive
prompt
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311061397.0A
Other languages
Chinese (zh)
Inventor
汪圣杰
付平非
杨帆
李正卿
李梓宁
黄翔宇
冀利悦
曾思聪
高银波
冯涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd
Priority to CN202311061397.0A
Publication of CN117221641A
Legal status: Pending


Abstract

The application provides a virtual interaction method, apparatus, device, and storage medium. The method includes: adjusting current environment information of a virtual space to target environment information according to an interaction trigger operation issued by a first object, where the target environment information includes an interaction area and the first object is located in the interaction area; selecting at least one target second object from at least two second objects in the virtual space in response to a selection operation on the second objects; and transferring each target second object from its current position into the interaction area so that the first object interacts with each target second object. The application enriches the interaction forms of the virtual space and makes interaction in the virtual space more engaging, thereby increasing viewers' enthusiasm for virtual interaction with the anchor.

Description

Virtual interaction method, device, equipment and storage medium
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a virtual interaction method, a device, equipment and a storage medium.
Background
With the development of the internet, virtual spaces have gradually gained wide public adoption as a new mode of information exchange. For example, a user may create a virtual space in the identity of an anchor and give live performances there, such as talent shows or knowledge sharing. Typically, while the anchor performs in the virtual space, viewers can interact virtually with the anchor by giving virtual gifts, sending bullet-screen comments, and so on. However, this existing interaction mode is monotonous and lacks interest.
Disclosure of Invention
The embodiments of the application provide a virtual interaction method, apparatus, device, and storage medium, which can enrich the interaction forms of a virtual space and make interaction in the virtual space more engaging, thereby increasing viewers' enthusiasm for virtual interaction with the anchor.
In a first aspect, an embodiment of the present application provides a virtual interaction method, including:
adjusting current environment information of a virtual space to target environment information according to an interaction trigger operation issued by a first object, where the target environment information includes an interaction area and the first object is located in the interaction area;
selecting at least one target second object from at least two second objects in the virtual space in response to a selection operation on the second objects;
and transferring each target second object from its current position into the interaction area so that the first object interacts with each target second object.
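As an illustrative, non-normative sketch, the three claimed steps can be modeled as follows; every class, field, and function name here is invented for illustration and does not come from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class EnvironmentInfo:
    has_interaction_area: bool = False

@dataclass
class VirtualSpace:
    second_objects: list                     # the viewers (second objects)
    environment: EnvironmentInfo = field(default_factory=EnvironmentInfo)
    interaction_area: set = field(default_factory=set)

    def on_interaction_trigger(self) -> None:
        # Step 1: adjust the current environment info to the target info,
        # which includes an interaction area holding the first object.
        self.environment = EnvironmentInfo(has_interaction_area=True)

    def select_targets(self, selection: set) -> list:
        # Step 2: select at least one target second object from the
        # (at least two) second objects in the virtual space.
        return [v for v in self.second_objects if v in selection]

    def transfer_to_area(self, targets: list) -> None:
        # Step 3: transfer each target from its current position into the
        # interaction area so the first object can interact with it.
        for viewer in targets:
            self.interaction_area.add(viewer)

space = VirtualSpace(second_objects=["viewer_a", "viewer_b", "viewer_c"])
space.on_interaction_trigger()
targets = space.select_targets({"viewer_b", "viewer_c"})
space.transfer_to_area(targets)
```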
In a second aspect, an embodiment of the present application provides a virtual interactive apparatus, including:
an environment adjustment module, configured to adjust current environment information of the virtual space to target environment information according to an interaction trigger operation issued by a first object, where the target environment information includes an interaction area and the first object is located in the interaction area;
an object selection module, configured to select at least one target second object from at least two second objects in the virtual space in response to a selection operation on the second objects;
and an object interaction module, configured to transfer each target second object from its current position into the interaction area so that the first object interacts with each target second object.
In a third aspect, an embodiment of the present application provides an electronic device, including:
a processor and a memory, where the memory is configured to store a computer program and the processor is configured to call and run the computer program stored in the memory to perform the virtual interaction method of the first-aspect embodiment or its implementations.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program, where the computer program causes a computer to execute the virtual interaction method as described in the first aspect embodiment or implementations thereof.
In a fifth aspect, embodiments of the present application provide a computer program product comprising program instructions which, when run on an electronic device, cause the electronic device to perform the virtual interaction method as described in the embodiments of the first aspect or implementations thereof.
The technical scheme disclosed by the embodiment of the application has at least the following beneficial effects:
the current environment information of the virtual space is adjusted to target environment information corresponding to an interaction operation triggered by the first object in the virtual space, so that the first object can interact virtually, within the interaction area provided by the target environment information, with any second object in the virtual space. This enriches the interaction modes of the virtual space and makes interaction in it more engaging, thereby increasing viewers' enthusiasm for virtual interaction with the anchor.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic view of an application scenario of a virtual interaction method according to an embodiment of the present application;
FIG. 2 is a flowchart of a first virtual interaction method according to an embodiment of the present application;
FIG. 3 is a schematic diagram illustrating different panels disposed in a virtual space according to an embodiment of the present application;
FIG. 4 is a schematic diagram of virtual space environment information as target environment information according to an embodiment of the present application;
FIG. 5 is a schematic diagram of an interactive region and a visual effect presented at the interface between the interactive region and the non-interactive region according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a method for transferring viewers from an interactive area to a non-interactive area according to an embodiment of the present application;
FIG. 7a is a schematic diagram of a status tag of a viewer in a first virtual space according to an embodiment of the present application;
FIG. 7b is a schematic diagram of a status tag of a viewer in a second virtual space according to an embodiment of the present application;
FIG. 7c is a schematic diagram of a status tag of a viewer in a third virtual space according to an embodiment of the present application;
FIG. 7d is a schematic diagram of a status tag of a viewer in a fourth virtual space according to an embodiment of the present application;
FIG. 7e is a schematic diagram of a status tag of a viewer in a fifth virtual space according to an embodiment of the present application;
FIG. 8 is a flowchart for adjusting virtual space environment information according to an embodiment of the present application;
FIG. 9 is a flowchart of a second virtual interaction method according to an embodiment of the present application;
FIG. 10 is a flowchart of a third virtual interaction method according to an embodiment of the present application;
FIG. 11a is a schematic diagram showing a first prompt pop-up window to a viewer according to an embodiment of the present application;
FIG. 11b is a schematic diagram showing a second prompt pop-up window to a viewer according to an embodiment of the present application;
FIG. 11c is a schematic diagram showing a third prompt pop-up window to a viewer according to an embodiment of the present application;
FIG. 12 is a flowchart of a fourth virtual interaction method according to an embodiment of the present application;
FIG. 13 is a schematic diagram showing an interactive application control at a boundary position of an interactive area according to an embodiment of the present application;
FIG. 14 is a flowchart of a fifth virtual interaction method according to an embodiment of the present application;
FIG. 15a is a schematic view of at least one interaction site disposed in an interaction region according to an embodiment of the present application;
FIG. 15b is a transmission sequence diagram of transmitting a target object to a corresponding interaction site according to an embodiment of the present application;
FIG. 16 is a flowchart of a sixth virtual interaction method according to an embodiment of the present application;
FIG. 17 is a schematic block diagram of a virtual interactive apparatus according to an embodiment of the present application;
fig. 18 is a schematic block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
It should be noted that the terms "first," "second," and the like in the description, the claims, and the above figures are used to distinguish between similar objects, not necessarily to describe a particular sequential or chronological order. It is to be understood that data so labeled may be interchanged where appropriate, so that the embodiments described herein may be implemented in sequences other than those illustrated or described. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In embodiments of the application, the words "exemplary" or "such as" are used to mean that any embodiment or aspect of the application described as "exemplary" or "such as" is not to be interpreted as preferred or advantageous over other embodiments or aspects. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
In order to facilitate understanding of the embodiments of the present application, before describing the embodiments of the present application, some concepts related to all embodiments of the present application are explained appropriately, specifically as follows:
1) Virtual Reality (VR): a technology for creating and experiencing a virtual world. VR computes a virtual environment from multi-source information (the virtual reality mentioned herein includes at least visual perception, and may further include auditory, tactile, motion, and even gustatory and olfactory perception) and simulates a fused, interactive three-dimensional dynamic view and entity behavior of that environment. It immerses the user in a simulated virtual environment and finds application in areas such as maps, games, video, education, medical treatment, simulation, collaborative training, sales, manufacturing assistance, and maintenance and repair.
2) The virtual reality device (VR device) may be provided in the form of glasses, a head mounted display (Head Mount Display, HMD), or a contact lens, for realizing visual perception and other forms of perception, but the form of the virtual reality device is not limited thereto, and may be further miniaturized or enlarged according to actual needs.
Optionally, the virtual reality device described in the embodiments of the present application may include, but is not limited to, the following types:
2.1) PC-side virtual reality (PCVR) device, which uses a PC to perform the computations and data output related to the virtual reality function; the externally connected PCVR device uses the data output by the PC to achieve the virtual reality effect.
2.2) Mobile virtual reality device, which supports mounting a mobile terminal (e.g., a smartphone) in various ways (e.g., a head-mounted display provided with a dedicated card slot). Connected to the mobile terminal by wire or wirelessly, it has the mobile terminal perform the computations related to the virtual reality function and output data to the mobile virtual reality device, e.g., watching virtual reality video through an app on the mobile terminal.
2.3) All-in-one virtual reality device, which has its own processor for performing the computations related to the virtual function and therefore has independent virtual reality input and output capabilities; it needs no connection to a PC or mobile terminal and offers a high degree of freedom of use.
3) Augmented Reality (AR): a technique of computing the pose parameters of a camera in the real world (also called the three-dimensional world) in real time during image capture, and adding virtual elements to the captured image according to those pose parameters. Virtual elements include, but are not limited to, images, videos, and three-dimensional models. The goal of AR technology is to superimpose the virtual world on the real world on screen for interaction.
4) Mixed Reality (MR): a simulated scenery integrating computer-created sensory input (e.g., a virtual object) with sensory input from a physical scenery or a representation thereof, in some MR sceneries, the computer-created sensory input may be adapted to changes in sensory input from the physical scenery. In addition, some electronic systems for rendering MR scenes may monitor orientation and/or position relative to the physical scene to enable virtual objects to interact with real objects (i.e., physical elements from the physical scene or representations thereof). For example, the system may monitor movement such that the virtual plants appear to be stationary relative to the physical building.
5) Extended Reality (XR) refers to all real and virtual combined environments and human-machine interactions generated by computer technology and wearable devices, which include multiple forms of Virtual Reality (VR), augmented Reality (AR), and Mixed Reality (MR).
6) A virtual scene (also referred to as a virtual space) is a virtual scene that an application program displays (or provides) when running on an electronic device. The virtual scene may be a simulation environment for the real world, a semi-simulation and semi-fictional virtual scene, or a pure fictional virtual scene. The virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene or a three-dimensional virtual scene, and the dimension of the virtual scene is not limited in the embodiment of the present application. For example, the virtual scene may include sky, land, sea, etc., and the land may include environmental elements such as deserts, cities, etc.
7) Virtual objects, objects that interact in a virtual scene, objects that are under the control of a user or a robot program (e.g., an artificial intelligence based robot program) are capable of being stationary, moving, and performing various actions in the virtual scene.
As previously described, traditionally a user creates a virtual space in the identity of an anchor and performs live shows within it, while viewers interact virtually with the anchor by giving virtual gifts, commenting, and so on. However, this existing interaction mode between viewers and the anchor is monotonous and lacks interest.
To solve the above technical problems, the inventive concept of the application is as follows: according to an interaction trigger operation issued by the anchor in the virtual space, the environment information of the virtual space is adjusted to target environment information corresponding to that operation, the target environment information including an interaction area; the anchor can then interact virtually with any viewer in the virtual space inside the interaction area provided by the target environment information. This enriches the forms of interaction between the anchor and the audience in the virtual space and makes interaction in the virtual space more engaging.
It should be understood that the technical solution of the present application can be applied, but is not limited to, the following scenarios:
as shown in fig. 1, the application scenario may include: a first client 110, a second client 120, and a server 130. Wherein the first client 110 and the second client 120 may communicate with the server 130 over a network.
The first client 110 and the second client 120 may each be any device capable of supporting the creation of a virtual space, such as a personal computer, a notebook, a smartphone, a tablet, a wearable device, an AR device, a VR device, an MR device, or an XR device. Illustratively, the first client 110 may be the device of the object that creates the virtual space and performs live shows within it, such as the anchor-side device, while the second client 120 may be the device of a virtual-space participant, such as a viewer-side device.
The server 130 may be a background server that provides virtual-space-based interactive data processing to the first client 110 and the second client 120. The server 130 may be an independent server, a server cluster formed by multiple servers, a cloud server, or the like, which the application does not specifically limit.
When the anchor logs in to the first client 110 as the first object and broadcasts live in the constructed virtual space through the first client 110, the first client 110 sends the live video stream of the first object to the server 130; a viewer in the virtual space accesses the server 130 as a second object through the second client 120 to obtain the live video stream of the first client 110 and thereby watch the anchor's live content.
It should be noted that the first client 110, the second client 120, and the server 130 shown in fig. 1 are only schematic, and the number and the device types of the first client 110, the second client 120, and the server 130 may be flexibly adjusted according to the actual use situation, which is not limited to the one shown in fig. 1.
After an application scenario of the embodiment of the present application is introduced, the following details of the technical solution of the present application are described.
Fig. 2 is a flow chart of a virtual interaction method according to an embodiment of the application. The virtual interaction method provided by the application can be executed by a virtual interaction apparatus. The virtual interaction apparatus may be composed of hardware and/or software and may be integrated in an electronic device. The electronic device may be an anchor-side device or a server, which the application does not specifically limit.
As shown in fig. 2, the virtual interaction method may include the following steps:
s101, according to the interaction triggering operation sent by the first object, adjusting the current environment information of the virtual space into target environment information, wherein the target environment information comprises: the first object is positioned in the interaction area.
In the application, the first object is any anchor or the first virtual object corresponding to that anchor. The first virtual object may be a personalized virtual object created by the anchor based on a virtual-object creation function provided by the electronic device; alternatively, the anchor may use a default virtual object created by the electronic device as its own, which the application does not specifically limit. It should be appreciated that a virtual object may also be called a virtual character (avatar), a two-dimensional object, a three-dimensional object, a hyper-dimensional object, etc.
The virtual space is any virtual scene simulated by the electronic device for the live scene selected by the anchor. For example, when the anchor needs a live scene with a high-tech feel, the electronic device can create a high-tech-style virtual space according to the space type the anchor selects. For another example, when the anchor needs a princess-style live scene, the electronic device can create a princess-style virtual space according to the selected space type, and so on.
In the present application, the virtual space may be a 3-degree-of-freedom (3DoF) space or a 6-degree-of-freedom (6DoF) space. 6DoF supports both rotation and translation, whereas 3DoF supports only rotation, and a viewer in the virtual space may need to change viewpoint while watching the anchor's live performance. For example, a viewer may walk from in front of the anchor to behind the anchor, in which case the view must switch from showing the anchor's front to showing the anchor's back; alternatively, a viewer may want to look over the entire virtual space from above. The application therefore prefers a 6DoF virtual space, so that the audience can watch the anchor's live performance and/or any spatial region of the virtual space from any viewing angle within it.
After the virtual space is built, the anchor may give a live performance as the first object within the virtual space. When the anchor broadcasts live there, the virtual space can be understood as a live room.
In addition, during the anchor's live broadcast in the virtual space, the anchor can engage in various interactions with the audience, such as prop interactions, background-music interactions, and live link-up interactions, to liven up the atmosphere of the virtual space and attract more viewers to stay and watch the anchor's performance.
As shown in fig. 3, an anchor personal information panel, a main display panel, and a sub display panel may be provided in the virtual space created by the present application. The anchor personal information panel includes avatar information, anchor identification information (which can be understood as the anchor's user name or nickname), anchor fan information, a follow control, and the like. The main display panel displays the content the anchor screen-casts during the live broadcast; for example, when the live content is knowledge sharing, the shared material is displayed on the main display panel, and when the live content is a dance performance, the anchor's live image is displayed there. The sub display panel displays the anchor's live camera feed: a camera arranged in the virtual space captures the anchor's live picture in real time and sends it to the sub display panel for display.
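A minimal sketch of the three-panel arrangement just described; the dictionary keys and the routing rule are assumptions for illustration, not names from the patent.

```python
# Illustrative data model of the three panels in the virtual space.
PANELS = {
    "anchor_info": ["avatar", "anchor_id", "fan_info", "follow_control"],
    "main_display": "screen-cast live content (e.g. shared knowledge, performance image)",
    "sub_display": "real-time camera feed of the anchor",
}

def panel_for(content_type: str) -> str:
    # The camera feed goes to the sub display panel; everything the anchor
    # screen-casts (knowledge pages, performance image) goes to the main one.
    return "sub_display" if content_type == "camera_feed" else "main_display"
```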
The benefit of this arrangement is that, with the anchor personal information panel, main display panel, and sub display panel all present in the virtual space, every viewer entering the space can learn the anchor's personal information, follow the anchor's live content via the main display panel, and see the anchor's live picture via the sub display panel, giving viewers a comprehensive view of the live information in the virtual space.
In some alternative embodiments, when the anchor wants to interact with the audience in the virtual space, the interaction trigger operation may be sent to the electronic device by triggering certain interaction controls, making certain preset interaction actions, and so on.
The interaction control may be a control within the virtual space that is visible only to the anchor; any interaction control on an interaction interface the anchor opens using an external device; or a preset button on the external device. The preset interaction action may be a preset gesture made by the anchor, such as a finger-heart gesture. The application does not limit the interaction control or the preset interaction action.
The external device may be a handle, a hand controller, a bracelet, a glove, or other devices.
When the anchor triggers the interaction control, the application can also output first vibration feedback to the anchor through the external device, improving the anchor's experience through touch. The vibration frequency, intensity, and duration of the first vibration feedback can be set flexibly according to feedback requirements, which the application does not specifically limit. Illustratively, the first vibration feedback may have a frequency of 127 Hz, an intensity of 0.8, and a duration of 15 ms.
In addition, before triggering an interaction control the anchor hovers the cursor over it; at that moment a second vibration feedback, different from the first, can be output to the anchor to indicate that the control is selected or hovered. The frequency, intensity, and duration of the second vibration feedback can likewise be set flexibly according to feedback requirements, which the application does not specifically limit. Illustratively, the second vibration feedback may have a frequency of 177 Hz, an intensity of 0.13, and a duration of 5 ms.
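The two haptic patterns above can be captured in a small sketch. The `VibrationFeedback` type and `feedback_for` routing are invented for illustration; real runtimes (e.g. OpenXR haptics) expose a comparable amplitude/frequency/duration triple.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class VibrationFeedback:
    frequency_hz: float
    intensity: float      # normalized 0.0 .. 1.0
    duration_ms: float

# First feedback: played when the anchor triggers an interaction control.
TRIGGER_FEEDBACK = VibrationFeedback(frequency_hz=127, intensity=0.8, duration_ms=15)

# Second feedback: played while the cursor hovers over a control.
HOVER_FEEDBACK = VibrationFeedback(frequency_hz=177, intensity=0.13, duration_ms=5)

def feedback_for(event: str) -> VibrationFeedback:
    # Route a cursor event to the matching haptic pattern.
    return TRIGGER_FEEDBACK if event == "trigger" else HOVER_FEEDBACK
```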
Furthermore, after the electronic device obtains the interaction trigger operation issued by the anchor, it can obtain the corresponding target environment information based on that operation, and then adjust the current environment information of the virtual space to the target environment information, so that the anchor can interact with at least one viewer in the virtual space under the target environment information.
The target environment information may include an interaction area. The shape and size of the interaction area can be set flexibly according to interaction requirements and are not specifically limited by the application: the shape may be circular, square, or irregular, and the size may be any value preset according to the size of the virtual space; for example, if the virtual space is 10 meters (m) by 10 m, the interaction area may be 4 m or 5 m across. The anchor can then interact with any viewer inside the interaction area, so the interaction area distinguishes which viewers the first object interacts with and which it does not. At the same time, this adds a new interaction mode on top of the existing ones, enriching the interaction forms of the virtual space and making virtual interaction more interesting.
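A hedged sketch of the geometry just described: a circular interaction area centred in the 10 m × 10 m example space, with the 4 m example size taken here as a diameter (an assumption; the patent does not say whether the size is a diameter or radius).

```python
import math

SPACE_SIZE = 10.0   # metres, per the 10 m x 10 m example above
AREA_SIZE = 4.0     # one of the example sizes (4 m), taken here as a diameter
CENTER = (SPACE_SIZE / 2.0, SPACE_SIZE / 2.0)

def in_interaction_area(x: float, y: float) -> bool:
    """True if ground position (x, y) lies inside the circular interaction area."""
    return math.hypot(x - CENTER[0], y - CENTER[1]) <= AREA_SIZE / 2.0

inside = in_interaction_area(5.0, 6.0)     # 1 m from the centre
outside = in_interaction_area(0.0, 0.0)    # corner of the space
```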
In some optional embodiments, the target environment information may further include a light effect in addition to the interaction area. For example, upon obtaining the interaction trigger operation issued by the anchor, the electronic device may adjust the light effect in the current environment information of the virtual space from a first brightness value to a second brightness value and simultaneously set a top light above the central area of the virtual space, as shown in fig. 4. In fig. 4 the interaction area is circular and located in the central area of the virtual space. Of course, forms other than that shown in fig. 4 are possible, and the application is not limited in this respect.
The first brightness value is larger than the second brightness value. That is, the present application dims the virtual space light from the first brightness value to the second brightness value.
The top light is atmosphere light projected from the top of the virtual space to the ground of the virtual space.
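For illustration, the light-effect adjustment described above can be sketched as follows. This is a minimal sketch: the environment-information structure, the field names, and the brightness values are assumptions made for illustration, not the disclosed implementation.

```python
def adjust_light_effect(environment, second_brightness=0.3):
    """Dim the virtual-space light from the first brightness value to a
    smaller second brightness value, and add a top light projected from
    the top of the virtual space toward the ground of the central area."""
    first_brightness = environment["brightness"]
    if second_brightness >= first_brightness:
        raise ValueError("the second brightness value must be smaller than the first")
    environment["brightness"] = second_brightness
    # Top light: atmosphere light projected from the top down to the ground.
    environment["top_light"] = {"position": "top_center", "projects_to": "ground"}
    return environment

env = adjust_light_effect({"brightness": 0.9})
```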
In some alternative embodiments, the interaction area of the present application has a first visual effect, and the boundary between the interaction area and the non-interaction area has a second visual effect. The first visual effect may be the same as or different from the second visual effect. Preferably, the first visual effect is different from the second visual effect, so that the interaction area and the non-interaction area can be better distinguished. The viewers can thus know, based on the visual effects presented in the virtual space, that the scene information of the virtual space has been switched to the interactive scene information, and a viewer that has not been invited by the first object cannot enter the interaction area.
As shown in fig. 5, the first visual effect presented in the interaction area is effect a, and the second visual effect presented at the boundary between the interaction area and the non-interaction area is effect b.
In some alternative embodiments, it is considered that, before the current environment information of the virtual space is adjusted to the target environment information according to the interaction triggering operation sent by the anchor, some viewers may already be located in the interaction area of the target environment information. Therefore, after the current environment information of the virtual space is adjusted to the target environment information, all viewers in the interaction area are optionally transferred to the non-interaction area, so that no viewer remains in the interaction area. This lays a foundation for subsequent virtual interaction between the anchor and specific viewers in the interaction area, and at the same time can effectively avoid the situation where the anchor interacts by mistake with a non-designated viewer in the interaction area.
When all viewers in the interaction area are transferred to the non-interaction area as described above, a transition screen, such as a brief black screen or a loading interface, can be presented to each viewer to be transferred, so as to avoid the abruptness caused by a sudden visual change of position if the viewer were directly transferred from the interaction area to the non-interaction area. Illustratively, all viewers located in the interaction area are automatically transferred to the non-interaction area as shown in fig. 6.
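As an illustrative sketch, the clearing of the interaction area could look as follows, assuming a circular interaction area and a simple position representation; all field names and geometry are hypothetical, not part of the disclosed implementation.

```python
import math

def clear_interaction_area(viewers, center=(0.0, 0.0), radius=2.0):
    """Transfer every viewer inside the circular interaction area to the
    non-interaction area, presenting a transition screen first."""
    for viewer in viewers:
        dx = viewer["pos"][0] - center[0]
        dy = viewer["pos"][1] - center[1]
        dist = math.hypot(dx, dy)
        if dist < radius:
            viewer["transition"] = "black_screen"  # brief black screen / loading UI
            if dist == 0.0:
                # Viewer exactly at the centre: move it straight out of the area.
                viewer["pos"] = (center[0] + radius + 0.5, center[1])
            else:
                # Project the viewer just outside the area boundary.
                scale = (radius + 0.5) / dist
                viewer["pos"] = (center[0] + dx * scale, center[1] + dy * scale)
    return viewers

viewers = clear_interaction_area([{"pos": (1.0, 0.0)}, {"pos": (3.0, 0.0)}])
```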
S102, in response to a selection operation for the second objects, selecting at least one target second object from at least two second objects in the virtual space.
The second object may be understood as any real viewer located in the same virtual space as the first object, or the second virtual object corresponding to that viewer. The second virtual object may be a personalized virtual object created by the viewer based on a virtual object creation function provided by the electronic device, or may be a default virtual object assigned by the electronic device to the viewer as the viewer's virtual object, which is not particularly limited in the present application. It should be understood that the virtual object may also be referred to as a virtual character (Avatar), a two-dimensional object, a three-dimensional object, a quasi-two-dimensional object, or the like, and the present application is not limited in this regard.
It is contemplated that multiple viewers may be present within the virtual space. In order to ensure that the anchor has enough time for the live performance during the live broadcast, a corresponding interaction duration is generally set for each interaction segment. Because the interaction duration is limited, the anchor cannot interact with all viewers within one interaction segment. It should be understood that "a plurality" in the present application means at least two, i.e., two or more.
Therefore, after the environment information of the virtual space is adjusted to the target environment information corresponding to the interaction triggering operation, a limited number of target viewers can be selected from all viewers in the virtual space based on the preset interaction duration and the preset interaction time with each viewer. The preset interaction time with each viewer can be flexibly set according to the interaction requirement, for example 3 minutes (min) or 5 min.
The limited number may be determined based on the preset interaction duration and the preset interaction time with each viewer. For example, if the interaction duration is 15 min and the interaction time with each viewer is less than or equal to 3 min, then any selection strategy or rule can be adopted to screen out at most 15/3 = 5 target viewers from all viewers.
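The arithmetic above can be sketched as a one-line helper; the function name is a hypothetical label for illustration.

```python
def max_target_viewers(interaction_duration_min, minutes_per_viewer):
    """Upper bound on the number of target viewers that fit into one
    interaction segment (integer division, as in the 15 / 3 = 5 example)."""
    return interaction_duration_min // minutes_per_viewer
```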
In some alternative embodiments, the present application may set a status tag for each viewer entering the virtual space, so that each viewer can know where it is located in the virtual space based on its status tag, as well as the real-time status information of the other viewers in the virtual space.
The status tag is used to characterize real-time status information of a viewer and is dynamically updated according to the viewer's status. For example, as shown in fig. 7a, when a certain viewer is only watching the live performance of the anchor in the virtual space, the status tag of that viewer is the viewer identification information. For another example, as shown in fig. 7b, when a certain viewer's network connection is poor, the status tag of that viewer is the viewer identification information plus a poor-network icon, and so on. The viewer identification information may be understood as the user name, a nickname, or the like of the viewer.
S103, transferring each target second object from the current position into the interaction area, so that the first object interacts with each target second object.
In the present application, the interaction area is the region used for virtual interaction with the target viewers. Therefore, after the target viewers are selected, each target viewer can be transferred from its current position in the non-interaction area into the interaction area, so that each target viewer can interact with the anchor.
When each target viewer is transferred from the current position into the interaction area, a transition screen such as a brief black screen or a loading interface is optionally presented to each target viewer, so as to avoid the abruptness caused by a sudden visual change of position if the target viewer were directly transferred from the non-interaction area into the interaction area.
In the present application, the anchor optionally interacts with each target viewer in the interaction area via mic-connect interaction. The mic-connect interaction may include voice mic-connect interaction or video mic-connect interaction. Accordingly, the interaction area may be understood as a mic-connect interaction area.
Voice mic-connect interaction refers to interaction through spoken communication alone. Video mic-connect interaction refers to interaction through both voice and real-time video. It should be understood that in voice mic-connect interaction, only the voices of the anchor and the target viewer can be heard, and the real-time video of the two parties cannot be seen. In video mic-connect interaction, the voices of the two interacting parties can be heard, and the real-time video of the two parties can also be seen.
In some alternative embodiments, considering that the anchor communicates with the target viewers, each target viewer is associated with different status information. For example, some target viewers are speaking with the anchor, some target viewers are muted while connected with the anchor, and some target viewers have a poor network connection. A muted target viewer may have muted itself, or may have been muted by the anchor; the present application does not limit this.
Therefore, the present application can acquire the status information of each target viewer and update the status tag of each target viewer based on the status information, so that the anchor, the target viewers, and the viewers located in the non-interaction area can know the real-time status information of each target viewer based on the status tags.
Illustratively, as shown in fig. 7c, when a target viewer is connected with the anchor and in a speaking state, the status tag of that target viewer is the target viewer identification information plus a speaking icon. As shown in fig. 7d, when a certain target viewer is connected with the anchor but muted, the status tag of that target viewer is the target viewer identification information plus a muted icon. As shown in fig. 7e, when a certain target viewer is connected with the anchor but has turned the screen off, the status tag of that target viewer is the target viewer identification information, a screen-off icon, and screen-off information.
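The status-tag composition for the cases of figs. 7a through 7e can be sketched as follows; the field names and the textual icon labels are assumptions for illustration only.

```python
def status_tag(viewer):
    """Compose a status tag (a list of label parts) from a viewer's
    real-time status, mirroring the cases of figs. 7a-7e."""
    tag = [viewer["name"]]  # viewer identification information (fig. 7a)
    if viewer.get("network_poor"):
        tag.append("poor-network icon")                    # fig. 7b
    if viewer.get("mic_connected"):
        if viewer.get("screen_off"):
            tag += ["screen-off icon", "screen-off info"]  # fig. 7e
        elif viewer.get("muted"):
            tag.append("muted icon")                       # fig. 7d
        else:
            tag.append("speaking icon")                    # fig. 7c
    return tag
```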
According to the technical solution provided by the embodiments of the present application, the current environment information of the virtual space is adjusted to the target environment information corresponding to the interaction operation triggered by the first object in the virtual space, so that the first object can virtually interact, in the interaction area of the target environment information, with any second object in the virtual space. This enriches the interaction forms of the virtual space, enhances the interestingness of interaction in the virtual space, and can thereby improve the enthusiasm of viewers and the anchor for virtual interaction.
On the basis of the above embodiment, the present application further explains the adjustment of the current environment information of the virtual space to the target environment information according to the interactive triggering operation sent by the first object, specifically referring to fig. 8.
As shown in fig. 8, S101 includes the following steps:
S101-1, when the interaction triggering operation sent by the first object is obtained, determining the interaction type of the interaction triggering operation.
S101-2, acquiring target environment information from an environment information list according to the interaction type of the interaction triggering operation sent by the first object.
S101-3, adjusting the current environment information of the virtual space according to the target environment information, wherein the target environment information includes: an interaction area, with the first object located in the interaction area.
Optionally, the present application can set a corresponding interaction type for each interaction triggering operation. For example, when the interaction triggering operation is based on triggering a mic-connect control, the interaction type of the interaction triggering operation can be determined to be mic-connect interaction. For another example, when the interaction triggering operation is based on triggering a game control, the interaction type of the interaction triggering operation can be determined to be a game-type interaction, and so on.
In some alternative embodiments, after the interactive trigger operation sent by the anchor is obtained, the application can query the interactive type of the interactive trigger operation in the mapping relation between the interactive trigger operation and the interactive type based on the control associated with the interactive trigger operation.
Further, based on the queried interaction type, the environment information associated with that interaction type is looked up in an environment information list, and that environment information is determined as the target environment information. In some alternative embodiments, if the interaction type corresponding to the interaction triggering operation is mic-connect interaction, the mic-connect environment information associated with the mic-connect interaction is queried from the environment information list, and the mic-connect environment information is used as the target environment information. The current environment information of the virtual space is then adjusted based on the target environment information, so that the whole scene of the virtual space is transformed into a scene dedicated to virtual interaction, which can enrich the interaction forms of the virtual space.
The environment information list can be updated as the interaction types are updated, so that the switching requirements of different virtual interaction scenes can be met, providing conditions for improving the user experience.
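The two-stage lookup of S101-1 and S101-2 can be sketched as a pair of mappings; the control names and the environment entries below are hypothetical assumptions for illustration, not the disclosed data.

```python
# Hypothetical mapping from triggered control to interaction type (S101-1).
CONTROL_TO_INTERACTION_TYPE = {
    "mic_connect_control": "mic_connect",
    "game_control": "game",
}

# Hypothetical environment information list keyed by interaction type (S101-2).
ENVIRONMENT_INFO_LIST = {
    "mic_connect": {"interaction_area": "circle", "brightness": 0.3},
    "game": {"interaction_area": "square", "brightness": 0.5},
}

def target_environment_info(triggered_control):
    """Map the control behind the interaction triggering operation to an
    interaction type, then look up the associated environment information."""
    interaction_type = CONTROL_TO_INTERACTION_TYPE[triggered_control]
    return ENVIRONMENT_INFO_LIST[interaction_type]
```

Updating `ENVIRONMENT_INFO_LIST` with new interaction types then directly supports the scene-switching extensibility described above.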
On the basis of the foregoing embodiments, considering that the anchor performs in the virtual space, the present application may acquire bullet screen information and/or a second object list in the virtual space. Therefore, when at least one target second object is selected from the at least two second objects in the virtual space, the selection can be performed according to the bullet screen information and/or the second object list in the virtual space. The selection of at least one target second object is further explained below in connection with fig. 9. As shown in fig. 9, the virtual interaction method may include the following steps:
S201, adjusting, according to the interaction triggering operation sent by the first object, the current environment information of the virtual space to the target environment information, wherein the target environment information includes: an interaction area, with the first object located in the interaction area.
S202, bullet screen information and/or a second object list in the virtual space are obtained.
S203, selecting at least one target second object from the at least two second objects in the virtual space according to the bullet screen information and/or the second object list.
S204, transmitting each target second object from the current position into the interaction area so that the first object interacts with each target second object.
The second object list may be understood as an on-line audience list located in the virtual space.
Optionally, during the live performance of the anchor in the virtual space, any viewer located in the virtual space may interact with the anchor in the form of public-screen messages (i.e., bullet screens), and each bullet screen corresponds to one viewer. Therefore, the present application can acquire real-time bullet screen information in the virtual space, determine the corresponding viewers based on the acquired real-time bullet screen information, and then select a preset number of target viewers from all the determined viewers according to a preset screening rule.
In some alternative embodiments, considering that an online viewer list is also available in the virtual space, the present application may acquire the online viewer list of the virtual space and then select a preset number of target viewers from the online viewer list according to a preset screening rule.
In the present application, the preset screening rule may optionally be screening by identification information, for example screening for preset identification information; the present application is not limited in this regard. The identification information specifically refers to the user name or the user nickname of the viewer, or the like.
In addition, the preset number may be determined based on the preset interaction duration and the preset interaction time with each viewer; for the specific determination process, reference is made to the foregoing embodiments, and details are not repeated here.
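The screening of S203 can be sketched as follows, combining bullet-screen senders with the online list and filtering by preset identification information; the data shapes and names are assumptions for illustration.

```python
def select_target_viewers(bullet_screens, online_list, preset_ids, preset_number):
    """Merge bullet-screen senders with the online viewer list (order
    preserved, duplicates removed), then keep viewers whose identification
    info matches the preset identification information, up to preset_number."""
    seen, candidates = set(), []
    for name in [b["sender"] for b in bullet_screens] + list(online_list):
        if name not in seen:
            seen.add(name)
            candidates.append(name)
    return [n for n in candidates if n in preset_ids][:preset_number]

targets = select_target_viewers(
    bullet_screens=[{"sender": "A"}, {"sender": "B"}],
    online_list=["B", "C", "D"],
    preset_ids={"B", "C"},
    preset_number=5,
)
```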
Optionally, after the target viewers are selected, the present application can transfer the target viewers from their current positions into the interaction area, so that the anchor in the interaction area can virtually interact with each target viewer.
In some optional embodiments, after the target viewers are selected, the present application can optionally send an interaction invitation popup to each target viewer, so that each target viewer can decide, based on the interaction invitation popup, whether to interact with the anchor. This guarantees the viewers' initiative and control over the interaction with the anchor, allows the viewers to selectively perform virtual interaction with the anchor according to their own needs, and thereby improves the viewers' experience.
For example, the interaction invitation popup may include: invitation information such as "the anchor invites you to interact together", the anchor's avatar, an accept-invitation control, and a reject-invitation control. If target viewer X1 is performing other operations, such as a settings operation, when seeing the interaction invitation popup displayed in the virtual space on the viewer device, target viewer X1 may select the reject-invitation control in the interaction invitation popup to reject the interaction invitation sent by the anchor. If target viewer X2 is not performing any operation when seeing the interaction invitation popup displayed in the virtual space via the client, target viewer X2 can select the accept-invitation control in the interaction invitation popup to accept the interaction invitation sent by the anchor.
In some alternative embodiments, it is contemplated that any target viewer may be performing a bullet screen editing or expression editing operation at the moment when the interaction invitation is sent to each target viewer. In this case, the present application can display a mini interaction invitation popup in the virtual space to the target viewer who is performing the editing operation. The mini interaction invitation popup is optionally displayed in any blank area of the virtual space other than the editing area. The advantage of this arrangement is that the interaction invitation can be sent to the target viewer without obscuring the target viewer's normal editing operation.
Illustratively, assume that when the interaction invitation popup is sent to a target viewer, the target viewer is performing a bullet screen editing operation. Then, the present application can display the mini interaction invitation popup in the blank area above the bullet screen editing area. The mini interaction invitation popup includes: invitation information and an accept control. The invitation information may optionally be "the anchor invites to connect with you" or "the anchor invites you to mic-connect and interact", etc. When the target viewer wants to interact with the anchor, the accept control in the mini interaction invitation popup can be selected to accept the anchor's interaction invitation. When the target viewer does not want to interact with the anchor, the mini interaction invitation popup can be ignored; when the popup display time reaches a preset duration and no response operation of the target viewer has been detected, it is determined that the target viewer rejects the anchor's interaction invitation. The preset duration may be flexibly set according to the display requirement of the popup, for example 10 seconds (s), which is not limited here.
It should be noted that, after the mini interaction invitation popup is displayed to the target viewer, if the target viewer exits the editing operation, the mini interaction invitation popup is restored to the normal version of the interaction invitation popup.
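The timeout behavior described above (no response within the preset duration counts as a rejection) can be sketched as a simple polling loop; the function and callback names are hypothetical, for illustration only.

```python
import time

def wait_for_popup_response(get_response, preset_duration_s=10.0, poll_s=0.01):
    """Poll for the target viewer's response to the interaction invitation
    popup; treat no response within the preset duration as a rejection."""
    deadline = time.monotonic() + preset_duration_s
    while time.monotonic() < deadline:
        response = get_response()  # e.g. "accepted", or None if no input yet
        if response is not None:
            return response
        time.sleep(poll_s)
    return "rejected"
```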
Optionally, when any target viewer accepts the anchor's interaction invitation, the present application can transfer the target viewer who accepted the interaction invitation from the current position into the interaction area, so that the anchor in the interaction area can virtually interact with the target viewer who accepted the invitation.
In some optional embodiments, before the target viewers who accepted the interaction invitation are transferred into the interaction area, a first interaction prompt interface may optionally be displayed to each target viewer, so as to inform the target viewer via the first interaction prompt interface that the interaction with the anchor is about to start, allowing the target viewer to prepare the content of the interaction with the anchor. Illustratively, the first interaction prompt interface displayed to each target viewer includes: interaction prompt information and the anchor's avatar. The interaction prompt information may optionally be "connecting with the anchor in 5 seconds" or "about to mic-connect with the anchor for interaction", etc. If the interaction prompt information contains timing, the timing is implemented as a countdown. For example, if the countdown time is 5 seconds (s), the countdown decreases by 1 s per second, specifically 5s→4s→3s→2s→1s→0.
As an alternative implementation, consider that some target viewers may temporarily have nothing to interact with the anchor about at that moment. Therefore, before the target viewers are transferred from their current positions into the interaction area, a second interaction prompt interface can be displayed to each target viewer, so as to inform each target viewer via the second interaction prompt interface of the impending interaction with the anchor. Thus, when a target viewer temporarily needs to forgo the interaction with the anchor, the target viewer can do so by selecting the abandon control in the second interaction prompt interface. The second interaction prompt interface displayed to each target viewer includes: interaction prompt information, the anchor's avatar, and an abandon control. In the present application, the interaction prompt information may optionally be "connecting with the anchor in 5 seconds", "about to mic-connect with the anchor for interaction", or other forms, and the present application is not particularly limited in this regard.
In the present application, when the target viewer selects any control in the popup, the present application can output a first vibration feedback to the target viewer through an external device, so that the target viewer can know that the presented popup has been responded to. The vibration frequency, vibration intensity, and vibration duration of the first vibration feedback can be flexibly set according to the feedback requirement, and the present application is not particularly limited in this regard. Illustratively, the vibration frequency of the first vibration feedback in the present application may be 127 Hz, the vibration intensity may be 0.8, the vibration duration may be 15 ms, etc.
It should be noted that, before the target viewer selects any control in the popup, the cursor can be controlled to hover over that control, and at this time the external device can be controlled to output a second vibration feedback to the viewer, wherein the second vibration feedback is different from the first vibration feedback. The vibration frequency, vibration intensity, and vibration duration of the second vibration feedback can likewise be flexibly set according to the feedback requirement, and the present application is not particularly limited in this regard. Illustratively, the vibration frequency of the second vibration feedback in the present application may be 177 Hz, the vibration intensity may be 0.13, the vibration duration may be 5 ms, etc.
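The mapping from cursor events to vibration parameters can be sketched as follows, using the example parameter values from the text; the event names and dictionary structure are illustrative assumptions.

```python
# Example parameter values taken from the text; event names are hypothetical.
SECOND_FEEDBACK = {"frequency_hz": 177, "intensity": 0.13, "duration_ms": 5}  # hover
FIRST_FEEDBACK = {"frequency_hz": 127, "intensity": 0.8, "duration_ms": 15}   # select

def vibration_feedback(event):
    """Map a cursor event on a popup control to the vibration parameters
    to be output through the external device."""
    if event == "hover":
        return SECOND_FEEDBACK
    if event == "select":
        return FIRST_FEEDBACK
    raise ValueError(f"unknown event: {event}")
```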
According to the technical solution provided by the embodiments of the present application, the current environment information of the virtual space is adjusted to the target environment information corresponding to the interaction operation triggered by the first object in the virtual space, so that the first object can virtually interact, in the interaction area of the target environment information, with any second object in the virtual space. This enriches the interaction forms of the virtual space, enhances the interestingness of interaction in the virtual space, and can thereby improve the enthusiasm of viewers and the anchor for virtual interaction. In addition, the target viewers are selected based on the bullet screen information and/or the second object list in the virtual space, and the target viewers are invited to interact with the anchor in the interaction area. This guarantees the viewers' initiative and control over the interaction with the anchor, allows the viewers to selectively perform virtual interaction with the anchor according to their own needs, and can improve the viewers' interaction experience.
In some optional embodiments, after the current environment information of the virtual space is adjusted to the target environment information according to the interaction triggering operation sent by the first object, a prompt popup may optionally be presented in the virtual space, so that a second object in the virtual space may apply to interact with the first object according to the prompt information in the prompt popup. Furthermore, the present application can screen the target second objects from all second objects that have applied for interaction. As shown in fig. 10, the method may include the following steps:
S301, adjusting, according to the interaction triggering operation sent by the first object, the current environment information of the virtual space to the target environment information, wherein the target environment information includes: an interaction area, with the first object located in the interaction area.
S302, determining current state information of each second object in the virtual space.
S303, if the current status information of any second object is a viewing state, presenting a first prompt popup or a second prompt popup in the virtual space, wherein the first prompt popup includes: first prompt information, a first control, and a second control; and the second prompt popup includes: second prompt information.
The first prompt popup and the second prompt popup are used to indicate that the first object has opened an interaction segment.
For example, the first prompt popup may be as shown in fig. 11a, and the second prompt popup may be as shown in fig. 11b. The first control in fig. 11a may optionally be an enter control, and the second control may optionally be an abandon control. The second control may also be a cancel control.
S304, if the current status information of any second object is an editing state, presenting a third prompt popup in the virtual space, wherein the third prompt popup includes: third prompt information and a third control.
The editing state can be understood as the second object performing an editing operation, such as bullet screen editing, expression editing, or settings, which the present application does not limit in any way.
The third prompt popup is optionally a mini-version prompt popup.
The mini-version prompt popup is optionally displayed in any blank area of the virtual space other than the editing area. The advantage of this arrangement is that the prompt information can be sent to the target viewer without blocking the target viewer's normal editing operation.
Illustratively, as shown in fig. 11c, assume that a certain viewer is performing a bullet screen editing operation at the moment when the first prompt popup or the second prompt popup would be displayed. Then, the present application can display the mini-version prompt popup in the blank area above the bullet screen editing area. The mini-version prompt popup includes: prompt information and a third control. The third control in fig. 11c may optionally be an enter control, and the prompt information may optionally be "the anchor has opened mic-connect" or "the anchor has opened mic-connect, come and connect with the anchor", etc.
After the mini-version prompt popup is displayed to the viewer, if the viewer exits the editing operation, the mini-version prompt popup is restored to the first prompt popup or the second prompt popup.
S305, determining candidate second objects from all second objects in the virtual space based on the first prompt popup, the second prompt popup or the third prompt popup.
The candidate second objects are the candidate viewers.
Alternatively, determining candidate second objects from all second objects within the virtual space may include the following:
Case one:
In response to a selection operation by any second object on the first control in the first prompt popup or on the third control in the third prompt popup, that second object is determined to be a candidate second object.
Case two:
While the second prompt popup is presented, in response to the distance between any second object and the interaction area being smaller than a distance threshold, that second object is determined to be a candidate second object.
The distance threshold can be flexibly set according to the required precision of contact between the viewer and the interaction area, such as 0.3 centimeters (cm), 0.1 cm, or 0.15 cm, which is not particularly limited here. For example, assuming the distance threshold is 0.1 cm, when the distance between a viewer and the interaction area is 0.09 cm, it is determined that contact between the viewer and the interaction area has occurred, and that viewer is a candidate viewer.
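The contact check above can be sketched as follows, assuming a circular interaction area for concreteness; the geometry and function names are illustrative assumptions.

```python
import math

def distance_to_area_boundary(viewer_pos, center, radius_cm):
    """Distance (cm) from the viewer to the boundary of a circular
    interaction area."""
    return abs(math.dist(viewer_pos, center) - radius_cm)

def is_candidate(viewer_pos, center, radius_cm, threshold_cm=0.1):
    """Contact is detected when the distance to the boundary falls below
    the distance threshold (e.g. 0.09 cm < 0.1 cm in the example)."""
    return distance_to_area_boundary(viewer_pos, center, radius_cm) < threshold_cm
```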
In some alternative embodiments, it is contemplated that a viewer needs to have the interaction right when applying to interact with the anchor. Therefore, after determining that the distance between any viewer and the interaction area is smaller than the distance threshold, the method further includes the following optional steps: determining whether each second object has the interaction right; and if any second object does not have the interaction right, presenting an authorization popup in the virtual space so that the second object can apply for the interaction right based on the authorization popup.
When applying for the interaction right based on the authorization popup, the viewer can do so by checking certain agreements or interaction terms, or by clicking an authorization control in the authorization popup.
Furthermore, after the viewers applying to interact with the anchor have obtained the interaction right, the present application can determine the viewers having the interaction right as candidate viewers. This ensures that the target viewers screened from the candidate viewers can interact with the anchor normally, improving the interaction effect and the interaction quality.
S306, screening at least one target second object from the candidate second objects according to a preset screening rule.
In the application, the preset screening rules can be some screening strategies set according to actual interaction requirements. Optionally, the preset screening rule in the present application may include the following:
first, screening candidate second objects with identification information being preset identification information as target second objects.
For example, if the preset identification information is user name A, user name B, and user name C, the candidate second objects whose user names are user name A, user name B, and user name C are screened out from all candidate second objects as the target second objects.
Second, based on the time at which the control is selected, a preset number of target second objects are screened from the candidate second objects in order from earliest to latest, where the control is the first control or the third control.
The preset number is determined jointly based on the total preset interaction duration and the preset interaction duration with each audience.
Illustratively, assuming that the candidate audience triggering the first control or the third control includes: audience Y1, audience Y2, audience Y3, audience Y4, audience Y5, audience Y6, and audience Y7. Then, when the preset number is 5, and the time when the viewer Y1 selects the first control is the first time, the time when the viewer Y2 selects the third control is the second time, the time when the viewer Y3 selects the third control is the third time, the time when the viewer Y4 selects the first control is the fourth time, the time when the viewer Y5 selects the third control is the fifth time, the time when the viewer Y6 selects the first control is the sixth time, and the time when the viewer Y7 selects the first control is the seventh time. Wherein seventh time < third time < second time < fourth time < fifth time < first time < sixth time. Further, based on the selection time of the first control or the third control, the viewer Y7 corresponding to the seventh time, the viewer Y3 corresponding to the third time, the viewer Y2 corresponding to the second time, the viewer Y4 corresponding to the fourth time, and the viewer Y5 corresponding to the fifth time are selected in order from front to back as target viewers.
That is, the earlier (smaller) the time at which a viewer selects the first control or the third control, the greater the probability that the viewer is selected as a target viewer.
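The selection-time rule is a plain sort-and-truncate. The sketch below reproduces the Y1-Y7 worked example; the duration-based derivation of the preset number is an assumption inferred from the description:

```python
# Illustrative sketch of the second screening rule (names are hypothetical).

def preset_number(total_duration_s, per_audience_s):
    """Assumed relation: how many audiences fit in the total interaction window."""
    return total_duration_s // per_audience_s

def screen_by_time(selection_times, n):
    """selection_times: {viewer_id: time}; earlier time -> higher priority."""
    return sorted(selection_times, key=selection_times.get)[:n]

# Worked example: seventh < third < second < fourth < fifth < first < sixth
times = {"Y1": 6, "Y2": 3, "Y3": 2, "Y4": 4, "Y5": 5, "Y6": 7, "Y7": 1}
targets = screen_by_time(times, 5)   # -> ["Y7", "Y3", "Y2", "Y4", "Y5"]
```

The same `screen_by_time` helper covers the third rule as well, with interaction-area contact times in place of control selection times.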
Thirdly, based on the time at which each second object comes into contact with the interaction area, a preset number of target objects are screened from the candidate second objects in order from earliest to latest.
Here, too, the preset number is determined jointly based on the total preset interaction duration and the preset interaction duration with each audience.
Illustratively, assume the audiences in contact with the interaction area include: audience Y11, audience Y12, audience Y13, audience Y14, audience Y15, audience Y16, and audience Y17. Then, when the preset number is 4, the time at which the viewer Y11 contacts the interaction area is time t1, the time for the viewer Y12 is time t2, the time for the viewer Y13 is time t3, the time for the viewer Y14 is time t4, the time for the viewer Y15 is time t5, the time for the viewer Y16 is time t6, and the time for the viewer Y17 is time t7, where t6 < t3 < t5 < t1 < t4 < t2 < t7. Based on these times, the viewer Y16 corresponding to t6, the viewer Y13 corresponding to t3, the viewer Y15 corresponding to t5, and the viewer Y11 corresponding to t1 are selected in order from earliest to latest as target viewers.
That is, the earlier (smaller) the time at which a viewer comes into contact with the interaction area, the greater the probability that the viewer is selected as a target viewer.
In some alternative embodiments, it is contemplated that the anchor or the server may be unable to determine the target audience from the candidate audiences. In other words, the particular audience that the anchor wants to interact with is not among the audiences actively applying for interaction with the anchor. At this point, the anchor may screen the target audience from the remaining audiences that have not actively applied for interaction. Optionally, screening the target audience from the remaining audiences that have not actively applied for interaction includes the following steps:
step 11, in response to the selection operation of the first object for the remaining second objects, presenting a fourth prompt popup to the selected remaining second objects in the virtual space, where the fourth prompt popup includes: fourth prompt message and fourth control.
And step 12, responding to the selected operation of the arbitrarily selected remaining second object for the fourth control in the fourth prompt popup window, and determining the selected remaining second object as a target second object.
The remaining second objects are the second objects whose distance from the interaction area is greater than or equal to the distance threshold, those that selected the second control in the first prompt popup, and those that did not select the third control in the second prompt popup.
Illustratively, assume the remaining viewers that have not actively applied for interaction with the anchor are viewer X11, viewer X12, viewer X13, viewer X14, viewer X15, and viewer X16. Then, when the anchor selects the remaining audiences X14 and X15 based on the remaining-audience identification information, a fourth prompt popup for the anchor's interaction invitation may be displayed to the audiences X14 and X15. The fourth prompt information included in the fourth prompt popup may be, but is not limited to, "The anchor invites you to interact together", and the fourth control may be an accept-invitation control and a reject control. When the audience X14 selects the accept-invitation control in the fourth prompt popup, the audience X14 is determined to be a target audience. When the audience X15 selects the reject control in the fourth prompt popup, the audience X15 is determined to be a non-target audience.
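The accept/reject decision of steps 11-12 reduces to the following sketch (the data model and names are hypothetical), reproducing the X14/X15 example:

```python
# Record a viewer as a target second object only when the invitation is accepted.

def handle_invitation_response(viewer, response, targets):
    if response == "accept":        # viewer pressed the accept-invitation control
        targets.add(viewer)
    return targets                  # "reject" leaves the viewer a non-target

targets = set()
handle_invitation_response("X14", "accept", targets)   # X14 becomes a target
handle_invitation_response("X15", "reject", targets)   # X15 stays a non-target
```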
In some alternative embodiments, it is contemplated that an arbitrarily selected remaining viewer may be performing a barrage (bullet-screen comment) editing or expression editing operation at the moment the fourth prompt popup is presented. In that case, the selected remaining viewer sees a mini version of the fourth prompt popup displayed in the virtual space. The mini version may optionally be displayed in any blank area of the virtual space other than the editing area. The advantage of this arrangement is that an interaction invitation can be sent to the viewer without obscuring the viewer's normal editing operations.
The mini version of the fourth prompt popup may include: the fourth prompt information, an accept control, and a reject control. The fourth prompt information may be, for example, "The anchor invites you to co-stream" or "The anchor invites you to co-stream and interact".
After the mini version of the fourth prompt popup is displayed to the selected remaining audience in the virtual space, if the selected remaining audience exits the editing operation, the mini version is restored to the normal version of the fourth prompt popup.
In some alternative embodiments, it is contemplated that new audiences may enter the virtual space while the anchor is performing live. Then, when it is determined that a new audience has entered the virtual space, the present application optionally presents to the new audience a prompt popup indicating that the anchor is interacting, so that the new audience can decide, based on the prompt popup, whether to apply to interact with the anchor.
Illustratively, the prompt popup indicating that the anchor is interacting includes: interaction prompt information, a registration control, and a cancel control. The interaction prompt information may be, but is not limited to, "The anchor is co-streaming". Further, a new audience member may apply to interact with the anchor by selecting the registration control. Alternatively, the new audience may decline to interact with the anchor by selecting the cancel control.
S307, each target second object is transmitted from the current position into the interaction area, so that the first object interacts with each target second object.
Alternatively, after the target audience is selected, the present application may transmit the target audience from the current location into the interaction zone, such that the anchor located in the interaction zone may perform a virtual interaction operation with the target audience.
In some optional embodiments, before the target audience is transmitted to the interaction area, a first interaction prompt interface or a second interaction prompt interface may be optionally displayed to each target audience, so that the target audience is informed of the impending interaction with the anchor through the first interaction prompt interface or the second interaction prompt interface, and the target audience can prepare the interaction content with the anchor. The specific implementation process may refer to the S204 portion in the foregoing embodiment, and will not be described herein in detail.
When the target audience selects any control in a popup, the present application can output first vibration feedback to the target audience through the external device, so that the target audience knows that the presented popup has been responded to. The vibration frequency, vibration intensity, and vibration duration of the first vibration feedback can be set flexibly according to the feedback requirement, and the present application does not specifically limit this. Illustratively, the vibration frequency of the first vibration feedback may be 127 Hz, the vibration intensity 0.8, and the vibration duration 15 ms.
It should be noted that before the target audience selects any control in the popup, the cursor can hover over that control; at this time, the external device can be controlled to output second vibration feedback, different from the first vibration feedback, to the audience. The vibration frequency, vibration intensity, and vibration duration of the second vibration feedback can likewise be set flexibly according to the feedback requirement, and the present application does not specifically limit this. Illustratively, the vibration frequency of the second vibration feedback may be 177 Hz, the vibration intensity 0.13, and the vibration duration 5 ms.
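The two haptic profiles can be written down as configuration values. The `VibrationFeedback` type and event names below are illustrative, while the numeric values are the examples given in the text:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class VibrationFeedback:
    frequency_hz: float
    intensity: float        # normalized 0.0 .. 1.0
    duration_ms: int

# First feedback: a popup control was actually selected
SELECT_FEEDBACK = VibrationFeedback(frequency_hz=127, intensity=0.8, duration_ms=15)
# Second feedback: the cursor merely hovers over a control
HOVER_FEEDBACK = VibrationFeedback(frequency_hz=177, intensity=0.13, duration_ms=5)

def feedback_for(event):
    """Pick the profile for a hover or select event on a popup control."""
    return SELECT_FEEDBACK if event == "select" else HOVER_FEEDBACK
```

Keeping the hover profile weaker and shorter than the select profile mirrors the distinction the text draws between the two feedbacks.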
According to the technical scheme provided by the embodiment of the application, the current environmental information of the virtual space is adjusted to the target environmental information corresponding to the interactive operation according to the interactive operation triggered by the first object in the virtual space, so that the first object can perform virtual interaction with any second object in the virtual space in the interactive area in the target environmental information, the interactive mode of the virtual space can be enriched, the interactive interestingness of the virtual space is enhanced, and the interactive enthusiasm of virtual interaction between audiences and anchor can be improved. In addition, the application can selectively present different prompt popups to each second object according to the state information of the second object in the virtual space, so that the second object can autonomously select whether to interact with the first object or not based on the prompt popups, thereby ensuring the interactive control right and the interactive autonomy of the audience under the condition of providing interaction with the host for the audience.
In some optional embodiments, after the current environment information of the virtual space is adjusted to the target environment information according to the interaction triggering operation sent by the first object, an interaction application control can optionally be presented on the boundary of the interaction area, and any second object can then apply to interact with the first object by selecting the interaction application control. Furthermore, the present application can screen the target second objects from all second objects that selected the interaction application control. As shown in fig. 12, the method may include the following steps:
s401, according to the interaction triggering operation sent by the first object, adjusting the current environment information of the virtual space into target environment information, wherein the target environment information comprises: the first object is positioned in the interaction area.
And S402, presenting an interactive application control on the boundary position of the interactive area in the virtual space.
In the present application, the interaction application control can change along with the viewer's viewing angle. That is, the interaction application control always remains perpendicular to the viewer's viewing direction and moves along the boundary of the interaction area as the viewing direction changes. Exemplarily, as shown in fig. 13, the interaction application control is located at the boundary of the interaction area and is perpendicular to the viewer's viewing direction. In addition, a thumbnail showing the number of audiences that have applied is displayed near the interaction application control.
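Keeping the control perpendicular to the viewing direction is the standard "billboarding" technique. A minimal yaw-only sketch, assuming a y-up coordinate system (the coordinate conventions and function name are assumptions, not from the patent):

```python
import math

def billboard_yaw(control_pos, viewer_pos):
    """Yaw (radians, about the y axis) that turns the control to face the viewer."""
    dx = viewer_pos[0] - control_pos[0]
    dz = viewer_pos[2] - control_pos[2]
    return math.atan2(dx, dz)   # direction from control back toward viewer

# Viewer due "east" of the control -> quarter turn toward the viewer
yaw = billboard_yaw((0.0, 0.0, 0.0), (1.0, 0.0, 0.0))
```

Re-evaluating this yaw every frame as the viewer moves keeps the control face-on, which is what "moves along with the viewing direction" describes.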
S403, responding to the selected operation of any second object for the interactive application control, and determining that the second object is a candidate second object.
When any audience wants to interact virtually with the anchor, the audience can select the interaction application control at the boundary of the interaction area to register for the interaction application with the anchor. After any audience selects the interaction application control, a registration-success prompt is displayed to that audience in the virtual space, so that the audience knows from the prompt that the registration succeeded.
In some alternative embodiments, it is contemplated that an audience needs to hold the interaction right when applying to interact with the anchor. Therefore, after the selection operation of any audience on the interaction application control is determined, the method optionally further includes the following steps: determining whether each second object has the interaction right; and if any second object does not have the interaction right, presenting an authorization popup in the virtual space so that the second object can apply for the interaction right based on the authorization popup. When applying for the interaction right, the second object can do so by checking certain agreements or interaction application terms in the authorization popup, or by clicking the authorization control.
Furthermore, after the audiences applying to interact with the anchor have applied for the interaction right, the present application can determine the audiences holding the interaction right as candidate audiences. This ensures that the target audiences screened from the candidate audiences can interact with the anchor normally, improving the interaction effect and the interaction quality.
In some alternative embodiments, there may be viewers within the virtual space who maliciously disrupt the anchor's live broadcast. Such a viewer can repeatedly select the interaction application control and then cancel the interaction application, so that normal viewers cannot establish a normal interaction relationship with the anchor. Note that after the interaction application control is selected, the control at the boundary of the interaction area becomes a cancel-application control.
Therefore, when it is detected that a viewer repeatedly selects the interaction application control and then cancels the interaction application, the present application presents preset information to that viewer, such as "System abnormal, please retry later" or "Connection failed, please retry later", so as to prevent malicious viewers from disturbing the interaction between the anchor and normal viewers.
In some alternative embodiments, considering that the interaction link has a time limit and/or the virtual space supports a limited number of interacting people, the present application may also detect the number of viewers selecting the interaction application control. When that number reaches the maximum, early-warning information is displayed to viewers who subsequently select the interaction application control, for example, "The number of people queued has reached the upper limit, please retry later". This prevents the electronic device from freezing or crashing because it has to process too much data when too many audiences apply to interact with the anchor at the same time.
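The anti-abuse check and the queue cap can be combined into one gatekeeper sketch; the thresholds and message strings below are assumptions for illustration:

```python
MAX_QUEUE = 100     # assumed cap on simultaneous applicants
MAX_TOGGLES = 3     # assumed apply/cancel cycles tolerated per viewer

def try_apply(viewer, queue, toggle_counts):
    """Admit a viewer to the application queue, or return a preset message."""
    if toggle_counts.get(viewer, 0) >= MAX_TOGGLES:
        return "System abnormal, please retry later"    # malicious-toggle guard
    if len(queue) >= MAX_QUEUE:
        return "The number of people queued has reached the upper limit"
    queue.append(viewer)
    return "registered"

def cancel_apply(viewer, queue, toggle_counts):
    """Cancel an application and count the toggle against the viewer."""
    if viewer in queue:
        queue.remove(viewer)
        toggle_counts[viewer] = toggle_counts.get(viewer, 0) + 1

queue, toggles = [], {}
for _ in range(3):                   # three apply-then-cancel cycles
    try_apply("V1", queue, toggles)
    cancel_apply("V1", queue, toggles)
result = try_apply("V1", queue, toggles)   # now rejected with preset information
```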
S404, screening at least one target second object from the candidate second objects according to a preset screening rule.
In the application, the preset screening rules can be some screening strategies set according to actual interaction requirements. Optionally, the preset screening rule in the present application may include the following:
first, screening candidate second objects with identification information being preset identification information as target second objects.
Second, based on the selection time of the control, screening a preset number of target second objects from the candidate second objects according to the sequence from front to back, wherein the control is an interactive application control.
The preset number can be determined based on preset interaction time and preset interaction time with each audience.
And, the earlier (smaller) the trigger time that the viewer selected the interactive application control, the greater the probability that the viewer was selected as the target viewer.
In some alternative embodiments, it is contemplated that the anchor or the server may be unable to determine the target audience from the candidate audiences that selected the interaction application control. In other words, the particular audience that the anchor wants to interact with is not among the audiences actively applying for interaction with the anchor. At this point, the anchor may screen the target audience from the remaining audiences that have not actively applied for interaction. Optionally, screening the target audience from the remaining audiences that have not actively applied for interaction includes the following steps:
Step 21, in response to the selection operation of the first object for the remaining second objects, presenting a fourth prompt pop to the selected remaining second objects in the virtual space, where the fourth prompt pop includes: fourth prompt message and fourth control.
And step 22, determining the selected remaining second object as a target second object in response to the selected operation of the arbitrarily selected remaining second object on the fourth control in the fourth prompt popup.
The remaining second objects are second objects of the unselected interactive application control.
Illustratively, assume the remaining viewers that have not actively applied for interaction with the anchor are viewer X21, viewer X22, viewer X23, viewer X24, viewer X25, and viewer X26. Then, when the anchor selects the remaining audience X22 based on the remaining-audience identification information, a fourth prompt popup for the anchor's interaction invitation may be displayed to the audience X22. The fourth prompt information included in the fourth prompt popup may be, but is not limited to, "The anchor invites you to interact together", and the fourth control may be an accept-invitation control and a reject control. If the audience X22 selects the accept-invitation control in the fourth prompt popup, the audience X22 is determined to be a target second object. If the audience X22 selects the reject control in the fourth prompt popup, the audience X22 is determined to be a non-target second object.
In some alternative embodiments, it is contemplated that an arbitrarily selected remaining viewer may be performing a barrage (bullet-screen comment) editing or expression editing operation at the moment the fourth prompt popup is presented. In that case, the viewer sees a mini version of the fourth prompt popup displayed in the virtual space. The mini version may optionally be displayed in any blank area of the virtual space other than the editing area. The advantage of this is that an interaction invitation can be sent to the second object without obscuring the viewer's normal editing operations.
After the mini version of the fourth prompt popup is displayed to the selected remaining audience in the virtual space, if the selected remaining audience exits the editing operation, the mini version is restored to the normal version of the fourth prompt popup.
And S405, transmitting each target second object from the current position into the interaction area so that the first object interacts with each target second object.
The implementation principle of S405 is the same as that of S307, and the specific implementation process can refer to the aforementioned S307, which is not described in detail herein.
According to the technical scheme provided by the embodiment of the application, the current environmental information of the virtual space is adjusted to the target environmental information corresponding to the interactive operation according to the interactive operation triggered by the first object in the virtual space, so that the first object can perform virtual interaction with any second object in the virtual space in the interactive area in the target environmental information, the interactive mode of the virtual space can be enriched, the interactive interestingness of the virtual space is enhanced, and the interactive enthusiasm of virtual interaction between audiences and anchor can be improved. In addition, the interactive application control is displayed on the boundary position of the interactive area, so that the audience can autonomously select whether to select the interactive application control to interact with the anchor, the implementation mode of the interaction between the audience application and the anchor can be enriched, and the personalized interaction requirement of the audience is met.
In some alternative embodiments, at least one interaction position may be set in the interaction area in the present application, as shown in fig. 14, the virtual interaction method may include the following steps:
s501, according to the interaction triggering operation sent by the first object, adjusting the current environment information of the virtual space into target environment information, wherein the target environment information comprises: the first object is positioned in the interaction area.
S502, at least one target second object is selected from at least two second objects in the virtual space in response to the selection operation for the second objects.
S503, transmitting each target second object from the current position to the corresponding interaction position in the interaction area, so that the first object interacts with each target second object.
In the present application, the shape of the interaction site may be circular, square, or other shapes. For example, as shown in fig. 15a, the number of interaction sites in the interaction area is 6, and each interaction site is cylindrical in shape.
In some alternative implementations, when a single interaction site is disposed in the interaction area, the selected target second object may be transferred to that interaction site. When at least two interaction sites are disposed in the interaction area, each target second object may be transferred to its corresponding interaction site in the following ways, but not limited to:
In one mode, as shown in fig. 15b, the target second objects are transferred to the corresponding interaction sites in order from the center outward to the two sides.
The earlier a target second object applied for interaction, the closer its interaction site is to the center, and thus the closer it stands to the front of the anchor.
And secondly, transmitting the target second object to the corresponding interaction position according to the left-to-right sequence.
And thirdly, transmitting the target second object to the corresponding interaction position according to the order from right to left.
Considering that the interaction sites at the central location are closer to the anchor and offer a frontal view of the anchor, the first mode is preferred in the present application: the target audiences are transferred to the corresponding interaction sites in order from earliest to latest according to the time at which each target audience applied for interaction. Thus, target audiences that applied for interaction earlier are preferentially transferred to interaction sites near the center, closer to the anchor.
In the present application, after a target audience is transferred to the corresponding interaction site, the target audience can move and turn the viewing angle at that site but cannot leave it. This prevents the interaction link between the target audience and the anchor from being interrupted by the audience leaving the interaction site.
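The preferred center-outward transfer order can be sketched as a slot assignment. The alternating left/right tie-break is an assumption, since the text only specifies ordering "from the center to the two sides":

```python
def center_out_order(num_sites):
    """Site indices ordered center-first, then alternating outward."""
    center = (num_sites - 1) / 2
    return sorted(range(num_sites), key=lambda i: (abs(i - center), i))

def assign_sites(targets_earliest_first, num_sites):
    """Earlier applicants (front of the list) get sites nearer the center."""
    order = center_out_order(num_sites)
    return {v: order[k] for k, v in enumerate(targets_earliest_first[:num_sites])}

# Six sites as in fig. 15a: fill order is [2, 3, 1, 4, 0, 5]
sites = assign_sites(["A", "B", "C"], 6)
```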
According to the technical scheme provided by the embodiment of the application, the current environmental information of the virtual space is adjusted to the target environmental information corresponding to the interactive operation according to the interactive operation triggered by the first object in the virtual space, so that the first object can perform virtual interaction with any second object in the virtual space in the interactive area in the target environmental information, the interactive mode of the virtual space can be enriched, the interactive interestingness of the virtual space is enhanced, and the interactive enthusiasm of virtual interaction between audiences and anchor can be improved. In addition, the interactive positions are arranged in the interactive areas, so that target audiences can be transmitted to the corresponding interactive positions, the anchor can perform virtual interaction with the target audiences on each interactive position, the interactive mode of the virtual space is further enriched, and the virtual interaction diversification requirements of users are met.
In some optional embodiments, after each target second object is transferred from the current position into the interaction area, an interactive control panel may be optionally presented in the virtual space, so that the target second object may interact with the first object based on the interactive control panel. As shown in fig. 16, the step S103 may optionally include the following steps:
S601, presenting an interactive control panel in a virtual space, wherein the interactive control panel comprises: and exiting the interactive control.
S602, responding to the selected operation of any target second object for exiting the interactive control, and presenting a fifth prompt popup in the virtual space, wherein the fifth prompt popup comprises: fifth prompt information and a fifth control.
And S603, controlling the target second object to exit interaction with the first object in response to the selected operation of any target second object for the fifth control.
Optionally, after the target audience enters the interaction area, the present application can present interaction prompt information and the interaction operation panel to the target audience. The interaction prompt information may be, for example, "You have entered the interaction; come and interact with the anchor". The interaction operation panel includes: an exit-interaction control and an interaction control, where the interaction control may be a microphone control. The microphone control is emphasized in the interaction control panel while the exit-interaction control is de-emphasized, so as to save the target audience's time in the interaction area as much as possible through this display mode.
Furthermore, the target audience's microphone control can be opened by the target audience or by the anchor, so that the microphone associated with the microphone control can collect the voice information output by the target audience, which is then used for communication and interaction with the anchor.
During the interaction with the anchor, if any target audience makes inappropriate comments, the anchor can forcibly end that target audience's interaction operation so as to kick the target audience out of the interaction link.
In addition, if any target audience needs to send a barrage during the interaction, the voice input control in the barrage editing panel is set to an unavailable state and only the keyboard-input barrage function is retained, so that the target audience can interact with the anchor normally through the microphone control.
In some alternative embodiments, during the interaction between a target audience and the anchor, if any target audience needs to exit the current interaction, the target audience may select the exit-interaction control in the interaction control panel to send an exit-interaction operation to the electronic device. When the electronic device receives the exit-interaction operation sent by any target audience, it presents a fifth prompt popup to that target audience to ask whether the target audience is sure about leaving the interaction with the anchor. If the target audience decides to leave, the target audience may select a fifth control (e.g., a confirm control) in the fifth prompt popup to tell the electronic device that leaving is confirmed. The electronic device then determines, from the confirm control selected by the target audience, that the target audience is about to exit the interaction with the anchor, and controls the target audience to return from the interaction area to the non-interaction area, where the target audience watches the interaction between the anchor and the other target audiences as an ordinary viewer. If the target audience does not want to leave the interaction with the anchor, the target audience can select the cancel control in the fifth prompt popup to tell the electronic device not to leave the anchor's interaction link, so that the electronic device keeps the target audience's interaction capability in the interaction area.
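The confirm/cancel branch above reduces to a tiny state transition; the state names in this sketch are illustrative, not from the patent:

```python
def handle_exit_confirmation(state, confirmed):
    """Viewer leaves the interaction area only after confirming via the fifth control."""
    if state != "interacting":
        return state                 # nothing to exit
    return "spectating" if confirmed else "interacting"
```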
In some alternative embodiments, it may be desirable to leave the virtual space during the target audience interaction with the anchor. At this time, the target audience may trigger the shortcut panel through the external device, and when the departure control on the shortcut surface is selected, the electronic device presents a sixth prompt popup window in the virtual space. The sixth prompt pop-up window includes: sixth prompt, sixth control and cancel control. The sixth prompt information can be selected as 'you are currently connecting with wheat, whether you leave the live broadcast room', and the sixth control can be selected as a determination control or a leave control. If the target audience determines to leave the virtual space, the target audience may select a sixth control in a sixth prompt pop-up to send an instruction to the electronic device to determine to leave the virtual space. At this time, the electronic device controls the target audience to exit the virtual space according to the determination control selected by the target audience. If the target audience does not want to leave the virtual space, the target audience can select a cancel control in the sixth prompt popup window to send the electronic equipment without leaving the virtual space, so that the electronic equipment continues to keep the interaction capability of the target audience in the interaction area.
In some alternative embodiments, if a target audience member remains in the interaction area until the interaction ends, the present application displays an end-of-interaction notice in the virtual space to inform the target audience member that the interaction session with the anchor has ended. The target audience member can then continue to watch the anchor's live performance in the virtual space. After the interaction is finished, the environment information of the virtual space is automatically switched from the target environment information back to the normal live-broadcast environment information.
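The end-of-interaction switch described above might be sketched roughly as follows, assuming a simple dictionary model of the virtual space; the keys, values, and function name here are hypothetical, not part of the disclosed implementation:

```python
def end_interaction(space: dict) -> dict:
    # Display an end-of-interaction notice and switch the environment
    # information back from the target info to normal live-broadcast info.
    space["notice"] = "interaction_ended"
    space["environment"] = "normal_live"   # hypothetical default environment
    space["interaction_area"] = set()      # the interaction area is dismissed
    return space

space = {"environment": "mic_linking_env", "interaction_area": {"viewer_a"}}
end_interaction(space)
```

After this call, the former target audience members simply continue viewing the live broadcast in the restored environment.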
According to the technical scheme provided by the embodiments of the present application, the current environment information of the virtual space is adjusted, according to an interaction triggering operation initiated by the first object in the virtual space, to the target environment information corresponding to that operation, so that the first object can virtually interact with any second object in the virtual space within the interaction area defined by the target environment information. This enriches the interaction forms of the virtual space, enhances the interaction interestingness of the virtual space, and thereby improves the enthusiasm of audiences for virtual interaction with the anchor.
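The three-step flow described above (adjust the environment information, select target second objects, transfer them into the interaction area) can be sketched as follows. This is a minimal illustration only: all class, method, and value names are hypothetical rather than part of the disclosed implementation.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualSpace:
    # Hypothetical model of the virtual space; not the disclosed implementation.
    environment: str = "normal_live"                 # current environment information
    interaction_area: set = field(default_factory=set)

    def adjust_environment(self, interaction_type: str) -> None:
        # Step 1: adjust the current environment information to the target
        # environment information for this interaction type.
        environment_info_list = {"mic_linking": "mic_linking_env"}
        self.environment = environment_info_list.get(interaction_type, self.environment)

    def select_targets(self, second_objects: list, limit: int) -> list:
        # Step 2: select at least one target second object
        # (a simple first-come ordering is assumed here).
        return second_objects[:limit]

    def transfer(self, targets: list) -> None:
        # Step 3: transfer each target from its current position into the
        # interaction area so the first object can interact with it.
        self.interaction_area.update(targets)

space = VirtualSpace()
space.adjust_environment("mic_linking")
targets = space.select_targets(["viewer_a", "viewer_b", "viewer_c"], limit=2)
space.transfer(targets)
```

The selection policy and environment list would of course be richer in practice; the sketch only fixes the order of the three operations.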
A virtual interactive apparatus according to an embodiment of the present application is described below with reference to fig. 17. Fig. 17 is a schematic block diagram of a virtual interactive apparatus according to an embodiment of the present application.
As shown in fig. 17, the virtual interactive apparatus 700 includes: an environment adjustment module 710, an object selection module 720, and an object interaction module 730.
The environment adjustment module 710 is configured to adjust current environment information of the virtual space to target environment information according to an interaction triggering operation sent by the first object, where the target environment information includes: an interaction area, with the first object positioned in the interaction area;
an object selection module 720, configured to select at least one target second object from at least two second objects in the virtual space in response to a selection operation for the second objects;
and an object interaction module 730, configured to transfer each target second object from its current position into the interaction area, so that the first object interacts with each target second object.
In one or more alternative implementations of the embodiments of the present application, the environment adjustment module 710 includes:

an acquisition unit, configured to acquire the target environment information from an environment information list according to the interaction type of the interaction triggering operation sent by the first object;

and an adjustment unit, configured to adjust the current environment information of the virtual space according to the target environment information.
In one or more alternative implementations of the embodiments of the present application, the acquisition unit is specifically configured to:

if the interaction type corresponding to the interaction triggering operation is a mic-linking (co-streaming) interaction, acquire mic-linking environment information from the environment information list as the target environment information.
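The type-keyed lookup just described can be illustrated with a minimal sketch; the contents of the environment information list and the key names are assumptions, not part of the disclosure:

```python
# Hypothetical environment-information list keyed by interaction type;
# the keys and scene names are illustrative only.
ENVIRONMENT_INFO_LIST = {
    "mic_linking": {"scene": "mic_linking_stage", "has_interaction_area": True},
    "quiz": {"scene": "quiz_hall", "has_interaction_area": True},
}

def get_target_environment(interaction_type: str, current_env: dict) -> dict:
    # If the interaction type (e.g. mic-linking) is known, fetch its
    # environment information from the list; otherwise keep the current one.
    return ENVIRONMENT_INFO_LIST.get(interaction_type, current_env)

target_env = get_target_environment("mic_linking", {"scene": "normal_live"})
```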
In one or more alternative implementations of the embodiments of the present application, the object selection module 720 is specifically configured to:

acquire barrage (bullet-screen) information and/or a second object list in the virtual space;

and select at least one target second object from the at least two second objects in the virtual space according to the barrage information and/or the second object list.
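As a rough illustration of selecting targets from barrage information and/or a second object list, one hypothetical policy (viewers who posted barrage messages first, then the listed viewers) might look like this; the policy itself and all field names are assumptions:

```python
def select_target_second_objects(barrage_info, second_object_list, max_targets=3):
    # Hypothetical policy: viewers who posted barrage (bullet-screen)
    # messages are considered first, then the remaining viewers in the
    # second-object list, preserving order and removing duplicates.
    commenters = []
    for msg in barrage_info:
        if msg["sender"] not in commenters:
            commenters.append(msg["sender"])
    ordered = commenters + [o for o in second_object_list if o not in commenters]
    return ordered[:max_targets]

targets = select_target_second_objects(
    barrage_info=[{"sender": "viewer_b", "text": "pick me"}],
    second_object_list=["viewer_a", "viewer_b", "viewer_c"],
    max_targets=2,
)
```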
In one or more alternative implementations, the apparatus of the embodiments of the present application further includes:

a state determining module, configured to determine current state information of each second object in the virtual space;

a first display module, configured to: present a first prompt popup or a second prompt popup in the virtual space if the current state information of any second object is a viewing state, where the first prompt popup includes first prompt information, a first control, and a second control, and the second prompt popup includes second prompt information; and present a third prompt popup in the virtual space if the current state information of any second object is an editing state, where the third prompt popup includes third prompt information and a third control.
In one or more alternative implementations, the apparatus of the embodiments of the present application further includes:

an object determining module, configured to determine any second object as a candidate second object according to that second object's selection of the first control in the first prompt popup or of the third control in the third prompt popup; or, configured to determine any second object presenting the second prompt popup as a candidate second object in response to the distance between that second object and the interaction area being less than a distance threshold.
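The two candidate conditions above (a control selected, or distance to the interaction area below a threshold) might be combined as in the following sketch; the field names and the threshold value are hypothetical:

```python
DISTANCE_THRESHOLD = 5.0  # hypothetical value, in scene units

def is_candidate(second_object: dict) -> bool:
    # A second object becomes a candidate either by selecting the first
    # control (or third control) in its prompt popup, or because its
    # distance to the interaction area is below the distance threshold.
    return (second_object.get("control_selected", False)
            or second_object.get("distance_to_area", float("inf")) < DISTANCE_THRESHOLD)

near = {"distance_to_area": 2.0}
far_confirmed = {"distance_to_area": 12.0, "control_selected": True}
far_silent = {"distance_to_area": 12.0}
```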
In one or more alternative implementations, the apparatus of the embodiments of the present application further includes:

a second display module, configured to present an interactive application control at the boundary of the interaction area in the virtual space;

and the object determining module is further configured to determine any second object as a candidate second object in response to that second object's selection of the interactive application control.
In one or more alternative implementations of the embodiments of the present application, the object selection module 720 is specifically configured to:

screen at least one target second object from the candidate second objects according to preset screening rules.
In one or more alternative implementations of the embodiments of the present application, the preset screening rules include:

screening, as target second objects, candidate second objects whose identification information matches preset identification information;

or,

screening a preset number of target second objects from the candidate second objects in order from earliest to latest control selection time, where the control is the first control, the third control, or the interactive application control;

or,

screening a preset number of target second objects from the candidate second objects in order from earliest to latest time at which the second objects came into contact with the interaction area.
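The three preset screening rules can be sketched as three small filters; the candidate record fields (`id`, `selected_at`, `contacted_at`) and the preset values are hypothetical:

```python
def screen_by_id(candidates, preset_ids):
    # Rule 1: keep candidates whose identification information matches
    # the preset identification information.
    return [c for c in candidates if c["id"] in preset_ids]

def screen_by_selection_time(candidates, preset_count):
    # Rule 2: earliest control-selection times first.
    return sorted(candidates, key=lambda c: c["selected_at"])[:preset_count]

def screen_by_contact_time(candidates, preset_count):
    # Rule 3: earliest contact with the interaction area first.
    return sorted(candidates, key=lambda c: c["contacted_at"])[:preset_count]

candidates = [
    {"id": "u1", "selected_at": 3.0, "contacted_at": 1.0},
    {"id": "u2", "selected_at": 1.0, "contacted_at": 2.0},
    {"id": "u3", "selected_at": 2.0, "contacted_at": 3.0},
]
```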
In one or more alternative implementations of the embodiments of the present application, the object selection module 720 is specifically configured to:

in response to a selection operation of the first object for the remaining second objects, present a fourth prompt popup to each selected remaining second object in the virtual space, where the fourth prompt popup includes: fourth prompt information and a fourth control;

and in response to any selected remaining second object's selection of the fourth control in the fourth prompt popup, determine that selected remaining second object as a target second object;

where the remaining second objects are second objects that have not selected the first control in the first prompt popup, that have not selected the third control in the third prompt popup and whose distance to the interaction area is greater than or equal to the distance threshold, or that have not selected the interactive application control.
In one or more alternative implementations of the embodiments of the present application, the interaction area includes: at least one interaction site;

accordingly, the object interaction module 730 is specifically configured to:

transfer each target second object from its current position to a corresponding interaction site in the interaction area.
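Assigning each target second object to a corresponding interaction site might be sketched as a simple one-to-one mapping; the site naming and error behavior are hypothetical:

```python
def assign_interaction_sites(targets, free_sites):
    # Transfer each target second object from its current position to a
    # corresponding interaction site; one target occupies one site.
    if len(targets) > len(free_sites):
        raise ValueError("not enough free interaction sites")
    return dict(zip(targets, free_sites))

placement = assign_interaction_sites(["viewer_a", "viewer_b"],
                                     ["site_1", "site_2", "site_3"])
```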
In one or more alternative implementations, the apparatus of the embodiments of the present application further includes:

a third display module, configured to present an interaction control panel in the virtual space, where the interaction control panel includes: an exit interaction control;

a fourth display module, configured to present, in response to any target second object's selection of the exit interaction control, a fifth prompt popup in the virtual space, where the fifth prompt popup includes: fifth prompt information and a fifth control;

and a control module, configured to control, in response to any target second object's selection of the fifth control, that target second object to exit the interaction with the first object.
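The exit-interaction flow driven by the fifth prompt popup might be sketched as follows; the control names and return values are hypothetical:

```python
def handle_exit_prompt(selected_control, interaction_area, target):
    # Fifth prompt popup: the fifth (confirmation) control removes the
    # target from the interaction area; the cancel control keeps it there.
    if selected_control == "confirm":
        interaction_area.discard(target)
        return "returned_to_non_interaction_area"
    return "remains_in_interaction_area"

area = {"viewer_a", "viewer_b"}
result = handle_exit_prompt("confirm", area, "viewer_a")
```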
In one or more alternative implementations, the apparatus of the embodiments of the present application further includes:

a fifth display module, configured to present, in response to an exit operation of any target second object for the virtual space, a sixth prompt popup in the virtual space, where the sixth prompt popup includes: sixth prompt information and a sixth control;

and the control module is further configured to control, in response to any target second object's selection of the sixth control, that target second object to exit the virtual space.
In one or more alternative implementations of the embodiments of the present application, the interactive region presents a first visual effect, and the interface between the interactive region and the non-interactive region presents a second visual effect.
In one or more alternative implementations of the embodiments of the present application, the object interaction module 730 is specifically configured to: conduct mic-linking (co-streaming) interaction between the first object and each target second object.
It should be understood that apparatus embodiments and the foregoing method embodiments may correspond to each other, and similar descriptions may refer to the method embodiments. To avoid repetition, no further description is provided here. Specifically, the apparatus 700 shown in fig. 17 may perform the method embodiment corresponding to fig. 2, and the foregoing and other operations and/or functions of each module in the apparatus 700 are respectively for implementing the corresponding flow in each method in fig. 2, and are not further described herein for brevity.
The apparatus 700 of the embodiment of the present application is described above in terms of functional modules in conjunction with the accompanying drawings. It should be understood that a functional module may be implemented in hardware, by instructions in software, or by a combination of hardware and software modules. Specifically, each step of the method embodiment of the first aspect of the present application may be implemented by an integrated logic circuit of hardware in a processor and/or by instructions in software, and the steps of the method of the first aspect disclosed in connection with the embodiments of the present application may be directly performed by a hardware decoding processor or by a combination of hardware and software modules in the decoding processor. Alternatively, the software modules may be located in a storage medium well established in the art, such as a random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or registers. The storage medium is located in the memory, and the processor reads the information in the memory and, in combination with its hardware, performs the steps of the method embodiment of the first aspect.
Fig. 18 is a schematic block diagram of an electronic device according to an embodiment of the present application. As shown in fig. 18, the electronic device 800 may include:
a memory 810 and a processor 820, the memory 810 being for storing a computer program and transmitting the program code to the processor 820. In other words, the processor 820 may call and run a computer program from the memory 810 to implement the virtual interaction method in the embodiment of the present application.
For example, the processor 820 may be configured to execute the virtual interactive method embodiments described above according to instructions in the computer program.
In some embodiments of the application, the processor 820 may include, but is not limited to:
a general purpose processor, digital signal processor (Digital Signal Processor, DSP), application specific integrated circuit (Application Specific Integrated Circuit, ASIC), field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like.
In some embodiments of the application, the memory 810 includes, but is not limited to:
volatile memory and/or nonvolatile memory. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), and Direct Rambus RAM (DR RAM).
In some embodiments of the present application, the computer program may be partitioned into one or more modules that are stored in the memory 810 and executed by the processor 820 to perform the virtual interaction method provided by the present application. The one or more modules may be a series of computer program instruction segments capable of performing the specified functions, which are used to describe the execution of the computer program in the electronic device.
As shown in fig. 18, the electronic device 800 may further include:
a transceiver 830, the transceiver 830 being connectable to the processor 820 or the memory 810.
Processor 820 may control transceiver 830 to communicate with other devices; in particular, it may send information or data to other devices or receive information or data sent by other devices. Transceiver 830 may include a transmitter and a receiver. Transceiver 830 may further include one or more antennas.
It will be appreciated that the various components in the electronic device are connected by a bus system that includes, in addition to a data bus, a power bus, a control bus, and a status signal bus.
The application also provides a computer readable storage medium for storing a computer program, wherein the computer program enables a computer to execute the virtual interaction method according to the embodiment of the method.
The embodiment of the application also provides a computer program product containing program instructions, which when run on electronic equipment, cause the electronic equipment to execute the virtual interaction method of the embodiment of the method.
When implemented in software, the above embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a digital video disc (DVD)), or a semiconductor medium (e.g., a solid state disk (SSD)), etc.
Those of ordinary skill in the art will appreciate that the various illustrative modules and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the several embodiments provided by the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be additional divisions when actually implemented, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or modules, which may be in electrical, mechanical, or other forms.
The modules illustrated as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules, i.e., may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. For example, functional modules in various embodiments of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (19)

1. A method of virtual interaction, comprising:
according to an interaction triggering operation sent by a first object, adjusting current environment information of a virtual space to target environment information, wherein the target environment information comprises: an interaction area, with the first object positioned in the interaction area;
Selecting at least one target second object from at least two second objects in the virtual space in response to a selection operation for the second object;
and transferring each target second object from its current position into the interaction area so that the first object interacts with each target second object.
2. The method according to claim 1, wherein the adjusting the current environmental information of the virtual space to the target environmental information according to the interactive triggering operation sent by the first object includes:
acquiring target environment information from an environment information list according to the interaction type of the interaction triggering operation sent by the first object;
and adjusting the current environment information of the virtual space according to the target environment information.
3. The method according to claim 2, wherein the obtaining the target environmental information from the environmental information list according to the interaction type of the interaction triggering operation sent by the first object includes:
and if the interaction type corresponding to the interaction triggering operation is a mic-linking (co-streaming) interaction, acquiring mic-linking environment information from the environment information list as the target environment information.
4. The method of claim 1, wherein selecting at least one target second object from at least two second objects within the virtual space in response to the selection operation for the second object comprises:
acquiring barrage (bullet-screen) information and/or a second object list in the virtual space;

and selecting at least one target second object from the at least two second objects in the virtual space according to the barrage information and/or the second object list.
5. The method of claim 1, further comprising, after adjusting the current environment information of the virtual space to the target environment information:
determining current state information of each second object in the virtual space;
if the current state information of any second object is a viewing state, a first prompt popup or a second prompt popup is presented in the virtual space, wherein the first prompt popup comprises: the first prompt message, the first control and the second control, the second prompt popup includes: a second prompt message;
if the current state information of any second object is an editing state, a third prompt popup window is presented in the virtual space, and the third prompt popup window comprises: third prompt information and a third control.
6. The method as recited in claim 5, further comprising:
determining that any second object is a candidate second object according to the selected operation of the second object on the first control in the first prompt popup or on the third control in the third prompt popup;
or,
or, determining any second object presenting the second prompt popup as a candidate second object in response to the distance between that second object and the interaction area being less than a distance threshold.
7. The method of claim 1, further comprising, after adjusting the current environment information of the virtual space to the target environment information:
presenting an interactive application control at the boundary position of the interactive area in the virtual space;
and responding to the selected operation of any second object on the interactive application control, and determining the second object as a candidate second object.
8. The method according to claim 6 or 7, wherein selecting at least one target second object from at least two second objects within the virtual space in response to a selection operation for the second object, comprises:
And screening at least one target second object from the candidate second objects according to a preset screening rule.
9. The method of claim 8, wherein the preset screening rules comprise:
screening, as target second objects, candidate second objects whose identification information matches preset identification information;

or,

screening a preset number of target second objects from the candidate second objects in order from earliest to latest control selection time, wherein the control is the first control, the third control, or the interactive application control;

or,

screening a preset number of target second objects from the candidate second objects in order from earliest to latest time at which the second objects came into contact with the interaction area.
10. The method according to claim 5 or 7, wherein selecting at least one target second object from at least two second objects within the virtual space in response to a selection operation for the second object, comprises:
in response to a selection operation of the first object for the remaining second objects, presenting a fourth prompt popup to each selected remaining second object in the virtual space, the fourth prompt popup comprising: fourth prompt information and a fourth control;

and in response to any selected remaining second object's selection of the fourth control in the fourth prompt popup, determining that selected remaining second object as a target second object;

wherein the remaining second objects are second objects that have not selected the first control in the first prompt popup, that have not selected the third control in the third prompt popup and whose distance to the interaction area is greater than or equal to the distance threshold, or that have not selected the interactive application control.
11. The method of claim 1, wherein the interaction zone comprises: at least one interaction site;
correspondingly, the step of transferring each target second object from the current position into the interaction area comprises:

transferring each target second object from its current position to a corresponding interaction site in the interaction area.
12. The method of claim 1 or 11, further comprising, after said transferring each of said target second objects from the current location into said interaction zone:
presenting an interactive control panel in the virtual space, the interactive control panel comprising: exiting the interactive control;
responding to the selected operation of any target second object on the exit interaction control, and presenting a fifth prompt popup in the virtual space, wherein the fifth prompt popup comprises: fifth prompt information and a fifth control;
And responding to the selected operation of any target second object on the fifth control, and controlling the target second object to exit the interaction with the first object.
13. The method according to claim 1 or 11, further comprising:
in response to an exit operation of any target second object for the virtual space, a sixth prompt pop is presented in the virtual space, the sixth prompt pop including: a sixth prompt message and a sixth control;
and responding to the selected operation of any target second object on the sixth control, and controlling the target second object to exit the virtual space.
14. The method of claim 1, wherein a first visual effect is presented in the interactive zone and a second visual effect is presented at a junction of the interactive zone and the non-interactive zone.
15. The method of claim 1, wherein the first object interacts with each of the target second objects, comprising:
conducting mic-linking (co-streaming) interaction between the first object and each of the target second objects.
16. A virtual interactive apparatus, comprising:
an environment adjustment module, configured to adjust current environment information of a virtual space to target environment information according to an interaction triggering operation sent by a first object, wherein the target environment information comprises: an interaction area, with the first object positioned in the interaction area;
The object selection module is used for responding to the selection operation of the second objects and selecting at least one target second object from at least two second objects in the virtual space;
and an object interaction module, configured to transfer each target second object from its current position into the interaction area so that the first object interacts with each target second object.
17. An electronic device, comprising:
a processor and a memory for storing a computer program, the processor for invoking and running the computer program stored in the memory to perform the virtual interaction method of any of claims 1-15.
18. A computer readable storage medium storing a computer program for causing a computer to perform the virtual interaction method of any of claims 1-15.
19. A computer program product comprising program instructions which, when run on an electronic device, cause the electronic device to perform the virtual interaction method of any of claims 1-15.
CN202311061397.0A 2023-08-22 2023-08-22 Virtual interaction method, device, equipment and storage medium Pending CN117221641A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311061397.0A CN117221641A (en) 2023-08-22 2023-08-22 Virtual interaction method, device, equipment and storage medium


Publications (1)

Publication Number Publication Date
CN117221641A (en) 2023-12-12

Family

ID=89039844


Country Status (1)

Country Link
CN (1) CN117221641A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination