CN114845129B - Interaction method, device, terminal and storage medium in virtual space

Info

Publication number
CN114845129B
Authority
CN
China
Prior art keywords
resource
interaction
special effect
terminal
joint
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210450735.9A
Other languages
Chinese (zh)
Other versions
CN114845129A (en)
Inventor
Xie Jinghui (谢京辉)
Shang Peng (尚鹏)
Liu Wansi (刘婉思)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202210450735.9A
Publication of CN114845129A
Application granted
Publication of CN114845129B

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/816Monomedia components thereof involving special video data, e.g. 3D video

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Information Transfer Between Computers (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The disclosure relates to an interaction method, device, terminal and storage medium in a virtual space, and belongs to the field of network technologies. In the method and device, when an audience object in a first virtual space triggers a virtual resource, a first special effect resource of the virtual resource is displayed in the joint picture displayed by the terminal of a first object, and a second special effect resource of the virtual resource is displayed on the terminal of a second object. Each object in the first virtual space of the first object can therefore watch the first special effect resource, and each object in the second virtual space of the second object can watch the second special effect resource. By displaying different parts of the special effect resources of the virtual resource on different terminals, interaction across multiple virtual spaces is achieved, which improves the interaction efficiency between the virtual spaces in the joint virtual space and thereby the human-computer interaction efficiency.

Description

Interaction method, device, terminal and storage medium in virtual space
Technical Field
The disclosure relates to the field of network technologies, and in particular, to an interaction method, device, terminal and storage medium in a virtual space.
Background
With the rapid development of network technology, interacting through a virtual space has gradually entered people's daily life as a form of entertainment. A virtual space is a network space in which live broadcasts can be watched and interaction is supported, such as a network live broadcast room.
An anchor opens a virtual space on a network virtual space platform, and can also perform joint interaction (commonly referred to as "PK") with the virtual spaces of other anchors to form a joint virtual space.
However, during joint interaction, a special effect triggered in an anchor's virtual space can only be displayed in that anchor's own virtual space, which reduces the interaction efficiency between the virtual spaces in the joint virtual space and affects human-computer interaction efficiency.
Disclosure of Invention
The disclosure provides an interaction method, device, terminal and storage medium in a virtual space, so as to at least solve the problem of low interaction efficiency between virtual spaces in a joint virtual space in the related art. The technical solutions of the present disclosure are as follows:
according to a first aspect of embodiments of the present disclosure, there is provided an interaction method in a virtual space, the method including:
displaying a joint picture of a plurality of objects, the joint picture comprising pictures of virtual spaces of the plurality of objects;
when a first audience object in a virtual space of a first object among the plurality of objects triggers a virtual resource, displaying, in the joint picture, a first special effect resource included in the virtual resource;
the virtual resource comprises the first special effect resource and a second special effect resource, the first special effect resource and the second special effect resource correspond to different objects, and the second special effect resource is used for being displayed on a terminal of a second object in the plurality of objects.
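For illustration only, the per-object split of a virtual resource described above can be sketched as the following minimal TypeScript data model; the type and function names are assumptions of this description, not identifiers from the disclosed embodiments:

```typescript
interface EffectResource {
  resourceId: string;      // resource identifier carried in interaction instructions
  targetObjectId: string;  // the anchor object this part of the effect corresponds to
}

interface VirtualResource {
  resourceId: string;
  firstEffect: EffectResource;  // shown in the joint picture on the first object's side
  secondEffect: EffectResource; // shown on the terminal of the second object
}

// Pick the part of the virtual resource that the local terminal should render;
// the other part is rendered by the peer anchor's terminal.
function effectForTerminal(res: VirtualResource, localObjectId: string): EffectResource | undefined {
  if (res.firstEffect.targetObjectId === localObjectId) return res.firstEffect;
  if (res.secondEffect.targetObjectId === localObjectId) return res.secondEffect;
  return undefined;
}
```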
In a possible implementation, the first special effect resource corresponds to a first time period, the second special effect resource corresponds to a second time period, and the end time of the first time period is the start time of the second time period; displaying, in the joint picture, the first special effect resource included in the virtual resource includes:
displaying the first special effect resource in the joint picture within the first time period.
In a possible implementation, the method further includes:
displaying the second special effect resource in the joint picture.
In a possible implementation, displaying, in the joint picture, the first special effect resource included in the virtual resource includes:
displaying the first special effect resource in the joint picture within the first time period;
and displaying the second special effect resource in the joint picture includes:
displaying the second special effect resource in the joint picture within the second time period.
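A minimal sketch of the back-to-back time periods just described, assuming hypothetical render()/clear() callbacks and millisecond durations (none of which are specified by the disclosure):

```typescript
// Back-to-back playback: the first effect's end time is the second effect's
// start time, so the two effects read as one continuous animation.
function playSequentially(
  render: (resourceId: string) => void,
  clear: () => void,
  first: { resourceId: string; durationMs: number },
  second: { resourceId: string; durationMs: number },
): void {
  render(first.resourceId);               // first time period begins
  setTimeout(() => {
    clear();
    render(second.resourceId);            // second period starts exactly when the first ends
    setTimeout(clear, second.durationMs); // second time period ends
  }, first.durationMs);
}
```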
In a possible implementation, displaying, in the joint picture, the first special effect resource included in the virtual resource includes:
playing, in the joint picture, a joint video stream of the plurality of objects, where the joint video stream includes first rendering data of the first special effect resource, and the first rendering data is used for displaying the first special effect resource;
and providing the joint video stream to a terminal of the first audience object.
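One way to realize the rendering-data arrangement above is to composite the effect into each outgoing frame before encoding, so that audience terminals see it simply by playing the joint video stream. The following sketch assumes hypothetical drawEffect() and encodeAndPush() helpers in place of an actual renderer and encoder:

```typescript
interface Frame {
  pixels: Uint8Array;
  width: number;
  height: number;
}

// Stand-ins for the client's actual renderer and stream encoder.
declare function drawEffect(frame: Frame, effectRenderData: Uint8Array): Frame;
declare function encodeAndPush(frame: Frame): void;

// Composite the first effect into each outgoing joint-picture frame, so the
// effect travels inside the joint video stream itself.
function onJointFrame(frame: Frame, firstRenderData: Uint8Array | null): void {
  const out = firstRenderData ? drawEffect(frame, firstRenderData) : frame;
  encodeAndPush(out); // first audience terminals pull this stream and thus see the effect
}
```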
In a possible implementation, the method further includes:
receiving a first interaction instruction of the virtual resource, where the first interaction instruction carries a resource identifier of the second special effect resource;
and if the first interaction instruction also carries an identifier of the second object, displaying the second special effect resource in the joint picture.
In a possible implementation, displaying the second special effect resource in the joint picture includes:
if the first interaction instruction carries a display position identifier, displaying the second special effect resource at a target display position indicated by the display position identifier in the joint picture.
In a possible implementation, if the first interaction instruction carries a display position identifier, displaying the second special effect resource at the target display position indicated by the display position identifier in the joint picture includes:
if the display position identifier indicates a target part of an object, identifying the target part of the second object in the joint picture;
and displaying the second special effect resource at the identified target part of the second object.
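A sketch of resolving a display position identifier to a target part, assuming a hypothetical detectBodyPart() recognizer and a "part:" prefix convention for position identifiers (neither is specified by the disclosure):

```typescript
type Rect = { x: number; y: number; width: number; height: number };

// Stand-in for whatever face/pose recognition the client runs on the
// joint-picture video frame; not a real API.
declare function detectBodyPart(objectId: string, part: string): Rect | null;

// Resolve a display position identifier to screen coordinates. An identifier
// such as "part:head" indicates a body part of the target object rather than
// a fixed layout slot.
function resolveDisplayPosition(positionId: string, targetObjectId: string): Rect | null {
  if (positionId.startsWith("part:")) {
    const part = positionId.slice("part:".length);
    return detectBodyPart(targetObjectId, part); // e.g. the head region of the second object
  }
  return null; // fixed layout slots would be handled here
}
```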
In a possible implementation, the virtual resource includes a plurality of second special effect resources corresponding to different second objects.
In a possible implementation, the method further includes:
providing, through a first communication channel, a first interaction instruction to the terminal of each of at least one second object among the plurality of objects, where the first interaction instruction is used to instruct the terminal of one second object to display a second special effect resource, and the first communication channel is a communication channel between anchor terminals.
In a possible implementation, the method further includes:
providing a third interaction instruction to the terminal of the first audience object through a second communication channel, where the third interaction instruction is used to instruct the terminal of the first audience object to display the first special effect resource, and the second communication channel is a communication channel between an anchor terminal and an audience terminal;
and providing a fourth interaction instruction to the terminal of the second object through the first communication channel, where the fourth interaction instruction is used to instruct a terminal of a second audience object in the virtual space of the second object to display the second special effect resource.
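The two communication channels and the instruction fan-out just described might look roughly as follows; the Channel interface and the instruction payload shapes are illustrative assumptions, not a real messaging API:

```typescript
interface Channel {
  send(to: string, payload: object): void;
}

function fanOutInstructions(
  anchorChannel: Channel,   // first communication channel: anchor terminal <-> anchor terminal
  audienceChannel: Channel, // second communication channel: anchor terminal <-> audience terminal
  secondAnchorTerminalId: string,
  firstAudienceTerminalId: string,
  firstEffectId: string,
  secondEffectId: string,
): void {
  // First instruction: tell the second anchor's terminal to display the second effect.
  anchorChannel.send(secondAnchorTerminalId, { kind: "first", effect: secondEffectId });
  // Third instruction: tell the triggering viewer's terminal to display the first effect.
  audienceChannel.send(firstAudienceTerminalId, { kind: "third", effect: firstEffectId });
  // Fourth instruction: also relayed over the anchor channel; the second anchor's
  // terminal forwards it to the audience terminals in its own virtual space.
  anchorChannel.send(secondAnchorTerminalId, { kind: "fourth", effect: secondEffectId });
}
```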
According to a second aspect of embodiments of the present disclosure, there is provided an interaction method in a virtual space, the method including:
displaying a joint picture of a plurality of objects, the joint picture comprising pictures of virtual spaces of the plurality of objects;
when a first audience object in a virtual space of a first object among the plurality of objects triggers a virtual resource, displaying, in the joint picture, a second special effect resource included in the virtual resource;
the virtual resource comprises a first special effect resource and a second special effect resource, the first special effect resource and the second special effect resource correspond to different objects, and the first special effect resource is used for being displayed on a terminal of the first object.
In a possible implementation, the first special effect resource corresponds to a first time period, the second special effect resource corresponds to a second time period, and the end time of the first time period is the start time of the second time period; displaying, in the joint picture, the second special effect resource included in the virtual resource includes:
displaying the second special effect resource in the joint picture within the second time period, after the first time period.
In a possible implementation, the method further includes:
displaying the first special effect resource in the joint picture.
In a possible implementation, displaying the first special effect resource in the joint picture includes:
displaying the first special effect resource in the joint picture within the first time period;
and displaying, in the joint picture, the second special effect resource included in the virtual resource includes:
displaying the second special effect resource in the joint picture within the second time period.
In a possible implementation, displaying, in the joint picture, the second special effect resource included in the virtual resource includes:
playing, in the joint picture, a joint video stream of the plurality of objects, where the joint video stream includes second rendering data of the second special effect resource, and the second rendering data is used for displaying the second special effect resource;
and transmitting the joint video stream to a terminal of a second audience object in the virtual space of a second object among the plurality of objects.
In a possible implementation, displaying, in the joint picture, the second special effect resource corresponding to the virtual resource includes:
receiving a first interaction instruction, where the first interaction instruction carries an identifier of the second special effect resource;
and if the first interaction instruction also carries an identifier of a second object among the plurality of objects, displaying the second special effect resource in the joint picture.
In a possible implementation, displaying the second special effect resource in the joint picture includes:
if the first interaction instruction carries a display position identifier, displaying the second special effect resource at a target display position indicated by the display position identifier in the joint picture.
In a possible implementation, if the first interaction instruction carries a display position identifier, displaying the second special effect resource at the target display position indicated by the display position identifier in the joint picture includes:
if the display position identifier indicates a target part of an object, identifying the target part of the second object in the joint picture based on the identifier of the second object;
and displaying the second special effect resource at the identified target part of the second object.
In a possible implementation, receiving the first interaction instruction includes:
receiving the first interaction instruction through a first communication channel, where the first communication channel is a communication channel between anchor terminals.
In a possible implementation, the method further includes:
receiving a fourth interaction instruction through the first communication channel, where the fourth interaction instruction is used to instruct a terminal of a second audience object in the virtual space of the second object to display the second special effect resource;
and providing the fourth interaction instruction to the terminal of the second audience object through a second communication channel, where the second communication channel is a communication channel between an anchor terminal and an audience terminal.
According to a third aspect of embodiments of the present disclosure, there is provided an interaction device in a virtual space, the device including:
a display unit configured to display a joint picture of a plurality of objects, the joint picture including pictures of the virtual spaces of the plurality of objects;
the display unit being further configured to display, in the joint picture, a first special effect resource included in a virtual resource in a case where a first audience object in a virtual space of a first object of the plurality of objects triggers the virtual resource;
where the virtual resource includes the first special effect resource and a second special effect resource, the first special effect resource and the second special effect resource correspond to different objects, and the second special effect resource is used for being displayed on a terminal of a second object among the plurality of objects.
In a possible implementation, the first special effect resource corresponds to a first time period, the second special effect resource corresponds to a second time period, and the end time of the first time period is the start time of the second time period; the display unit is configured to:
display the first special effect resource in the joint picture within the first time period.
In a possible implementation, the display unit is further configured to:
display the second special effect resource in the joint picture.
In a possible implementation, the display unit is further configured to:
display the first special effect resource in the joint picture within the first time period;
and display the second special effect resource in the joint picture within the second time period.
In a possible implementation, the display unit is further configured to:
play, in the joint picture, a joint video stream of the plurality of objects, where the joint video stream includes first rendering data of the first special effect resource, and the first rendering data is used for displaying the first special effect resource;
and provide the joint video stream to a terminal of the first audience object.
In a possible implementation, the display unit includes:
a receiving subunit configured to receive a first interaction instruction of the virtual resource, where the first interaction instruction carries a resource identifier of the second special effect resource;
and a display subunit configured to display the second special effect resource in the joint picture if the first interaction instruction also carries an identifier of the second object.
In a possible implementation, the display subunit is further configured to:
if the first interaction instruction carries a display position identifier, display the second special effect resource at a target display position indicated by the display position identifier in the joint picture.
In a possible implementation, the display subunit is further configured to:
if the display position identifier indicates a target part of an object, identify the target part of the second object in the joint picture;
and display the second special effect resource at the identified target part of the second object.
In a possible implementation, the virtual resource includes a plurality of second special effect resources corresponding to different second objects.
In a possible implementation, the device further includes:
a providing unit configured to provide, through a first communication channel, a first interaction instruction to the terminal of each of at least one second object among the plurality of objects, where the first interaction instruction is used to instruct the terminal of one second object to display a second special effect resource, and the first communication channel is a communication channel between anchor terminals.
In a possible implementation, the providing unit is further configured to:
provide a third interaction instruction to the terminal of the first audience object through a second communication channel, where the third interaction instruction is used to instruct the terminal of the first audience object to display the first special effect resource, and the second communication channel is a communication channel between an anchor terminal and an audience terminal;
and provide a fourth interaction instruction to the terminal of the second object through the first communication channel, where the fourth interaction instruction is used to instruct a terminal of a second audience object in the virtual space of the second object to display the second special effect resource.
According to a fourth aspect of embodiments of the present disclosure, there is provided an interaction device in a virtual space, the device comprising:
a display unit configured to display a joint picture of a plurality of objects, the joint picture including pictures of the virtual spaces of the plurality of objects;
the display unit being further configured to display, in the joint picture, a second special effect resource included in a virtual resource in a case where a first audience object in a virtual space of a first object of the plurality of objects triggers the virtual resource;
where the virtual resource includes a first special effect resource and the second special effect resource, the first special effect resource and the second special effect resource correspond to different objects, and the first special effect resource is used for being displayed on a terminal of the first object.
In a possible implementation, the first special effect resource corresponds to a first time period, the second special effect resource corresponds to a second time period, and the end time of the first time period is the start time of the second time period; the display unit is configured to:
display the second special effect resource in the joint picture within the second time period, after the first time period.
In a possible implementation, the display unit is further configured to:
display the first special effect resource in the joint picture.
In a possible implementation, the display unit is further configured to:
display the first special effect resource in the joint picture within the first time period;
and display the second special effect resource in the joint picture within the second time period.
In a possible implementation, the display unit is configured to:
play, in the joint picture, a joint video stream of the plurality of objects, where the joint video stream includes second rendering data of the second special effect resource, and the second rendering data is used for displaying the second special effect resource;
and transmit the joint video stream to a terminal of a second audience object in the virtual space of a second object among the plurality of objects.
In a possible implementation, the display unit includes:
a receiving subunit configured to receive a first interaction instruction, where the first interaction instruction carries an identifier of the second special effect resource;
and a display subunit configured to display the second special effect resource in the joint picture if the first interaction instruction also carries an identifier of a second object among the plurality of objects.
In a possible implementation, the display subunit is further configured to:
if the first interaction instruction carries a display position identifier, display the second special effect resource at a target display position indicated by the display position identifier in the joint picture.
In a possible implementation, the display subunit is further configured to:
if the display position identifier indicates a target part of an object, identify the target part of the second object in the joint picture based on the identifier of the second object;
and display the second special effect resource at the identified target part of the second object.
In a possible implementation, the receiving subunit is configured to:
receive the first interaction instruction through a first communication channel, where the first communication channel is a communication channel between anchor terminals.
In a possible implementation, the device further includes:
a receiving unit configured to receive a fourth interaction instruction through the first communication channel, where the fourth interaction instruction is used to instruct a terminal of a second audience object in the virtual space of the second object to display the second special effect resource;
and a providing unit configured to provide the fourth interaction instruction to the terminal of the second audience object through a second communication channel, where the second communication channel is a communication channel between an anchor terminal and an audience terminal.
According to a fifth aspect of embodiments of the present disclosure, there is provided a terminal comprising:
one or more processors;
one or more memories for storing instructions executable by the one or more processors;
wherein the one or more processors are configured to perform the interaction method in the virtual space in any of the possible implementations of the first aspect or to perform the interaction method in the virtual space in any of the possible implementations of the second aspect.
According to a sixth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium storing at least one instruction which, when executed by one or more processors of a terminal, enables the terminal to perform the interaction method in a virtual space in any of the possible implementations of the first aspect, or the interaction method in a virtual space in any of the possible implementations of the second aspect.
According to a seventh aspect of embodiments of the present disclosure, there is provided a computer program product including one or more instructions executable by one or more processors of a terminal, such that the terminal can perform the interaction method in a virtual space in any of the possible implementations of the first aspect, or the interaction method in a virtual space in any of the possible implementations of the second aspect.
The technical solutions provided by the embodiments of the present disclosure produce at least the following beneficial effects:
when an audience object in the first virtual space triggers a virtual resource, the first special effect resource of the virtual resource is displayed in the joint picture displayed by the terminal of the first object, and the second special effect resource of the virtual resource is displayed on the terminal of the second object, so that each object in the first virtual space of the first object can watch the first special effect resource and each object in the second virtual space of the second object can watch the second special effect resource. By displaying different parts of the special effect resources of the virtual resource on different terminals, interaction across multiple virtual spaces is achieved, the interaction efficiency between the virtual spaces in the joint virtual space is improved, and the human-computer interaction efficiency is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure and do not constitute an undue limitation on the disclosure.
FIG. 1 is a schematic diagram of an interactive system, according to an exemplary embodiment.
FIG. 2 is a flow chart illustrating a method of interaction in a virtual space, according to an example embodiment.
FIG. 3 is a flowchart illustrating a method of interaction in a virtual space, according to an example embodiment.
FIG. 4 is an interaction flow diagram illustrating a method of interaction in a virtual space, according to an example embodiment.
Fig. 5 is a schematic diagram showing a joint screen displayed by a first terminal according to an exemplary embodiment.
Fig. 6 is a schematic diagram showing a joint screen displayed by a second terminal according to an exemplary embodiment.
Fig. 7 is a block diagram illustrating a logical structure of an interactive apparatus in a virtual space according to an exemplary embodiment.
Fig. 8 is a logical block diagram illustrating an interactive apparatus in a virtual space according to an exemplary embodiment.
Fig. 9 is a block diagram illustrating a structure of a terminal according to an exemplary embodiment.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
The user information referred to in the present disclosure may be information authorized by the user or sufficiently authorized by each party.
The video stream photographed by the terminal related to the present disclosure may be authorized by the user or sufficiently authorized by each party.
Fig. 1 is a schematic diagram of an interactive system according to an exemplary embodiment. Referring to FIG. 1, the interactive system 100 includes a plurality of terminals 11, a first server 12, a second server 13 and a third server 14. The plurality of terminals 11 include, for example, the terminals 111, 1111, 112, 1121, 113 and 1131 in FIG. 1. A terminal 11 in the interactive system 100 includes at least one of a smartphone, a tablet computer, a smart speaker, a smart watch, a notebook computer, a palmtop computer, a portable game device, or a desktop computer, but the type of the terminal 11 is not limited thereto.
An application supporting the function of opening a virtual space runs in each terminal 11; the application includes any one of a live streaming application, a short video application, a social application, or a game application. For convenience of description, a user who broadcasts live in a virtual space is referred to as an anchor, and the terminal 11 used by the anchor is referred to as an anchor terminal. A user who watches a virtual space is referred to as an audience object, and the terminal 11 used by the audience object is referred to as an audience terminal.
The interactive system 100 includes one or more anchor terminals and one or more audience terminals. For ease of description, any anchor terminal in the interactive system 100 is referred to as a first terminal (e.g., the terminal 111 in FIG. 1), and the anchor using the first terminal is referred to as a first object. Any anchor terminal other than the first terminal is referred to as a second terminal (e.g., the terminal 112 in FIG. 1), and an anchor using a second terminal is referred to as a second object. An audience object in the virtual space of the first object is referred to as a first audience object, and the audience terminal used by the first audience object is referred to as a third terminal (e.g., the terminal 1111 in FIG. 1). An audience object in the virtual space of the second object is referred to as a second audience object, and the audience terminal used by the second audience object is referred to as a fourth terminal (e.g., the terminal 1121 in FIG. 1).
Each of the first server 12, the second server 13 and the third server 14 includes at least one of a single server, multiple servers, a cloud computing platform, or a virtualization center. Optionally, any of the servers is an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs (Content Delivery Networks), big data and artificial intelligence platforms. Each server provides background services for applications supporting the function of opening or watching a virtual space. Optionally, the first server 12, the second server 13 and the third server 14 undertake the primary computing tasks while the terminals 11 in the interactive system 100 undertake the secondary computing tasks; alternatively, the servers undertake the secondary computing tasks while the terminals 11 undertake the primary computing tasks; alternatively, the terminals 11, the first server 12, the second server 13 and the third server 14 compute cooperatively using a distributed computing architecture.
The first server 12 is configured to push the video stream of an anchor terminal to the audience terminals, thereby implementing a communication channel from the anchor terminal to the audience terminals through the first server 12; the first server 12 is, for example, a CDN. The second server 13 is configured to push the video streams of the other anchor terminals to each anchor terminal participating in joint interaction when anchors perform joint interaction, thereby implementing a communication channel between anchor terminals through the second server 13; the second server 13 is, for example, an MCU (Multipoint Conferencing Unit).
The virtual space scenarios according to the present disclosure are described below:
When the first object logs in to the first terminal and broadcasts live in a virtual space through the first terminal, the first terminal transmits the video stream of the first object to the first server 12 (commonly referred to as "pushing" the stream), and the first audience object can access the first server 12 through the third terminal to acquire the video stream of the first terminal (commonly referred to as "pulling" the stream). For example, the third terminal requests the video stream of the first object from the first server 12, and the first server 12 transmits the video stream of the first object to the third terminal.
In an exemplary joint virtual space scenario, it is assumed that the virtual spaces of a first object and a second object are joined to form a joint virtual space. When the first object and the second object perform joint interaction through the joint virtual space (commonly called "co-streaming interaction" or a "co-streaming battle"), the first terminal of the first object shoots a video stream of the first object (i.e., a video stream of the virtual space of the first object, referred to as the first video stream for short) and pushes it to the second server 13 for buffering, and the second terminal of the second object shoots a video stream of the second object (i.e., a video stream of the virtual space of the second object, referred to as the second video stream for short) and pushes it to the second server 13 for buffering. The second server 13 sends the first video stream to the second terminal and the second video stream to the first terminal, so that the first terminal and the second terminal each play the first video stream and the second video stream in their respective joint pictures. The first virtual space of the first object and the second virtual space of the second object are thereby displayed in the joint picture: the picture of the first virtual space is the playing picture of the first video stream, and the picture of the second virtual space is the playing picture of the second video stream.
The first server 12 pulls a first joint video stream from the first terminal and a second joint video stream from the second terminal, where the first joint video stream includes the first video stream and the second video stream as played by the first terminal, and the second joint video stream includes the first video stream and the second video stream as played by the second terminal. The first server 12 transmits the first joint video stream to the third terminal used by each first audience object in the first virtual space of the first object, and each third terminal displays the joint picture displayed by the first terminal by playing the first joint video stream. The first server 12 transmits the second joint video stream to the fourth terminals used by the second audience objects in the second virtual space, and each fourth terminal displays the joint picture displayed by the second terminal by playing the second joint video stream.
The third server 14 is configured to send, during joint interaction, an interaction instruction to the anchor terminal corresponding to any virtual space when an object in that virtual space triggers a virtual resource; based on the interaction instruction, that anchor terminal controls each anchor terminal participating in the joint interaction to display the special effect resources of the virtual resource. The virtual resources include virtual gifts, virtual props, and the like. The special effect forms of the special effect resources of a virtual resource include animated magic expressions, atmosphere special effects, skill special effects, bombing special effects, or other special effect forms; the embodiments of the present disclosure do not limit the special effect forms of the special effect resources. The third server 14 may be a separate server in the interactive system 100, or may be a computing unit in the first server 12 or the second server 13; the embodiments of the present disclosure do not limit the deployment manner of the third server in the interactive system 100.
In an exemplary joint virtual space scenario, during the joint interaction between the first object and the second object, an audience object in the first virtual space triggers a virtual resource at the third terminal, so that the third terminal requests the third server 14 to give the virtual resource to the first object. The third server 14 sends an interaction instruction to the first terminal of the first object based on the request of the third terminal, and the first terminal, based on the interaction instruction, controls the first terminal itself and the second terminal of the second object to display the special effect resources of the virtual resource in an interactive manner.
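The trigger flow in this scenario (third terminal to third server to first terminal) might be sketched as follows; the endpoint path and message shapes are assumptions for illustration, not an actual platform API:

```typescript
// On the third terminal: the first audience object gives the virtual resource
// to the first object via the third server.
async function giveVirtualResource(thirdServerUrl: string, resourceId: string, firstObjectId: string): Promise<void> {
  await fetch(`${thirdServerUrl}/give`, {          // endpoint path is an assumption
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ resourceId, to: firstObjectId }),
  });
}

// On the third server: turn the gift request into an interaction instruction
// and push it to the first object's terminal, which then coordinates the
// display across all anchor terminals in the joint interaction.
function onGiveRequest(
  req: { resourceId: string; to: string },
  pushToAnchorTerminal: (anchorId: string, instruction: object) => void,
): void {
  pushToAnchorTerminal(req.to, { kind: "interaction", resourceId: req.resourceId });
}
```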
It should be noted that the above example describes joint interaction between only two anchors; alternatively, more than two anchors may perform joint interaction. Still taking fig. 1 as an example, the terminal 113 is the anchor terminal of a third object that performs joint interaction with the first object and the second object, and the terminal 1131 is the audience terminal used by an audience object watching the virtual space of the third object. When a virtual resource is triggered in the first virtual space, the first terminal controls the first terminal, the second terminal and the third terminal based on the interaction instruction to display the special effect resources of the virtual resource in an interactive manner. The embodiments of the present disclosure do not specifically limit the number of anchors performing joint interaction. Likewise, the above example describes only one audience object in each anchor's virtual space; alternatively, there may be multiple audience objects in each anchor's virtual space, and the embodiments of the present disclosure do not specifically limit the number of audience objects in each anchor's virtual space.
Fig. 2 is a flowchart illustrating an interaction method in a virtual space according to an exemplary embodiment. Referring to FIG. 2, the interaction method is applied to the first terminal of the first object.
In step 201, the first terminal displays a joint picture of a plurality of objects, where the joint picture includes pictures of the virtual spaces of the plurality of objects.
In step 202, in a case where a first audience object in the virtual space of a first object among the plurality of objects triggers a virtual resource, the first terminal displays, in the joint picture, a first special effect resource included in the virtual resource.
The virtual resource includes the first special effect resource and a second special effect resource; the first special effect resource and the second special effect resource correspond to different objects, and the second special effect resource is used for being displayed on a terminal of a second object among the plurality of objects.
According to the method provided by the embodiments of the present disclosure, when an audience object in the first virtual space triggers a virtual resource, the first special effect resource of the virtual resource is displayed in the joint picture displayed by the terminal of the first object, and the second special effect resource of the virtual resource is displayed on the terminal of the second object, so that each object in the first virtual space of the first object can watch the first special effect resource and each object in the second virtual space of the second object can watch the second special effect resource. By displaying different parts of the special effect resources of the virtual resource on different terminals, interaction across multiple virtual spaces is achieved, the interaction efficiency between the virtual spaces in the joint virtual space is improved, and the human-computer interaction efficiency is improved.
In a possible implementation, the first special effect resource corresponds to a first time period, the second special effect resource corresponds to a second time period, and the end time of the first time period is the start time of the second time period; displaying, in the joint picture, the first special effect resource included in the virtual resource includes:
displaying the first special effect resource in the joint picture within the first time period.
In a possible implementation, the method further includes:
displaying the second special effect resource in the joint picture.
In a possible implementation, displaying, in the joint picture, the first special effect resource included in the virtual resource includes:
displaying the first special effect resource in the joint picture within the first time period;
and displaying the second special effect resource in the joint picture includes:
displaying the second special effect resource in the joint picture within the second time period.
In a possible implementation, displaying, in the joint picture, the first special effect resource included in the virtual resource includes:
playing, in the joint picture, a joint video stream of the plurality of objects, where the joint video stream includes first rendering data of the first special effect resource, and the first rendering data is used for displaying the first special effect resource;
and providing the joint video stream to a terminal of the first audience object.
In a possible implementation, displaying, in the joint picture, the first special effect resource included in the virtual resource includes:
receiving a second interaction instruction of the virtual resource, where the second interaction instruction carries a resource identifier of the first special effect resource;
and if the second interaction instruction also carries an identifier of the first object, displaying the first special effect resource in the joint picture.
In a possible implementation, displaying the first special effect resource in the joint picture includes:
if the second interaction instruction carries a display position identifier, displaying the first special effect resource at a target display position indicated by the display position identifier in the joint picture.
In a possible implementation, if the second interaction instruction carries a display position identifier, displaying the first special effect resource at the target display position indicated by the display position identifier in the joint picture includes:
if the display position identifier indicates a target part of an object, identifying the target part of the first object in the joint picture;
and displaying the first special effect resource at the identified target part of the first object.
In a possible implementation, the virtual resource includes a plurality of second special effect resources corresponding to different second objects.
In a possible implementation, the method further includes:
providing, through a first communication channel, a first interaction instruction to the terminal of each of at least one second object among the plurality of objects, where the first interaction instruction is used to instruct the terminal of one second object to display a second special effect resource, and the first communication channel is a communication channel between anchor terminals.
In a possible implementation, the method further includes:
providing a third interaction instruction to the terminal of the first audience object through a second communication channel, where the third interaction instruction is used to instruct the terminal of the first audience object to display the first special effect resource, and the second communication channel is a communication channel between an anchor terminal and an audience terminal;
and providing a fourth interaction instruction to the terminal of the second object through the first communication channel, where the fourth interaction instruction is used to instruct a terminal of a second audience object in the virtual space of the second object to display the second special effect resource.
Any combination of the above-mentioned optional solutions may be adopted to form an optional embodiment of the present disclosure, which is not described herein in detail.
Fig. 3 is a flowchart illustrating an interaction method in a virtual space according to an exemplary embodiment. As shown in FIG. 3, the method is applied to the second terminal of the second object and includes the following steps.
In step 301, the second terminal displays a joint picture of a plurality of objects, where the joint picture includes pictures of the virtual spaces of the plurality of objects.
In step 302, in a case where a first audience object in the virtual space of a first object among the plurality of objects triggers a virtual resource, the second terminal displays, in the joint picture, a second special effect resource included in the virtual resource.
The virtual resource includes a first special effect resource and the second special effect resource; the first special effect resource and the second special effect resource correspond to different objects, and the first special effect resource is used for being displayed on a terminal of the first object.
According to the method provided by the embodiments of the present disclosure, when an audience object in the first virtual space triggers a virtual resource, the first special effect resource of the virtual resource is displayed in the joint picture displayed by the terminal of the first object, and the second special effect resource of the virtual resource is displayed on the terminal of the second object, so that each object in the first virtual space of the first object can watch the first special effect resource and each object in the second virtual space of the second object can watch the second special effect resource. By displaying different parts of the special effect resources of the virtual resource on different terminals, interaction across multiple virtual spaces is achieved, the interaction efficiency between the virtual spaces in the joint virtual space is improved, and the human-computer interaction efficiency is improved.
In a possible implementation, the first special effect resource corresponds to a first time period, the second special effect resource corresponds to a second time period, and the end time of the first time period is the start time of the second time period; displaying, in the joint picture, the second special effect resource corresponding to the virtual resource includes:
displaying the second special effect resource in the joint picture within the second time period, after the first time period.
In a possible implementation, the method further includes:
displaying the first special effect resource in the joint picture.
In a possible implementation, displaying the first special effect resource in the joint picture includes:
displaying the first special effect resource in the joint picture within the first time period;
and displaying, in the joint picture, the second special effect resource included in the virtual resource includes:
displaying the second special effect resource in the joint picture within the second time period.
In a possible implementation, displaying, in the joint picture, the second special effect resource included in the virtual resource includes:
playing, in the joint picture, a joint video stream of the plurality of objects, where the joint video stream includes second rendering data of the second special effect resource, and the second rendering data is used for displaying the second special effect resource;
and transmitting the joint video stream to a terminal of a second audience object in the virtual space of a second object among the plurality of objects.
In a possible implementation, displaying, in the joint picture, the second special effect resource corresponding to the virtual resource includes:
receiving a first interaction instruction, where the first interaction instruction carries an identifier of the second special effect resource;
and if the first interaction instruction also carries an identifier of a second object among the plurality of objects, displaying the second special effect resource in the joint picture.
In a possible implementation, displaying the second special effect resource in the joint picture includes:
if the first interaction instruction carries a display position identifier, displaying the second special effect resource at a target display position indicated by the display position identifier in the joint picture.
In a possible implementation, if the first interaction instruction carries a display position identifier, displaying the second special effect resource at the target display position indicated by the display position identifier in the joint picture includes:
if the display position identifier indicates a target part of an object, identifying the target part of the second object in the joint picture based on the identifier of the second object;
and displaying the second special effect resource at the identified target part of the second object.
In a possible implementation, receiving the first interaction instruction includes:
receiving the first interaction instruction through a first communication channel, where the first communication channel is a communication channel between anchor terminals.
In a possible implementation, the method further includes:
receiving a fourth interaction instruction through the first communication channel, where the fourth interaction instruction is used to instruct a terminal of a second audience object in the virtual space of the second object to display the second special effect resource;
and providing the fourth interaction instruction to the terminal of the second audience object through a second communication channel, where the second communication channel is a communication channel between an anchor terminal and an audience terminal.
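A sketch of the relay described in this implementation, in which the second terminal receives the fourth instruction over the first channel and forwards it over the second channel to its audience terminals; all names are illustrative assumptions:

```typescript
interface Channel {
  send(to: string, payload: object): void;
}

// Relay the fourth instruction: it arrives over the anchor-to-anchor channel
// and is forwarded over the anchor-to-audience channel to every second
// audience terminal in this anchor's virtual space.
function onFourthInstruction(
  instruction: { kind: "fourth"; effect: string },
  audienceChannel: Channel,
  secondAudienceTerminalIds: string[],
): void {
  for (const terminalId of secondAudienceTerminalIds) {
    audienceChannel.send(terminalId, instruction); // second communication channel
  }
}
```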
Any combination of the above-mentioned optional solutions may be adopted to form an optional embodiment of the present disclosure, which is not described herein in detail.
When the first object and the second object perform joint interaction and an object in the first virtual space triggers a virtual resource, the third server is triggered to send an interaction instruction to the first terminal of the first object, so that the first terminal, based on the interaction instruction, controls each terminal participating in the joint interaction to display the special effect resources of the virtual resource in an interactive manner. To further illustrate this process, FIG. 4 shows an interaction flowchart of an interaction method in a virtual space according to an exemplary embodiment. The method is applied to the interaction process between the terminals and the servers.
In step 401, the first terminal, the second terminal, the third terminal and the fourth terminal each display a joint picture of a plurality of objects, where the joint picture includes pictures of the virtual spaces of the plurality of objects.
For convenience of description, any one of the plurality of anchors is referred to as a first object, and each anchor other than the first object is collectively referred to as a second object; there is at least one second object among the plurality of objects. The first terminal is the terminal of the first object, and the second terminal is the terminal of a second object; it is understood that since there is at least one second object, there is at least one second terminal. The virtual space of the first object is referred to as the first virtual space, the audience objects in the first virtual space are collectively referred to as first audience objects, and the third terminal is the terminal of a first audience object. The virtual space of the second object is referred to as the second virtual space, the audience objects in the second virtual space are collectively referred to as second audience objects, and the fourth terminal is the terminal of a second audience object.
The joint picture is the human-computer interaction interface presented when multiple anchors perform joint interaction through their virtual spaces. The picture of each anchor's virtual space within the joint picture is used to display the video stream shot by the corresponding anchor terminal. Two or more anchors participate in the joint interaction, and there are one or more audience objects in each anchor's virtual space.
For convenience of description, this step 401 is described below by taking as an example that the anchors participating in the joint interaction include a first object and a second object, with a first audience object in the first virtual space and a second audience object in the second virtual space:
During the joint interaction of the plurality of objects, the picture of the first virtual space is the video picture of the video stream shot by the first terminal (namely, the first video stream of the first object), and the picture of the second virtual space is the video picture of the video stream shot by the second terminal (namely, the second video stream of the second object).
When the plurality of objects jointly interact, the first terminal shoots video in real time to obtain the first video stream of the first object and pushes it to the second server in real time; the second terminal shoots video in real time to obtain the second video stream of the second object and pushes it to the second server in real time. Accordingly, the second server acquires the first video stream and the second video stream in real time, pushes the second video stream to the first terminal in real time, and pushes the first video stream to the second terminal in real time. The first terminal and the second terminal thus acquire the second video stream and the first video stream, respectively, from the second server. Each terminal plays the first video stream in its joint picture to display the picture of the first virtual space, and plays the second video stream in its joint picture to display the picture of the second virtual space.
The first server acquires a first joint video stream of the first object and the second object from the first terminal, wherein the first joint video stream is the video stream forming the joint picture of the first terminal and includes the first video stream and the second video stream. The first server sends the first joint video stream to the third terminal of the first audience object in the first virtual space, and the third terminal displays the joint picture displayed by the first terminal by playing the first joint video stream.
The first server acquires a second joint video stream of the first object and the second object from the second terminal, wherein the second joint video stream is the video stream forming the joint picture of the second terminal and includes the first video stream and the second video stream. The first server sends the second joint video stream to the fourth terminal of the second audience object in the second virtual space, and the fourth terminal displays the joint picture displayed by the second terminal by playing the second joint video stream.
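For illustration only, the stream routing of step 401 can be sketched as follows in TypeScript; the class and method names (SecondServer, FirstServer, push, pull, relayJointStream) are invented for this sketch and are not part of the described embodiments.

```typescript
// Hypothetical sketch of the two-server stream topology: single-anchor
// streams travel between anchors via the second server, while merged
// joint streams travel from each anchor to its audience via the first server.

interface VideoStream {
  id: string;             // e.g. "first-video-stream"
  sourceTerminal: string; // e.g. "first-terminal"
}

class SecondServer {
  private streams = new Map<string, VideoStream>();

  // an anchor terminal pushes its own stream in real time
  push(stream: VideoStream): void {
    this.streams.set(stream.id, stream);
  }

  // the peer anchor terminal pulls the other anchor's stream in real time
  pull(id: string): VideoStream | undefined {
    return this.streams.get(id);
  }
}

class FirstServer {
  // an anchor terminal pushes its joint (merged) stream; the server fans
  // it out to every audience terminal in that anchor's virtual space
  relayJointStream(jointStreamId: string, audienceTerminals: string[]): string[] {
    return audienceTerminals.map((t) => `${jointStreamId} -> ${t}`);
  }
}
```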
In step 402, in response to the plurality of objects performing joint interaction, the third server opens a target authority for at least one object among the first object, the second object, the first audience objects, and the second audience objects, where the target authority is the interaction authority of the virtual resource across virtual spaces.
The interaction of the virtual resources across the virtual space refers to a manner in which objects in a plurality of virtual spaces participating in joint interaction interact through the virtual resources. The virtual resources include virtual gifts, virtual props or other types of virtual resources, and the types and the number of the virtual resources are not limited in the embodiments of the present disclosure.
When detecting that any anchor is performing joint interaction with at least one other anchor, the third server, in response to the joint interaction, opens the target authority for at least one of the objects participating in the joint interaction. The objects participating in the joint interaction include each anchor participating in the joint interaction and each audience object within each anchor's virtual space. In the embodiments of the present disclosure, taking the joint interaction between the first object and the second object as an example, the third server, in response to the joint interaction between the first object and the second object, opens the target authority for at least one object among the first object, the second object, the first audience objects, and the second audience objects.
For any object participating in the joint interaction, the third server opens the target authority for that object in one of the following ways:
The third server sends a permission opening instruction to the terminal of the object, wherein the permission opening instruction indicates that the target authority is opened for the object. After receiving the permission opening instruction, the terminal displays the virtual resource in the joint picture, so that the object can interact with the other objects participating in the joint interaction by triggering the displayed virtual resource.
Alternatively, the virtual resource is not displayed in the joint picture; in this case, the third server opens the target authority for the object by recording authority opening information and storing that information. The third server then sends the authority opening information to the terminal of the object, and after receiving it, the terminal displays the authority opening information at an associated position of the joint picture to prompt that the target authority has been opened for the object.
The message forms of the permission opening instruction and the authority opening information include long-connection messages. In one possible implementation manner, when the number of objects participating in the joint interaction is greater than a first threshold, the third server adopts a peak-shaving strategy to send the permission opening instruction or the authority opening information to the terminals of the objects participating in the joint interaction, so as to reduce the real-time working pressure on the third server.
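As an illustrative sketch of such a peak-shaving strategy, assuming an invented batch size, inter-batch delay, and send callback:

```typescript
// When the participant count exceeds the first threshold, permission-opening
// messages are sent in staggered batches instead of all at once, spreading
// the server's real-time load. Threshold, batch size, and delay are
// assumptions of this sketch.

const FIRST_THRESHOLD = 1000;

async function sendPermissionOpening(
  terminals: string[],
  send: (terminal: string) => Promise<void>,
): Promise<void> {
  if (terminals.length <= FIRST_THRESHOLD) {
    await Promise.all(terminals.map((t) => send(t))); // small rooms: send immediately
    return;
  }
  const BATCH = 200;   // assumed batch size
  const DELAY_MS = 50; // assumed inter-batch delay
  for (let i = 0; i < terminals.length; i += BATCH) {
    await Promise.all(terminals.slice(i, i + BATCH).map((t) => send(t)));
    await new Promise((resolve) => setTimeout(resolve, DELAY_MS)); // spread the peak
  }
}
```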
In another possible implementation, the joint interaction corresponds to at least one interaction scene, and the target authority is opened for at least one object participating in the joint interaction based on the interaction scene of the joint interaction. The at least one interaction scene is, for example, a co-streaming (Lian-mai) scene, a PK scene, and the like. For example, when multiple anchors perform joint interaction, each time the third server detects that the joint interaction belongs to any one of the at least one interaction scene, it opens the target authority for at least one object participating in the joint interaction.
In another possible implementation manner, during the joint interaction of multiple anchors, if any object is detected to join the joint interaction, the third server opens the target authority for that object, where the object is an anchor or an audience object. Likewise, if any object is detected to leave the joint interaction, the third server closes the target authority for that object. If the object is an anchor, leaving the joint interaction means that it no longer jointly interacts with the other anchors; if the object is an audience object, leaving the joint interaction means leaving the virtual space in which it is located.
In step 403, in a case where an object in the first virtual space triggers the virtual resource, the third server sends a target interaction instruction to the first terminal, where the target interaction instruction indicates that the plurality of objects interact through the virtual resource.
The virtual resource includes a first special effect resource and a second special effect resource, which correspond to different objects. For example, the first special effect resource corresponds to the anchor in whose virtual space the virtual resource is triggered (e.g., the first object), and the second special effect resource corresponds to the anchors, among the plurality of objects participating in the joint interaction, in whose virtual spaces the virtual resource is not triggered (e.g., the at least one second object). Optionally, the first special effect resource and the second special effect resource respectively reflect the action effect of the virtual resource on different objects. Taking a virtual resource of a bomber as an example, the first object launches the bomber toward the second object to interact with it; the first special effect resource is the launch special effect of launching the bomber, and the second special effect resource is the falling special effect of the bomber and the explosion special effect after the fall is completed. Taking a virtual resource of 'cooling you down' as an example, the first object cools the second object down to interact with it; the first special effect resource is a snowfall-releasing special effect, and the second special effect resource is a freezing-atmosphere special effect.
In one possible implementation, the first special effect resource corresponds to a first time period, the second special effect resource corresponds to a second time period, and the end time of the first time period is the start time of the second time period. Together, the first special effect resource and the second special effect resource constitute the complete special effect resource of the virtual resource.
In one possible implementation, the virtual resource includes a plurality of second special effect resources corresponding to different second objects. Optionally, the plurality of second special effect resources respectively reflect the action effect of the virtual resource on the different second objects. If the virtual resource includes a plurality of second special effect resources, the plurality of second special effect resources all correspond to the second time period. Still taking the bomber virtual resource as an example, the plurality of second special effect resources are a plurality of explosion special effects.
The target interaction instruction includes an interaction instruction, for the virtual resource, of each anchor participating in the joint interaction; the interaction instruction of each anchor indicates how the terminal of the corresponding anchor displays the virtual resource. For convenience of description, the interaction instruction of the first object is referred to as the second interaction instruction, and the interaction instruction of the second object is referred to as the first interaction instruction. The second interaction instruction instructs the first terminal of the first object to display the first special effect resource, and the first interaction instruction instructs the second terminal of the second object to display the second special effect resource.
In one possible implementation, the second interaction instruction carries an identifier of the first object, a resource identifier of the first special effect resource, and the first time period, so as to instruct the terminal of the first object to display the first special effect resource of the virtual resource in the first time period. In another possible implementation manner, the second interaction instruction further carries a first rendering mode, which is the mode in which the first terminal renders the first special effect resource. The first rendering mode includes a local rendering mode or a confluence rendering mode.
In one possible implementation manner, the first interaction instruction carries an identifier of the second object, a resource identifier of the virtual resource, a resource identifier of the second special effect resource, and the second time period, so as to instruct the second terminal of the second object to display the second special effect resource of the virtual resource in the second time period. In another possible implementation manner, the first interaction instruction further carries a second rendering mode, which is the mode in which the second terminal renders the second special effect resource. The second rendering mode includes a local rendering mode or a confluence rendering mode.
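For illustration, the fields described above for the first and second interaction instructions can be sketched as the following TypeScript types; all field names are assumptions made for this sketch, not identifiers from the disclosure.

```typescript
// Hypothetical shapes of the two anchor-side interaction instructions.

type RenderingMode = "local" | "confluence";

interface TimePeriod {
  start: number; // ms timestamp
  end: number;   // the first period's end equals the second period's start
}

// Second interaction instruction: tells the first terminal how to show
// the first special effect resource.
interface SecondInteractionInstruction {
  objectId: string;              // identifier of the first object
  effectResourceId: string;      // resource identifier of the first special effect resource
  period: TimePeriod;            // the first time period
  renderingMode?: RenderingMode; // optional first rendering mode
  displayPositionId?: string;    // optional display position identifier
}

// First interaction instruction: tells the second terminal how to show
// the second special effect resource.
interface FirstInteractionInstruction {
  objectId: string;              // identifier of the second object
  virtualResourceId: string;     // resource identifier of the virtual resource
  effectResourceId: string;      // resource identifier of the second special effect resource
  period: TimePeriod;            // the second time period
  renderingMode?: RenderingMode; // optional second rendering mode
  displayPositionId?: string;
}
```

Note how the first period's end time doubling as the second period's start time lets the receiving terminals chain the two special effect resources without extra synchronization messages.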
In another possible implementation, the target interaction instruction further includes an interaction instruction for each audience object in each virtual space participating in the joint interaction. For convenience of description, the interaction instruction of the first audience object is referred to as the third interaction instruction, and the interaction instruction of the second audience object is referred to as the fourth interaction instruction. The third interaction instruction instructs the third terminal of the first audience object to display the first special effect resource, and the fourth interaction instruction instructs the fourth terminal of the second audience object in the virtual space of the second object to display the second special effect resource.
In one possible implementation, the third interaction instruction includes an identifier of the first audience object, a resource identifier of the virtual resource, a resource identifier of the first special effect resource, and the first time period, to indicate that the first special effect resource of the virtual resource is displayed in the first time period. In another possible implementation manner, the third interaction instruction further includes a third rendering mode, which is the mode in which the third terminal renders the first special effect resource; the third rendering mode includes a local rendering mode. In addition, within the target interaction instruction, the third interaction instruction corresponds to the second interaction instruction, so as to indicate that the first audience object is an audience object in the first virtual space of the first object.
In one possible implementation, the fourth interaction instruction includes an identifier of the second audience object, a resource identifier of the virtual resource, a resource identifier of the second special effect resource, and the second time period, to indicate that the second special effect resource of the virtual resource is displayed in the second time period. In another possible implementation manner, the fourth interaction instruction further includes a fourth rendering mode, which is the mode in which the fourth terminal renders the second special effect resource; the fourth rendering mode includes a local rendering mode. In addition, within the target interaction instruction, the fourth interaction instruction corresponds to the first interaction instruction, so as to indicate that the second audience object is an audience object in the second virtual space of the second object.
In another possible implementation manner, each interaction instruction in the target interaction instruction further carries a display position identifier, which indicates the display position of the corresponding first special effect resource or second special effect resource.
It can be understood that each interaction instruction in the target interaction instruction specifies a virtual resource display task for the terminal of one object participating in the joint interaction, so that a customized virtual resource display scheme is provided for each object participating in the joint interaction.
It should be noted that the interaction instructions of the audience objects in the target interaction instruction are optional. For example, if the first rendering mode in the second interaction instruction of the target interaction instruction is confluence rendering, the target interaction instruction does not include the third interaction instruction; if the second rendering mode in the first interaction instruction of the target interaction instruction is confluence rendering, the target interaction instruction does not include the fourth interaction instruction.
The case in which an object in the first virtual space triggers the virtual resource includes either of the following case 1 or case 2.
Case 1: an audience object within the first virtual space triggers the virtual resource. For case 1, step 403 includes the following steps 4031 to 4033.
In step 4031, the third terminal sends a virtual resource gifting request to the third server in response to a trigger operation of the first audience object on the virtual resource.
The virtual resource gifting request indicates that the first audience object gifts the virtual resource to the first object, and includes identifiers of the first audience object, the first object, and the virtual resource.
The third terminal displays the joint picture of the first object and the second object; the first audience object performs a trigger operation on the virtual resource displayed in the joint picture, and the third terminal detects the trigger operation and thereby learns that the first audience object has triggered the virtual resource. In response, the third terminal sends the virtual resource gifting request to the third server.
In step 4032, the third server receives the resource gifting request and generates the target interaction instruction based on the resource gifting request.
After receiving the resource gifting request, the third server parses its content, learns from the parsed content that the first audience object gifts the virtual resource to the first object, and accordingly issues the virtual resource to the first virtual space of the first object.
In order to enable the first object in the first virtual space to interact with the second object in the second virtual space through the issued virtual resource, the third server arranges, for each object participating in the joint interaction, an interaction instruction for the virtual resource based on the first special effect resource and the second special effect resource included in the virtual resource, and generates the target interaction instruction from the arranged interaction instructions of the objects.
It should be noted that, when the multiple anchors participating in the joint interaction include multiple second objects, the types of the second special effect resources arranged for the respective second objects may be the same or different.
In step 4033, the third server sends the target interaction instruction to the first terminal.
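A minimal sketch of steps 4032 and 4033, assuming invented type and field names and illustrative three-second effect durations:

```typescript
// Hypothetical server-side assembly of the target interaction instruction
// from a gifting request; nothing here is prescribed by the disclosure.

interface GiftingRequest {
  audienceObjectId: string;  // first audience object
  anchorObjectId: string;    // first object (recipient)
  virtualResourceId: string;
}

interface InteractionInstruction {
  objectId: string;
  virtualResourceId: string;
  effectResourceId: string;
  period: { start: number; end: number };
}

interface TargetInteractionInstruction {
  second: InteractionInstruction;  // for the first terminal
  first: InteractionInstruction[]; // one per second object
}

function buildTargetInstruction(
  req: GiftingRequest,
  secondObjectIds: string[],
  now: number,
): TargetInteractionInstruction {
  const firstPeriod = { start: now, end: now + 3_000 }; // assumed 3 s first effect
  const secondPeriod = { start: firstPeriod.end, end: firstPeriod.end + 3_000 }; // starts where the first ends
  return {
    second: {
      objectId: req.anchorObjectId,
      virtualResourceId: req.virtualResourceId,
      effectResourceId: `${req.virtualResourceId}:first-effect`,
      period: firstPeriod,
    },
    first: secondObjectIds.map((id) => ({
      objectId: id,
      virtualResourceId: req.virtualResourceId,
      effectResourceId: `${req.virtualResourceId}:second-effect`, // may differ per second object
      period: secondPeriod,
    })),
  };
}
```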
Case 2: the first object within the first virtual space triggers the virtual resource. Based on case 2, step 403 is described as follows:
the third server sets a trigger condition for the virtual resource, where the trigger condition includes at least one of: at least one interaction scene, at least one target time, or interaction success. When the joint interaction of the first object and the second object is detected to match any one of the at least one interaction scene, the trigger condition is met and the first object triggers the virtual resource.
Alternatively, during the joint interaction between the first object and the second object, if the current time is any one of the at least one target time, the trigger condition is met and the first object triggers the virtual resource.
Alternatively, when the first object and the second object perform joint interaction, if the interaction of the first object succeeds and the interaction of the second object fails, the first object triggers the virtual resource.
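For illustration, the three trigger conditions of case 2 can be sketched as follows; the configuration shape and the one-second tolerance on target times are assumptions of this sketch.

```typescript
// Hypothetical evaluation of case-2 trigger conditions on the third server.

interface TriggerConfig {
  interactionScenes: string[]; // e.g. ["co-streaming", "pk"]
  targetTimes: number[];       // ms timestamps
  rewardOnWin: boolean;        // reward when the first object's interaction succeeds
}

function firstObjectTriggers(
  cfg: TriggerConfig,
  currentScene: string,
  now: number,
  firstWon: boolean | null, // null while the interaction outcome is undecided
): boolean {
  if (cfg.interactionScenes.includes(currentScene)) return true;          // scene condition
  if (cfg.targetTimes.some((t) => Math.abs(t - now) < 1_000)) return true; // target-time condition (1 s tolerance assumed)
  if (cfg.rewardOnWin && firstWon === true) return true;                  // interaction-success condition
  return false;
}
```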
In response to the first object triggering the virtual resource, the third server issues the virtual resource to the first object as a reward. In order to enable the first object in the first virtual space to interact with the second object in the second virtual space through the issued virtual resource, the third server arranges, for each object participating in the joint interaction, an interaction instruction for displaying the virtual resource based on the first special effect resource and the second special effect resource included in the virtual resource, generates the target interaction instruction from the arranged interaction instructions, and sends the generated target interaction instruction to the first terminal of the first object.
In step 404, the first terminal receives the target interaction instruction and, based on it, displays the first special effect resource included in the virtual resource in the joint picture.
After receiving the target interaction instruction, the first terminal parses it and extracts the second interaction instruction of the first object and the first interaction instruction of the second object. Of course, when the target interaction instruction includes the third interaction instruction of the first audience object and the fourth interaction instruction of the second audience object, the first terminal can also parse the third interaction instruction and the fourth interaction instruction from the target interaction instruction.
After parsing the second interaction instruction of the first object from the target interaction instruction, the first terminal displays the first special effect resource included in the virtual resource in the joint picture based on the second interaction instruction.
For example, the first terminal displays the first special effect resource in the joint picture based on the resource identifier of the first special effect resource in the second interaction instruction.
If the second interaction instruction also carries the identifier of the first object, the second interaction instruction is an interaction instruction to be executed by the first terminal, and the first terminal displays the first special effect resource in the joint picture.
If the second interaction instruction carries a display position identifier, the first terminal displays the first special effect resource at the target display position indicated by the display position identifier in the joint picture. In one possible implementation manner, if the display position identifier indicates a target part of the object, the first terminal identifies the target part of the first object in the joint picture and displays the first special effect resource at the identified target part of the first object.
Taking the target part of the first object being the hand as an example, the first terminal identifies the hand of the first object displayed in the joint picture and displays the first special effect resource on the identified hand. For another example, if the display position identifier indicates the picture of the first virtual space, the first terminal identifies the picture of the first virtual space displayed in the joint picture and displays the first special effect resource on it.
If the second interaction instruction also carries the first time period, the first terminal displays the first special effect resource in the joint picture in the first time period. For example, before the first time period, the first terminal takes the second interaction instruction as a first rendering task and adds it to a rendering queue, so that the first terminal displays the first special effect resource included in the virtual resource in the joint picture by executing the first rendering task. In addition, after the first terminal obtains the first rendering task from the rendering queue, if the current time has not reached the start time of the first time period, the first terminal defers the first rendering task and executes it only when the current time reaches the start time, thereby displaying the first special effect resource of the virtual resource in the joint picture by executing the first rendering task in the first time period.
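A sketch of this deferred rendering-queue behavior, with invented names (RenderQueue, RenderTask):

```typescript
// Hypothetical queue: a task dequeued before its period starts is deferred
// until the period's start time, so the effect appears exactly on schedule.

interface RenderTask {
  periodStart: number; // ms timestamp of the time period's start
  run: () => void;     // displays the special effect resource
}

class RenderQueue {
  private tasks: RenderTask[] = [];

  enqueue(task: RenderTask): void {
    this.tasks.push(task);
  }

  drain(now: () => number = Date.now): void {
    for (const task of this.tasks.splice(0)) {
      const wait = task.periodStart - now();
      if (wait > 0) {
        setTimeout(task.run, wait); // start time not reached yet: defer
      } else {
        task.run();                 // period already started: run immediately
      }
    }
  }
}
```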
If the first rendering mode in the first rendering task is confluence rendering, the first terminal generates first rendering data of the first special effect resource based on the resource identifier of the virtual resource and the resource identifier of the first special effect resource in the first rendering task, where the first rendering data is used for displaying the first special effect resource. The first terminal and the second terminal each merge the video streams of the objects to obtain a joint video stream and play it in their joint pictures. To distinguish the joint video streams on the first terminal side and the second terminal side, the joint video stream merged by the first terminal is called the first joint video stream, and the joint video stream merged by the second terminal is called the second joint video stream. The first terminal adds the first rendering data to the first joint video stream and displays the first special effect resource in the joint picture by playing the first joint video stream including the first rendering data. For example, the first terminal adds the first rendering data to the first joint video stream during the first time period, so that the stream includes the first rendering data; when the first terminal plays the first joint video stream in the joint picture during the first time period, the first rendering data is played at the same time, so that the first special effect resource is displayed in the joint picture.
If the first rendering mode in the first rendering task is local rendering, the first terminal renders a first presentation animation of the first special effect resource based on the resource identifier of the virtual resource and the resource identifier of the first special effect resource in the first rendering task. While playing the first joint video stream in the joint picture, the first terminal overlays the first presentation animation on the upper layer of the played first joint video stream, so that the first special effect resource included in the virtual resource is displayed on the joint picture through the first presentation animation. For example, in the first time period, the first terminal overlays the first presentation animation on the upper layer of the first joint video stream played in the joint picture, so that the first terminal can display the first special effect resource in the joint picture during that period.
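For illustration, the two rendering paths of the first terminal can be sketched as follows; the stream and UI abstractions are assumptions of this sketch.

```typescript
// Hypothetical dispatch between the two rendering modes: confluence rendering
// mixes the rendering data into the joint video stream itself, while local
// rendering overlays an animation layer above the played stream.

type RenderMode = "confluence" | "local";

interface JointStream {
  frames: string[];
  mixedRenderData: string[]; // rendering data carried inside the stream
}

function showFirstEffect(
  mode: RenderMode,
  stream: JointStream,
  renderData: string,
  overlayUi: (animation: string) => void,
): void {
  if (mode === "confluence") {
    // the data travels inside the joint video stream, so every terminal
    // that plays the stream sees the effect without extra instructions
    stream.mixedRenderData.push(renderData);
  } else {
    // local rendering: this terminal draws its own animation layer on top
    overlayUi(renderData);
  }
}
```

The trade-off sketched here: confluence rendering burns the effect into the stream so audience terminals need no extra instruction, while local rendering keeps the stream clean but requires each terminal to receive and execute its own display task.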
In one possible implementation, the first special effect resource is displayed in the picture of the first virtual space within the joint picture, or at a position other than the picture of the second virtual space within the joint picture. The embodiments of the present disclosure do not limit the display position of the first special effect resource in the joint picture.
In one possible implementation manner, the first special effect resource is a movement track of the virtual resource in the joint picture; the starting point of the movement track is the picture of the first virtual space or any position other than the picture of the first virtual space, and the end point of the movement track is the picture of the second virtual space or any position other than the picture of the second virtual space. For example, FIG. 5 shows a schematic diagram of the joint picture displayed by the first terminal according to an exemplary embodiment; the joint picture 500 shown in FIG. 5 includes a picture 501 of the first virtual space, a picture 502 of the second virtual space, and a message area 503, in which each audience object in the first virtual space can post messages. Taking the virtual resource of the bomber 504 as an example, the first special effect resource of the bomber 504 is the launch special effect 505 of the bomber 504, and the movement track of the bomber (i.e., the launch special effect 505) travels from the right boundary of the joint picture to the picture 502 of the second virtual space.
If the virtual resource is triggered as in case 1 above, the process shown in step 404 is the process in which the first terminal displays the first special effect resource included in the virtual resource in the joint picture in the case where the first audience object in the virtual space of the first object triggers the virtual resource. If the virtual resource is triggered as in case 2 above, the process shown in step 404 is the process in which the first terminal displays the first special effect resource included in the virtual resource in the joint picture in the case where the first object in the first virtual space triggers the virtual resource.
In step 405, the first terminal provides a first interaction instruction to a second terminal based on the target interaction instruction.
After parsing the first interaction instruction from the target interaction instruction, the first terminal sends the first interaction instruction to the second terminal before the first time period ends, so that the second terminal can execute the first interaction instruction in the second time period that begins when the first time period ends.
In one possible implementation manner, the first terminal provides the first interaction instruction to the second terminal of the at least one second object through a first communication channel. A communication channel is arranged between the anchor terminals participating in the joint interaction, and this channel between the anchor terminals is called the first communication channel. Optionally, the first communication channel is a communication channel established between the anchor terminals via the second server.
For example, before the first time period ends, the first terminal sends the first interaction instruction to the first communication channel based on the first communication mode of the first communication channel, and the first interaction instruction is transmitted to the second terminal through the first communication channel. The first communication mode is a communication mode supported by the first communication channel and includes any one of long-connection communication, video stream communication, audio stream communication, or other communication modes; the embodiments of the present disclosure do not limit the first communication mode.
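A sketch of the timing constraint of step 405, assuming an invented Channel abstraction:

```typescript
// Hypothetical forwarding of the first interaction instruction: it must
// leave the first terminal before the first time period ends, or the
// second terminal cannot display the second effect on schedule.

interface Channel {
  send(payload: unknown): Promise<void>; // e.g. over a long connection
}

async function forwardFirstInstruction(
  channel: Channel,
  instruction: { period: { end: number } }, // period.end marks the second period's start
  now: number,
): Promise<void> {
  if (now >= instruction.period.end) {
    throw new Error("first time period already over; instruction sent too late");
  }
  await channel.send(instruction); // relayed to the second terminal via the second server
}
```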
It should be noted that, if the first terminal also parses the fourth interaction instruction from the target interaction instruction, the first terminal learns, from the correspondence between the fourth interaction instruction and the first interaction instruction in the target interaction instruction, that the second audience object is an audience object in the second virtual space of the second object, and therefore also sends the fourth interaction instruction to the second terminal of the second object, for example through the first communication channel.
In step 406, the second terminal receives the first interaction instruction and, based on it, displays the second special effect resource included in the virtual resource in the joint picture.
After receiving the first interaction instruction sent by the first terminal, the second terminal parses it and extracts at least one of the identifier of the second object, the resource identifier of the virtual resource, the resource identifier of the second special effect resource, the second time period, and the second rendering mode.
After parsing the first interaction instruction, the second terminal displays the second special effect resource included in the virtual resource in the joint picture based on the first interaction instruction. For example, the second terminal displays the second special effect resource in the joint picture based on the resource identifier of the second special effect resource in the first interaction instruction.
If the first interaction instruction also carries the identifier of the second object, the first interaction instruction is an interaction instruction to be executed by the second terminal, and the second terminal displays the second special effect resource in the joint picture.
If the first interaction instruction carries a display position identifier, the second terminal displays the second special effect resource at the target display position indicated by the display position identifier in the joint picture. In one possible implementation manner, if the display position identifier indicates a target part of the object, the second terminal identifies the target part of the second object in the joint picture and displays the second special effect resource at the identified target part. Taking the target part of the second object being the face as an example, the second terminal identifies the face of the second object displayed in the joint picture and displays the second special effect resource on the identified face.
If the first interaction instruction also carries the second time period, the second terminal displays the second special effect resource in the joint picture in the second time period. For example, before the second time period, the second terminal takes the first interaction instruction as a second rendering task and adds it to a rendering queue, so that the second terminal displays the second special effect resource included in the virtual resource in the joint picture by executing the second rendering task. In addition, after the second terminal obtains the second rendering task from the rendering queue, if the current time has not reached the start time of the second time period, the second terminal defers the second rendering task and executes it only when the current time reaches the start time, thereby displaying the second special effect resource of the virtual resource in the joint picture by executing the second rendering task in the second time period.
If the second rendering mode in the second rendering task is confluence rendering, the second terminal generates second rendering data of the second special effect resource based on the resource identifier of the virtual resource and the resource identifier of the second special effect resource in the second rendering task, where the second rendering data is used for displaying the second special effect resource. The second terminal merges the video streams of the objects to obtain the second joint video stream and plays it in the joint picture. The second terminal adds the second rendering data to the second joint video stream being played, and displays the second special effect resource in the joint picture by playing the second joint video stream including the second rendering data. For example, the second terminal adds the second rendering data to the second joint video stream during the second time period, so that the stream includes the second rendering data; when the second terminal plays the second joint video stream in the joint picture during the second time period, the second rendering data is played at the same time, so that the second special effect resource is displayed in the joint picture.
If the second rendering mode in the second rendering task is local rendering, the second terminal renders a second presentation animation of the second special effect resource based on the resource identifier of the virtual resource and the resource identifier of the second special effect resource in the second rendering task. While playing the second joint video stream in the joint picture, the second terminal overlays the second presentation animation on the upper layer of the played second joint video stream, so that the second special effect resource included in the virtual resource is displayed on the joint picture through the second presentation animation. For example, in the second time period, the second terminal overlays the second presentation animation on the upper layer of the second joint video stream played in the joint picture, so that the second terminal can display the second special effect resource in the joint picture during that period.
In one possible implementation, the second special effect resource is displayed in the picture of the second virtual space within the joint picture, or at a position other than the picture of the second virtual space within the joint picture. The embodiments of the present disclosure do not limit the display position of the second special effect resource in the joint picture.
For example, FIG. 6 is a schematic diagram of the joint picture displayed by the second terminal according to an exemplary embodiment; the joint picture 600 shown in FIG. 6 includes a picture 601 of the first virtual space, a picture 602 of the second virtual space, and a message area 603. Still taking the bomber virtual resource as an example, where the second special effect resource of the virtual resource is the falling special effect and explosion special effect 604, the second terminal displays the falling special effect and explosion special effect 604 of the bomber in the picture 602 of the second virtual space during the second time period.
If the virtual resource is triggered as in case 1 above, the process shown in step 406 is the process in which the second terminal displays the second special effect resource included in the virtual resource in the joint picture in the case where the first audience object in the virtual space of the first object triggers the virtual resource. If the virtual resource is triggered as in case 2 above, the process shown in step 406 is the process in which the second terminal displays the second special effect resource included in the virtual resource in the joint picture in the case where the first object in the first virtual space triggers the virtual resource.
In step 407, the first terminal provides a third interaction instruction to the third terminal based on the target interaction instruction.
If the third interaction instruction is parsed from the target interaction instruction, the first terminal determines, based on the correspondence between the third interaction instruction and the second interaction instruction in the target interaction instruction, that the first audience object indicated by the third interaction instruction is an audience object in the first virtual space. Based on the identifier of the first audience object in the third interaction instruction, the first terminal sends the third interaction instruction to the third terminal of the first audience object before the first time period begins, so that the third terminal can execute the third interaction instruction in the first time period.
In one possible implementation, the first terminal provides the third interaction instruction to the third terminal of the first audience object through a second communication channel. For any anchor participating in the joint interaction, a communication channel is set between the terminal of that anchor and the terminal of each audience object in the anchor's virtual space; the communication channel between the anchor end and the audience end is called the second communication channel. Optionally, the second communication channel is a communication channel established between each anchor end and the audience end via the first server.
For example, before the first time period, the first terminal sends the third interaction instruction to the second communication channel based on the second communication mode of the second communication channel, and the third interaction instruction is transmitted to the third terminal through the second communication channel. The second communication mode is a communication mode supported by the second communication channel and includes any one of long-connection communication, video stream communication, audio stream communication, or other communication modes; the embodiments of the present disclosure do not limit it here.
In step 408, the third terminal receives the third interaction instruction, and displays the first special effect resource included in the virtual resource in the joint picture based on the third interaction instruction.
After receiving the third interaction instruction sent by the first terminal, the third terminal parses it and extracts at least one of the identifier of the first audience object, the resource identifier of the virtual resource, the resource identifier of the first special effect resource, the first time period, and the third rendering mode.
After parsing the identifier of the first audience object, the resource identifier of the virtual resource, and the resource identifier of the first special effect resource from the third interaction instruction, the third terminal determines, based on the identifier of the first audience object, that the third interaction instruction is a display task to be executed by the local terminal. The third terminal then displays the first special effect resource included in the virtual resource in the joint picture based on the resource identifier of the virtual resource and the resource identifier of the first special effect resource.
If the third interaction instruction includes the first time period, the third terminal displays the first special effect resource in the joint picture in the first time period.
For example, before the first time period, the third terminal takes the third interaction instruction as a third rendering task and adds it to a rendering queue, so that the third terminal displays the first special effect resource in the joint picture by executing the third rendering task. In addition, after the third terminal obtains the third rendering task from the rendering queue, if the current time has not reached the start time of the first time period, the third terminal defers the third rendering task and executes it only when the current time reaches the start time, thereby displaying the first special effect resource in the joint picture by executing the third rendering task in the first time period, aligned with the display time at which the first terminal displays the first special effect resource.
If the third rendering mode in the third rendering task is local rendering, the third terminal renders the first presentation animation of the first special effect resource based on the resource identifier of the virtual resource and the resource identifier of the first special effect resource in the third rendering task. In this case, the first joint video stream of the first terminal does not include the first rendering data; the first terminal pushes the first joint video stream to the first server in real time, and the first server pushes it to the third terminal.
While playing the first joint video stream pushed by the first server, the third terminal displays the joint picture displayed by the first terminal and overlays the first presentation animation on the upper layer of the joint picture, so that the first special effect resource is displayed on the joint picture through the first presentation animation. For example, in the first time period, the third terminal overlays the first presentation animation on the upper layer of the joint picture, so that the third terminal can display the first special effect resource on the joint picture during that period.
It can be understood that the first special effect resource displayed by the third terminal is the same as the first special effect resource displayed by the first terminal, and the display durations are the same.
It should be noted that steps 407-408 are optional steps. For example, if the first rendering mode in the second interaction instruction is confluence rendering, the first terminal does not execute steps 407-408. In that case, the first joint video stream of the first terminal includes the first rendering data, and the first terminal provides the first joint video stream to the third terminal. For example, the first terminal merges the video streams of the plurality of objects to obtain the first joint video stream, which then includes the first rendering data; the first terminal pushes the first joint video stream to the first server, and the first server pushes it to the third terminal. When the third terminal plays the first joint video stream pushed by the first server, the first rendering data is played at the same time, so that the first special effect resource is displayed in the joint picture.
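For illustration, the optionality of steps 407-408 (and, symmetrically, of steps 409-410) can be sketched as a single branch; the names are invented:

```typescript
// Hypothetical decision on the anchor terminal: under confluence rendering
// the effect already rides inside the joint video stream the audience plays,
// so no audience-side interaction instruction needs to be forwarded.

function maybeForwardToAudience(
  renderingMode: "confluence" | "local",
  forwardInstruction: () => void,
): void {
  if (renderingMode === "confluence") {
    return; // rendering data is already mixed into the joint video stream
  }
  forwardInstruction(); // local rendering: the audience terminal draws its own overlay
}
```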
In step 409, the second terminal provides a fourth interaction instruction to the fourth terminal.
If a fourth interaction instruction sent by the first terminal is received, the second terminal determines, based on the correspondence between the fourth interaction instruction and the first interaction instruction, that the second audience object indicated by the fourth interaction instruction is an audience object in the second virtual space. Based on the identifier of the second audience object in the fourth interaction instruction, the second terminal sends the fourth interaction instruction to the fourth terminal of the second audience object before the second time period starts, so that the fourth terminal can execute the fourth interaction instruction in the second time period.
In one possible implementation, the second terminal provides the fourth interaction instruction to the fourth terminal of the second audience object through the second communication channel, where this second communication channel is the communication channel between the second terminal of the second object and the fourth terminal of the second audience object.
For example, before the second period of time, the second terminal sends the fourth interaction instruction to the second communication channel based on the second communication mode of the second communication channel, and the fourth interaction instruction is transmitted to the fourth terminal through the second communication channel.
In step 410, the fourth terminal receives the fourth interaction instruction and, based on it, displays the second special effect resource included in the virtual resource in the joint picture.
After receiving the fourth interaction instruction sent by the second terminal, the fourth terminal parses it and extracts at least one of the identifier of the second audience object, the resource identifier of the virtual resource, the resource identifier of the second special effect resource, the second time period, and the fourth rendering mode.
After parsing the identifier of the second audience object, the resource identifier of the virtual resource, and the resource identifier of the second special effect resource from the fourth interaction instruction, the fourth terminal determines, based on the identifier of the second audience object, that the fourth interaction instruction is a display task to be executed by the local terminal. The fourth terminal then displays the second special effect resource included in the virtual resource in the joint picture based on the resource identifier of the virtual resource and the resource identifier of the second special effect resource.
If the fourth interaction instruction carries the second time period, the fourth terminal displays the second special effect resource included in the virtual resource in the joint picture in the second time period.
For example, before the second time period, the fourth terminal takes the fourth interaction instruction as a fourth rendering task and adds it to a rendering queue, so that the fourth terminal displays the second special effect resource included in the virtual resource in the joint picture by executing the fourth rendering task. In addition, after the fourth terminal obtains the fourth rendering task from the rendering queue, if the current time has not reached the start time of the second time period, the fourth terminal defers the fourth rendering task and executes it only when the current time reaches the start time, thereby displaying the second special effect resource of the virtual resource in the joint picture by executing the fourth rendering task in the second time period, aligned with the display time at which the second terminal displays the second special effect resource.
If the fourth rendering mode in the fourth rendering task is local rendering, the fourth terminal renders the second presentation animation of the second special effect resource based on the resource identifier of the virtual resource and the resource identifier of the second special effect resource in the fourth rendering task. In this case, the second joint video stream of the second terminal does not include the second rendering data; the second terminal pushes the second joint video stream to the first server in real time, and the first server pushes it to the fourth terminal.
While playing the second joint video stream pushed by the first server, the fourth terminal displays the joint picture displayed by the second terminal and overlays the second presentation animation on the upper layer of the joint picture, so that the second special effect resource is displayed on the joint picture through the second presentation animation. For example, in the second time period, the fourth terminal overlays the second presentation animation on the upper layer of the joint picture, so that the fourth terminal can display the second special effect resource in the joint picture during that period.
It can be understood that the second special effect resource displayed by the fourth terminal is the same as the second special effect resource displayed by the second terminal, and the display durations are the same.
It should be noted that steps 409-410 are optional steps. For example, if the second rendering mode in the first interaction instruction is confluence rendering, the second terminal does not execute steps 409-410. In that case, the second joint video stream of the second terminal includes the second rendering data, and the second terminal provides the second joint video stream including the second rendering data to the fourth terminal. For example, the second terminal merges the video streams of the plurality of objects to obtain the second joint video stream, which then includes the second rendering data; the second terminal pushes the second joint video stream to the first server, and the first server pushes it to the fourth terminal. When the fourth terminal plays the second joint video stream pushed by the first server, the second rendering data is played at the same time, so that the second special effect resource is displayed in the joint picture.
According to the method provided by the embodiments of the present disclosure, in a case where an audience object in the first virtual space triggers the virtual resource, the first special effect resource of the virtual resource is displayed in the joint picture displayed by the terminal of the first object, and the second special effect resource of the virtual resource is displayed on the terminal of the second object. Each object in the first virtual space of the first object can thus watch the first special effect resource, and each object in the second virtual space of the second object can watch the second special effect resource. Interaction among multiple virtual spaces is therefore realized through the partial special effect resources of the virtual resource displayed by different terminals, which improves the interaction efficiency among the virtual spaces in the joint virtual space and improves man-machine interaction efficiency.
The embodiment shown in FIG. 4 is described by taking as an example the first terminal displaying the first special effect resource of the virtual resource and the second terminal displaying the second special effect resource of the virtual resource. In another possible implementation manner, the first terminal and the second terminal can each display both the first special effect resource and the second special effect resource included in the virtual resource in their respective joint pictures. For example, for either of the first terminal and the second terminal, that terminal displays the first special effect resource included in the virtual resource in the joint picture in the first time period, and displays the second special effect resource included in the virtual resource in the joint picture in the second time period, so that the terminal displays the complete special effect resource of the virtual resource in the joint picture over the total period from the first time period to the second time period.
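A sketch of this variant, scheduling both special effect resources on one terminal over the total period; the helper shape is an assumption of this sketch:

```typescript
// Hypothetical scheduling: one terminal shows the first special effect
// resource during the first period and the second during the second period,
// covering the virtual resource's complete effect over the total period.

interface ScheduledEffect {
  start: number;     // ms timestamp; the first period's end is the second's start
  end: number;
  show: () => void;  // displays the corresponding special effect resource
}

function showCompleteEffect(first: ScheduledEffect, second: ScheduledEffect, now: number): void {
  setTimeout(first.show, Math.max(0, first.start - now));
  setTimeout(second.show, Math.max(0, second.start - now)); // chains seamlessly after the first
}
```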
In another possible implementation, the first terminal receives the first interaction instruction of the virtual resource. For example, the first terminal receives the target interaction instruction and obtains the first interaction instruction from it. The first terminal then displays the second special effect resource in the joint picture based on the resource identifier of the second special effect resource in the first interaction instruction.
If the first interaction instruction also carries the identifier of the second object, the first terminal displays the second special effect resource in the joint picture. If the first interaction instruction carries a display position identifier, the first terminal displays the second special effect resource at the target display position indicated by the display position identifier in the joint picture. In one possible implementation manner, if the display position identifier indicates a target part of the object, the first terminal identifies the target part of the second object in the joint picture and displays the second special effect resource at the identified target part. Taking the target part of the second object being the face as an example, the first terminal identifies the face of the second object displayed in the joint picture and displays the second special effect resource on the identified face.
In order to enable the first terminal and the second terminal to display both the first special effect resource and the second special effect resource included in the virtual resource in their respective joint pictures, the first terminal adopts the confluence rendering mode and adds the first rendering data of the first special effect resource to the first video stream. After the first video stream including the first rendering data is pushed to the second terminal, the second terminal merges the video streams of the plurality of objects to obtain the second joint video stream, which then includes the first rendering data; accordingly, when the second terminal plays the second joint video stream in the joint picture, the first rendering data in it is played at the same time, so that the first special effect resource is displayed in the joint picture. Correspondingly, the fourth terminal can also display the first special effect resource in the joint picture. Likewise, the second terminal adopts the confluence rendering mode and adds the second rendering data of the second special effect resource to the second video stream. After the second video stream including the second rendering data is pushed to the first terminal, the first terminal merges the video streams of the plurality of objects to obtain the first joint video stream, which then includes the second rendering data; accordingly, when the first terminal plays the first joint video stream in the joint picture, the second rendering data in it is played at the same time, so that the second special effect resource is displayed in the joint picture. Correspondingly, the third terminal can also display the second special effect resource in the joint picture.
Fig. 7 is a block diagram illustrating a logical structure of an interactive apparatus in a virtual space according to an exemplary embodiment. The device 700 comprises a display unit 701 and a presentation unit 702.
A display unit 701 configured to display a joint picture of a plurality of objects, the joint picture comprising pictures of the virtual spaces of the plurality of objects;
a presentation unit 702 configured to display, in the joint picture, a second special effect resource included in a virtual resource in a case where a first audience object in the virtual space of a first object of the plurality of objects triggers the virtual resource;
wherein the virtual resource comprises a first special effect resource and the second special effect resource, the first special effect resource and the second special effect resource correspond to different objects, and the first special effect resource is to be displayed on the terminal of the first object.
In a possible implementation, the first special effect resource corresponds to a first time period, the second special effect resource corresponds to a second time period, and the end time of the first time period is the start time of the second time period; the presentation unit 702 is configured to:
display the first special effect resource in the joint picture during the first time period.
In a possible implementation, the presentation unit 702 is further configured to:
display the second special effect resource in the joint picture.
In a possible implementation, the presentation unit 702 is further configured to:
display the first special effect resource in the joint picture during the first time period;
and display the second special effect resource in the joint picture during the second time period.
In a possible implementation, the presentation unit 702 is further configured to:
play, in the joint picture, a joint video stream of the plurality of objects, the joint video stream comprising first rendering data of the first special effect resource, the first rendering data being used for displaying the first special effect resource;
and provide the joint video stream to the terminal of the first audience object.
In one possible implementation, the presentation unit 702 includes:
a receiving subunit configured to receive a first interaction instruction for the virtual resource, the first interaction instruction carrying a resource identifier of the second special effect resource;
and a presentation subunit configured to display the second special effect resource in the joint picture if the first interaction instruction also carries the identifier of the second object.
In one possible implementation, the presentation subunit is further configured to:
if the first interaction instruction carries a display position identifier, display the second special effect resource at the target display position indicated by the display position identifier in the joint picture.
In one possible implementation, the presentation subunit is further configured to:
if the display position identifier indicates a target part of an object, identify the target part of the second object in the joint picture;
and display the second special effect resource at the identified target part of the second object.
In one possible implementation, the virtual resource includes a plurality of second special effect resources corresponding to different second objects.
In one possible implementation, the apparatus 700 further includes:
a providing unit configured to provide a first interaction instruction to the terminal of at least one second object among the plurality of objects through a first communication channel, the first interaction instruction instructing the terminal of a second object to display a second special effect resource, and the first communication channel being a communication channel between anchor ends.
In a possible implementation, the providing unit is further configured to:
provide a third interaction instruction to the terminal of the first audience object through a second communication channel, the third interaction instruction instructing the terminal of the first audience object to display the first special effect resource, and the second communication channel being a communication channel between an anchor end and an audience end;
and provide a fourth interaction instruction to the terminal of the second object through the first communication channel, the fourth interaction instruction instructing the terminal of a second audience object in the virtual space of the second object to display the second special effect resource.
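A small sketch of this two-channel routing follows; the Channel class, the channel and terminal names, and the instruction payloads are assumptions made purely for illustration, not the signaling protocol of the disclosure.

class Channel:
    # One logical communication channel; a real system would back this
    # with a signaling service rather than a local list.
    def __init__(self, name: str):
        self.name = name
        self.sent = []

    def send(self, target: str, instruction: str) -> None:
        self.sent.append((target, instruction))
        print(f"[{self.name}] -> {target}: {instruction}")

anchor_channel = Channel("anchor-to-anchor")   # the "first communication channel"
viewer_channel = Channel("anchor-to-viewer")   # the "second communication channel"

# The first terminal tells another anchor's terminal to show the second
# effect, and tells its own audience to show the first effect.
anchor_channel.send("second_terminal", "first_interaction_instruction")
viewer_channel.send("first_audience_terminal", "third_interaction_instruction")

# The fourth instruction travels anchor-to-anchor first; the second
# terminal then relays it to its own audience over its viewer channel.
anchor_channel.send("second_terminal", "fourth_interaction_instruction")
viewer_channel.send("second_audience_terminal", "fourth_interaction_instruction")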
With respect to the apparatus 700 in the above embodiment, which is configured as the first terminal of the first object, the specific manner in which the respective units perform their operations has been described in detail in the embodiments of the interaction method in the virtual space and is not repeated here.
Fig. 8 is a block diagram illustrating the logical structure of an interactive apparatus in a virtual space according to an exemplary embodiment. The device 800 comprises a display unit 801 and a presentation unit 802.
A display unit 801 configured to display a joint picture of a plurality of objects, the joint picture comprising pictures of the virtual spaces of the plurality of objects;
a presentation unit 802 configured to display, in the joint picture, the second special effect resource included in a virtual resource in a case where a first audience object in the virtual space of a first object of the plurality of objects triggers the virtual resource;
wherein the virtual resource comprises a first special effect resource and the second special effect resource, the first special effect resource and the second special effect resource correspond to different objects, and the first special effect resource is to be displayed on the terminal of the first object.
In a possible implementation, the first special effect resource corresponds to a first time period, the second special effect resource corresponds to a second time period, and the end time of the first time period is the start time of the second time period; the presentation unit 802 is configured to:
display the second special effect resource in the joint picture during the second time period, after the first time period.
In a possible implementation, the presentation unit 802 is further configured to:
display the first special effect resource in the joint picture.
In a possible implementation, the presentation unit 802 is further configured to:
display the first special effect resource in the joint picture during the first time period;
and display the second special effect resource in the joint picture during the second time period.
In one possible implementation, the presentation unit 802 is configured to:
play, in the joint picture, a joint video stream of the plurality of objects, the joint video stream comprising second rendering data of the second special effect resource, the second rendering data being used for displaying the second special effect resource;
and transmit the joint video stream to the terminal of a second audience object in the virtual space of a second object among the plurality of objects.
In one possible implementation, the presentation unit 802 includes:
a receiving subunit configured to receive a first interaction instruction, the first interaction instruction carrying a resource identifier of the second special effect resource;
and a presentation subunit configured to display the second special effect resource in the joint picture if the first interaction instruction also carries the identifier of a second object among the plurality of objects.
In one possible implementation, the presentation subunit is further configured to:
if the first interaction instruction carries a display position identifier, display the second special effect resource at the target display position indicated by the display position identifier in the joint picture.
In one possible implementation, the presentation subunit is further configured to:
if the display position identifier indicates a target part of an object, identify the target part of the second object in the joint picture based on the second identifier;
and display the second special effect resource at the identified target part of the second object.
In one possible implementation, the receiving subunit is configured to:
receive the first interaction instruction through a first communication channel, the first communication channel being a communication channel between anchor ends.
In one possible implementation, the apparatus 800 further includes:
a receiving unit configured to receive a fourth interaction instruction through the first communication channel, the fourth interaction instruction instructing the terminal of a second audience object in the virtual space of the second object to display the second special effect resource;
and a providing unit configured to provide the fourth interaction instruction to the terminal of the second audience object through a second communication channel, the second communication channel being a communication channel between an anchor end and an audience end.
With respect to the apparatus 800 in the above embodiment, which is configured as the second terminal of the second object, the specific manner in which the respective units perform their operations has been described in detail in the embodiments of the interaction method in the virtual space and is not repeated here.
Fig. 9 is a block diagram illustrating the structure of a terminal according to an exemplary embodiment. The terminal 900 shown in fig. 9 may be a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 900 may also be referred to by other names, such as user device, portable terminal, laptop terminal, or desktop terminal.
In general, the terminal 900 includes: a processor 901 and a memory 902.
The processor 901 may include one or more processing cores, such as a 4-core or 8-core processor. The processor 901 may be implemented in at least one of the following hardware forms: DSP (Digital Signal Processor), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). The processor 901 may also include a main processor and a coprocessor; the main processor, also called a CPU (Central Processing Unit), processes data in the awake state, while the coprocessor is a low-power processor that processes data in the standby state. In some embodiments, the processor 901 may integrate a GPU (Graphics Processing Unit) responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 901 may also include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
The memory 902 may include one or more computer-readable storage media, which may be non-transitory. The memory 902 may also include high-speed random access memory as well as non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 902 stores at least one instruction to be executed by the processor 901 to implement the interaction method in a virtual space provided by the various embodiments of the present disclosure.
In some embodiments, the terminal 900 may optionally further include a peripheral interface 903 and at least one peripheral device. The processor 901, the memory 902, and the peripheral interface 903 may be connected by a bus or signal lines, and each peripheral device may be connected to the peripheral interface 903 via a bus, a signal line, or a circuit board. Specifically, the peripheral devices include at least one of a radio frequency circuit 904, a touch display 905, a camera assembly 906, an audio circuit 907, and a power supply 908.
The peripheral interface 903 may be used to connect at least one I/O (Input/Output) related peripheral device to the processor 901 and the memory 902. In some embodiments, the processor 901, the memory 902, and the peripheral interface 903 are integrated on the same chip or circuit board; in some other embodiments, any one or two of them may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 904 is configured to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 904 communicates with communication networks and other communication devices via electromagnetic signals, converting electrical signals into electromagnetic signals for transmission and converting received electromagnetic signals into electrical signals. Optionally, the radio frequency circuit 904 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 904 may communicate with other terminals via at least one wireless communication protocol, including but not limited to metropolitan area networks, the various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 904 may also include NFC (Near Field Communication) related circuitry, which is not limited by the present disclosure.
The display 905 is used to display a UI (User Interface), which may include graphics, text, icons, video, and any combination thereof. When the display 905 is a touch display, it can also capture touch signals on or above its surface; such a touch signal may be input to the processor 901 as a control signal for processing, and the display 905 may then also provide virtual buttons and/or a virtual keyboard, also called soft buttons and/or a soft keyboard. In some embodiments, there may be one display 905, forming the front panel of the terminal 900; in other embodiments, there may be at least two displays 905, disposed on different surfaces of the terminal 900 or in a folded design; in still other embodiments, the display 905 may be a flexible display disposed on a curved or folded surface of the terminal 900. The display 905 may even be arranged in an irregular, non-rectangular shape, i.e., a shaped screen, and may be made of materials such as LCD (Liquid Crystal Display) or OLED (Organic Light-Emitting Diode).
The camera assembly 906 is used to capture images or video. Optionally, the camera assembly 906 includes a front camera and a rear camera; typically, the front camera is disposed on the front panel of the terminal and the rear camera on its rear surface. In some embodiments, there are at least two rear cameras, each being one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused for a background blurring function, and the main camera and the wide-angle camera can be fused for panoramic shooting, VR (Virtual Reality) shooting, or other fused shooting functions. In some embodiments, the camera assembly 906 may also include a flash, which can be a single-color-temperature flash or a dual-color-temperature flash; the latter combines a warm-light flash with a cold-light flash and can be used for light compensation at different color temperatures.
The audio circuit 907 may include a microphone and a speaker. The microphone collects sound waves from the user and the environment, converts them into electrical signals, and inputs them to the processor 901 for processing or to the radio frequency circuit 904 for voice communication. For stereo acquisition or noise reduction, there may be several microphones disposed at different parts of the terminal 900; the microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker converts electrical signals from the processor 901 or the radio frequency circuit 904 into sound waves and may be a conventional thin-film speaker or a piezoelectric ceramic speaker; a piezoelectric ceramic speaker can convert electrical signals not only into sound waves audible to humans but also into sound waves inaudible to humans for ranging and similar purposes. In some embodiments, the audio circuit 907 may also include a headphone jack.
The power supply 908 supplies power to the various components of the terminal 900 and may be alternating current, direct current, a disposable battery, or a rechargeable battery. When the power supply 908 includes a rechargeable battery, the battery may support wired or wireless charging as well as fast-charge technology.
In some embodiments, the terminal 900 may further include one or more sensors 910, including but not limited to an acceleration sensor 911, a gyro sensor 912, a pressure sensor 913, an optical sensor 914, and a proximity sensor 915.
The acceleration sensor 911 can detect the magnitude of the acceleration on the three coordinate axes of the coordinate system established with the terminal 900; for example, it can detect the components of the gravitational acceleration on the three coordinate axes. The processor 901 can control the touch display 905 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal acquired by the acceleration sensor 911. The acceleration sensor 911 can also be used to collect motion data for games or users.
The gyro sensor 912 can detect the body direction and rotation angle of the terminal 900 and, in cooperation with the acceleration sensor 911, collect the user's 3D motion on the terminal 900. According to the data collected by the gyro sensor 912, the processor 901 can implement motion sensing (for example, changing the UI according to a tilting operation by the user), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 913 may be disposed on a side frame of the terminal 900 and/or under the touch display 905. When the pressure sensor 913 is disposed on a side frame, it can detect the user's grip signal on the terminal 900, and the processor 901 performs left/right-hand recognition or shortcut operations according to the collected grip signal. When the pressure sensor 913 is disposed under the touch display 905, the processor 901 controls the operability controls on the UI according to the user's pressure operations on the touch display 905; the operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The optical sensor 914 is used to collect the ambient light intensity. In one embodiment, the processor 901 can control the display brightness of the touch display 905 according to the ambient light intensity collected by the optical sensor 914: when the ambient light intensity is high, the display brightness is turned up; when it is low, the display brightness is turned down. In another embodiment, the processor 901 can also dynamically adjust the shooting parameters of the camera assembly 906 according to the ambient light intensity collected by the optical sensor 914.
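A toy sketch of such a brightness policy follows; the lux thresholds, the output range, and the linear mapping are illustrative assumptions rather than values taken from the disclosure.

def display_brightness(ambient_lux: float,
                       low: float = 50.0, high: float = 1000.0) -> float:
    # Map the measured ambient light to a brightness level in [0.2, 1.0]:
    # strong light raises the brightness, weak light lowers it.
    if ambient_lux <= low:
        return 0.2
    if ambient_lux >= high:
        return 1.0
    return 0.2 + 0.8 * (ambient_lux - low) / (high - low)

assert display_brightness(10) == 0.2     # dim room: brightness turned down
assert display_brightness(2000) == 1.0   # bright daylight: brightness turned up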
The proximity sensor 915, also called a distance sensor, is typically disposed on the front panel of the terminal 900 and is used to collect the distance between the user and the front of the terminal 900. In one embodiment, when the proximity sensor 915 detects that this distance gradually decreases, the processor 901 controls the touch display 905 to switch from the screen-on state to the screen-off state; when the proximity sensor 915 detects that the distance gradually increases, the processor 901 controls the touch display 905 to switch from the screen-off state back to the screen-on state.
Those skilled in the art will appreciate that the structure shown in fig. 9 is not limiting: the terminal may include more or fewer components than shown, combine certain components, or adopt a different arrangement of components.
In an exemplary embodiment, a computer-readable storage medium, such as a memory, is also provided, comprising at least one instruction executable by a processor in a terminal to perform the interaction method in a virtual space provided in the above embodiments. Optionally, the computer-readable storage medium may be a non-transitory computer-readable storage medium, such as a ROM (Read-Only Memory), a RAM (Random Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, or an optical data storage device.
In an exemplary embodiment, a computer program product is also provided, including one or more instructions executable by a processor of a terminal to perform the method of interaction in a virtual space provided by the above embodiments.
It should be noted that the information (including but not limited to user equipment information, user personal information, etc.), data (including but not limited to data for analysis, stored data, displayed data, etc.), and signals referred to in this disclosure are all authorized by the users or fully authorized by all parties, and the collection, use, and processing of the relevant data must comply with the relevant laws, regulations, and standards of the relevant countries and regions. For example, the video streams referred to in this disclosure and the accounts (i.e., objects) used by users to log into the application are all acquired with sufficient authorization.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. The present disclosure is intended to cover any variations, uses, or adaptations that follow its general principles, including such departures from the present disclosure as come within known or customary practice in the art. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (14)

1. A method of interaction in a virtual space, the method comprising:
displaying a joint picture of a plurality of objects, the joint picture comprising pictures of virtual spaces of the plurality of objects;
receiving a target interaction instruction sent by a third server in a case where a first audience object in the virtual space of a first object among the plurality of objects triggers a virtual resource; parsing, from the target interaction instruction, a second interaction instruction for the first object and a first interaction instruction for a second object among the plurality of objects, wherein the second interaction instruction carries a first display position identifier and a first time period; the first interaction instruction is used for instructing a second terminal of one second object to display a second special effect resource and carries a second display position identifier, a second time period, a resource identifier of the second special effect resource, and an identifier of the second object; the end time of the first time period is the start time of the second time period; a first special effect resource and a plurality of second special effect resources together constitute the complete special effect resource of the virtual resource; and the plurality of second special effect resources correspond to different second objects, have different display effects, and are of a different type for each second object;
wherein the third server is configured to, upon detecting that any anchor performs joint interaction with at least one other anchor and that the number of objects participating in the joint interaction is greater than a first threshold, send, based on the interaction scene of the joint interaction and using a peak-shaving strategy, a permission opening instruction to the terminal of each object participating in the joint interaction, the permission opening instruction being used for instructing the terminal to open a target permission, the objects participating in the joint interaction comprising each anchor participating in the joint interaction and each audience object in the virtual space of each anchor, and the target permission being the permission to interact with the virtual resource across virtual spaces;
based on the second interaction instruction, displaying the first special effect resource at the first target display position indicated by the first display position identifier in the joint picture during the first time period; and
before the first time period ends, sending the first interaction instruction to the second terminal of at least one second object, the second terminal being configured to receive the first interaction instruction and, during the second time period, display the second special effect resource at the second target display position indicated by the second display position identifier in the joint picture.
2. The method of interaction in a virtual space of claim 1, wherein displaying the first special effect resource at the first target display position indicated by the first display position identifier in the joint picture comprises:
playing, in the joint picture, a joint video stream of the plurality of objects, the joint video stream comprising first rendering data of the first special effect resource, the first rendering data being used for displaying the first special effect resource; and
providing the joint video stream to a terminal of the first audience object.
3. The method of interaction in a virtual space of claim 1, wherein displaying the second special effect resource comprises:
if the display position identifier indicates a target part of an object, identifying the target part of the second object in the joint picture; and
displaying the second special effect resource at the identified target part of the second object.
4. The method of interaction in a virtual space of claim 1, further comprising:
providing a first interaction instruction to the terminal of at least one second object among the plurality of objects, respectively, through a first communication channel, the first communication channel being a communication channel between anchor ends.
5. The method of interaction in a virtual space of claim 4, further comprising:
providing a third interaction instruction to the terminal of the first audience object through a second communication channel, the third interaction instruction being used for instructing the terminal of the first audience object to display the first special effect resource, and the second communication channel being a communication channel between an anchor end and an audience end; and
providing a fourth interaction instruction to the terminal of the second object through the first communication channel, the fourth interaction instruction being used for instructing the terminal of a second audience object in the virtual space of the second object to display the second special effect resource.
6. A method of interaction in a virtual space, the method comprising:
displaying a joint picture of a plurality of objects, the joint picture comprising pictures of virtual spaces of the plurality of objects;
when a first audience object in the virtual space of a first object among the plurality of objects triggers a virtual resource, displaying, during a second time period and according to a first interaction instruction sent by a first terminal of the first object, a second special effect resource included in the virtual resource at a second target display position indicated by a second display position identifier in the joint picture, wherein the first interaction instruction is used for instructing a second terminal of one second object to display the second special effect resource and carries the second display position identifier, the second time period, a resource identifier of the second special effect resource, and an identifier of the second object;
wherein the first terminal is configured to: receive a target interaction instruction sent by a third server in the case where the first audience object in the virtual space of the first object among the plurality of objects triggers the virtual resource; parse, from the target interaction instruction, a second interaction instruction for the first object and the first interaction instruction for a second object among the plurality of objects, the second interaction instruction carrying a first display position identifier and a first time period, the end time of the first time period being the start time of the second time period, a first special effect resource and a plurality of second special effect resources together constituting the complete special effect resource of the virtual resource, and the plurality of second special effect resources corresponding to different second objects, having different display effects, and being of a different type for each second object; display, based on the second interaction instruction, the first special effect resource at a first target display position indicated by the first display position identifier in the joint picture during the first time period; and, before the first time period ends, send the first interaction instruction to the second terminal of at least one second object, respectively;
wherein the third server is configured to, upon detecting that any anchor performs joint interaction with at least one other anchor and that the number of objects participating in the joint interaction is greater than a first threshold, send, based on the interaction scene of the joint interaction and using a peak-shaving strategy, a permission opening instruction to the terminal of each object participating in the joint interaction, the permission opening instruction being used for instructing the terminal to open a target permission, the objects participating in the joint interaction comprising each anchor participating in the joint interaction and each audience object in the virtual space of each anchor, and the target permission being the permission to interact with the virtual resource across virtual spaces.
7. The method according to claim 6, wherein displaying the second special effect resource included in the virtual resource at the second target display position indicated by the second display position identifier in the joint picture comprises:
playing, in the joint picture, a joint video stream of the plurality of objects, the joint video stream comprising second rendering data of the second special effect resource, the second rendering data being used for displaying the second special effect resource; and
transmitting the joint video stream to the terminal of a second audience object in the virtual space of a second object among the plurality of objects.
8. The method of interaction in a virtual space of claim 6, wherein displaying the second special effect resource included in the virtual resource comprises:
if the display position identifier indicates a target part of an object, identifying the target part of the second object in the joint picture based on a second identifier; and
displaying the second special effect resource at the identified target part of the second object.
9. The method of interaction in a virtual space of claim 6, further comprising:
receiving the first interaction instruction through a first communication channel, the first communication channel being a communication channel between anchor ends.
10. The method of interaction in a virtual space of claim 6, further comprising:
receiving a fourth interaction instruction through the first communication channel, the fourth interaction instruction being used for instructing the terminal of a second audience object in the virtual space of the second object to display the second special effect resource; and
providing the fourth interaction instruction to the terminal of the second audience object through a second communication channel, the second communication channel being a communication channel between an anchor end and an audience end.
11. An interactive apparatus in a virtual space, the apparatus comprising:
a display unit configured to display a joint picture of a plurality of objects, the joint picture comprising pictures of the virtual spaces of the plurality of objects;
the display unit is further configured to receive a target interaction instruction sent by a third server in a case where a first audience object in the virtual space of a first object among the plurality of objects triggers a virtual resource; and parse, from the target interaction instruction, a second interaction instruction for the first object and a first interaction instruction for a second object among the plurality of objects, wherein the second interaction instruction carries a first display position identifier and a first time period; the first interaction instruction is used for instructing a second terminal of one second object to display a second special effect resource and carries a second display position identifier, a second time period, a resource identifier of the second special effect resource, and an identifier of the second object; the end time of the first time period is the start time of the second time period; a first special effect resource and a plurality of second special effect resources together constitute the complete special effect resource of the virtual resource; and the plurality of second special effect resources correspond to different second objects, have different display effects, and are of a different type for each second object;
wherein the third server is configured to, upon detecting that any anchor performs joint interaction with at least one other anchor and that the number of objects participating in the joint interaction is greater than a first threshold, send, based on the interaction scene of the joint interaction and using a peak-shaving strategy, a permission opening instruction to the terminal of each object participating in the joint interaction, the permission opening instruction being used for instructing the terminal to open a target permission, the objects participating in the joint interaction comprising each anchor participating in the joint interaction and each audience object in the virtual space of each anchor, and the target permission being the permission to interact with the virtual resource across virtual spaces; the display unit is further configured to display, based on the second interaction instruction, the first special effect resource at a first target display position indicated by the first display position identifier in the joint picture during the first time period; and
a module configured to perform the following step: before the first time period ends, sending the first interaction instruction to the second terminal of at least one second object, the second terminal being configured to receive the first interaction instruction and, during the second time period, display the second special effect resource at a second target display position indicated by the second display position identifier in the joint picture.
12. An interactive apparatus in a virtual space, the apparatus comprising:
a display unit configured to display a joint picture of a plurality of objects, the joint picture comprising pictures of the virtual spaces of the plurality of objects;
the display unit is further configured to display, in a case where a first audience object in the virtual space of a first object among the plurality of objects triggers a virtual resource, according to a first interaction instruction sent by a first terminal of the first object, a second special effect resource included in the virtual resource at a second target display position indicated by a second display position identifier in the joint picture during a second time period, wherein the first interaction instruction is used for instructing a second terminal of one second object to display the second special effect resource and carries the second display position identifier, the second time period, a resource identifier of the second special effect resource, and an identifier of the second object;
wherein the first terminal is configured to: receive a target interaction instruction sent by a third server in the case where the first audience object in the virtual space of the first object among the plurality of objects triggers the virtual resource; parse, from the target interaction instruction, a second interaction instruction for the first object and the first interaction instruction for a second object among the plurality of objects, the second interaction instruction carrying a first display position identifier and a first time period, the end time of the first time period being the start time of the second time period, a first special effect resource and a plurality of second special effect resources together constituting the complete special effect resource of the virtual resource, and the plurality of second special effect resources corresponding to different second objects, having different display effects, and being of a different type for each second object; display, based on the second interaction instruction, the first special effect resource at a first target display position indicated by the first display position identifier in the joint picture during the first time period; and, before the first time period ends, send the first interaction instruction to the second terminal of at least one second object, respectively;
wherein the third server is configured to, upon detecting that any anchor performs joint interaction with at least one other anchor and that the number of objects participating in the joint interaction is greater than a first threshold, send, based on the interaction scene of the joint interaction and using a peak-shaving strategy, a permission opening instruction to the terminal of each object participating in the joint interaction, the permission opening instruction being used for instructing the terminal to open a target permission, the objects participating in the joint interaction comprising each anchor participating in the joint interaction and each audience object in the virtual space of each anchor, and the target permission being the permission to interact with the virtual resource across virtual spaces.
13. A terminal, comprising:
one or more processors;
one or more memories for storing instructions executable by the one or more processors;
wherein the one or more processors are configured to execute the instructions to implement the interaction method in a virtual space of any one of claims 1 to 10.
14. A computer-readable storage medium, characterized in that at least one instruction in the computer-readable storage medium, when executed by one or more processors of a terminal, enables the terminal to perform the interaction method in a virtual space of any one of claims 1 to 10.