CN109729411B - Live broadcast interaction method and device - Google Patents

Live broadcast interaction method and device

Publication number: CN109729411B (granted); other version: CN109729411A (Chinese)
Application number: CN201910020548.5A
Authority: CN (China)
Legal status: Active (the status listed by Google Patents is an assumption, not a legal conclusion)
Prior art keywords: user, terminal, virtual, information, live broadcast
Inventor: 白伟民
Original and current assignee: Guangzhou Kugou Computer Technology Co Ltd
Application filed by Guangzhou Kugou Computer Technology Co Ltd
Abstract

The invention discloses a live broadcast interaction method and device, belonging to the technical field of the Internet. The method comprises the following steps: displaying a live broadcast interface, where the live broadcast interface comprises a first video playing area for playing the live video of a first user and a second video playing area for playing the live video of a second user; receiving a virtual item gifting instruction, where the virtual item gifting instruction indicates that the current user gifts a virtual item to the first user; displaying an animation corresponding to the virtual item in the second video playing area; and, when interaction ending information is received, ending the display of the animation corresponding to the virtual item. The invention enables users in different live broadcast rooms to interact with each other and makes the interaction more engaging.

Description

Live broadcast interaction method and device
Technical Field
The invention relates to the technical field of internet, in particular to a live broadcast interaction method and device.
Background
With the development of Internet technology, live webcasting has become a popular form of entertainment, and more and more users watch live broadcasts. To make live broadcasts more engaging, users can interact during the broadcast; for example, audience users can gift virtual items to the anchor user.
In the related art, the terminals of two anchor users can establish a co-streaming connection ("lian-mai", literally mic-linking). After the connection succeeds, the two anchor users can talk to each other, and audience users in both live broadcast rooms can see the live videos of both anchor users. During co-streaming, an audience user of either anchor user may gift a virtual item to that anchor user, and the virtual item is then displayed in that anchor user's live video playing area.
This form of live broadcast interaction merely displays the virtual items that audience users gift to the anchor user, so the interaction is of limited interest.
Disclosure of Invention
Embodiments of the invention provide a live broadcast interaction method and device, which address the limited interactivity of the related art. The technical scheme is as follows:
in a first aspect, a live broadcast interaction method is provided, including:
displaying a live broadcast interface, wherein the live broadcast interface comprises a first video playing area and a second video playing area, the first video playing area is used for playing a live broadcast video of a first user, and the second video playing area is used for playing a live broadcast video of a second user;
receiving a virtual item gifting instruction, wherein the virtual item gifting instruction instructs the current user to gift a virtual item to the first user, and the current user is a user in the live broadcast room of the first user;
displaying the animation corresponding to the virtual article in the second video playing area;
and when interaction ending information is received, ending the display of the animation corresponding to the virtual article, wherein the interaction ending information is triggered by the interaction operation performed by the user in the live broadcast room of the second user.
In one possible implementation manner, the displaying the animation corresponding to the virtual article in the second video playing area includes:
acquiring the motion tracks of a plurality of virtual articles in the second video playing area;
and displaying animation of the plurality of virtual articles moving in the second video playing area according to the motion tracks of the plurality of virtual articles.
In one possible implementation manner, ending the display of the animation corresponding to the virtual article when the interaction ending information is received includes:
and when first end information is received, ending the display of at least one virtual article in the plurality of virtual articles, wherein the first end information is triggered by a first interactive operation performed by a user in the live broadcast room of the second user.
In one possible implementation, the ending the display of the at least one of the plurality of virtual items includes:
and displaying animation of the residual virtual articles moving in the second video playing area according to the movement track of the residual virtual articles except the at least one virtual article in the plurality of virtual articles.
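The trajectory-based implementation above (obtain motion tracks, animate the items, then keep animating only the remaining items after first-end information) can be sketched as follows. This is an illustrative Python sketch, not code from the patent; the trajectory model (straight-line tracks sampled at fixed steps) and all function names are assumptions:

```python
import random

def generate_trajectories(num_items, width, height, steps=30):
    """Generate a simple linear motion track (a list of (x, y) points)
    for each virtual item inside the second video playing area."""
    tracks = []
    for _ in range(num_items):
        x0, y0 = random.uniform(0, width), 0.0          # enter at the top edge
        x1, y1 = random.uniform(0, width), float(height)  # exit at the bottom edge
        tracks.append([(x0 + (x1 - x0) * t / steps,
                        y0 + (y1 - y0) * t / steps) for t in range(steps + 1)])
    return tracks

def end_display(tracks, removed_indices):
    """On first-end information, stop drawing the removed items and keep
    animating only the remaining items along their original tracks."""
    removed = set(removed_indices)
    return [t for i, t in enumerate(tracks) if i not in removed]

tracks = generate_trajectories(5, width=360, height=640)
remaining = end_display(tracks, removed_indices=[0, 2])  # end two of five items
```

In a real client the remaining tracks would be handed back to the renderer so the surviving items continue moving without a visual restart.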
In one possible implementation manner, the displaying the animation corresponding to the virtual article in the second video playing area includes:
acquiring animation data and area information for generating preset effects, wherein the preset effects comprise a smoke effect, a flame effect and an explosion effect, and the area information is used for indicating a first local area of the second video playing area;
and displaying the animation generating the preset effect in the first local area according to the animation data and the first local area.
In one possible implementation manner, ending the display of the animation corresponding to the virtual article in the second video playing area when the interaction end information is received includes:
and when second end information is received, ending the display of the preset effect in a second local area of the first local area, wherein the second end information is triggered by second interaction operation performed by a user in a live broadcast room of a second user.
In one possible implementation, the ending the displaying of the preset effect in the second partial area of the first partial area includes:
determining a remaining local area except the second local area in the first local area according to the first local area and the second local area;
and displaying the animation generating the preset effect in the residual local area according to the animation data and the residual local area.
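The remaining-local-area computation in the two steps above amounts to region subtraction: the preset effect keeps playing only in the part of the first local area that the second local area did not clear. A minimal Python sketch; representing local areas as sets of unit grid cells is an assumption for illustration (the patent does not specify how regions are encoded):

```python
def region_cells(x, y, w, h):
    """Represent a rectangular local area as a set of unit grid cells."""
    return {(cx, cy) for cx in range(x, x + w) for cy in range(y, y + h)}

def remaining_region(first_area, second_area):
    """Cells of the first local area where the preset effect (smoke, flame,
    explosion) keeps playing after the second local area is cleared by the
    viewer's interactive operation."""
    return first_area - second_area

first = region_cells(0, 0, 8, 6)    # first local area: 8x6 cells of the playing area
second = region_cells(2, 1, 3, 2)   # second local area cleared by the interaction
rest = remaining_region(first, second)
```

Set subtraction handles any shape of cleared area, which matters because repeated second-interaction operations can carve arbitrary holes out of the first local area.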
In one possible implementation manner, the displaying the animation corresponding to the virtual article in the second video playing area includes:
adding a virtual article display view on the upper layer of the second video playing area;
and displaying the animation corresponding to the virtual article on the virtual article display view, wherein the transparency of the virtual article display view is a target transparency, and the target transparency is smaller than the transparency of the second video playing area.
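A minimal sketch of the overlay-view approach above. The `View` class and its transparency field are hypothetical stand-ins for a platform UI toolkit; the constraint that the overlay's target transparency is smaller than that of the playing area mirrors the text above:

```python
class View:
    """Hypothetical stand-in for a platform view."""
    def __init__(self, name, transparency=1.0):
        self.name = name
        self.transparency = transparency  # assumed scale: larger = more transparent
        self.children = []

    def add_subview(self, view):
        self.children.append(view)

def add_item_display_view(second_video_area, target_transparency):
    """Add a virtual-article display view on the upper layer of the second
    video playing area; its target transparency must be smaller than the
    transparency of the playing area so the animation remains visible."""
    if target_transparency >= second_video_area.transparency:
        raise ValueError("target transparency must be smaller than the area's")
    overlay = View("virtual_article_display", transparency=target_transparency)
    second_video_area.add_subview(overlay)
    return overlay

second_area = View("second_video_playing_area", transparency=1.0)
overlay = add_item_display_view(second_area, target_transparency=0.3)
```

Layering a separate view keeps the gift animation independent of the video decoder's output, so ending the animation is just removing or clearing the overlay.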
In a second aspect, a live interaction method is provided, including:
displaying a live broadcast interface, wherein the live broadcast interface comprises a first video playing area and a second video playing area, the first video playing area is used for playing a live broadcast video of a first user, and the second video playing area is used for playing a live broadcast video of a second user;
receiving virtual article display information, wherein the virtual article display information indicates that an animation corresponding to a virtual article is to be displayed in the second video playing area, and the virtual article is gifted to the first user by a user in the live broadcast room of the first user;
displaying the animation corresponding to the virtual article in the second video playing area;
and when the interactive operation is detected, ending the display of the animation corresponding to the virtual article.
In one possible implementation manner, ending the display of the animation corresponding to the virtual article when the interactive operation is detected includes:
when the interactive operation is detected, sending interactive feedback information to a server, wherein the interactive feedback information is used for feeding back the interactive operation performed by a current user to the server, and the current user is a user in a live broadcast room of the second user;
and when the interaction ending information sent by the server is received, ending the display of the animation corresponding to the virtual article.
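The detect-feedback-end sequence above is a round trip through the server: the viewer's terminal reports the interactive operation, and only the server's interaction ending information actually stops the animation. A Python sketch; the message shapes and the `FakeServer` stub are assumptions for illustration, since the patent defines no wire format:

```python
class FakeServer:
    """Hypothetical server stub: answers interaction feedback with
    interaction ending information."""
    def handle_feedback(self, feedback):
        return {"type": "interaction_end", "operation": feedback["operation"]}

class ViewerTerminal:
    """Fourth-terminal sketch: shows the gift animation until the viewer's
    interactive operation round-trips through the server."""
    def __init__(self, server):
        self.server = server
        self.animation_playing = False

    def on_display_info(self, info):
        self.animation_playing = True        # show the gift animation

    def on_interaction(self, operation):
        # send interaction feedback information to the server
        feedback = {"type": "interaction_feedback", "operation": operation}
        end_info = self.server.handle_feedback(feedback)
        # end the animation only once interaction ending information arrives
        if end_info["type"] == "interaction_end":
            self.animation_playing = False
        return end_info

terminal = ViewerTerminal(FakeServer())
terminal.on_display_info({"type": "display_info"})
end_info = terminal.on_interaction("wipe_gesture")
```

Routing the end decision through the server keeps the third and fourth terminals consistent: both stop the animation on the same interaction ending information.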
In one possible implementation, the virtual article display information includes a motion trajectory of a plurality of virtual articles within the second video playback area;
the displaying the animation corresponding to the virtual article in the second video playing area comprises:
and displaying animation of the plurality of virtual articles moving in the second video playing area according to the motion tracks of the plurality of virtual articles.
In a possible implementation manner, ending the display of the animation corresponding to the virtual article when the interaction ending information sent by the server is received includes:
and when first end information sent by the server is received, ending the display of at least one virtual article in the plurality of virtual articles, wherein the first end information is triggered by a first interactive operation performed by the current user.
In one possible implementation, the ending the display of the at least one of the plurality of virtual items includes:
and displaying animation of the residual virtual articles moving in the second video playing area according to the movement track of the residual virtual articles except the at least one virtual article in the plurality of virtual articles.
In one possible implementation, the virtual article display information includes animation data for generating preset effects including a smoke effect, a flame effect, and an explosion effect, and region information for indicating a first partial region of the second video playing region;
the displaying the animation corresponding to the virtual article in the second video playing area comprises:
and displaying the animation generating the preset effect in the first local area according to the animation data and the first local area.
In one possible implementation manner, ending the display of the animation corresponding to the virtual article in the second video playing area when the interaction end information is received includes:
and when second end information is received, ending the display of the preset effect in a second local area of the first local area, wherein the second end information is triggered by a second interaction operation performed by the current user.
In one possible implementation, the ending the displaying of the preset effect in the second partial area of the first partial area includes:
determining a remaining local area except the second local area in the first local area according to the first local area and the second local area;
and displaying the animation which generates the preset effect in the residual local area according to the virtual article display information and the residual local area.
In one possible implementation manner, the displaying the animation corresponding to the virtual article in the second video playing area includes:
adding an animation display view on the upper layer of the second video playing area;
and displaying the animation corresponding to the virtual article on the animation display view, wherein the transparency of the animation display view is a target transparency, and the target transparency is smaller than the transparency of the second video playing area.
In a third aspect, a live broadcast interaction method is provided, including:
when a first terminal where a first user is located and a second terminal where a second user is located are in a live co-streaming (stream-merging) state, receiving a virtual article presenting instruction sent by a third terminal, wherein the third terminal is a terminal of a user in the live broadcast room of the first user, and the virtual article presenting instruction instructs that the user of the third terminal presents a virtual article to the first user;
sending virtual article display information to the third terminal and a fourth terminal, wherein the virtual article display information is used for indicating that an animation corresponding to the virtual article is displayed in a second video playing area of a live broadcast interface, the second video playing area is used for playing a live broadcast video of the second user, and the fourth terminal is a terminal where the user in a live broadcast room of the second user is located;
and when receiving interaction feedback information sent by the fourth terminal, sending interaction end information to the third terminal and the fourth terminal, wherein the interaction end information is used for indicating the end of displaying the animation corresponding to the virtual article, and the interaction feedback information is used for feeding back the interaction operation performed by the user at the fourth terminal to the server.
In one possible implementation, the virtual article display information includes a motion trajectory of a plurality of virtual articles within the second video playback area;
when receiving the interactive feedback information sent by the fourth terminal, sending interactive end information to the third terminal and the fourth terminal, including:
when first interaction feedback information sent by the fourth terminal is received, first end information is sent to the third terminal and the fourth terminal, the first end information is used for indicating that the display of at least one virtual article in the plurality of virtual articles is ended, and the first interaction feedback information is used for feeding back first interaction operation performed by a user of the fourth terminal to the server.
In one possible implementation, the virtual article display information includes animation data for generating preset effects including a smoke effect, a flame effect, and an explosion effect, and region information for indicating a first partial region of the second video playing region;
when receiving the interactive feedback information sent by the fourth terminal, sending interactive end information to the third terminal and the fourth terminal, including:
and when second interaction feedback information sent by the fourth terminal is received, sending second end information to the third terminal and the fourth terminal, wherein the second end information is used for indicating the end of displaying the preset effect in a second local area of the first local area, and the second interaction feedback information is used for feeding back a second interaction operation performed by a user of the fourth terminal to the server.
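The server's role in the third aspect reduces to two relays: on a gifting/presenting instruction, fan out virtual article display information to the third and fourth terminals; on interaction feedback, fan out interaction end information to both. A Python sketch; the message names and the `outbox` transport are illustrative assumptions, not from the patent:

```python
class LiveInteractionServer:
    """Sketch of the third-aspect server: relays display information on a
    presenting instruction and end information on interaction feedback."""
    def __init__(self):
        self.outbox = []  # (terminal_id, message) pairs awaiting delivery

    def on_present_instruction(self, third_terminal, fourth_terminal, item):
        # fan out virtual article display information to both terminals
        display_info = {"type": "display_info", "item": item}
        for terminal in (third_terminal, fourth_terminal):
            self.outbox.append((terminal, display_info))

    def on_interaction_feedback(self, third_terminal, fourth_terminal, feedback):
        # fan out interaction end information to both terminals
        end_info = {"type": "interaction_end", "feedback": feedback}
        for terminal in (third_terminal, fourth_terminal):
            self.outbox.append((terminal, end_info))

server = LiveInteractionServer()
server.on_present_instruction("terminal_3", "terminal_4", item="virtual_rocket")
server.on_interaction_feedback("terminal_3", "terminal_4", feedback="first_interaction")
```

Because both terminals receive the same two messages, their displays of the gift animation start and end in lockstep, which is the point of mediating the exchange through the server.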
In a fourth aspect, a live broadcast interactive device is provided, which includes:
the display module is used for displaying a live broadcast interface, the live broadcast interface comprises a first video playing area and a second video playing area, the first video playing area is used for playing a live broadcast video of a first user, and the second video playing area is used for playing a live broadcast video of a second user;
a receiving module, configured to receive a virtual article presenting instruction, where the virtual article presenting instruction instructs the current user to present a virtual article to the first user, and the current user is a user in the live broadcast room of the first user;
the display module is further used for displaying the animation corresponding to the virtual article in the second video playing area;
the display module is further used for finishing displaying the animation corresponding to the virtual article when interaction finishing information is received, and the interaction finishing information is triggered by interaction operation performed by a user in a live broadcast room of the second user.
In one possible implementation manner, the display module is configured to acquire motion trajectories of a plurality of virtual articles in the second video playing area; and displaying animation of the plurality of virtual articles moving in the second video playing area according to the motion tracks of the plurality of virtual articles.
In a possible implementation manner, the display module is configured to end the display of at least one of the plurality of virtual items when first end information is received, where the first end information is triggered by a first interactive operation performed by a user in a live room of the second user.
In one possible implementation manner, the display module is configured to display an animation that the remaining virtual items move in the second video playing area according to the movement track of the remaining virtual items in the plurality of virtual items except the at least one virtual item.
In one possible implementation manner, the display module is configured to obtain animation data and area information for generating a preset effect, where the preset effect includes a smoke effect, a flame effect, and an explosion effect, and the area information is used to indicate a first partial area of the second video playing area;
and displaying the animation generating the preset effect in the first local area according to the animation data and the first local area.
In a possible implementation manner, the display module is configured to end the displaying of the preset effect in the second local area of the first local area when second end information is received, where the second end information is triggered by a second interaction operation performed by a user in a live broadcast room of the second user.
In one possible implementation manner, the display module is configured to determine a remaining partial area within the first partial area except for the second partial area according to the first partial area and the second partial area; and displaying the animation generating the preset effect in the residual local area according to the animation data and the residual local area.
In one possible implementation manner, the display module is used for adding a virtual article display view on the upper layer of the second video playing area; and displaying the animation corresponding to the virtual article on the virtual article display view, wherein the transparency of the virtual article display view is a target transparency, and the target transparency is smaller than the transparency of the second video playing area.
In a fifth aspect, a live interactive device is provided, which includes:
the display module is used for displaying a live broadcast interface, the live broadcast interface comprises a first video playing area and a second video playing area, the first video playing area is used for playing a live broadcast video of a first user, and the second video playing area is used for playing a live broadcast video of a second user;
a receiving module, configured to receive virtual article display information, where the virtual article display information is used to indicate that an animation corresponding to a virtual article is displayed in the second video playing area, and the virtual article is given to the first user by a user in a live broadcast room of the first user;
the display module is further used for displaying the animation corresponding to the virtual article in the second video playing area;
the display module is further used for finishing displaying the animation corresponding to the virtual article when the interactive operation is detected.
In one possible implementation manner, the display module is configured to send interaction feedback information to a server when an interaction operation is detected, where the interaction feedback information is used to feed back, to the server, an interaction operation performed by a current user, where the current user is a user in a live broadcast room of the second user; and when the interaction ending information sent by the server is received, ending the display of the animation corresponding to the virtual article.
In one possible implementation, the virtual article display information includes a motion trajectory of a plurality of virtual articles within the second video playback area;
the display module is used for displaying the animation of the plurality of virtual articles moving in the second video playing area according to the motion tracks of the plurality of virtual articles.
In a possible implementation manner, the display module is configured to end the display of at least one of the plurality of virtual items when first end information sent by the server is received, where the first end information is triggered by a first interactive operation performed by the current user.
In one possible implementation manner, the display module is configured to display an animation that the remaining virtual items move in the second video playing area according to the movement track of the remaining virtual items in the plurality of virtual items except the at least one virtual item.
In one possible implementation, the virtual article display information includes animation data for generating preset effects including a smoke effect, a flame effect, and an explosion effect, and region information for indicating a first partial region of the second video playing region;
the display module is used for displaying the animation which generates the preset effect in the first local area according to the animation data and the first local area.
In a possible implementation manner, the display module is configured to end the displaying of the preset effect in the second local area of the first local area when second end information is received, where the second end information is triggered by a second interaction operation performed by the current user.
In one possible implementation manner, the display module is configured to determine a remaining partial area within the first partial area except for the second partial area according to the first partial area and the second partial area; and displaying the animation which generates the preset effect in the residual local area according to the virtual article display information and the residual local area.
In one possible implementation manner, the display module is used for adding an animation display view on the upper layer of the second video playing area; and displaying the animation corresponding to the virtual article on the animation display view, wherein the transparency of the animation display view is a target transparency, and the target transparency is smaller than the transparency of the second video playing area.
In a sixth aspect, a live broadcast interaction device is provided, which includes:
the receiving module is used for receiving a virtual article presenting instruction sent by a third terminal when a first terminal where a first user is located and a second terminal where a second user is located are in a live co-streaming (stream-merging) state, wherein the third terminal is a terminal of a user in the live broadcast room of the first user, and the virtual article presenting instruction instructs the user of the third terminal to present a virtual article to the first user;
a sending module, configured to send virtual article display information to the third terminal and a fourth terminal, where the virtual article display information is used to indicate that an animation corresponding to the virtual article is displayed in a second video playing area of a live broadcast interface, the second video playing area is used to play a live broadcast video of the second user, and the fourth terminal is a terminal where a user in a live broadcast room of the second user is located;
the sending module is further configured to send interaction end information to the third terminal and the fourth terminal when receiving interaction feedback information sent by the fourth terminal, where the interaction end information is used to indicate that display of an animation corresponding to the virtual article is ended, and the interaction feedback information is used to feed back, to the server, an interaction operation performed by a user at the fourth terminal.
In one possible implementation, the virtual article display information includes a motion trajectory of a plurality of virtual articles within the second video playback area;
the sending module is configured to send first end information to the third terminal and the fourth terminal when receiving first interaction feedback information sent by the fourth terminal, where the first end information is used to indicate that display of at least one of the plurality of virtual articles is ended, and the first interaction feedback information is used to feed back a first interaction operation performed by a user of the fourth terminal to the server.
In one possible implementation, the virtual article display information includes animation data for generating preset effects including a smoke effect, a flame effect, and an explosion effect, and region information for indicating a first partial region of the second video playing region;
the sending module is used for sending second ending information to the third terminal and the fourth terminal when receiving second interaction feedback information sent by the fourth terminal, the second ending information is used for indicating ending of displaying the preset effect in a second local area of the first local area, and the second interaction feedback information is used for feeding back second interaction operation performed by a user of the fourth terminal to the server.
In a seventh aspect, a terminal is provided that includes a processor and a memory; the memory is used for storing a computer program; the processor is configured to execute the computer program stored in the memory to implement the method steps of any one of the implementations of the first aspect and the second aspect.
In an eighth aspect, a server is provided that includes a processor and a memory; the memory is used for storing a computer program; the processor is configured to execute the computer program stored in the memory to implement the method steps of any one of the implementation manners of the third aspect.
In a ninth aspect, a computer-readable storage medium is provided, in which a computer program is stored; when the computer program is executed by a processor, it implements the method steps of any one of the implementation manners of the first aspect.
The technical scheme provided by the embodiment of the invention has the beneficial effects that at least:
the live video playing area of the first user and the live video playing area of the second user are displayed on the live interface, the user in the live broadcast room of the first user can give a virtual article to the first user, when a virtual article giving instruction is received, the animation corresponding to the virtual article is displayed in the live video playing area of the second user, the user in the live broadcast room of the second user is prompted to perform interactive operation to trigger interactive end information, and when the interactive end information is received, the display of the animation corresponding to the virtual article is ended. According to the live broadcast interaction scheme, the animation corresponding to the virtual article presented by the user in the live broadcast room of the first user is displayed in the live broadcast video playing area of the second user, so that the user in the live broadcast room of the second user is prompted to finish displaying the animation through interactive operation, the requirement for interaction among the users in different live broadcast rooms is met, and the interest of interaction is improved.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an implementation environment of a live broadcast interaction method according to an embodiment of the present invention;
fig. 2 is a flowchart of a live broadcast interaction method according to an embodiment of the present invention;
fig. 3 is a flowchart of a live broadcast interaction method according to an embodiment of the present invention;
fig. 4 is a flowchart of a live broadcast interaction method according to an embodiment of the present invention;
fig. 5 is a flowchart of a live broadcast interaction method according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a live broadcast interaction apparatus according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a live broadcast interaction apparatus according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a live broadcast interaction apparatus according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of a terminal 900 according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of a server 1000 according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of an implementation environment of a live broadcast interaction method according to an embodiment of the present invention. Referring to fig. 1, the implementation environment may include a first terminal 101, a second terminal 102, a third terminal 103, a fourth terminal 104, and a server 105.
The first terminal 101 is a terminal on which a first user performs live broadcast, and the second terminal 102 is a terminal on which a second user performs live broadcast, where the first user and the second user are both anchor users. The third terminal 103 is a terminal where a user in the live broadcast room of the first user is located, and the fourth terminal 104 is a terminal where a user in the live broadcast room of the second user is located.
The server 105 is configured to receive live streams sent by the first terminal 101 and the second terminal 102, and forward the live streams to the third terminal 103 and the fourth terminal 104, so that the third terminal 103 and the fourth terminal 104 can display corresponding live videos based on the received live streams.
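The relay role of the server 105 can be sketched as follows; the class and method names are illustrative assumptions, not part of the claimed method, and a real implementation would forward actual audio/video streams rather than dictionaries:

```python
# Minimal sketch (hypothetical names) of the relay role of server 105:
# live streams from the two broadcaster terminals are merged and
# forwarded to every viewer terminal in either live broadcast room.

class RelayServer:
    def __init__(self):
        self.viewer_terminals = []          # third/fourth terminals

    def register_viewer(self, terminal):
        self.viewer_terminals.append(terminal)

    def on_live_streams(self, first_stream, second_stream):
        merged = {"first": first_stream, "second": second_stream}
        for terminal in self.viewer_terminals:
            terminal.receive(merged)        # each terminal renders both areas


class ViewerTerminal:
    def __init__(self):
        self.last_frame = None

    def receive(self, merged_stream):
        self.last_frame = merged_stream


server = RelayServer()
third, fourth = ViewerTerminal(), ViewerTerminal()
server.register_viewer(third)
server.register_viewer(fourth)
server.on_live_streams("frame-A", "frame-B")
print(third.last_frame)   # {'first': 'frame-A', 'second': 'frame-B'}
```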
Fig. 2 is a flowchart of a live broadcast interaction method according to an embodiment of the present invention. Referring to fig. 2, the method includes:
201. Displaying a live broadcast interface, wherein the live broadcast interface comprises a first video playing area and a second video playing area, the first video playing area is used for playing a live video of a first user, and the second video playing area is used for playing a live video of a second user.
202. Receiving a virtual item gifting instruction, wherein the virtual item gifting instruction is used for instructing a current user to gift a virtual item to the first user, and the current user is a user in the live broadcast room of the first user.
203. Displaying the animation corresponding to the virtual item in the second video playing area.
204. When interaction ending information is received, ending the display of the animation corresponding to the virtual item, wherein the interaction ending information is triggered by an interactive operation performed by a user in the live broadcast room of the second user.
According to the method provided by this embodiment of the present invention, the live video playing areas of the first user and the second user are displayed on the live broadcast interface, and a user in the live broadcast room of the first user can gift a virtual item to the first user. When a virtual item gifting instruction is received, the animation corresponding to the virtual item is displayed in the live video playing area of the second user, which prompts users in the live broadcast room of the second user to perform an interactive operation that triggers the interaction ending information; when the interaction ending information is received, the display of the animation is ended. By displaying, in the live video playing area of the second user, the animation corresponding to the virtual item gifted by a user in the live broadcast room of the first user, and prompting users in the live broadcast room of the second user to end the display through interactive operations, this live broadcast interaction scheme meets the need for interaction between users in different live broadcast rooms and makes the interaction more engaging.
In one possible implementation manner, the displaying the animation corresponding to the virtual object in the second video playing area includes:
acquiring the motion tracks of a plurality of virtual articles in the second video playing area;
and displaying animation of the plurality of virtual articles moving in the second video playing area according to the motion tracks of the plurality of virtual articles.
In a possible implementation manner, when the interaction end information is received, ending the display of the animation corresponding to the virtual object, including:
and when first end information is received, ending the display of at least one virtual article in the plurality of virtual articles, wherein the first end information is triggered by a first interactive operation performed by a user in the live broadcast room of the second user.
In one possible implementation, the ending the displaying of the at least one of the plurality of virtual items includes:
and displaying an animation of the remaining virtual articles in the plurality of virtual articles, other than the at least one virtual article, moving in the second video playing area according to the motion trajectories of the remaining virtual articles.
In one possible implementation manner, the displaying the animation corresponding to the virtual object in the second video playing area includes:
acquiring animation data and region information for generating preset effects including a smoke effect, a flame effect and an explosion effect, wherein the region information is used for indicating a first local region of the second video playing region;
displaying an animation generating the preset effect in the first local area according to the animation data and the first local area.
In a possible implementation manner, when the interaction end information is received, ending displaying the animation corresponding to the virtual object in the second video playing area, including:
and when second ending information is received, ending the display of the preset effect in a second local area of the first local area, wherein the second ending information is triggered by second interaction operation performed by a user in the live broadcast room of the second user.
In one possible implementation, the ending the displaying of the preset effect in the second partial area of the first partial area includes:
determining a remaining local area in the first local area except the second local area according to the first local area and the second local area;
and displaying the animation generating the preset effect in the residual local area according to the animation data and the residual local area.
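The remaining-local-area computation described above (the first local area minus the second local area) can be sketched as a set difference; modelling regions as sets of grid cells is an assumption for illustration, since a real client would more likely use pixel rectangles or bitmap masks:

```python
# Sketch of the "remaining local area" computation: regions are
# modelled as sets of grid cells (an illustrative assumption).

def remaining_area(first_local_area, second_local_area):
    """Cells of the first local area not yet cleared by interaction."""
    return first_local_area - second_local_area

first = {(x, y) for x in range(4) for y in range(4)}   # effect covers 4x4
wiped = {(x, y) for x in range(2) for y in range(2)}   # user cleared 2x2
rest = remaining_area(first, wiped)
print(len(rest))  # 12 cells still display the preset effect
```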
In one possible implementation manner, the displaying the animation corresponding to the virtual object in the second video playing area includes:
adding a virtual article display view on the upper layer of the second video playing area;
and displaying the animation corresponding to the virtual article on the virtual article display view, wherein the transparency of the virtual article display view is a target transparency, and the target transparency is smaller than the transparency of the second video playing area.
All the above-mentioned optional technical solutions can be combined arbitrarily to form the optional embodiments of the present invention, and are not described herein again.
Fig. 3 is a flowchart of a live broadcast interaction method according to an embodiment of the present invention. Referring to fig. 3, the method includes:
301. Displaying a live broadcast interface, wherein the live broadcast interface comprises a first video playing area and a second video playing area, the first video playing area is used for playing a live video of a first user, and the second video playing area is used for playing a live video of a second user.
302. Receiving virtual article display information, wherein the virtual article display information is used for indicating that the animation corresponding to the virtual article is displayed in the second video playing area, and the virtual article is gifted to the first user by a user in the live broadcast room of the first user.
303. Displaying the animation corresponding to the virtual article in the second video playing area.
304. When the interactive operation is detected, ending the display of the animation corresponding to the virtual article.
According to the method provided by this embodiment of the present invention, the live video playing areas of the first user and the second user are displayed on the live broadcast interface, and a user in the live broadcast room of the first user can gift a virtual article to the first user. When the virtual article display information is received, the animation corresponding to the virtual article is displayed in the live video playing area of the second user, and when a user in the live broadcast room of the second user performs an interactive operation, the display of the animation is ended. By displaying, in the live video playing area of the second user, the animation corresponding to the virtual article gifted by a user in the live broadcast room of the first user, and prompting users in the live broadcast room of the second user to end the display through interactive operations, this live broadcast interaction scheme meets the need for interaction between users in different live broadcast rooms and makes the interaction more engaging.
In one possible implementation manner, when the interactive operation is detected, ending the displaying of the animation corresponding to the virtual object includes:
when the interactive operation is detected, sending interactive feedback information to a server, wherein the interactive feedback information is used for feeding back the interactive operation performed by the current user to the server, and the current user is a user in the live broadcast room of the second user;
and when the interaction ending information sent by the server is received, ending the display of the animation corresponding to the virtual article.
In one possible implementation, the virtual article display information includes a motion trajectory of a plurality of virtual articles within the second video playback area;
the displaying the animation corresponding to the virtual article in the second video playing area includes:
and displaying animation of the plurality of virtual articles moving in the second video playing area according to the motion tracks of the plurality of virtual articles.
In a possible implementation manner, when the interaction end information sent by the server is received, ending the display of the animation corresponding to the virtual object, including:
and when first end information sent by the server is received, ending the display of at least one virtual article in the plurality of virtual articles, wherein the first end information is triggered by the first interactive operation performed by the current user.
In one possible implementation, the ending the displaying of the at least one of the plurality of virtual items includes:
and displaying an animation of the remaining virtual articles in the plurality of virtual articles, other than the at least one virtual article, moving in the second video playing area according to the motion trajectories of the remaining virtual articles.
In one possible implementation, the virtual article display information includes animation data for generating preset effects including a smoke effect, a flame effect, and an explosion effect, and region information for indicating a first partial region of the second video playback region;
the displaying the animation corresponding to the virtual article in the second video playing area includes:
displaying an animation generating the preset effect in the first local area according to the animation data and the first local area.
In a possible implementation manner, when the interaction end information is received, ending displaying the animation corresponding to the virtual object in the second video playing area, including:
and when second end information is received, ending the display of the preset effect in a second local area of the first local area, wherein the second end information is triggered by a second interaction operation performed by the current user.
In one possible implementation, the ending the displaying of the preset effect in the second partial area of the first partial area includes:
determining a remaining local area in the first local area except the second local area according to the first local area and the second local area;
and displaying the animation which generates the preset effect in the residual local area according to the virtual article display information and the residual local area.
In one possible implementation manner, the displaying the animation corresponding to the virtual object in the second video playing area includes:
adding an animation display view on the upper layer of the second video playing area;
and displaying the animation corresponding to the virtual article on the animation display view, wherein the transparency of the animation display view is the target transparency, and the target transparency is smaller than the transparency of the second video playing area.
All the above-mentioned optional technical solutions can be combined arbitrarily to form the optional embodiments of the present invention, and are not described herein again.
Fig. 4 is a flowchart of a live broadcast interaction method according to an embodiment of the present invention. Referring to fig. 4, the method includes:
401. When a first terminal where a first user is located and a second terminal where a second user is located are in a live stream merging state, receiving a virtual article presenting instruction sent by a third terminal, wherein the third terminal is the terminal where a user in the live broadcast room of the first user is located, and the virtual article presenting instruction is used for instructing the user of the third terminal to present a virtual article to the first user.
402. Sending virtual article display information to the third terminal and a fourth terminal, wherein the virtual article display information is used for indicating that the animation corresponding to the virtual article is displayed in a second video playing area of a live broadcast interface, the second video playing area is used for playing a live video of the second user, and the fourth terminal is the terminal where a user in the live broadcast room of the second user is located.
403. When interaction feedback information sent by the fourth terminal is received, sending interaction end information to the third terminal and the fourth terminal, wherein the interaction end information is used for indicating the end of displaying the animation corresponding to the virtual article, and the interaction feedback information is used for feeding back the interactive operation performed by the user of the fourth terminal to the server.
According to the method provided by this embodiment of the present invention, when a user in the live broadcast room of the first user presents a virtual article to the first user, virtual article display information is sent, according to the received virtual article presenting instruction, to the terminals of users in the live broadcast rooms of both the first user and the second user. The virtual article display information indicates that the animation corresponding to the virtual article is to be displayed in the live video playing area of the second user, which prompts users in the live broadcast room of the second user to perform interactive operations to end the display of the animation. By displaying, in the live video playing area of the second user, the animation corresponding to the virtual article presented by a user in the live broadcast room of the first user, and prompting users in the live broadcast room of the second user to end the display through interactive operations, this live broadcast interaction scheme meets the need for interaction between users in different live broadcast rooms and makes the interaction more engaging.
In one possible implementation, the virtual article display information includes a motion trajectory of a plurality of virtual articles within the second video playback area;
when receiving the interaction feedback information sent by the fourth terminal, sending interaction end information to the third terminal and the fourth terminal, including:
when first interaction feedback information sent by the fourth terminal is received, first end information is sent to the third terminal and the fourth terminal, the first end information is used for indicating that the display of at least one virtual article in the plurality of virtual articles is ended, and the first interaction feedback information is used for feeding back first interaction operation performed by a user of the fourth terminal to the server.
In one possible implementation, the virtual article display information includes animation data for generating preset effects including a smoke effect, a flame effect, and an explosion effect, and region information for indicating a first partial region of the second video playback region;
when receiving the interaction feedback information sent by the fourth terminal, sending interaction end information to the third terminal and the fourth terminal, including:
and when second interaction feedback information sent by the fourth terminal is received, sending second end information to the third terminal and the fourth terminal, wherein the second end information is used for indicating the end of displaying the preset effect in a second local area of the first local area, and the second interaction feedback information is used for feeding back a second interaction operation performed by a user of the fourth terminal to the server.
All the above-mentioned optional technical solutions can be combined arbitrarily to form the optional embodiments of the present invention, and are not described herein again.
Fig. 5 is a flowchart of a live broadcast interaction method according to an embodiment of the present invention. The method can be applied to the implementation environment shown in the corresponding embodiment of fig. 1, and referring to fig. 5, the method includes:
501. When a first terminal where a first user is located and a second terminal where a second user is located are in a live stream merging state, a third terminal displays a live broadcast interface, wherein the live broadcast interface comprises a first video playing area and a second video playing area, the first video playing area is used for playing a live video of the first user, and the second video playing area is used for playing a live video of the second user.
The first user and the second user are both anchor users, and the third terminal is a terminal where a user in a live broadcast room of the first user is located, that is, a terminal where an audience user of the first user is located.
In this embodiment of the present invention, the first terminal and the second terminal may perform live stream merging, that is, mic-connecting (co-streaming). In a possible implementation manner, the mic-connecting process may include the following. The first user performs a mic-connect operation on the first terminal, for example, clicking a mic-connect button, which triggers the first terminal to initiate a mic-connect request to the server so as to perform mic-connected live broadcast with a terminal where another user is located. After receiving the mic-connect request, the server may allocate the second user of the second terminal to connect with the first user of the first terminal. Of course, the mic-connect request may also be initiated by the second user of the second terminal, in which case the server allocates the first user of the first terminal to connect with the second user of the second terminal. After the mic connection succeeds, the first terminal and the second terminal are in the live stream merging state. In this state, the first terminal and the second terminal send their respective live streams to the server; the server merges the two live streams and sends the merged live stream to the terminals where users in the live broadcast rooms are located, including the third terminal where a user in the live broadcast room of the first user is located and the fourth terminal where a user in the live broadcast room of the second user is located. After receiving the live stream sent by the server, the third terminal and the fourth terminal can, based on the received live stream, play the live video of the first user in the first video playing area of the live broadcast interface and play the live video of the second user in the second video playing area.
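The mic-connect allocation described above can be sketched as follows; all names are hypothetical, and the pairing policy (first-come first-served) is an assumption, since the patent does not prescribe how the server allocates partners:

```python
# Hedged sketch of the mic-connect allocation: the server pairs the
# next requesting anchor with a waiting one, after which both terminals
# are in the live stream merging state. Names are hypothetical.

class LinkServer:
    def __init__(self):
        self.pending = None                 # anchor waiting for a partner

    def request_link(self, anchor_id):
        """Called when an anchor clicks the mic-connect button."""
        if self.pending is None:
            self.pending = anchor_id
            return None                     # wait for the server to allocate
        partner, self.pending = self.pending, None
        return (partner, anchor_id)         # both now in merged-stream state

server = LinkServer()
first_try = server.request_link("first_user")
pair = server.request_link("second_user")
print(first_try, pair)  # None ('first_user', 'second_user')
```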
The first video playing area may be on the left side of the second video playing area, or may be on the right side of the second video playing area, which is not limited in the embodiment of the present invention.
502. When the first terminal where the first user is located and the second terminal where the second user is located are in the live stream merging state, the fourth terminal displays a live broadcast interface.
The fourth terminal is a terminal where a user in the live broadcast room of the second user is located, that is, a terminal where an audience user of the second user is located.
Step 502 is the same as step 501, and the specific implementation process has been described in detail in step 501, and is not described here again.
It should be noted that, in the embodiment of the present invention, only the step 501 of displaying the live interface by the third terminal and the step 502 of displaying the live interface by the fourth terminal are taken as examples for description, and the step 501 and the step 502 do not represent an execution sequence.
503. When receiving a virtual article presenting instruction, the third terminal sends the virtual article presenting instruction to the server, wherein the virtual article presenting instruction is used for instructing the current user to present a virtual article to the first user.
The virtual article is also referred to as a virtual gift, and the current user refers to the user of the third terminal, that is, a user in the live broadcast room of the first user.
In this embodiment of the present invention, when the first terminal and the second terminal are in the live stream merging state, the user of the third terminal can present a virtual article to the first user. For example, the user may perform an operation that triggers the third terminal to display a virtual article selection interface, on which multiple types of virtual articles are available for selection; the user can select any type of virtual article on the virtual article selection interface to trigger the virtual article presenting instruction.
504. When receiving the virtual article presenting instruction sent by the third terminal, the server sends virtual article display information to the third terminal and the fourth terminal, wherein the virtual article display information is used for indicating that the animation corresponding to the virtual article is displayed in the second video playing area of the live broadcast interface.
In the embodiment of the invention, when the server receives the virtual article presenting instruction, the server can query the corresponding relation between the type of the virtual article and the display information of the virtual article according to the type of the virtual article indicated by the virtual article presenting instruction to obtain the corresponding display information of the virtual article, and send the queried display information of the virtual article to the terminals of all users in the live broadcast room, including the third terminal and the fourth terminal. The corresponding relationship may include virtual article display information corresponding to a plurality of virtual article types, different virtual article types may correspond to different virtual article display information, and different virtual article display information is used to display different animations.
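The server-side correspondence between virtual article types and virtual article display information can be sketched as a simple lookup table; the types and payload fields below are illustrative assumptions, not values prescribed by the embodiment:

```python
# Sketch of the server-side lookup from virtual article type to
# virtual article display information (all entries are illustrative).

DISPLAY_INFO = {
    "bee":   {"form": "trajectories", "count": 20},
    "smoke": {"form": "effect", "effect": "smoke"},
    "bomb":  {"form": "effect", "effect": "explosion"},
}

def display_info_for(item_type):
    """Query the correspondence; different types map to different animations."""
    return DISPLAY_INFO.get(item_type)

info = display_info_for("bee")
print(info["form"])  # trajectories
```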
In one possible implementation, the virtual item display information includes, but is not limited to, the following two forms:
first, the virtual article display information includes the motion tracks of a plurality of virtual articles in the second video playing area.
Accordingly, the virtual item display information is used to display an animation of the plurality of virtual items moving within the second video playback area. For example, the virtual item type may be a bee, and the corresponding virtual item display information may include a movement track of a swarm of bees in the second video playing area, for displaying an animation of the swarm of bees flying. The virtual item type may be a zombie, and the corresponding virtual item display information may include a movement trajectory of a group of zombies in the second video playing area, for displaying an animation of a movement of the group of zombies.
Of course, the above are only two examples of the virtual item type and the virtual item display information, and it is understood that the virtual item type and the virtual item display information may have other forms, which is not specifically limited in this embodiment of the present invention.
Second, the virtual article display information includes animation data for generating a preset effect including a smoke effect, a flame effect, and an explosion effect, and region information for indicating a first partial region of the second video playing region;
accordingly, the virtual item display information is used to display an animation that produces the preset effect within the first local area. For example, the virtual item type may be a smoke bomb, whose corresponding virtual item display information is used to display an animation of billowing smoke. The virtual item type may be a flame, whose corresponding display information is used to display an animation of a blazing fire. The virtual item type may be a bomb, whose corresponding display information is used to display an animation of an explosion, such as cracks appearing in the interface, a user's face in the live video being blackened, or a user's hair turning into an afro.
505. When receiving the virtual article display information sent by the server, the third terminal displays the animation corresponding to the virtual article in the second video playing area.
For the two virtual items in step 504, the animation corresponding to the virtual items may include, but is not limited to, the following two forms:
the first animation corresponding to the virtual object is an animation in which a plurality of virtual objects move in the second video playing area.
The process of displaying the first animation by the terminal can comprise the following steps: acquiring the motion tracks of a plurality of virtual articles in the second video playing area; and displaying animation of the plurality of virtual articles moving in the second video playing area according to the motion tracks of the plurality of virtual articles.
The third terminal may obtain the motion trajectories of the plurality of virtual items from the virtual item display information, and display animation in which the plurality of virtual items move in the second video playing area according to the respective motion trajectories, for example, display that the second video playing area is covered by a group of flying bees.
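The display of the first animation can be sketched as stepping each virtual article along its motion trajectory; the trajectories are illustrative coordinate lists, and a real client would interpolate positions per rendered frame:

```python
# Sketch of moving several virtual articles along their trajectories
# inside the second video playing area (coordinates are illustrative).

def positions_at(trajectories, frame):
    """Position of every virtual article at the given frame index;
    articles hold their final position once a trajectory is exhausted."""
    return [traj[min(frame, len(traj) - 1)] for traj in trajectories]

bee_trajectories = [
    [(0, 0), (1, 1), (2, 2)],   # one bee per trajectory
    [(5, 0), (4, 1), (3, 2)],
]
print(positions_at(bee_trajectories, 1))  # [(1, 1), (4, 1)]
```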
And the second animation corresponding to the virtual article is the animation generating the preset effect in the second video playing area.
The process of displaying the second animation by the terminal may include the following steps: acquiring animation data for generating a preset effect including a smoke effect, a flame effect and an explosion effect, and region information used for indicating a first local region of the second video playing area; and displaying an animation generating the preset effect in the first local area according to the animation data and the first local area.
The third terminal may obtain the animation data and the area information from the virtual article display information, and perform animation rendering in the first local area based on the animation data, so as to display an animation that generates a preset effect in the first local area, for example, display that the second video playing area is covered by a cloud of smoke.
In one possible implementation manner, the terminal may add a virtual article display view on the upper layer of the second video playing area; and displaying the animation corresponding to the virtual article on the virtual article display view, wherein the transparency of the virtual article display view is a target transparency, and the target transparency is smaller than the transparency of the second video playing area.
The size of the virtual article display view may be the same as that of the second video playing area. Displaying the animation on a virtual article display view whose transparency is smaller than that of the second video playing area achieves the effect of occluding the second video playing area. Of course, only a part of the virtual article display view may have the target transparency: for example, the part used for displaying the animation may have the target transparency while the remaining part has the same transparency as the second video playing area, so that the displayed animation partially occludes the second video playing area.
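Why a display view with lower transparency (higher opacity) occludes the video underneath can be illustrated with simple per-pixel alpha blending; the pixel values below are illustrative, and in practice the UI framework performs this compositing:

```python
# Sketch of source-over alpha compositing: the overlay (display view)
# is blended over the second video playing area. Values are illustrative.

def blend(video_pixel, overlay_pixel, overlay_opacity):
    """Composite one overlay pixel over one video pixel."""
    return tuple(
        round(o * overlay_opacity + v * (1 - overlay_opacity))
        for v, o in zip(video_pixel, overlay_pixel)
    )

video = (200, 200, 200)       # frame of the second user's live video
smoke = (80, 80, 80)          # animation pixel on the display view
print(blend(video, smoke, 0.9))   # mostly smoke: (92, 92, 92)
```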
506. When receiving the virtual article display information sent by the server, the fourth terminal displays the animation corresponding to the virtual article in the second video playing area.
Step 506 is the same as step 505 and will not be described again.
It should be noted that, in the embodiment of the present invention, only the step 505 of displaying the animation corresponding to the virtual article by the third terminal is taken as an example, and the step 506 of displaying the animation corresponding to the virtual article by the fourth terminal is taken as an example for explanation, and the step 505 and the step 506 do not represent an execution sequence.
507. When the interactive operation is detected, the fourth terminal sends interactive feedback information to the server, wherein the interactive feedback information is used for feeding back the interactive operation performed by the current user to the server.
The current user refers to the user of the fourth terminal, that is, a user in the live broadcast room of the second user.
In this embodiment of the present invention, when the fourth terminal displays the animation corresponding to the virtual article in the second video playing area of the live broadcast interface, the user's viewing experience is affected; at this time, the user of the fourth terminal can perform an interactive operation.
For the first animation in step 505, in order to end the display of the animation, the user of the fourth terminal may perform a first interactive operation. For example, when displaying an animation of a group of bees flying, the first interactive operation may be an operation in which the user clicks on the bees flying in the second video playing area, or clicks on any position in the second video playing area.
For the second animation in step 505, in order to end the display of the animation, the user of the fourth terminal may perform a second interactive operation. For example, when an animation of billowing smoke is displayed, the second interactive operation may be a sliding operation performed by the user in the second video playing area.
It should be noted that different animations may correspond to the same interactive operation or different interactive operations, that is, the first interactive operation and the second interactive operation may be the same or different, and the specific form of the first interactive operation and the second interactive operation is not limited in the embodiment of the present invention.
After detecting the interactive operation of the user, the fourth terminal can inform the server that the user currently performs the interactive operation in a mode of sending interactive feedback information. Specifically, when a first interaction operation is detected, the fourth terminal sends first interaction feedback information to the server, wherein the first interaction feedback information is used for feeding back the first interaction operation performed by a user of the fourth terminal to the server; and when detecting a second interactive operation, the fourth terminal sends second interactive feedback information to the server, wherein the second interactive feedback information is used for feeding back the second interactive operation performed by the user of the fourth terminal to the server.
It should be noted that the user of the fourth terminal may perform multiple interactive operations. The fourth terminal may send one piece of interaction feedback information to the server each time it detects an interactive operation. Alternatively, the fourth terminal may set a preset time length and, every preset time length, send one piece of interaction feedback information to the server according to the interactive operations detected within that time length, the interaction feedback information being used to feed back the interactive operations performed by the user within the preset time length.
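The batched variant described above can be sketched as follows: operations detected within one preset time length are collected and sent as a single piece of interaction feedback information. All names here (`FeedbackBatcher`, `send_to_server`, the message shape) are illustrative assumptions; the real terminal would use its own network layer.

```python
class FeedbackBatcher:
    """Collect the interactive operations detected within a preset time
    length and send one feedback message per window, as in step 507.
    `send_to_server` stands in for the real network call."""

    def __init__(self, preset_length: float, send_to_server):
        self.preset_length = preset_length
        self.send = send_to_server
        self.pending = []        # operations detected in the current window
        self.window_start = 0.0  # timestamps are supplied by the caller

    def on_interaction(self, op: str, now: float) -> None:
        self.pending.append(op)
        if now - self.window_start >= self.preset_length:
            # One feedback message covering the whole window.
            self.send({"ops": list(self.pending)})
            self.pending.clear()
            self.window_start = now
```

Passing timestamps in explicitly (rather than reading a clock inside the class) keeps the batching logic deterministic and testable.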
508. And when receiving the interaction feedback information sent by the fourth terminal, the server sends interaction end information to the third terminal and the fourth terminal, wherein the interaction end information is used for indicating the end of displaying the animation corresponding to the virtual article.
For the first animation in step 505, in one possible implementation, this step 508 may include: and when first interactive feedback information sent by the fourth terminal is received, sending first end information to the third terminal and the fourth terminal, wherein the first end information is used for indicating that the display of at least one virtual article in the plurality of virtual articles is ended.
When the server receives the first interaction feedback information sent by the fourth terminal, it knows that the user of the fourth terminal has performed the first interactive operation. The server may then send first end information to the third terminal and the fourth terminal, instructing them to end the display of one or more virtual articles. When the third terminal and the fourth terminal are displaying the last virtual article, the first end information sent at that time serves as the interaction end information, instructing the third terminal and the fourth terminal to end the display of the animation.
In a possible implementation manner, when receiving the first interaction feedback information, the server may first determine whether the first interactive operation performed by the user satisfies a preset condition, and execute the step of sending the first end information only if the preset condition is satisfied. For example, the preset condition may be that the number of first interactive operations performed by users of the fourth terminal reaches a preset number of times, where the preset number of times is greater than or equal to 1. The number of first interactive operations may be counted from the interaction feedback information sent by one fourth terminal or by a plurality of fourth terminals, which is not specifically limited in the embodiment of the present invention.
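The preset-condition check can be sketched as a server-side counter that aggregates feedback from any number of fourth terminals and fires once the preset number of operations is reached. The reset-after-firing behavior is an assumption (one threshold crossing per virtual article); the class and field names are illustrative.

```python
class InteractionCounter:
    """Server-side sketch of the preset condition in step 508: count the
    first interactive operations reported by the fourth terminals and
    signal that first end information should be sent once the preset
    number of times is reached."""

    def __init__(self, preset_times: int):
        assert preset_times >= 1
        self.preset_times = preset_times
        self.count = 0

    def on_feedback(self, num_ops: int = 1) -> bool:
        """Return True when first end information should be sent.
        `num_ops` allows one feedback message to report several operations
        (the batched case described for step 507)."""
        self.count += num_ops
        if self.count >= self.preset_times:
            self.count = 0  # reset for the next virtual article
            return True
        return False
```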
For the second animation in step 505, in one possible implementation, this step 508 may include: and when second interaction feedback information sent by the fourth terminal is received, sending second ending information to the third terminal and the fourth terminal, wherein the second ending information is used for indicating ending of displaying the preset effect in the second local area of the first local area.
The second local area may be an area of a preset size, or may be determined according to the position and size of the second interactive operation. When the server receives the second interaction feedback information sent by the fourth terminal, it knows that the user of the fourth terminal has performed the second interactive operation, and may then send second end information to the third terminal and the fourth terminal, instructing them to end the display of the preset effect in that local area. Similarly to the sending of the first end information, the server may also execute the step of sending the second end information only when a preset condition is satisfied, which is not described again.
509. And when receiving the interaction ending information, the third terminal ends the display of the animation corresponding to the virtual article, wherein the interaction ending information is triggered by the interaction operation performed by the user in the live broadcast room of the second user.
For the first animation in step 505, in one possible implementation, this step 509 may include: when the first end information is received, the display of at least one of the plurality of virtual items is ended.
The first interaction feedback information is sent when the fourth terminal detects the first interactive operation of its user, and the user of the fourth terminal is a user in the live broadcast room of the second user; therefore, the first end information is also triggered by a first interactive operation performed by a user in the live broadcast room of the second user, and is used to indicate that the display of the at least one virtual article is to be ended. The first end information may carry information of the at least one virtual article, such as a label of the virtual article or its current position, so that the terminal can determine the at least one virtual article. Alternatively, the first end information may carry a target number, so that the terminal randomly selects the target number of virtual articles from the plurality of virtual articles as the at least one virtual article.
In one possible implementation, the ending the displaying of the at least one of the plurality of virtual items includes: and displaying the animation of the residual virtual article moving in the second video playing area according to the motion trail of the residual virtual article except the at least one virtual article in the plurality of virtual articles.
The terminal may determine the remaining virtual items according to the plurality of virtual items and the at least one virtual item, obtain the motion trajectories of the remaining virtual items from the first virtual item display information in step 504, and display the animation in which the remaining virtual items move in the second video playing area according to the respective motion trajectories.
For example, in step 505, the third terminal may display a first number of virtual articles in the second video playing area of the live broadcast interface. When first end information is received for the first time, the third terminal displays a second number of virtual articles in the second video playing area, the second number being smaller than the first number; when first end information is received for the second time, the third terminal displays a third number of virtual articles, the third number being smaller than the second number; and so on, the number of virtual articles displayed by the third terminal in the second video playing area decreases continuously until no virtual articles remain.
Taking the animation corresponding to the virtual article as a swarm of flying bees and the interactive operation as a click operation as an example: the server may send first end information each time a user of the fourth terminal performs a click operation, and the third terminal and the fourth terminal end the display of one bee, achieving the effect of driving away one bee per click, or end the display of several bees, achieving the effect of driving away several bees per click. Alternatively, the users of the fourth terminals may cooperate to perform a preset number of click operations before the server sends the first end information, and the third terminal and the fourth terminal then end the display of one bee, achieving the effect of driving away one bee through multiple cooperative clicks.
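The client-side bookkeeping for the first animation can be sketched as follows: the first end information either names the articles to remove (by label) or carries only a target number, in which case the terminal removes that many articles at random; the remaining articles keep their motion tracks. The field names (`labels`, `target_number`) are illustrative assumptions about the message format, not taken from the patent.

```python
import random

def end_items(displayed, end_info):
    """Sketch of step 509 for the first animation: remove the virtual
    articles named in the first end information, or a random selection of
    `target_number` articles if only a count is given. Returns the
    remaining articles, which continue to animate along their tracks."""
    if "labels" in end_info:
        remaining = [it for it in displayed
                     if it["label"] not in end_info["labels"]]
    else:
        ended = set(random.sample(range(len(displayed)),
                                  end_info["target_number"]))
        remaining = [it for i, it in enumerate(displayed) if i not in ended]
    return remaining
```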
For the second animation in step 505, in one possible implementation, this step 509 may include: and when second end information is received, ending the display of the preset effect in the second partial area of the first partial area.
The second interaction feedback information is sent when the fourth terminal detects the second interactive operation of its user, and the user of the fourth terminal is a user in the live broadcast room of the second user; therefore, the second end information is also triggered by a second interactive operation performed by a user in the live broadcast room of the second user, and is used to indicate that the display of the preset effect in the second local area is to be ended. The second end information may carry information of the second local area, such as its position information, so that the terminal can determine the second local area. Alternatively, the second end information may carry a target area size, so that the terminal randomly determines a local area of that size within the first local area as the second local area.
In one possible implementation, the ending the displaying of the preset effect in the second partial area of the first partial area includes: determining a remaining local area in the first local area except the second local area according to the first local area and the second local area; and displaying the animation generating the preset effect in the residual local area according to the animation data and the residual local area.
The terminal may determine the remaining local area according to the first local area and the second local area, and perform animation rendering in the remaining local area based on the animation data included in the second type of virtual article display information in step 504, so as to display an animation generating a preset effect in the remaining local area.
For example, in step 505, the third terminal may display the preset effect in a first local area of a second video playing area, when receiving a second end message for the first time, the third terminal may display the preset effect in a second local area of the first local area, when receiving a second end message for the second time, the third terminal may display the preset effect in a third local area of the second local area, and so on, until no more area displays the preset effect.
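The shrinking-region bookkeeping for the second animation can be sketched as region subtraction: each second end information removes the wiped second local area from the area still showing the preset effect. Modeling regions as sets of grid cells is a simplification for illustration; a real client would track pixel rectangles or a mask.

```python
def remaining_area(first_area, second_area):
    """Sketch of step 509 for the second animation: the area still showing
    the preset effect is the first local area minus the wiped second local
    area. Regions are modeled as sets of grid cells for simplicity."""
    return first_area - second_area

# A 4x4 smoke region; one swipe wipes the left half (columns 0-1).
smoke = {(x, y) for x in range(4) for y in range(4)}   # first local area
swipe = {(x, y) for x in range(2) for y in range(4)}   # second local area
smoke = remaining_area(smoke, swipe)
```

Repeating the subtraction for each subsequent second end information shrinks the region until no area displays the preset effect.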
Taking the animation corresponding to the virtual article as the smoke animation and the interactive operation as a sliding operation as an example: the server may send second end information each time the user of the fourth terminal performs a sliding operation, and the third terminal and the fourth terminal end the display of the smoke effect in a local area, achieving the effect of wiping away the smoke in one area per slide.
In this way, the virtual article presented by a user in the live broadcast room of the first user is displayed in the live video playing area of the second user, prompting the users in the live broadcast room of the second user to participate in the interaction and end the display of the animation, thereby meeting the requirement of interaction between users in different live broadcast rooms.
510. And when the interaction ending information sent by the server is received, the fourth terminal ends the display of the animation corresponding to the virtual article.
Step 510 is the same as step 509 and will not be described again.
It should be noted that step 507 and step 510 constitute one possible implementation of ending the display of the animation corresponding to the virtual article in the second video playing area when the interactive operation is detected. When an interactive operation is detected, it is fed back to the server in the form of interaction feedback information, so that the server can, according to the received interaction feedback information, send interaction end information to the terminals of the users in the live broadcast rooms to end the interaction process.
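The full feedback loop of steps 507 to 510 can be sketched as a toy message flow: a terminal detects an interactive operation, sends feedback to the server, and the server relays interaction end information to every terminal in the merged live broadcast rooms. Class and method names are illustrative; real terminals and servers would communicate over the network rather than by direct method calls.

```python
class Server:
    """Toy model of step 508: relay interaction feedback from the fourth
    terminal as interaction end information to every registered terminal."""

    def __init__(self):
        self.terminals = []

    def register(self, terminal):
        self.terminals.append(terminal)

    def on_feedback(self, feedback):
        end_info = {"type": "interaction_end", "cause": feedback}
        for t in self.terminals:
            t.on_end_info(end_info)

class Terminal:
    def __init__(self, name, server):
        self.name, self.server, self.animating = name, server, True
        server.register(self)

    def detect_interaction(self, op):   # step 507 (fourth terminal)
        self.server.on_feedback({"from": self.name, "op": op})

    def on_end_info(self, end_info):    # steps 509 and 510
        self.animating = False          # end the animation display
```

One click on the fourth terminal thus ends the animation on both the third and the fourth terminal, matching the symmetric behavior of steps 509 and 510.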
It should be noted that, in the embodiment of the present invention, step 509 is described with the third terminal ending the display of the animation corresponding to the virtual article as an example, and step 510 is described with the fourth terminal ending the display as an example; the numbering of step 509 and step 510 does not represent an execution sequence.
In the embodiment of the invention, after the audience user in the live broadcast room of the first user presents the virtual object, the animation corresponding to the virtual object is displayed in the second video playing area of the live broadcast interface, the audience user in the live broadcast room of the second user performs interactive operation, and animation display is ended. Certainly, the audience user in the live broadcast room of the second user may present a virtual object, display an animation corresponding to the virtual object in the first video playing area of the live broadcast interface, perform an interactive operation on the first user and/or the audience user in the live broadcast room of the first user, and end the animation display. The specific process is the same as steps 501 to 510, and is not described again.
According to the method provided by the embodiment of the present invention, the live video playing area of the first user and the live video playing area of the second user are displayed on the live broadcast interface, and a user in the live broadcast room of the first user can present a virtual article to the first user. When a virtual article presenting instruction is received, the animation corresponding to the virtual article is displayed in the live video playing area of the second user, prompting users in the live broadcast room of the second user to perform an interactive operation that triggers interaction end information; when the interaction end information is received, the display of the animation corresponding to the virtual article is ended. According to the live broadcast interaction scheme, the animation corresponding to the virtual article presented by a user in the live broadcast room of the first user is displayed in the live video playing area of the second user, prompting users in the live broadcast room of the second user to end the display of the animation through interactive operation, thereby meeting the requirement of interaction between users in different live broadcast rooms and improving the interest of the interaction.
Fig. 6 is a schematic structural diagram of a live broadcast interaction device according to an embodiment of the present invention. Referring to fig. 6, the apparatus includes:
the display module 601 is configured to display a live broadcast interface, where the live broadcast interface includes a first video playing area and a second video playing area, the first video playing area is used to play a live broadcast video of a first user, and the second video playing area is used to play a live broadcast video of a second user;
a receiving module 602, configured to receive a virtual item gifting instruction, where the virtual item gifting instruction is used to instruct a current user to gift a virtual item to the first user, and the current user is a user in a live room of the first user;
the display module 601 is further configured to display an animation corresponding to the virtual article in the second video playing area;
the display module 601 is further configured to end displaying the animation corresponding to the virtual object when receiving interaction end information, where the interaction end information is triggered by an interaction operation performed by a user in the live broadcast room of the second user.
In a possible implementation manner, the display module 601 is configured to obtain motion trajectories of a plurality of virtual articles in the second video playing area; and displaying animation of the plurality of virtual articles moving in the second video playing area according to the motion tracks of the plurality of virtual articles.
In a possible implementation manner, the display module 601 is configured to end the display of at least one of the plurality of virtual items when receiving first end information, where the first end information is triggered by a first interactive operation performed by a user in the live room of the second user.
In one possible implementation manner, the display module 601 is configured to display an animation that the remaining virtual item moves in the second video playing area according to a motion track of the remaining virtual item except the at least one virtual item in the plurality of virtual items.
In one possible implementation manner, the display module 601 is configured to obtain animation data and area information for generating preset effects, where the preset effects include a smoke effect, a flame effect, and an explosion effect, and the area information is used to indicate a first local area of the second video playing area;
displaying an animation generating the preset effect in the first local area according to the animation data and the first local area.
In a possible implementation manner, the display module 601 is configured to end the displaying of the preset effect in the second local area of the first local area when receiving second end information, where the second end information is triggered by a second interaction operation performed by a user in the live broadcast room of the second user.
In one possible implementation manner, the display module 601 is configured to determine a remaining local area within the first local area except for the second local area according to the first local area and the second local area; and displaying the animation generating the preset effect in the residual local area according to the animation data and the residual local area.
In one possible implementation, the display module 601 is configured to add a virtual article display view on the upper layer of the second video playing area; and displaying the animation corresponding to the virtual article on the virtual article display view, wherein the transparency of the virtual article display view is a target transparency, and the target transparency is smaller than the transparency of the second video playing area.
In the embodiment of the invention, the live video playing area of the first user and the live video playing area of the second user are displayed on the live interface, the user in the live broadcast room of the first user can present a virtual article to the first user, when a virtual article presenting instruction is received, the animation corresponding to the virtual article is displayed in the live video playing area of the second user, the user in the live broadcast room of the second user is prompted to carry out interactive operation to trigger interactive end information, and when the interactive end information is received, the display of the animation corresponding to the virtual article is ended. According to the live broadcast interaction scheme, the animation corresponding to the virtual article presented by the user in the live broadcast room of the first user is displayed in the live broadcast video playing area of the second user, so that the user in the live broadcast room of the second user is prompted to finish displaying the animation through interactive operation, the requirement for interaction among the users in different live broadcast rooms is met, and the interest of interaction is improved.
Fig. 7 is a schematic structural diagram of a live broadcast interaction apparatus according to an embodiment of the present invention. Referring to fig. 7, the apparatus includes:
a display module 701, configured to display a live broadcast interface, where the live broadcast interface includes a first video playing area and a second video playing area, the first video playing area is used to play a live broadcast video of a first user, and the second video playing area is used to play a live broadcast video of a second user;
a receiving module 702, configured to receive virtual article display information, where the virtual article display information is used to indicate that an animation corresponding to a virtual article is displayed in the second video playing area, and the virtual article is given to the first user by a user in the live broadcast room of the first user;
the display module 701 is further configured to display an animation corresponding to the virtual article in the second video playing area;
the display module 701 is further configured to end the displaying of the animation corresponding to the virtual object when the interactive operation is detected.
In a possible implementation manner, the display module 701 is configured to send interaction feedback information to a server when an interaction operation is detected, where the interaction feedback information is used to feed back, to the server, an interaction operation performed by a current user, where the current user is a user in a live broadcast room of the second user; and when the interaction ending information sent by the server is received, ending the display of the animation corresponding to the virtual article.
In one possible implementation, the virtual article display information includes a motion trajectory of a plurality of virtual articles within the second video playback area;
the display module 701 is configured to display an animation of the plurality of virtual objects moving in the second video playing area according to the motion trajectories of the plurality of virtual objects.
In a possible implementation manner, the display module 701 is configured to end the display of at least one of the plurality of virtual items when receiving first end information sent by the server, where the first end information is triggered by a first interactive operation performed by the current user.
In one possible implementation manner, the display module 701 is configured to display an animation that the remaining virtual item moves in the second video playing area according to a motion track of the remaining virtual item except the at least one virtual item in the plurality of virtual items.
In one possible implementation, the virtual article display information includes animation data for generating preset effects including a smoke effect, a flame effect, and an explosion effect, and region information for indicating a first partial region of the second video playback region;
the display module 701 is configured to display an animation generating the preset effect in the first local area according to the animation data and the first local area.
In one possible implementation manner, the display module 701 is configured to end the displaying of the preset effect in the second partial area of the first partial area when receiving second end information, where the second end information is triggered by a second interaction operation performed by the current user.
In one possible implementation manner, the display module 701 is configured to determine, according to the first local area and the second local area, a remaining local area within the first local area except for the second local area; and displaying the animation which generates the preset effect in the residual local area according to the virtual article display information and the residual local area.
In one possible implementation, the display module 701 is configured to add an animation display view on an upper layer of the second video playing area; and displaying the animation corresponding to the virtual article on the animation display view, wherein the transparency of the animation display view is the target transparency, and the target transparency is smaller than the transparency of the second video playing area.
In the embodiment of the invention, the live video playing area of the first user and the live video playing area of the second user are displayed on the live interface, the user in the live broadcast room of the first user can present a virtual article to the first user, when the display information of the virtual article is received, the animation corresponding to the virtual article is displayed in the live video playing area of the second user, and when the user in the live broadcast room of the second user performs interactive operation, the display of the animation corresponding to the virtual article is finished. According to the live broadcast interaction scheme, the animation corresponding to the virtual article presented by the user in the live broadcast room of the first user is displayed in the live broadcast video playing area of the second user, so that the user in the live broadcast room of the second user is prompted to finish displaying the animation through interactive operation, the requirement for interaction among the users in different live broadcast rooms is met, and the interest of interaction is improved.
Fig. 8 is a schematic structural diagram of a live broadcast interaction device according to an embodiment of the present invention. Referring to fig. 8, the apparatus includes:
a receiving module 801, configured to receive a virtual article presenting instruction sent by a third terminal when a first terminal where a first user is located and a second terminal where a second user is located are in a live broadcast stream merging state, where the third terminal is a terminal where a user is located in a live broadcast room of the first user, and the virtual article presenting instruction is used to instruct the user of the third terminal to present a virtual article to the first user;
a sending module 802, configured to send virtual article display information to the third terminal and a fourth terminal, where the virtual article display information is used to indicate that an animation corresponding to the virtual article is displayed in a second video playing area of a live broadcast interface, the second video playing area is used to play a live broadcast video of the second user, and the fourth terminal is a terminal where a user in the live broadcast room of the second user is located;
the sending module 802 is further configured to send interaction end information to the third terminal and the fourth terminal when receiving the interaction feedback information sent by the fourth terminal, where the interaction end information is used to indicate that the display of the animation corresponding to the virtual object is ended, and the interaction feedback information is used to feed back, to the server, the interaction operation performed by the user at the fourth terminal.
In one possible implementation, the virtual article display information includes a motion trajectory of a plurality of virtual articles within the second video playback area;
the sending module 802 is configured to send first end information to the third terminal and the fourth terminal when receiving first interaction feedback information sent by the fourth terminal, where the first end information is used to indicate that display of at least one of the plurality of virtual items is ended, and the first interaction feedback information is used to feed back a first interaction operation performed by a user of the fourth terminal to the server.
In one possible implementation, the virtual article display information includes animation data for generating preset effects including a smoke effect, a flame effect, and an explosion effect, and region information for indicating a first partial region of the second video playback region;
the sending module 802 is configured to send second end information to the third terminal and the fourth terminal when receiving second interaction feedback information sent by the fourth terminal, where the second end information is used to indicate that the displaying of the preset effect in the second local area of the first local area is ended, and the second interaction feedback information is used to feed back a second interaction operation performed by the user of the fourth terminal to the server.
In the embodiment of the invention, when the user in the live broadcast room of the first user presents the virtual article to the first user, the virtual article display information is sent to the users in the live broadcast rooms of the first user and the second user according to the received virtual article presenting instruction and is used for indicating that the animation corresponding to the virtual article is displayed in the live broadcast video playing area of the second user, so that the user in the live broadcast room of the second user is prompted to carry out interactive operation, and the display of the animation corresponding to the virtual article is finished. According to the live broadcast interaction scheme, the animation corresponding to the virtual article presented by the user in the live broadcast room of the first user is displayed in the live broadcast video playing area of the second user, so that the user in the live broadcast room of the second user is prompted to finish displaying the animation through interactive operation, the requirement for interaction among the users in different live broadcast rooms is met, and the interest of interaction is improved.
It should be noted that: in the live broadcast interaction device provided in the above embodiment, only the division of the functional modules is used for illustration in live broadcast interaction, and in practical applications, the function distribution can be completed by different functional modules as needed, that is, the internal structure of the device is divided into different functional modules to complete all or part of the functions described above. In addition, the live broadcast interaction device and the live broadcast interaction method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments in detail and are not described herein again.
Fig. 9 is a schematic structural diagram of a terminal 900 according to an embodiment of the present invention. The terminal 900 may be: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. Terminal 900 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and the like.
In general, terminal 900 includes: a processor 901 and a memory 902.
Processor 901 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so forth. The processor 901 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 901 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 901 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed on the display screen. In some embodiments, the processor 901 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 902 may include one or more computer-readable storage media, which may be non-transitory. The memory 902 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 902 is used to store at least one instruction for execution by processor 901 to implement the live interaction method provided by the method embodiments herein.
In some embodiments, terminal 900 can also optionally include: a peripheral interface 903 and at least one peripheral. The processor 901, memory 902, and peripheral interface 903 may be connected by buses or signal lines. Various peripheral devices may be connected to the peripheral interface 903 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 904, display screen 905, camera 906, audio circuitry 907, positioning component 908, and power supply 909.
The peripheral interface 903 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 901 and the memory 902. In some embodiments, the processor 901, memory 902, and peripheral interface 903 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 901, the memory 902 and the peripheral interface 903 may be implemented on a separate chip or circuit board, which is not limited by this embodiment.
The radio frequency circuit 904 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 904 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 904 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 904 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 904 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 904 may also include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 905 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 905 is a touch display screen, it also has the ability to capture touch signals on or over its surface. A touch signal may be input to the processor 901 as a control signal for processing. At this point, the display screen 905 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 905, disposed on the front panel of the terminal 900; in other embodiments, there may be at least two display screens 905, each disposed on a different surface of the terminal 900 or in a foldable design; in still other embodiments, the display screen 905 may be a flexible display disposed on a curved or folded surface of the terminal 900. The display screen 905 may even be arranged in a non-rectangular, irregular shape, i.e., a shaped screen. The display screen 905 may be an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode) display, or the like.
The camera assembly 906 is used to capture images or video. Optionally, the camera assembly 906 includes a front camera and a rear camera. Generally, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting, VR (Virtual Reality) shooting, or other fusion shooting functions. In some embodiments, the camera assembly 906 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
The audio circuit 907 may include a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment, convert them into electrical signals, and input them to the processor 901 for processing or to the radio frequency circuit 904 for voice communication. For stereo acquisition or noise reduction purposes, there may be multiple microphones disposed at different locations of the terminal 900. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 901 or the radio frequency circuit 904 into sound waves. The speaker may be a conventional membrane speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can convert an electrical signal into sound waves audible to humans, or into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuit 907 may also include a headphone jack.
The positioning component 908 is used to determine the current geographic location of the terminal 900 for navigation or LBS (Location Based Service). The positioning component 908 may be based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 909 is used to supply power to the various components in the terminal 900. The power supply 909 may be an alternating current source, a direct current source, a disposable battery, or a rechargeable battery. When the power supply 909 includes a rechargeable battery, the battery may support wired or wireless charging, and may also support fast-charge technology.
In some embodiments, terminal 900 can also include one or more sensors 910. The one or more sensors 910 include, but are not limited to: acceleration sensor 911, gyro sensor 912, pressure sensor 913, fingerprint sensor 914, optical sensor 915, and proximity sensor 916.
The acceleration sensor 911 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established with the terminal 900. For example, the acceleration sensor 911 may be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 901 can control the touch display 905 to display the user interface in landscape or portrait view according to the gravitational acceleration signal collected by the acceleration sensor 911. The acceleration sensor 911 may also be used to collect motion data for games or user activity.
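The landscape/portrait decision described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function name and axis convention are assumptions (gravity component along the screen's long axis dominating implies an upright device); real platforms expose this decision through their own sensor APIs.

```python
def choose_orientation(ax: float, ay: float) -> str:
    """Pick a UI orientation from gravity components reported by an
    accelerometer: ax along the screen's short axis, ay along its long axis.

    Hypothetical helper for illustration only.
    """
    # Gravity pulling mostly along the long axis means the device is upright.
    if abs(ay) >= abs(ax):
        return "portrait"
    return "landscape"

# Holding the device upright vs. tilted on its side:
assert choose_orientation(0.5, 9.3) == "portrait"
assert choose_orientation(9.3, 0.5) == "landscape"
```

The same gravity components can also feed game or motion-data collection, as the embodiment notes.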
The gyro sensor 912 may detect a body direction and a rotation angle of the terminal 900, and the gyro sensor 912 may cooperate with the acceleration sensor 911 to acquire a 3D motion of the user on the terminal 900. The processor 901 can implement the following functions according to the data collected by the gyro sensor 912: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensor 913 may be disposed on the side frame of the terminal 900 and/or beneath the touch display 905. When the pressure sensor 913 is disposed on the side frame of the terminal 900, a signal of the user holding the terminal 900 can be detected, and the processor 901 performs left/right-hand recognition or shortcut operations according to the holding signal collected by the pressure sensor 913. When the pressure sensor 913 is disposed beneath the touch display 905, the processor 901 controls operability controls on the UI according to the user's pressure operations on the touch display 905. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 914 is used to collect the user's fingerprint, and the user's identity is recognized from the collected fingerprint either by the processor 901 or by the fingerprint sensor 914 itself. When the user's identity is recognized as trusted, the processor 901 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 914 may be disposed on the front, back, or side of the terminal 900. When a physical button or vendor logo is provided on the terminal 900, the fingerprint sensor 914 may be integrated with the physical button or vendor logo.
The optical sensor 915 is used to collect the ambient light intensity. In one embodiment, the processor 901 may control the display brightness of the touch display 905 based on the ambient light intensity collected by the optical sensor 915: when the ambient light intensity is high, the display brightness of the touch display 905 is increased; when the ambient light intensity is low, the display brightness is decreased. In another embodiment, the processor 901 may also dynamically adjust the shooting parameters of the camera assembly 906 according to the ambient light intensity collected by the optical sensor 915.
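The ambient-light-to-brightness mapping above can be sketched as a simple clamped linear function. The function name and the threshold values are made-up illustration constants, not platform parameters:

```python
def display_brightness(ambient_lux: float,
                       min_level: float = 0.1,
                       max_level: float = 1.0,
                       full_bright_lux: float = 1000.0) -> float:
    """Map ambient light intensity to a display brightness level in
    [min_level, max_level]. Illustration only; thresholds are assumed."""
    # Clamp the ambient reading to [0, full_bright_lux], then scale linearly.
    ratio = min(max(ambient_lux / full_bright_lux, 0.0), 1.0)
    return min_level + (max_level - min_level) * ratio

assert abs(display_brightness(0.0) - 0.1) < 1e-9      # dark room: dimmest
assert abs(display_brightness(1000.0) - 1.0) < 1e-9   # bright light: brightest
assert 0.1 < display_brightness(500.0) < 1.0          # in between: scaled
```

A real implementation would typically smooth readings over time to avoid flicker; the linear mapping only illustrates the monotonic relationship the embodiment describes.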
The proximity sensor 916, also known as a distance sensor, is typically disposed on the front panel of the terminal 900. The proximity sensor 916 is used to measure the distance between the user and the front face of the terminal 900. In one embodiment, when the proximity sensor 916 detects that this distance is gradually decreasing, the processor 901 controls the touch display 905 to switch from the bright screen state to the dark screen state; when the proximity sensor 916 detects that the distance is gradually increasing, the processor 901 controls the touch display 905 to switch from the dark screen state to the bright screen state.
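The proximity-driven screen-state switching can be sketched from two successive distance readings. This is an assumed, simplified model (no debouncing or hysteresis, which real drivers would add), with a hypothetical function name:

```python
def screen_state(prev_distance_cm: float,
                 new_distance_cm: float,
                 current_state: str) -> str:
    """Decide the screen state from successive proximity readings:
    approaching face -> dark screen, moving away -> bright screen.
    Illustration only; real sensors debounce and threshold readings."""
    if new_distance_cm < prev_distance_cm:
        return "dark"    # user is bringing the phone toward the face
    if new_distance_cm > prev_distance_cm:
        return "bright"  # user is moving the phone away
    return current_state  # no change in distance: keep the current state

assert screen_state(10.0, 3.0, "bright") == "dark"
assert screen_state(3.0, 10.0, "dark") == "bright"
assert screen_state(5.0, 5.0, "bright") == "bright"
```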
Those skilled in the art will appreciate that the configuration shown in fig. 9 does not constitute a limitation of terminal 900, and may include more or fewer components than those shown, or may combine certain components, or may employ a different arrangement of components.
Fig. 10 is a schematic structural diagram of a server 1000 according to an embodiment of the present invention. The server 1000 may vary considerably in configuration or performance and may include one or more processors (CPUs) 1001 and one or more memories 1002, where the memory 1002 stores at least one instruction that is loaded and executed by the processor 1001 to implement the live broadcast interaction method provided by the foregoing method embodiments. Of course, the server may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface for input and output, and may include other components for implementing device functions, which are not described here again.
In an exemplary embodiment, a computer-readable storage medium, such as a memory, storing a computer program is also provided, which when executed by a processor implements the live interaction method in the above embodiments. For example, the computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present invention and should not be taken as limiting the invention, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (27)

1. A live interaction method, comprising:
displaying a live broadcast interface, wherein the live broadcast interface comprises a first video playing area and a second video playing area, the first video playing area is used for playing a live broadcast video of a first user, and the second video playing area is used for playing a live broadcast video of a second user;
receiving a virtual item gifting instruction, wherein the virtual item gifting instruction is used for instructing a current user to gift a virtual item to the first user, and the current user is a user in a live broadcast room of the first user;
acquiring animation data and area information for generating a preset effect, wherein the area information is used for indicating a first local area of the second video playing area, and displaying the animation generating the preset effect in the first local area according to the animation data and the first local area;
when second end information is received, ending the display of the preset effect in a second local area of the first local area, wherein the second end information is triggered by second interaction operation performed by a user in a live broadcast room of a second user;
and when first end information is received, ending the display of at least one virtual article in the plurality of virtual articles, wherein the first end information is triggered by a first interactive operation performed by a user in the live broadcast room of the second user.
2. The method of claim 1, further comprising:
acquiring the motion tracks of a plurality of virtual articles in the second video playing area;
and displaying animation of the plurality of virtual articles moving in the second video playing area according to the motion tracks of the plurality of virtual articles.
3. The method of claim 2, wherein said ending the display of at least one of the plurality of virtual items comprises:
and displaying animation of the residual virtual articles moving in the second video playing area according to the movement track of the residual virtual articles except the at least one virtual article in the plurality of virtual articles.
4. The method of claim 1, wherein the preset effects comprise smoke effects, flame effects, and explosion effects.
5. The method of claim 1, wherein said ending the display of the preset effect in the second partial region of the first partial region comprises:
determining a remaining local area except the second local area in the first local area according to the first local area and the second local area;
and displaying the animation generating the preset effect in the residual local area according to the animation data and the residual local area.
6. The method of claim 1, further comprising:
adding a virtual article display view on the upper layer of the second video playing area;
and displaying the animation corresponding to the virtual article on the virtual article display view, wherein the transparency of the virtual article display view is a target transparency, and the target transparency is smaller than the transparency of the second video playing area.
7. A live interaction method, comprising:
displaying a live broadcast interface, wherein the live broadcast interface comprises a first video playing area and a second video playing area, the first video playing area is used for playing a live broadcast video of a first user, and the second video playing area is used for playing a live broadcast video of a second user;
receiving virtual article display information, wherein the virtual article display information is used for indicating that an animation corresponding to a virtual article is displayed in the second video playing area, the virtual article is presented to the first user by a user in a live broadcast room of the first user, the virtual article display information comprises animation data used for generating a preset effect and area information, and the area information is used for indicating a first local area of the second video playing area;
displaying an animation generating the preset effect in the first local area according to the animation data and the first local area;
when the interactive operation is detected, sending interactive feedback information to a server, wherein the interactive feedback information is used for feeding back the interactive operation performed by a current user to the server, and the current user is a user in a live broadcast room of the second user;
when second end information is received, ending the display of the preset effect in a second local area of the first local area, wherein the second end information is triggered by second interaction operation performed by the current user;
and when first end information sent by the server is received, ending the display of at least one virtual article in the plurality of virtual articles, wherein the first end information is triggered by a first interactive operation performed by the current user.
8. The method of claim 7, wherein the virtual item display information comprises a motion trajectory of a plurality of virtual items within the second video playback area;
the method further comprises the following steps:
and displaying animation of the plurality of virtual articles moving in the second video playing area according to the motion tracks of the plurality of virtual articles.
9. The method of claim 8, further comprising:
and displaying animation of the residual virtual articles moving in the second video playing area according to the movement track of the residual virtual articles except the at least one virtual article in the plurality of virtual articles.
10. The method of claim 7, wherein the preset effects include smoke effects, flame effects, and explosion effects.
11. The method of claim 7, wherein said ending the display of the preset effect in the second partial region of the first partial region comprises:
determining a remaining local area except the second local area in the first local area according to the first local area and the second local area;
and displaying the animation which generates the preset effect in the residual local area according to the virtual article display information and the residual local area.
12. The method of claim 7, further comprising:
adding an animation display view on the upper layer of the second video playing area;
and displaying the animation corresponding to the virtual article on the animation display view, wherein the transparency of the animation display view is a target transparency, and the target transparency is smaller than the transparency of the second video playing area.
13. A live interaction method, comprising:
when a first terminal where a first user is located and a second terminal where a second user is located are in a live broadcast stream merging state, receiving a virtual article presenting instruction sent by a third terminal, wherein the third terminal is a terminal where the user is located in a live broadcast room of the first user, and the virtual article presenting instruction is used for indicating the user of the third terminal to present a virtual article to the first user;
sending virtual article display information to the third terminal and a fourth terminal, wherein the virtual article display information is used for indicating that an animation corresponding to a virtual article is displayed in a second video playing area of a live interface, the second video playing area is used for playing a live video of a second user, the fourth terminal is a terminal where the user is located in a live room of the second user, the virtual article display information comprises animation data used for generating a preset effect and area information, and the area information is used for indicating a first local area of the second video playing area;
when second interaction feedback information sent by the fourth terminal is received, sending second end information to the third terminal and the fourth terminal, wherein the second end information is used for indicating to end displaying of the preset effect in a second local area of the first local area, the second interaction feedback information is used for feeding back a second interaction operation performed by a user of the fourth terminal to a server, the interaction end information is used for indicating to end displaying of an animation corresponding to the virtual article, and the interaction feedback information is used for feeding back the interaction operation performed by the user of the fourth terminal to the server;
when first interaction feedback information sent by the fourth terminal is received, first end information is sent to the third terminal and the fourth terminal, the first end information is used for indicating that the display of at least one virtual article in the plurality of virtual articles is ended, and the first interaction feedback information is used for feeding back first interaction operation performed by a user of the fourth terminal to the server.
14. The method of claim 13, wherein the virtual item display information comprises a motion trajectory of a plurality of virtual items within the second video playback area.
15. The method of claim 13, wherein the preset effects include smoke effects, flame effects, and explosion effects.
16. A live interaction device, the device comprising:
the display module is used for displaying a live broadcast interface, the live broadcast interface comprises a first video playing area and a second video playing area, the first video playing area is used for playing a live broadcast video of a first user, and the second video playing area is used for playing a live broadcast video of a second user;
a receiving module, configured to receive a virtual article presenting instruction, where the virtual article presenting instruction is used to instruct a current user to present a virtual article to the first user, and the current user is a user in a live broadcast room of the first user;
the display module is used for acquiring animation data and area information used for generating a preset effect, and the area information is used for indicating a first local area of the second video playing area;
the display module is further used for displaying the animation generating the preset effect in the first local area according to the animation data and the first local area;
the display module is further used for finishing displaying the animation corresponding to the virtual article when interaction finishing information is received, and the interaction finishing information is triggered by interaction operation performed by a user in a live broadcast room of the second user; and when first end information is received, ending the display of at least one virtual article in the plurality of virtual articles, wherein the first end information is triggered by a first interactive operation performed by a user in the live broadcast room of the second user.
17. The apparatus according to claim 16, wherein the display module is configured to obtain motion trajectories of a plurality of virtual objects in the second video playing area; and displaying animation of the plurality of virtual articles moving in the second video playing area according to the motion tracks of the plurality of virtual articles.
18. The apparatus of claim 16, wherein the preset effects comprise smoke effects, flame effects, and explosion effects.
19. A live interaction device, the device comprising:
the display module is used for displaying a live broadcast interface, the live broadcast interface comprises a first video playing area and a second video playing area, the first video playing area is used for playing a live broadcast video of a first user, and the second video playing area is used for playing a live broadcast video of a second user;
a receiving module, configured to receive virtual article display information, where the virtual article display information is used to indicate that an animation corresponding to a virtual article is displayed in the second video playing area, the virtual article is presented to the first user by a user in a live broadcast room of the first user, the virtual article display information includes animation data used to generate a preset effect and area information, and the area information is used to indicate a first local area of the second video playing area;
the display module is further used for displaying the animation generating the preset effect in the first local area according to the animation data and the first local area;
the display module is used for finishing the display of the preset effect in a second local area of the first local area when second finishing information is received, and the second finishing information is triggered by second interactive operation performed by a current user;
the display module is further used for sending interaction feedback information to a server when interaction operation is detected, wherein the interaction feedback information is used for feeding back the interaction operation performed by a current user to the server, and the current user is a user in a live broadcast room of the second user; and when first end information sent by the server is received, ending the display of at least one virtual article in the plurality of virtual articles, wherein the first end information is triggered by a first interactive operation performed by the current user.
20. The apparatus of claim 19, wherein the virtual item display information comprises a motion trajectory of a plurality of virtual items within the second video playback area;
the display module is used for displaying the animation of the plurality of virtual articles moving in the second video playing area according to the motion tracks of the plurality of virtual articles.
21. The apparatus of claim 19, wherein the preset effects comprise smoke effects, flame effects, and explosion effects.
22. A live interaction device, the device comprising:
the receiving module is used for receiving a virtual article presenting instruction sent by a third terminal when a first terminal where a first user is located and a second terminal where a second user is located are in a live broadcast stream merging state, wherein the third terminal is a terminal where the user is located in a live broadcast room of the first user, and the virtual article presenting instruction is used for indicating the user of the third terminal to present a virtual article to the first user;
a sending module, configured to send virtual article display information to the third terminal and a fourth terminal, where the virtual article display information is used to indicate that an animation corresponding to the virtual article is displayed in a second video playing area of a live interface, the second video playing area is used to play a live video of the second user, the fourth terminal is a terminal where the user is located in a live room of the second user, the virtual article display information includes animation data used to generate a preset effect and area information, and the area information is used to indicate a first local area of the second video playing area;
the sending module is further configured to send second end information to the third terminal and the fourth terminal when second interaction feedback information sent by the fourth terminal is received, where the second end information is used to indicate that display of the preset effect in a second local area of the first local area is ended, the second interaction feedback information is used to feed back, to a server, a second interaction operation performed by a user of the fourth terminal, the interaction end information is used to indicate that display of the animation corresponding to the virtual article is ended, and the interaction feedback information is used to feed back, to the server, an interaction operation performed by the user of the fourth terminal;
the sending module is further configured to send first end information to the third terminal and the fourth terminal when first interaction feedback information sent by the fourth terminal is received, where the first end information is used to indicate that display of at least one of the plurality of virtual articles is ended, and the first interaction feedback information is used to feed back a first interaction operation performed by a user of the fourth terminal to the server.
23. The apparatus of claim 22, wherein the virtual item display information comprises a motion trajectory of a plurality of virtual items within the second video playback area.
24. The apparatus of claim 22, wherein the predetermined effects include smoke effects, flame effects, and explosion effects.
25. A terminal comprising a processor and a memory; the memory is used for storing a computer program; the processor, configured to execute the computer program stored in the memory, implements the method steps of any of claims 1-12.
26. A server, comprising a processor and a memory; the memory is used for storing a computer program; the processor, configured to execute the computer program stored on the memory, implements the method steps of any of claims 13-15.
27. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, which computer program, when being executed by a processor, carries out the method steps of any one of claims 1 to 15.
CN201910020548.5A 2019-01-09 2019-01-09 Live broadcast interaction method and device Active CN109729411B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910020548.5A CN109729411B (en) 2019-01-09 2019-01-09 Live broadcast interaction method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910020548.5A CN109729411B (en) 2019-01-09 2019-01-09 Live broadcast interaction method and device

Publications (2)

Publication Number Publication Date
CN109729411A CN109729411A (en) 2019-05-07
CN109729411B true CN109729411B (en) 2021-07-09

Family

ID=66298904

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910020548.5A Active CN109729411B (en) 2019-01-09 2019-01-09 Live broadcast interaction method and device

Country Status (1)

Country Link
CN (1) CN109729411B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110149332B (en) * 2019-05-22 2022-04-22 北京达佳互联信息技术有限公司 Live broadcast method, device, equipment and storage medium
CN110213608B (en) * 2019-06-11 2021-09-28 广州酷狗计算机科技有限公司 Method, device, equipment and readable storage medium for displaying virtual gift
CN110337023B (en) * 2019-07-02 2022-05-13 游艺星际(北京)科技有限公司 Animation display method, device, terminal and storage medium
CN112995774A (en) * 2019-12-13 2021-06-18 阿里巴巴集团控股有限公司 Video playing method, device, terminal and storage medium
CN111050189B (en) * 2019-12-31 2022-06-14 成都酷狗创业孵化器管理有限公司 Live broadcast method, device, equipment and storage medium
CN111246236B (en) * 2020-01-22 2021-08-13 北京达佳互联信息技术有限公司 Interactive data playing method, device, terminal, server and storage medium
CN111711831B (en) * 2020-06-28 2021-08-24 腾讯科技(深圳)有限公司 Data processing method and device based on interactive behavior and storage medium
CN112000252B (en) * 2020-08-14 2022-07-22 广州市百果园信息技术有限公司 Virtual article sending and displaying method, device, equipment and storage medium
CN112333460B (en) * 2020-11-02 2021-10-26 腾讯科技(深圳)有限公司 Live broadcast management method, computer equipment and readable storage medium
CN113573117A (en) * 2021-07-15 2021-10-29 广州方硅信息技术有限公司 Video live broadcast method and device and computer equipment
CN114666672B (en) * 2022-03-28 2023-08-18 广州方硅信息技术有限公司 Live fight interaction method and system initiated by audience and computer equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106028166A (en) * 2016-06-24 2016-10-12 北京小米移动软件有限公司 Method and device for switching live broadcasting rooms in live broadcasting process
CN106060597A (en) * 2016-06-30 2016-10-26 广州华多网络科技有限公司 Method and system for carrying out anchor competition
CN106210757A (en) * 2016-07-28 2016-12-07 北京小米移动软件有限公司 Live broadcasting method, live broadcast device and live broadcast system
CN106685971A (en) * 2016-12-30 2017-05-17 广州华多网络科技有限公司 Method and device for handling microphone connection live broadcast on clients
CN107911724A (en) * 2017-11-21 2018-04-13 广州华多网络科技有限公司 Living broadcast interactive method, apparatus and system
CN108156507A (en) * 2017-12-27 2018-06-12 广州酷狗计算机科技有限公司 Virtual objects presentation method, device and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10762675B2 (en) * 2016-12-12 2020-09-01 Facebook, Inc. Systems and methods for interactive broadcasting

Similar Documents

Publication Publication Date Title
CN109729411B (en) Live broadcast interaction method and device
CN110267067B (en) Live broadcast room recommendation method, device, equipment and storage medium
CN109246466B (en) Video playing method and device and electronic equipment
CN109618212B (en) Information display method, device, terminal and storage medium
CN110278464B (en) Method and device for displaying list
CN111918090B (en) Live broadcast picture display method and device, terminal and storage medium
CN108737897B (en) Video playing method, device, equipment and storage medium
CN110213608B (en) Method, device, equipment and readable storage medium for displaying virtual gift
CN111050189B (en) Live broadcast method, device, equipment and storage medium
CN110139116B (en) Live broadcast room switching method and device and storage medium
CN111355974A (en) Method, apparatus, system, device and storage medium for virtual gift giving processing
CN112118477B (en) Virtual gift display method, device, equipment and storage medium
CN110830811A (en) Live broadcast interaction method, device, system, terminal and storage medium
CN109275013B (en) Method, device and equipment for displaying virtual article and storage medium
CN112181572A (en) Interactive special effect display method and device, terminal and storage medium
CN110418152B (en) Method and device for carrying out live broadcast prompt
CN110533585B (en) Image face changing method, device, system, equipment and storage medium
WO2023000677A1 (en) Content item display method and apparatus
CN107896337B (en) Information popularization method and device and storage medium
CN114116053B (en) Resource display method, device, computer equipment and medium
CN113411680A (en) Multimedia resource playing method, device, terminal and storage medium
CN111901658A (en) Comment information display method and device, terminal and storage medium
CN111669640B (en) Virtual article transfer special effect display method, device, terminal and storage medium
CN113318442A (en) Live interface display method, data uploading method and data downloading method
CN111628925A (en) Song interaction method and device, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant