CN113676747B - Co-streaming live battle interaction method, system, device, and computer equipment - Google Patents


Info

Publication number
CN113676747B
CN113676747B
Authority
CN
China
Prior art keywords
live
anchor
animation
combat
client
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111136353.0A
Other languages
Chinese (zh)
Other versions
CN113676747A (en)
Inventor
雷兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Cubesili Information Technology Co Ltd
Original Assignee
Guangzhou Cubesili Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Cubesili Information Technology Co Ltd filed Critical Guangzhou Cubesili Information Technology Co Ltd
Priority to CN202111136353.0A
Publication of CN113676747A
Application granted
Publication of CN113676747B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21 Server components or server architectures
    • H04N21/218 Source of audio or video content, e.g. local disk arrays
    • H04N21/2187 Live feed
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/443 OS processes, e.g. booting an STB, implementing a Java virtual machine in an STB or power management in an STB
    • H04N21/4437 Implementing a Virtual Machine [VM]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788 Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 Reducing energy consumption in communication networks
    • Y02D30/70 Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The present application relates to the field of live network streaming and provides a co-streaming live battle interaction method, system, device, and computer equipment. The method comprises the following steps: the server establishes a co-streaming session connection between the anchor clients corresponding to the anchor identifiers; a client that has joined the live room acquires audio and video stream data and outputs it in the live room; a client that has joined the live room, in response to an animation display instruction, acquires video stream data mixed with animation data and, according to that data, displays in a video window of the live room an animation of a virtual object performing several actions; the server, in response to an action-recognition-success instruction, updates the battle score corresponding to the target anchor identifier; and the server, in response to a co-streaming live battle end instruction, outputs the co-streaming live battle result in the live room. Compared with the prior art, the method and device can increase anchor revenue, bring in traffic, and make co-streaming live battles more entertaining.

Description

Co-streaming live battle interaction method, system, device, and computer equipment
Technical Field
The embodiments of the present application relate to the field of live network streaming, and in particular to a co-streaming live battle interaction method, system, device, and computer equipment.
Background
With the progress of network communication technology, live network streaming has become an emerging mode of online interaction, and its real-time and interactive nature has made it popular with a growing audience.
At present, during live streaming, anchors can run various kinds of battle interaction plays by establishing a co-streaming session, so that viewers can watch the live content of different anchors at the same time and the following of the anchors taking part in the battle plays can grow.
However, because an anchor can earn battle score in only one way in existing battle interaction plays, these plays neither effectively raise anchor revenue nor bring in traffic; this dampens the anchors' enthusiasm and makes the plays hard to keep interesting.
Disclosure of Invention
The embodiments of the present application provide a co-streaming live battle interaction method, system, device, and computer equipment, which can solve the technical problem that an anchor has only a single way of earning battle score, making battle interaction plays hard to keep interesting. The technical solution is as follows:
In a first aspect, an embodiment of the present application provides a co-streaming live battle interaction method, comprising the following steps:
the server, in response to a co-streaming live battle start instruction, parses the instruction to obtain anchor identifiers, and establishes a co-streaming session connection between the anchor clients corresponding to the anchor identifiers;
a client that has joined a live room acquires audio and video stream data and outputs the audio and video stream data in the live room; the live rooms include the live room created by the anchor corresponding to each anchor identifier, and the audio and video stream data include the audio and video stream data corresponding to each anchor identifier;
a client that has joined the live room, in response to an animation display instruction, acquires video stream data mixed with animation data, and according to this video stream data displays, in a video window of the live room, an animation of a virtual object performing several actions;
the server, in response to an action-recognition-success instruction, updates the battle score corresponding to the target anchor identifier; the action-recognition-success instruction is sent when the target anchor is recognized, in the video stream data corresponding to the target anchor identifier, as having imitated the several actions performed by the virtual object;
the server, in response to a co-streaming live battle end instruction, acquires the battle score corresponding to each anchor identifier, obtains the co-streaming live battle result from those scores, and outputs the co-streaming live battle result in the live room.
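The final step, deriving a battle result from the per-anchor battle scores, can be sketched as follows. This is an illustrative sketch only, not the patent's implementation; the function and field names are assumptions.

```python
def battle_result(scores: dict[str, int]) -> dict:
    """Given {anchor_id: battle_score}, derive the co-streaming battle outcome."""
    top = max(scores.values())
    # Every anchor with the top score is a winner; multiple winners mean a draw.
    winners = [anchor for anchor, score in scores.items() if score == top]
    return {
        "scores": scores,
        "winners": winners,
        "draw": len(winners) > 1,
    }

result = battle_result({"anchor_a": 12, "anchor_b": 9})
```

The result dictionary stands in for whatever battle-result message the server would output in the live room.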
In a second aspect, an embodiment of the present application provides a co-streaming live battle interaction system, comprising a server and clients, where the clients include anchor clients and audience clients;
the server, in response to a co-streaming live battle start instruction, parses the instruction to obtain anchor identifiers, and establishes a co-streaming session connection between the anchor clients corresponding to the anchor identifiers;
a client that has joined a live room acquires audio and video stream data and outputs the audio and video stream data in the live room; the live rooms include the live room created by the anchor corresponding to each anchor identifier, and the audio and video stream data include the audio and video stream data corresponding to each anchor identifier;
a client that has joined the live room, in response to an animation display instruction, acquires video stream data mixed with animation data, and according to this video stream data displays, in a video window of the live room, an animation of a virtual object performing several actions;
the server, in response to an action-recognition-success instruction, updates the battle score corresponding to the target anchor identifier; the action-recognition-success instruction is sent when the target anchor is recognized, in the video stream data corresponding to the target anchor identifier, as having imitated the several actions performed by the virtual object;
the server, in response to a co-streaming live battle end instruction, acquires the battle score corresponding to each anchor identifier, obtains the co-streaming live battle result from those scores, and outputs the co-streaming live battle result in the live room.
In a third aspect, an embodiment of the present application provides a co-streaming live battle interaction device, comprising:
a co-streaming session establishment unit, configured to respond to the co-streaming live battle start instruction, parse the instruction to obtain the anchor identifiers, and establish a co-streaming session connection between the anchor clients corresponding to the anchor identifiers;
a first output unit, configured for a client that has joined the live room to acquire audio and video stream data and output the audio and video stream data in the live room; the live rooms include the live room created by the anchor corresponding to each anchor identifier, and the audio and video stream data include the audio and video stream data corresponding to each anchor identifier;
a first response unit, configured for a client that has joined the live room to respond to the animation display instruction, acquire video stream data mixed with animation data, and display, according to that video stream data, in a video window of the live room, an animation of a virtual object performing several actions;
a second response unit, configured for the server to respond to the action-recognition-success instruction and update the battle score corresponding to the target anchor identifier; the action-recognition-success instruction is sent when the target anchor is recognized, in the video stream data corresponding to the target anchor identifier, as having imitated the several actions performed by the virtual object;
a second output unit, configured for the server to respond to the co-streaming live battle end instruction, acquire the battle scores corresponding to the anchor identifiers, obtain the co-streaming live battle result from those scores, and output the co-streaming live battle result in the live room.
In a fourth aspect, an embodiment of the present application provides a computer device comprising a processor, a memory, and a computer program stored in the memory and executable on the processor; the processor implements the steps of the method of the first aspect when executing the computer program.
In a fifth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method of the first aspect.
In the embodiments of the present application, the server, in response to a co-streaming live battle start instruction, parses the instruction to obtain anchor identifiers and establishes a co-streaming session connection between the anchor clients corresponding to the anchor identifiers; a client that has joined a live room acquires audio and video stream data and outputs it in the live room, where the live rooms include the live room created by the anchor corresponding to each anchor identifier and the audio and video stream data include the audio and video stream data corresponding to each anchor identifier; a client that has joined the live room, in response to an animation display instruction, acquires video stream data mixed with animation data and, according to that data, displays in a video window of the live room an animation of a virtual object performing several actions; the server, in response to an action-recognition-success instruction, updates the battle score corresponding to the target anchor identifier, the instruction being sent when the target anchor is recognized, in the video stream data corresponding to the target anchor identifier, as having imitated the actions performed by the virtual object; and the server, in response to a co-streaming live battle end instruction, acquires the battle score corresponding to each anchor identifier, obtains the co-streaming live battle result from those scores, and outputs it in the live room.
In the co-streaming live battle interaction, animation data are mixed into the video stream data so that an animation of a virtual object performing several actions is displayed in a video window of the live room; when the target anchor is recognized, in the video stream data corresponding to the target anchor identifier, as having imitated the actions performed by the virtual object, the battle score corresponding to the target anchor identifier can be increased. This makes the co-streaming live battle more entertaining, raises anchor revenue and traffic to a certain extent, and improves both the anchors' enthusiasm and the interactivity of co-streaming live battles.
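As a minimal sketch of the score-update path described above, the server receiving an action-recognition-success instruction and crediting the target anchor identifier, consider the following. The class, method names, and point values are hypothetical, for illustration only.

```python
class BattleScoreboard:
    """Holds per-anchor battle scores for one co-streaming live battle."""

    def __init__(self, anchor_ids):
        self.scores = {anchor_id: 0 for anchor_id in anchor_ids}

    def on_action_recognized(self, target_anchor_id, points=1):
        # Called when an action-recognition-success instruction arrives,
        # i.e. the target anchor was recognized in their own video stream
        # as having imitated the virtual object's actions.
        self.scores[target_anchor_id] += points
        return self.scores[target_anchor_id]

board = BattleScoreboard(["anchor_a", "anchor_b"])
board.on_action_recognized("anchor_a", points=2)
```

In a real system the updated score would then be pushed to the clients, e.g. to drive the battle score display control mentioned in fig. 11.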
For a better understanding and implementation, the technical solutions of the present application are described in detail below with reference to the accompanying drawings.
Drawings
Fig. 1 is a schematic diagram of an application scenario of the co-streaming live battle interaction method provided in an embodiment of the present application;
fig. 2 is a flow chart of the co-streaming live battle interaction method according to a first embodiment of the present application;
fig. 3 is a schematic diagram of the display of play components in a live room interface provided by an embodiment of the present application;
fig. 4 is a schematic diagram of the display of the live room interface after the co-streaming live battle interaction is started, according to an embodiment of the present application;
fig. 5 is a schematic diagram of the display of an action-imitation virtual gift provided in an embodiment of the present application;
fig. 6 is a flow chart of the co-streaming live battle interaction method according to a second embodiment of the present application;
fig. 7 is another flow chart of the co-streaming live battle interaction method according to the second embodiment of the present application;
fig. 8 is a schematic diagram of the display of video stream data mixed with animation data according to an embodiment of the present application;
fig. 9 is a schematic diagram of the display of video stream data mixed with animation data and battle score update prompt data according to an embodiment of the present application;
fig. 10 is a flow chart of the co-streaming live battle interaction method according to a third embodiment of the present application;
fig. 11 is a schematic diagram of the display of a battle score display control provided in an embodiment of the present application;
fig. 12 is another flow chart of the co-streaming live battle interaction method according to the third embodiment of the present application;
fig. 13 is a timing chart of the co-streaming live battle interaction method according to an embodiment of the present application;
fig. 14 is a schematic structural diagram of the co-streaming live battle interaction system according to a fourth embodiment of the present application;
fig. 15 is a schematic structural diagram of the co-streaming live battle interaction device according to a fifth embodiment of the present application;
fig. 16 is a schematic structural diagram of a computer device according to a sixth embodiment of the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present application as detailed in the accompanying claims.
The terminology used in the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the present application. As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, a first message may also be referred to as a second message, and similarly, a second message may also be referred to as a first message, without departing from the scope of the present application. The word "if" as used herein may, depending on the context, be interpreted as "when", "upon", or "in response to determining".
As will be appreciated by those skilled in the art, the terms "client", "terminal", and "terminal device" as used herein include both devices that contain only a wireless signal receiver with no transmitting capability and devices containing receiving and transmitting hardware capable of two-way communication over a two-way communication link. Such a device may include: a cellular or other communication device, such as a personal computer or tablet, with a single-line display, a multi-line display, or no multi-line display; a PCS (Personal Communications Service) terminal that may combine voice, data processing, facsimile, and/or data communication capabilities; a PDA (Personal Digital Assistant) that may include a radio-frequency receiver, pager, internet/intranet access, web browser, notepad, calendar, and/or GPS (Global Positioning System) receiver; and a conventional laptop and/or palmtop computer or other appliance that has and/or includes a radio-frequency receiver. As used herein, a "client" or "terminal device" may be portable, transportable, installed in a vehicle (aeronautical, maritime, and/or land-based), or adapted and/or configured to operate locally and/or in a distributed fashion at any other location on earth and/or in space. A "client" or "terminal device" may also be a communication terminal, an internet terminal, or a music/video playing terminal, for example a PDA, a MID (Mobile Internet Device), and/or a mobile phone with a music/video playing function, or a device such as a smart TV or a set-top box.
The hardware referred to by the names "server", "client", "service node", etc. in this application is essentially computer equipment with the capabilities of a personal computer: a hardware device with the components required by the von Neumann architecture, such as a central processing unit (including an arithmetic unit and a controller), memory, input devices, and output devices. A computer program is stored in the memory; the central processing unit loads a program from external storage into memory, executes its instructions, and interacts with the input and output devices to perform a specific function.
It should be noted that the concept of "server" as referred to in this application is equally applicable to the case of a server farm. The servers should be logically partitioned, physically separate from each other but interface-callable, or integrated into a physical computer or group of computers, according to network deployment principles understood by those skilled in the art. Those skilled in the art will appreciate this variation and should not be construed as limiting the implementation of the network deployment approach of the present application.
Referring to fig. 1, fig. 1 is a schematic diagram of an application scenario of the co-streaming live battle interaction method provided in an embodiment of the present application. The application scenario includes an anchor client 101, a server 102, and an audience client 103, where the anchor client 101 and the audience client 103 interact through the server 102.
The anchor client 101 refers to the end that sends out the live video stream, and is generally the client used by the anchor (i.e., the streaming user) in a video live broadcast.
The viewer client 103 refers to the end that receives and watches the live video stream, and is generally the client used by a viewer (i.e., an audience user) in a video live broadcast.
The hardware underlying the anchor client 101 and the audience client 103 is essentially computer equipment; as shown in fig. 1, it may in particular be a smart phone, a smart interactive tablet, a personal computer, and the like. Both the anchor client 101 and the audience client 103 may access the internet via known network access methods to establish a data communication link with the server 102.
The server 102 acts as a business server and may be responsible for further interfacing with related streaming servers, gift servers, and other servers providing related support, etc., to form a logically associated service cluster for providing services to related end devices, such as the anchor client 101 and audience client 103 shown in fig. 1.
In the embodiments of the present application, the anchor client 101 and the viewer client 103 may join the same live room (i.e., live channel). The live room is a chat room implemented by means of internet technology and generally has audio and video playback control functions. The anchor streams in the live room through the anchor client 101, and a viewer on a viewer client 103 can log into the server 102 to watch the live stream in that room.
In a live room, the anchor and viewers can interact through well-known online means such as voice, video, and text. Generally, the anchor performs for the audience in the form of an audio and video stream, and economic transactions can occur during the interaction. Of course, the application of the live room is not limited to online entertainment; it can be extended to other related scenarios, such as video conferencing, product recommendation and sales, and any other scenario requiring similar interaction.
Specifically, the process of watching a live stream is as follows: a viewer can tap into the live application installed on the viewer client 103 and choose to enter any live room, triggering the viewer client 103 to load the live room interface. The interface includes several interaction components, such as a video component, a virtual gift bar component, and a public screen component. By loading these components, the viewer can watch the live stream and take part in various online interactions, including but not limited to giving virtual gifts and speaking on the public screen.
In the embodiments of the present application, the server 102 may also establish a co-streaming session connection between anchor clients 101 for co-streaming live broadcast. The co-streaming session connection can be established in random matching mode or friend mode.
In random matching mode, the server 102 establishes a co-streaming session connection among several anchor clients 101 that sent co-streaming live requests, according to certain co-streaming start rules. After the co-streaming session connection is established, the clients that have joined the live room can acquire the audio and video stream data corresponding to the several anchor identifiers and output them in the live room, so that users (both viewers and anchors) entering the live room can watch the real-time streams of several anchors simultaneously.
In friend mode, an anchor can designate several friend anchors to co-stream with. After the server 102 receives co-streaming confirmation from the anchor clients 101 corresponding to those friend anchors, it establishes the co-streaming session connection between the anchor client 101 corresponding to the initiating anchor identifier and the anchor clients 101 corresponding to the friend anchor identifiers. Likewise, once the connection is established, users (both viewers and anchors) entering the live room can watch the real-time streams of the several anchors.
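The two session-establishment modes just described can be sketched as follows. This is a minimal illustration under stated assumptions: the function names, the party size, and the confirmation bookkeeping are all hypothetical, not taken from the patent.

```python
import random

def match_random(waiting_anchor_ids, party_size=2):
    """Random matching mode: randomly group anchors that requested co-streaming."""
    if len(waiting_anchor_ids) < party_size:
        return None  # not enough anchors waiting yet
    return random.sample(waiting_anchor_ids, party_size)

def match_friends(initiator_id, invited_ids, confirmations):
    """Friend mode: establish the session only after every invited friend
    anchor's client has sent co-streaming confirmation."""
    if all(confirmations.get(anchor) for anchor in invited_ids):
        return [initiator_id, *invited_ids]
    return None
```

A returned list of anchor identifiers stands in for the group the server would establish a co-streaming session connection for; `None` means the session is not established yet.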
In the embodiments of the present application, anchors can run various battle interaction plays through co-streaming live broadcast, increasing anchor revenue through battle interaction and effectively improving interactivity between anchors and viewers. However, because an anchor can earn battle score in only one way in current battle interaction plays, those plays are not very entertaining, and anchor revenue and traffic are hard to guarantee.
Based on the above, an embodiment of the present application provides a co-streaming live battle interaction method. Referring to fig. 2, fig. 2 is a flow chart of the co-streaming live battle interaction method according to a first embodiment of the present application; the method comprises the following steps:
s101: and the server responds to the live link combat opening instruction, analyzes the live link combat opening instruction to obtain the anchor identifications, and establishes the link session connection of the anchor client corresponding to each anchor identification.
S102: the client added into the live broadcasting room acquires audio and video stream data, and outputs the audio and video stream data in the live broadcasting room; the live broadcast room comprises live broadcast rooms created by the anchor corresponding to each anchor identifier, and the audio and video stream data comprises audio and video stream data corresponding to each anchor identifier.
S103: and the client added into the live broadcasting room responds to the animation display instruction, acquires the video stream data of the mixed stream animation data, and displays the animation of the virtual object executing a plurality of actions in the video window of the live broadcasting room according to the video stream data of the mixed stream animation data.
S104: the server responds to the action recognition success instruction and updates the combat score corresponding to the target anchor identifier; the action recognition success instruction is sent when the target anchor is recognized to simulate the virtual object to execute a plurality of actions from video stream data corresponding to the target anchor identifier.
S105: the server responds to the live-link combat ending instruction to acquire combat scores corresponding to the anchor identifiers, acquires live-link combat results according to the combat scores corresponding to the anchor identifiers, and outputs the live-link combat results in the live-link combat room.
In this embodiment, the live-link combat interaction method is described from two execution bodies of the client and the server. The clients include anchor clients and audience clients.
Before the live-broadcasting fight interaction of the continuous-cast live broadcasting, the live broadcasting needs to be started firstly, specifically, the live broadcasting can click to access a live broadcasting application program, the live broadcasting application program enters an on-broadcasting interface, a live broadcasting client side is triggered to send a live broadcasting start instruction to a server through interaction with a live broadcasting start control in the on-broadcasting interface, the server responds to the live broadcasting start instruction, data in a live broadcasting room are issued to the live broadcasting client side, the live broadcasting client side loads the on-broadcasting room interface according to the data in the live broadcasting room, and audio and video stream data collected by the live broadcasting client side are played in the live broadcasting room, and at the moment, a spectator can also enter the live broadcasting room to watch live broadcasting.
Play components are loaded in the live broadcast room interface, and the anchor can start the corresponding play mode by interacting with a play component, so as to improve the interaction experience between the audience and the anchor.
Specifically, referring to fig. 3, fig. 3 is a schematic display diagram of play components in the live broadcast room interface provided in an embodiment of the present application. It can be seen that several play components are shown in fig. 3, such as a funneling component 31 and a combat PK component 32.
Since the combat play modes provided in the live broadcast room are realized through the cooperation of at least two anchors, an anchor starting a combat play mode means that the server needs to establish a link session connection between the anchor clients, and the combat interaction is carried out in a link live broadcast scene.
Therefore, before step S101 is described in detail, this embodiment of the present application first describes which conditions trigger the server to issue the live-link combat start instruction, specifically as follows:
in an alternative embodiment, before the server executes step S101, the server responds to the play start requests sent by anchor clients, parses each play start request to obtain its play identifier, selects at least two anchor clients whose play start requests contain the same play identifier, generates a live-link combat start instruction according to the anchor identifiers corresponding to the at least two anchor clients, and issues the live-link combat start instruction.
In this embodiment, the server randomly selects, in a random matching mode, anchors who started the same combat play mode, and establishes a link session connection for the corresponding anchor clients.
It will be appreciated that different play modes may require different numbers of participating anchors. For example, if the funneling play mode requires two anchors, the server randomly selects two anchor clients that sent play start requests containing the funneling play identifier, and establishes a link session connection for them.
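The random matching mode described above amounts to grouping pending play start requests by play identifier and drawing the required number of anchors at random. A minimal sketch, assuming hypothetical names (`match_anchors`, the request tuples, and the `required_players` table are illustrative, not from this application):

```python
import random
from collections import defaultdict

def match_anchors(pending_requests, required_players):
    """Group pending play start requests by play identifier and randomly
    draw groups of the size the play mode requires.

    pending_requests: list of (anchor_id, play_id) tuples.
    required_players: dict mapping play_id -> number of anchors needed.
    Returns a list of matched groups (tuples of anchor ids); the server
    would establish one link session connection per group.
    """
    by_play = defaultdict(list)
    for anchor_id, play_id in pending_requests:
        by_play[play_id].append(anchor_id)

    matches = []
    for play_id, anchors in by_play.items():
        need = required_players.get(play_id, 2)  # default: two-anchor play
        random.shuffle(anchors)                  # random matching mode
        while len(anchors) >= need:
            matches.append(tuple(anchors[:need]))
            anchors = anchors[need:]
    return matches
```

For the funneling play mode of the example above, `required_players` would map its play identifier to 2.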
In addition, an anchor may start the interactive play mode in friend mode. Specifically, the anchor client first obtains the anchor identifier of the invited anchor selected by the anchor (an anchor with whom the inviter has a friend relationship) and the combat play identifier, generates a combat play start request from them, and sends the request to the server. The server responds to the combat play start request, obtains the anchor identifier and the combat play identifier, and sends a link live broadcast request to the corresponding anchor client; the link live broadcast request contains the identifier of the requesting anchor and the combat play identifier, so that the invited anchor can determine which anchor issued the link invitation and which combat play mode is requested. After the server receives the link confirmation information sent back by the invited anchor client, it issues the live-link combat start instruction.
In another alternative embodiment, anchors may also form teams in some combat play modes and carry out the live-link interaction in team form, for example in the team PK play mode. The team-forming mode may likewise be a friend mode or a random mode, and the team-forming process is not described in detail here.
The following will explain the steps S101 to S102, specifically as follows:
the server responds to the live-link combat start instruction, parses it to obtain the anchor identifiers, and establishes a link session connection between the anchor clients corresponding to the anchor identifiers; the clients that have joined the live broadcast room acquire the audio and video stream data and output it in the live broadcast room.
The live broadcast rooms include the live broadcast rooms created by the anchors corresponding to the anchor identifiers; the clients that have joined a live broadcast room include the anchor clients and the audience clients that have joined it; and the audio and video stream data is the mixed-stream audio and video stream data, which specifically includes the audio and video stream data corresponding to each anchor identifier.
In a link live broadcast scene, the live content of every linked anchor is played together in the live broadcast room. Therefore, the server pulls the audio and video stream data corresponding to each anchor identifier from the respective anchor client.
In an alternative embodiment, after the server pulls the audio and video stream data corresponding to each anchor identifier from the respective anchor client, the server performs a mixed stream operation on the pulled streams to obtain the mixed audio and video stream data, and then sends it to the clients that have joined the live broadcast room; those clients acquire the mixed audio and video stream data and output it in the live broadcast room.
In another alternative embodiment, after the server pulls the audio and video stream data corresponding to each anchor identifier from the respective anchor client, the server sends the audio and video stream data corresponding to each anchor identifier to the anchor clients. Optionally, the server may send a given anchor client only the audio and video stream data corresponding to the other linked anchor identifiers, so as to reduce some traffic overhead. After an anchor client acquires the audio and video stream data corresponding to each anchor identifier, it performs the mixed stream operation to obtain the mixed audio and video stream data, which is finally forwarded through the server to the audience clients that have joined the live broadcast room and output in the live broadcast room.
In other alternative embodiments, after the server pulls the audio and video stream data corresponding to each anchor identifier from the respective anchor client, the server sends the audio and video stream data corresponding to each anchor identifier to the clients that have joined the live broadcast room (including anchor clients and audience clients); after those clients acquire the audio and video stream data corresponding to each anchor identifier, they perform the mixed stream operation themselves to obtain the mixed audio and video stream data and output it in the live broadcast room.
In the embodiment of the present application, the execution body that performs the mixed stream operation on the audio and video stream data corresponding to each anchor identifier is not limited; it may be the server, an anchor client, or an audience client.
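As a minimal illustration of what the mixed stream operation produces for the two-anchor layout of fig. 4, the following sketch composes two equally sized frames into one frame split left and right. The function name and the list-of-rows frame representation are hypothetical simplifications; real implementations operate on decoded video buffers:

```python
def mix_side_by_side(frame_a, frame_b):
    """Compose two equally sized video frames into a single frame in
    which frame_a fills the left half of the video window and frame_b
    fills the right half (the layout of fig. 4).

    Frames are row-major lists of pixel rows; both must have the same
    number of rows (equal display height).
    """
    if len(frame_a) != len(frame_b):
        raise ValueError("frames must have the same height to share a window")
    # Concatenate each pair of rows: left half from anchor A, right from B.
    return [row_a + row_b for row_a, row_b in zip(frame_a, frame_b)]
```

A stream server would apply such a composition per frame pair before pushing the mixed stream to the clients that have joined the live broadcast room.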
In an alternative embodiment, the server includes a service server and a stream server: the service server handles the service flow, and the stream server handles the related stream data and performs the mixed stream operation.
Referring to fig. 4, fig. 4 is a schematic display diagram of the live broadcast room interface after the live-link combat interaction is started according to an embodiment of the present application. Fig. 4 shows the video pictures of the two anchors carrying out the live-link combat interaction, where the video display area 41 corresponding to anchor A is on the left side of the video window and the video display area 42 corresponding to anchor B is on the right side of the video window. In fig. 4, the video display area 41 and the video display area 42 equally divide the video window left and right.
It can be understood that, under other combat interaction play modes in which more anchors carry out the live-link combat interaction, the layout of the video display areas corresponding to the anchors in the video window changes accordingly; these layouts are not described one by one here.
With regard to step S103, the client that has joined the live broadcast room acquires the video stream data of mixed stream animation data in response to the animation display instruction, and displays, in the video window of the live broadcast room, an animation in which the virtual object performs several actions, based on the video stream data of mixed stream animation data.
The animation data is data for presenting a virtual object performing several actions, and includes the animation frames corresponding to each action performed by the virtual object.
The virtual object may be a character virtual object or an animal virtual object.
The types of the several actions performed by the virtual object are not limited herein, and may be limb actions or facial actions.
The operation of mixing the animation data into the video stream data may be performed by the anchor client or by the server; it is not described in detail in this embodiment, and reference may be made to the description of mixing animation data with video stream data in the second embodiment.
In this embodiment, since the video stream data with mixed-in animation data is output in the video window of the live broadcast room, the picture played in the video window contains both the video picture and the animation picture, so the anchors and the audience can see, in the video window of the live broadcast room, the animation in which the virtual object performs several actions.
In addition, in an alternative embodiment, when the animation data is mixed into the video stream data, prompt data may be mixed in together; the prompt data prompts imitation of the several actions performed by the virtual object, for example: the prompt data may be "follow my actions to increase the combat score", etc.
In an alternative embodiment, prior to step S103, it is also necessary to obtain the target anchor identifier, so as to determine which anchor may increase the combat score by imitating the actions.
Specifically, the server may obtain the combat scores corresponding to the anchor identifiers and, according to these scores, take the anchor identifier with the lowest corresponding combat score as the target anchor identifier. That is, the anchor with the lowest combat score among the linked anchors is selected as the target anchor, so that the target anchor can increase the combat score by imitating the several actions performed by the virtual object.
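A minimal sketch of this target anchor selection; the function name and the tie-breaking rule are assumptions, as the application only specifies choosing the anchor with the lowest combat score:

```python
def pick_target_anchor(combat_scores):
    """Return the anchor identifier with the lowest combat score.

    combat_scores: dict mapping anchor_id -> current combat score.
    Ties are broken by anchor id so the choice is deterministic
    (an illustrative assumption, not specified by the application).
    """
    return min(combat_scores, key=lambda aid: (combat_scores[aid], aid))
```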
In this case, since the target anchor with the relatively low combat score can increase its combat score through action imitation, the other linked anchors must keep their combat scores ahead if they want the target anchor to keep performing imitations. This enhances the combat atmosphere in the live broadcast room, encourages more viewers to interact with the anchors, and ultimately promotes the balance of the current combat interaction.
Conventionally, viewers in the combat interaction can increase the combat score of the corresponding anchor by presenting virtual gifts.
In addition to the above manner, the server may also respond to a virtual gift presentation instruction and parse it to obtain a virtual gift identifier; if the virtual gift identifier is an action-imitation virtual gift identifier, the server obtains the target anchor identifier corresponding to the virtual gift receiver.
That is, a viewer can present an action-imitation virtual gift to an anchor, so that the anchor, as the receiver of the virtual gift, can also increase the combat score by imitating the several actions performed by the virtual object; the anchor identifier corresponding to the receiving anchor then serves as the target anchor identifier.
In this case, the viewer's presenting of an action-imitation virtual gift to the anchor helps the anchor increase the combat score and also helps generate interesting content, thereby increasing the anchor's revenue.
Referring to fig. 5, fig. 5 is a schematic display diagram of an action-imitation virtual gift provided in an embodiment of the present application. As can be seen from fig. 5, several types of virtual gifts are included in the virtual gift column 51, and a label 512 of the action-imitation virtual gift is displayed above the action-imitation virtual gift 511; the label 512 may take a form such as "dance" or "imitate", so as to indicate to the audience which virtual gift is the action-imitation virtual gift.
In an alternative embodiment, the server further comprises a gift server, and the processing operations related to the virtual gift may be performed by the gift server.
In an alternative embodiment, the target anchor name corresponding to the target anchor identifier may be added to the prompt data, so that the prompt data not only prompts imitation of the several actions performed by the virtual object, but also indicates which anchor can increase the combat score through action imitation.
Regarding step S104, the server responds to the action recognition success instruction and updates the combat score corresponding to the target anchor identifier; the action recognition success instruction is sent when it is recognized, from the video stream data corresponding to the target anchor identifier, that the target anchor has imitated the several actions performed by the virtual object.
The main execution body of the action recognition operation may be the server or the anchor client; in this embodiment of the present application, the description is given from the perspective of the anchor client.
Specifically: after the animation in which the virtual object performs several actions is displayed in the video window of the live broadcast room, the anchor client acquires the video stream data corresponding to the target anchor identifier, analyzes and recognizes it through a preset action recognition algorithm, and judges whether it can be recognized from that video stream data that the target anchor has imitated the several actions performed by the virtual object; if so, the anchor client sends an action recognition success instruction to the server.
In an alternative embodiment, the action recognition algorithm may process the video stream data corresponding to the target anchor identifier frame by frame, recognize the anchor contour in each video frame, and compare the anchor contour in each video frame with the contour of the virtual object in each animation frame, so as to recognize whether the target anchor has successfully imitated the several actions performed by the virtual object.
In other alternative embodiments, other existing action recognition algorithms may be used for recognition, and are not described in detail herein.
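The frame-by-frame contour comparison above can be sketched as follows. This is an illustrative stand-in, comparing binary silhouette masks by intersection-over-union with a hypothetical threshold; it is not the application's actual recognition algorithm, and real systems would use an existing pose or contour matching method:

```python
def mask_iou(mask_a, mask_b):
    """Intersection-over-union of two binary silhouette masks (flat lists)."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    union = sum(1 for a, b in zip(mask_a, mask_b) if a or b)
    return inter / union if union else 0.0

def actions_imitated(anchor_masks, animation_masks, threshold=0.7):
    """Compare the anchor's per-frame silhouette with the virtual object's
    contour in the corresponding animation frame; report success when
    every frame pair matches above the (assumed) threshold."""
    return all(
        mask_iou(am, vm) >= threshold
        for am, vm in zip(anchor_masks, animation_masks)
    )
```

On success, the anchor client would send the action recognition success instruction to the server.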
And the server responds to the action recognition success instruction and updates the combat score corresponding to the target anchor identifier.
Specifically, the server may obtain the combat score to be added according to a preset combat score increase rule, and update the combat score corresponding to the target anchor identifier according to the combat score to be added.
For example: the server may obtain the combat score to be added according to the imitation difficulty coefficient of the animation in which the virtual object performs the several actions; the higher the imitation difficulty coefficient, the higher the combat score to be added.
Alternatively, the server may obtain the combat score to be added according to the similarity with which the target anchor imitated the several actions performed by the virtual object; the higher the similarity, the higher the combat score to be added.
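One way to sketch these increase rules; the multiplicative form and the names are assumptions, as the application only states that a higher difficulty coefficient or a higher similarity yields a higher score to be added:

```python
def combat_score_increase(base, difficulty_coeff, similarity):
    """Combat score to add: monotonically increasing in both the
    animation's imitation difficulty coefficient and the similarity
    of the anchor's imitation (the product form is an assumption)."""
    return round(base * difficulty_coeff * similarity)

def update_score(combat_scores, target_anchor_id, increase):
    """Apply the increase to the combat score of the target anchor."""
    combat_scores[target_anchor_id] = combat_scores.get(target_anchor_id, 0) + increase
    return combat_scores[target_anchor_id]
```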
In an alternative embodiment, if a viewer gives the anchor an imitation opportunity by presenting an action-imitation virtual gift, the combat score to be added may be determined according to the value of the action-imitation virtual gift, and the combat score corresponding to the target anchor identifier may be updated accordingly.
Specifically, the server acquires the virtual gift value corresponding to the action-imitation virtual gift identifier, obtains the combat score to be added according to the virtual gift value and a random parameter within a preset range, and updates the combat score corresponding to the target anchor identifier according to the combat score to be added.
Wherein the preset range may be the closed interval from 0.1 to 1.
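A sketch of this rule, assuming Python's `random.uniform` as the source of the random parameter in the closed interval [0.1, 1]; the function name and the integer truncation are illustrative:

```python
import random

def gift_based_increase(gift_value, rng=random):
    """Combat score from an action-imitation virtual gift: the gift's
    value scaled by a random parameter drawn from the closed
    interval [0.1, 1]."""
    factor = rng.uniform(0.1, 1.0)  # random parameter in the preset range
    return int(gift_value * factor)
```

For a gift worth 1000, the added score therefore falls between 100 and 1000.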
Regarding step S105, when the live-link combat satisfies a preset end condition, for example when the duration of the live-link combat reaches a preset duration, the server is triggered to issue a live-link combat end instruction. The server then responds to the live-link combat end instruction, acquires the combat scores corresponding to the anchor identifiers, obtains the live-link combat result according to those combat scores, and outputs the live-link combat result in the live broadcast room.
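Determining the live-link combat result from the final combat scores can be sketched as follows; the function name and the draw-handling rule are illustrative assumptions:

```python
def battle_result(combat_scores):
    """Rank anchors by combat score; the winner is the highest scorer,
    with a draw reported when the top scores tie (an assumed rule)."""
    ranked = sorted(combat_scores.items(), key=lambda kv: kv[1], reverse=True)
    if len(ranked) > 1 and ranked[0][1] == ranked[1][1]:
        return {"result": "draw", "ranking": ranked}
    return {"result": ranked[0][0], "ranking": ranked}
```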
In an alternative embodiment, the server may obtain the combat score each anchor gained by imitating the actions performed by the virtual object during the live-link combat, and expose the anchors with relatively higher gained combat scores on a ranking list, so as to form ranking-list traffic diversion.
In the live-link combat interaction, animation data is mixed into the video stream data so that an animation in which a virtual object performs several actions is displayed in the video window of the live broadcast room. When it is recognized from the video stream data corresponding to the target anchor identifier that the target anchor has imitated the actions performed by the virtual object, the combat score corresponding to the target anchor identifier can be increased. This improves the interest of the live-link combat interaction, can to a certain extent increase the anchor's income and traffic inflow, and improves the anchors' enthusiasm for participating in the live-link combat interaction.
Referring to fig. 6, fig. 6 is a flow chart of a live-link combat interaction method according to a second embodiment of the present application. The method comprises steps S201 to S207, specifically as follows:
S201: and the server responds to the live link combat opening instruction, analyzes the live link combat opening instruction to obtain the anchor identifications, and establishes the link session connection of the anchor client corresponding to each anchor identification.
S202: the client added into the live broadcasting room acquires audio and video stream data, and outputs the audio and video stream data in the live broadcasting room; the live broadcast room comprises live broadcast rooms created by the anchor corresponding to each anchor identifier, and the audio and video stream data comprises audio and video stream data corresponding to each anchor identifier.
S203: the server responds to the animation mixed flow instruction to acquire animation data and the display position of the animation in the video window.
S204: and the server mixes the animation data into the video stream data according to the animation data and the display position of the animation in the video window to obtain the video stream data of the mixed stream animation data, and sends an animation display instruction to the client which has joined the live broadcasting room.
S205: and the client added into the live broadcasting room responds to the animation display instruction, acquires the video stream data of the mixed stream animation data, and displays the animation of the virtual object executing a plurality of actions in the video window of the live broadcasting room according to the video stream data of the mixed stream animation data.
S206: the server responds to the action recognition success instruction and updates the combat score corresponding to the target anchor identifier; the action recognition success instruction is sent when it is recognized, from the video stream data corresponding to the target anchor identifier, that the target anchor has imitated the several actions performed by the virtual object.
S207: the server responds to the live-link combat end instruction, acquires the combat scores corresponding to the anchor identifiers, obtains the live-link combat result according to the combat scores corresponding to the anchor identifiers, and outputs the live-link combat result in the live broadcast room.
Steps S201 to S202 are the same as steps S101 to S102 in the first embodiment, steps S205 to S207 are the same as steps S103 to S105 in the first embodiment, and steps S203 to S204 will be described in detail below.
In this embodiment, the execution body that mixes the animation data into the video stream data is the server.
Specifically, the server acquires animation data and a display position of an animation in a video window in response to an animation mixed stream instruction.
The animation mixed stream instruction is used for triggering the server to perform data mixed stream. In an alternative embodiment, the server may be the streaming server mentioned in the first embodiment.
The animation data has been explained in the first embodiment, and will not be described here.
The display position of the animation in the video window is used to determine at which position the animation data is mixed into the video stream data.
In an alternative embodiment, the display position may be any position in the video window; in another alternative embodiment, in order to reduce occlusion of the anchor's video picture, the display position may be the lower center of the video window. In other alternative embodiments, the display position may also be a preset position in the video display area corresponding to the target anchor.
In an alternative embodiment, before step S203 is executed, the display position of the animation in the video window may be determined by the anchor client; referring to fig. 7, this specifically includes steps S208 to S210, as follows:
S208: the server sends an animation mixed stream preparation instruction to the anchor client; the animation mixed stream preparation instruction comprises a target anchor identifier.
S209: the anchor client pulls the animation configuration resource from the server in response to the animation mixed stream preparation instruction, parses the animation configuration resource to acquire animation data and a display position of the animation in the video display area.
S210: the anchor client acquires the size information of the video window and the layout information, in the video window, of the video display areas corresponding to the anchor identifiers; obtains the position, in the video window, of the video display area corresponding to the target anchor identifier according to the size information of the video window and that layout information; obtains the display position of the animation in the video window according to the display position of the animation in the video display area and the position, in the video window, of the video display area corresponding to the target anchor identifier; and sends the animation mixed stream instruction to the server.
In this embodiment, after the server obtains the target anchor identifier, an animation mixed stream preparation instruction is generated according to the target anchor identifier. The animation mixed stream preparation instruction is used for triggering the anchor client to determine the display position of the animation in the video window.
Specifically, the anchor client sends an animation configuration resource request to the server in response to the animation mixed stream preparation instruction, and the server returns the animation configuration resource in response to that request. The anchor client then parses the animation configuration resource and acquires the animation data and the display position of the animation in the video display area.
Wherein the display position of the animation in the video display area is used to determine which position of the animation is displayed in the video display area, for example: the animation is displayed at the lower left corner or lower right corner of the video display area, etc.
In an alternative embodiment, the animation configuration resource is a compressed file, so that the anchor client needs to parse the compressed file before extracting the animation data and the display position of the animation in the video display area.
After acquiring the animation data and the display positions of the animations in the video display regions, the anchor client also acquires the size information of the video window and the layout information of the video display regions corresponding to the anchor identifiers in the video window.
The size information of the video window may change according to the current combat play mode; therefore, in order to accurately obtain the display position of the animation in the video window, the size information of the current video window needs to be acquired.
The layout information, in the video window, of the video display areas corresponding to the anchor identifiers is used to determine the position, in the video window, of the video display area corresponding to the target anchor identifier. For example, when two anchors carry out live-link combat, the layout information of their video display areas is that the two areas equally divide the video window left and right; specifically, refer to the video display area 41 and the video display area 42 in fig. 4, where the video display area 41 is on the left side of the video window and the video display area 42 is on the right side of the video window.
And then, the anchor client obtains the position of the video display area corresponding to the target anchor identifier in the video window according to the size information of the video window and the layout information of the video display area corresponding to each anchor identifier in the video window.
For example: the size information of the video window includes the display width and the display height of the video window, and the video display area 41 and the video display area 42 equally divide the video window left and right, so that the distance between the left border of the video display area 42 and the left border of the video window is width/2, and the distance between the top border of the video display area 42 and the top border of the video window is 0.
Finally, the anchor client obtains the display position of the animation in the video window according to the display position of the animation in the video display area and the position, in the video window, of the video display area corresponding to the target anchor identifier, and sends the animation mixed stream instruction to the server.
For example: if the display position of the animation in the video display area is the lower left corner, then the distance between the left border of the animation display area and the left border of the video window is width/2, and the distance between the top border of the animation display area and the top border of the video window is the difference between the display height of the video window and the display height of the animation display area.
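The coordinate arithmetic of this example can be written out as the following sketch; the function name, the corner convention, and the pixel values in the test are hypothetical illustrations of the rule described above:

```python
def animation_position(area_left, area_top, area_w, area_h,
                       anim_w, anim_h, corner="bottom_left"):
    """Translate the animation's corner position inside the target
    anchor's video display area into absolute (x, y) coordinates
    within the video window; (area_left, area_top) is the display
    area's offset from the window's top-left corner."""
    if corner == "bottom_left":
        x = area_left
        y = area_top + area_h - anim_h
    elif corner == "bottom_right":
        x = area_left + area_w - anim_w
        y = area_top + area_h - anim_h
    else:
        raise ValueError("unsupported corner")
    return x, y
```

For a 720-wide window split in half, the right-hand display area starts at width/2 = 360; an animation anchored at its lower left corner therefore sits 360 px from the window's left border, and its top border sits at the window height minus the animation height.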
In this embodiment, having the anchor client determine the display position of the animation in the video window can reduce the computational overhead of the server and improve mixed stream efficiency, and enables the animation to be displayed in the video display area corresponding to the target anchor identifier, so that the target anchor is prompted in an intuitive manner to increase the combat score through action imitation.
Specifically, on the basis of steps S208 to S210, displaying, according to the video stream data of mixed stream animation data, the animation in which the virtual object performs several actions in the video window of the live broadcast room includes the step of: displaying the animation in which the virtual object performs several actions in the video display area corresponding to the target anchor identifier, according to the video stream data of mixed stream animation data.
Referring to fig. 8, fig. 8 is a schematic display diagram of video stream data of mixed stream animation data according to an embodiment of the present application. In fig. 8, the video display area 81 corresponding to the target anchor is on the left side of the video window, and the preset display position of the animation in the video display area is the lower left corner, so the animation 82 in which the virtual object performs several actions is displayed in the lower left corner of the video display area 81 corresponding to the target anchor. Only a single animation frame is shown in fig. 8; it can be understood that the animation actually displayed in the live broadcast room is continuous from frame to frame.
In an alternative embodiment, the animation in which the virtual object performs several actions has a corresponding display duration, for example 3 s, and the server may repeatedly execute the mixed stream operation on the animation data and the video stream data according to a preset number of display times.
In an alternative embodiment, the client that has joined the live broadcast room responds to the action recognition success instruction, acquires the video stream data mixed with the animation data and the combat score update prompt data, and displays, in the video window of the live broadcast room, the animation in which the virtual object performs several actions together with the added combat score, according to that video stream data.
In this embodiment, after it is recognized from the video stream data corresponding to the target anchor identifier that the target anchor has imitated the several actions performed by the virtual object, the viewers and anchors in the live broadcast room are intuitively informed that the target anchor's imitation succeeded and of the combat score added by the current imitation. The client that has joined the live broadcast room responds to the action recognition success instruction, acquires the video stream data mixed with the animation data and the combat score update prompt data, and displays the animation in which the virtual object performs several actions and the added combat score in the video window of the live broadcast room.
The operation of streaming the animation data and the combat score update hint data to the video stream data in a mixed manner may be performed by a server, a hosting client, or a viewer client, and is not limited herein.
Referring to fig. 9, fig. 9 is a schematic diagram showing video stream data mixed with animation data and combat score update prompt data according to an embodiment of the present application. Fig. 9 shows the animation 91 of the virtual object performing several actions and the combat score update prompt 92.
In this embodiment, by additionally mixing in the combat score update prompt data, the user is prompted that the imitation succeeded and can confirm that the combat score has been increased, which improves the user experience of the live combat interaction.
Referring to fig. 10, fig. 10 is a flow chart of a link-mic live combat interaction method according to a third embodiment of the present application. The method comprises steps S301 to S307, specifically as follows:
s301: and the server responds to the live link combat opening instruction, analyzes the live link combat opening instruction to obtain the anchor identifications, and establishes the link session connection of the anchor client corresponding to each anchor identification.
S302: the client added into the live broadcasting room acquires audio and video stream data, and outputs the audio and video stream data in the live broadcasting room; the live broadcast room comprises live broadcast rooms created by the anchor corresponding to each anchor identifier, and the audio and video stream data comprises audio and video stream data corresponding to each anchor identifier.
S303: and the server transmits the fight score display control data to the client which has joined the live broadcasting room.
S304: and the client which has joined the live broadcasting room receives the fight score display control data, and displays the fight score display control in the live broadcasting room interface according to the fight score display control data.
S305: and the client added into the live broadcasting room responds to the animation display instruction, acquires the video stream data of the mixed stream animation data, and displays the animation of the virtual object executing a plurality of actions in the video window of the live broadcasting room according to the video stream data of the mixed stream animation data.
S306: the server responds to the action recognition success instruction and updates the combat score corresponding to the target anchor identifier; the action recognition success instruction is sent when the target anchor is recognized to simulate the virtual object to execute a plurality of actions from video stream data corresponding to the target anchor identifier.
S307: the server responds to the live-link combat ending instruction to acquire combat scores corresponding to the anchor identifiers, acquires live-link combat results according to the combat scores corresponding to the anchor identifiers, and outputs the live-link combat results in the live-link combat room.
Steps S301 to S302 are the same as steps S101 to S102 in the first embodiment, and steps S305 to S307 are the same as steps S103 to S105 in the first embodiment; steps S303 to S304 are described in detail below.
In this embodiment, after the link-mic live combat starts, not only is audio and video stream data output in the live room, but combat score display control data is also issued to the clients that have joined the live room; each such client receives the combat score display control data and displays the combat score display control in the live room interface accordingly.
The combat score display control data is used to present the combat score display control in the live room, and specifically includes function data and presentation data for combat score display.
The function data is used to display each anchor's real-time combat score during the link-mic live combat interaction. The presentation data determines the display position and display style of the combat score display control.
Referring to fig. 11, fig. 11 is a schematic display diagram of a combat score display control according to an embodiment of the present application. As shown in fig. 11, the combat score display control 111 displays the combat scores of anchor A and anchor B respectively. The display style of the combat score display control 111 is a bar: the segments corresponding to different anchors can be visually distinguished by color (not shown in the grayscale figure), the overall size of the bar remains unchanged, and the proportion of the bar occupied by each anchor's segment changes dynamically with that anchor's combat score. The higher the combat score, the larger the occupied proportion, so anchors and viewers can track score changes from the changing proportions of the bar.
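The proportional-bar rule just described can be sketched as follows (the function name, fixed width, and zero-score fallback are assumptions for illustration, not part of the disclosed control data format):

```python
# Sketch of the bar-proportion rule: each anchor's segment of the fixed-width
# bar is proportional to its share of the total combat score.

def bar_widths(scores, total_width=300):
    """scores: dict anchor identifier -> combat score.
    Returns dict anchor identifier -> segment width in pixels."""
    total = sum(scores.values())
    if total == 0:
        # No points scored yet: split the bar evenly (assumed fallback).
        return {a: total_width // len(scores) for a in scores}
    return {a: round(total_width * s / total) for a, s in scores.items()}
```

With scores A=150 and B=50, anchor A's segment occupies three quarters of the bar, matching the "higher score, larger proportion" behavior in fig. 11.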
In an alternative embodiment, fig. 12 is another flow chart of the link-mic live combat interaction method provided in the third embodiment of the present application; after step S306, steps S308 to S309 are further included, specifically as follows:
S308: the server sends a combat score update instruction to the clients that have joined the live room; the combat score update instruction includes the target anchor identifier and the updated combat score corresponding to it.
S309: the client that has joined the live room responds to the combat score update instruction, parses it to obtain the target anchor identifier and the corresponding updated combat score, and displays the updated combat score for the target anchor identifier in the combat score display control.
In this embodiment, the combat score display control is dynamically refreshed after each combat score update, so viewers and anchors can follow score changes more intuitively.
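Steps S308 to S309 can be sketched as a small message exchange (the JSON layout, field names, and class names below are assumptions for illustration; the patent does not specify a wire format):

```python
# Hedged sketch of S308-S309: the server broadcasts a score-update message;
# each joined client parses it and refreshes the combat score display control.
import json

def make_update_instruction(target_anchor_id, new_score):
    """Server side (S308): build the combat score update instruction."""
    return json.dumps({"type": "combat_score_update",
                       "anchor_id": target_anchor_id,
                       "score": new_score})

class ScoreDisplayControl:
    """Client side (S309): holds the per-anchor scores shown in the control."""
    def __init__(self):
        self.scores = {}

    def on_update_instruction(self, payload):
        msg = json.loads(payload)
        if msg.get("type") == "combat_score_update":
            # Display the updated combat score for the target anchor identifier.
            self.scores[msg["anchor_id"]] = msg["score"]
```

In use, each client that has joined the live room would feed incoming instructions to `on_update_instruction` and redraw the bar from `scores`.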
Fig. 13 is a timing chart of a link-mic live combat interaction method according to an embodiment of the present application; it illustrates the overall flow of the method more intuitively, to facilitate understanding of the technical scheme provided by the application. As shown in fig. 13, the anchor client sends a combat play start request to the service server. The service server, in response, determines the anchor identifiers to be linked and generates a link-mic live combat start instruction; in response to that instruction, it obtains the anchor identifiers, establishes the link-mic session connection between the anchor clients corresponding to the anchor identifiers, pulls the audio and video stream data corresponding to each anchor identifier, mixes the streams, and sends the mixed audio and video stream data to the clients that have joined the live room (including anchor clients and viewer clients), which output it in the live room. The service server obtains the combat score display control data from the combat play resources and issues it to the clients that have joined the live room, which display the combat score display control in the live room interface accordingly. The service server then obtains a target anchor identifier and sends an animation mixing preparation instruction containing it to the anchor client. The anchor client pulls the animation configuration resource from the service server, parses it to obtain the animation data, determines the display position of the animation in the video window, and sends an animation mixing instruction to the stream server. The stream server mixes the animation data into the video stream data according to the animation data and the display position of the animation in the video window, and issues the mixed video stream data to the clients that have joined the live room, which display, in the video display area corresponding to the target anchor identifier, the animation of the virtual object performing several actions. The anchor client recognizes, from the video stream data corresponding to the target anchor, whether the target anchor has successfully imitated the actions; if so, it sends an action recognition success instruction to the servers (the stream server and the service server). The service server responds by updating the combat score corresponding to the target anchor identifier and sending the updated combat score to the clients that have joined the live room, which display it in the combat score display control; the stream server responds by mixing the animation data and the combat score update prompt data into the video stream data and issuing it to the clients that have joined the live room, which display the animation and the combat score update prompt in the video window.
Finally, the service server responds to the link-mic live combat end instruction, obtains the link-mic live combat result, and sends it to the clients that have joined the live room, which display the result in the live room interface.
It should be noted that some optional implementations of the first to third embodiments are not shown individually in the timing chart, but the main steps of the link-mic live combat interaction method are all shown in fig. 13. Fig. 13 is only intended to help understand the technical solution of the present application; implementations not shown in the drawings still fall within its protection scope.
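The display-position step in the flow above (the anchor client computes where the animation goes in the whole video window from the region layout and the in-region offset) reduces to a small coordinate addition. The names and coordinate convention below are assumptions for illustration:

```python
# Sketch: the animation's position in the whole video window is the origin of
# the target anchor's video display area plus the configured offset inside it.

def animation_window_position(region_layout, target_anchor_id, in_region_offset):
    """region_layout: dict anchor identifier -> (x, y) origin of its video
    display area in the video window; in_region_offset: (dx, dy) of the
    animation inside that area. Returns the (x, y) position in the window."""
    rx, ry = region_layout[target_anchor_id]
    dx, dy = in_region_offset
    return rx + dx, ry + dy
```

For a two-anchor side-by-side layout, an animation anchored 10 px from the left and 300 px down inside anchor B's region lands at the region origin plus that offset in the full window.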
Fig. 14 is a schematic structural diagram of a link-mic live combat interaction system according to a fourth embodiment of the present application. The link-mic live combat interaction system 14 comprises a client 141 and a server 142, the client 141 comprising an anchor client 1411 and a viewer client 1412;
the server 142 responds to a link-mic live combat start instruction, parses it to obtain the anchor identifiers, and establishes a link-mic session connection between the anchor clients 1411 corresponding to the anchor identifiers;
the client 141 that has joined the live room acquires audio and video stream data and outputs it in the live room; the live room includes the live rooms created by the anchors corresponding to the anchor identifiers, and the audio and video stream data includes the audio and video stream data corresponding to each anchor identifier;
the client 141 that has joined the live room responds to the animation display instruction, acquires video stream data mixed with the animation data, and displays the animation of the virtual object performing several actions in the video window of the live room according to that video stream data;
the server 142 responds to the action recognition success instruction and updates the combat score corresponding to the target anchor identifier; the action recognition success instruction is sent when it is recognized, from the video stream data corresponding to the target anchor identifier, that the target anchor has imitated the virtual object performing the several actions;
the server 142 responds to the link-mic live combat end instruction, acquires the combat score corresponding to each anchor identifier, obtains the link-mic live combat result from those combat scores, and outputs the result in the live room.
Fig. 15 is a schematic structural diagram of a link-mic live combat interaction apparatus according to a fifth embodiment of the present application. The apparatus may be implemented as all or part of a computer device by software, hardware, or a combination of both. The apparatus 15 comprises:
a link-mic session establishment unit 151, configured to respond to a link-mic live combat start instruction, parse it to obtain the anchor identifiers, and establish a link-mic session connection between the anchor clients corresponding to the anchor identifiers;
a first output unit 152, configured for the client that has joined the live room to acquire audio and video stream data and output it in the live room; the live room includes the live rooms created by the anchors corresponding to the anchor identifiers, and the audio and video stream data includes the audio and video stream data corresponding to each anchor identifier;
a first response unit 153, configured for the client that has joined the live room to respond to the animation display instruction, acquire video stream data mixed with the animation data, and display the animation of the virtual object performing several actions in the video window of the live room according to that video stream data;
a second response unit 154, configured for the server to respond to the action recognition success instruction and update the combat score corresponding to the target anchor identifier; the action recognition success instruction is sent when it is recognized, from the video stream data corresponding to the target anchor identifier, that the target anchor has imitated the virtual object performing the several actions;
a second output unit 155, configured for the server to respond to the link-mic live combat end instruction, acquire the combat score corresponding to each anchor identifier, obtain the link-mic live combat result from those combat scores, and output the result in the live room.
It should be noted that when the link-mic live combat interaction apparatus provided in the above embodiment executes the link-mic live combat interaction method, the division into the above functional modules is used only for illustration; in practical application, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. In addition, the apparatus provided in the above embodiment and the link-mic live combat interaction method embodiments belong to the same concept; the detailed implementation process is embodied in the method embodiments and is not described here again.
Fig. 16 is a schematic structural diagram of a computer device according to a sixth embodiment of the present application. As shown in fig. 16, the computer device 16 may include: a processor 160, a memory 161, and a computer program 162 stored in the memory 161 and executable on the processor 160, such as a link-mic live combat interaction program; the processor 160, when executing the computer program 162, implements the steps of the first to third embodiments described above.
The processor 160 may include one or more processing cores. The processor 160 connects various parts of the computer device 16 through various interfaces and wiring, and performs the various functions of the computer device 16 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 161 and invoking data in the memory 161. Optionally, the processor 160 may be implemented in at least one hardware form among digital signal processing (Digital Signal Processing, DSP), field-programmable gate array (Field-Programmable Gate Array, FPGA), and programmable logic array (Programmable Logic Array, PLA). The processor 160 may integrate one or a combination of a central processing unit (Central Processing Unit, CPU), a graphics processor (Graphics Processing Unit, GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs, and so on; the GPU renders and draws the content to be displayed on the touch display screen; the modem handles wireless communications. It will be appreciated that the modem may also not be integrated into the processor 160 and may instead be implemented as a separate chip.
The memory 161 may include random access memory (Random Access Memory, RAM) or read-only memory (Read-Only Memory, ROM). Optionally, the memory 161 includes a non-transitory computer-readable storage medium. The memory 161 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 161 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for at least one function (such as touch functions), instructions for implementing the above method embodiments, and the like; the data storage area may store the data involved in the above method embodiments. Optionally, the memory 161 may also be at least one storage device located remotely from the processor 160.
The embodiment of the present application further provides a computer storage medium that may store a plurality of instructions adapted to be loaded and executed by a processor; for the specific implementation procedure, refer to the specific description of the foregoing embodiments, which is not repeated here.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of functional units and modules is illustrated; in practical application, the above functions may be allocated to different functional units and modules as needed, that is, the internal structure of the apparatus may be divided into different functional units or modules to complete all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit; the integrated units may be implemented in the form of hardware or software functional units. In addition, the specific names of the functional units and modules are only for convenience of distinguishing them from each other and are not used to limit the protection scope of the present application. For the specific working process of the units and modules in the above system, refer to the corresponding process in the foregoing method embodiments, which is not repeated here.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts not described or illustrated in detail in a particular embodiment, refer to the related descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other manners. For example, the apparatus/terminal device embodiments described above are merely illustrative, e.g., the division of modules or units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present invention may implement all or part of the flow of the method of the above embodiment, or may be implemented by a computer program to instruct related hardware, where the computer program may be stored in a computer readable storage medium, and when the computer program is executed by a processor, the steps of each method embodiment described above may be implemented. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, executable files or in some intermediate form, etc.
The present invention is not limited to the above-described embodiments; any modifications or variations that do not depart from the spirit and scope of the present invention are intended to fall within the scope of the claims and their equivalents.

Claims (14)

1. A link-mic live combat interaction method, characterized by comprising the following steps:
the server responds to a link-mic live combat start instruction, parses the link-mic live combat start instruction to obtain anchor identifiers, and establishes a link-mic session connection between the anchor clients corresponding to the anchor identifiers;
a client that has joined a live room acquires audio and video stream data and outputs the audio and video stream data in the live room; the live room comprises the live rooms created by the anchors corresponding to the anchor identifiers, and the audio and video stream data comprises the audio and video stream data corresponding to each anchor identifier;
the client that has joined the live room responds to an animation display instruction, acquires video stream data mixed with animation data, and displays an animation of a virtual object performing several actions in a video window of the live room according to the video stream data mixed with the animation data;
the server responds to an action recognition success instruction and updates a combat score corresponding to a target anchor identifier; the action recognition success instruction is sent when it is recognized, from the video stream data corresponding to the target anchor identifier, that the target anchor has imitated the virtual object performing the several actions; the target anchor identifier is an anchor identifier meeting a preset condition; the target anchor is the anchor corresponding to the target anchor identifier;
and the server responds to a link-mic live combat end instruction, acquires the combat score corresponding to each anchor identifier, obtains a link-mic live combat result according to the combat scores corresponding to the anchor identifiers, and outputs the link-mic live combat result in the live room.
2. The link-mic live combat interaction method of claim 1, wherein, before the client that has joined the live room responds to the animation display instruction, the method comprises the following steps:
the server acquires the combat score corresponding to each anchor identifier, determines from those combat scores the anchor identifier with the lowest combat score, and takes the anchor identifier with the lowest combat score as the target anchor identifier.
3. The link-mic live combat interaction method of claim 1, wherein, before the client that has joined the live room responds to the animation display instruction, the method comprises the following steps:
the server responds to a virtual gift presentation instruction, parses the virtual gift presentation instruction to obtain a virtual gift identifier, and, if the virtual gift identifier is an action-imitation virtual gift identifier, acquires the anchor identifier corresponding to the virtual gift receiver and takes that anchor identifier as the target anchor identifier.
4. The link-mic live combat interaction method of any one of claims 1 to 3, wherein, before the client that has joined the live room responds to the animation display instruction, the method further comprises the following steps:
the server responds to an animation mixing instruction to obtain the animation data and the display position of the animation in the video window;
and the server mixes the animation data into the video stream data according to the animation data and the display position of the animation in the video window to obtain the video stream data mixed with the animation data, and sends the animation display instruction to the client that has joined the live room.
5. The link-mic live combat interaction method of claim 4, wherein, before the server responds to the animation mixing instruction, the method further comprises the following steps:
the server sends an animation mixing preparation instruction to the anchor client; wherein the animation mixing preparation instruction comprises the target anchor identifier;
the anchor client responds to the animation mixing preparation instruction, pulls an animation configuration resource from the server, and parses the animation configuration resource to obtain the animation data and the display position of the animation in a video display area;
the anchor client acquires the size information of the video window and the layout information of the video display area corresponding to each anchor identifier in the video window; obtains the position of the video display area corresponding to the target anchor identifier in the video window according to the size information of the video window and the layout information of the video display areas; obtains the display position of the animation in the video window according to the display position of the animation in the video display area and the position of the video display area corresponding to the target anchor identifier in the video window; and sends the animation mixing instruction to the server.
6. The link-mic live combat interaction method of claim 5, wherein displaying the animation of the virtual object in the video window of the live room according to the video stream data mixed with the animation data comprises the following steps:
displaying the animation of the virtual object performing the several actions in the video display area corresponding to the target anchor identifier according to the video stream data mixed with the animation data.
7. The link-mic live combat interaction method of any one of claims 1 to 3, further comprising the following steps:
the client that has joined the live room responds to an action recognition success instruction, acquires video stream data mixed with the animation data and the combat score update prompt data, and displays the animation of the virtual object performing the actions together with the increased combat score in the video window of the live room according to that video stream data.
8. The link-mic live combat interaction method of any one of claims 1 to 3, wherein, after establishing the link-mic session connection between the anchor clients corresponding to the anchor identifiers, the method further comprises the following steps:
the server issues combat score display control data to the client that has joined the live room;
and the client that has joined the live room receives the combat score display control data and displays the combat score display control in the live room interface according to the combat score display control data.
9. The link-mic live combat interaction method of claim 8, wherein, after updating the combat score corresponding to the target anchor identifier in response to the action recognition success instruction, the method comprises the following steps:
the server sends a combat score update instruction to the client that has joined the live room; the combat score update instruction comprises the target anchor identifier and the updated combat score corresponding to the target anchor identifier;
and the client that has joined the live room responds to the combat score update instruction, parses the combat score update instruction to obtain the target anchor identifier and the corresponding updated combat score, and displays the updated combat score corresponding to the target anchor identifier in the combat score display control.
10. The live-link combat interaction method of claim 3, wherein said server, in response to an action recognition success instruction, updates the combat score corresponding to the target anchor identity, comprising the steps of:
The server acquires a virtual gift value corresponding to the action simulation virtual gift identifier;
obtaining the combat score to be increased according to the virtual gift value and the random parameters in the preset range;
and updating the fight score corresponding to the target anchor identifier according to the fight score to be increased.
11. A live-link combat interaction system, comprising a client and a server, wherein the client comprises an anchor client and an audience client;
the server, in response to a live-link combat start instruction, parses the instruction to obtain anchor identifiers, and establishes a link-mic session connection between the anchor clients corresponding to the anchor identifiers;
a client that has joined a live room obtains audio and video stream data and outputs it in the live room; the live room comprises the live rooms created by the anchors corresponding to the anchor identifiers, and the audio and video stream data comprises the audio and video stream data corresponding to each anchor identifier;
the client that has joined the live room, in response to an animation display instruction, obtains video stream data mixed with animation data and, according to that data, displays in a video window of the live room an animation of a virtual object performing a plurality of actions;
the server, in response to an action recognition success instruction, updates the combat score corresponding to the target anchor identifier; the action recognition success instruction is sent when it is recognized, from the video stream data corresponding to the target anchor identifier, that the target anchor imitates the virtual object performing the plurality of actions; the target anchor identifier is an anchor identifier meeting a preset condition, and the target anchor is the anchor corresponding to the target anchor identifier;
and the server, in response to a live-link combat end instruction, obtains the combat score corresponding to each anchor identifier, obtains the live-link combat result according to those combat scores, and outputs the live-link combat result in the live room.
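The settlement step at the end of claim 11 (obtain each anchor's combat score, derive the result) can be illustrated as below. Treating the highest score as the winner and equal top scores as a draw is an assumption for the sketch; the claim does not fix the comparison rule.

```python
def live_link_combat_result(scores: dict) -> str:
    # Compare the combat score recorded for each anchor identifier and
    # produce the result string to be output in the live room.
    top = max(scores.values())
    winners = [aid for aid, s in scores.items() if s == top]
    return "draw" if len(winners) > 1 else f"winner: {winners[0]}"
```

The server would run this on receipt of the live-link combat end instruction and broadcast the result to every client that has joined the live room.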
12. A live-link combat interaction apparatus, characterized by comprising:
a link-mic session establishment unit, configured to, in response to a live-link combat start instruction, parse the instruction to obtain anchor identifiers and establish a link-mic session connection between the anchor clients corresponding to the anchor identifiers;
a first output unit, configured to have a client that has joined a live room obtain audio and video stream data and output it in the live room; the live room comprises the live rooms created by the anchors corresponding to the anchor identifiers, and the audio and video stream data comprises the audio and video stream data corresponding to each anchor identifier;
a first response unit, configured to have the client that has joined the live room, in response to an animation display instruction, obtain video stream data mixed with animation data and display, according to that data, an animation of a virtual object performing a plurality of actions in a video window of the live room;
a second response unit, configured to have the server, in response to an action recognition success instruction, update the combat score corresponding to the target anchor identifier; the action recognition success instruction is sent when it is recognized, from the video stream data corresponding to the target anchor identifier, that the target anchor imitates the virtual object performing the plurality of actions; the target anchor identifier is an anchor identifier meeting a preset condition, and the target anchor is the anchor corresponding to the target anchor identifier;
and a second output unit, configured to have the server, in response to a live-link combat end instruction, obtain the combat score corresponding to each anchor identifier, obtain the live-link combat result according to those combat scores, and output the live-link combat result in the live room.
13. A computer device, comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method of any one of claims 1 to 10 when executing the computer program.
14. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 10.
CN202111136353.0A 2021-09-27 2021-09-27 Continuous wheat live broadcast fight interaction method, system and device and computer equipment Active CN113676747B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111136353.0A CN113676747B (en) 2021-09-27 2021-09-27 Continuous wheat live broadcast fight interaction method, system and device and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111136353.0A CN113676747B (en) 2021-09-27 2021-09-27 Continuous wheat live broadcast fight interaction method, system and device and computer equipment

Publications (2)

Publication Number Publication Date
CN113676747A (en) 2021-11-19
CN113676747B (en) 2023-06-13

Family

ID=78550325

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111136353.0A Active CN113676747B (en) 2021-09-27 2021-09-27 Continuous wheat live broadcast fight interaction method, system and device and computer equipment

Country Status (1)

Country Link
CN (1) CN113676747B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114501157B (en) * 2021-12-16 2023-05-26 广州方硅信息技术有限公司 Interaction method, server, terminal, system and storage medium for live communication
CN114760498B (en) * 2022-04-01 2024-07-26 广州方硅信息技术有限公司 Synthetic action interaction method, system, device, equipment and medium under co-hosted live streaming
CN115314727A (en) * 2022-06-17 2022-11-08 广州方硅信息技术有限公司 Live broadcast interaction method and device based on virtual object and electronic equipment
CN115134621B (en) * 2022-06-30 2024-05-28 广州方硅信息技术有限公司 Live combat interaction method, system, device, equipment and medium
CN115134623B (en) * 2022-06-30 2024-07-26 广州方硅信息技术有限公司 Virtual gift interaction method, system, device, electronic equipment and medium
CN115499679B (en) * 2022-09-28 2024-06-25 广州方硅信息技术有限公司 Live broadcasting room interactive object display method and device, electronic equipment and storage medium

Citations (2)

Publication number Priority date Publication date Assignee Title
US9597586B1 (en) * 2012-05-07 2017-03-21 CP Studios Inc. Providing video gaming action via communications in a social network
CN110519612A (en) * 2019-08-26 2019-11-29 广州华多网络科技有限公司 Even wheat interactive approach, live broadcast system, electronic equipment and storage medium

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
JP6659479B2 (en) * 2016-06-28 2020-03-04 Line株式会社 Information processing apparatus control method, information processing apparatus, and program
CN109758769A (en) * 2018-11-26 2019-05-17 北京达佳互联信息技术有限公司 Game application player terminal determines method, apparatus, electronic equipment and storage medium
CN110300311A (en) * 2019-07-01 2019-10-01 腾讯科技(深圳)有限公司 Battle method, apparatus, equipment and storage medium in live broadcast system
CN110944235B (en) * 2019-11-22 2022-09-16 广州方硅信息技术有限公司 Live broadcast interaction method, device and system, electronic equipment and storage medium
CN111314718B (en) * 2020-01-16 2022-03-22 广州酷狗计算机科技有限公司 Settlement method, device, equipment and medium for live broadcast battle
CN112163479A (en) * 2020-09-16 2021-01-01 广州华多网络科技有限公司 Motion detection method, motion detection device, computer equipment and computer-readable storage medium
CN112714330B (en) * 2020-12-25 2023-03-28 广州方硅信息技术有限公司 Gift presenting method and device based on live broadcast with wheat and electronic equipment

Also Published As

Publication number Publication date
CN113676747A (en) 2021-11-19

Similar Documents

Publication Publication Date Title
CN113676747B (en) Continuous wheat live broadcast fight interaction method, system and device and computer equipment
CN104468623B (en) It is a kind of based on online live information displaying method, relevant apparatus and system
CN113873280B (en) Continuous wheat live broadcast fight interaction method, system and device and computer equipment
CN113766340B (en) Dance music interaction method, system and device under live connected wheat broadcast and computer equipment
CN113453029B (en) Live broadcast interaction method, server and storage medium
CN113727130B (en) Message prompting method, system and device for live broadcasting room and computer equipment
CN114025245B (en) Live broadcast room recommendation method and system based on task interaction and computer equipment
CN113938696B (en) Live broadcast interaction method and system based on custom virtual gift and computer equipment
CN114666672B (en) Live fight interaction method and system initiated by audience and computer equipment
CN113840156B (en) Live broadcast interaction method and device based on virtual gift and computer equipment
CN112657186A (en) Game interaction method and device
CN114007094A (en) Voice microphone-connecting interaction method, system, medium and computer equipment for live broadcast room
CN114201095A (en) Control method and device for live interface, storage medium and electronic equipment
CN113824976A (en) Method and device for displaying approach show in live broadcast room and computer equipment
CN114125480B (en) Live chorus interaction method, system, device and computer equipment
CN115314727A (en) Live broadcast interaction method and device based on virtual object and electronic equipment
CN114007095B (en) Voice-to-microphone interaction method, system and medium of live broadcasting room and computer equipment
CN115134623B (en) Virtual gift interaction method, system, device, electronic equipment and medium
CN115134621B (en) Live combat interaction method, system, device, equipment and medium
CN115314729B (en) Team interaction live broadcast method and device, computer equipment and storage medium
CN115134624B (en) Live broadcast continuous wheat matching method, system, device, electronic equipment and storage medium
CN114760498B (en) Synthetic action interaction method, system, device, equipment and medium under co-hosted live streaming
CN113891162B (en) Live broadcast room loading method and device, computer equipment and storage medium
CN114760520A (en) Live small and medium video shooting interaction method, device, equipment and storage medium
CN114666646B (en) Live broadcast room cover interaction method, system, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant