CN113676747A - Co-hosted live battle interaction method, system, device, and computer equipment - Google Patents

Co-hosted live battle interaction method, system, device, and computer equipment

Info

Publication number
CN113676747A
CN113676747A (application CN202111136353.0A); granted publication CN113676747B
Authority
CN
China
Prior art keywords: live, anchor, animation, live broadcast, battle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111136353.0A
Other languages
Chinese (zh)
Other versions
CN113676747B (en)
Inventor
雷兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Cubesili Information Technology Co Ltd
Original Assignee
Guangzhou Cubesili Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Cubesili Information Technology Co Ltd filed Critical Guangzhou Cubesili Information Technology Co Ltd
Priority to CN202111136353.0A
Publication of CN113676747A
Application granted
Publication of CN113676747B (granted publication)
Legal status: Active (current)
Anticipated expiration: legal status listed

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/21 Server components or server architectures
    • H04N 21/218 Source of audio or video content, e.g. local disk arrays
    • H04N 21/2187 Live feed
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/443 OS processes, e.g. booting an STB, implementing a Java virtual machine in an STB or power management in an STB
    • H04N 21/4437 Implementing a Virtual Machine [VM]
    • H04N 21/47 End-user applications
    • H04N 21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N 21/4788 Supplemental services communicating with other users, e.g. chatting
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00 Reducing energy consumption in communication networks
    • Y02D 30/70 Reducing energy consumption in communication networks in wireless communication networks

Abstract

The application relates to the technical field of webcasting (network live broadcast) and provides a co-hosted live battle interaction method, system, device, and computer equipment. The co-hosted live battle interaction method comprises the following steps: the server establishes a co-hosting session connection among the anchor clients corresponding to the anchor identifiers; a client that has joined the live broadcast room acquires audio/video stream data and outputs it in the live broadcast room; a client that has joined the live broadcast room, in response to an animation display instruction, acquires video stream data mixed with animation data and, according to that data, displays in the video window of the live broadcast room an animation of a virtual object performing several actions; the server, in response to an action-recognition-success instruction, updates the battle score corresponding to the target anchor identifier; and the server, in response to a co-hosted live battle end instruction, outputs the co-hosted live battle result in the live broadcast room. Compared with the prior art, the method and system can improve the anchors' revenue and traffic introduction, and enhance the entertainment value of live battles.

Description

Co-hosted live battle interaction method, system, device, and computer equipment
Technical Field
The embodiments of the application relate to the technical field of webcasting (network live broadcast), and in particular to a co-hosted live battle interaction method, system, device, and computer equipment.
Background
With the progress of network communication technology, webcasting has become a new form of online interaction, and its real-time and interactive nature has made it popular with more and more viewers.
At present, during a webcast, anchors can carry out various types of battle interaction play modes by establishing a co-hosting (mic-linking) session, so that viewers can watch the live content of different anchors at the same time and the anchors' revenue from these battle play modes can be increased.
However, the way in which an anchor earns battle score in these battle play modes is rather limited, so the anchors' revenue cannot be effectively improved and traffic introduction cannot be achieved; this can reduce the anchors' motivation to go on air, and the entertainment value of the battle play modes cannot be improved.
Disclosure of Invention
The embodiments of the application provide a co-hosted live battle interaction method, system, device, and computer equipment, which can solve the technical problem that the way an anchor earns battle score is rather limited and the entertainment value of battle interaction play modes is hard to improve. The technical solution is as follows:
in a first aspect, an embodiment of the present application provides a co-hosted live battle interaction method, including:
the server, in response to a co-hosted live battle start instruction, parses the instruction to obtain the anchor identifiers, and establishes a co-hosting session connection among the anchor clients corresponding to the anchor identifiers;
a client that has joined the live broadcast room acquires audio/video stream data and outputs it in the live broadcast room; the live broadcast room includes the live broadcast rooms created by the anchors corresponding to the anchor identifiers, and the audio/video stream data includes the audio/video stream data corresponding to each anchor identifier;
a client that has joined the live broadcast room, in response to an animation display instruction, acquires video stream data mixed with animation data and, according to that video stream data, displays in the video window of the live broadcast room an animation of a virtual object performing several actions;
the server, in response to an action-recognition-success instruction, updates the battle score corresponding to the target anchor identifier; the action-recognition-success instruction is issued when it is recognized, from the video stream data corresponding to the target anchor identifier, that the target anchor has imitated the virtual object performing the several actions;
the server, in response to a co-hosted live battle end instruction, acquires the battle score corresponding to each anchor identifier, obtains the co-hosted live battle result according to those battle scores, and outputs the co-hosted live battle result in the live broadcast room.
In a second aspect, an embodiment of the present application provides a co-hosted live battle interaction system, including a server and clients, the clients including anchor clients and viewer clients, wherein:
the server, in response to a co-hosted live battle start instruction, parses the instruction to obtain the anchor identifiers, and establishes a co-hosting session connection among the anchor clients corresponding to the anchor identifiers;
a client that has joined the live broadcast room acquires audio/video stream data and outputs it in the live broadcast room; the live broadcast room includes the live broadcast rooms created by the anchors corresponding to the anchor identifiers, and the audio/video stream data includes the audio/video stream data corresponding to each anchor identifier;
a client that has joined the live broadcast room, in response to an animation display instruction, acquires video stream data mixed with animation data and, according to that video stream data, displays in the video window of the live broadcast room an animation of a virtual object performing several actions;
the server, in response to an action-recognition-success instruction, updates the battle score corresponding to the target anchor identifier; the action-recognition-success instruction is issued when it is recognized, from the video stream data corresponding to the target anchor identifier, that the target anchor has imitated the virtual object performing the several actions;
the server, in response to a co-hosted live battle end instruction, acquires the battle score corresponding to each anchor identifier, obtains the co-hosted live battle result according to those battle scores, and outputs the co-hosted live battle result in the live broadcast room.
In a third aspect, an embodiment of the present application provides a co-hosted live battle interaction device, including:
an establishing unit, used for the server to respond to the co-hosted live battle start instruction, parse it to obtain the anchor identifiers, and establish a co-hosting session connection among the anchor clients corresponding to the anchor identifiers;
a first output unit, used for a client that has joined the live broadcast room to acquire audio/video stream data and output it in the live broadcast room; the live broadcast room includes the live broadcast rooms created by the anchors corresponding to the anchor identifiers, and the audio/video stream data includes the audio/video stream data corresponding to each anchor identifier;
a first response unit, used for a client that has joined the live broadcast room to respond to an animation display instruction, acquire video stream data mixed with animation data, and display, according to that video stream data, an animation of a virtual object performing several actions in the video window of the live broadcast room;
a second response unit, used for the server to respond to the action-recognition-success instruction and update the battle score corresponding to the target anchor identifier; the action-recognition-success instruction is issued when it is recognized, from the video stream data corresponding to the target anchor identifier, that the target anchor has imitated the virtual object performing the several actions;
and a second output unit, used for the server to respond to the co-hosted live battle end instruction, acquire the battle score corresponding to each anchor identifier, obtain the co-hosted live battle result according to those battle scores, and output the co-hosted live battle result in the live broadcast room.
In a fourth aspect, an embodiment of the present application provides a computer device, including a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method according to the first aspect.
In the embodiments of the application, the server, in response to a co-hosted live battle start instruction, parses the instruction to obtain the anchor identifiers and establishes a co-hosting session connection among the anchor clients corresponding to the anchor identifiers; a client that has joined the live broadcast room acquires audio/video stream data and outputs it in the live broadcast room, where the live broadcast room includes the live broadcast rooms created by the anchors corresponding to the anchor identifiers and the audio/video stream data includes the audio/video stream data corresponding to each anchor identifier; a client that has joined the live broadcast room, in response to an animation display instruction, acquires video stream data mixed with animation data and displays, according to that video stream data, an animation of a virtual object performing several actions in the video window of the live broadcast room; the server, in response to an action-recognition-success instruction, updates the battle score corresponding to the target anchor identifier, the instruction being issued when it is recognized, from the video stream data corresponding to the target anchor identifier, that the target anchor has imitated the virtual object performing the several actions; and the server, in response to a co-hosted live battle end instruction, acquires the battle score corresponding to each anchor identifier, obtains the co-hosted live battle result from those scores, and outputs it in the live broadcast room. In the co-hosted live battle interaction of the embodiments of the application, animation data is mixed into the video stream data so that an animation of the virtual object performing several actions is displayed in the video window of the live broadcast room; when it is recognized, from the video stream data corresponding to the target anchor identifier, that the target anchor has imitated the virtual object performing the several actions, the battle score corresponding to the target anchor identifier can be increased. This not only improves the entertainment value of co-hosted live battles, but also, to a certain extent, improves the anchors' revenue and traffic introduction, and promotes the anchors' motivation to go on air and the interactivity of co-hosted live battles.
For a better understanding and implementation, the technical solutions of the present application are described in detail below with reference to the accompanying drawings.
Drawings
Fig. 1 is a schematic view of an application scenario of the co-hosted live battle interaction method according to an embodiment of the present application;
fig. 2 is a schematic flow chart of a co-hosted live battle interaction method according to a first embodiment of the present application;
fig. 3 is a schematic display diagram of play mode components in a live broadcast room interface according to an embodiment of the present application;
fig. 4 is a schematic display diagram of a live broadcast room interface after the co-hosted live battle interaction is started according to an embodiment of the present application;
fig. 5 is a schematic display diagram of an action-imitation virtual gift according to an embodiment of the present application;
fig. 6 is a schematic flow chart of a co-hosted live battle interaction method according to a second embodiment of the present application;
fig. 7 is another schematic flow chart of a co-hosted live battle interaction method according to the second embodiment of the present application;
fig. 8 is a schematic display diagram of video stream data mixed with animation data according to an embodiment of the present application;
fig. 9 is a schematic display diagram of video stream data mixed with animation data and battle score update prompt data according to an embodiment of the present application;
fig. 10 is a schematic flow chart of a co-hosted live battle interaction method according to a third embodiment of the present application;
fig. 11 is a schematic display diagram of a battle score display control according to an embodiment of the present application;
fig. 12 is another schematic flow chart of a co-hosted live battle interaction method according to the third embodiment of the present application;
fig. 13 is a timing diagram of a co-hosted live battle interaction method according to an embodiment of the present application;
fig. 14 is a schematic structural diagram of a co-hosted live battle interaction system according to a fourth embodiment of the present application;
fig. 15 is a schematic structural diagram of a co-hosted live battle interaction device according to a fifth embodiment of the present application;
fig. 16 is a schematic structural diagram of a computer device according to a sixth embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, without departing from the scope of the present application, first information may also be referred to as second information, and similarly, second information may also be referred to as first information. Depending on the context, the word "if" as used herein may be interpreted as "upon", "when", or "in response to determining".
As will be appreciated by those skilled in the art, the terms "client" and "terminal device" as used herein include both wireless-signal receiver devices, which have only receiving capability and no transmitting capability, and devices with receiving and transmitting hardware capable of two-way communication over a two-way communication link. Such a device may include: a cellular or other communication device, such as a personal computer or tablet, with or without a single-line or multi-line display; a PCS (Personal Communications Service) device, which may combine voice, data processing, facsimile and/or data communication capabilities; a PDA (Personal Digital Assistant), which may include a radio frequency receiver, a pager, internet/intranet access, a web browser, a notepad, a calendar and/or a GPS (Global Positioning System) receiver; or a conventional laptop and/or palmtop computer or other device having and/or including a radio frequency receiver. As used herein, a "client" or "terminal device" may be portable, transportable, installed in a vehicle (aeronautical, maritime, and/or land-based), or situated and/or configured to operate locally and/or in a distributed fashion at any other location(s) on earth and/or in space. The "client" or "terminal device" used herein may also be a communication terminal, an internet terminal, or a music/video playing terminal, such as a PDA, an MID (Mobile Internet Device) and/or a mobile phone with a music/video playing function, and may also be a smart TV, a set-top box, and the like.
The hardware referred to by the names "server", "client", "service node", etc. is essentially a computer device with performance comparable to a personal computer; it is a hardware device having the necessary components prescribed by the von Neumann principle, such as a central processing unit (including an arithmetic unit and a controller), a memory, an input device, and an output device. A computer program is stored in the memory, and the central processing unit loads the program stored in the external memory into the internal memory, runs it, executes its instructions, and interacts with the input and output devices to accomplish specific functions.
It should be noted that the concept of "server" as referred to in this application can be extended to the case of a server cluster. According to the network deployment principle understood by those skilled in the art, the servers should be logically divided, and in physical space, the servers may be independent from each other but can be called through an interface, or may be integrated into one physical computer or a set of computer clusters. Those skilled in the art will appreciate this variation and should not be so limited as to restrict the implementation of the network deployment of the present application.
Referring to fig. 1, fig. 1 is a schematic view of an application scenario of the co-hosted live battle interaction method provided in the embodiment of the present application. The application scenario includes an anchor client 101, a server 102, and a viewer client 103, and the anchor client 101 and the viewer client 103 interact with each other through the server 102.
The anchor client 101 is a client that transmits a live video, and is generally a client used by an anchor (i.e., a live anchor user) in live streaming.
The viewer client 103 is the end that receives and watches the live video, and is typically the client used by a viewer watching the webcast (i.e., a live viewer user).
The hardware underlying the anchor client 101 and the viewer client 103 is essentially a computer device; specifically, as shown in fig. 1, it may be a computer device such as a smartphone, a smart interactive tablet, or a personal computer. Both the anchor client 101 and the viewer client 103 may access the internet via known network access methods to establish a data communication link with the server 102.
The server 102, which is a business server, may be responsible for further connecting related streaming servers, gift servers, and other servers providing related support, etc., to form a logically associated server cluster to provide services for related terminal devices, such as the anchor client 101 and the viewer client 103 shown in fig. 1.
In the embodiment of the present application, the anchor client 101 and the audience client 103 may join in the same live broadcast room (i.e., a live broadcast channel), where the live broadcast room is a chat room implemented by means of an internet technology, and generally has an audio/video broadcast control function. The anchor user is live in the live room through the anchor client 101, and the audience of the audience client 103 can log in the server 102 to enter the live room to watch the live.
In the live broadcast room, interaction between the anchor and the viewers can be realized through known online interaction modes such as voice, video, and text; generally, the anchor performs for the viewer users in the form of audio/video streams, and economic transactions can also take place during the interaction. Of course, the application form of the live broadcast room is not limited to online entertainment, and can also be extended to other relevant scenarios, such as a video conference scenario, a product recommendation and sales scenario, and any other scenario requiring similar interaction.
Specifically, the viewer watches live broadcast as follows: a viewer may click on a live application installed on the viewer client 103 and choose to enter any one of the live rooms, triggering the viewer client 103 to load a live room interface for the viewer, the live room interface including a number of interactive components, for example: the video component, the virtual gift bar component, the public screen component and the like can enable audiences to watch live broadcast in the live broadcast room by loading the interactive components, and perform various online interactions, wherein the online interaction modes comprise but are not limited to presenting virtual gifts, speaking on the public screen and the like.
In this embodiment, the server 102 may further establish a co-hosting session connection between the anchor clients 101 to carry out co-hosted live broadcasting. The session connection may be established in a random matching mode or a friend mode.
In the random matching mode, the server 102 establishes a co-hosting session connection for several anchor clients 101 that send co-hosted live broadcast requests, according to a certain co-hosting session start rule. After the co-hosting session connection is established, the clients that have joined the live broadcast room can acquire the audio/video stream data corresponding to the several anchor identifiers and output it in the live broadcast room, so that users (including viewers and anchors) entering the live broadcast room can see the real-time live broadcast of several anchors in the live broadcast room.
In the friend mode, an anchor can designate one or more friend anchors to co-host with. After the server 102 receives the co-hosting confirmation information from the anchor clients 101 corresponding to those friend anchors, the server 102 establishes a co-hosting session connection between the anchor client 101 corresponding to the anchor identifier and the anchor clients 101 corresponding to the friend anchor identifiers. Similarly, after the co-hosting session connection is established, users (including viewers and anchors) entering the live broadcast room can see the real-time live broadcast of several anchors in the live broadcast room.
In the embodiments of the application, anchors can carry out a variety of battle interaction play modes through co-hosted live broadcasting, increasing anchor revenue in the form of battles and also effectively promoting interactivity between anchors and viewers. However, because the way an anchor earns battle score in current battle interaction play modes is limited, the entertainment value of these play modes is low, and it is difficult to guarantee the anchors' revenue and achieve traffic introduction.
Based on the above, an embodiment of the application provides a co-hosted live battle interaction method. Referring to fig. 2, fig. 2 is a schematic flow chart of the co-hosted live battle interaction method according to the first embodiment of the present application, including the following steps:
S101: the server, in response to a co-hosted live battle start instruction, parses the instruction to obtain the anchor identifiers and establishes a co-hosting session connection among the anchor clients corresponding to the anchor identifiers.
S102: a client that has joined the live broadcast room acquires audio/video stream data and outputs it in the live broadcast room; the live broadcast room includes the live broadcast rooms created by the anchors corresponding to the anchor identifiers, and the audio/video stream data includes the audio/video stream data corresponding to each anchor identifier.
S103: a client that has joined the live broadcast room, in response to an animation display instruction, acquires video stream data mixed with animation data and, according to that video stream data, displays in the video window of the live broadcast room an animation of a virtual object performing several actions.
S104: the server, in response to an action-recognition-success instruction, updates the battle score corresponding to the target anchor identifier; the action-recognition-success instruction is issued when it is recognized, from the video stream data corresponding to the target anchor identifier, that the target anchor has imitated the virtual object performing the several actions.
S105: the server, in response to a co-hosted live battle end instruction, acquires the battle score corresponding to each anchor identifier, obtains the co-hosted live battle result according to those battle scores, and outputs the co-hosted live battle result in the live broadcast room.
In this embodiment, the co-hosted live battle interaction method is described from the perspective of two execution bodies, the client and the server. The clients include anchor clients and viewer clients.
Before carrying out co-hosted live battle interaction, an anchor first needs to start broadcasting. Specifically, the anchor can tap into the live broadcast application and enter the go-live interface, and by interacting with the go-live control in that interface trigger the anchor client to send a go-live instruction to the server. The server responds to the go-live instruction and sends live broadcast room data to the anchor client, and the anchor client loads the live broadcast room interface according to that data and plays, in the live broadcast room, the audio/video stream data collected by the anchor client; at this point viewers can also enter the live broadcast room to watch the live broadcast.
Play mode components are loaded in the live broadcast room interface, and the anchor can start the corresponding play mode by interacting with a play mode component, thereby improving the interactive experience between viewers and the anchor.
Specifically, please refer to fig. 3, which is a schematic display diagram of play mode components in the live broadcast room interface according to an embodiment of the present application. Several play mode components are shown in fig. 3, such as a Happy Battle component 31 and a Battle PK component 32.
Because many of the battle play modes provided in the live broadcast room require at least two anchors to cooperate, if an anchor starts a battle play mode, the server needs to establish a co-hosting session connection between the anchors so that the battle interaction takes place in a co-hosted live broadcast scenario.
Therefore, before describing step S101 in detail, the embodiment of the present application first needs to describe the conditions that trigger the server to issue the co-hosted live battle start instruction, which are as follows:
In an optional embodiment, before the server executes step S101, the server responds to battle play mode start requests sent by anchor clients, parses each request to obtain the battle play mode identifier, selects at least two anchor clients that have sent battle play mode start requests containing the same battle play mode identifier, generates a co-hosted live battle start instruction according to the anchor identifiers corresponding to those anchor clients, and issues the co-hosted live battle start instruction.
In this embodiment, the server randomly matches anchors who have started the same battle play mode and establishes a co-hosting session connection for the corresponding anchor clients.
It will be appreciated that different battle play modes require different numbers of anchors. For example, the Happy Battle play mode requires the cooperation of two anchor clients, so the server randomly selects two anchor clients that have sent battle play mode start requests containing the Happy Battle play mode identifier and establishes a co-hosting session connection for them.
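The following is a minimal sketch, in Python, of how such server-side random matching might pair anchor clients that request the same battle play mode. All names here (e.g. PlayRequest, start_battle) are illustrative assumptions rather than the patent's actual implementation.

```python
import random
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class PlayRequest:
    anchor_id: str         # anchor identifier parsed from the start request
    play_id: str           # battle play mode identifier
    required_anchors: int  # how many anchors this play mode needs (e.g. 2)

# pending start requests, grouped by battle play mode identifier
pending: dict[str, list[PlayRequest]] = defaultdict(list)

def start_battle(anchor_ids, play_id):
    """Placeholder for issuing the co-hosted live battle start instruction."""
    print(f"co-hosted battle start: play={play_id}, anchors={anchor_ids}")

def on_play_start_request(req: PlayRequest):
    """Queue the request, then try to form a random match for its play mode."""
    pending[req.play_id].append(req)
    group = pending[req.play_id]
    if len(group) >= req.required_anchors:
        # randomly select the required number of anchors for this play mode
        chosen = random.sample(group, req.required_anchors)
        for r in chosen:
            group.remove(r)
        start_battle([r.anchor_id for r in chosen], req.play_id)
```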
In addition, an anchor can also start the interactive play mode in the friend mode. Specifically, the anchor client first obtains the anchor identifier of the co-hosting anchor (who is a friend of the current anchor) selected by the current anchor and the battle play mode identifier, generates a battle play mode start request according to these identifiers, and sends it to the server. The server responds to the battle play mode start request to obtain the anchor identifier and the battle play mode identifier, and then sends a co-hosted live broadcast request to the corresponding anchor client; the co-hosted live broadcast request contains the anchor identifier requesting the co-hosting and the battle play mode identifier, so that the anchor who receives the co-hosting invitation can determine which anchor is currently inviting them and which battle play mode is to be played. After the server receives the co-hosting confirmation information sent by the corresponding anchor client, it issues the co-hosted live battle start instruction, as sketched below.
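A simplified sketch of the friend-mode invitation exchange described above, assuming a generic server object with send and start_battle operations; the message fields and names are assumptions, not the patent's protocol.

```python
def on_friend_battle_request(server, initiator_id: str, friend_anchor_id: str, play_id: str):
    """Friend mode: forward a co-hosting invitation to the chosen friend anchor."""
    server.send(friend_anchor_id, {
        "type": "cohost_invite",      # co-hosted live broadcast request
        "from_anchor": initiator_id,  # which anchor is inviting
        "play_id": play_id,           # which battle play mode is proposed
    })

def on_cohost_confirm(server, initiator_id: str, friend_anchor_id: str, play_id: str, accepted: bool):
    """Called when the invited anchor client returns its co-hosting confirmation."""
    if accepted:
        # issue the co-hosted live battle start instruction for both anchors
        server.start_battle([initiator_id, friend_anchor_id], play_id)
```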
In another alternative embodiment, in some battle play modes, anchors can form teams and carry out live interaction in team form, for example the team Battle PK play mode. The team can be formed in friend mode or random mode, and the team-forming process is not described in detail here.
The following will explain steps S101 to S102, specifically as follows:
The server, in response to the co-hosted live battle start instruction, parses the instruction to obtain the anchor identifiers and establishes a co-hosting session connection among the anchor clients corresponding to the anchor identifiers; and a client that has joined the live broadcast room acquires the audio/video stream data and outputs it in the live broadcast room.
The live broadcast room includes the live broadcast rooms created by the anchors corresponding to the anchor identifiers, the clients that have joined the live broadcast room include anchor clients and viewer clients, and the audio/video stream data is mixed audio/video stream data, specifically including the audio/video stream data corresponding to each anchor identifier.
In a co-hosted live broadcast scenario, the live content of each anchor is played together in the live broadcast room. Therefore, the server pulls the audio/video stream data corresponding to each anchor identifier from the respective anchor client.
In an optional embodiment, after the server pulls the audio/video stream data corresponding to each anchor identifier from the respective anchor client, the server performs a stream-mixing operation on that data to obtain the mixed audio/video stream data, and then sends it to the clients that have joined the live broadcast room; those clients acquire the audio/video stream data and output it in the live broadcast room.
In another optional embodiment, after the server pulls the audio/video stream data corresponding to each anchor identifier from the respective anchor client, the server sends the audio/video stream data corresponding to each anchor identifier to the anchor clients. Optionally, the server may send to a given anchor client only the audio/video stream data corresponding to the other co-hosted anchors' identifiers, thereby reducing some traffic overhead. After an anchor client acquires the audio/video stream data corresponding to each anchor identifier, it performs the stream-mixing operation on that data to obtain the mixed audio/video stream data; finally, the server delivers the mixed audio/video stream data to the viewer clients that have joined the live broadcast room, which output it in the live broadcast room.
In other optional embodiments, after the server pulls the audio/video stream data corresponding to each anchor identifier from the respective anchor client, the server sends the audio/video stream data corresponding to each anchor identifier to the clients that have joined the live broadcast room (including anchor clients and viewer clients); after those clients acquire the audio/video stream data corresponding to each anchor identifier, they perform the stream-mixing operation on it to obtain the mixed audio/video stream data and output it in the live broadcast room.
In the embodiments of the present application, the entity that performs the stream-mixing operation on the audio/video stream data corresponding to each anchor identifier is not limited; it may be the server, an anchor client, or a viewer client.
In an optional embodiment, the server includes a business server and a streaming server; the business server handles the business flow, while the streaming server handles the related stream data and performs the stream-mixing operation. A simplified sketch of such a mixing step is given below.
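As a rough illustration of the video side of the stream-mixing operation (wherever it runs: streaming server, anchor client, or viewer client), the sketch below composites the decoded frames of two co-hosted anchors side by side into one mixed frame, matching the left/right layout of fig. 4. It assumes decoded RGB frames as NumPy arrays and ignores audio mixing, scaling quality, and encoding.

```python
import numpy as np

def resize(frame: np.ndarray, shape: tuple[int, int]) -> np.ndarray:
    """Nearest-neighbour resize, kept dependency-free for the sketch."""
    h, w = shape
    ys = np.linspace(0, frame.shape[0] - 1, h).astype(int)
    xs = np.linspace(0, frame.shape[1] - 1, w).astype(int)
    return frame[ys][:, xs]

def mix_frames(frame_a: np.ndarray, frame_b: np.ndarray,
               out_w: int = 1280, out_h: int = 360) -> np.ndarray:
    """Compose two anchors' frames into one video window, split equally left/right."""
    half_w = out_w // 2
    mixed = np.zeros((out_h, out_w, 3), dtype=np.uint8)
    # each anchor frame is scaled to (out_h, half_w) and pasted into its display area
    mixed[:, :half_w] = resize(frame_a, (out_h, half_w))   # anchor A, left area
    mixed[:, half_w:] = resize(frame_b, (out_h, half_w))   # anchor B, right area
    return mixed
```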
Referring to fig. 4, fig. 4 is a schematic display diagram of the live broadcast room interface after the co-hosted live battle interaction is started according to the embodiment of the application. Fig. 4 shows a video frame of two anchors carrying out co-hosted live battle interaction, where the video display area 41 corresponding to anchor A is on the left side of the video window and the video display area 42 corresponding to anchor B is on the right side. In fig. 4, the video display area 41 and the video display area 42 split the video window equally left and right.
It can be understood that, in other battle interaction play modes, when more anchors carry out co-hosted live battle interaction, the layout of the video display areas corresponding to the anchors in the video window may also change; this is not described in detail here.
Regarding step S103, a client that has joined the live broadcast room, in response to the animation display instruction, acquires the video stream data mixed with animation data and, according to that video stream data, displays an animation of a virtual object performing several actions in the video window of the live broadcast room.
The animation data is data for presenting a virtual object performing several actions, and includes the animation picture (which may also be referred to as an animation frame) corresponding to each action performed by the virtual object.
The virtual object may be a human virtual object or an animal virtual object.
The types of the plurality of actions performed by the virtual object are not limited herein, and may be limb actions or facial actions.
The specific operations for mixing the animation data into the video stream data are not elaborated in this embodiment; they may be performed by the anchor client or the server, as described in the second embodiment in connection with mixing animation data into video stream data.
In this embodiment, since the video stream data mixed with the animation data is output in the video window of the live broadcast room, the picture played in the video window contains both the video picture and the animation picture, so that the anchors and viewers can see, in the video window of the live broadcast room, the animation of the virtual object performing several actions.
Furthermore, in an alternative embodiment, when mixing the animation data into the video stream data, prompt data can be mixed in at the same time, where the prompt data is used to prompt imitation of the several actions performed by the virtual object, for example: the prompt data may be "follow my moves to increase the battle score", or the like.
In an alternative embodiment, before step S103, the target anchor identifier also needs to be obtained, so as to determine which anchor can increase its battle score through action imitation.
Specifically, the server may obtain the battle score corresponding to each anchor identifier and take the anchor identifier with the lowest battle score as the target anchor identifier. That is, the anchor with the lowest battle score among the co-hosted anchors is selected as the target anchor, so that the target anchor can increase its battle score by imitating the virtual object performing several actions.
In this case, because the target anchor with the relatively lower battle score can increase its battle score through action imitation, the other co-hosted anchors who want the target anchor to keep performing imitations must keep their own battle scores in the lead. This strengthens the battle atmosphere in the live broadcast room, encourages more viewers to interact with the anchors, and ultimately increases the revenue of all the anchors participating in this round of battle interaction.
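The target anchor selection described above reduces to picking the minimum-score anchor; a minimal sketch, assuming battle scores are kept in a simple dictionary keyed by anchor identifier:

```python
def pick_target_anchor(battle_scores: dict[str, int]) -> str:
    """Select, among the co-hosted anchors, the anchor identifier with the
    lowest current battle score as the target anchor for action imitation."""
    return min(battle_scores, key=battle_scores.get)

# e.g. pick_target_anchor({"anchor_a": 120, "anchor_b": 85}) -> "anchor_b"
```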
Conventionally, in battle interaction, viewers can boost the battle score of the corresponding anchor by giving virtual gifts.
In addition to the above manner, the server may also respond to a virtual gift giving instruction, parse it to obtain the virtual gift identifier, and, if the virtual gift identifier is an action-imitation virtual gift identifier, obtain the anchor identifier corresponding to the virtual gift receiver as the target anchor identifier.
That is, a viewer may present an action-imitation virtual gift to an anchor, so that the anchor who receives the virtual gift can also increase the battle score by imitating the virtual object performing several actions; the anchor identifier corresponding to that receiving anchor is then the target anchor identifier.
In this case, viewers presenting action-imitation virtual gifts to the anchor helps the anchor improve the battle score and also helps generate entertaining content, thereby increasing the anchor's revenue.
Referring to fig. 5, fig. 5 is a schematic display diagram of an action-imitation virtual gift according to an embodiment of the present application. As shown in fig. 5, the virtual gift bar 51 contains several types of virtual gifts, and an action-imitation virtual gift sign 512 is displayed above the action-imitation virtual gift 511; the sign 512 may take a form such as "dance" or "imitate" to indicate to viewers which virtual gifts are action-imitation virtual gifts.
In an alternative embodiment, the server further comprises a gift server, and the processing operations described above in connection with the virtual gift may be performed by the gift server, for example along the lines sketched below.
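A minimal sketch of handling the virtual gift giving instruction on the gift-handling side; the gift identifiers, state dictionary, and function names are assumptions introduced for illustration only.

```python
ACTION_IMITATION_GIFT_IDS = {"dance_gift", "imitation_gift"}  # assumed identifiers

def on_gift_given(gift_id: str, receiver_anchor_id: str, state: dict) -> None:
    """Parse a virtual-gift-giving instruction; if the gift is an action-imitation
    virtual gift, take the receiving anchor as the target anchor for imitation."""
    if gift_id in ACTION_IMITATION_GIFT_IDS:
        state["target_anchor_id"] = receiver_anchor_id
        state["pending_gift_id"] = gift_id  # kept for the later score calculation
        # next step: trigger the animation mixing / animation display instructions
```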
In an optional embodiment, the target anchor name corresponding to the target anchor identifier can be added to the prompt data, so that the prompt data not only prompts imitation of the several actions performed by the virtual object but also indicates which anchor can increase the battle score through action imitation.
Regarding step S104, the server, in response to the action-recognition-success instruction, updates the battle score corresponding to the target anchor identifier; the action-recognition-success instruction is issued when it is recognized, from the video stream data corresponding to the target anchor identifier, that the target anchor has imitated the virtual object performing the several actions.
The execution subject of the action recognition operation may be the server or the anchor client; in the embodiments of the present application it is described from the perspective of the anchor client.
The specific steps are as follows: after the animation of the virtual object performing several actions is displayed in the video window of the live broadcast room, the anchor client obtains the video stream data corresponding to the target anchor identifier, analyzes and recognizes it through a preset action recognition algorithm, and determines whether it can be recognized, from the video stream data corresponding to the target anchor identifier, that the target anchor has imitated the virtual object performing the several actions; if so, it sends an action-recognition-success instruction to the server.
In an optional embodiment, the action recognition algorithm may divide the video stream data corresponding to the target anchor identifier into frames, recognize the anchor's contour from each video frame, and compare the anchor's contour in each video frame with the virtual object's contour in each animation frame, frame by frame, so as to recognize whether the target anchor has successfully imitated the virtual object performing the several actions.
In other alternative embodiments, other existing motion recognition algorithms may be used for recognition, and are not limited in detail herein.
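A highly simplified sketch of the frame-by-frame contour-comparison idea above. The contour-extraction and similarity helpers are pluggable assumptions (a real implementation would likely use a pose-estimation or segmentation model), and the threshold is an illustrative value, not one specified by the patent.

```python
def recognize_imitation(anchor_frames, animation_frames,
                        extract_contour, contour_similarity,
                        threshold: float = 0.8) -> bool:
    """Compare the target anchor's contour with the virtual object's contour,
    frame by frame, and decide whether the imitation succeeded."""
    scores = []
    for video_frame, anim_frame in zip(anchor_frames, animation_frames):
        anchor_contour = extract_contour(video_frame)  # anchor outline in this frame
        object_contour = extract_contour(anim_frame)   # virtual object outline
        scores.append(contour_similarity(anchor_contour, object_contour))
    if not scores:
        return False
    # a success here is what would trigger the action-recognition-success instruction
    return sum(scores) / len(scores) >= threshold
```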
The server, in response to the action-recognition-success instruction, updates the battle score corresponding to the target anchor identifier.
Specifically, the server may obtain the battle score to be added according to a preset battle score increase rule, and update the battle score corresponding to the target anchor identifier according to the battle score to be added.
For example, the server may obtain the battle score to be added according to the imitation difficulty coefficient of the animation of the virtual object performing the several actions; the higher the difficulty coefficient, the higher the battle score to be added.
Alternatively, the server may obtain the battle score to be added according to the similarity with which the target anchor imitated the virtual object performing the several actions; the higher the similarity, the higher the battle score to be added.
In an optional embodiment, if a viewer gave an action-imitation virtual gift so that the anchor obtained the imitation opportunity, the value of the action-imitation virtual gift can be used to determine the battle score to be added and update the battle score corresponding to the target anchor identifier.
Specifically, the server acquires the virtual gift value corresponding to the action-imitation virtual gift identifier, obtains the battle score to be added according to the virtual gift value and a random parameter within a preset range, and updates the battle score corresponding to the target anchor identifier according to the battle score to be added.
The preset range may be the closed interval from 0.1 to 1.
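A minimal sketch of the gift-based score update described above: the added score is the virtual gift value multiplied by a random parameter drawn from the interval [0.1, 1]; the rounding and the dictionary-based score store are assumptions for illustration.

```python
import random

def battle_score_increase(gift_value: float, low: float = 0.1, high: float = 1.0) -> int:
    """Battle score to be added = virtual gift value x random parameter in [0.1, 1]."""
    factor = random.uniform(low, high)
    return round(gift_value * factor)

def update_target_score(scores: dict[str, int], target_anchor_id: str, gift_value: float) -> None:
    """Update the battle score corresponding to the target anchor identifier."""
    scores[target_anchor_id] = scores.get(target_anchor_id, 0) + battle_score_increase(gift_value)
```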
Regarding step S105, when the co-hosted live battle meets a preset end condition, for example when the duration of the co-hosted live battle reaches a preset length, the server is triggered to issue the co-hosted live battle end instruction. The server then responds to the co-hosted live battle end instruction, acquires the battle score corresponding to each anchor identifier, obtains the co-hosted live battle result according to those battle scores, and outputs the co-hosted live battle result in the live broadcast room.
In an optional embodiment, the server may obtain, for each co-hosted anchor in the co-hosted live battle, the battle score gained by imitating the virtual object performing several actions, and give battle exposure to the anchors whose gained battle score is relatively high, so as to form battle guidance.
In the co-hosted live battle interaction of the embodiments of the application, animation data is mixed into the video stream data so that an animation of the virtual object performing several actions is displayed in the video window of the live broadcast room; when it is recognized, from the video stream data corresponding to the target anchor identifier, that the target anchor has imitated the virtual object performing the several actions, the battle score corresponding to the target anchor identifier can be increased. This not only improves the entertainment value of co-hosted live battle interaction, but also, to a certain extent, improves the anchors' revenue and achieves traffic introduction, promoting the anchors' motivation to go on air and the interactivity of co-hosted live battles.
Referring to fig. 6, fig. 6 is a schematic flow chart of a co-hosted live battle interaction method according to a second embodiment of the present application. The method comprises steps S201 to S207, as follows:
S201: the server, in response to a co-hosted live battle start instruction, parses the instruction to obtain the anchor identifiers and establishes a co-hosting session connection among the anchor clients corresponding to the anchor identifiers.
S202: a client that has joined the live broadcast room acquires audio/video stream data and outputs it in the live broadcast room; the live broadcast room includes the live broadcast rooms created by the anchors corresponding to the anchor identifiers, and the audio/video stream data includes the audio/video stream data corresponding to each anchor identifier.
S203: the server, in response to an animation mixing instruction, acquires the animation data and the display position of the animation in the video window.
S204: the server mixes the animation data into the video stream data according to the animation data and the display position of the animation in the video window, obtains the video stream data mixed with animation data, and sends an animation display instruction to the clients that have joined the live broadcast room.
S205: a client that has joined the live broadcast room, in response to the animation display instruction, acquires the video stream data mixed with animation data and, according to that video stream data, displays in the video window of the live broadcast room an animation of the virtual object performing several actions.
S206: the server, in response to an action-recognition-success instruction, updates the battle score corresponding to the target anchor identifier; the action-recognition-success instruction is issued when it is recognized, from the video stream data corresponding to the target anchor identifier, that the target anchor has imitated the virtual object performing the several actions.
S207: the server, in response to a co-hosted live battle end instruction, acquires the battle score corresponding to each anchor identifier, obtains the co-hosted live battle result according to those battle scores, and outputs the co-hosted live battle result in the live broadcast room.
Steps S201 to S202 are the same as steps S101 to S102 in the first embodiment, and steps S205 to S207 are the same as steps S103 to S105 in the first embodiment; steps S203 to S204 are described in detail below.
In this embodiment, the execution subject that mixes the animation data into the video stream data is the server.
Specifically, the server responds to the animation mixing instruction and obtains the animation data and the display position of the animation in the video window.
The animation mixing instruction is used to trigger the server to perform data stream mixing. In an alternative embodiment, this server may be the streaming server mentioned in the first embodiment.
The animation data has already been explained in the first embodiment, and will not be described in detail here.
The display position of the animation in the video window is used to determine the position in the video stream data at which the animation data is mixed in.
In an alternative embodiment, the display position may be any position in the video window; in another alternative embodiment, the display position may be the lower center of the video window in order to reduce occlusion of the anchors' video pictures. In other alternative embodiments, the display position may also be a preset position within the video display area corresponding to the target anchor. A rough sketch of the mixing step itself is given below.
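As an illustration of mixing the animation data into the video stream at the display position, the sketch below overlays one animation picture onto one mixed video frame at an (x, y) position measured from the video window's top-left corner. It assumes decoded NumPy frames and omits alpha blending, bounds checking, and per-frame synchronisation.

```python
import numpy as np

def mix_animation_into_frame(video_frame: np.ndarray,
                             animation_frame: np.ndarray,
                             position: tuple[int, int]) -> np.ndarray:
    """Paste one animation picture onto one mixed video frame at the display
    position (x, y); a real mixer would also handle transparency and clipping."""
    x, y = position
    h, w = animation_frame.shape[:2]
    out = video_frame.copy()
    out[y:y + h, x:x + w] = animation_frame  # simple opaque paste at the position
    return out
```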
In an alternative embodiment, before step S203 is executed, the anchor client may determine the display position of the animation in the video window; please refer to fig. 7, which specifically includes steps S208 to S210, as follows:
S208: the server sends an animation mixing preparation instruction to the anchor client, where the animation mixing preparation instruction contains the target anchor identifier.
S209: the anchor client, in response to the animation mixing preparation instruction, pulls the animation configuration resource from the server and parses it to obtain the animation data and the display position of the animation in the video display area.
S210: the anchor client acquires the size information of the video window and the layout information of the video display area corresponding to each anchor identifier in the video window; obtains, according to that size information and layout information, the position of the video display area corresponding to the target anchor identifier in the video window; obtains the display position of the animation in the video window according to the display position of the animation in the video display area and the position of the video display area corresponding to the target anchor identifier in the video window; and sends the animation mixing instruction to the server.
In this embodiment, after the server acquires the target anchor identifier, an animation mixed-flow preparation instruction is generated according to the target anchor identifier. Wherein the animation mixed flow preparation instruction is used for triggering the anchor client to determine the display position of the animation in the video window.
Specifically, the anchor client sends an animation configuration resource request to the server in response to the animation mixed flow preparation instruction, and the server sends the animation configuration resource to the anchor client in response to the animation configuration resource request. And the anchor client analyzes the animation configuration resource to obtain animation data and the display position of the animation in the video display area.
Wherein the display position of the animation in the video display area is used to determine where to display the animation in the video display area, for example: the animation is displayed in a position such as a lower left corner or a lower right corner of the video display area.
In an alternative embodiment, the animation configuration resource is a compressed file, so that the anchor client needs to parse the compressed file first and then extract animation data and a display position of the animation in the video display area from the compressed file.
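A minimal sketch of this parsing step follows. The document does not specify the resource format, so this example assumes a gzip-compressed JSON file; the field names (animationUrl, frameCount, anchorX, anchorY) and the function name parseAnimationResource are illustrative only.

```typescript
import { gunzipSync } from "zlib";
import { readFileSync } from "fs";

interface AnimationConfig {
  animationUrl: string;   // where the animation frames/sprite sheet can be fetched
  frameCount: number;     // number of frames in the action animation
  anchorX: number;        // display position inside the video display area (pixels)
  anchorY: number;
}

// Decompress the pulled resource and extract the animation data and display position.
function parseAnimationResource(path: string): AnimationConfig {
  const compressed = readFileSync(path);
  const json = gunzipSync(compressed).toString("utf8");
  return JSON.parse(json) as AnimationConfig;
}
```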
After obtaining the animation data and the display position of the animation in the video display area, the anchor client also obtains the size information of the video window and the layout information of the video display area corresponding to each anchor identifier in the video window.
The size of the video window may vary with the current battle play mode, so the current size information must be obtained in order to compute the display position of the animation in the video window accurately.
The layout information of the video display areas in the video window is used to locate the video display area corresponding to the target anchor identifier. For example, when two anchors hold a mic-connected live broadcast, the layout information splits the video window into two equal halves, one video display area per anchor; see the video display area 41 and the video display area 42 in fig. 4, where the video display area 41 occupies the left side of the video window and the video display area 42 occupies the right side.
The anchor client then derives the position, within the video window, of the video display area corresponding to the target anchor identifier from the size information of the video window and the layout information of the video display areas.
For example, if the size information consists of the display width and display height of the video window, and the window is split equally between the video display region 41 and the video display region 42, then the left edge of the video display region 42 is width/2 from the left edge of the video window, and its top edge coincides with the top edge of the video window (distance 0).
The anchor client can then combine the display position of the animation in the video display area with the position of the video display area corresponding to the target anchor identifier to obtain the display position of the animation in the video window, and finally sends the animation mixed-flow instruction to the server.
For example, if the animation is displayed in the lower left corner of the video display area, the left edge of the animation display area is width/2 from the left edge of the video window, and its top edge is offset from the top edge of the video window by the display height of the video window minus the display height of the animation display area.
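The position arithmetic above can be expressed compactly. The sketch below assumes the two-anchor layout that splits the video window into equal left and right halves; the names Rect, displayAreaFor, and animationPositionInWindow are illustrative, not taken from the document.

```typescript
interface Rect { x: number; y: number; width: number; height: number; }

// Two-anchor layout: index 0 gets the left half of the video window, index 1 the right half.
function displayAreaFor(anchorIndex: number, windowW: number, windowH: number): Rect {
  return { x: anchorIndex === 0 ? 0 : windowW / 2, y: 0, width: windowW / 2, height: windowH };
}

// Lower-left corner of the display area, expressed in video-window coordinates.
function animationPositionInWindow(area: Rect, animH: number): { x: number; y: number } {
  return { x: area.x, y: area.y + area.height - animH };
}

// Example: a 1280x720 window, the right-hand anchor, a 200 px tall animation.
const area = displayAreaFor(1, 1280, 720);        // { x: 640, y: 0, width: 640, height: 720 }
const pos = animationPositionInWindow(area, 200); // { x: 640, y: 520 } -- left edge at width/2
```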
In this embodiment, having the anchor client determine the display position of the animation in the video window reduces the computational overhead of the server and improves stream-mixing efficiency. Moreover, the display position obtained by the anchor client places the animation inside the video display area corresponding to the target anchor identifier, which intuitively prompts the target anchor to increase the fighting score through action imitation.
Specifically, on the basis of steps S208 to S210, displaying the animation of the virtual object performing several actions in the video window of the live broadcast room according to the video stream data of the mixed-flow animation data includes the following step: displaying the animation of the virtual object performing several actions in the video display area corresponding to the target anchor identifier according to the video stream data of the mixed-flow animation data.
Referring to fig. 8, fig. 8 is a schematic display diagram of video stream data of mixed-flow animation data according to an embodiment of the present application. In fig. 8, the video display area 81 corresponding to the target anchor occupies the left side of the video window, and the preset display position of the animation within the video display area is the lower left corner, so the animation 82 of the virtual object performing several actions is displayed in the lower left corner of the video display area 81. Only one frame of the animation is shown in fig. 8; in the live broadcast, consecutive frames of the animation are displayed.
In an alternative embodiment, the displayed animation of the virtual object performing several actions has a corresponding display duration, for example 3 s, and the server may repeat the mixed-flow operation of the animation data and the video stream data a preset number of display times.
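A hedged sketch of this server-side repetition is shown below: for each display repetition, the streaming server overlays the animation frames onto the outgoing video frames at the position received in the animation mixed-flow instruction. The frame types and the overlayFrame stub are assumptions standing in for whatever compositing primitive the streaming server actually provides.

```typescript
interface VideoFrame { timestampMs: number; pixels: Uint8Array; }
interface AnimationFrame { pixels: Uint8Array; width: number; height: number; }

// Stand-in for the real compositing primitive; it returns the video frame unchanged
// so that the sketch stays self-contained.
function overlayFrame(video: VideoFrame, anim: AnimationFrame, x: number, y: number): VideoFrame {
  return video;
}

function mixAnimation(
  videoFrames: VideoFrame[],     // frames pulled from the target anchor's stream
  animFrames: AnimationFrame[],  // decoded frames of the action animation (e.g. a 3 s clip)
  pos: { x: number; y: number }, // display position of the animation in the video window
  repeatCount: number            // preset number of display times
): VideoFrame[] {
  const out: VideoFrame[] = [];
  const totalAnimFrames = animFrames.length * repeatCount;
  let animIndex = 0;
  for (const frame of videoFrames) {
    if (animIndex < totalAnimFrames) {
      out.push(overlayFrame(frame, animFrames[animIndex % animFrames.length], pos.x, pos.y));
      animIndex++;
    } else {
      out.push(frame); // the animation has finished; pass the video frame through untouched
    }
  }
  return out;
}
```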
In an optional embodiment, the client joined to the live broadcast room, in response to the action recognition success instruction, obtains the video stream data of the mixed-flow animation data and fighting-score update prompt data, and displays, according to that video stream data, the animation of the virtual object performing several actions together with the increased fighting score in the video window of the live broadcast room.
In this embodiment, once it is recognized from the video stream data corresponding to the target anchor identifier that the target anchor has imitated the several actions of the virtual object, the audience and anchors in the live broadcast room should be informed intuitively that the imitation succeeded and by how much the fighting score increased. The client joined to the live broadcast room therefore responds to the action recognition success instruction, obtains the video stream data of the mixed-flow animation data and fighting-score update prompt data, and displays the animation of the virtual object performing several actions and the increased fighting score in the video window of the live broadcast room.
The mixing of the animation data and the fighting-score update prompt data into the video stream data may be performed by the server, the anchor client, or an audience client; this is not limited here.
Referring to fig. 9, fig. 9 is a schematic display diagram of video stream data mixed with animation data and fighting-score update prompt data according to an embodiment of the present application. As shown in fig. 9, the animation 91 of the virtual object performing several actions and the fighting-score update prompt 92 are displayed.
In this embodiment, mixing in the fighting-score update prompt data both reminds the user that the imitation succeeded and confirms that the fighting score has increased, improving the user experience of the mic-connected live battle interaction.
Referring to fig. 10, fig. 10 is a schematic flow chart of a mic-connected live battle interaction method according to a third embodiment of the present application. The method includes steps S301 to S307, as follows:
S301: the server, in response to the mic-connected live battle start instruction, parses the instruction to obtain the anchor identifiers and establishes a mic-connected session connection between the anchor clients corresponding to the anchor identifiers.
S302: a client joined to the live broadcast room obtains audio and video stream data and outputs it in the live broadcast room; the live broadcast room includes the live broadcast rooms created by the anchors corresponding to the anchor identifiers, and the audio and video stream data includes the audio and video stream data corresponding to each anchor identifier.
S303: the server issues the fighting-score display control data to the clients joined to the live broadcast room.
S304: a client joined to the live broadcast room receives the fighting-score display control data and displays the fighting-score display control in the live broadcast room interface according to that data.
S305: the client joined to the live broadcast room, in response to the animation display instruction, obtains the video stream data of the mixed-flow animation data and displays the animation of the virtual object performing several actions in the video window of the live broadcast room according to that data.
S306: the server, in response to the action recognition success instruction, updates the fighting score corresponding to the target anchor identifier; the action recognition success instruction is issued when it is recognized from the video stream data corresponding to the target anchor identifier that the target anchor has imitated the several actions performed by the virtual object.
S307: the server, in response to the mic-connected live battle end instruction, obtains the fighting score corresponding to each anchor identifier, derives the mic-connected live battle result from those scores, and outputs the result in the live broadcast room.
Steps S301 to S302 are the same as steps S101 to S102 of the first embodiment, and steps S305 to S307 are the same as steps S103 to S105 of the first embodiment; steps S303 to S304 are described in detail below.
In this embodiment, after the mic-connected live battle starts and the audio and video stream data is output in the live broadcast room, the server also sends the fighting-score display control data to the clients joined to the live broadcast room; those clients receive the data and display the fighting-score display control in the live broadcast room interface according to it.
The fighting-score display control data is used to present a fighting-score display control in the live broadcast room, and specifically includes the functional data and the display data of the control.
The functional data is used to show each anchor's real-time fighting score during the mic-connected live battle interaction; the display data determines the display position and display style of the fighting-score display control.
Referring to fig. 11, fig. 11 is a schematic display diagram of a fighting-score display control according to an embodiment of the present application. As shown in fig. 11, the fighting-score display control 111 displays the fighting scores of anchor A and anchor B respectively. The control is rendered as a bar whose segments, one per anchor, are distinguished by color (not shown in the grayscale figure). The overall size of the bar stays fixed; what changes dynamically is the proportion of the bar occupied by each anchor's segment, which is determined by that anchor's fighting score: the higher the score, the larger the proportion. Anchors and audience can therefore follow the change in fighting scores intuitively from the changing proportions of the bar.
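The document does not specify how the control derives the segment sizes, so the following is an assumption-labelled sketch of one way the proportions described above could be computed; the function name segmentWidths is hypothetical.

```typescript
// Split a fixed-width bar between two anchors in proportion to their fighting scores.
function segmentWidths(scoreA: number, scoreB: number, barWidthPx: number): { a: number; b: number } {
  const total = scoreA + scoreB;
  if (total === 0) {
    return { a: barWidthPx / 2, b: barWidthPx / 2 }; // even split before any score is earned
  }
  const a = Math.round((scoreA / total) * barWidthPx);
  return { a, b: barWidthPx - a };                   // the two segments always fill the fixed-size bar
}

// Example: anchor A at 120 points, anchor B at 80 points, a 400 px bar -> { a: 240, b: 160 }
console.log(segmentWidths(120, 80, 400));
```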
In an optional embodiment, fig. 12 is another schematic flow chart of the mic-connected live battle interaction method according to the third embodiment of the present application; after step S306, the method further includes steps S308 to S309, as follows:
S308: the server sends a fighting-score update instruction to the clients joined to the live broadcast room; the fighting-score update instruction includes the target anchor identifier and the updated fighting score corresponding to that identifier.
S309: a client joined to the live broadcast room, in response to the fighting-score update instruction, parses the instruction to obtain the target anchor identifier and its updated fighting score, and displays the updated fighting score corresponding to the target anchor identifier in the fighting-score display control.
In this embodiment, the fighting-score display control is updated dynamically whenever the fighting score changes, so that the audience and anchors can follow the change in fighting scores more intuitively.
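A minimal sketch of how a client joined to the live broadcast room might handle this pushed instruction follows. The message shape (the FIGHT_SCORE_UPDATE tag and field names) and the updateSegment helper are assumptions made for illustration, not part of the document.

```typescript
interface ScoreUpdateMessage {
  type: "FIGHT_SCORE_UPDATE"; // hypothetical message tag
  targetAnchorId: string;     // the target anchor identifier
  updatedScore: number;       // the updated fighting score for that anchor
}

const currentScores = new Map<string, number>();

// Placeholder for the real UI call that repaints the corresponding bar segment.
function updateSegment(anchorId: string, score: number): void {
  console.log(`anchor ${anchorId} -> ${score}`);
}

function onServerMessage(raw: string): void {
  const msg = JSON.parse(raw) as ScoreUpdateMessage;
  if (msg.type !== "FIGHT_SCORE_UPDATE") return; // ignore unrelated push messages
  currentScores.set(msg.targetAnchorId, msg.updatedScore);
  updateSegment(msg.targetAnchorId, msg.updatedScore);
}
```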
Please refer to fig. 13, which is a timing chart of the mic-connected live battle interaction method according to an embodiment of the present application. It gives a more intuitive view of the overall flow so as to aid understanding of the technical solution of the present application. As shown in fig. 13: the anchor client sends a battle-play start request to the service server; the service server responds to the request, determines the anchor identifiers of the mic-connected anchors, and generates a mic-connected live battle start instruction; in response to that instruction, the service server obtains the anchor identifiers and establishes a mic-connected session connection between the corresponding anchor clients. The streaming server pulls the audio and video stream data corresponding to each anchor identifier, mixes it, and sends the mixed audio and video stream data to the clients joined to the live broadcast room (both anchor clients and audience clients), which output it in the live broadcast room. The service server obtains the fighting-score display control data from the battle-play resources and sends it to the clients joined to the live broadcast room, which display the fighting-score display control in the live broadcast room interface according to that data. The service server then obtains the target anchor identifier and sends an animation mixed-flow preparation instruction containing it to the anchor client; the anchor client pulls the animation configuration resource from the service server, parses it to obtain the animation data, determines the display position of the animation in the video window, and sends an animation mixed-flow instruction to the streaming server. The streaming server mixes the animation data into the video stream data according to the animation data and the display position of the animation in the video window, and sends the resulting video stream data to the clients joined to the live broadcast room, which display the animation of the virtual object performing several actions in the video display area corresponding to the target anchor identifier. The anchor client recognizes, from the video stream data corresponding to the target anchor, whether the target anchor has successfully imitated the several actions; if so, it sends an action recognition success instruction to the servers (the streaming server and the service server). The service server, in response to the action recognition success instruction, updates the fighting score corresponding to the target anchor identifier and sends the updated score to the clients joined to the live broadcast room, which display it in the fighting-score display control. The streaming server, in response to the action recognition success instruction, mixes the animation data and the fighting-score update prompt data into the video stream data and sends the resulting video stream data to the clients joined to the live broadcast room, which output it in the video window to display the animation and the increased fighting score. Finally, the service server, in response to the mic-connected live battle end instruction, obtains the mic-connected live battle result and sends it to the clients joined to the live broadcast room, which display it in the live broadcast room interface.
It should be noted that some optional implementations of the first to third embodiments are not shown individually in the timing chart; fig. 13 shows the main steps of the mic-connected live battle interaction method and is only intended to aid understanding of the technical solution of the present application. Implementations not shown in the figures still fall within the protection scope of the present application.
Please refer to fig. 14, which is a schematic structural diagram of a mic-connected live battle interaction system according to a fourth embodiment of the present application. The mic-connected live battle interaction system 14 includes a client 141 and a server 142, and the client 141 includes an anchor client 1411 and an audience client 1412;
the server 142 responds to the mic-connected live battle start instruction, parses it to obtain the anchor identifiers, and establishes a mic-connected session connection between the anchor clients 1411 corresponding to the anchor identifiers;
a client 141 joined to the live broadcast room obtains audio and video stream data and outputs it in the live broadcast room; the live broadcast room includes the live broadcast rooms created by the anchors corresponding to the anchor identifiers, and the audio and video stream data includes the audio and video stream data corresponding to each anchor identifier;
the client 141 joined to the live broadcast room, in response to an animation display instruction, obtains the video stream data of the mixed-flow animation data and displays the animation of the virtual object performing several actions in the video window of the live broadcast room according to that data;
the server 142, in response to the action recognition success instruction, updates the fighting score corresponding to the target anchor identifier; the action recognition success instruction is issued when it is recognized from the video stream data corresponding to the target anchor identifier that the target anchor has imitated the several actions performed by the virtual object;
and the server 142, in response to the mic-connected live battle end instruction, obtains the fighting score corresponding to each anchor identifier, derives the mic-connected live battle result from those scores, and outputs the result in the live broadcast room.
Please refer to fig. 15, which is a schematic structural diagram of a mic-connected live battle interaction device according to a fifth embodiment of the present application. The device may be implemented as all or part of a computer device in software, hardware, or a combination of both. The device 15 includes:
a mic-connected session establishing unit 151, configured to parse, in response to a mic-connected live battle start instruction, the instruction to obtain the anchor identifiers and to establish a mic-connected session connection between the anchor clients corresponding to the anchor identifiers;
a first output unit 152, configured to obtain, for a client joined to the live broadcast room, the audio and video stream data and to output it in the live broadcast room; the live broadcast room includes the live broadcast rooms created by the anchors corresponding to the anchor identifiers, and the audio and video stream data includes the audio and video stream data corresponding to each anchor identifier;
a first response unit 153, configured to obtain, in response to an animation display instruction, the video stream data of the mixed-flow animation data and to display, according to that data, the animation of the virtual object performing several actions in the video window of the live broadcast room;
a second response unit 154, configured to update, in response to the action recognition success instruction, the fighting score corresponding to the target anchor identifier; the action recognition success instruction is issued when it is recognized from the video stream data corresponding to the target anchor identifier that the target anchor has imitated the several actions performed by the virtual object;
and a second output unit 155, configured to obtain, in response to the mic-connected live battle end instruction, the fighting score corresponding to each anchor identifier, to derive the mic-connected live battle result from those scores, and to output the result in the live broadcast room.
It should be noted that when the mic-connected live battle interaction device of the above embodiment executes the mic-connected live battle interaction method, the division into the above functional modules is only an example; in practical applications, the above functions may be assigned to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the mic-connected live battle interaction device and the mic-connected live battle interaction method of the embodiments belong to the same concept; the detailed implementation process is given in the method embodiments and is not repeated here.
Fig. 16 is a schematic structural diagram of a computer device according to a sixth embodiment of the present application. As shown in fig. 16, the computer device 16 may include a processor 160, a memory 161, and a computer program 162 stored in the memory 161 and executable on the processor 160, for example a mic-connected live battle interaction program; the steps of the first to third embodiments are implemented when the processor 160 executes the computer program 162.
The processor 160 may include one or more processing cores. The processor 160 is connected to the various components of the computer device 16 through various interfaces and lines, and performs the functions of the computer device 16 and processes its data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 161 and by invoking the data in the memory 161. Optionally, the processor 160 may be implemented in at least one of the hardware forms of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 160 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like, where the CPU mainly handles the operating system, the user interface, the application programs, and so on; the GPU renders and draws the content to be displayed on the touch display screen; and the modem handles wireless communication. The modem may also not be integrated into the processor 160 and instead be implemented by a separate chip.
The memory 161 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 161 includes a non-transitory computer-readable medium. The memory 161 may be used to store instructions, programs, code sets, or instruction sets. The memory 161 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for at least one function (such as touch instructions), instructions for implementing the various method embodiments described above, and the like; the data storage area may store the data referred to in the above method embodiments. The memory 161 may optionally be at least one storage device located remotely from the processor 160.
An embodiment of the present application further provides a computer storage medium that may store a plurality of instructions suitable for being loaded by a processor to execute the method steps of the above embodiments; for the specific execution process, refer to the specific description of the above embodiments, which is not repeated here.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules, so as to perform all or part of the functions described above. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided by the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the apparatus/terminal device embodiments described above are merely illustrative: the division into modules or units is only a logical functional division, and other divisions are possible in actual implementation; multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in another form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow in the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium and used by a processor to implement the steps of the above-described embodiments of the method. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc.
The present invention is not limited to the above-described embodiments; any modification or variation that does not depart from the spirit and scope of the present invention is intended to fall within the scope of the claims and their equivalents.

Claims (14)

1. A mic-connected live battle interaction method, characterized by comprising the following steps:
a server, in response to a mic-connected live battle start instruction, parsing the instruction to obtain anchor identifiers and establishing a mic-connected session connection between the anchor clients corresponding to the anchor identifiers;
a client joined to a live broadcast room obtaining audio and video stream data and outputting the audio and video stream data in the live broadcast room, wherein the live broadcast room comprises live broadcast rooms created by the anchors corresponding to the anchor identifiers, and the audio and video stream data comprises the audio and video stream data corresponding to each anchor identifier;
the client joined to the live broadcast room, in response to an animation display instruction, obtaining video stream data of mixed-flow animation data, and displaying, according to the video stream data of the mixed-flow animation data, an animation of a virtual object performing several actions in a video window of the live broadcast room;
the server, in response to an action recognition success instruction, updating a fighting score corresponding to a target anchor identifier, wherein the action recognition success instruction is issued when it is recognized from the video stream data corresponding to the target anchor identifier that the target anchor has imitated the several actions performed by the virtual object;
and the server, in response to a mic-connected live battle end instruction, obtaining the fighting score corresponding to each anchor identifier, obtaining a mic-connected live battle result according to the fighting scores corresponding to the anchor identifiers, and outputting the mic-connected live battle result in the live broadcast room.
2. The mic-connected live battle interaction method of claim 1, wherein before the client joined to the live broadcast room responds to the animation display instruction, the method comprises the following step:
the server obtaining the fighting score corresponding to each anchor identifier, and determining, from those scores, the target anchor identifier whose fighting score is lowest.
3. The mic-connected live battle interaction method of claim 1, wherein before the client joined to the live broadcast room responds to the animation display instruction, the method comprises the following step:
the server, in response to a virtual gift giving instruction, parsing the instruction to obtain a virtual gift identifier, and, if the virtual gift identifier is an action-imitation virtual gift identifier, obtaining the target anchor identifier corresponding to the receiver of the virtual gift.
4. The mic-connected live battle interaction method of any one of claims 1 to 3, wherein before the client joined to the live broadcast room responds to the animation display instruction, the method further comprises the following steps:
the server, in response to an animation mixed-flow instruction, obtaining the animation data and the display position of the animation in the video window;
and the server mixing the animation data into the video stream data according to the animation data and the display position of the animation in the video window to obtain the video stream data of the mixed-flow animation data, and sending the animation display instruction to the client joined to the live broadcast room.
5. The mic-connected live battle interaction method of claim 4, wherein before the server responds to the animation mixed-flow instruction, the method further comprises the following steps:
the server sending an animation mixed-flow preparation instruction to the anchor client, wherein the animation mixed-flow preparation instruction comprises the target anchor identifier;
the anchor client, in response to the animation mixed-flow preparation instruction, pulling an animation configuration resource from the server and parsing the animation configuration resource to obtain the animation data and the display position of the animation in a video display area;
and the anchor client obtaining the size information of the video window and the layout information of the video display area corresponding to each anchor identifier in the video window; obtaining, according to the size information of the video window and the layout information, the position of the video display area corresponding to the target anchor identifier in the video window; obtaining the display position of the animation in the video window according to the display position of the animation in the video display area and the position of the video display area corresponding to the target anchor identifier in the video window; and sending the animation mixed-flow instruction to the server.
6. The mic-connected live battle interaction method of claim 5, wherein displaying, according to the video stream data of the mixed-flow animation data, the animation of the virtual object performing several actions in the video window of the live broadcast room comprises the following step:
displaying, according to the video stream data of the mixed-flow animation data, the animation of the virtual object performing the several actions in the video display area corresponding to the target anchor identifier.
7. The mic-connected live battle interaction method of any one of claims 1 to 3, further comprising the following step:
the client joined to the live broadcast room, in response to the action recognition success instruction, obtaining video stream data of the mixed-flow animation data and fighting-score update prompt data, and displaying, according to that video stream data, the animation of the virtual object performing the several actions and the increased fighting score in the video window of the live broadcast room.
8. The mic-connected live battle interaction method of any one of claims 1 to 3, wherein after the mic-connected session connection between the anchor clients corresponding to the anchor identifiers is established, the method further comprises the following steps:
the server issuing fighting-score display control data to the client joined to the live broadcast room;
and the client joined to the live broadcast room receiving the fighting-score display control data and displaying a fighting-score display control in the live broadcast room interface according to the fighting-score display control data.
9. The mic-connected live battle interaction method of claim 8, wherein after the server, in response to the action recognition success instruction, updates the fighting score corresponding to the target anchor identifier, the method comprises the following steps:
the server sending a fighting-score update instruction to the client joined to the live broadcast room, wherein the fighting-score update instruction comprises the target anchor identifier and the updated fighting score corresponding to the target anchor identifier;
and the client joined to the live broadcast room, in response to the fighting-score update instruction, parsing the instruction to obtain the target anchor identifier and the updated fighting score corresponding to the target anchor identifier, and displaying the updated fighting score corresponding to the target anchor identifier in the fighting-score display control.
10. The mic-connected live battle interaction method of claim 3, wherein the server, in response to the action recognition success instruction, updating the fighting score corresponding to the target anchor identifier comprises the following steps:
the server obtaining a virtual gift value corresponding to the action-imitation virtual gift identifier;
obtaining a fighting score to be added according to the virtual gift value and a random parameter within a preset range;
and updating the fighting score corresponding to the target anchor identifier according to the fighting score to be added.
11. A mic-connected live battle interaction system, characterized by comprising clients and a server, the clients comprising anchor clients and audience clients, wherein:
the server responds to a mic-connected live battle start instruction, parses the instruction to obtain anchor identifiers, and establishes a mic-connected session connection between the anchor clients corresponding to the anchor identifiers;
a client joined to a live broadcast room obtains audio and video stream data and outputs the audio and video stream data in the live broadcast room, wherein the live broadcast room comprises live broadcast rooms created by the anchors corresponding to the anchor identifiers, and the audio and video stream data comprises the audio and video stream data corresponding to each anchor identifier;
the client joined to the live broadcast room, in response to an animation display instruction, obtains video stream data of mixed-flow animation data and displays, according to the video stream data of the mixed-flow animation data, an animation of a virtual object performing several actions in a video window of the live broadcast room;
the server, in response to an action recognition success instruction, updates a fighting score corresponding to a target anchor identifier, wherein the action recognition success instruction is issued when it is recognized from the video stream data corresponding to the target anchor identifier that the target anchor has imitated the several actions performed by the virtual object;
and the server, in response to a mic-connected live battle end instruction, obtains the fighting score corresponding to each anchor identifier, obtains a mic-connected live battle result according to the fighting scores corresponding to the anchor identifiers, and outputs the mic-connected live battle result in the live broadcast room.
12. A mic-connected live battle interaction device, characterized by comprising:
a mic-connected session establishing unit, configured to parse, in response to a mic-connected live battle start instruction, the instruction to obtain anchor identifiers and to establish a mic-connected session connection between the anchor clients corresponding to the anchor identifiers;
a first output unit, configured to obtain, for a client joined to a live broadcast room, audio and video stream data and to output the audio and video stream data in the live broadcast room, wherein the live broadcast room comprises live broadcast rooms created by the anchors corresponding to the anchor identifiers, and the audio and video stream data comprises the audio and video stream data corresponding to each anchor identifier;
a first response unit, configured to obtain, in response to an animation display instruction, video stream data of mixed-flow animation data and to display, according to the video stream data of the mixed-flow animation data, an animation of a virtual object performing several actions in a video window of the live broadcast room;
a second response unit, configured to update, in response to an action recognition success instruction, a fighting score corresponding to a target anchor identifier, wherein the action recognition success instruction is issued when it is recognized from the video stream data corresponding to the target anchor identifier that the target anchor has imitated the several actions performed by the virtual object;
and a second output unit, configured to obtain, in response to a mic-connected live battle end instruction, the fighting score corresponding to each anchor identifier, to obtain a mic-connected live battle result according to the fighting scores corresponding to the anchor identifiers, and to output the mic-connected live battle result in the live broadcast room.
13. A computer device, comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, characterized in that the steps of the method according to any one of claims 1 to 10 are implemented when the processor executes the computer program.
14. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 10.
CN202111136353.0A 2021-09-27 2021-09-27 Continuous wheat live broadcast fight interaction method, system and device and computer equipment Active CN113676747B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111136353.0A CN113676747B (en) 2021-09-27 2021-09-27 Continuous wheat live broadcast fight interaction method, system and device and computer equipment

Publications (2)

Publication Number Publication Date
CN113676747A true CN113676747A (en) 2021-11-19
CN113676747B (en) 2023-06-13

Family

ID=78550325

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111136353.0A Active CN113676747B (en) 2021-09-27 2021-09-27 Continuous wheat live broadcast fight interaction method, system and device and computer equipment

Country Status (1)

Country Link
CN (1) CN113676747B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114501157A (en) * 2021-12-16 2022-05-13 广州方硅信息技术有限公司 Interaction method, server, terminal, system and storage medium for live broadcast with wheat
CN114760498A (en) * 2022-04-01 2022-07-15 广州方硅信息技术有限公司 Method, system, medium, and device for synthesizing action interaction under live broadcast with continuous microphone
CN115134623A (en) * 2022-06-30 2022-09-30 广州方硅信息技术有限公司 Virtual gift interaction method and device based on main and auxiliary picture display and electronic equipment
CN115134621A (en) * 2022-06-30 2022-09-30 广州方硅信息技术有限公司 Live broadcast fight interaction method and device based on main and auxiliary picture display and electronic equipment
CN115314727A (en) * 2022-06-17 2022-11-08 广州方硅信息技术有限公司 Live broadcast interaction method and device based on virtual object and electronic equipment
CN115499679A (en) * 2022-09-28 2022-12-20 广州方硅信息技术有限公司 Method and device for displaying interactive object in live broadcast room, electronic equipment and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9597586B1 (en) * 2012-05-07 2017-03-21 CP Studios Inc. Providing video gaming action via communications in a social network
CN109758769A (en) * 2018-11-26 2019-05-17 北京达佳互联信息技术有限公司 Game application player terminal determines method, apparatus, electronic equipment and storage medium
US20190184283A1 (en) * 2016-06-28 2019-06-20 Line Corporation Method of controlling information processing device, information processing device and non-transitory computer-readable recording medium storing program for information processing
CN110300311A (en) * 2019-07-01 2019-10-01 腾讯科技(深圳)有限公司 Battle method, apparatus, equipment and storage medium in live broadcast system
CN110519612A (en) * 2019-08-26 2019-11-29 广州华多网络科技有限公司 Even wheat interactive approach, live broadcast system, electronic equipment and storage medium
CN110944235A (en) * 2019-11-22 2020-03-31 广州华多网络科技有限公司 Live broadcast interaction method, device and system, electronic equipment and storage medium
CN111314718A (en) * 2020-01-16 2020-06-19 广州酷狗计算机科技有限公司 Settlement method, device, equipment and medium for live broadcast battle
CN112163479A (en) * 2020-09-16 2021-01-01 广州华多网络科技有限公司 Motion detection method, motion detection device, computer equipment and computer-readable storage medium
CN112714330A (en) * 2020-12-25 2021-04-27 广州方硅信息技术有限公司 Gift presenting method and device based on live broadcast with wheat and electronic equipment

Also Published As

Publication number Publication date
CN113676747B (en) 2023-06-13

Similar Documents

Publication Publication Date Title
CN113676747B (en) Continuous wheat live broadcast fight interaction method, system and device and computer equipment
JP5987060B2 (en) GAME SYSTEM, GAME DEVICE, CONTROL METHOD, PROGRAM, AND RECORDING MEDIUM
US9066144B2 (en) Interactive remote participation in live entertainment
CN113766340B (en) Dance music interaction method, system and device under live connected wheat broadcast and computer equipment
CN113873280B (en) Continuous wheat live broadcast fight interaction method, system and device and computer equipment
CN113453029B (en) Live broadcast interaction method, server and storage medium
JP2016526929A (en) Information processing apparatus, control method, and program
WO2022267701A1 (en) Method and apparatus for controlling virtual object, and device, system and readable storage medium
CN109683839B (en) Method, equipment and system for split screen display and multi-terminal interaction
CN114007094A (en) Voice microphone-connecting interaction method, system, medium and computer equipment for live broadcast room
CN112732152A (en) Live broadcast processing method and device, electronic equipment and storage medium
CN113573083A (en) Live wheat-connecting interaction method and device and computer equipment
CN113824976A (en) Method and device for displaying approach show in live broadcast room and computer equipment
CN114257830A (en) Live game interaction method, system and device and computer equipment
CN113938696B (en) Live broadcast interaction method and system based on custom virtual gift and computer equipment
CN114125480A (en) Live broadcasting chorus interaction method, system and device and computer equipment
CN113329236A (en) Live broadcast method, live broadcast device, medium and electronic equipment
CN114666672B (en) Live fight interaction method and system initiated by audience and computer equipment
CN114095772B (en) Virtual object display method, system and computer equipment under continuous wheat direct sowing
CN115314727A (en) Live broadcast interaction method and device based on virtual object and electronic equipment
CN114760520A (en) Live small and medium video shooting interaction method, device, equipment and storage medium
CN114007095A (en) Voice microphone-connecting interaction method, system, medium and computer equipment for live broadcast room
CN115134624B (en) Live broadcast continuous wheat matching method, system, device, electronic equipment and storage medium
CN113596500B (en) Live user pairing interaction method and device, computer equipment and storage medium
CN113438491B (en) Live broadcast interaction method and device, server and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant