CN115086698A - Method and device for controlling object interaction in live broadcast room and electronic equipment - Google Patents

Method and device for controlling object interaction in live broadcast room and electronic equipment

Info

Publication number
CN115086698A
CN115086698A (application CN202210650657.7A)
Authority
CN
China
Prior art keywords
interactive
interaction
client
component
data
Prior art date
Legal status
Granted
Application number
CN202210650657.7A
Other languages
Chinese (zh)
Other versions
CN115086698B (en)
Inventor
曾衍
Current Assignee
Guangzhou Cubesili Information Technology Co Ltd
Original Assignee
Guangzhou Cubesili Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Cubesili Information Technology Co Ltd filed Critical Guangzhou Cubesili Information Technology Co Ltd
Priority to CN202210650657.7A priority Critical patent/CN115086698B/en
Publication of CN115086698A publication Critical patent/CN115086698A/en
Application granted granted Critical
Publication of CN115086698B publication Critical patent/CN115086698B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/858Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The application relates to the technical field of webcasting, and provides a method and a device for controlling object interaction in a live broadcast room, and an electronic device. The method includes: displaying a plurality of interactive objects in an interactive interface, and controlling a first interactive component to be displayed in turn in the clients corresponding to the interactive objects; receiving audio data collected through the first interactive component, and delivering the audio data at least to the clients corresponding to the interactive objects; controlling a second interactive component to be displayed in the clients corresponding to the interactive objects; receiving interaction selection information collected through the second interactive component, and confirming an interaction selection result; and confirming the end of the interaction according to the interaction selection result, or obtaining the interactive objects that continue to participate in the interaction according to the interaction selection result and re-confirming the interaction selection result until the end of the interaction is confirmed. Compared with the prior art, the method and the device can improve the interaction participation of viewers, meet the viewers' interaction needs, and establish deep and close social connections.

Description

Method and device for controlling object interaction in live broadcast room and electronic equipment
Technical Field
The embodiment of the application relates to the technical field of network live broadcast, in particular to a method and a device for controlling object interaction in a live broadcast room and electronic equipment.
Background
With the progress of network communication technology, webcasting has become a new mode of network interaction. More and more internet platforms have started to provide webcast services to attract users to interact in live broadcast rooms, giving ordinary people opportunities to display their talents and easing the pressure of social employment.
At present, in a webcast scene, viewers in a live broadcast room generally only talk with one another, or engage in audio/video interaction, virtual gift interaction and the like with the anchor. Such interaction modes cannot establish a close connection between the viewers and the anchor, and cannot meet the viewers' interaction needs.
Therefore, how to efficiently control the stages of an interaction in the live broadcast room, improve the interaction participation of viewers, and establish deep and close social connections has become a problem to be solved urgently.
Disclosure of Invention
The embodiments of the application provide a method and a device for controlling object interaction in a live broadcast room, and an electronic device, which solve the technical problems that viewers' participation in interaction is low, close social connections cannot be established, and viewers' interaction needs are difficult to meet. The technical solution is as follows:
In a first aspect, an embodiment of the present application provides a method for controlling object interaction in a live broadcast room, including:
displaying a plurality of interactive objects in an interactive interface, and, in response to a first interactive component display instruction, controlling the first interactive component to be displayed in turn in the clients corresponding to the interactive objects; wherein each interactive object is assigned interactive data;
receiving audio data collected through the first interactive component, and delivering the audio data at least to the clients corresponding to the interactive objects; wherein the audio data is used to describe the interactive data;
in response to a second interactive component display instruction, controlling the second interactive component to be displayed in the clients corresponding to the interactive objects;
receiving interaction selection information collected through the second interactive component, and confirming an interaction selection result;
and confirming the end of the interaction according to the interaction selection result, or obtaining the interactive objects that continue to participate in the interaction according to the interaction selection result and re-confirming the interaction selection result until the end of the interaction is confirmed.
In a second aspect, an embodiment of the present application provides an apparatus for controlling object interaction in a live broadcast room, including:
a first control unit, configured to display a plurality of interactive objects in an interactive interface and, in response to a first interactive component display instruction, control the first interactive component to be displayed in turn in the clients corresponding to the interactive objects; wherein each interactive object is assigned interactive data;
a data delivery unit, configured to receive the audio data collected through the first interactive component and deliver the audio data at least to the clients corresponding to the interactive objects; wherein the audio data is used to describe the interactive data;
a second control unit, configured to control, in response to a second interactive component display instruction, the second interactive component to be displayed in the clients corresponding to the interactive objects;
a result confirmation unit, configured to receive the interaction selection information collected through the second interactive component and confirm an interaction selection result;
and a third control unit, configured to confirm the end of the interaction according to the interaction selection result, or obtain the interactive objects that continue to participate in the interaction according to the interaction selection result and re-confirm the interaction selection result until the end of the interaction is confirmed.
In a third aspect, an embodiment of the present application provides an electronic device including a processor, a memory, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method according to the first aspect when executing the computer program.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method according to the first aspect.
In the embodiments of the application, the first interactive component is displayed in turn in the clients corresponding to the interactive objects, so that audio data are collected in turn and delivered at least to the clients corresponding to the interactive objects, allowing the interactive objects to communicate in depth around the interactive data. Interaction selection information is then received through the second interactive component, so that the interactive objects can experience the interaction selection stage immersively. The server can further confirm the interaction selection result of the current round based on the interaction selection information and, according to that result, confirm the end of the interaction or continue with the next round, thereby accurately controlling the interaction process. This object interaction control mode can establish deep and close social connections, improve the interaction participation of viewers, and meet the viewers' interaction needs.
For a better understanding and implementation, the technical solutions of the present application are described in detail below with reference to the accompanying drawings.
Drawings
Fig. 1 is a schematic view of an application scene of an object interaction control method in a live broadcast room according to an embodiment of the present application;
fig. 2 is a schematic flowchart of an object interaction control method in a live broadcast room according to a first embodiment of the present application;
FIG. 3 is a schematic diagram of an interactive interface provided in an embodiment of the present application;
fig. 4 is another schematic flowchart of an object interaction control method in a live broadcast room according to a first embodiment of the present application;
FIG. 5 is another schematic illustration of an interactive interface provided in an embodiment of the present application;
fig. 6 is a schematic flowchart of a method for controlling object interaction in a live broadcast room according to a first embodiment of the present application;
FIG. 7 is a schematic diagram of another application scenario of the voice game interaction method according to the embodiment of the present application;
fig. 8 is a schematic flowchart of S101 in a method for controlling object interaction in a live broadcast room according to a first embodiment of the present application;
FIG. 9 is a schematic diagram illustrating a display of a first interactive component in an interactive interface according to an embodiment of the present application;
FIG. 10 is a schematic diagram illustrating a display of a second interactive component in an interactive interface according to an embodiment of the present application;
fig. 11 is a further flowchart of an object interaction control method in a live broadcast room according to the first embodiment of the present application;
FIG. 12 is a schematic diagram of another display of an interactive interface provided in the embodiments of the present application;
fig. 13 is a schematic structural diagram of an object interaction control apparatus in a live broadcast room according to a second embodiment of the present application;
fig. 14 is a schematic structural diagram of an electronic device according to a third embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "upon", "when" or "in response to determining", depending on the context.
As will be appreciated by those skilled in the art, the terms "client" and "terminal device" as used herein cover both devices that include only a wireless signal receiver without transmit capability and devices with receiving and transmitting hardware capable of two-way communication over a two-way communication link. Such a device may include: a cellular or other communication device, such as a personal computer or tablet, with or without a multi-line display; a PCS (Personal Communications Service) device that may combine voice, data processing, facsimile and/or data communication capabilities; a PDA (Personal Digital Assistant), which may include a radio frequency receiver, a pager, internet/intranet access, a web browser, a notepad, a calendar and/or a GPS (Global Positioning System) receiver; and a conventional laptop and/or palmtop computer or other device that has and/or includes a radio frequency receiver. As used herein, a "client" or "terminal device" may be portable, transportable, installed in a vehicle (aeronautical, maritime, and/or land-based), or situated and/or configured to operate locally and/or in a distributed fashion at any other location on earth and/or in space. The "client" or "terminal device" used herein may also be a communication terminal, a web terminal, or a music/video playing terminal, such as a PDA, an MID (Mobile Internet Device) and/or a mobile phone with a music/video playing function, or may be a smart TV, a set-top box, and the like.
The hardware referred to by the names "server", "client", "service node", etc. is essentially an electronic device with the performance of a personal computer: a hardware device having the components required by the von Neumann principle, such as a central processing unit (including an arithmetic unit and a controller), a memory, an input device and an output device. A computer program is stored in the memory, and the central processing unit loads a program stored in external memory into internal memory, runs it, executes its instructions, and interacts with the input and output devices, thereby completing a specific function.
It should be noted that the concept of "server" referred to in this application can be extended to server clusters. According to network deployment principles understood by those skilled in the art, servers should be divided logically; in physical space, they may be independent of each other but callable through interfaces, or may be integrated into one physical computer or one set of computer clusters. Those skilled in the art will appreciate these variations, which should not be taken to restrict the network deployment of the present application.
Referring to fig. 1, fig. 1 is a schematic view of an application scenario of an object interaction control method in a live broadcast room according to an embodiment of the present application, where the application scenario includes an anchor client 101, a server 102, and a viewer client 103, where the anchor client 101 and the viewer client 103 interact with each other through the server 102.
The clients in the embodiments of the present application include the anchor client 101 and the viewer client 103.
It is noted that there are many understandings of the concept of "client" in the prior art, such as: it may be understood as an application installed in the electronic device, or may be understood as a hardware device corresponding to the server.
In the embodiments of the present application, the term "client" refers to a hardware device corresponding to a server, and more specifically, refers to an electronic device, such as: smart phones, smart interactive tablets, personal computers, and the like.
When the client is a mobile device such as a smart phone and a smart interactive tablet, the user can install a matched mobile application program on the client and can also access a Web application program on the client.
When the client is a non-mobile device such as a Personal Computer (PC), the user can install a matching PC application on the client, and similarly can access a Web application on the client.
The mobile application refers to an application program that can be installed in the mobile device, the PC application refers to an application program that can be installed in the non-mobile device, and the Web application refers to an application program that needs to be accessed through a browser.
Specifically, the Web application program may be divided into a mobile version and a PC version according to the difference of the client types, and the page layout modes and the available server support of the two versions may be different.
In the embodiment of the application, the live application programs provided to users are divided into mobile live applications, PC live applications and Web live applications, and users can autonomously choose how to participate in the webcast according to the type of client they use.
Depending on the identity of the user using the client, the present application divides the clients into the anchor client 101 and the viewer client 103.
The anchor client 101 is a client that sends audio and video data, and is generally a client used by an anchor (i.e., a live anchor user) in live webcasting.
The viewer client 103 is a terminal that receives and views audio/video data, and is typically a client used by a viewer viewing video in a live webcast (i.e., a live viewer user).
The hardware corresponding to the anchor client 101 and the viewer client 103 is essentially an electronic device; specifically, as shown in fig. 1, it may be a smart phone, a smart interactive tablet, a personal computer, and the like. Both the anchor client 101 and the viewer client 103 can access the internet via known network access means to establish a data communication link with the server 102.
The server 102, acting as a business server, may be responsible for further connecting with related audio data servers, video streaming servers and other servers providing related support, so as to form a logically associated server cluster serving the related terminal devices, such as the anchor client 101 and the viewer client 103 shown in fig. 1.
In the embodiment of the present application, the anchor client 101 and the viewer client 103 can join the same live broadcast room (i.e., a live broadcast channel). The live broadcast room is a chat room implemented by means of internet technology and generally has an audio/video broadcast control function. The anchor user broadcasts in the live broadcast room through the anchor client 101, and viewers can log in to the server 102 through the viewer client 103 to enter the live broadcast room and watch the live broadcast.
In the live broadcast room, interaction between the anchor and the viewers can be realized through known online interaction modes such as voice, video and text. Generally, the anchor performs for the viewer users in the form of an audio/video stream, and economic transactions may also arise during the interaction. Of course, the application form of the live broadcast room is not limited to online entertainment and can also be extended to other relevant scenes, such as game interaction scenarios, video conference scenarios, product recommendation and sale scenarios, and any other scenario requiring similar interaction.
The live broadcast room may be divided into a video live broadcast room and a voice live broadcast room. In the case of a voice live broadcast room, the anchor client 101 is the end that sends audio data, and the viewer client 103 is the end that receives and listens to audio data.
The method for controlling object interaction in a live broadcast room provided by the application can be applied to a voice live broadcast room as well as to a video live broadcast room; the following description takes the voice live broadcast room as the background. If the method is applied in a video live broadcast room, it can be understood that video data can additionally be output in the interface.
In the embodiment of the application, once an interaction is started in the live broadcast room, in some interaction scenes the users in the live broadcast room are no longer divided into the anchor and the viewers, but into interactive objects, spectator objects and other objects, and the clients are accordingly divided into clients corresponding to interactive objects, clients corresponding to spectator objects and clients corresponding to other objects. This is described in detail in the following embodiments, together with another application scenario of the object interaction control method after the interaction is started.
Because viewers in a live broadcast room usually only talk with one another, and only audio/video interaction, virtual gift interaction and the like take place between viewers and the anchor, these interaction modes can establish close social connections neither between viewers and the anchor nor among viewers, and cannot meet the viewers' interaction needs. On this basis, the embodiment of the application provides a method for controlling object interaction in a live broadcast room. Referring to fig. 2, fig. 2 is a schematic flowchart of a method for controlling object interaction in a live broadcast room according to a first embodiment of the present application. The method includes the following steps:
S101: displaying a plurality of interactive objects in an interactive interface, and, in response to a first interactive component display instruction, controlling the first interactive component to be displayed in turn in the clients corresponding to the interactive objects; wherein each interactive object is assigned interactive data.
S102: receiving audio data collected through the first interactive component, and delivering the audio data at least to the clients corresponding to the interactive objects; wherein the audio data is used to describe the interactive data.
S103: in response to a second interactive component display instruction, controlling the second interactive component to be displayed in the clients corresponding to the interactive objects.
S104: receiving interaction selection information collected through the second interactive component, and confirming an interaction selection result.
S105: confirming the end of the interaction according to the interaction selection result, or obtaining the interactive objects that continue to participate in the interaction according to the interaction selection result and re-confirming the interaction selection result until the end of the interaction is confirmed.
In this embodiment, the method for controlling object interaction in a live broadcast room is described mainly with the server as the execution subject; where necessary, the description is also given from the perspective of the client as the execution subject.
In step S101, the server displays a plurality of interactive objects in the interactive interface, and, in response to the first interactive component display instruction, controls the first interactive component to be displayed in turn in the clients corresponding to the interactive objects.
The interactive interface is the interface displayed in a client in an interaction scene. Before an interaction starts, the server generally delivers interactive plug-in resources to the clients corresponding to the users participating in the interaction, so that those clients load the interactive interface according to the interactive plug-in resources.
Compared with a live broadcast interface, the interactive interface differs to some extent in interface layout and display, and is configured correspondingly for different interactions. Referring to fig. 3, fig. 3 is a display diagram of the interactive interface provided in the embodiment of the present application. An interactive icon 31 is shown in fig. 3, as well as several conventional interactive components, such as a message component 32 and a gift giving component 33. In addition, a number of seat components 34 are displayed in the interactive interface, at least for defining the display positions of the interactive objects in the interactive interface.
Generally, the users participating in the interaction include not only interactive objects but also spectator objects. For interactive objects, as shown in fig. 3, a join component 35 is also shown in the interactive interface to allow an interactive object to take a seat. The specific display mode may be to present the avatar of the interactive object on the seat component.
For a spectator object, optionally, the join component is not shown in the corresponding client, or the join component shown in the corresponding client is inactive (cannot be interacted with).
In an alternative embodiment, all users in the live broadcast room can become interactive objects.
In an alternative embodiment, the interactive objects may be defined to be selected from users who are queued for interaction in the live broadcast room, and/or users whose interaction score or interaction level is similar to that of the host of the live broadcast room.
Users in the live broadcast room can choose to become spectator objects, or choose not to participate in the interaction. For the other objects not participating in the interaction, the live broadcast room interface is displayed in the corresponding clients as usual.
Before the interaction starts, several interactive objects are displayed in the interactive interface; in an alternative embodiment, the avatars of the interactive objects may be displayed on the seat components.
It can be understood that the server does not necessarily display all the interactive objects in the interactive interface at the same time, but may display them one by one as they join the interaction, so as to fully satisfy the users' interactive experience.
In an alternative embodiment, please refer to fig. 4, where fig. 4 is another schematic flowchart of the method for controlling object interaction in a live broadcast room according to the first embodiment of the present application; before S101, the method includes the steps of:
S106: in response to an interactive plug-in list loading request, delivering interactive plug-in list data to a target client, so that the target client receives the interactive plug-in list data and displays the interactive plug-in list in the live broadcast room interface; the target client is the client corresponding to the host of the live broadcast room.
S107: in response to an interaction start request, obtaining an interactive plug-in identifier, interactive object identifiers and spectator object identifiers, and delivering the interactive plug-in resource corresponding to the interactive plug-in identifier to the clients corresponding to the interactive object identifiers and the clients corresponding to the spectator object identifiers.
Before the interactive interface is loaded, an interactive plug-in needs to be selected by the host. Specifically, the host of the live broadcast room can, through interaction with the live broadcast room interface, trigger the target client to send an interactive plug-in list loading request to the server; the server responds to the request by delivering the interactive plug-in list data to the target client, and the target client receives the data and displays the interactive plug-in list in the live broadcast room interface.
The interactive plug-in list data is used for presenting the interactive plug-in list in the live broadcast room interface and for implementing the browsing and selection of the interactive plug-in list.
In the embodiment of the application, the anchor is the user who creates the live broadcast room, and any user who has permission to start an interaction in the live broadcast room (for example, the anchor, a guest anchor who has established a session connection with the anchor, an administrator of the live broadcast room, etc.) can be called a host of the live broadcast room.
The target client is a client corresponding to a host in the live broadcast room.
The host of the live broadcast room can browse the interactive plug-in list, which includes at least the interactive plug-in names corresponding to several interactive plug-in identifiers. Selecting any interactive plug-in in the list triggers the target client to generate an interaction start request and send it to the server.
Then, the server responds to the interaction start request to obtain the interactive plug-in identifier, the interactive object identifiers and the spectator object identifiers.
The interactive plug-in identifier indicates which interactive plug-in resource the server delivers to the clients corresponding to the interactive object identifiers and the clients corresponding to the spectator object identifiers.
The interactive object corresponding to the interactive object identifier may be specified by the host, that is, after the host selects any one of the interactive plug-ins, the host may specify the interactive object from the candidate interactive objects.
The interactive object corresponding to the interactive object identifier may also be randomly assigned by the server, that is, the server randomly selects the interactive object from the candidate interactive objects when responding to the interactive start request.
The spectator object identifiers are obtained as follows: the server sends spectating inquiry information to the first clients in the live broadcast room, where the first clients are the clients corresponding to the users in the live broadcast room other than the interactive objects. Then, the server receives the spectating confirmation information returned by the first clients, and sets the users whose spectating is confirmed as spectator objects, obtaining the spectator object identifiers.
After obtaining the interactive plug-in identifier, the interactive object identifiers and the spectator object identifiers, the server delivers the interactive plug-in resource corresponding to the interactive plug-in identifier to the clients corresponding to the interactive object identifiers and the clients corresponding to the spectator object identifiers, so that these clients receive and load it. The interactive plug-in resource includes at least the interactive interface and may also include data related to interaction logic control.
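As a minimal sketch of the S106/S107 flow under assumed names, the handler below gathers the interactive plug-in identifier, the interactive object identifiers and the spectator object identifiers, then delivers the plug-in resource to both groups of clients. The request and resource shapes are invented for illustration, and the spectating inquiry, resource lookup and delivery are modeled as injected callbacks.

```typescript
// Hypothetical shapes; none of these names come from the patent itself.
interface OpenInteractionRequest {
  pluginId: string;           // interactive plug-in identifier
  pickedObjectIds?: string[]; // interactive objects designated by the host
}

interface PluginResource {
  pluginId: string;
  interfaceBundleUrl: string; // at least the interactive interface
  logicConfig?: unknown;      // optional interaction-logic control data
}

function handleOpenInteraction(
  req: OpenInteractionRequest,
  candidates: string[],                       // candidate interactive objects
  otherUsers: string[],                       // everyone else in the room
  askToSpectate: (userId: string) => boolean, // spectating inquiry round-trip
  loadResource: (pluginId: string) => PluginResource,
  push: (userId: string, res: PluginResource) => void,
): { interactiveIds: string[]; spectatorIds: string[] } {
  // Interactive objects: designated by the host, or randomly drawn from the
  // candidates (naive biased shuffle; 4 is an assumed object count).
  const interactiveIds =
    req.pickedObjectIds ??
    [...candidates].sort(() => Math.random() - 0.5).slice(0, 4);
  // Spectator objects: the users who confirmed the spectating inquiry.
  const spectatorIds = otherUsers.filter(askToSpectate);
  // Deliver the plug-in resource to both groups of clients.
  const res = loadResource(req.pluginId);
  for (const id of [...interactiveIds, ...spectatorIds]) push(id, res);
  return { interactiveIds, spectatorIds };
}
```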
Referring to fig. 5, fig. 5 is another schematic display view of an interactive interface provided in the embodiment of the present application. As shown in fig. 5, after an interactive object chooses to join the interaction, the avatar 36 corresponding to the interactive object is displayed in the seat component 34, and a start component 37 and an exit component 38 are displayed in the interactive interface. An interactive object that has joined the interaction can click the exit component 38 at any time before the interaction formally starts in order to exit the interaction. When the number of interactive objects does not reach the preset first interactive object number threshold, the start component 37 is inactive (cannot be interacted with); when the number of interactive objects reaches the threshold, the start component 37 becomes active, and if an interactive object clicks the start component 37, the corresponding client is triggered to generate and send an interaction start confirmation instruction to the server.
Referring to fig. 6, fig. 6 is another schematic flowchart of the method for controlling object interaction in a live broadcast room according to the first embodiment of the present application; after S101, the method includes the steps of:
S108: in response to the interaction start confirmation instruction, obtaining the number of interactive objects, judging whether the number of interactive objects exceeds the preset first interactive object number threshold, and if so, generating an interaction start instruction.
S109: in response to the interaction start instruction, obtaining the interactive object identifiers and the spectator object identifiers, and establishing data connection channels with the clients corresponding to the interactive object identifiers and the clients corresponding to the spectator object identifiers.
The preset first interactive object number threshold varies with the interaction. Some interactions need fewer participants, for example billiards, for which the preset first interactive object number threshold is 2; some interactions need more participants, for example "Who is the Undercover", for which the preset first interactive object number threshold is 3.
Because interactive objects can choose to exit before the interaction starts, in order to ensure that the number of interactive objects meets the requirement, the server determines the number of interactive objects according to the received interaction start confirmation instructions, and generates the interaction start instruction only after confirming that this number exceeds the preset first interactive object number threshold.
Referring to fig. 7, fig. 7 is a schematic view of another application scenario of the voice game interaction method according to the embodiment of the present application. In fig. 7, the service server 71 interacts with the clients 72, which are divided into clients 721 corresponding to interactive objects, clients 722 corresponding to spectator objects, and clients 723 corresponding to other objects; the interaction server 73 interacts with the service server 71 and, at the same time, with the clients 721 corresponding to the interactive objects and the clients 722 corresponding to the spectator objects.
The specific way for the interaction server to establish the data connection channels is as follows: the interaction server creates an interactive live broadcast room for the interactive objects and the spectator objects, pulls the clients corresponding to the interactive objects and the clients corresponding to the spectator objects into the interactive live broadcast room, and establishes data connection channels with the clients corresponding to the interactive object identifiers and the clients corresponding to the spectator object identifiers. The data connection channel may be a long connection channel based on the WebSocket protocol, so as to serve the interactive objects and spectator objects during the interaction.
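Since the WebSocket protocol is named as one possible long connection channel, the following sketch shows an interaction server keeping one socket per member of an interactive live broadcast room, using the Node.js ws package. The room bookkeeping and the query-string addressing scheme are assumptions made for the example, not details from the patent.

```typescript
import { WebSocketServer, WebSocket } from "ws";

// roomId -> (userId -> socket); the "interactive live broadcast room".
const rooms = new Map<string, Map<string, WebSocket>>();

const wss = new WebSocketServer({ port: 8080 });
wss.on("connection", (socket, request) => {
  // Assumed addressing scheme: ws://host:8080/?room=<roomId>&user=<userId>
  const url = new URL(request.url ?? "/", "ws://localhost");
  const roomId = url.searchParams.get("room") ?? "";
  const userId = url.searchParams.get("user") ?? "";
  const members = rooms.get(roomId) ?? new Map<string, WebSocket>();
  rooms.set(roomId, members);
  members.set(userId, socket);               // client pulled into the room
  socket.on("close", () => { members.delete(userId); });
});

// Pushing a component display instruction to every member of a room:
function pushToRoom(roomId: string, instruction: object): void {
  for (const socket of rooms.get(roomId)?.values() ?? []) {
    socket.send(JSON.stringify(instruction));
  }
}
```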
How the interaction stages are controlled after the interaction is started is described below.
Before S101 controls, in response to the first interactive component display instruction, the first interactive component to be displayed in turn in the clients corresponding to the interactive objects, the method includes the steps of:
S110: obtaining identity data and data to be described, and distributing the identity data and the data to be described to the interactive objects, so that the client corresponding to each interactive object receives and displays the identity data and the data to be described corresponding to that interactive object; the interactive objects are divided into first interactive objects and second interactive objects, the identity data corresponding to a first interactive object is first identity data, the identity data corresponding to a second interactive object is second identity data, and the data to be described corresponding to the first interactive objects is different from the data to be described corresponding to the second interactive objects.
The server needs to allocate interactive data to the interactive object, so that the interactive object can interact based on the grasped interactive data, and in this embodiment, the interactive data includes identity data and data to be described.
Wherein the identity data comprises first identity data and second identity data. Based on the difference of the distributed identity data, the interactive objects are divided into a first interactive object and a second interactive object, the identity data corresponding to the first interactive object is first identity data, and the identity data corresponding to the second interactive object is second identity data.
For example, in the "Who is the Undercover" interaction, the first identity data may refer to the undercover and the second identity data may refer to the civilians. There is at least one first interactive object, i.e., at least one of the interactive objects is an undercover.
In an alternative embodiment, the server obtains the identity data already used for each interactive object identifier, and distributes identity data to the interactive objects according to this record, thereby avoiding repeatedly assigning the same identity data to the same interactive object at high frequency.
The data to be described corresponding to the first interactive object is different from the data to be described corresponding to the second interactive object. That is, the data to be described allocated to the first interactive object is different from the data to be described allocated to the second interactive object.
For example, in the "Who is the Undercover" interaction, the data to be described corresponding to the first interactive object is the undercover word, and the data to be described corresponding to the second interactive objects is the civilian word.
In an alternative embodiment, the server obtains the data to be described that has already been used in the live broadcast room, and obtains new data to be described from a preset database accordingly, thereby avoiding distributing the same data to be described at high frequency and safeguarding the interactive experience.
It can be understood that each interactive object can only see the interactive data configured for itself, and cannot learn the interactive data allocated to the other interactive objects.
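A minimal sketch of this S110-style allocation, reusing the Identity type from the first sketch: one interactive object receives the first identity together with the undercover word, the others receive the second identity together with the civilian word, and both the word pair and the undercover are drawn while avoiding recent repeats, in the spirit of the alternative embodiments above. The word-bank and history structures are assumed.

```typescript
interface WordPair { undercoverWord: string; civilianWord: string; }

function allocateInteractiveData(
  objectIds: string[],
  wordBank: WordPair[],
  usedPairs: Set<WordPair>,       // word pairs already used in this room
  recentUndercover: Set<string>,  // objects recently given the first identity
): Map<string, { identity: Identity; word: string }> {
  // Draw a word pair that has not been used in this live broadcast room yet.
  const fresh = wordBank.filter((p) => !usedPairs.has(p));
  if (fresh.length === 0) throw new Error("word bank exhausted");
  const pair = fresh[Math.floor(Math.random() * fresh.length)];
  usedPairs.add(pair);
  // Prefer an undercover who has not held that identity recently.
  const pool = objectIds.filter((id) => !recentUndercover.has(id));
  const pickFrom = pool.length > 0 ? pool : objectIds;
  const undercover = pickFrom[Math.floor(Math.random() * pickFrom.length)];
  recentUndercover.add(undercover);
  // Each client will be shown only its own entry of this allocation.
  const allocation = new Map<string, { identity: Identity; word: string }>();
  for (const id of objectIds) {
    allocation.set(id, id === undercover
      ? { identity: "first", word: pair.undercoverWord }
      : { identity: "second", word: pair.civilianWord });
  }
  return allocation;
}
```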
Referring to fig. 8, fig. 8 is a schematic flowchart of S101 in the method for controlling object interaction in a live broadcast room according to the first embodiment of the present application; S101, in response to the first interactive component display instruction, controls the first interactive component to be displayed in turn in the clients corresponding to the interactive objects, and includes the steps of:
S1011: in response to the first interactive component display instruction, obtaining a display control list; the display control list includes the component display order corresponding to each interactive object.
S1012: according to the component display order corresponding to each interactive object, sequentially controlling the first interactive component to be displayed in turn in the clients corresponding to the interactive objects; the first interactive component includes at least an audio receiving subcomponent.
In this embodiment, the server controls the display order of the first interactive component according to the display control list.
The display control list includes the component display order corresponding to each interactive object.
In an optional embodiment, the component display order corresponding to an interactive object may be determined based on the sequence number information corresponding to the interactive object. Each seat component corresponds to unique sequence number information, and the sequence number information of the seat component on which an interactive object sits is the sequence number information corresponding to that interactive object.
Optionally, the earlier the sequence number information corresponding to an interactive object, the earlier the component display order corresponding to that interactive object.
The server sequentially controls the first interactive component to be displayed in turn in the clients corresponding to the interactive objects according to the component display order corresponding to each interactive object. For example, if the component display order corresponding to interactive object A is first, that corresponding to interactive object B second, that corresponding to interactive object C third, and that corresponding to interactive object D fourth, the server first controls the first interactive component to be displayed in the client corresponding to interactive object A, then in the client corresponding to interactive object B, and so on. Generally, only after the first interactive component is no longer displayed in the client corresponding to the previous interactive object is it displayed in the client corresponding to the next interactive object, as sketched below.
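A sketch of S1011/S1012 under these assumptions: the display control list is derived from the seat sequence numbers, and the first interactive component is rotated through the clients one at a time, advancing only after the audio receiving end signal for the current object arrives (that signal is detailed in the next subsection).

```typescript
interface DisplayEntry { objectId: string; seatNo: number; }

// S1011: the display control list orders objects by seat sequence number.
function buildDisplayControlList(entries: DisplayEntry[]): string[] {
  return [...entries]
    .sort((a, b) => a.seatNo - b.seatNo)
    .map((e) => e.objectId);
}

// S1012: rotate the first interactive component through the clients; the
// next client shows it only after the previous display is cancelled.
async function rotateFirstComponent(
  order: string[],
  show: (objectId: string) => void,    // display the first component
  hide: (objectId: string) => void,    // cancel displaying it
  waitForEnd: (objectId: string) => Promise<void>, // audio receiving end
): Promise<void> {
  for (const objectId of order) {
    show(objectId);
    await waitForEnd(objectId); // countdown hit zero or end subcomponent
    hide(objectId);
  }
}
```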
The first interactive component includes at least an audio receiving subcomponent, which is used at least to prompt the interactive object to grant the microphone permission, so as to receive the audio data output by the interactive object. The audio data is used to describe the interactive data, or, to be precise, the data to be described.
In an optional embodiment, in order to let the interactive object better experience each interaction stage, the audio receiving subcomponent also displays the name of the current interaction stage, the word to be described corresponding to the interactive object, and the prompt information of the current interaction stage.
Referring to fig. 9, fig. 9 is a schematic view illustrating the display of a first interactive component in an interactive interface according to an embodiment of the present application. A first interactive component is displayed in the interactive interface; it includes at least an audio receiving subcomponent 91, in which the name 92 of the current interaction stage, the word 93 to be described corresponding to the interactive object, and the prompt information 94 of the current interaction stage are displayed.
In an optional embodiment, the step in S1012 of sequentially controlling, according to the component display order corresponding to each interactive object, the first interactive component to be displayed in turn in the clients corresponding to the interactive objects includes the steps of:
in response to an audio receiving end instruction, controlling the client corresponding to the current interactive object to cancel displaying the first interactive component, and controlling the client corresponding to the next interactive object to start displaying the first interactive component; the audio receiving end instruction is generated when the client corresponding to the current interactive object confirms that the remaining audio receiving duration is zero, or when the client corresponding to the current interactive object responds to a trigger instruction for the end subcomponent.
In this embodiment, the first interactive component further comprises an end subcomponent and a first countdown subcomponent.
The interactive object can trigger the end subcomponent when it finishes its audio output, so that the client corresponding to the current interactive object generates a trigger instruction for the end subcomponent and, in response to that trigger instruction, generates the audio receiving end instruction.
The first countdown subcomponent displays the remaining audio receiving duration; when the remaining audio receiving duration is zero, the client corresponding to the current interactive object is triggered to generate the audio receiving end instruction. That is, the server bounds the audio output duration (i.e., the speaking duration) of each interactive object.
The server responds to the audio receiving end instruction by controlling the client corresponding to the current interactive object to cancel displaying the first interactive component and controlling the client corresponding to the next interactive object to start displaying it, so that the current interaction stage is accurately controlled.
Referring to fig. 9, an end subcomponent 95 and a first countdown subcomponent 96 are also displayed in the interactive interface. The interactive object can trigger the server to end audio receiving by clicking the end subcomponent 95, and the first countdown subcomponent 96 displays the remaining audio receiving duration, so that the interactive object can intuitively know how much receiving time remains.
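On the client side, the end subcomponent and the first countdown subcomponent can be viewed as two triggers for the same audio receiving end instruction. The sketch below shows one way to guarantee that the instruction is generated exactly once, whether the countdown reaches zero first or the end subcomponent is clicked first; the one-second tick and all function names are assumptions.

```typescript
function armAudioReceiving(
  totalSeconds: number,                  // assumed speaking-time budget
  render: (remaining: number) => void,   // first countdown subcomponent UI
  sendEndInstruction: () => void,        // audio receiving end instruction
): { onEndClicked: () => void } {
  let remaining = totalSeconds;
  let done = false;
  let timer: ReturnType<typeof setInterval>;
  const finish = () => {
    if (done) return;          // the instruction is generated exactly once
    done = true;
    clearInterval(timer);
    sendEndInstruction();
  };
  timer = setInterval(() => {
    remaining -= 1;
    render(remaining);         // show the remaining audio receiving duration
    if (remaining <= 0) finish();
  }, 1000);
  return { onEndClicked: finish }; // wired to the end subcomponent's click
}
```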
In step S102, the server receives the audio data acquired through the first interactive component, and at least sends the audio data to the client corresponding to the interactive object.
The audio data is used to describe the interactive data, and more specifically, the data to be described. For example, in the "Who is the Undercover" interaction, the data to be described corresponding to the first interactive object is the undercover word, so the audio data output by the first interactive object should be audio data describing the undercover word; the data to be described corresponding to the second interactive objects is the civilian word, so the audio data output by a second interactive object should be audio data describing the civilian word.
In this embodiment, if there are spectator objects watching the interaction process, the audio data is also delivered to the clients corresponding to the spectator objects.
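A small sketch of that delivery rule, reusing the ws sockets from the earlier connection sketch: the spectator list defaults to empty, so the audio always reaches at least the clients of the interactive objects.

```typescript
import { WebSocket } from "ws";

// Forward one speaker's audio at least to the interactive objects' clients,
// and also to spectator clients when a spectating process exists.
function forwardAudio(
  audio: Uint8Array,
  interactiveClients: Iterable<WebSocket>,
  spectatorClients: Iterable<WebSocket> = [],
): void {
  for (const socket of interactiveClients) socket.send(audio);
  for (const socket of spectatorClients) socket.send(audio);
}
```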
In step S103, in response to the second interactive component display instruction, the second interactive component is controlled to be displayed in the clients corresponding to the interactive objects.
The second interactive component is used to collect interaction selection information, and the interaction selection information includes at least a selected object identifier.
In the "Who is the Undercover" interaction, if the selected object corresponding to the selected object identifier is B, this represents that the selecting interactive object believes B to be the interactive object assigned the undercover word.
In an optional embodiment, the step in S103 of controlling, in response to the second interactive component display instruction, the second interactive component to be displayed in the clients corresponding to the interactive objects includes the steps of:
S1031: in response to the second interactive component display instruction, controlling the information receiving subcomponents corresponding to all the interactive objects except the target interactive object to be displayed in the client corresponding to the target interactive object; the information receiving subcomponent corresponding to an interactive object is displayed beside the interactive position corresponding to that interactive object.
In this embodiment, the server responds to the second interactive component display instruction by controlling the information receiving subcomponents corresponding to the interactive objects other than the target interactive object to be displayed in the client corresponding to the target interactive object. That is, if the interactive objects include interactive object A, interactive object B, interactive object C and interactive object D, where interactive object A is the target interactive object, then the information receiving subcomponents corresponding to interactive objects B, C and D are displayed in the client corresponding to the target interactive object (i.e., interactive object A). In short, the server controls the display of the second interactive component so that a user can only select interactive objects other than themselves as the selected object.
The second interactive component includes at least the information receiving subcomponent corresponding to each interactive object, and the information receiving subcomponent corresponding to an interactive object is displayed beside the interactive position corresponding to that interactive object.
Referring to fig. 10, fig. 10 is a schematic view illustrating the display of a second interactive component in an interactive interface according to an embodiment of the present application. A second interactive component is displayed in the interactive interface; it includes at least the information receiving subcomponent 101 corresponding to each interactive object, and the information receiving subcomponent 101 corresponding to each interactive object is displayed beside the interactive position corresponding to that interactive object, i.e., as shown in fig. 10, beside the seat component 102 corresponding to the interactive object. Whichever interactive object's information receiving subcomponent 101 is triggered, that interactive object becomes the selected object.
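The self-exclusion rule of S1031 reduces to filtering the seat list by the viewing user's own identifier before rendering the information receiving subcomponents; a small sketch with assumed names:

```typescript
interface Seat { objectId: string; seatNo: number; }

// Each client may only select interactive objects other than itself.
function selectableTargets(seats: Seat[], viewerId: string): Seat[] {
  return seats.filter((s) => s.objectId !== viewerId);
}

// Example: with objects A to D, A's client renders subcomponents for B, C, D.
const seats: Seat[] = [
  { objectId: "A", seatNo: 1 }, { objectId: "B", seatNo: 2 },
  { objectId: "C", seatNo: 3 }, { objectId: "D", seatNo: 4 },
];
console.log(selectableTargets(seats, "A").map((s) => s.objectId)); // B, C, D
```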
In an optional embodiment, the server controls the clients corresponding to the interactive objects to cancel displaying the second interactive component in response to an information receiving end instruction.
The information receiving end instruction is generated when the client corresponding to an interactive object confirms that the remaining information receiving duration is zero, or when the client corresponding to the interactive object responds to a trigger instruction for an information receiving subcomponent.
In this embodiment, the second interactive component further includes a second countdown subcomponent, in which the remaining information receiving duration is displayed.
Referring to fig. 10, the second interactive component displayed in the interactive interface further includes a second countdown subcomponent 103, in which the remaining information receiving duration is displayed.
In addition, the name 104 of the current interaction stage and the prompt information 105 of the current interaction stage can be displayed in the second interactive component.
In step S104, the server receives the interaction selection information obtained through the second interactive component and confirms the interaction selection result: the server counts the selected object identifiers and confirms the most-selected interactive object.
In an alternative embodiment, please refer to fig. 11, where fig. 11 is a further flowchart of the method for controlling object interaction in a live broadcast room according to the first embodiment of the present application, and after receiving the interaction selection information obtained by the second interaction component, the method further includes:
S111: and acquiring, according to the interaction selection information, the selected object identifiers, at least one selecting object identifier corresponding to each selected object identifier, and the sequence number information corresponding to each selecting object identifier.
S112: and controlling the client corresponding to the interactive object to display the sequence number information corresponding to the at least one selecting object identifier beside the interactive position corresponding to the selected object identifier.
In this embodiment, each piece of interaction selection information includes a selecting object identifier and a selected object identifier. By counting the interaction selection information, the server can obtain the selected object identifiers and the at least one selecting object identifier corresponding to each selected object identifier, and can obtain the sequence number information corresponding to each selecting object identifier.
The server then controls the client corresponding to the interactive object to display the sequence number information corresponding to the at least one selecting object identifier beside the interactive position corresponding to the selected object identifier.
Referring to fig. 12, fig. 12 is a further schematic view of an interactive interface according to an embodiment of the present application. In fig. 12, sequence number information 122 corresponding to each interactive object identifier is displayed on the seat component 121 corresponding to that identifier, and if an interactive object identifier is a selected object identifier, the sequence number information 123 corresponding to the at least one selecting object identifier is displayed beside the seat component 121 corresponding to that interactive object identifier, so that the interactive objects can see the selection situation intuitively. As shown in fig. 12, the interactive object with sequence number 4 receives 3 votes, cast by the interactive objects with sequence numbers 1, 2, and 3 respectively.
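Steps S111 and S112, and the fig. 12 situation just described, can be pictured with the short Python sketch below (names and data shapes are assumptions for illustration): from the same selection pairs, the server derives, for each selected object identifier, the sequence numbers of the objects that chose it, which the client then renders beside the selected object's seat:

from collections import defaultdict

def selectors_by_selected(selections: list[tuple[str, str]],
                          seq_numbers: dict[str, int]) -> dict[str, list[int]]:
    """Map each selected object id to the sorted sequence numbers of its selectors."""
    grouped: dict[str, list[int]] = defaultdict(list)
    for selecting_id, selected_id in selections:
        grouped[selected_id].append(seq_numbers[selecting_id])
    return {selected: sorted(nums) for selected, nums in grouped.items()}

if __name__ == "__main__":
    votes = [("a", "d"), ("b", "d"), ("c", "d"), ("d", "b")]
    seqs = {"a": 1, "b": 2, "c": 3, "d": 4}
    # Matches the fig. 12 situation: object 4 is chosen by objects 1, 2 and 3.
    print(selectors_by_selected(votes, seqs))  # {'d': [1, 2, 3], 'b': [4]}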
In this embodiment, after each round of interactive selection, the client corresponding to each interactive object identifier can display the selection situation, which makes it convenient for the interactive objects to follow the progress of the interaction and improves the interactive experience.
In step S105, the server determines that the interaction is finished according to the interaction selection result, or acquires an interaction object that continues to participate in the interaction according to the interaction selection result, and reconfirms the interaction selection result until the interaction is finished.
In this embodiment, the interactive objects are divided into a first interactive object and a second interactive object; the identity data corresponding to the first interactive object is first identity data, and the identity data corresponding to the second interactive object is second identity data. For example, in a "Who is the Undercover" interaction, the first identity data may indicate the undercover role and the second identity data may indicate the civilian role.
On this basis, S105 includes the following steps: if the most selected interactive object is the first interactive object, the server confirms that the interaction is finished; and if the most selected interactive object is a second interactive object and the number of remaining second interactive objects exceeds a preset second interactive object number threshold, the server acquires the interactive objects that continue to participate in the interaction and reconfirms the interaction selection result until the interaction is confirmed to be finished. It will be appreciated that the most selected interactive object does not continue to participate in the interaction.
For example, in the "Who is the Undercover" interaction: if the most selected interactive object is the undercover, the server confirms that the interaction is finished and the civilians win; if the most selected interactive object is a civilian and the number of remaining civilians exceeds 1, the server acquires the interactive objects that continue to participate in the interaction and reconfirms the interaction selection result until the interaction is confirmed to be finished; and if the most selected interactive object is a civilian and the number of remaining civilians does not exceed 1, the server confirms that the interaction is finished and the undercover wins.
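The round-resolution logic of S105 can be sketched as follows (a minimal Python illustration under the "Who is the Undercover" reading above; the role names and the default civilian threshold of 1 are assumptions, not fixed by the embodiment):

def resolve_round(most_selected: str, roles: dict[str, str],
                  civilian_threshold: int = 1) -> tuple[bool, list[str]]:
    """Return (interaction_over, ids of objects that continue to participate)."""
    remaining = [obj for obj in roles if obj != most_selected]  # eliminate the most selected
    if roles[most_selected] == "undercover":
        return True, remaining   # the undercover is out: civilians win, interaction ends
    civilians_left = sum(1 for obj in remaining if roles[obj] == "civilian")
    if civilians_left <= civilian_threshold:
        return True, remaining   # too few civilians remain: undercover wins
    return False, remaining      # continue with the next round

if __name__ == "__main__":
    roles = {"A": "undercover", "B": "civilian", "C": "civilian", "D": "civilian"}
    print(resolve_round("B", roles))  # (False, ['A', 'C', 'D']): play continues
    print(resolve_round("A", roles))  # (True, ['B', 'C', 'D']): civilians win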
In an optional embodiment, the server controls the client corresponding to the interactive object to display the identity data corresponding to the most selected interactive object beside the interactive position corresponding to that object.
In an optional embodiment, the server obtains interaction ranking information corresponding to the user identifier, interaction point information corresponding to the user identifier, and/or interaction duration information corresponding to the user identifier; and determining reward data corresponding to the user identification according to the interaction ranking information corresponding to the user identification, the interaction point information corresponding to the user identification and/or the interaction duration information corresponding to the user identification, and issuing the reward data to the client corresponding to the user identification.
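The embodiment does not fix how these signals are combined, so the following Python sketch uses a purely assumed weighting just to show the shape of the computation (every constant here is an invention for illustration):

def determine_reward(ranking: int, points: int, duration_s: int) -> dict:
    """Combine interaction ranking, interaction points and duration into reward data."""
    base = max(0, 100 - 10 * (ranking - 1))   # better ranking, larger base reward
    bonus = points // 10 + duration_s // 60   # points and minutes add a small bonus
    return {"reward_points": base + bonus}

if __name__ == "__main__":
    print(determine_reward(ranking=1, points=250, duration_s=900))  # {'reward_points': 140}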
In the embodiment of the present application, the first interactive component is displayed in turn in the clients corresponding to the interactive objects, and the audio data collected in turn is issued at least to the clients corresponding to the interactive objects, so that the interactive objects communicate in depth around the interactive data. The interaction selection information is then received through the second interactive component, so that the interactive objects can experience the interaction selection stage immersively. Based on the interaction selection information, the server confirms the interaction selection result of the current round and, according to that result, either confirms the end of the interaction or continues with the next round, thereby controlling the interaction process precisely. This object interaction control approach can establish close and deep social links, improve the interactive participation of the audience, and meet the interaction needs of the audience.
Please refer to fig. 13, which is a schematic structural diagram of an object interaction control apparatus in a live broadcast room according to a second embodiment of the present application. The apparatus may be implemented as all or part of an electronic device in software, hardware, or a combination of both. The device 13 comprises:
the first control unit 131 is configured to display a plurality of interactive objects in an interactive interface, and control, in response to a first interactive component display instruction, first interactive components to be displayed in turn to clients corresponding to the interactive objects; wherein each interactive object is allocated with interactive data;
the data issuing unit 132 is configured to receive the audio data acquired by the first interactive component, and issue the audio data to at least a client corresponding to the interactive object; wherein the audio data is used to describe the interactive data;
the second control unit 133, configured to control a second interactive component to be displayed in the client corresponding to the interactive object in response to a second interactive component display instruction;
a result confirmation unit 134, configured to receive the interaction selection information obtained through the second interaction component, and confirm an interaction selection result;
a third control unit 135, configured to confirm that the interaction is ended according to the interaction selection result, or obtain the interaction object that continues to participate in the interaction according to the interaction selection result, and reconfirm the interaction selection result until it is confirmed that the interaction is ended.
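The five units map naturally onto a single server-side class; the skeleton below (an illustrative assumption with placeholder bodies, since the embodiment defines responsibilities rather than code) shows one possible shape:

class LiveRoomObjectInteractionApparatus:
    """Skeleton mirroring units 131-135 of device 13."""

    def display_first_components_in_turn(self, interactive_objects):  # unit 131
        ...

    def forward_audio_data(self, audio_data, interactive_objects):    # unit 132
        ...

    def display_second_component(self, interactive_objects):          # unit 133
        ...

    def confirm_selection_result(self, selection_info):               # unit 134
        ...

    def resolve_until_interaction_ends(self, selection_result):       # unit 135
        ...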
It should be noted that, when the object interaction control apparatus in a live broadcast room provided in the foregoing embodiment executes the object interaction control method in the live broadcast room, the division of each function module is merely used for illustration, and in practical applications, the function distribution may be completed by different function modules as needed, that is, the internal structure of the device is divided into different function modules, so as to complete all or part of the functions described above. In addition, the object interaction control device in the live broadcast room and the object interaction control method in the live broadcast room provided by the above embodiments belong to the same concept, and details of implementation processes are shown in the method embodiments and are not described herein again.
Fig. 14 is a schematic structural diagram of an electronic device according to a third embodiment of the present application. As shown in fig. 14, the electronic device 14 may include: a processor 140, a memory 141, and a computer program 142 stored in the memory 141 and operable on the processor 140, such as: an object interaction control program in the live broadcast room; the steps in the first embodiment described above are implemented when the processor 140 executes the computer program 142.
The processor 140 may include one or more processing cores. The processor 140 is connected to various parts of the electronic device 14 by various interfaces and lines, and executes the various functions of the electronic device 14 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 141 and by calling data in the memory 141. Optionally, the processor 140 may be implemented in at least one hardware form among Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 140 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU renders and draws the content to be displayed by the touch display screen; and the modem handles wireless communications. It is understood that the modem may also not be integrated into the processor 140 but be implemented by a separate chip.
The memory 141 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 141 includes a non-transitory computer-readable medium. The memory 141 may be used to store instructions, programs, code sets, or instruction sets. The memory 141 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for at least one function (such as touch functions), instructions for implementing the above method embodiments, and the like; and the data storage area may store the data referred to in the above method embodiments. The memory 141 may optionally be at least one storage device located remotely from the aforementioned processor 140.
The embodiment of the present application further provides a computer storage medium, where the computer storage medium may store a plurality of instructions, where the instructions are suitable for being loaded by a processor and executing the method steps of the foregoing embodiment, and a specific execution process may refer to specific descriptions of the foregoing embodiment, which is not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules, so as to perform all or part of the functions described above. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided by the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the apparatus/terminal device embodiments described above are merely illustrative: the division into modules or units is only a division by logical function, and other divisions may be used in actual implementation; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in another form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow in the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium and used for instructing relevant hardware, and when the computer program is executed by a processor, the steps of the above-described embodiments of the method may be implemented. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc.
The present invention is not limited to the above-described embodiments, and various modifications and variations of the present invention are intended to be included within the scope of the claims and the equivalent technology of the present invention if they do not depart from the spirit and scope of the present invention.

Claims (13)

1. A method for controlling object interaction in a live broadcast room is characterized by comprising the following steps:
displaying a plurality of interactive objects in an interactive interface, and controlling first interactive components to be displayed in a client side corresponding to the interactive objects in turn in response to a first interactive component display instruction; wherein each interactive object is allocated with interactive data;
receiving audio data acquired through the first interactive assembly, and at least issuing the audio data to a client corresponding to the interactive object; wherein the audio data is used to describe the interactive data;
responding to a second interaction component display instruction, and controlling a second interaction component to be displayed in a client corresponding to the interaction object;
receiving interaction selection information acquired through the second interaction assembly, and confirming an interaction selection result;
and confirming the end of interaction according to the interaction selection result, or acquiring the interaction object which continues to participate in the interaction according to the interaction selection result, and re-confirming the interaction selection result until the end of the interaction is confirmed.
2. The method for controlling object interaction in a live broadcast room according to claim 1, wherein before displaying a plurality of interactive objects in the interactive interface, the method comprises the following steps:
responding to an interactive plug-in list loading request, issuing interactive plug-in list data to a target client, enabling the target client to receive the interactive plug-in list data, and displaying the interactive plug-in list in a live broadcast interface; the target client is a client corresponding to a host of a live broadcast room;
responding to an interactive starting request, acquiring an interactive plug-in identification, an interactive object identification and a fighting object identification, and issuing interactive plug-in resources corresponding to the interactive plug-in identification to a client corresponding to the interactive object identification and a client corresponding to the fighting object identification.
3. The method for controlling object interaction in a live broadcast room according to claim 2, wherein after a plurality of interactive objects are displayed in the interactive interface, the method comprises the following steps:
responding to an interactive opening confirmation instruction, acquiring the number of interactive objects, judging whether the number of the interactive objects exceeds a preset first interactive object number threshold value, and if so, generating an interactive opening instruction;
and responding to the interaction starting instruction, acquiring the interaction object identification and the fighting object identification, and establishing a data connection channel of a client corresponding to the interaction object identification and a client corresponding to the fighting object identification.
4. The method for controlling the interaction of the objects in the live broadcast room according to any one of claims 1 to 3, wherein the interaction data comprises identity data and data to be described;
before responding to a display instruction of the first interaction component and controlling the first interaction component to be displayed in turn to the client side corresponding to the interaction object, the method comprises the following steps:
acquiring the identity data and the data to be described, and distributing the identity data and the data to be described to the interactive object, so that a client corresponding to the interactive object receives and displays the identity data corresponding to the interactive object and the data to be described corresponding to the interactive object respectively; the interactive objects are divided into a first interactive object and a second interactive object, the identity data corresponding to the first interactive object is first identity data, the identity data corresponding to the second interactive object is second identity data, and the data to be described corresponding to the first interactive object is different from the data to be described corresponding to the second interactive object.
5. The method for controlling object interaction in a live broadcast room according to any one of claims 1 to 3, wherein the step of controlling the first interaction components to be displayed in turn to the client corresponding to the interaction object in response to the first interaction component display instruction comprises the steps of:
responding to the display instruction of the first interactive assembly, and acquiring a display control list; the display control list comprises component display sequences corresponding to the interactive objects;
according to the display sequence of the components corresponding to the interactive objects, sequentially controlling the first interactive components to be displayed in the client sides corresponding to the interactive objects in turn; wherein the first interactive component comprises at least an audio receiving sub-component.
6. The method of claim 5, wherein the first interactive component further comprises an end subcomponent and a first countdown subcomponent, wherein the first countdown subcomponent displays a remaining audio receiving time;
the step of sequentially controlling the first interactive components to be displayed to the clients corresponding to the interactive objects in turn according to the display sequence of the components corresponding to the interactive objects comprises the following steps:
responding to an audio receiving end instruction, controlling a client corresponding to the current interactive object to cancel displaying the first interactive component, and controlling a client corresponding to the next interactive object to start displaying the first interactive component; the audio receiving end instruction is generated when the client corresponding to the current interactive object confirms that the remaining audio receiving duration is zero, or the audio receiving end instruction is generated when the client corresponding to the current interactive object responds to a triggering instruction of the ending sub-component.
7. The method according to any one of claims 1 to 3, wherein the second interactive component comprises at least an information receiving subcomponent corresponding to each of the interactive objects;
the step of responding to the display instruction of the second interactive component and controlling the second interactive component to be displayed in the client corresponding to the interactive object comprises the following steps:
responding to the display instruction of the second interaction assembly, and controlling information receiving sub-assemblies corresponding to the interaction objects except for the target interaction object to be displayed in the client corresponding to the target interaction object; and the information receiving sub-component corresponding to the interactive object is displayed beside the interactive position corresponding to the interactive object.
8. The method of claim 7, wherein the second interactive component further comprises a second countdown subcomponent, and a remaining information receiving duration is displayed in the second countdown subcomponent;
the method further comprises the steps of:
responding to an information receiving end instruction, and controlling the client corresponding to the interactive object to cancel displaying the second interactive component; the information receiving end instruction is generated when the client corresponding to the interactive object confirms that the remaining information receiving duration is zero, or the information receiving end instruction is generated when the client corresponding to the interactive object responds to a trigger instruction for the information receiving subcomponent.
9. The method as claimed in any one of claims 1 to 3, wherein the interaction selection information includes a selecting object identifier and a selected object identifier, and after receiving the interaction selection information obtained by the second interaction component, the method further includes:
according to the interaction selection information, acquiring the selected object identifiers, at least one selecting object identifier corresponding to each selected object identifier, and sequence number information corresponding to the selecting object identifiers;
and controlling the client corresponding to the interactive object to display the sequence number information corresponding to the at least one selecting object identifier beside the interactive position corresponding to the selected object identifier.
10. The method according to any one of claims 1 to 3, wherein the interactive objects are divided into a first interactive object and a second interactive object, the identity data corresponding to the first interactive object is first identity data, and the identity data corresponding to the second interactive object is second identity data;
the receiving of the interaction selection information obtained through the second interaction component and the confirmation of the interaction selection result comprise the following steps:
receiving interaction selection information acquired through the second interaction component, and confirming the interaction object which is selected most;
the step of confirming the end of the interaction according to the interaction selection result, or obtaining the interaction object which continues to participate in the interaction according to the interaction selection result, and re-confirming the interaction selection result until the end of the interaction is confirmed, comprises the steps of:
if the interactive object which is selected most is the first interactive object, confirming that the interaction is finished;
and if the interaction objects which are selected most are the second interaction objects and the number of the remaining second interaction objects exceeds a preset second interaction object number threshold value, acquiring the interaction objects which continuously participate in the interaction, and re-confirming the interaction selection result until the interaction is confirmed to be finished.
11. An object interaction control device in a live broadcast room, comprising:
the first control unit is used for displaying a plurality of interactive objects in the interactive interface and controlling the first interactive components to be displayed in turn to the client sides corresponding to the interactive objects in response to a first interactive component display instruction; wherein each interactive object is allocated with interactive data;
the data issuing unit is used for receiving the audio data acquired by the first interactive component and issuing the audio data to at least a client corresponding to the interactive object; wherein the audio data is used to describe the interactive data;
the second control unit is used for responding to a second interaction component display instruction and controlling a second interaction component to be displayed in the client corresponding to the interaction object;
the result confirmation unit is used for receiving the interaction selection information acquired by the second interaction assembly and confirming the interaction selection result;
and the third control unit is used for confirming the end of interaction according to the interaction selection result, or acquiring the interaction object which continuously participates in the interaction according to the interaction selection result, and re-confirming the interaction selection result until the end of the interaction is confirmed.
12. An electronic device, comprising: processor, memory and computer program stored in the memory and executable on the processor, characterized in that the steps of the method according to any of claims 1 to 10 are implemented when the processor executes the computer program.
13. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 10.
CN202210650657.7A 2022-06-10 2022-06-10 Method and device for controlling object interaction in live broadcasting room, electronic equipment and medium Active CN115086698B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210650657.7A CN115086698B (en) 2022-06-10 2022-06-10 Method and device for controlling object interaction in live broadcasting room, electronic equipment and medium


Publications (2)

Publication Number Publication Date
CN115086698A true CN115086698A (en) 2022-09-20
CN115086698B CN115086698B (en) 2024-10-01

Family

ID=83251702

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210650657.7A Active CN115086698B (en) 2022-06-10 2022-06-10 Method and device for controlling object interaction in live broadcasting room, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN115086698B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109257616A (en) * 2018-09-30 2019-01-22 武汉斗鱼网络科技有限公司 A kind of voice connects wheat interactive approach, device, equipment and medium
CN109698963A (en) * 2018-12-29 2019-04-30 乐蜜有限公司 A kind of live broadcasting method, device, electronic equipment and readable storage medium storing program for executing
CN110339570A (en) * 2019-07-17 2019-10-18 网易(杭州)网络有限公司 Exchange method, device, storage medium and the electronic device of information
CN113094146A (en) * 2021-05-08 2021-07-09 腾讯科技(深圳)有限公司 Interaction method, device and equipment based on live broadcast and computer readable storage medium
CN113453029A (en) * 2021-05-28 2021-09-28 广州方硅信息技术有限公司 Live broadcast interaction method, server and storage medium
WO2022095679A1 (en) * 2020-11-09 2022-05-12 北京达佳互联信息技术有限公司 Live interaction method, device, electronic device and storage medium thereof


Also Published As

Publication number Publication date
CN115086698B (en) 2024-10-01

Similar Documents

Publication Publication Date Title
CN113453029B (en) Live broadcast interaction method, server and storage medium
CN113766340B (en) Dance music interaction method, system and device under live connected wheat broadcast and computer equipment
CN104363476A (en) Online-live-broadcast-based team-forming activity method, device and system
CN113573083A (en) Live wheat-connecting interaction method and device and computer equipment
CN114268812B (en) Live broadcast room virtual resource giving method, device, computer equipment and storage medium
CN115134621B (en) Live combat interaction method, system, device, equipment and medium
CN113840154A (en) Live broadcast interaction method and system based on virtual gift and computer equipment
CN113032542B (en) Live broadcast data processing method, device, equipment and readable storage medium
CN114007095B (en) Voice-to-microphone interaction method, system and medium of live broadcasting room and computer equipment
CN114666672B (en) Live fight interaction method and system initiated by audience and computer equipment
CN113824976A (en) Method and device for displaying approach show in live broadcast room and computer equipment
CN114257830A (en) Live game interaction method, system and device and computer equipment
CN114666671B (en) Live broadcast praise interaction method, device, equipment and storage medium
CN113938696A (en) Live broadcast interaction method and system based on user-defined virtual gift and computer equipment
CN115314727A (en) Live broadcast interaction method and device based on virtual object and electronic equipment
CN113824984A (en) Virtual gift pipelining display method, system, device and computer equipment
CN115314729B (en) Team interaction live broadcast method and device, computer equipment and storage medium
CN115134624B (en) Live broadcast continuous wheat matching method, system, device, electronic equipment and storage medium
CN115134623B (en) Virtual gift interaction method, system, device, electronic equipment and medium
CN115065838B (en) Live broadcast room cover interaction method, system, device, electronic equipment and storage medium
CN115086698B (en) Method and device for controlling object interaction in live broadcasting room, electronic equipment and medium
CN113438491B (en) Live broadcast interaction method and device, server and storage medium
CN114885191A (en) Interaction method, system, device and equipment based on exclusive nickname of live broadcast room
CN114760502A (en) Live broadcast room approach show merging and playing method and device and computer equipment
CN114760531A (en) Live broadcasting room team interaction method, device, system, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant