CN112788354A - Live broadcast interaction method and device, electronic equipment, storage medium and program product - Google Patents

Info

Publication number
CN112788354A
CN112788354A (application CN202011584553.8A)
Authority
CN
China
Prior art keywords
special effect
live broadcast
live
effect image
list
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011584553.8A
Other languages
Chinese (zh)
Inventor
王微
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202011584553.8A priority Critical patent/CN112788354A/en
Publication of CN112788354A publication Critical patent/CN112788354A/en
Priority to PCT/CN2021/134091 priority patent/WO2022142944A1/en
Pending legal-status Critical Current


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The disclosure relates to a live broadcast interaction method and apparatus, an electronic device, a storage medium, and a program product. Special effect images of a plurality of first objects in a live broadcast room object list are displayed, each special effect image being obtained by performing special effect processing on the image data of the corresponding first object; the vote value of a first object in the object list is adjusted according to an evaluation instruction for that object's special effect image; and the live broadcast room object list with the adjusted vote values is then displayed. This diversifies the modes of live broadcast interaction and makes the interaction more engaging.

Description

Live broadcast interaction method and device, electronic equipment, storage medium and program product
Technical Field
The present disclosure relates to the field of internet technologies, and in particular, to a live broadcast interaction method and apparatus, an electronic device, a storage medium, and a program product.
Background
In the live broadcast field, an anchor can stream live video through a live broadcast application and present programs to an audience, and the audience can watch the live broadcast through the same application. Interactive live broadcast is an enhanced form of video live broadcast in which interactive functions are added during the live stream.
In the related art, the interactive functions of interactive live broadcast include adding voice and video interaction to the live video. However, in the related art the modes of interaction between the anchor and the audience remain relatively limited.
Disclosure of Invention
The present disclosure provides a live broadcast interaction method and apparatus, an electronic device, a storage medium, and a program product, to at least solve the problem in the related art that the mode of interaction between the anchor and the audience is relatively limited. The technical solution of the disclosure is as follows:
according to a first aspect of the embodiments of the present disclosure, a live broadcast interaction method is provided, including:
displaying special effect images of a plurality of first objects in a live room object list; the special effect image is obtained by carrying out special effect processing on the image data of the first object;
adjusting the voting value of the first object in the live broadcast object list according to an evaluation instruction of the special effect image of the first object;
and displaying the live broadcasting room object list after the voting value is adjusted.
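As an illustration only (not part of the claimed embodiments), the three steps above — displaying effect images, adjusting a vote value on each evaluation instruction, and re-displaying the sorted list — might be sketched as follows in Python. The class and field names are hypothetical:

```python
from dataclasses import dataclass


@dataclass
class Entry:
    """One first object in the live broadcast room object list."""
    user_id: str
    effect_image_url: str  # result of special-effect processing on the object's image data
    votes: int = 0


class LiveRoomObjectList:
    """Sketch of the claimed flow: show effect images, adjust a vote value on
    each evaluation instruction, and re-display the list sorted by votes."""

    def __init__(self):
        self.entries = {}  # user_id -> Entry

    def add(self, user_id, effect_image_url):
        self.entries[user_id] = Entry(user_id, effect_image_url)

    def on_evaluation(self, user_id, delta=1):
        # An evaluation instruction (e.g. a vote-button touch) adjusts the vote value.
        self.entries[user_id].votes += delta

    def display(self):
        # The adjusted object list is re-displayed sorted by vote value, descending.
        return sorted(self.entries.values(), key=lambda e: e.votes, reverse=True)
```

A vote-button touch on a viewer's client would map to one `on_evaluation` call on the server, after which the re-sorted list is pushed back to all clients.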
In one embodiment, the image data is determined in any one of the following ways:
responding to a shooting instruction in a live broadcast interface of a first object client, and acquiring image data obtained by shooting the first object;
acquiring the image data of the first object from a gallery, wherein the gallery stores images of the first object on a client or a server;
and acquiring the avatar (head portrait) data of the first object.
In one embodiment, before the displaying of the special effect images of the plurality of first objects in the live broadcast room object list, the method further comprises:
responding to an uploading instruction of each special effect image, and determining an initial display position of each special effect image in the live broadcast room object list according to the triggering time of the uploading instruction;
the displaying of the special effect image of the plurality of first objects in the live broadcast object list includes:
and displaying each special effect image in the live broadcast room object list according to the initial display position.
In one embodiment, before the displaying of each of the special effect images in the live broadcast room object list, the method further comprises:
acquiring the number of special effect images in the live broadcast room object list;
and if the number of the special effect images reaches a display upper limit threshold value, displaying a first prompt message in a live broadcast interface of the first object client, wherein the first prompt message comprises information for prompting that the number of the special effect images reaches the upper limit.
In one embodiment, before the adjusting of the vote value of the first object in the live broadcast room object list according to the evaluation instruction for the special effect image of the first object, the method further includes:
displaying a voting button in a specified area of the special effect image, wherein the evaluation instruction is an instruction generated by a second object touching the voting button, and the second object is a viewing object in the live broadcast room.
In one embodiment, the adjusting, according to an evaluation instruction for a special effect image of a first object, of the vote value of the first object in the live broadcast room object list includes:
if a zoom-in instruction triggered by the second object touching the special effect image is received, displaying the enlarged special effect image in a live broadcast interface of the second object client, and acquiring a voice signal uttered by the second object;
and adjusting the voting value of the first object in the live broadcasting room object list according to the voice signal.
In one embodiment, each of the special effect images is obtained by performing special effect processing on image data of each of the first objects according to a dynamic special effect material.
In one embodiment, the method further comprises:
and if the voting deadline is reached, determining, according to the vote value of each first object, the first object whose vote value meets a preset condition as the first target object;
and establishing mic-link (co-streaming) communication between the account of the first target object and the anchor account.
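The deadline step above — determining the first object whose vote value meets the preset condition — might be sketched as follows, taking "highest vote value with at least a minimum number of votes" as one hypothetical preset condition:

```python
def pick_first_target(vote_values, min_votes=1):
    """At the voting deadline, return the user whose vote value meets the
    preset condition: highest vote value with at least `min_votes` votes.
    Returns None if no object qualifies."""
    qualified = {user: votes for user, votes in vote_values.items() if votes >= min_votes}
    if not qualified:
        return None
    return max(qualified, key=qualified.get)
```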
In one embodiment, the method further comprises:
if the voting deadline is reached, displaying a final ranking list, wherein the final ranking list is obtained by ranking the first objects based on the final vote value of each first object.
In one embodiment, the establishing of mic-link communication between the account of the first target object and the anchor account includes:
receiving a confirmation message for the mic-link request sent by the anchor client, and establishing mic-link communication between the account of the first target object and the anchor account according to the confirmation message.
In one embodiment, before the establishing of mic-link communication between the account of the first target object and the anchor account, the method further includes:
displaying, on a live broadcast interface of the anchor client, the special effect image of the first target object and/or a second prompt message, wherein the second prompt message includes information prompting the anchor to mic-link with the first target object.
In one embodiment, the establishing of mic-link communication between the account of the first target object and the anchor account includes:
starting a timer from the moment the second prompt message is displayed on the live broadcast room interface of the anchor client, and establishing mic-link communication between the account of the first target object and the anchor account if the timed duration reaches the mic-link waiting duration.
In one embodiment, before the establishing of mic-link communication between the account of the first target object and the anchor account, the method further includes:
in response to a mic-switch instruction for replacing the first target object, displaying a special effect image of a second target object in a live broadcast interface of the anchor client according to the mic-switch instruction, wherein the second target object is any first object other than the first target object in the final ranking list;
and starting a timer from the moment the special effect image of the second target object is displayed on the live broadcast interface of the anchor client, and establishing mic-link communication between the account of the second target object and the anchor account if the timed duration reaches the mic-link waiting duration.
In one embodiment, before the responding to the mic-switch instruction for replacing the first target object, the method further comprises:
displaying a mic-switch button in a live broadcast room interface of the anchor client, wherein the mic-switch instruction is generated by the anchor touching the mic-switch button.
In one embodiment, before the displaying of the special effect image of the second target object in the live broadcast interface of the anchor client, the method further comprises:
acquiring the number of times the anchor has triggered the mic-switch instruction;
and when the number of times the anchor has triggered the mic-switch instruction reaches a mic-switch count threshold, displaying a third prompt message in a live broadcast interface of the anchor client, wherein the third prompt message includes information prompting that the number of mic-switches has reached the upper limit.
According to a second aspect of the embodiments of the present disclosure, there is provided a live broadcast interaction method, including:
displaying special effect images of a plurality of first objects in a live room object list; the special effect image is obtained by carrying out special effect processing on the image data of the first object;
if an evaluation instruction for a special effect image of a first object is received, adjusting a voting value of the first object in the live broadcast object list;
and displaying the live broadcasting room object list after the voting value is adjusted.
and if the voting deadline is reached, displaying the special effect image of the first target object and the second prompt message, wherein the first target object is a first object whose vote value meets a preset condition in the live broadcast room object list, and the second prompt message includes information prompting the anchor to mic-link with the first target object.
According to a third aspect of the embodiments of the present disclosure, there is provided a live broadcast interaction apparatus, including:
the display device comprises a special effect image display module, a display module and a display module, wherein the special effect image display module is configured to execute display of special effect images of a plurality of first objects in an object list of a live broadcast room; the special effect image is obtained by carrying out special effect processing on the image data of the first object;
the vote value adjusting module is configured to execute adjustment of the vote value of the first object in the live broadcast object list according to an evaluation instruction of the special effect image of the first object;
and the object list display module is configured to execute the display of the live broadcast room object list after the vote value is adjusted.
In one embodiment, the image data is determined in any one of the following ways:
responding to a shooting instruction in a live broadcast interface of a first object client, and acquiring image data obtained by shooting the first object;
acquiring the image data of the first object from a gallery, wherein the gallery stores images of the first object on a client or a server;
and acquiring the avatar (head portrait) data of the first object.
In one embodiment, the live interaction device further includes: an initial position determining module configured to, in response to an upload instruction for each special effect image, determine the initial display position of each special effect image in the live broadcast room object list according to the trigger time of the upload instruction;
and the object list display module is configured to display each special effect image in the live broadcast object list according to the initial display position.
In one embodiment, the live interaction device further includes:
the image quantity acquisition module is configured to acquire the quantity of the special effect images in the live broadcast room object list;
and the first message display module is configured to display a first prompt message in a live broadcast interface of the first object client if the number of the special effect images reaches a display upper limit threshold, wherein the first prompt message comprises information for prompting that the number of the special effect images reaches an upper limit.
In one embodiment, the live interaction device further includes:
and a voting button display module configured to perform displaying of voting buttons in a specified area of the special effect image, wherein the evaluation instruction is an instruction generated by touching the voting buttons based on a second object, and the second object is a viewing object in a live broadcast.
In one embodiment, the vote value adjusting module is further configured to, if a zoom-in instruction triggered by a second object touching the special effect image is received, display the enlarged special effect image in a live broadcast interface of the second object client, acquire a voice signal uttered by the second object, and adjust the vote value of the first object in the live broadcast room object list according to the voice signal.
In one embodiment, each of the special effect images is obtained by performing special effect processing on image data of each of the first objects according to a dynamic special effect material.
In one embodiment, the live interaction device further includes:
and the first target object determining module is configured to determine a first object of which the voting value meets a preset condition as a first target object according to the voting value of each first object if the voting deadline is reached.
A first connecting module configured to perform establishing a connecting communication between the account of the first target object and an anchor account.
In one embodiment, the live interaction device further includes:
the final ranking list display module is configured to display the final ranking list if the voting deadline is reached; and the final ranking list is obtained by performing ranking processing on the basis of the final voting value of each first object.
In one embodiment, the first mic-link module is further configured to receive a confirmation message for the mic-link request sent by the anchor client, and establish mic-link communication between the account of the first target object and the anchor account according to the confirmation message.
In one embodiment, the live interaction device further includes:
the second message display module is configured to execute a live broadcast room interface of a main broadcast client and display a second prompt message, wherein the second prompt message comprises information for prompting the main broadcast to connect to the first target object; or
And the first target special effect image display module is configured to execute a live broadcast room interface of a main broadcast client side and display the special effect image of the first target object.
In one embodiment, the first mic-link module is further configured to start a timer from the moment the second prompt message is displayed on the live broadcast interface of the anchor client, and establish mic-link communication between the account of the first target object and the anchor account if the timed duration reaches the mic-link waiting duration.
In one embodiment, the live interaction device further includes:
the barley changing module is configured to execute a barley changing instruction responding to the first target object changing, and display a special effect image of a second target object in a live broadcast interface of the anchor client according to the barley changing instruction; the second target object is any other first object except the first target object in the final ranking list;
and the second microphone connecting module is configured to execute timing from the moment when the special effect image of the second target object is displayed on the live broadcast room interface of the anchor client, and establish microphone connecting communication between the account of the second target object and the anchor account if the timing time reaches the microphone connecting waiting time.
In one embodiment, the live interaction device further includes:
a wheat changing button display module configured to perform displaying of a wheat changing button in a live broadcast room interface of the anchor client; the wheat changing instruction is generated by touching the wheat changing button based on an anchor.
In one embodiment, the live interaction device further includes:
the times acquisition module is configured to execute the times of acquiring the anchor trigger wheat-changing instruction;
and the third message display module is configured to display a third prompt message in a live broadcast interface of the anchor client when the number of times that the anchor triggers the wheat changing instruction reaches a wheat changing number threshold, wherein the third prompt message comprises information for prompting that the wheat changing number reaches an upper limit.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a live broadcast interaction apparatus, including:
a special effect image display module configured to display special effect images of a plurality of first objects in a live broadcast room object list, wherein each special effect image is obtained by performing special effect processing on the image data of the corresponding first object;
the vote value adjusting module is configured to execute adjustment of a vote value of a first object in the live broadcast object list if an evaluation instruction of a special effect image of the first object is received;
and the object list display module is configured to execute the display of the live broadcast room object list after the vote value is adjusted.
and an image and message display module configured to, if the voting deadline is reached, display the special effect image of the first target object and the second prompt message, wherein the first target object is a first object whose vote value meets a preset condition in the live broadcast room object list, and the second prompt message includes information prompting the anchor to mic-link with the first target object.
According to a fifth aspect of embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor; a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the live interaction method in any embodiment of the first aspect.
According to a sixth aspect of embodiments of the present disclosure, there is provided a storage medium, wherein instructions, when executed by a processor of an electronic device, enable the electronic device to perform the live interaction method described in any one of the embodiments of the first aspect.
According to a seventh aspect of embodiments of the present disclosure, there is provided a computer program product, the program product comprising a computer program, the computer program being stored in a readable storage medium, from which the computer program is read and executed by at least one processor of a device, such that the device performs the live interaction method described in any one of the embodiments of the first aspect.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
Special effect images of a plurality of first objects in a live broadcast room object list are displayed, each special effect image being obtained by performing special effect processing on the image data of the corresponding first object; the vote value of a first object in the object list is adjusted according to an evaluation instruction for that object's special effect image; and the live broadcast room object list with the adjusted vote values is then displayed. This diversifies the modes of live broadcast interaction and makes the interaction more engaging.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
Fig. 1 is a diagram illustrating an application environment for a live interaction method, according to an example embodiment.
Fig. 2 is a flow diagram illustrating a method of live interaction in accordance with an example embodiment.
Fig. 3 is a flow diagram illustrating a method of live interaction in accordance with an example embodiment.
Fig. 4 is a flow diagram illustrating a method of live interaction in accordance with an example embodiment.
FIG. 5 is a schematic diagram of a live room interface shown in accordance with an exemplary embodiment.
Fig. 6a is a flowchart illustrating step S220 according to an exemplary embodiment.
FIG. 6b is a schematic diagram of a live room interface shown in accordance with an exemplary embodiment.
Fig. 7 is a flow diagram illustrating a method of live interaction in accordance with an example embodiment.
Fig. 8 is a flow diagram illustrating a method of live interaction in accordance with an example embodiment.
Fig. 9 is a flow diagram illustrating a method of live interaction in accordance with an example embodiment.
Fig. 10 is a flow diagram illustrating a method of live interaction in accordance with an example embodiment.
Fig. 11 is a flow diagram illustrating a method of live interaction in accordance with an example embodiment.
Fig. 12 is a block diagram illustrating a live interaction device, according to an example embodiment.
Fig. 13 is a block diagram illustrating a live interaction device, according to an example embodiment.
Fig. 14 is an internal block diagram of an electronic device shown in accordance with an example embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The live broadcast interaction method provided by the present disclosure can be applied to the application environment shown in fig. 1, in which the anchor client 110 and the server 120 communicate over a network, and at least one viewer client 130 and the server 120 communicate over the network. The viewer client 130 includes at least a first object client 132 and a second object client 134 that participate in the live interaction. The anchor client 110 has installed an application that can be used for live broadcasting, and the viewer client 130 has installed an application that can be used to watch a live broadcast; these may be the same application. A viewer can be understood as a viewing object of the live broadcast room. When creating a live broadcast room, the anchor client 110 can acquire the live broadcast scene material selected by the anchor.

First, the anchor can initiate a live interactive activity with the audience in the live broadcast room, and can set, at the anchor client, the number of participants, the activity interval, the activity rules, and so on. The live interactive activity can be a voting activity for the objects in the live broadcast room; a live broadcast room object list can be generated from data such as each object's vote count or the time at which each object joined the activity, and the object list can be a ranking list (leaderboard) of the live broadcast room. The voting activity may be voting on a user's avatar, on a photo uploaded by the user, or on the user's magic expression.
Secondly, in response to the live interactive activity initiated by the anchor, a first object and a second object participate in the activity, where the first object is an object in the ranking list of the live broadcast room and the second object is an object that votes on a first object in that ranking list. During the interaction, the anchor client 110 displays special effect images of a plurality of first objects in the live room object list, each special effect image being obtained by performing special effect processing on the image data of the corresponding first object. When the first object client 132 enters the live broadcast room, the ranking list of the live broadcast room, together with the special effect images of the first objects in it, is also displayed on its screen. When the second object client 134 enters the live broadcast room and performs an evaluation operation on the special effect image of any first object in the leaderboard, it sends an evaluation instruction for that special effect image to the server 120. The server 120 adjusts the vote value of the first object according to the evaluation instruction, re-ranks the leaderboard according to the adjusted vote value, and sends the adjusted leaderboard to the anchor client 110, the first object client 132, and the second object client 134 so that the adjusted ranking list is displayed.
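The server-side loop just described — receive an evaluation instruction, adjust the vote value, re-rank, and push the list to every client — can be sketched as below. Client connections are modelled as plain lists, and all names are hypothetical:

```python
def handle_evaluation(votes, user_id, clients):
    """Server-side sketch: on an evaluation instruction for `user_id`, adjust
    that object's vote value, re-rank the live room leaderboard, and push the
    adjusted list to every connected client."""
    votes[user_id] = votes.get(user_id, 0) + 1
    ranking = sorted(votes, key=votes.get, reverse=True)
    for outbox in clients:
        outbox.append(ranking)  # stand-in for sending the list over the network
    return ranking
```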
The anchor client 110 may be, but is not limited to, various personal computers, notebook computers, smart phones, and tablet computers, the server 120 may be implemented by an independent server or a server cluster composed of a plurality of servers, and the viewer client 130 may be, but is not limited to, various personal computers, notebook computers, smart phones, and tablet computers.
Fig. 2 is a flowchart illustrating a live interaction method according to an exemplary embodiment, where, as shown in fig. 2, the live interaction method is used in the anchor client 110 or the first object client 132 or the second object client 134, and includes the following steps:
in step S210, special effect images of a number of first objects in the live room object list are presented.
The special effect image is obtained by performing special effect processing on the image data of the first object. The image data may be avatar data of the first object, or a whole-body image or an image of a specific part (such as a hand, an eye, or the face) of the first object captured by an image capturing apparatus. If the image data is avatar data or face image data of the first object, the special effect image may be a magic expression. If the image data is a whole-body image of the first object, the special effect image may be a character rendered with a special effect. The live room object list is a list of first objects displayed in the live room interface and sorted according to a predetermined rule; for example, the live room object list may be the ranking list of the live broadcast room. The first object may be an interactive audience member participating in the live interaction, and the interactive audience may be all or part of the audience watching the live broadcast.
Specifically, the anchor initiates a live broadcast interaction activity through the anchor client, and the first object can enter the live broadcast room through searching, hotspot recommendation, and other entry points, and then participate in the live interaction activity. The first object may upload its image data to a server, and the server may perform special effect processing on the image data to obtain the special effect image of the first object. If a plurality of first objects participate in the live interaction activity, the server performs special effect processing on the image data of each of them to obtain the corresponding special effect images. The server may send the special effect images to the first object clients and the anchor client, and may also send them to the clients of other objects watching the live room. Alternatively, the image data of the first object can be processed locally by the first object client to obtain the special effect image, which is then uploaded to the server; the server may forward it to the anchor client and to the clients of other objects watching the live room. In either case, the first object clients, the anchor client, and the clients of other objects show the special effect images sent by the server, with the special effect images presented in the live room object list.
In step S220, the vote value of the first object in the live view object list is adjusted according to the evaluation instruction for the special effect image of the first object.
In step S230, the vote value adjusted live-room object list is presented.
Voting is a way for the watching objects in the live broadcast room to vote on each first object in the live room object list according to their preferences or a preset rule. An evaluation instruction is an instruction triggered when a watching object in the live broadcast room performs a voting operation through its client. An evaluation instruction may increase the vote value of the first object, and may likewise decrease it. For example, the watching objects may evaluate whether the special effect image of each first object is beautiful: if the special effect image of a certain first object is judged beautiful, a positive vote can be cast for that first object, such as a "like"; if it is judged not beautiful, a negative vote can be cast, such as a "dislike" ("stepping on" the first object). Specifically, a watching object in the live broadcast room may browse the live room object list, select the special effect image of a first object from the list, and perform an evaluation operation, thereby triggering an evaluation instruction for that special effect image. When an evaluation instruction for the special effect image of the first object is received, the vote value of the first object is adjusted. The first object clients, the anchor client, and the clients of other objects can then display the live room object list with the adjusted vote values.
After a vote value is adjusted, the adjusted vote value can be displayed in the designated area of the first object, and the special effect images in the live room object list can be re-sorted according to the vote values, so that the first object clients, the anchor client, and the clients of other objects display the re-sorted live room object list. It will be appreciated that the other first objects in the list that were not evaluated can also be regarded as adjusted, simply with a vote-value change of zero.
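The vote adjustment and re-sorting described above can be sketched as follows. This is a minimal illustrative Python sketch, not the disclosed implementation; the class and field names (`LeaderboardEntry`, `apply_evaluation`, and so on) are assumptions made for the example.

```python
from dataclasses import dataclass


@dataclass
class LeaderboardEntry:
    """One first object in the live room object list."""
    object_id: str
    effect_image_url: str
    votes: int = 0


class LiveRoomLeaderboard:
    """Holds the special effect images; votes may go up (positive
    evaluation) or down (negative evaluation)."""

    def __init__(self):
        self.entries = {}  # object_id -> LeaderboardEntry

    def add_entry(self, object_id, effect_image_url):
        self.entries[object_id] = LeaderboardEntry(object_id, effect_image_url)

    def apply_evaluation(self, object_id, delta):
        # delta > 0 for a positive vote ("like"), delta < 0 for a negative vote
        self.entries[object_id].votes += delta

    def ranked(self):
        # Re-sort by adjusted vote value, highest first
        return sorted(self.entries.values(), key=lambda e: e.votes, reverse=True)
```

A client displaying the list would simply call `ranked()` after each evaluation instruction and redraw the entries in the returned order.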
In the live broadcast interaction method, special effect images of a plurality of first objects in the live room object list are displayed, each special effect image being obtained by performing special effect processing on the image data of the corresponding first object; the vote value of a first object in the list is adjusted according to the evaluation instruction for its special effect image; and the live room object list with the adjusted vote values is displayed. This improves the interest of the interaction and diversifies the live interaction modes: the watching objects of the live broadcast room can evaluate one another through the special effect images in the live room object list, achieving live interaction among the watching objects themselves.
In an exemplary embodiment, the image data is determined in any one of the following ways:
In response to a shooting instruction in a live broadcast room interface of the first object client, image data obtained by shooting the first object is acquired. Or, the image data of the first object is obtained from a gallery, wherein the gallery stores images of the first object on the client or the server. Or, the acquired avatar data of the first object is used as the image data.
Specifically, the shooting may be taking a photograph or recording a video. The shooting instruction is an instruction issued by the first object to the first object client to perform shooting, and the first object may trigger it in the form of voice, shaking, a single click, a double click, or the like. In response to the shooting instruction, the first object client starts a shooting process, shoots the first object, and obtains the image data of the first object through the shooting component.
The first object client side is provided with a gallery, a plurality of pictures or video files of the first object are stored in the gallery of the first object client side, image data can be obtained by selecting the pictures from the gallery of the first object client side, and one picture can be captured from the video file to serve as the image data of the first object.
The server is provided with a gallery, pictures or video files of a plurality of first objects are stored in the gallery of the server, image data can be obtained by selecting the pictures from the gallery of the server, and one picture can be captured from the video file to serve as the image data of the first object.
The first object has corresponding avatar data, and the avatar data of the first object can be directly acquired as image data.
In this embodiment, the image data of the first object is acquired through various modes, and the diversity of the special effect image is improved, so that the interest of live broadcast interaction is improved.
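The three acquisition modes above amount to a simple dispatch on the image source. The sketch below is illustrative only; the function name and parameter names are assumptions, and `camera` and `gallery` stand in for whatever client-side components actually provide shooting and gallery access.

```python
def acquire_image_data(source, *, camera=None, gallery=None, avatar=None):
    """Return raw image data for the first object from one of the three
    acquisition modes: shooting, gallery selection, or existing avatar."""
    if source == "shoot":       # response to a shooting instruction
        return camera.capture()
    if source == "gallery":     # picture stored on the client or the server
        return gallery.pick()
    if source == "avatar":      # existing avatar data of the first object
        return avatar
    raise ValueError(f"unknown image source: {source}")
```

Whichever branch runs, the returned bytes are then handed to the special effect processing step.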
In an exemplary embodiment, each of the special effect images is obtained by performing special effect processing on image data of each of the first objects based on the dynamic special effect material.
Specifically, the image data of each first object may be subjected to special effect processing according to static special effect material to obtain a corresponding special effect image. Static special effect material refers to makeup templates and effect templates for beautifying an image, such as elegant makeup, pure makeup, European-style makeup, natural makeup, and no-makeup looks. The image data of each first object may also be subjected to special effect processing according to dynamic special effect material to obtain a corresponding special effect image. Dynamic special effect material can be a template with an animation effect, such as a funny effect, a face-changing effect, a beautification effect, or a virtual magic expression. The dynamic special effect material may correspond to the live broadcast scene of the live broadcast room, such as funny special effect material or blessing special effect material corresponding to a live scene with a holiday theme.
In the embodiment, the image data of each first object is subjected to special effect processing by using the dynamic special effect material, and a special effect image with a specific effect is displayed in a live broadcast room interface, so that the image data of the first object is more vivid, the visual effect of the live broadcast room interface is improved, a diversified live broadcast interaction mode is realized, the dwell time of a user in a live broadcast room is favorably prolonged, and the user retention rate of live broadcast application is improved.
In an exemplary embodiment, as shown in fig. 3, before showing the special effect images of the first objects in the live-air object list, the method further includes the following steps:
in step S310, in response to the upload instruction of each special effect image, an initial display position of each special effect image in the live view object list is determined according to the trigger time of the upload instruction.
In step S210, special effect images of a plurality of first objects in the live view object list are displayed, which may specifically be implemented by the following steps:
in step S320, each special effect image in the live view object list is displayed according to the initial display position.
Specifically, the image data of the first object is obtained through the first object client. A piece of dynamic special effect material can be randomly selected from the dynamic special effect material library, or the corresponding dynamic special effect material can be selected from the library according to the behavior data or user portrait data of the first object. The acquired dynamic special effect material is used to perform special effect processing on the image data of the first object to obtain its special effect image. In response to an upload instruction for the special effect image, the first object client uploads the special effect image to the server. The server acquires the trigger time of the upload instruction and determines the initial display position of the special effect image in the live room object list according to that trigger time, so that the special effect image is displayed in the list at its initial display position. For example, the earlier a special effect image is uploaded, the earlier its initial display position in the list. It is understood that after voting on the special effect images begins, the special effect images in the list can instead be arranged and displayed according to the vote values of the first objects.
In this embodiment, the initial display position of each special effect image in the live broadcast object list is determined by the time when the first object uploads each special effect image, so that the enthusiasm of the first object to participate in live broadcast activities can be mobilized, and the number of objects watched in the live broadcast room is increased.
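The rule "earlier upload time, earlier initial position" is an insertion into a time-ordered list. The following is a minimal sketch under that assumption; the class name and the 0-based position convention are illustrative choices, not part of the patent.

```python
import bisect


class UploadOrderedList:
    """Live room object list ordered by upload trigger time: the earlier a
    special effect image was uploaded, the earlier its initial position."""

    def __init__(self):
        self._items = []  # kept sorted as (trigger_time, object_id) tuples

    def add(self, trigger_time, object_id):
        entry = (trigger_time, object_id)
        bisect.insort(self._items, entry)          # insert in time order
        return self._items.index(entry)            # 0-based initial position

    def display_order(self):
        return [object_id for _, object_id in self._items]
```

Once voting starts, the server would switch from this time-based order to the vote-value order described above.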
In an exemplary embodiment, as shown in fig. 4, the method further comprises the steps of:
in step S410, the number of special effect images in the live view object list is acquired.
In step S420, if the number of special effect images reaches the display upper limit threshold, a first prompt message is displayed in the live broadcast interface of the first object client.
The first prompt message includes information prompting that the number of special effect images has reached the upper limit. The display upper limit threshold refers to the maximum number of first objects allowed to participate in the live interaction. It may be configured manually by the anchor when creating the live broadcast room, or may be a pre-configured default threshold. Specifically, several special effect images already exist in the live room object list. After the first object client obtains a special effect image, it acquires, in response to the upload instruction for that special effect image, the current number of special effect images in the live room object list. This number is compared with the display upper limit threshold: if it has reached the threshold, the first prompt message is displayed in the live broadcast room interface of the first object client; if it has not, the special effect image is uploaded to the server, which may send it to the anchor client and to the clients of other objects watching the live room. The first object clients, the anchor client, and the clients of other objects then show the special effect image in the live room object list.
In the embodiment, the number of the objects participating in the live broadcast interaction is controlled by configuring the corresponding display upper limit threshold value for the live broadcast room, so that the individual requirements of the anchor on the number of the interactive participants in the live broadcast scene are met.
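The capacity check before an upload can be sketched as a single guard. This is an illustrative assumption of how the check might look; the function name, the returned message text, and the in-memory list are all hypothetical.

```python
def try_upload(effect_list, new_image, limit):
    """Accept the upload only while the list is below the display upper
    limit threshold; otherwise return the first prompt message."""
    if len(effect_list) >= limit:
        return False, "the number of special effect images has reached the upper limit"
    effect_list.append(new_image)
    return True, "uploaded"
```

A client would show the returned message as the first prompt message when the boolean is false, and forward the image to the server otherwise.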
In an exemplary embodiment, before adjusting the vote value of the first object in the live-air object list according to the evaluation instruction of the special effect image of the first object, the method further includes: and showing a voting button in a specified area of the special effect image.
The evaluation instruction is an instruction generated when a second object touches the voting button. The second object is a watching object of the live broadcast that votes on a first object; the second object may also be another first object in the live room object list. Specifically, as shown in fig. 5, a voting button is displayed in the live broadcast room interface, in a designated area of the special effect image, such as the left area, the right area, or the upper left corner area of the special effect image. Touching the voting button corresponding to the special effect image of any first object casts a vote for that first object. It should be noted that the live room object list can be hidden or collapsed in the live broadcast room interface; when collapsed, the list may appear as a pendant or an icon.
In the embodiment, the voting buttons are displayed in the specified area of the special effect image, so that the manner of participating in interaction is provided for watching objects in a live broadcast room, and the method is simple and easy to operate.
In an exemplary embodiment, as shown in fig. 6a, adjusting a vote value of a first object in a live view object list according to an evaluation instruction of a special effect image of the first object includes:
in step S610, if an enlargement instruction for the special effect image triggered by a touch of a second object is received, the enlarged special effect image is displayed in the live broadcast room interface of the second object client, and a voice signal uttered by the second object is acquired;
in step S620, the vote value of the first object in the live-air object list is adjusted according to the voice signal.
The enlargement instruction is an instruction for zooming in on a special effect image in the live room object list; it can be triggered by a watching object of the live broadcast in forms such as voice, shaking, a single or double click on the screen, spreading a preset number of fingers on the screen, or a gesture change. As shown in fig. 6b, performing a five-finger spread in the display area of a special effect image triggers an enlargement instruction for that image and the image is enlarged; performing a five-finger pinch in the display area reduces the enlarged image again. Specifically, a second object enters the live broadcast room, and the live room object list, containing a plurality of special effect images, is displayed in the live broadcast room interface of the second object client. When the second object triggers an enlargement instruction for a special effect image, the enlarged special effect image is displayed in the live broadcast room interface of the second object client. If the special effect image includes a scene sound effect, the scene sound effect is acquired and played. After the special effect image is enlarged, a voice signal uttered by the second object may be acquired. The voice signal can be an audio signal corresponding to praise, such as "beautiful" or "great", or an audio signal corresponding to disparagement, such as "ugly". The voice signal is recognized, the corresponding instruction content is obtained, and the vote value of the first object in the live room object list is adjusted accordingly.
In this embodiment, gesture recognition and speech recognition are combined to add live interaction modes and improve the interest of the live broadcast.
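The last step, mapping recognized speech to a vote adjustment, can be sketched as a keyword lookup on the transcript. This assumes an upstream speech recognizer has already produced text; the word sets and the +1/-1 convention are illustrative assumptions.

```python
POSITIVE_WORDS = {"beautiful", "great", "like"}
NEGATIVE_WORDS = {"ugly", "dislike"}


def vote_delta_from_speech(transcript):
    """Map a recognized speech transcript to a vote-value adjustment:
    praise words vote up, disparaging words vote down, and anything
    else leaves the vote value unchanged."""
    words = set(transcript.lower().split())
    if words & POSITIVE_WORDS:
        return 1
    if words & NEGATIVE_WORDS:
        return -1
    return 0
```

The returned delta would then be applied to the first object's vote value in the live room object list.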
In an exemplary embodiment, as shown in fig. 7, the method further comprises the steps of:
in step S710, when the voting cut-off time is reached, a first object whose voting value satisfies a preset condition is determined as a first target object according to the voting value of each first object.
In step S720, mic-connect communication between the account of the first target object and the anchor account is established.
The voting deadline refers to the maximum duration during which participation in the live interaction is allowed. The voting deadline can be configured manually by the anchor when creating the live broadcast room, or can be a pre-configured default. The preset condition is a condition that a first object must meet to enter mic-connect communication with the anchor. For example, if the live room object list is the ranking list of the live broadcast room, the preset condition can be that the first-ranked object connects with the anchor, or that the top three objects in the ranking list connect with the anchor, and so on. Specifically, when the voting deadline is reached, the server acquires the vote value of each first object and determines the first objects whose vote values meet the preset condition as first target objects. There may be one or more first target objects. If there is one, that first object is obtained from the live room object list as the first target object, and mic-connect communication between the account of the first target object and the anchor account is established. If there are several, the corresponding number of first objects are obtained from the list as first target objects, and mic-connect communication between each of their accounts and the anchor account is established at different moments. For example, the anchor may connect to each first target object in sequence from high to low in the list order, or in a random order. A preset number of first objects may also be selected as first target objects from high to low according to their vote values.
In this embodiment, the first target object to be connected with the anchor is determined according to the vote value of each first object in the live room object list, which enriches the interaction modes between the anchor and the watching objects of the live broadcast room, helps increase the number of viewers in the live broadcast room, and prolongs the dwell time of the viewers.
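Selecting the target objects at the deadline under a "top-N of the ranking list" preset condition can be sketched in a few lines. The function name and the top-N formulation are illustrative; the patent allows other preset conditions.

```python
def select_mic_targets(vote_table, top_n=1):
    """At the voting deadline, return the ids of the first objects whose
    vote values satisfy a top-N preset condition, highest votes first."""
    ranked = sorted(vote_table.items(), key=lambda kv: kv[1], reverse=True)
    return [object_id for object_id, _ in ranked[:top_n]]
```

With `top_n=1` this yields the single first target object; with `top_n=3` it yields the top three, whom the anchor may then connect to in sequence or at random.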
In an exemplary embodiment, before establishing the mic-connect communication between the account of the first target object and the anchor account, the method further includes: displaying the special effect image of the first target object and/or a second prompt message in the live broadcast room interface of the anchor client, wherein the second prompt message includes information prompting the anchor to connect with the first target object.
Specifically, when the voting deadline is reached, a first target object is determined from the live room object list. The special effect image of the first target object can be displayed in the live broadcast room interface of the anchor client, so that the interaction result is shown to the anchor. The second prompt message may also be displayed in the live broadcast room interface of the anchor client, and it includes information prompting the anchor to connect with the first target object. The second prompt message can be a prompt phrase that adds interest to the interaction, such as "I'm top of the leaderboard, won't you chat with me?" or "I'm this beautiful, come chat with me". The special effect image of the first target object and the second prompt message can also be displayed together in the live broadcast room interface of the anchor client.
In an exemplary embodiment, timing starts from the moment the second prompt message is displayed in the live broadcast room interface of the anchor client; if the timed duration reaches the mic-connect waiting duration, the mic-connect communication between the account of the first target object and the anchor account is established automatically.
In this embodiment, setting the mic-connect waiting duration reduces the waiting time of the viewers in the live broadcast room, improves the efficiency of interacting with the anchor, helps increase the number of viewers in the live broadcast room, and prolongs the dwell time of the viewers.
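The auto-connect timeout above reduces to one comparison once the prompt display time is recorded. A minimal sketch, assuming times are in seconds and the caller polls or schedules the check; the function name is hypothetical.

```python
def should_auto_connect(prompt_shown_at, now, wait_duration):
    """Auto-establish the mic-connect once the second prompt message has
    been on screen for the full mic-connect waiting duration without an
    explicit response from the anchor (all times in seconds)."""
    return (now - prompt_shown_at) >= wait_duration
```

When this returns true, the client would proceed to establish the mic-connect communication between the first target object's account and the anchor account.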
In an exemplary embodiment, the method further comprises the steps of: and if the voting deadline time is reached, displaying the final ranking list.
And the final ranking list is obtained by performing ranking processing on the basis of the final voting value of each first object. Specifically, if the voting deadline is reached, the server may obtain the voting value of each first object, and rank the special effect images of each first object according to the voting value of each first object, so as to obtain a final ranking list. The final leaderboard may be the first list of objects generated by ranking the vote values from high to low. The final ranking list can be displayed through the first object client, the second object client and the anchor client, and the final ranking list can be located in the middle area, the top area or the bottom area and the like in the live broadcast room interface.
In an exemplary embodiment, establishing the mic-connect communication between the account of the first target object and the anchor account includes: receiving a confirmation message for the mic-connect request sent by the anchor client, and establishing the mic-connect communication between the account of the first target object and the anchor account according to the confirmation message.
Specifically, the first target object may trigger a mic-connect request through the first object client, which sends the request to the anchor client through the server. The anchor may trigger a permission instruction through the anchor client. In response to the permission instruction, the server sends a confirmation message for the mic-connect request to the first object client, so that the first object client establishes the mic-connect communication between the account of the first target object and the anchor account according to the confirmation message.
In this embodiment, the first object client establishes the mic-connect communication between the account of the first target object and the anchor account only after receiving the confirmation message from the anchor client, so that the anchor can manage the first objects in a unified manner.
In an exemplary embodiment, as shown in fig. 8, before establishing the mic-connect communication between the account of the first target object and the anchor account, the method further comprises:
in step S810, in response to a mic-change instruction for changing the first target object, a special effect image of a second target object is displayed in the live broadcast room interface of the anchor client according to the mic-change instruction.
In step S820, timing starts from the moment the live broadcast room interface of the anchor client displays the special effect image of the second target object; if the timed duration reaches the mic-connect waiting duration, the mic-connect communication between the account of the second target object and the anchor account is established.
The second target object is any first object in the final ranking list other than the first target object. The mic-change instruction is an instruction for changing the mic-connect target while the anchor has not yet established mic-connect communication with a watching object of the live broadcast room; it can be triggered in forms such as voice, shaking, a single click, or a double click. Specifically, the anchor triggers a mic-change instruction for changing the first target object through voice, shaking, a single click, a double click, or the like; in response, the special effect image of the second target object is displayed in the live broadcast room interface of the anchor client according to the mic-change instruction. By previewing the special effect image of the second target object, the anchor considers whether to connect with the second target object. Timing starts from the moment the live broadcast room interface of the anchor client displays the special effect image of the second target object, and if the timed duration reaches the mic-connect waiting duration, the mic-connect communication between the account of the second target object and the anchor account is established automatically.
In this embodiment, the mic-change instruction provides the anchor with alternative mic-connect candidates, and setting the mic-connect waiting duration reduces the waiting time of the viewers in the live broadcast room. This balances the respective needs of the anchor and the viewers, improves the efficiency of interacting with the anchor, helps increase the number of viewers in the live broadcast room, and prolongs the dwell time of the viewers.
In an exemplary embodiment, as shown in fig. 9, before presenting the special effects image of the second target object in the live view interface of the anchor client, the method further comprises the steps of:
in step S910, the number of times the anchor has triggered the mic-change instruction is acquired.
In step S920, when the number of times the anchor has triggered the mic-change instruction reaches the mic-change count threshold, a third prompt message is displayed in the live broadcast room interface of the anchor client, where the third prompt message includes information prompting that the number of mic changes has reached the upper limit.
The mic-change count threshold refers to the maximum number of times the anchor is allowed to change the mic-connect target. It can be configured manually by the anchor, within a value range provided in advance, when creating the live broadcast room, or can be a pre-configured default threshold. Specifically, when the anchor triggers a mic-change instruction, the number of times the anchor has triggered the mic-change instruction is acquired in response and compared with the mic-change count threshold. If the count has reached the threshold, the third prompt message is displayed in the live broadcast room interface of the anchor client, including information prompting that the number of mic changes has reached the upper limit. If the count has not reached the threshold, the special effect image of a third target object is displayed in the live broadcast room interface of the anchor client, the third target object being any first object in the final ranking list other than the first and second target objects. Timing starts from the moment the live broadcast room interface of the anchor client displays the special effect image of the third target object, and if the timed duration reaches the mic-connect waiting duration, the mic-connect communication between the account of the third target object and the anchor account is established.
In this embodiment, setting the mic-change count threshold and the mic-connect waiting duration increases the probability that the first objects in the final ranking list connect with the anchor, reduces the waiting time of the viewers in the live broadcast room, improves the efficiency of interacting with the anchor, and prolongs the dwell time of the viewers.
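The mic-change counter can be sketched as a small stateful guard on the anchor client. This is an illustrative sketch; the class name and message texts are assumptions made for the example.

```python
class MicChangeController:
    """Enforces the mic-change count threshold for the anchor."""

    def __init__(self, max_changes):
        self.max_changes = max_changes  # configured per live room, or a default
        self.changes = 0

    def request_change(self):
        """Return (allowed, message) for one mic-change attempt."""
        if self.changes >= self.max_changes:
            # Third prompt message: count reached the upper limit
            return False, "the number of mic changes has reached the upper limit"
        self.changes += 1
        return True, "showing the special effect image of the next target object"
```

When `request_change` allows the attempt, the client shows the next target object's special effect image and restarts the waiting-duration timer; when it refuses, the message is shown as the third prompt message.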
In an exemplary embodiment, before responding to the mic-switch instruction for switching away from the first target object, the method further comprises: displaying a mic-switch button in the live broadcast room interface of the anchor client.
The mic-switch instruction is generated based on the anchor touching the mic-switch button. Specifically, as shown in fig. 5, the mic-switch button is displayed in a designated area of the live broadcast room interface of the anchor client, such as a left area, a right area, or a lower-right corner area of the live broadcast room interface. When the anchor has not yet established mic-connection communication with a viewing object in the live broadcast room, the anchor can switch the mic-connection object by tapping the mic-switch button, and a special effect image of the second target object is then displayed in the live broadcast room interface of the anchor client.
In this embodiment, displaying the mic-switch button in a designated area of the live broadcast room interface enriches the interaction modes between the anchor and the viewing objects in the live broadcast room, and the operation is simple and easy to perform.
Fig. 10 is a flowchart illustrating a live broadcast interaction method according to an exemplary embodiment, including the following steps:
In step S1002, in response to an upload instruction for each special effect image, an initial display position of each special effect image in the live broadcast room object list is determined according to the trigger time of the upload instruction.
The special effect image may be obtained by performing special effect processing on the image data of the first object; further, each special effect image may be obtained by performing special effect processing on the image data of each first object based on dynamic special effect material. The image data is determined in any one of the following ways: in response to a shooting instruction in a live broadcast interface of the first object client, acquiring image data obtained by shooting the first object; or acquiring image data of the first object from a gallery, where the gallery stores images of the first object on the client or a server; or acquiring avatar data of the first object.
In step S1004, each special effect image in the live broadcast room object list is displayed according to the initial display position.
In step S1006, the voting value of the first object in the live broadcast room object list is adjusted according to an evaluation instruction for the special effect image of the first object.
Specifically, a voting button may be displayed in a designated area of the special effect image; the evaluation instruction is an instruction generated based on a second object touching the voting button, where the second object is a viewing object in the live broadcast room.
Similarly, an enlargement instruction for a special effect image may be received through gesture recognition, voice recognition, client device attitude recognition, touch detection, or other means. For example, if an enlargement instruction of the second object for the special effect image is received, the enlarged special effect image is displayed in the live broadcast interface of the second object client, and a voice signal sent by the second object is collected; the voting value of the first object in the live broadcast room object list is then adjusted according to the voice signal.
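The voice-driven vote adjustment in step S1006 can be sketched as below, under the assumption that the collected voice signal has already been recognized into a token list by the client; the function name, keyword set, and increment rule are illustrative, not specified by the disclosure.

```python
# Illustrative set of supporting keywords recognized from the voice signal.
SUPPORT_KEYWORDS = {"vote", "like", "support", "+1"}

def adjust_vote_by_voice(object_list, object_id, voice_tokens):
    """Increase the first object's voting value in the live broadcast room
    object list when the second object's voice signal contains a
    supporting keyword. `object_list` maps object id -> voting value."""
    increment = sum(1 for tok in voice_tokens if tok.lower() in SUPPORT_KEYWORDS)
    if increment:
        object_list[object_id] = object_list.get(object_id, 0) + increment
    return object_list
```

A real system would derive `voice_tokens` from a speech recognition service and likely rate-limit adjustments per viewer.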
In step S1008, the live broadcast room object list after voting value adjustment is presented.
In step S1010, if the voting deadline is reached, the final ranking list is displayed; the final ranking list is obtained by ranking the first objects based on their final voting values.
In step S1012, when the voting deadline is reached, a first object whose voting value meets a preset condition is determined as the first target object based on the voting value of each first object.
In step S1014, a special effect image of the first target object and/or a second prompt message is displayed in the live broadcast room interface of the anchor client, where the second prompt message includes information prompting the anchor to connect mics with the first target object.
In step S1016, in response to a mic-switch instruction for switching away from the first target object, a special effect image of the second target object is displayed in the live broadcast room interface of the anchor client according to the mic-switch instruction.
The second target object is any first object in the final ranking list other than the first target object. A mic-switch button may be displayed in the live broadcast room interface of the anchor client; the mic-switch instruction is an instruction generated based on the anchor touching the mic-switch button.
In step S1018, timing starts from the moment the special effect image of the second target object is displayed in the live broadcast room interface of the anchor client, and if the timed duration reaches the mic-connection waiting duration, mic-connection communication between the account of the second target object and the anchor account is established.
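Steps S1002 through S1012 above (upload ordering, vote adjustment, and final ranking) can be sketched end to end as follows. All class and method names are illustrative assumptions; the sketch models only the list state, not the UI or the mic-connection transport.

```python
from dataclasses import dataclass, field

@dataclass
class Entry:
    object_id: str
    upload_time: float   # trigger time of the upload instruction
    votes: int = 0

@dataclass
class LiveRoomList:
    entries: list = field(default_factory=list)

    def upload(self, object_id, upload_time):
        """S1002/S1004: initial display position follows upload order."""
        self.entries.append(Entry(object_id, upload_time))
        self.entries.sort(key=lambda e: e.upload_time)

    def vote(self, object_id, delta=1):
        """S1006/S1008: adjust a first object's voting value."""
        for e in self.entries:
            if e.object_id == object_id:
                e.votes += delta

    def final_ranking(self):
        """S1010/S1012: rank by final voting value; the top entry is the
        first target object (preset condition: highest vote value)."""
        return sorted(self.entries, key=lambda e: e.votes, reverse=True)
```

Here the "preset condition" is simplified to "highest voting value"; the disclosure leaves the exact condition open.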
Fig. 11 is a flowchart illustrating a live broadcast interaction method according to an exemplary embodiment. As shown in fig. 11, the live broadcast interaction method is applied to the anchor client 110 and includes the following steps:
In step S1110, special effect images of a plurality of first objects in the live broadcast room object list are displayed; each special effect image is obtained by performing special effect processing on the image data of the corresponding first object.
In step S1120, the voting value of the first object in the live broadcast room object list is adjusted according to an evaluation instruction for the special effect image of the first object.
In step S1130, the live broadcast room object list after voting value adjustment is presented.
In step S1140, if the voting deadline is reached, the special effect image and/or the second prompt message of the first target object are displayed.
The first target object is a first object in the live broadcast room object list whose voting value meets a preset condition, and the second prompt message includes information prompting the anchor to connect mics with the first target object.
The steps in this embodiment have been described in detail in the foregoing method embodiments and are not repeated here.
It should be understood that, although the steps in the above flowcharts are shown in an order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, the execution order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in the above flowcharts may include multiple sub-steps or stages; these sub-steps or stages are not necessarily performed at the same moment, but may be performed at different moments, and their execution order is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
Fig. 12 is a block diagram illustrating a live interaction device, according to an example embodiment. Referring to fig. 12, the apparatus includes a special effect image presentation module 1210, a vote value adjustment module 1220, and an object list presentation module 1230.
A special effect image presentation module 1210 configured to present special effect images of a plurality of first objects in the live broadcast room object list; each special effect image is obtained by performing special effect processing on the image data of the corresponding first object;
a vote value adjusting module 1220 configured to adjust the voting value of the first object in the live broadcast room object list according to an evaluation instruction for the special effect image of the first object;
and an object list presentation module 1230 configured to present the live broadcast room object list after voting value adjustment.
In an exemplary embodiment, the image data is determined in any one of the following ways:
responding to a shooting instruction in a live broadcast interface of a first object client, and acquiring image data obtained by shooting the first object;
acquiring image data of the first object from a gallery, wherein the gallery stores images of the first object on the client or a server;
and acquiring the head portrait data of the first object.
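The three image-data sources listed above can be tried in a simple fallback order. This is a sketch under the assumption that each source is exposed to the caller as an optional callable; the function and parameter names are illustrative, not from this disclosure.

```python
def resolve_image_data(shoot=None, gallery=None, avatar=None):
    """Return the first available image data among: a freshly shot image,
    a stored gallery image, and the avatar data of the first object.
    Each argument is a zero-argument callable returning bytes or None."""
    for source in (shoot, gallery, avatar):
        if source is None:
            continue
        data = source()
        if data is not None:
            return data
    raise ValueError("no image data available for the first object")
```

The precedence shown (camera, then gallery, then avatar) is one reasonable choice; the disclosure treats the three sources as alternatives without fixing an order.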
In an exemplary embodiment, the live interactive apparatus further includes: an initial position determining module configured to, in response to an upload instruction for each special effect image, determine an initial display position of each special effect image in the live broadcast room object list according to the trigger time of the upload instruction;
and the object list presentation module is further configured to present each special effect image in the live broadcast room object list according to the initial display position.
In an exemplary embodiment, the live interactive apparatus further includes:
an image quantity acquisition module configured to acquire the number of special effect images in the live broadcast room object list;
and a first message display module configured to display a first prompt message in the live broadcast interface of the first object client if the number of special effect images reaches a display upper limit threshold, where the first prompt message includes information prompting that the number of special effect images has reached the upper limit.
In an exemplary embodiment, the live interactive apparatus further includes:
and a voting button display module configured to display a voting button in a designated area of the special effect image, where the evaluation instruction is an instruction generated based on a second object touching the voting button, and the second object is a viewing object in the live broadcast room.
In an exemplary embodiment, the vote value adjusting module is further configured to, if an enlargement instruction for the special effect image triggered by a second object through touch is received, display the enlarged special effect image in the live broadcast interface of the second object client and collect a voice signal sent by the second object; and adjust the voting value of the first object in the live broadcast room object list according to the voice signal.
In an exemplary embodiment, each of the special effect images is obtained by performing special effect processing on image data of each of the first objects based on a dynamic special effect material.
In an exemplary embodiment, the live interactive apparatus further includes:
and a first target object determining module configured to, if the voting deadline is reached, determine a first object whose voting value meets a preset condition as the first target object according to the voting value of each first object.
A first mic-connection module configured to establish mic-connection communication between the account of the first target object and the anchor account.
In an exemplary embodiment, the live interactive apparatus further includes:
a final ranking list display module configured to display the final ranking list if the voting deadline is reached; the final ranking list is obtained by ranking the first objects based on their final voting values.
In an exemplary embodiment, the first mic-connection module is further configured to receive a confirmation message for a mic-connection request sent by the anchor client, and establish mic-connection communication between the account of the first target object and the anchor account according to the confirmation message.
In an exemplary embodiment, the live interactive apparatus further includes:
a second message display module configured to display, in the live broadcast room interface of the anchor client, a second prompt message, where the second prompt message includes information prompting the anchor to connect mics with the first target object; or
a first target special effect image display module configured to display the special effect image of the first target object in the live broadcast room interface of the anchor client.
In an exemplary embodiment, the first mic-connection module is further configured to start timing from the moment the second prompt message is displayed in the live broadcast room interface of the anchor client, and establish mic-connection communication between the account of the first target object and the anchor account if the timed duration reaches the mic-connection waiting duration.
In an exemplary embodiment, the live interactive apparatus further includes:
a mic-switch module configured to, in response to a mic-switch instruction for switching away from the first target object, display a special effect image of the second target object in the live broadcast room interface of the anchor client according to the mic-switch instruction; the second target object is any first object in the final ranking list other than the first target object;
and a second mic-connection module configured to start timing from the moment the special effect image of the second target object is displayed in the live broadcast room interface of the anchor client, and establish mic-connection communication between the account of the second target object and the anchor account if the timed duration reaches the mic-connection waiting duration.
In an exemplary embodiment, the live interactive apparatus further includes:
a mic-switch button display module configured to display a mic-switch button in the live broadcast room interface of the anchor client; the mic-switch instruction is generated based on the anchor touching the mic-switch button.
In an exemplary embodiment, the live interactive apparatus further includes:
a count acquisition module configured to acquire the number of times the anchor has triggered mic-switch instructions;
and a third message display module configured to display a third prompt message in the live broadcast room interface of the anchor client when the number of times the anchor has triggered mic-switch instructions reaches the mic-switch count threshold, where the third prompt message includes information prompting that the number of mic switches has reached the upper limit.
Fig. 13 is a block diagram illustrating a live interaction device, according to an example embodiment. Referring to fig. 13, the apparatus includes a special effect image presentation module 1310, a vote value adjustment module 1320, an object list presentation module 1330, and an image message presentation module 1340.
A special effect image presentation module 1310 configured to perform a special effect image presentation of a number of first objects in a live view object list; the special effect image is obtained by carrying out special effect processing on the image data of the first object;
a vote value adjusting module 1320 configured to, if an evaluation instruction for the special effect image of a first object is received, adjust the voting value of the first object in the live broadcast room object list;
an object list presentation module 1330 configured to present the live broadcast room object list after voting value adjustment;
and an image message presentation module 1340 configured to present the special effect image of the first target object and the second prompt message if the voting deadline is reached; the first target object is a first object in the live broadcast room object list whose voting value meets a preset condition, and the second prompt message includes information prompting the anchor to connect mics with the first target object.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 14 is a block diagram illustrating an apparatus 1400 for live interaction, according to an example embodiment. For example, the device 1400 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a gaming console, a tablet device, a medical device, a fitness device, a personal digital assistant, and so forth.
Referring to fig. 14, device 1400 may include one or more of the following components: a processing component 1402, a memory 1404, a power component 1406, a multimedia component 1408, an audio component 1410, an input/output (I/O) interface 1412, a sensor component 1414, and a communication component 1416.
The processing component 1402 generally controls the overall operation of the device 1400, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. Processing component 1402 may include one or more processors 1420 to execute instructions to perform all or a portion of the steps of the methods described above. Further, processing component 1402 can include one or more modules that facilitate interaction between processing component 1402 and other components. For example, the processing component 1402 can include a multimedia module to facilitate interaction between the multimedia component 1408 and the processing component 1402.
The memory 1404 is configured to store various types of data to support operation at the device 1400. Examples of such data include instructions for any application or method operating on device 1400, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 1404 may be implemented by any type of volatile or non-volatile storage device or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 1406 provides power to the various components of the device 1400. The power components 1406 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 1400.
The multimedia component 1408 includes a screen that provides an output interface between the device 1400 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 1408 includes a front-facing camera and/or a rear-facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 1400 is in an operating mode, such as a shooting mode or a video mode. Each of the front camera and the rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 1410 is configured to output and/or input audio signals. For example, the audio component 1410 includes a Microphone (MIC) configured to receive external audio signals when the device 1400 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 1404 or transmitted via the communication component 1416. In some embodiments, audio component 1410 further includes a speaker for outputting audio signals.
I/O interface 1412 provides an interface between processing component 1402 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 1414 includes one or more sensors for providing various aspects of status assessment for the device 1400. For example, the sensor component 1414 may detect an open/closed state of the device 1400, a relative positioning of components, such as a display and keypad of the device 1400, a change in position of the device 1400 or a component of the device 1400, the presence or absence of user contact with the device 1400, an orientation or acceleration/deceleration of the device 1400, and a change in temperature of the device 1400. The sensor assembly 1414 may include a proximity sensor configured to detect the presence of a nearby object in the absence of any physical contact. The sensor assembly 1414 may also include a photosensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 1414 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1416 is configured to facilitate wired or wireless communication between the device 1400 and other devices. The device 1400 may access a wireless network based on a communication standard, such as WiFi, an operator network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 1416 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 1416 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the device 1400 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as the memory 1404 that includes instructions executable by the processor 1420 of the device 1400 to perform the above-described method. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, there is also provided a computer program product comprising a computer program which, when executed by a processor, implements the live broadcast interaction method in any of the above embodiments.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A live interaction method is characterized by comprising the following steps:
displaying special effect images of a plurality of first objects in a live room object list; the special effect image is obtained by carrying out special effect processing on the image data of the first object;
adjusting the voting value of the first object in the live broadcast object list according to an evaluation instruction of the special effect image of the first object;
and displaying the live broadcasting room object list after the voting value is adjusted.
2. The live interaction method of claim 1, wherein the image data is determined by any one of:
responding to a shooting instruction in a live broadcast interface of a first object client, and acquiring image data obtained by shooting the first object;
acquiring image data of the first object from a gallery, wherein the gallery is an image of the first object stored on a client or a server;
and acquiring the head portrait data of the first object.
3. The live interaction method of claim 2, wherein before the displaying the special effect images of the first objects in the live room object list, the method further comprises:
responding to an uploading instruction of each special effect image, and determining an initial display position of each special effect image in the live broadcast room object list according to the triggering time of the uploading instruction;
the displaying of the special effect image of the plurality of first objects in the live broadcast object list includes:
and displaying each special effect image in the live broadcast room object list according to the initial display position.
4. The live interaction method of claim 3, wherein prior to the presenting each of the special effects images in the live-room object list, the method further comprises:
acquiring the number of special effect images in the live broadcast room object list;
and if the number of the special effect images reaches a display upper limit threshold value, displaying a first prompt message in a live broadcast interface of the first object client, wherein the first prompt message comprises information for prompting that the number of the special effect images reaches the upper limit.
5. A live interaction method is characterized by comprising the following steps:
displaying special effect images of a plurality of first objects in a live room object list; the special effect image is obtained by carrying out special effect processing on the image data of the first object;
if an evaluation instruction for a special effect image of a first object is received, adjusting a voting value of the first object in the live broadcast object list;
displaying the live broadcasting room object list after the voting value is adjusted;
if the voting deadline is reached, displaying the special effect image and the second prompt message of the first target object; the first target object is a first object in the live broadcast room object list whose voting value meets a preset condition, and the second prompt message comprises information for prompting the anchor to establish a mic connection with the first target object.
6. A live interaction device, comprising:
the display device comprises a special effect image display module, a display module and a display module, wherein the special effect image display module is configured to execute display of special effect images of a plurality of first objects in an object list of a live broadcast room; the special effect image is obtained by carrying out special effect processing on the image data of the first object;
the vote value adjusting module is configured to execute adjustment of the vote value of the first object in the live broadcast object list according to an evaluation instruction of the special effect image of the first object;
and the object list display module is configured to execute the display of the live broadcast room object list after the vote value is adjusted.
7. A live interaction device, comprising:
the display device comprises a special effect image display module, a display module and a display module, wherein the special effect image display module is configured to execute display of special effect images of a plurality of first objects in an object list of a live broadcast room; the special effect image is obtained by carrying out special effect processing on the image data of the first object;
the vote value adjusting module is configured to execute adjustment of a vote value of a first object in the live broadcast object list if an evaluation instruction of a special effect image of the first object is received;
the object list display module is configured to execute the display of the live broadcast room object list after the voting value is adjusted;
the image message display module is configured to display the special effect image and the second prompt message of the first target object if the voting deadline is reached; the first target object is a first object in the live broadcast room object list whose voting value meets a preset condition, and the second prompt message comprises information for prompting the anchor to establish a mic connection with the first target object.
8. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the live interaction method of any of claims 1-5.
9. A storage medium having instructions that, when executed by a processor of an electronic device, enable the electronic device to perform the live interaction method of any one of claims 1 to 5.
10. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the live interaction method of any one of claims 1 to 5.
CN202011584553.8A 2020-12-28 2020-12-28 Live broadcast interaction method and device, electronic equipment, storage medium and program product Pending CN112788354A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011584553.8A CN112788354A (en) 2020-12-28 2020-12-28 Live broadcast interaction method and device, electronic equipment, storage medium and program product
PCT/CN2021/134091 WO2022142944A1 (en) 2020-12-28 2021-11-29 Live-streaming interaction method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011584553.8A CN112788354A (en) 2020-12-28 2020-12-28 Live broadcast interaction method and device, electronic equipment, storage medium and program product

Publications (1)

Publication Number Publication Date
CN112788354A true CN112788354A (en) 2021-05-11

Family

ID=75753043

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011584553.8A Pending CN112788354A (en) 2020-12-28 2020-12-28 Live broadcast interaction method and device, electronic equipment, storage medium and program product

Country Status (2)

Country Link
CN (1) CN112788354A (en)
WO (1) WO2022142944A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115379113A (en) * 2022-07-18 2022-11-22 北京达佳互联信息技术有限公司 Shooting processing method, device, equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107579959A (en) * 2017-08-22 2018-01-12 广州华多网络科技有限公司 Voting transmission and reception method, apparatus and related devices for client and server
CN107645682A (en) * 2017-10-20 2018-01-30 广州酷狗计算机科技有限公司 Live streaming method and system
CN109257616A (en) * 2018-09-30 2019-01-22 武汉斗鱼网络科技有限公司 Voice co-hosting interaction method, apparatus, device and medium
CN110392274A (en) * 2019-07-17 2019-10-29 咪咕视讯科技有限公司 Information processing method, device, client, system and storage medium
US20200099960A1 (en) * 2016-12-19 2020-03-26 Guangzhou Huya Information Technology Co., Ltd. Video Stream Based Live Stream Interaction Method And Corresponding Device
CN111309428A (en) * 2020-02-26 2020-06-19 网易(杭州)网络有限公司 Information display method, information display device, electronic apparatus, and storage medium
CN112087641A (en) * 2020-09-03 2020-12-15 广州华多网络科技有限公司 Video communication cooperative control, request and feedback method and apparatus, device and medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106534954B (en) * 2016-12-19 2019-11-22 广州虎牙信息科技有限公司 Information interaction method, apparatus and terminal device based on live video stream
CN106791981A (en) * 2016-12-19 2017-05-31 广州虎牙信息科技有限公司 Live video stream transmission control method, apparatus and terminal device
CN107172477B (en) * 2017-06-16 2019-12-31 广州市网星信息技术有限公司 Voting method and apparatus
CN108366287A (en) * 2018-01-30 2018-08-03 广州虎牙信息科技有限公司 Method for displaying action messages in a live broadcast room, computer storage medium, and terminal
CN112788354A (en) * 2020-12-28 2021-05-11 北京达佳互联信息技术有限公司 Live broadcast interaction method and device, electronic equipment, storage medium and program product

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022048152A1 (en) * 2020-09-03 2022-03-10 广州华多网络科技有限公司 Video communication cooperative control, request and feedback method and apparatus, device, and medium
WO2022142944A1 (en) * 2020-12-28 2022-07-07 北京达佳互联信息技术有限公司 Live-streaming interaction method and apparatus
CN113518240A (en) * 2021-07-20 2021-10-19 北京达佳互联信息技术有限公司 Live broadcast interaction method, virtual resource configuration method, virtual resource processing method and device
CN113518240B (en) * 2021-07-20 2023-08-08 北京达佳互联信息技术有限公司 Live interaction, virtual resource configuration and virtual resource processing method and device
CN117579853A (en) * 2023-11-16 2024-02-20 书行科技(北京)有限公司 Information prompting method and device for live broadcasting room, electronic equipment and readable storage medium

Also Published As

Publication number Publication date
WO2022142944A1 (en) 2022-07-07

Similar Documents

Publication Publication Date Title
CN106791893B (en) Video live broadcasting method and device
WO2020057327A1 (en) Information list display method and apparatus, and storage medium
EP3125530B1 (en) Video recording method and device
CN112788354A (en) Live broadcast interaction method and device, electronic equipment, storage medium and program product
CN106506448B (en) Live broadcast display method and device and terminal
CN112905074B (en) Interactive interface display method, interactive interface generation method and device and electronic equipment
EP3264774B1 Live broadcasting method and device
US20210258619A1 (en) Method for processing live streaming clips and apparatus, electronic device and computer storage medium
EP3258414B1 (en) Prompting method and apparatus for photographing
CN109151565B (en) Method and device for playing voice, electronic equipment and storage medium
CN111343476A (en) Video sharing method and device, electronic equipment and storage medium
CN110677734B (en) Video synthesis method and device, electronic equipment and storage medium
CN112261481B (en) Interactive video creating method, device and equipment and readable storage medium
US20220078221A1 (en) Interactive method and apparatus for multimedia service
WO2020093798A1 (en) Method and apparatus for displaying target image, terminal, and storage medium
CN107396166A Method and device for displaying video during live streaming
CN111866531A (en) Live video processing method and device, electronic equipment and storage medium
CN107566878A Method and device for displaying pictures during live streaming
CN114268823A (en) Video playing method and device, electronic equipment and storage medium
CN114430494B (en) Interface display method, device, equipment and storage medium
CN107247794B (en) Topic guiding method in live broadcast, live broadcast device and terminal equipment
CN107105311B (en) Live broadcasting method and device
CN106954093B (en) Panoramic video processing method, device and system
CN112669233A (en) Image processing method, image processing apparatus, electronic device, storage medium, and program product
CN105635573B Camera viewing angle adjustment method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20210511)