CN106878820B - Live broadcast interaction method and device - Google Patents

Live broadcast interaction method and device

Info

Publication number
CN106878820B
CN106878820B (application CN201710065807.7A)
Authority
CN
China
Prior art keywords: interactive, interaction, information, instruction, audience terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710065807.7A
Other languages
Chinese (zh)
Other versions
CN106878820A (en)
Inventor
韩尚佑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Publication of CN106878820A
Application granted
Publication of CN106878820B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/475 End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788 Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The embodiments of the disclosure relate to a live broadcast interaction method and device, belonging to the field of network live broadcasting. The method comprises the following steps: obtaining model data corresponding to an anchor image model; sending the model data to each audience terminal, wherein the audience terminals are used for displaying the anchor image model in the live broadcast picture according to the model data; generating an interaction instruction according to interaction information sent by each audience terminal; and sending the interaction instruction to the audience terminals, wherein the audience terminals are used for controlling the anchor image model to interact according to the interaction instruction. The method and device solve the problem that, when the anchor fails to interact with the audience in time according to the bullet-screen content during a live broadcast, the interaction effect suffers and audiences are lost; by interacting with the audience in real time through the anchor image model, interaction efficiency and interaction quality are improved while interaction timeliness is guaranteed, and audience loss is avoided.

Description

Live broadcast interaction method and device
The present application claims priority to Chinese patent application No. 201611126239.9, entitled "Live broadcast interaction method and apparatus", filed with the Chinese Patent Office on December 9, 2016, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of network live broadcasting, and in particular, to a live broadcasting interaction method and apparatus.
Background
With the continuous development of internet technology, live webcasting has gradually emerged and attracts more and more users.
While watching a live broadcast, audiences can send edited text information to the live broadcast server through their terminals; after the live broadcast server receives the text information, it is displayed in the live broadcast picture in the form of a bullet screen, and the anchor can interact with the audience in real time according to the bullet-screen content. If the anchor fails to interact with the audience in time according to the bullet-screen content during the live broadcast, the interaction effect suffers and audiences are lost.
Disclosure of Invention
The embodiment of the disclosure provides a live broadcast interaction method and a live broadcast interaction device, and the technical scheme is as follows:
according to a first aspect of the embodiments of the present disclosure, a live broadcast interaction method is provided, where the method includes:
obtaining model data corresponding to the anchor image model;
sending model data to each audience terminal, wherein the audience terminals are used for displaying the anchor image model in the live broadcast picture according to the model data;
generating an interaction instruction according to interaction information sent by each audience terminal;
and sending an interaction instruction to the audience terminal, wherein the audience terminal is used for controlling the anchor image model to interact according to the interaction instruction.
Optionally, when the interactive information is text interactive information, an interactive instruction is generated according to the interactive information sent by each audience terminal, including:
performing semantic analysis on the character interaction information sent by each audience terminal;
determining target semantics according to semantic analysis results corresponding to the text interactive information;
and determining an interaction instruction according to the target semantics.
Optionally, when the interactive information is virtual article presentation information, generating an interaction instruction according to the interactive information sent by each audience terminal includes:
acquiring the virtual article type and the presentation quantity contained in the virtual article presentation information;
calculating the virtual article value according to the virtual article type and the presentation quantity;
and determining the interaction instruction according to the virtual article value.
Optionally, when acquiring the interactive information of at least two information types, generating an interactive instruction according to the interactive information sent by each viewer terminal includes:
determining information weights corresponding to the interactive information of various information types, wherein the information weights are used for indicating the importance degree of the interactive information;
determining the interaction information with the largest information weight as target interaction information;
and generating an interaction instruction according to the target interaction information.
Optionally, determining information weights corresponding to the interactive information of the various information types includes:
when the interactive information is character interactive information, accumulating the weight values corresponding to the character interactive information to obtain information weight;
and when the interactive information is the virtual article presenting information, accumulating the virtual article values corresponding to the virtual article presenting information to obtain the information weight.
Optionally, the interactive information includes at least one of text interactive information and virtual article presentation information;
sending an interaction instruction to the audience terminal, comprising:
when the interaction instruction is generated from the text interactive information, sending the interaction instruction to each audience terminal;
or,
when the interaction instruction is generated from the virtual article presentation information, determining a target audience terminal presenting the virtual article according to the virtual article presentation information, and sending the interaction instruction to the target audience terminal.
Optionally, the interaction instruction includes at least one of interaction action data, interaction voice data, and interaction expression data;
the method further comprises the following steps:
acquiring an interaction action corresponding to the interaction instruction; adding the interactive action data corresponding to the interactive action into the interactive instruction, wherein the audience terminal is used for controlling the anchor image model to execute the interactive action according to the interactive action data;
and/or,
acquiring interactive voice corresponding to the interactive instruction, wherein the interactive voice is synthesized according to preset voice parameters and interactive lines corresponding to the interactive instruction, and the voice parameters comprise at least one of pitch, timbre and speech rate; adding interactive voice data corresponding to the interactive voice into the interactive instruction, wherein the audience terminal is used for playing the interactive voice according to the voice data;
and/or,
acquiring an interactive expression corresponding to the interactive instruction; and adding the interactive expression data corresponding to the interactive expressions into the interactive instruction, wherein the audience terminal is used for controlling the anchor image model to display the interactive expressions according to the interactive expression data.
According to a second aspect of the embodiments of the present disclosure, there is provided a live interactive apparatus, including:
the acquisition module is configured to acquire model data corresponding to the anchor image model;
the first sending module is configured to send model data to each audience terminal, and the audience terminals are used for displaying the anchor image model in the live broadcast picture according to the model data;
the generating module is configured to generate interaction instructions according to the interaction information sent by each audience terminal;
and the second sending module is configured to send an interaction instruction to the audience terminal, and the audience terminal is used for controlling the anchor image model to interact according to the interaction instruction.
Optionally, when the interactive information is text interactive information, the generating module includes:
the analysis submodule is configured to perform semantic analysis on the character interaction information sent by each audience terminal;
the first determining submodule is configured to determine target semantics according to semantic analysis results corresponding to the text interactive information;
a second determining submodule configured to determine the interaction instruction according to the target semantics.
Optionally, when the interactive information is virtual article presentation information, the generating module includes:
an acquisition sub-module configured to acquire the virtual article type and the presentation quantity contained in the virtual article presentation information;
a calculation sub-module configured to calculate the virtual article value according to the virtual article type and the presentation quantity;
and the third determining submodule is configured to determine the interaction instruction according to the virtual article value.
Optionally, when the interactive information of at least two information types is acquired, the generating module includes:
the fourth determining submodule is configured to determine information weights corresponding to the interactive information of various information types, and the information weights are used for indicating the importance degree of the interactive information;
the fifth determining submodule is configured to determine the interaction information with the largest information weight as the target interaction information;
and the generation sub-module is configured to generate an interaction instruction according to the target interaction information.
Optionally, the fourth determining submodule is configured to:
when the interactive information is text interactive information, accumulate the weight values corresponding to the text interactive information to obtain the information weight;
and when the interactive information is virtual article presentation information, accumulate the virtual article values corresponding to the virtual article presentation information to obtain a total virtual article value, and determine the information weight according to the total virtual article value.
Optionally, the interactive information includes at least one of text interactive information and virtual article presentation information;
a second sending module comprising:
the first sending submodule is configured to send the interaction instruction to each audience terminal when the interaction instruction is generated by the character interaction information;
or,
a second transmitting sub-module configured to determine a target audience terminal presenting the virtual item according to the virtual item presentation information when the interaction instruction is generated from the virtual item presentation information; and sending an interaction instruction to the target audience terminal.
Optionally, the interaction instruction includes at least one of interaction action data, interaction voice data, and interaction expression data;
the device, still include:
the first adding module is configured to obtain an interaction action corresponding to the interaction instruction; adding the interactive action data corresponding to the interactive action into the interactive instruction, wherein the audience terminal is used for controlling the anchor image model to execute the interactive action according to the interactive action data;
and/or,
the second adding module is configured to acquire interactive voice corresponding to the interactive instruction, wherein the interactive voice is synthesized according to preset voice parameters and interactive lines corresponding to the interactive instruction, and the voice parameters comprise at least one of pitch, timbre and speech rate; interactive voice data corresponding to the interactive voice is added into the interactive instruction, and the audience terminal is used for playing the interactive voice according to the voice data;
and/or,
the third adding module is configured to obtain an interactive expression corresponding to the interactive instruction; and adding the interactive expression data corresponding to the interactive expressions into the interactive instruction, wherein the audience terminal is used for controlling the anchor image model to display the interactive expressions according to the interactive expression data.
According to a third aspect of the embodiments of the present disclosure, there is provided a live interactive apparatus, including: a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
obtaining model data corresponding to the anchor image model;
sending model data to each audience terminal, wherein the audience terminals are used for displaying the anchor image model in the live broadcast picture according to the model data;
generating an interaction instruction according to interaction information sent by each audience terminal;
and sending an interaction instruction to the audience terminal, wherein the audience terminal is used for controlling the anchor image model to interact according to the interaction instruction.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
providing each audience terminal with model data corresponding to the anchor image model, and sending to the audience terminals an interaction instruction generated from the interaction information, so that the audience terminals can control the displayed anchor image model to interact according to the interaction instruction; this solves the problem that, when the anchor fails to interact with the audience in time according to the bullet-screen content during the live broadcast, the interaction effect suffers and audiences are lost; interacting with the audience in real time through the anchor image model improves interaction efficiency and interaction quality while guaranteeing interaction timeliness, and avoids audience loss.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a schematic diagram of an implementation environment shown in accordance with an exemplary embodiment of the present disclosure;
fig. 2 is a method flow diagram illustrating a live interaction method according to an exemplary embodiment of the present disclosure;
fig. 3A is a method flow diagram illustrating a live interaction method according to another exemplary embodiment of the present disclosure;
fig. 3B and fig. 3C are schematic diagrams of an implementation of the live interaction method shown in fig. 3A;
fig. 3D is a method flow diagram illustrating a live interaction method according to another exemplary embodiment of the present disclosure;
fig. 3E is a schematic diagram of an implementation of the live interaction method shown in fig. 3D;
fig. 3F is a flow diagram of a process by which a live server determines target interaction information;
fig. 4 is a block diagram illustrating a live interaction device according to an exemplary embodiment of the present disclosure;
fig. 5 is a block diagram illustrating a live interaction device, according to an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
"module" as referred to herein refers to a program or instructions stored in memory that is capable of performing certain functions; reference herein to "a plurality" means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
FIG. 1 is a schematic diagram of an implementation environment shown in accordance with an exemplary embodiment of the present disclosure. The implementation environment includes a live terminal 110, a live server 120, and at least one viewer terminal 130.
The live broadcast terminal 110 is an electronic device equipped with a live broadcast client and having audio and video acquisition functions; the electronic device may be a smart phone, a tablet computer, a laptop computer, a desktop computer, or the like. Fig. 1 schematically illustrates the live broadcast terminal 110 as a smart phone. When the anchor uses the live broadcast terminal 110 to perform a network live broadcast, the live broadcast client sends the acquired audio and video data to the live broadcast server 120.
The live terminal 110 and the live server 120 are connected through a wired or wireless network.
The live server 120 is a backend server for live clients and live viewing clients. The live broadcast server 120 may be a single server, a server cluster composed of multiple servers, or a cloud computing center. Fig. 1 schematically illustrates the live broadcast server 120 as a server cluster.
Optionally, the server cluster includes a streaming server, a live broadcast room management server, and the like, and this embodiment does not limit the type and architecture of the server in the server cluster.
After receiving the audio and video data sent by the live broadcast terminal 110, the live broadcast server 120 performs audio and video streaming to each viewer terminal 130 watching the anchor live broadcast.
In the embodiments of the present disclosure, the live broadcast server 120 is further configured to send model data corresponding to the anchor avatar model to each viewer terminal 130, and an interaction instruction for controlling the anchor avatar model to perform an interaction operation.
The live interaction method provided by the embodiments of the present disclosure is applied to the live server 120.
The live server 120 is connected to each viewer terminal 130 via a wired or wireless network.
The viewer terminal 130 is an electronic device installed with a live viewing client, which may be a smart phone, a tablet computer, a laptop or desktop computer, or the like. After receiving the audio and video data stream pushed by the live broadcast server 120, the audience terminal 130 decodes and plays the audio and video data stream through the live viewing client, thereby realizing live broadcast watching over the internet.
In the embodiments of the present disclosure, after receiving the model data sent by the live broadcast server 120, the audience terminal 130 renders and displays the anchor image model according to the model data, and meanwhile, in the process of watching the live broadcast, the audience terminal 130 controls the anchor image model to interact according to the interaction instruction sent by the live broadcast server 120.
Fig. 2 is a method flow diagram illustrating a live interaction method according to an exemplary embodiment of the present disclosure. The present embodiment takes the live broadcast interaction method applied to the live broadcast server 120 shown in fig. 1 as an example for explanation. The method may include the following steps.
In step 201, model data corresponding to the anchor avatar model is obtained.
Optionally, the anchor image model is constructed by the live broadcast server according to the anchor image, or the anchor image model is constructed by the live broadcast terminal according to the anchor image and uploaded to the live broadcast server.
Optionally, the anchor image model is constructed by adopting a 3D modeling technique according to the collected anchor image, and accordingly, the model data is 3D model data; the anchor avatar model is capable of simulating the anchor's expressions and actions through model animation techniques, wherein the anchor avatar includes anchor appearance, clothing, and the like.
In step 202, model data is transmitted to each viewer terminal, and the viewer terminal is configured to display a anchor image model in a live view according to the model data.
Optionally, when it is detected that the live terminal starts live broadcasting, the live server sends the model data to the audience terminal watching the anchor live broadcasting.
In step 203, an interaction instruction is generated according to the interaction information sent by each viewer terminal.
When watching the live broadcast through the audience terminal, audiences can interact by sending text (namely, bullet screens) or presenting virtual articles to the anchor through the audience terminal. After receiving the corresponding text interactive information and/or virtual article presentation information, the live broadcast server generates a corresponding interaction instruction, and the interaction instruction is used for instructing the anchor image model to interact.
Optionally, the virtual item presenting information includes, but is not limited to, a type of the virtual item, a presenting quantity, and a virtual item presenter identifier, and the presented virtual item may be a virtual interactive item such as a virtual flower, a virtual gold coin, a virtual gold ingot, and the like.
In step 204, an interaction instruction is sent to the audience terminal, and the audience terminal is configured to control the anchor image model to interact according to the interaction instruction.
After generating the interaction instruction, the live broadcast server sends the interaction instruction to each audience terminal to instruct the audience terminals to control the anchor image model to perform the corresponding interaction. The interaction instruction sent by the live broadcast server contains at least one of interactive action data, interactive expression data, and interactive voice data; correspondingly, the audience terminal controls the anchor image model to perform the interactive action, display the interactive expression, or play the interactive voice according to the interaction instruction.
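As a concrete illustration of this message structure, the following is a minimal Python sketch of an interaction instruction carrying the three optional payloads; the field names and the send_to_viewers helper are illustrative assumptions, not part of the disclosure.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class InteractionInstruction:
        # At least one of the three payloads is present (see step 204).
        action_data: Optional[bytes] = None      # drives the avatar's body animation
        expression_data: Optional[bytes] = None  # drives the avatar's facial expression
        voice_data: Optional[bytes] = None       # synthesized audio for the terminal to play

    def send_to_viewers(instruction: InteractionInstruction, viewer_terminals: list) -> None:
        """Broadcast one instruction to every audience terminal in the live room."""
        payloads = [instruction.action_data, instruction.expression_data, instruction.voice_data]
        assert any(payloads), "an interaction instruction must carry at least one payload"
        for terminal in viewer_terminals:
            terminal.send(instruction)  # hypothetical transport call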
In summary, in the live broadcast interaction method provided by this embodiment, model data corresponding to the anchor image model is provided to each audience terminal, and an interaction instruction generated from the interaction information is sent to the audience terminals, so that the audience terminals can control the displayed anchor image model to interact according to the interaction instruction; this solves the problem that, when the anchor fails to interact with the audience in time according to the bullet-screen content during the live broadcast, the interaction effect suffers and audiences are lost; interacting with the audience in real time through the anchor image model improves interaction efficiency and interaction quality while guaranteeing interaction timeliness, and avoids audience loss.
In order to ensure timely interaction with audiences, during the live broadcast the live broadcast server can analyze the semantics expressed by the bullet screens in the live broadcast room, send a corresponding interaction instruction to the audience terminals according to the semantics, and instruct the audience terminals to control the anchor image model to interact according to the interaction instruction. This is described below through exemplary embodiments.
Fig. 3A is a method flow diagram illustrating a live interaction method according to another exemplary embodiment of the present disclosure. The present embodiment takes the live broadcast interaction method applied to the live broadcast server 120 shown in fig. 1 as an example for explanation. The method may include the following steps.
In step 301, model data corresponding to the anchor avatar model is obtained.
Optionally, a plurality of anchor image model templates are pre-stored in the live broadcast client installed on the live broadcast terminal. After collecting an anchor image, the live broadcast terminal selects the anchor image model template with the highest matching degree to the anchor image, fine-tunes that template according to the anchor image to generate the anchor image model, and sends the anchor image model to the live broadcast server; the live broadcast server stores the anchor identifier in association with the corresponding anchor image model.
It should be noted that, in other possible embodiments, the anchor may also create an anchor avatar model (including but not limited to appearance features, clothing, and a live environment) through a manual creation function provided by the live client, and the specific implementation manner for constructing the anchor avatar model is not limited by the embodiments of the present disclosure.
When the anchor starts the live broadcast, or the number of audiences in the live broadcast room reaches a preset number threshold, or the sending rate of bullet screens in the live broadcast room reaches a preset rate threshold, the live broadcast server acquires the model data of the anchor image model corresponding to the anchor; the preset number threshold and the preset rate threshold can be set by the anchor. For example, the live broadcast server acquires the model data of the anchor image model when the number of audiences in the live broadcast room reaches 1000, or when the sending rate of bullet screens in the live broadcast room reaches 10 messages per second.
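A minimal sketch of these trigger conditions, assuming illustrative threshold values and room-statistics accessors that the disclosure does not specify:

    VIEWER_COUNT_THRESHOLD = 1000   # example value; settable by the anchor
    BARRAGE_RATE_THRESHOLD = 10.0   # bullet screens per second; example value

    def should_load_avatar_model(room) -> bool:
        """Decide when the live server fetches the anchor image model data (step 301)."""
        return (
            room.just_started_broadcast                       # anchor starts the live broadcast
            or room.viewer_count >= VIEWER_COUNT_THRESHOLD    # audience threshold reached
            or room.barrage_rate >= BARRAGE_RATE_THRESHOLD    # bullet-screen rate threshold reached
        )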
In step 302, model data is transmitted to each viewer terminal, and the viewer terminal is configured to display a anchor image model in a live view according to the model data.
After model data of the anchor image model are obtained, the live broadcast server sends the model data to each audience terminal watching the anchor.
After receiving the model data forwarded by the live broadcast server, the audience terminal renders the model data through the live broadcast watching client and displays the rendered anchor image model in the live broadcast picture.
Optionally, when the anchor image model is obtained by adjusting a preset anchor image model template, and the same anchor image model templates are preset in the live viewing client installed on the audience terminal, the model data sent by the live broadcast server includes an anchor image model template identifier and the corresponding model adjustment data (whose data volume is far smaller than that of the complete model data); correspondingly, the audience terminal only needs to adjust the identified anchor image model template according to the model adjustment data to obtain the complete model data of the anchor image model.
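To make the bandwidth saving concrete, here is a minimal sketch, with assumed parameter names, of sending a template identifier plus adjustment data instead of the full model:

    from dataclasses import dataclass

    @dataclass
    class ModelDelta:
        template_id: int               # which pre-stored template to start from
        adjustments: dict[str, float]  # assumed parameters, e.g. {"face_width": 0.93}

    def rebuild_model(templates: dict[int, dict[str, float]], delta: ModelDelta) -> dict[str, float]:
        """Audience side: apply the small adjustment payload to the local template copy."""
        model = dict(templates[delta.template_id])  # copy the pre-stored template
        for parameter, value in delta.adjustments.items():
            model[parameter] = value                # overwrite only the adjusted parameters
        return model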
Optionally, in order not to affect the normal display of the live broadcast picture, the anchor image model is displayed at an edge position of the live broadcast picture, or the viewer may customize the display position of the anchor image model.
For example, as shown in fig. 3B, the viewer terminal analyzes the received model data and displays the anchor character model 32 in the lower right corner of the live view screen 31.
In step 303, semantic analysis is performed on the interactive text information sent by each viewer terminal.
In the live broadcast process, the live broadcast server acquires, at preset time intervals, the text interactive information (namely, bullet screens) sent by audiences in the live broadcast room, and performs semantic analysis on each piece of text interactive information to determine the semantics it expresses. The live broadcast server may perform the semantic analysis by, for example, extracting keywords; this embodiment does not limit the specific manner of semantic analysis.
In step 304, the target semantics are determined according to the semantic analysis result corresponding to each piece of text interaction information.
Because the amount of text interactive information in a live broadcast room is large, and the semantics expressed by different pieces of text interactive information may differ greatly, in order to make the interactive operation performed by the anchor image model meet the interaction needs of most audiences, the live broadcast server further determines the target semantics of the text interactive information in the live broadcast room according to the semantic analysis result of each piece of text interactive information.
In a possible implementation manner, the live broadcast server classifies the text interaction information according to the semantics of each piece of text interaction information, and determines the semantics corresponding to the category with the largest number of text interaction information as the target semantics of the text interaction information in the live broadcast room.
For example, the live broadcast server obtains 100 pieces of text interactive information, including 85 pieces whose semantics are "praise", 10 pieces whose semantics are "encouragement", and 5 pieces whose semantics are "criticism"; the live broadcast server therefore determines "praise" as the target semantics expressed by the text interactive information in the live broadcast room.
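A minimal sketch of this majority-vote step, assuming an external analyze_semantics classifier whose implementation the disclosure leaves open:

    from collections import Counter

    def determine_target_semantics(messages: list[str], analyze_semantics) -> str:
        """Classify each bullet-screen message, then pick the most common semantic category."""
        counts = Counter(analyze_semantics(text) for text in messages)
        target, _ = counts.most_common(1)[0]
        return target

    # With 85 "praise", 10 "encouragement", and 5 "criticism" messages,
    # determine_target_semantics returns "praise", matching the example above.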
Optionally, because the live broadcast server needs to provide semantic analysis services for a large number of live broadcast rooms (corresponding to live broadcast terminals) at the same time, in order to reduce the processing pressure on the live broadcast server, when the amount of text interactive information in a live broadcast room is smaller than a threshold, the live broadcast server may instead instruct the live broadcast terminal to perform the semantic analysis and feed the semantic analysis result back to the live broadcast server; this embodiment does not limit this.
In step 305, an interaction instruction is determined based on the target semantics.
In a possible implementation manner, after the target semantics are determined, the live broadcast server looks up the interaction instruction corresponding to the target semantics in a preset mapping relationship. Illustratively, the mapping relationship between target semantics and interaction instructions is shown in Table 1.
Table 1
Target semantics    Interaction instruction
Praise              First interaction instruction
Encouragement       Second interaction instruction
Criticism           Third interaction instruction
The mapping relationship between target semantics and interaction instructions can be set by the anchor or preset by the live broadcast server, and different interaction instructions instruct the anchor character model to perform different interactive operations. For example, the first interaction instruction instructs the anchor character model to open both arms and then bow, the second interaction instruction instructs it to clench the right fist in a cheering gesture, and the third interaction instruction instructs it to kneel down.
In addition to determining the interaction instruction corresponding to the target semantics according to the mapping relationship, the live broadcast server may analyze the target semantics through an intelligent recognition technique such as an artificial neural network (ANN) or a convolutional neural network (CNN) to obtain the audience interaction requirement corresponding to the target semantics, and generate a corresponding interaction instruction accordingly.
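The table-driven variant of steps 304 and 305 reduces to a dictionary lookup; a minimal sketch, using instruction identifiers as stand-ins for the actual instruction payloads:

    # Mapping from Table 1; settable by the anchor or preset on the server.
    SEMANTICS_TO_INSTRUCTION = {
        "praise": "first_interaction_instruction",
        "encouragement": "second_interaction_instruction",
        "criticism": "third_interaction_instruction",
    }

    def instruction_for(target_semantics: str) -> str:
        """Step 305: map the target semantics to an interaction instruction."""
        return SEMANTICS_TO_INSTRUCTION[target_semantics]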
In order to enrich the interaction modes of the anchor image model, after determining an interaction instruction, the live broadcast server adds interactive data such as interactive action data, interactive voice data, or interactive expression data to the interaction instruction through the following steps 306 to 308, so that the audience terminal controls the anchor image model to perform different types of interactive operations according to the interactive data in the interaction instruction.
In step 306, acquiring an interaction action corresponding to the interaction instruction; and adding the interactive action data corresponding to the interactive action into the interactive instruction, wherein the audience terminal is used for controlling the anchor image model to execute the interactive action according to the interactive action data.
In a possible implementation manner, the live broadcast server is preset with a correspondence between interaction instructions and interactive actions. After determining an interaction instruction, the live broadcast server acquires the interactive action corresponding to the interaction instruction and adds the corresponding interactive action data to the interaction instruction, so that the audience terminal controls the anchor image model to perform the interactive action according to the interactive action data.
In other possible embodiments, the interactive action may also be stored in the live terminal, and the live terminal reports the interactive action when receiving the interactive action acquisition request sent by the live server, which is not limited in this embodiment.
In step 307, acquiring interactive voice corresponding to the interaction instruction, wherein the interactive voice is synthesized according to preset voice parameters and the interactive lines corresponding to the interaction instruction, and the voice parameters include at least one of pitch, timbre, and speech rate; and adding interactive voice data corresponding to the interactive voice into the interaction instruction, wherein the audience terminal plays the interactive voice according to the voice data.
The interactive voice is synthesized by the live broadcast server according to preset voice parameters and the interactive lines corresponding to the interaction instruction, or synthesized by the live broadcast terminal according to the voice parameters and the interactive lines and then fed back to the live broadcast server.
The correspondence between interaction instructions and interactive lines is stored in the live broadcast server in advance; the correspondence may be set by default or by the anchor. Illustratively, the correspondence between interaction instructions and interactive lines is shown in Table 2.
Table 2
Interaction instruction           Interactive line
First interaction instruction     Thank you all for the praise
Second interaction instruction    I will keep working hard
Third interaction instruction     I know I was wrong
After determining the interaction instruction, the live broadcast server looks up the interactive line corresponding to the interaction instruction in the correspondence shown in Table 2. For example, according to the first interaction instruction, the live broadcast server acquires the interactive line "Thank you all for the praise".
After the interactive line is obtained, the live broadcast server acquires the voice parameters preset by the anchor for the interactive voice, synthesizes the interactive line through a speech synthesis technique to generate the corresponding interactive voice, and finally adds the voice data of the generated interactive voice into the interaction instruction. The voice parameters include pitch (for example, high, medium, or low, for a male or female voice), timbre (for example, rough, sharp, resonant, deep, or gentle), speech rate (for example, slow, medium, or fast), and the like; this disclosure does not limit which speech synthesis technique is used.
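A minimal sketch of this parameterized synthesis step; synthesize_speech is a hypothetical stand-in for whatever TTS engine is used, since the disclosure does not name one:

    from dataclasses import dataclass

    @dataclass
    class VoiceParams:
        pitch: str = "medium"   # e.g. "high" / "medium" / "low"
        timbre: str = "gentle"  # e.g. "rough" / "sharp" / "deep" / "gentle"
        rate: str = "medium"    # e.g. "slow" / "medium" / "fast"

    def build_voice_payload(instruction_id: str, lines: dict[str, str],
                            params: VoiceParams, synthesize_speech) -> bytes:
        """Look up the interactive line for an instruction and render it to audio."""
        text = lines[instruction_id]            # e.g. "Thank you all for the praise"
        return synthesize_speech(text, params)  # hypothetical TTS call returning audio bytes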
It should be noted that, in other possible embodiments, the live broadcast server may instead pre-store interactive voice recorded by the anchor and store it in association with the interaction instructions; after determining an interaction instruction according to the interactive information in the live broadcast room, the live broadcast server acquires the interactive voice corresponding to the interaction instruction and adds the interactive voice data to the interaction instruction. This disclosure does not limit this.
In step 308, acquiring an interactive expression corresponding to the interactive instruction; and adding the interactive expression data corresponding to the interactive expressions into the interactive instruction, wherein the audience terminal is used for controlling the anchor image model to display the interactive expressions according to the interactive expression data.
In a possible implementation manner, the live broadcast server stores a correspondence between interaction instructions and interactive expressions in advance; the correspondence may be set by default or by the anchor. Illustratively, the correspondence between interaction instructions and interactive expressions is shown in Table 3.
Table 3
Interaction instruction           Interactive expression
First interaction instruction     Beaming smile
Second interaction instruction    Earnest expression
Third interaction instruction     Dejected, with head hanging
After determining the interaction instruction, the live broadcast server looks up the interactive expression corresponding to the interaction instruction in the correspondence shown in Table 3. For example, according to the first interaction instruction, the live broadcast server acquires the interactive expression "beaming smile".
After the interactive expression is acquired, the live broadcast server adds the interactive expression data corresponding to the interactive expression into the interaction instruction, so that the audience terminal controls the anchor image model to display the corresponding interactive expression according to the interactive expression data. The interactive expression data can be obtained by modeling the anchor's real expressions; correspondingly, the interactive expression data is expression model data.
In step 309, the interaction instruction is sent to each viewer terminal.
After the interactive data (interactive action data and/or interactive expression data and/or interactive voice data) is added into the interaction instruction through the above steps, the live broadcast server sends the interaction instruction to each audience terminal. Correspondingly, the audience terminal controls the anchor image model in the live broadcast picture to perform the corresponding interactive operation according to the interactive data contained in the received interaction instruction, achieving the effect of the anchor image model interacting with the audience according to the bullet-screen content. The interactive operation performed by the anchor image model includes at least one of performing an interactive action, displaying an interactive expression, and playing interactive voice.
Optionally, the live broadcast server may further send corresponding interactive feedback text information to each audience terminal and instruct the audience terminal to display the interactive feedback text information around the anchor image model.
As shown in fig. 3C, after receiving the interaction instruction sent by the live broadcast server, the audience terminal controls the anchor image model 32 to execute the corresponding interaction action according to the interaction action data in the interaction instruction, and displays the interaction feedback text information 33 around the anchor image model 32.
Through the above live broadcast interaction method, when it is inconvenient for the anchor to interact with the audience, or the anchor cannot respond in time because the number of bullet screens is too large, the anchor image model can automatically interact with the audience in real time during the live broadcast, achieving the effect of a virtual anchor and thus a better interaction effect.
In summary, in the live broadcast interaction method provided by this embodiment, model data corresponding to the anchor image model is provided to each audience terminal, and an interaction instruction generated from the interaction information is sent to the audience terminals, so that the audience terminals can control the displayed anchor image model to interact according to the interaction instruction; this solves the problem that, when the anchor fails to interact with the audience in time according to the bullet-screen content during the live broadcast, the interaction effect suffers and audiences are lost; interacting with the audience in real time through the anchor image model improves interaction efficiency and interaction quality while guaranteeing interaction timeliness, and avoids audience loss.
In this embodiment, the live broadcast server does not need to transmit a video stream of the anchor image to each viewer terminal; instead, it sends the model data of the anchor image model to each viewer terminal, and the viewer terminals render the anchor image according to the model data, thereby reducing the traffic generated during the live broadcast.
In the embodiment, the live broadcast server performs semantic analysis on the character interaction information in the live broadcast room, determines the target semantics expressed by the character interaction information in the live broadcast room, further sends interaction instructions corresponding to the target semantics to each audience terminal, instructs the audience terminals to control the anchor image model in the live broadcast picture to execute corresponding interaction operations, and truly simulates the anchor interaction scene, so that the accuracy and the authenticity of interaction are improved.
In this embodiment, the live broadcast server adds the interactive action data, the interactive expression data and the interactive voice data to the interactive instruction, so that the audience terminal can control the anchor image model to execute the corresponding interactive action, display the interactive expression or play the interactive voice according to the interactive instruction, thereby enriching the interactive form of the anchor image model and more truly simulating the anchor interactive scene.
In other possible implementation manners, the live broadcast server can also control the anchor image model to interact according to the virtual article presentation behavior of the audience in the live broadcast room. On the basis of the live interactive method shown in fig. 3A, as shown in fig. 3D, the above steps 303 to 305 may be replaced by the following steps.
In step 310, the virtual item type and the presentation amount contained in the virtual item presentation information are acquired.
In the live broadcast process, audiences can present virtual articles to the anchor through their audience terminals; correspondingly, the live broadcast server acquires the virtual article presentation information generated when an audience member presents a virtual article, and the virtual article presentation information includes the virtual article type, the presentation quantity, and the virtual article presenter identifier.
For example, the virtual article presentation information acquired by the live broadcast server includes: silver coin (virtual article type), 1000 (presentation quantity), and queen (virtual article presenter identifier).
In step 311, the virtual item value is calculated based on the virtual item type and the gift amount.
Optionally, after obtaining the virtual article type and presentation quantity contained in the virtual article presentation information, the live broadcast server further obtains the unit price of the virtual article indicated by the virtual article type, so as to calculate the virtual article value according to the unit price and the presentation quantity. Illustratively, the correspondence between virtual articles and unit prices is shown in Table 4.
Table 4
Virtual article    Unit price
Fresh flower       1
Silver coin        5
Gold coin          10
For example, combining Table 4 with the example in step 310, the live broadcast server calculates the virtual article value as 5 × 1000 = 5000.
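A minimal sketch of steps 310 and 311, using the unit prices from Table 4; the snake_case article identifiers are assumptions:

    UNIT_PRICES = {"fresh_flower": 1, "silver_coin": 5, "gold_coin": 10}  # Table 4

    def virtual_article_value(article_type: str, quantity: int) -> int:
        """Step 311: value = unit price of the article type multiplied by the presentation quantity."""
        return UNIT_PRICES[article_type] * quantity

    # virtual_article_value("silver_coin", 1000) == 5000, matching the example above.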
In step 312, an interaction instruction is determined based on the virtual item value.
Further, the live broadcast server determines an interaction instruction corresponding to the virtual article presentation information according to the virtual article value obtained through calculation.
Optionally, the live broadcast server stores a corresponding relationship between the virtual article value and the interaction instruction, and after the virtual article value is obtained through calculation, the live broadcast server searches for the corresponding interaction instruction in the corresponding relationship.
Optionally, the live broadcast server analyzes each piece of virtual article presentation information through a predetermined data analysis and decision algorithm, so as to dynamically adjust the correspondence between virtual article values and interaction instructions. For example, when the analysis result indicates that virtual article presentation behavior in the live broadcast room is infrequent, the lower limit of the virtual article value corresponding to each interaction instruction is dynamically adjusted downward; when the analysis result indicates that virtual article presentation behavior in the live broadcast room is frequent, the lower limit of the virtual article value corresponding to each interaction instruction is dynamically adjusted upward.
After the interaction instruction corresponding to the virtual article presentation information is determined, the live broadcast server sends the interaction instruction to each audience terminal watching the live broadcast, so that the audience terminals control each anchor image model to interact according to the interaction instruction.
Optionally, the live broadcast server may also send different interaction instructions to different audience terminals according to the virtual article presentation behaviors of those terminals. As shown in fig. 3D, step 309 shown in fig. 3A may be replaced with the following steps.
In step 313, the target audience terminal for presenting the virtual item is determined based on the virtual item presentation information.
The virtual article presentation information acquired by the live broadcast server includes the virtual article presenter identifier, and the target audience terminal presenting the virtual article is determined according to the virtual article presenter identifier.
For example, as shown in fig. 3E, the live broadcast server finds that the virtual article presentation information 34 contains the virtual article presenter identifier "kaowang", and therefore determines the terminal of "kaowang" as the target audience terminal.
In step 314, an interaction instruction is sent to the target viewer terminal.
After a target audience terminal is determined, the live broadcast server sends an interaction instruction to the target audience terminal; correspondingly, after the target audience terminal receives the interaction instruction, the anchor image model is controlled to execute corresponding interaction operation according to the interaction instruction.
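A minimal sketch of steps 313 and 314, assuming a lookup table from presenter identifier to connected audience terminal:

    def send_targeted_instruction(gift_info: dict, terminals_by_user: dict, instruction) -> None:
        """Route the interaction instruction only to the audience member who presented the virtual article."""
        presenter_id = gift_info["presenter_id"]           # e.g. "kaowang"
        target_terminal = terminals_by_user[presenter_id]  # step 313: determine the target terminal
        target_terminal.send(instruction)                  # step 314: targeted delivery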
For example, as shown in fig. 3C and 3E, the interactive operations performed by the anchor image model 32 in the target audience terminal (the terminal corresponding to fig. 3E) and the non-target audience terminal (the terminal corresponding to fig. 3C) are different.
In this embodiment, the live broadcast server sends different interaction instructions to different audience terminals in a targeted manner according to the virtual article presentation behaviors of those terminals, so that the anchor image models in different audience terminals perform different interactive operations, which improves the authenticity and pertinence of the interaction scene.
In an actual usage scenario, the interactive information sent by the audience terminals may contain different information types at the same time; for example, it may include both text interactive information and virtual article presentation information. In such a scenario, in order for the interaction performed by the anchor image model to meet the audience's needs and thereby achieve a better interaction effect, the live broadcast server needs to generate the corresponding interaction instruction based on the most important interactive information. In a possible embodiment, as shown in fig. 3F, the following steps are performed before step 303 or step 310.
In step 315, information weights corresponding to the interactive information of the various information types are determined, and the information weights are used for indicating the importance degree of the interactive information.
Interactive information of different information types differs in importance in the live broadcast room; for example, a single piece of virtual article presentation information is more important than a single piece of text interactive information. Therefore, when the live broadcast room contains interactive information of multiple information types at the same time, the live broadcast server calculates the information weight corresponding to each information type, determines the most important interactive information according to the information weights, and generates the interaction instruction based on that interactive information.
For text-type interactive information, the importance depends not only on the weight of each single piece of text interactive information but also on the amount of text interactive information; therefore, when the interactive information is text interactive information, the live broadcast server accumulates the weight values corresponding to each piece of text interactive information to obtain the information weight of the text-type interactive information.
The weighted values of the text interaction information sent by different audience terminals are different, and the weighted values are related to the grades of the audience terminals (the higher the grade is, the higher the weighted values are).
And when the interactive information is virtual article presentation information, the live broadcast server accumulates the virtual article values corresponding to the virtual article presentation information to obtain a virtual article total value, and determines the information weight according to the virtual article total value.
In a possible implementation manner, for each piece of virtual article presentation information, the live broadcast server calculates to obtain a virtual article value according to the type of the virtual article and the number of the virtual article contained in the virtual article presentation information, and accumulates the virtual article values corresponding to each piece of virtual article presentation information to obtain a total value of the virtual article; and after the total value of the virtual article is obtained, the live broadcast server determines the information weight of the virtual article presentation information according to the preset relation between the total value of the virtual article and the information weight.
In step 316, the interaction information with the largest information weight is determined as the target interaction information.
Further, the live broadcast server compares the information weights of the interactive information of various information types, and determines the interactive information with the largest information weight as the target interactive information.
For example, if the live broadcast server calculates an information weight of 100 for the text interactive information in the live broadcast room and an information weight of 200 for the virtual article presentation information, the virtual article presentation information is determined as the target interactive information.
In step 317, an interaction instruction is generated according to the target interaction information.
When the determined target interactive information is the text interactive information, the live broadcast server generates the interaction instruction through steps 303 to 305; when the determined target interactive information is the virtual article presentation information, the live broadcast server generates the interaction instruction through steps 310 to 312. Details are not repeated here.
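Putting steps 315 to 317 together, a hedged sketch of the selection logic might read as follows; `text_info_weight` and `gift_info_weight` are the functions from the earlier sketches, while `instruction_from_text` and `instruction_from_gifts` are placeholders for the generation flows of steps 303 to 305 and 310 to 312, sketched later in this document:

```python
def generate_interaction_instruction(text_messages, gift_messages):
    """Steps 315-317: compute both information weights, keep the type with
    the larger weight as the target, and delegate instruction generation."""
    weights = {
        "text": text_info_weight(text_messages),
        "gift": gift_info_weight(gift_messages),
    }
    target_type = max(weights, key=weights.get)  # step 316: largest weight wins
    if target_type == "text":
        return instruction_from_text(text_messages)   # steps 303-305
    return instruction_from_gifts(gift_messages)      # steps 310-312
```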
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods. For details not disclosed in the embodiments of the apparatus of the present disclosure, refer to the embodiments of the method of the present disclosure.
Fig. 4 is a block diagram illustrating a live interaction device according to an exemplary embodiment of the present disclosure. The live interaction device can be implemented by hardware, or by a combination of hardware and software, as all or part of the live server 120 shown in fig. 1. The live interaction device includes:
an obtaining module 410 configured to obtain model data corresponding to the anchor image model;
a first transmitting module 420 configured to transmit the model data to each of the viewer terminals, the viewer terminals being configured to display the anchor image model in the live broadcast picture according to the model data;
a generating module 430 configured to generate an interaction instruction according to the interaction information sent by each viewer terminal;
and a second sending module 440 configured to send an interaction instruction to the viewer terminal, where the viewer terminal is configured to control the anchor character model to interact according to the interaction instruction.
Optionally, when the interactive information is text interactive information, the generating module 430 includes:
the analysis submodule is configured to perform semantic analysis on the character interaction information sent by each audience terminal;
the first determining submodule is configured to determine target semantics according to semantic analysis results corresponding to the text interactive information;
a second determining submodule configured to determine the interaction instruction according to the target semantics.
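The disclosure does not fix a particular semantic-analysis algorithm for these sub-modules, so the following sketch stands in with a simple keyword-frequency vote; the keyword and instruction tables are purely illustrative assumptions:

```python
from collections import Counter

# Illustrative keyword and instruction tables; the disclosure leaves the
# concrete semantic-analysis method open.
SEMANTIC_KEYWORDS = {"hello": "greet", "sing": "request_song", "bye": "farewell"}
SEMANTIC_TO_INSTRUCTION = {
    "greet": "wave_and_greet",
    "request_song": "start_singing",
    "farewell": "bow_goodbye",
}

def instruction_from_text(text_messages):
    """Analyse each message, vote for the semantics expressed most often
    (the target semantics), and map it to an interaction instruction."""
    votes = Counter()
    for msg in text_messages:
        for keyword, semantic in SEMANTIC_KEYWORDS.items():
            if keyword in msg["content"].lower():
                votes[semantic] += 1
    if not votes:
        return None  # no recognizable semantics in this batch of messages
    target_semantic, _ = votes.most_common(1)[0]
    return SEMANTIC_TO_INSTRUCTION[target_semantic]
```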
Optionally, when the interactive information is the virtual article presenting information, the generating module 430 includes:
an acquisition sub-module configured to acquire the type and the presentation amount of the virtual item included in the virtual item presentation information;
a calculation sub-module configured to calculate a virtual item value from the virtual item type and the presentation amount;
and the third determining submodule is configured to determine the interaction instruction according to the virtual article value.
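A possible sketch of this determination, redefining the hypothetical per-type unit values from the earlier weight sketch and assuming made-up value tiers (the disclosure only states that the instruction is determined from the computed value):

```python
# Same hypothetical per-type unit values as in the earlier weight sketch.
GIFT_UNIT_VALUE = {"flower": 1, "rocket": 100, "castle": 500}

def instruction_from_gifts(gift_messages):
    """Compute the total virtual article value (type x presentation amount)
    and choose an instruction tier from made-up thresholds."""
    total_value = sum(GIFT_UNIT_VALUE.get(m["gift_type"], 0) * m["count"]
                      for m in gift_messages)
    if total_value >= 500:
        return "blow_kiss_and_thank_by_name"
    if total_value >= 100:
        return "thank_with_bow"
    return "nod_thanks"
```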
Optionally, when acquiring the interactive information of at least two information types, the generating module 430 includes:
the fourth determining submodule is configured to determine information weights corresponding to the interactive information of various information types, and the information weights are used for indicating the importance degree of the interactive information;
the fifth determining submodule is configured to determine the interaction information with the largest information weight as the target interaction information;
and the generation sub-module is configured to generate an interaction instruction according to the target interaction information.
Optionally, the fifth determining sub-module is configured to:
when the interactive information is character interactive information, accumulating the weight values corresponding to the character interactive information to obtain the information weight;
when the interactive information is virtual article presentation information, accumulating the virtual article values corresponding to the virtual article presentation information to obtain a virtual article total value; and determining the information weight according to the total value of the virtual article.
Optionally, the interactive information includes at least one of text interactive information and virtual article presentation information;
a second transmitting module 440, comprising:
the first sending submodule is configured to send the interaction instruction to each audience terminal when the interaction instruction is generated by the character interaction information;
or, alternatively,
a second transmitting sub-module configured to determine a target audience terminal presenting the virtual item according to the virtual item presentation information when the interaction instruction is generated from the virtual item presentation information; and sending an interaction instruction to the target audience terminal.
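The routing performed by these two sending sub-modules might be sketched as follows; `send_to` is a stand-in for the server's actual transport call, and the message field names are assumptions:

```python
def send_to(terminal, instruction):
    """Stand-in for the server's real transport; only logs here."""
    print(f"-> {terminal}: {instruction}")

def dispatch_instruction(instruction, source_type, all_terminals,
                         gift_messages=None):
    """Broadcast text-driven instructions to every audience terminal;
    send gift-driven instructions only to the gifting viewers' terminals."""
    if source_type == "text":
        for terminal in all_terminals:
            send_to(terminal, instruction)
    else:
        # The sender terminal is assumed to be recorded on each gift message.
        targets = {m["sender_terminal"] for m in (gift_messages or [])}
        for terminal in targets:
            send_to(terminal, instruction)
```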
Optionally, the interaction instruction includes at least one of interaction action data, interaction voice data, and interaction expression data;
the device, still include:
the first adding module is configured to obtain an interaction action corresponding to the interaction instruction; adding the interactive action data corresponding to the interactive action into the interactive instruction, wherein the audience terminal is used for controlling the anchor image model to execute the interactive action according to the interactive action data;
and/or,
the second adding module is configured to acquire interactive voice corresponding to the interactive instruction, the interactive voice being synthesized according to preset voice parameters and the interactive speech corresponding to the interactive instruction, the voice parameters comprising at least one of timbre, tone and speech speed; and to add interactive voice data corresponding to the interactive voice into the interactive instruction, the audience terminal being used for playing the interactive voice according to the interactive voice data;
and/or,
the third adding module is configured to obtain an interactive expression corresponding to the interactive instruction; and adding the interactive expression data corresponding to the interactive expressions into the interactive instruction, wherein the audience terminal is used for controlling the anchor image model to display the interactive expressions according to the interactive expression data.
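A sketch of how the three adding modules might assemble the instruction payload; all field names are illustrative assumptions, since the disclosure does not specify a wire format:

```python
def build_instruction_payload(instruction_id, action_data=None,
                              voice_data=None, expression_data=None):
    """Assemble an interaction instruction carrying any combination of
    action, voice and expression data; the audience terminal consumes
    whichever fields are present."""
    payload = {"instruction": instruction_id}
    if action_data is not None:        # model performs this interaction action
        payload["action"] = action_data
    if voice_data is not None:         # speech synthesized from preset timbre/tone/speed
        payload["voice"] = voice_data
    if expression_data is not None:    # facial expression shown by the model
        payload["expression"] = expression_data
    return payload
```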
In summary, the live broadcast interaction apparatus provided in this embodiment provides model data corresponding to the anchor image model to each audience terminal, and sends the interaction instruction generated from the interaction information to the audience terminal, so that the audience terminal can control the displayed anchor image model to interact according to the interaction instruction. This solves the problem that, during live broadcasting, the anchor fails to interact with the audience in time according to the bullet-screen content, which harms the interaction effect and causes audience loss; real-time interaction with the audience is achieved by using the anchor image model, interaction efficiency and interaction quality are improved while interaction timeliness is ensured, and audience loss is avoided.
In this embodiment, the live broadcast server does not need to transmit a video stream of the anchor image to each viewer terminal; instead, it sends model data of the anchor image model to each viewer terminal, and the viewer terminal renders the anchor image according to the model data, thereby reducing the traffic generated in the live broadcast process.
In this embodiment, the live broadcast server performs semantic analysis on the text interactive information in the live broadcast room, determines the target semantics expressed by that information, and then sends the interaction instruction corresponding to the target semantics to each audience terminal, instructing the audience terminals to control the anchor image model in the live broadcast picture to execute the corresponding interaction operation. This faithfully simulates a real anchor interaction scene and improves the accuracy and authenticity of the interaction.
In this embodiment, the live broadcast server adds the interactive action data, the interactive expression data and the interactive voice data to the interactive instruction, so that the audience terminal can control the anchor image model to execute the corresponding interactive action, display the interactive expression or play the interactive voice according to the interactive instruction, thereby enriching the interactive form of the anchor image model and more truly simulating the anchor interactive scene.
In this embodiment, the live broadcast server sends different interaction instructions to different audience terminals in a targeted manner according to the virtual article presenting behaviors of those terminals, so that the anchor image models in different audience terminals execute different interaction operations, which improves the authenticity and pertinence of the interaction scene.
Fig. 5 is a block diagram illustrating a live interaction device 500, according to an example embodiment. For example, the apparatus 500 may be provided as the live server 120 shown in fig. 1. Referring to fig. 5, the apparatus 500 includes a processing component 522, which in turn includes one or more processors, and memory resources represented by a memory 532 for storing instructions, e.g., application programs, executable by the processing component 522. The application programs stored in the memory 532 may include one or more modules, each corresponding to a set of instructions. Further, the processing component 522 is configured to execute the instructions to perform the live interaction method described above.
The apparatus 500 may also include a power component 526 configured to perform power management of the apparatus 500, a wired or wireless network interface 550 configured to connect the apparatus 500 to a network, and an input/output (I/O) interface 558. The apparatus 500 may operate based on an operating system stored in the memory 532, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (15)

1. A live interaction method, comprising:
when the anchor starts live broadcasting, or the number of audiences in a live broadcasting room reaches a preset number threshold, or the sending rate of a bullet screen in the live broadcasting room reaches a preset rate threshold, acquiring model data corresponding to an anchor image model;
sending the model data to each audience terminal, wherein the audience terminals are used for displaying the anchor image model in a live broadcast picture according to the model data;
generating an interaction instruction according to the interaction information sent by each audience terminal;
and sending the interaction instruction to the audience terminal, wherein the audience terminal is used for controlling the anchor image model to interact according to the interaction instruction.
2. The method according to claim 1, wherein when the interactive information is text interactive information, the generating an interactive instruction according to the interactive information sent by each of the viewer terminals comprises:
performing semantic analysis on the character interaction information sent by each audience terminal;
determining target semantics according to semantic analysis results corresponding to the text interaction information;
and determining the interaction instruction according to the target semantics.
3. The method according to claim 1, wherein when the interactive information is virtual article presentation information, the generating an interactive instruction according to the interactive information sent by each of the viewer terminals includes:
acquiring the type and the presenting quantity of the virtual goods contained in the virtual goods presenting information;
calculating a virtual item value according to the virtual item type and the presentation quantity;
and determining the interaction instruction according to the virtual article value.
4. The method according to claim 1, wherein when the interactive information of at least two information types is acquired, the generating an interactive instruction according to the interactive information sent by each of the viewer terminals comprises:
determining information weights corresponding to the interactive information of various information types, wherein the information weights are used for indicating the importance degree of the interactive information;
determining the interaction information with the maximum information weight as target interaction information;
and generating the interaction instruction according to the target interaction information.
5. The method of claim 4, wherein the determining the information weight corresponding to each of the interactive information of each information type comprises:
when the interactive information is character interactive information, accumulating the weight values corresponding to the character interactive information to obtain the information weight;
when the interactive information is virtual article presentation information, accumulating the virtual article values corresponding to the virtual article presentation information to obtain a virtual article total value; and determining the information weight according to the total value of the virtual article.
6. The method of any one of claims 1 to 5, wherein the interactive information includes at least one of textual interactive information and virtual item presentation information;
the sending the interaction instruction to the audience terminal comprises:
when the interaction instruction is generated by the character interaction information, the interaction instruction is sent to each audience terminal;
or, alternatively,
when the interaction instruction is generated by the virtual article presenting information, determining a target audience terminal for presenting a virtual article according to the virtual article presenting information; and sending the interaction instruction to the target audience terminal.
7. The method according to any one of claims 1 to 5, wherein the interactive instruction comprises at least one of interactive action data, interactive voice data and interactive expression data;
the method further comprises the following steps:
acquiring an interaction action corresponding to the interaction instruction; adding the interaction action data corresponding to the interaction action into the interaction instruction, wherein the audience terminal is used for controlling the anchor image model to execute the interaction action according to the interaction action data;
and/or,
acquiring interactive voice corresponding to the interactive instruction, wherein the interactive voice is synthesized according to preset voice parameters and interactive speech corresponding to the interactive instruction, and the voice parameters comprise at least one of timbre, tone and speech speed; adding interactive voice data corresponding to the interactive voice into the interactive instruction, wherein the audience terminal is used for playing the interactive voice according to the interactive voice data;
and/or,
acquiring an interactive expression corresponding to the interactive instruction; and adding the interactive expression data corresponding to the interactive expressions into the interactive instruction, wherein the audience terminal is used for controlling the anchor image model to display the interactive expressions according to the interactive expression data.
8. A live interaction device, the device comprising:
the acquisition module is configured to acquire model data corresponding to an anchor image model when an anchor starts live broadcasting, or the number of audiences in a live broadcasting room reaches a preset number threshold, or the sending rate of a barrage in the live broadcasting room reaches a preset rate threshold;
a first sending module configured to send the model data to each viewer terminal, the viewer terminal being configured to display the anchor avatar model in a live view according to the model data;
the generating module is configured to generate an interaction instruction according to the interaction information sent by each audience terminal;
and the second sending module is configured to send the interaction instruction to the audience terminal, and the audience terminal is used for controlling the anchor image model to interact according to the interaction instruction.
9. The apparatus of claim 8, wherein when the interactive message is a text interactive message, the generating module comprises:
the analysis submodule is configured to perform semantic analysis on the character interaction information sent by each audience terminal;
the first determining submodule is configured to determine target semantics according to a semantic analysis result corresponding to each piece of character interaction information;
a second determining submodule configured to determine the interaction instruction according to the target semantics.
10. The apparatus of claim 8, wherein when the interactive information is virtual item gifting information, the generating module comprises:
an acquisition sub-module configured to acquire the type and the presentation amount of the virtual item included in the virtual item presentation information;
a calculation sub-module configured to calculate a virtual item value from the virtual item type and the presentation amount;
a third determination submodule configured to determine the interaction instruction according to the virtual item value.
11. The apparatus of claim 8, wherein when the interactive information of at least two information types is obtained, the generating module comprises:
the fourth determining submodule is configured to determine information weights corresponding to the interaction information of various information types, and the information weights are used for indicating the importance degree of the interaction information;
a fifth determining submodule configured to determine the interaction information with the largest information weight as target interaction information;
and the generation sub-module is configured to generate the interaction instruction according to the target interaction information.
12. The apparatus of claim 11, wherein the fifth determination submodule is configured to:
when the interactive information is character interactive information, accumulating the weight values corresponding to the character interactive information to obtain the information weight;
when the interactive information is virtual article presentation information, accumulating the virtual article values corresponding to the virtual article presentation information to obtain a virtual article total value; and determining the information weight according to the total value of the virtual article.
13. The apparatus according to any one of claims 8 to 12, wherein the interactive information includes at least one of text interactive information and virtual item presentation information;
the second sending module includes:
the first sending sub-module is configured to send the interaction instruction to each audience terminal when the interaction instruction is generated by the character interaction information;
or, alternatively,
a second transmitting sub-module configured to determine a target audience terminal for presenting a virtual item according to the virtual item presentation information when the interaction instruction is generated from the virtual item presentation information; and sending the interaction instruction to the target audience terminal.
14. The device according to any one of claims 8 to 12, wherein the interaction instruction comprises at least one of interaction action data, interaction voice data and interaction expression data;
the device, still include:
the first adding module is configured to obtain an interaction action corresponding to the interaction instruction; adding the interaction action data corresponding to the interaction action into the interaction instruction, wherein the audience terminal is used for controlling the anchor image model to execute the interaction action according to the interaction action data;
and/or,
the second adding module is configured to obtain interactive voice corresponding to the interactive instruction, the interactive voice is synthesized according to preset voice parameters and interactive speech corresponding to the interactive instruction, and the voice parameters comprise at least one of timbre, tone and speech speed; adding interactive voice data corresponding to the interactive voice into the interactive instruction, wherein the audience terminal is used for playing the interactive voice according to the interactive voice data;
and/or,
the third adding module is configured to obtain an interactive expression corresponding to the interactive instruction; and adding the interactive expression data corresponding to the interactive expressions into the interactive instruction, wherein the audience terminal is used for controlling the anchor image model to display the interactive expressions according to the interactive expression data.
15. A live interaction device, the device comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
when the anchor starts live broadcasting, or the number of audiences in a live broadcasting room reaches a preset number threshold, or the sending rate of a bullet screen in the live broadcasting room reaches a preset rate threshold, acquiring model data corresponding to an anchor image model;
sending the model data to each audience terminal, wherein the audience terminals are used for displaying the anchor image model in a live broadcast picture according to the model data;
generating an interaction instruction according to the interaction information sent by each audience terminal;
and sending the interaction instruction to the audience terminal, wherein the audience terminal is used for controlling the anchor image model to interact according to the interaction instruction.
CN201710065807.7A 2016-12-09 2017-02-06 Live broadcast interaction method and device Active CN106878820B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2016111262399 2016-12-09
CN201611126239 2016-12-09

Publications (2)

Publication Number Publication Date
CN106878820A CN106878820A (en) 2017-06-20
CN106878820B true CN106878820B (en) 2020-10-16

Family

ID=59166583

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710065807.7A Active CN106878820B (en) 2016-12-09 2017-02-06 Live broadcast interaction method and device

Country Status (1)

Country Link
CN (1) CN106878820B (en)

Families Citing this family (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107454435A (en) * 2017-06-21 2017-12-08 白冰 A kind of live broadcasting method and live broadcast system based on physical interaction
CN107509117A (en) * 2017-06-21 2017-12-22 白冰 A kind of living broadcast interactive method and living broadcast interactive system
CN107635154A (en) * 2017-06-21 2018-01-26 白冰 A kind of live control device of physical interaction
CN107423809B (en) * 2017-07-07 2021-02-26 北京光年无限科技有限公司 Virtual robot multi-mode interaction method and system applied to video live broadcast platform
CN107750005B (en) * 2017-09-18 2020-10-30 迈吉客科技(北京)有限公司 Virtual interaction method and terminal
CN109635616B (en) * 2017-10-09 2022-12-27 阿里巴巴集团控股有限公司 Interaction method and device
CN108462883B (en) * 2018-01-08 2019-10-18 平安科技(深圳)有限公司 A kind of living broadcast interactive method, apparatus, terminal device and storage medium
CN108769724B (en) * 2018-05-30 2020-12-04 广州华多网络科技有限公司 Method and device for pushing popup in live webcast and live webcast system
CN108986192B (en) * 2018-07-26 2024-01-30 北京运多多网络科技有限公司 Data processing method and device for live broadcast
CN109271553A (en) * 2018-08-31 2019-01-25 乐蜜有限公司 A kind of virtual image video broadcasting method, device, electronic equipment and storage medium
CN109120985B (en) * 2018-10-11 2021-07-23 广州虎牙信息科技有限公司 Image display method and device in live broadcast and storage medium
CN109889858B (en) * 2019-02-15 2021-06-11 广州酷狗计算机科技有限公司 Information processing method and device for virtual article and computer readable storage medium
CN111641844B (en) * 2019-03-29 2022-08-19 广州虎牙信息科技有限公司 Live broadcast interaction method and device, live broadcast system and electronic equipment
CN110071938B (en) * 2019-05-05 2021-12-03 广州虎牙信息科技有限公司 Virtual image interaction method and device, electronic equipment and readable storage medium
CN110234019B (en) * 2019-07-31 2021-10-22 广州虎牙科技有限公司 Barrage interaction method, barrage interaction system, barrage interaction terminal and computer-readable storage medium
CN110519612A (en) * 2019-08-26 2019-11-29 广州华多网络科技有限公司 Even wheat interactive approach, live broadcast system, electronic equipment and storage medium
CN110662083B (en) * 2019-09-30 2022-04-22 北京达佳互联信息技术有限公司 Data processing method and device, electronic equipment and storage medium
CN111314719A (en) * 2020-01-22 2020-06-19 北京达佳互联信息技术有限公司 Live broadcast auxiliary method and device, electronic equipment and storage medium
CN111541908A (en) * 2020-02-27 2020-08-14 北京市商汤科技开发有限公司 Interaction method, device, equipment and storage medium
CN111988635A (en) * 2020-08-17 2020-11-24 深圳市四维合创信息技术有限公司 AI (Artificial intelligence) -based competition 3D animation live broadcast method and system
CN112601100A (en) * 2020-12-11 2021-04-02 北京字跳网络技术有限公司 Live broadcast interaction method, device, equipment and medium
CN112616063B (en) * 2020-12-11 2022-10-28 北京字跳网络技术有限公司 Live broadcast interaction method, device, equipment and medium
CN113766253A (en) * 2021-01-04 2021-12-07 北京沃东天骏信息技术有限公司 Live broadcast method, device, equipment and storage medium based on virtual anchor
CN113867538A (en) * 2021-10-18 2021-12-31 深圳追一科技有限公司 Interaction method, interaction device, computer equipment and computer-readable storage medium
CN114025186A (en) * 2021-10-28 2022-02-08 广州方硅信息技术有限公司 Virtual voice interaction method and device in live broadcast room and computer equipment
CN114173139B (en) * 2021-11-08 2023-11-24 北京有竹居网络技术有限公司 Live broadcast interaction method, system and related device
CN114302217B (en) * 2021-12-29 2024-01-05 广州繁星互娱信息科技有限公司 Voice information generation method and device, electronic equipment and storage medium
CN114501054B (en) * 2022-02-11 2023-04-21 腾讯科技(深圳)有限公司 Live interaction method, device, equipment and computer readable storage medium

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102176197A (en) * 2011-03-23 2011-09-07 上海那里网络科技有限公司 Method for performing real-time interaction by using virtual avatar and real-time image
US20130346154A1 (en) * 2012-06-22 2013-12-26 Josephine Holz Systems and methods for audience measurement analysis
CN109743335A (en) * 2014-08-01 2019-05-10 广州华多网络科技有限公司 Interactive system, server, client and exchange method
CN104333782B (en) * 2014-11-11 2018-01-09 广州华多网络科技有限公司 A kind of main broadcaster formulates the order method and system, relevant device of task
CN104994421A (en) * 2015-06-30 2015-10-21 广州华多网络科技有限公司 Interaction method, device and system of virtual goods in live channel
CN105516784A (en) * 2016-01-29 2016-04-20 广州酷狗计算机科技有限公司 Virtual good display method and device
CN105740029B (en) * 2016-03-03 2019-07-05 腾讯科技(深圳)有限公司 A kind of method, user equipment and system that content is presented
CN105828145B (en) * 2016-03-18 2019-07-19 广州酷狗计算机科技有限公司 Interactive approach and device
CN105959718A (en) * 2016-06-24 2016-09-21 乐视控股(北京)有限公司 Real-time interaction method and device in video live broadcasting
CN106162369B (en) * 2016-06-29 2018-11-16 腾讯科技(深圳)有限公司 It is a kind of to realize the method, apparatus and system interacted in virtual scene

Also Published As

Publication number Publication date
CN106878820A (en) 2017-06-20

Similar Documents

Publication Publication Date Title
CN106878820B (en) Live broadcast interaction method and device
TWI778477B (en) Interaction methods, apparatuses thereof, electronic devices and computer readable storage media
WO2021109652A1 (en) Method and apparatus for giving character virtual gift, device, and storage medium
US11538213B2 (en) Creating and distributing interactive addressable virtual content
US10210002B2 (en) Method and apparatus of processing expression information in instant communication
CN112087655B (en) Method and device for presenting virtual gift and electronic equipment
KR101492359B1 (en) Input support device, input support method, and recording medium
US11451858B2 (en) Method and system of processing information flow and method of displaying comment information
WO2022022485A1 (en) Content provision method and apparatus, content display method and apparatus, and electronic device and storage medium
CN110868635A (en) Video processing method and device, electronic equipment and storage medium
US20230047858A1 (en) Method, apparatus, electronic device, computer-readable storage medium, and computer program product for video communication
CN106302471B (en) Method and device for recommending virtual gift
CN115515016B (en) Virtual live broadcast method, system and storage medium capable of realizing self-cross reply
US20240196025A1 (en) Computer program, server device, terminal device, and method
CN112287848A (en) Live broadcast-based image processing method and device, electronic equipment and storage medium
JP2024521795A (en) Simulating crowd noise at live events with sentiment analysis of distributed inputs
CN113645472B (en) Interaction method and device based on play object, electronic equipment and storage medium
CN116600152A (en) Virtual anchor live broadcast method, device, equipment and storage medium
CN116708853A (en) Interaction method and device in live broadcast and electronic equipment
CN114449301B (en) Item sending method, item sending device, electronic equipment and computer-readable storage medium
CN115620096A (en) Digital human animation evaluation optimization method and device, equipment, medium and product thereof
CN113301362B (en) Video element display method and device
KR20180119410A (en) Method for providing karaoke game service based on virtual reality
CN113542874A (en) Information playing control method, device, equipment and computer readable storage medium
CN115086693A (en) Virtual object interaction method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant