CN112601100A - Live broadcast interaction method, device, equipment and medium - Google Patents

Live broadcast interaction method, device, equipment and medium

Info

Publication number
CN112601100A
Authority
CN
China
Prior art keywords
live
virtual object
live broadcast
comment information
interactive
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011460121.6A
Other languages
Chinese (zh)
Inventor
杨沐
王骁玮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd
Priority to CN202011460121.6A
Publication of CN112601100A
Priority to PCT/CN2021/128072 (published as WO2022121557A1)
Legal status: Pending (current)

Classifications

    • H04N21/2187 Live feed (source of audio or video content for selective content distribution)
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04845 GUI interaction techniques for the control of specific functions or operations, for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F3/04883 GUI interaction techniques using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
    • G10L13/02 Methods for producing synthetic speech; Speech synthesisers
    • H04N21/2335 Processing of audio elementary streams involving reformatting operations of audio signals, e.g. by converting from one coding standard to another
    • H04N21/2343 Processing of video elementary streams involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/2355 Processing of additional data involving reformatting operations of additional data, e.g. HTML pages
    • H04N21/2393 Interfacing the upstream path of the transmission network involving handling client requests
    • H04N21/262 Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists
    • H04N21/4316 Generation of visual interfaces for content selection or interaction, for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • H04N21/4355 Processing of additional data involving reformatting operations of additional data, e.g. HTML pages on a television screen
    • H04N21/437 Interfacing the upstream path of the transmission network, e.g. for transmitting client requests to a VOD server
    • H04N21/4398 Processing of audio elementary streams involving reformatting operations of audio signals
    • H04N21/4402 Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/44218 Monitoring of end-user related data: detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • H04N21/4725 End-user interface for requesting additional data associated with the content using interactive regions of the image, e.g. hot spots
    • H04N21/4788 Supplemental services communicating with other users, e.g. chatting

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Computational Linguistics (AREA)
  • Marketing (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • Business, Economics & Management (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • User Interface Of Digital Computer (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

Embodiments of the present disclosure relate to a live broadcast interaction method, apparatus, device and medium. The method includes: playing live broadcast content corresponding to a drawing topic, displaying a drawing board in a first area of a live broadcast interface in response to a first drawing action of a virtual object, and presenting on the drawing board a drawing trace diagram corresponding to the first drawing action; displaying a plurality of pieces of comment information from live viewers in a second area of the live interface; and playing, in the live interface, interactive content of the virtual object directed at target comment information while hiding the drawing board, the target comment information being one or more pieces of the comment information. With this technical solution, users can guess the topic from the drawing trace diagram drawn live by the virtual object, and the virtual object replies according to the comment information entered by users, so that a "you draw, I guess" interactive game is realized between the virtual object and users, which improves the diversity and interest of the virtual object's live drawing and further improves the user's interactive experience.

Description

Live broadcast interaction method, device, equipment and medium
Technical Field
The present disclosure relates to the field of live broadcast technologies, and in particular, to a live broadcast interaction method, apparatus, device, and medium.
Background
With the continuous development of live broadcast technology, live broadcast watching becomes an important entertainment activity in the life of people.
Currently, virtual objects can be used in place of real hosts in live broadcasts to make them more interesting. However, the interaction forms of such virtual objects are limited, and users cannot deeply participate in the live content presented by a virtual object.
Disclosure of Invention
To solve the technical problem or at least partially solve the technical problem, the present disclosure provides a live broadcast interaction method, apparatus, device and medium.
The embodiment of the disclosure provides a live broadcast interaction method, which comprises the following steps:
playing live broadcast content corresponding to a drawing topic, displaying a drawing board in a first area of a live broadcast interface in response to a first drawing action of a virtual object, and presenting on the drawing board a drawing trace diagram corresponding to the first drawing action;
displaying a plurality of pieces of comment information of live viewers in a second area of the live interface;
playing interactive content of the virtual object for target comment information in the live interface, and hiding the drawing board; wherein the target comment information is one or more pieces of the comment information.
The embodiment of the present disclosure further provides a live broadcast interaction apparatus, the apparatus includes:
the drawing live broadcasting module is used for playing live broadcasting content corresponding to a drawing title, responding to a first drawing action of a virtual object, displaying a drawing board in a first area of a live broadcasting interface, and presenting a drawing track graph corresponding to the first drawing action on the drawing board;
the comment display module is used for displaying a plurality of comment information of the live audience in a second area of the live interface;
the reply live broadcast module is used for playing the interactive content of the virtual object aiming at the target comment information in the live broadcast interface and hiding the drawing board; wherein the target comment information is one or more of the comment information.
An embodiment of the present disclosure further provides an electronic device, which includes: a processor; a memory for storing the processor-executable instructions; the processor is used for reading the executable instructions from the memory and executing the instructions to realize the live broadcast interaction method provided by the embodiment of the disclosure.
The embodiment of the present disclosure also provides a computer-readable storage medium, where a computer program is stored, where the computer program is used to execute the live broadcast interaction method provided by the embodiment of the present disclosure.
Compared with the prior art, the technical solution provided by the embodiments of the present disclosure has the following advantages. The live broadcast interaction scheme plays live broadcast content corresponding to a drawing topic, displays a drawing board in a first area of a live broadcast interface in response to a first drawing action of a virtual object, and presents on the drawing board a drawing trace diagram corresponding to the first drawing action; displays a plurality of pieces of comment information from live viewers in a second area of the live interface; and plays, in the live interface, interactive content of the virtual object directed at target comment information while hiding the drawing board, the target comment information being one or more pieces of the comment information. With this technical solution, users can guess the topic from the drawing trace diagram drawn live by the virtual object, and the virtual object replies according to the comment information entered by users, so that a "you draw, I guess" interactive game is realized between the virtual object and users, which improves the diversity and interest of the virtual object's live drawing and further improves the user's interactive experience.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
Fig. 1 is a schematic flow chart of a live broadcast interaction method according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of a first live interaction provided in the embodiment of the present disclosure;
fig. 3 is a schematic diagram of a second live interaction provided by the present disclosure;
fig. 4 is a schematic diagram of a third live interaction provided by the embodiment of the present disclosure;
fig. 5 is a schematic diagram of a fourth live interaction provided by an embodiment of the present disclosure;
fig. 6 is a schematic diagram of a fifth live interaction provided by the present disclosure;
fig. 7 is a schematic diagram of a sixth live interaction provided by an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of a live broadcast interaction apparatus according to an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
Fig. 1 is a schematic flow chart of a live broadcast interaction method according to an embodiment of the present disclosure, where the method can be executed by a live broadcast interaction apparatus, where the apparatus can be implemented by software and/or hardware, and can be generally integrated in an electronic device. As shown in fig. 1, the method is applied to a terminal of a user entering a live broadcast of a virtual object, and includes:
step 101, playing live broadcast content corresponding to a drawing title, responding to a first drawing action of a virtual object, displaying a drawing board in a first area of a live broadcast interface, and presenting a drawing track graph corresponding to the first drawing action on the drawing board.
The virtual object may be a three-dimensional model created in advance based on Artificial Intelligence (AI) technology, i.e., a digital object that can be controlled by a computer; the limb motions and facial information of a real person may be captured by a motion capture device and a face capture device and used to drive the virtual object. The virtual object may be of various specific types, different virtual objects may have different appearances, and the virtual object may be a virtual animal or a virtual character in different styles. In the embodiments of the present disclosure, by combining artificial intelligence technology with live video technology, the virtual object can replace a real person in a live video broadcast.
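For illustration only, the following non-limiting Python sketch shows one way the captured motion and face data described above could drive a virtual object; all class and field names are hypothetical and are not part of the disclosure.

    from dataclasses import dataclass, field

    @dataclass
    class CaptureFrame:
        skeleton: dict      # joint name -> rotation, from the motion capture device
        blendshapes: dict   # expression name -> weight in [0, 1], from the face capture device

    @dataclass
    class VirtualObject:
        name: str
        pose: dict = field(default_factory=dict)
        expression: dict = field(default_factory=dict)

        def drive(self, frame: CaptureFrame) -> None:
            # Apply the captured real-person data to the 3D model for this frame.
            self.pose.update(frame.skeleton)
            self.expression.update(frame.blendshapes)

    avatar = VirtualObject("small A")
    avatar.drive(CaptureFrame(skeleton={"right_wrist": (0.1, 0.4, 0.0)},
                              blendshapes={"smile": 0.8}))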
In an embodiment of the present disclosure, playing live content corresponding to a drawing title may include: the method comprises the steps of obtaining video data and audio data of at least one drawing topic of a virtual object, wherein the video data comprises action image data corresponding to the drawing topic and scene image data of a plurality of scenes, and the action image data is used for generating a first drawing action, a drawing track graph and an interaction action; and generating live broadcast content corresponding to the drawing title based on the video data and the audio data and playing the live broadcast content.
Optionally, in the process of playing the live content corresponding to the drawing topic, the actions of the virtual object and the scenes of the live content are switched, based on the action image data and the scene image data of the multiple scenes, as the live content changes.
The video data and the audio data of the drawing questions refer to data which are configured in advance by the server and are used for realizing live drawing of the virtual object, the number of the drawing questions is not limited specifically, and each drawing question has corresponding video data and audio data. In the embodiment of the disclosure, the question bank can be preset, the question bank comprises video data and audio data of a plurality of drawing questions, and the specific number can be set according to actual conditions. It can be understood that the repetition degree and the appearance frequency of the drawing questions in the question bank can be updated based on actual situations so as to meet the requirements of users.
The video data may include action image data corresponding to the drawing topic and scene image data of a plurality of scenes. The action image data may include picture data of the virtual object's expressions and/or limb movements; it may be used to generate the first drawing action of the virtual object, the drawing trace diagram representing the drawing topic, and the interactive actions of the virtual object, all of which can match the drawing topic. The scenes corresponding to the scene image data may include the environment scene during the virtual object's live broadcast and scenes with different picture viewing angles, where a picture viewing angle corresponds to the virtual object being shot by a different lens, and scene images of different viewing angles differ in display size and/or display direction. A scene may correspond to a drawing topic, or to a segment of live drawing or live replying.
In the embodiment of the disclosure, after the terminal detects that the user triggers live broadcasting of the virtual object, video data and audio data of at least one drawing topic of the virtual object in the server can be acquired, live broadcasting contents corresponding to the drawing topic can be generated through decoding processing of the video data and the audio data, and the live broadcasting contents of live broadcasting drawing of the drawing topic by the virtual object are played in a live broadcasting interface. The specific form of the live broadcast triggering operation is not limited, for example, when the trigger of the user on the setting key is detected, it may be determined that the live broadcast triggering operation is received; or when the trigger of the user to the recommendation information of the virtual object is detected, it may be determined that a live broadcast trigger operation is received, where the recommendation information may be a picture or a video. The trigger operation may include one or more of a single click, a double click, a swipe, and a voice command.
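For illustration only, a minimal Python sketch of the client-side flow of step 101 described above is given below: on a detected trigger operation, the topic's pre-configured media are requested, decoded and played. All function names and data shapes are assumptions, and the stubs stand in for real network, decoding and rendering code.

    def fetch_topic_media(topic_id: str) -> dict:
        # Stand-in for requesting the video data and audio data of one drawing
        # topic from the server; real code would issue a network request.
        return {"video": {"actions": ["stroke_1"], "scenes": ["studio"]},
                "audio": ["intro.pcm"]}

    def decode(video: dict, audio: list) -> dict:
        # Decode the video and audio data into playable live content.
        return {"frames": video["actions"], "scenes": video["scenes"], "audio": audio}

    def play_in_live_interface(content: dict) -> None:
        # Stand-in for rendering the live content in the live interface.
        print("playing", len(content["frames"]), "action frames")

    def on_live_trigger(topic_id: str) -> None:
        # Called when a click, double click, swipe or voice command that
        # triggers the virtual object's live broadcast is detected.
        media = fetch_topic_media(topic_id)
        play_in_live_interface(decode(media["video"], media["audio"]))

    on_live_trigger("topic-001")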
In the process of playing the live broadcast content corresponding to the drawing title, responding to a first drawing action of the virtual object, displaying a drawing board in a first area of a live broadcast interface, and presenting a drawing track graph corresponding to the first drawing action on the drawing board. The drawing board is used for displaying a drawing track graph corresponding to the drawing title on the live broadcast interface. In the process of playing the live content corresponding to the drawing title, the action of the virtual object can be switched from the first action to the second action along with the change of the live content based on the action image data, and the scene of the live content can be switched from the first scene to the second scene along with the change of the live content based on the scene image data of a plurality of scenes. That is, with the change of the live content corresponding to the drawing title, the virtual object can switch different actions, and the scenes of the live content can also switch different scenes.
Optionally, in the process of playing the live content corresponding to the drawing title, the interactive information sent by a plurality of audiences can be received, the audio data and the text information corresponding to the interactive information are received, the interactive information and the text information for replying the interactive information are displayed in the live interface, the video content replied by the virtual object aiming at the interactive information is generated based on the audio data, and the video content is played in the live page. The virtual object can alternate and reply the interactive information of the audience under the condition of not stopping the drawing action in the process of live-broadcasting the drawing, and the two links of the game and the interactive chat in which the virtual object draws the guess of the user can be carried out simultaneously and mutually overlapped, so that the participation of the user is further improved, and the interest of the interaction between the user and the virtual object is increased.
Fig. 2 is a schematic diagram of a first live interaction provided by an embodiment of the present disclosure. As shown in Fig. 2, the figure shows a live interface of a virtual object 11 and the live picture while the virtual object 11 performs a drawing action: the virtual object 11 holds a drawing tool in its hand, a drawing board 16 is shown in a first area of the live interface, a drawing trace diagram corresponding to the drawing action is shown on the drawing board 16, such as the half-drawn bottle in the figure, together with a prompt for the current drawing topic, such as "three characters, a food" in the figure. The upper left corner of the live interface in Fig. 2 also shows the avatar and name of the virtual object 11, named "small A", and a follow button 12.
Fig. 3 is a schematic diagram of a second live interaction provided by an embodiment of the present disclosure. Fig. 3 shows the drawing trace diagram of the current drawing topic being generated gradually as the first drawing action progresses: for example, the lower half of the bottle is added as the first drawing action continues, forming a complete bottle. The progress of the virtual object's first drawing action provides more clues and improves the accuracy of users' answers.
In the process of the virtual object's live drawing, the virtual object switches between different actions and the live content switches between different scenes as the live content corresponding to the drawing topic changes. The live content of the virtual anchor therefore matches the drawing topic closely, the live effect of the virtual object is better, the diversity and interest of the virtual object's presentation are improved, and the user's experience of the virtual object's live drawing is further improved.
Step 102, displaying a plurality of pieces of comment information of the live viewers in a second area of the live interface.
The comment information refers to interaction information input by a live viewer when the live viewer watches the drawing track graph of the virtual object. The second area is an area which is arranged in the live interface and used for displaying comment information. The position of the second area is not limited specifically, and the second area can be displayed on the live broadcast interface in a floating layer or pop-up window mode.
In the embodiment of the disclosure, the terminal can receive a plurality of comment information from a plurality of live viewers in the process of playing the live content corresponding to the drawing title, and display the comment information in the second area of the live interface. The comment information can include answer information of the drawing track graph aiming at the virtual object, and therefore the user can guess questions in real time in the drawing process of the virtual object.
Optionally, the terminal may display a viewer's answer information only in an answer interface that the answering viewer alone can browse, so that other viewers cannot see it; this improves the fairness of guessing and can be configured according to the actual scenario.
For example, referring to Fig. 2, a second area is arranged at the lower left of the live interface, and comment information sent by different viewers watching the virtual object's live drawing is shown in the second area, for example "you draw well" sent by user A, "a day with a drawing" sent by user B, and "you draw badly" sent by user C. The bottom of the live interface also shows an editing area 13 through which the current user sends comment information, as well as other function keys, such as an interaction key 14 and an activity-and-reward key 15 in the figure, where different function keys have different functions.
Step 103, playing interactive content of the virtual object for the target comment information in the live interface, and hiding the drawing board; the target comment information is one or more pieces of the comment information.
In the embodiment of the present disclosure, playing the interactive content of the virtual object for the target comment information in the live interface includes: receiving reply audio data and reply text information corresponding to the target comment information; displaying target comment information and reply text information in a third area of the live broadcast interface; and generating interactive content of the virtual object aiming at the target comment information based on the reply audio data and playing the interactive content in the live broadcast interface.
The target comment information is one or more comments, among the plurality of comment information sent by live viewers, that the server determines need to be replied to based on a preset scheme. The preset scheme can be set according to the actual situation: for example, the target comment information may be determined based on the points of the live viewers who sent the comment information; or target comment information matching preset keywords may be searched for, where the preset keywords may be mined and extracted in advance from hot-topic information or may be keywords related to the drawing topic; or semantic recognition may be performed on the comment information and comments with similar meanings clustered into several sets, where the set containing the most comments represents the hottest topic among the live viewers, and a comment from that set is taken as the target comment information. The reply text information refers to reply content, matching the target comment information, that the server determines based on a corpus, and the reply audio data is natural speech data of the virtual object obtained by converting the reply text in real time through Text To Speech (TTS) technology. The terminal may receive the reply audio data and the reply text information corresponding to the target comment information, generate the reply interactive content based on the reply audio data and the video data of the current drawing topic, then display the target comment information and the reply text information in a third area of the live interface, and play, in the live interface, the interactive content with which the virtual object replies to the target comment information.
For example, Fig. 4 is a schematic diagram of a third live interaction provided by an embodiment of the present disclosure. As shown in Fig. 4, a third area 17 in the live interface shows the current target comment information and the virtual object's reply text information: the target comment information in the figure is "your drawing is not bad" sent by user A, the virtual object's reply text information is "draw one by one", and the audio content corresponding to the reply text information is played at the same time.
In the above scheme, the terminal can play, in the live interface, the interactive content with which the virtual object replies to comment information, and display the current comment information and the corresponding reply text, so that users know whose comment the virtual object is currently replying to, which further deepens the interaction between users and the virtual object and improves the interactive experience.
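For illustration only, a non-limiting Python sketch of one possible target-comment selection follows, combining the keyword-matching and clustering ideas mentioned above; the normalization and the selection order are assumptions.

    from collections import Counter

    def normalize(text: str) -> str:
        return "".join(ch for ch in text.lower() if ch.isalnum() or ch.isspace()).strip()

    def select_target_comment(comments: list, keywords: list) -> str:
        # 1) Prefer a comment containing a preset keyword related to the drawing topic.
        for c in comments:
            if any(k in normalize(c) for k in keywords):
                return c
        # 2) Otherwise cluster comments with the same normalized text and take one
        #    from the largest cluster, i.e. the hottest topic among live viewers.
        counts = Counter(normalize(c) for c in comments)
        hottest, _ = counts.most_common(1)[0]
        return next(c for c in comments if normalize(c) == hottest)

    print(select_target_comment(
        ["you draw well", "is it chili sauce?", "is it chili sauce?"],
        keywords=["chili"]))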
In the embodiment of the present disclosure, the live broadcast interaction method may further include: and responding to the second drawing action of the virtual object, displaying the drawing trace diagram on the drawing board, and updating the drawing trace diagram according to the second drawing action. The comment information comprises answer information for the drawing track graph, and the second drawing action is triggered when no correct answer corresponding to the drawing question exists in the answer information.
The second drawing action is a new drawing action of the virtual object performed when no live viewer has guessed the answer; it may be the same as or different from the first drawing action, depending on the action image data sent by the server. In the process of playing the live content corresponding to the drawing topic, if no correct answer corresponding to the drawing topic exists in the answer information, the virtual object starts to perform the second drawing action; at this time, the first area of the live interface shows the drawing board again, the drawing trace diagram completed in step 101 is shown on the drawing board, and the drawing trace diagram is continuously updated as the second drawing action of the virtual object progresses. Live viewers can keep entering comment information to answer according to the updated drawing trace diagram. That is, the "you draw, I guess" game between the virtual object and users can be carried out in multiple stages, and one drawing topic can correspond to at least one drawing trace diagram, which makes the game interaction between users and the virtual object deeper and more closely connected.
For example, Fig. 5 is a schematic diagram of a fourth live interaction provided by an embodiment of the present disclosure. As shown in Fig. 5, the drawing board 16 of the live interface shows the drawing trace diagram of the current drawing topic updated with the second drawing action; for example, a "hot pepper" is further added to the bottle of the drawing trace diagram in Fig. 3, providing more clues to improve the accuracy of users' answers.
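For illustration only, the following Python sketch shows a multi-round loop in which a further drawing action is triggered only while no correct answer has appeared in the viewers' answer information; all names and example values are assumptions.

    def has_correct_answer(answers: list, correct: str) -> bool:
        return any(a.strip().lower() == correct for a in answers)

    def run_rounds(strokes: list, answers_per_round: list, correct: str) -> None:
        for stroke, answers in zip(strokes, answers_per_round):
            # Show the drawing board and update the drawing trace diagram.
            print("draw:", stroke)
            if has_correct_answer(answers, correct):
                print("a viewer guessed the topic; no further drawing action is needed")
                return
            print("no correct answer yet; trigger the next drawing action")

    run_rounds(["upper half of a bottle", "lower half of the bottle", "hot pepper on the label"],
               [["a cup?"], ["a vase?"], ["chili sauce"]],
               correct="chili sauce")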
In the embodiment of the present disclosure, playing the interactive content of the virtual object for the target comment information in the live interface includes: receiving audio data corresponding to a correct answer, wherein the audio data is received when the game time of the virtual object reaches a preset time threshold; and playing interactive live content, generated based on the audio data, in which the virtual object announces the correct answer.
The server may obtain the game time of the virtual object, and if the game time reaches a preset time threshold and/or no correct answer exists in the answer information, the server may send audio data and/or text information corresponding to the correct answer to the terminal. After receiving the audio data and/or the text information of the correct answer, the terminal may generate interactive live content based on the audio data and the video data of the drawing topic, play in the live interface the interactive live content in which the virtual object announces the correct answer, and/or display the text information of the correct answer in the live interface. These two ways of publishing the correct answer allow users to quickly learn the correct answer of the current drawing topic.
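For illustration only, a minimal Python sketch of the time-threshold check described above follows; the 90-second threshold is an assumed example value, and the check could equally be combined with the absence of a correct answer using "or", as the paragraph allows.

    import time

    GAME_TIME_THRESHOLD = 90.0   # seconds; an assumed example value

    def should_publish_answer(start_time: float, has_correct_answer: bool) -> bool:
        # Publish the correct answer once the round has run long enough
        # and no viewer has guessed it yet.
        elapsed = time.time() - start_time
        return elapsed >= GAME_TIME_THRESHOLD and not has_correct_answer

    # Example: a round that started 100 s ago with no correct guess yet.
    print(should_publish_answer(time.time() - 100, has_correct_answer=False))  # True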
In the embodiment of the present disclosure, the live broadcast interaction method may further include: and displaying feedback text information corresponding to the response information on the live broadcast interface, wherein the feedback text information is used for representing the answer result of the response information, and the feedback text information is determined according to the comparison result of the response information and the correct answer.
The feedback text information refers to text information determined by the server according to the comparison result between the answer information and the correct answer, and it can indicate whether the answer is correct. If the server determines that the answer information is the same as the correct answer, it can determine that the answer is correct and the feedback text information indicates a correct answer; otherwise the feedback text information indicates a wrong answer. After receiving the feedback text information, the terminal may display it on the live interface.
The comment information may include answer information for the drawing trace diagram, and the reply audio data may be feedback audio data determined based on that answer information. The terminal may receive the feedback audio data, generate, based on it, interactive content in which the virtual object gives feedback on the answer information, and play the interactive content in the live interface. For example, users may see interactive content in which the virtual object says "you have all answered correctly".
Fig. 6 is a schematic diagram of a fifth live interaction provided by an embodiment of the present disclosure. As shown in Fig. 6, feedback text information, such as the "answered correctly" message in the figure, may be displayed in a feedback area 18 of the live interface, and a corresponding special effect may further accompany the feedback text information; here, the answer "chilli sauce" given by user C is correct.
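For illustration only, the following Python sketch determines feedback text from the comparison of a viewer's answer with the correct answer, as described above; the feedback wording is an assumption.

    def feedback_text(answer: str, correct: str) -> str:
        # Compare the answer information with the correct answer after light normalization.
        same = answer.strip().lower() == correct.strip().lower()
        return "Answered correctly!" if same else "Wrong answer, keep guessing!"

    print(feedback_text("Chilli sauce", "chilli sauce"))   # Answered correctly!
    print(feedback_text("ketchup", "chilli sauce"))        # Wrong answer, keep guessing!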
When the answer result is correct, the server side can send the answer reward to the corresponding user and return the answer reward information to the user side, and the user side displays the answer reward information to the user. The answering reward can comprise rewards in various modes, such as points, virtual articles or virtual currency and the like, without limitation. As shown in fig. 6, for users with correct answers, the feedback area 18 of the live interface also shows the answer reward information, such as the virtual item "+ 100" and the point "+ 50" in the figure.
Optionally, when multiple users answer correctly, the server may rank them by the time of their answers and send an additional answer reward to the terminals of a preset number of top-ranked users, which then display answer reward information including the additional answer reward. The additional rewards may likewise include, without limitation, rewards of various kinds. The preset number can be set according to the actual situation; for example, the server may grant double points as an additional answer reward to the top 5 ranked users.
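For illustration only, a non-limiting Python sketch of the reward assignment described above follows: every correct answer gets a base reward, and the earliest N correct answers get double points. N = 5 follows the example in the text; the amounts and data shapes are assumptions.

    BASE_POINTS, BASE_ITEMS, TOP_N = 50, 100, 5

    def assign_rewards(correct_answers: list) -> dict:
        # correct_answers: (user_id, answer_time) pairs for correct answers only.
        ranked = sorted(correct_answers, key=lambda pair: pair[1])   # earliest first
        rewards = {}
        for rank, (user, _) in enumerate(ranked):
            points = BASE_POINTS * (2 if rank < TOP_N else 1)        # top-N get double points
            rewards[user] = {"points": points, "virtual_items": BASE_ITEMS}
        return rewards

    print(assign_rewards([("user C", 12.3), ("user D", 15.0)]))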
In the embodiments of the present disclosure, the virtual object can draw in a live broadcast, the live interface presents the drawing trace diagram drawn in real time, live viewers can enter answer information for the drawing trace diagram, and a reward can be received if the answer is correct, thereby realizing a "you draw, I guess" interactive game in which users deeply participate together with the virtual object. It can be understood that the virtual object's live drawing is not affected by the answer information entered by users: even if a user's answer is correct, the live drawing does not stop, so that other users who answered incorrectly or have not yet answered can continue guessing.
The live broadcast interaction scheme provided by the embodiments of the present disclosure plays live broadcast content corresponding to a drawing topic, displays a drawing board in a first area of a live broadcast interface in response to a first drawing action of a virtual object, and presents on the drawing board a drawing trace diagram corresponding to the first drawing action; displays a plurality of pieces of comment information from live viewers in a second area of the live interface; and plays, in the live interface, interactive content of the virtual object directed at target comment information while hiding the drawing board, the target comment information being one or more pieces of the comment information. With this technical solution, users can guess the topic from the drawing trace diagram drawn live by the virtual object, and the virtual object replies according to the comment information entered by users, so that a "you draw, I guess" interactive game is realized between the virtual object and users, which improves the diversity and interest of the virtual object's live drawing and further improves the user's interactive experience.
In some embodiments, the live interaction method may further include: acquiring interactive video data of a virtual object; and playing the interactive live broadcast content of the virtual object and the live broadcast audience in the live broadcast interface based on the interactive video data.
The interactive video data can be video data for a virtual object to actively initiate a topic and perform interaction with live audiences, and the interactive video data can correspond to texts of a plurality of topics. Optionally, if the server does not receive comment information sent by the live audience within a set time, the server may send interactive video data of the virtual object to the terminal, and the terminal may generate interactive live content based on the interactive video data and play the interactive live content in which the virtual object interacts with the live audience in a live interface. The setting time may be set according to actual conditions, and for example, the setting time may be 5 seconds.
For example, if the server does not receive any comment information from live viewers within 5 seconds, it may send interactive video data to the terminal, so that the terminal plays interactive live content in which the virtual object delivers a preset script, for example the virtual object saying "why is nobody talking to me?", "this is getting boring", and the like.
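For illustration only, the following Python sketch checks whether the set idle time (5 seconds in the example above) has elapsed without a viewer comment and, if so, returns a preset script line for the virtual object to deliver; the script lines themselves are assumptions.

    import time

    IDLE_SECONDS = 5.0
    PRESET_LINES = ["Why is nobody talking to me?", "It is getting a bit quiet here..."]

    def maybe_play_idle_script(last_comment_time: float, line_index: int = 0):
        # Returns the preset line to play, or None if viewers are still active.
        if time.time() - last_comment_time >= IDLE_SECONDS:
            return PRESET_LINES[line_index % len(PRESET_LINES)]
        return None

    print(maybe_play_idle_script(time.time() - 6))   # idle -> a preset script line
    print(maybe_play_idle_script(time.time()))       # active -> None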
In this scheme, on top of the "you draw, I guess" game interaction between the virtual object and users, when viewers of the virtual object's live broadcast are temporarily not interacting, the virtual object actively interacts with them by means of preset scripts, which mobilizes users' enthusiasm for interaction and improves the interaction effect of the virtual object's live broadcast. Moreover, the live broadcast can switch from the game segment to the interactive chat segment between the virtual object and users, and different segments can switch between each other, which makes the interaction between users and the virtual object more interesting and improves the user's interactive experience.
In some embodiments, the live interaction method may further include: displaying an interactive panel in a fourth area of the live interface, wherein the interactive panel comprises at least one virtual resource; and displaying the corresponding special effect on the virtual object based on the triggering of the user on the virtual resource, and displaying the duration time identification of the special effect in a fifth area of the live broadcast interface.
The virtual resources may be displayed in the interactive panel in a manner of image identifiers with a preset shape, for example, the virtual resources displayed in the interactive panel may be identifiers with different special effects. The interactive panel supports touch operation of a user, such as clicking, long pressing and the like, so that the user generates a trigger operation for at least one virtual resource. The virtual resource may correspond to a special effect of the virtual object, and the special effect may include at least one of: clothing transformation special effects, dressing transformation special effects, body transformation special effects and scene prop transformation special effects. The duration identification of the special effect is used for prompting the user of the remaining display time of the special effect displayed on the current live broadcast interface. The duration identification can be realized in the form of image identification or text and the like.
An interactive panel comprising a plurality of virtual resources can be displayed in a fourth area of the live interface, corresponding special effect effects can be displayed on the virtual objects based on the triggering of the virtual resources by the user, and duration time marks corresponding to the current special effect are displayed in a fifth area of the live interface so as to prompt the user of the residual display time of the special effect.
Fig. 7 is a schematic diagram of a sixth live interaction provided by the embodiment of the present disclosure, in which the interaction panel 19 shows 7 virtual resources, including freezing, snack feeding, cat ear, beverage drinking, blowing, changing, and sweet food feeding, by way of example only. As shown in fig. 7, after the user triggers the cat ear in the virtual resource and the special effect corresponding to the cat ear is shown on the virtual object, the user can see that the cat ear grows on the head of the virtual object. And the figure also shows the duration identification 20 corresponding to the special effect of the cat ear, in the live broadcasting process of the virtual object, the circular area on the duration identification 20 can be continuously and clockwise filled, after the preset display duration of the special effect of the cat ear is reached, the circular area is in a completely filled state, and the duration identification 20 can disappear from the live broadcasting interface.
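For illustration only, a minimal Python sketch of tracking a triggered special effect and the remaining display time behind the duration identifier follows; the 30-second duration is an assumed example value.

    import time
    from dataclasses import dataclass

    @dataclass
    class ActiveEffect:
        name: str
        started_at: float
        duration: float            # preset display duration in seconds

        def remaining(self) -> float:
            return max(0.0, self.duration - (time.time() - self.started_at))

        def fill_ratio(self) -> float:
            # Drives the clockwise fill of the circular duration identifier:
            # 0.0 when just triggered, 1.0 when the effect should disappear.
            return 1.0 - self.remaining() / self.duration

    effect = ActiveEffect("cat ear", started_at=time.time() - 12, duration=30.0)
    print(round(effect.remaining(), 1), round(effect.fill_ratio(), 2))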
In this scheme, the virtual object can display a specific special effect based on the user's trigger during the live broadcast, which enriches the live interaction modes in a virtual live broadcast scene and makes the live broadcast more interesting; by displaying the duration identifier, the user can learn the remaining display time of the special effect, which improves the friendliness of the live interaction. A brief code sketch of this flow follows.
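The following sketch illustrates, under assumed type and function names, how a client could apply a special effect when a virtual resource in the interactive panel is triggered and drive the clockwise-filling duration identifier of Fig. 7. It is only an illustration of the behavior described above, not code from the disclosure.

```typescript
// Hypothetical sketch of the interactive-panel flow: trigger a virtual
// resource, show the special effect on the virtual object, and fill the
// circular duration identifier clockwise until the preset duration ends.

interface VirtualResource {
  id: string;         // e.g. "cat-ears"
  durationMs: number; // preset display duration of the special effect
}

interface EffectView {
  showEffect(resourceId: string): void;         // render the effect on the virtual object
  updateDurationRing(fillRatio: number): void;  // clockwise fill, 0..1
  removeEffect(resourceId: string): void;       // hide effect and duration identifier
}

function triggerVirtualResource(resource: VirtualResource, view: EffectView): void {
  view.showEffect(resource.id);
  const startedAt = Date.now();
  const timer = setInterval(() => {
    const ratio = Math.min((Date.now() - startedAt) / resource.durationMs, 1);
    view.updateDurationRing(ratio);   // circular area fills clockwise
    if (ratio >= 1) {
      clearInterval(timer);           // preset duration reached: identifier disappears
      view.removeEffect(resource.id);
    }
  }, 100);
}
```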
Fig. 8 is a schematic structural diagram of a live broadcast interaction apparatus provided in an embodiment of the present disclosure, where the apparatus may be implemented by software and/or hardware, and may be generally integrated in an electronic device. As shown in fig. 8, the apparatus includes:
the drawing live broadcast module 301 is configured to play live broadcast content corresponding to a drawing topic, respond to a first drawing action of a virtual object, display a drawing board in a first area of a live broadcast interface, and present a drawing trace diagram corresponding to the first drawing action on the drawing board;
a comment displaying module 302, configured to display a plurality of comment information of a live viewer in a second area of the live interface;
a reply live broadcast module 303, configured to play, in the live broadcast interface, the interactive content of the virtual object for the target comment information, and hide the drawing board; wherein the target comment information is one or more of the comment information.
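For readability, the following sketch restates the three modules of Fig. 8 as TypeScript interfaces; the interface and method names are illustrative assumptions, not the apparatus's actual API.

```typescript
// Illustrative outline of the apparatus of Fig. 8.

interface DrawingLiveModule {                 // module 301
  playTopicContent(topicId: string): void;    // live content for a drawing topic
  showDrawingBoard(): void;                   // first area of the live interface
  renderDrawingTrace(strokes: Array<[number, number]>): void;
}

interface CommentDisplayModule {              // module 302
  showComments(comments: string[]): void;     // second area of the live interface
}

interface ReplyLiveModule {                   // module 303
  playInteractiveReply(targetComment: string): void;
  hideDrawingBoard(): void;                   // board is hidden while replying
}
```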
Optionally, the reply live module 303 is specifically configured to:
receiving reply audio data and reply text information corresponding to the target comment information;
displaying the target comment information and the reply text information in a third area of the live broadcast interface;
and generating interactive content of the virtual object aiming at the target comment information based on the reply audio data and playing the interactive content in the live broadcast interface.
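A minimal sketch of the reply flow just described, with assumed types: reply audio data and reply text for the selected target comment are received, the text is shown in the third area, and the audio drives the virtual object's interactive content.

```typescript
// Hypothetical reply-flow sketch for module 303.

interface ReplyPayload {
  targetComment: string;    // the comment the virtual object replies to
  replyText: string;        // reply text shown with the comment
  replyAudio: ArrayBuffer;  // reply audio used to generate the interactive content
}

interface LiveInterface {
  showInThirdArea(comment: string, replyText: string): void;
  playVirtualObjectSpeech(audio: ArrayBuffer): void;  // e.g. lip-synced playback
}

function handleReply(payload: ReplyPayload, ui: LiveInterface): void {
  ui.showInThirdArea(payload.targetComment, payload.replyText);  // third area of the interface
  ui.playVirtualObjectSpeech(payload.replyAudio);                // play in the live interface
}
```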
Optionally, the apparatus further includes a drawing update module, configured to:
and responding to a second drawing action of the virtual object, displaying the drawing track diagram on the drawing board, and updating the drawing track diagram according to the second drawing action.
Optionally, the comment information includes answer information for the drawing trace diagram, and the second drawing action is triggered when no correct answer corresponding to the drawing topic exists in the answer information.
Optionally, the reply live module 303 is specifically configured to:
receiving audio data corresponding to the correct answer, wherein the correct answer is received when the game time of the virtual object reaches a preset time threshold;
and playing interactive live content, generated based on the audio data, in which the virtual object announces the correct answer.
Optionally, the apparatus further comprises a feedback module, configured to:
and displaying, on the live broadcast interface, feedback text information corresponding to the answer information, where the feedback text information represents the answer result of the answer information and is determined according to a comparison between the answer information and the correct answer. A short example of this comparison is sketched below.
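As a concrete illustration of the answer checking and feedback above, the sketch below compares answers from the comment stream with the correct answer of the drawing topic and produces feedback text; the normalization and the feedback wording are assumptions made for the example.

```typescript
// Hypothetical sketch of answer checking and feedback text generation.

const normalize = (s: string): string => s.trim().toLowerCase();

function feedbackFor(answer: string, correctAnswer: string): string {
  // Feedback text represents the answer result of the answer information.
  return normalize(answer) === normalize(correctAnswer)
    ? `"${answer}" is correct!`
    : `"${answer}" is not right, keep guessing`;
}

function hasCorrectAnswer(answers: string[], correctAnswer: string): boolean {
  // If this returns false, the second drawing action can be triggered so the
  // virtual object keeps extending the drawing trace.
  return answers.some(a => normalize(a) === normalize(correctAnswer));
}

// Example usage
console.log(feedbackFor("cat", "Cat"));                 // correct answer
console.log(hasCorrectAnswer(["dog", "tiger"], "cat")); // false -> trigger second drawing action
```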
Optionally, the drawing live broadcasting module 301 is specifically configured to:
acquiring video data and audio data of at least one drawing topic of the virtual object, wherein the video data comprises action image data corresponding to the drawing topic and scene image data of a plurality of scenes, and the action image data is used for generating the first drawing action, the drawing track graph and the interaction action;
and generating, based on the video data and the audio data, live broadcast content corresponding to the drawing topic and playing the live broadcast content.
Optionally, in the process of playing the live content corresponding to the drawing topic, based on the motion image data and the scene image data of the multiple scenes, the scene of the live content and the action of the virtual object are switched as the live content changes. A sketch of this composition is given below.
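The sketch below illustrates one way the motion image data and multi-scene image data described above could be combined into live content, with the scene switching as playback progresses; the data layout (frame identifiers and switch points) is an assumption made for the example, not the disclosure's actual data format.

```typescript
// Hypothetical sketch: assemble live content frames for one drawing topic
// from motion image data and scene image data, switching scenes at
// predefined points as the content plays.

interface TopicMedia {
  motionFrames: string[];       // action image data (frame ids) for the virtual object
  scenes: string[];             // scene image data for several scenes
  sceneSwitchPoints: number[];  // frame indices where the scene changes
}

function* composeLiveFrames(media: TopicMedia): Generator<{ scene: string; motion: string }> {
  let sceneIndex = 0;
  for (let i = 0; i < media.motionFrames.length; i++) {
    // Switch to the next scene when playback reaches the next switch point.
    if (sceneIndex + 1 < media.scenes.length && i >= media.sceneSwitchPoints[sceneIndex]) {
      sceneIndex++;
    }
    yield { scene: media.scenes[sceneIndex], motion: media.motionFrames[i] };
  }
}

// Example: 4 motion frames, scene changes at frame index 2.
const media: TopicMedia = {
  motionFrames: ["m0", "m1", "m2", "m3"],
  scenes: ["studio", "garden"],
  sceneSwitchPoints: [2],
};
console.log(Array.from(composeLiveFrames(media)).map(f => f.scene));
// ["studio", "studio", "garden", "garden"]
```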
Optionally, the apparatus further includes an interactive live broadcast module, configured to:
acquiring interactive video data of the virtual object;
and playing the interactive live broadcast content of the virtual object and the live broadcast audience in the live broadcast interface based on the interactive video data.
Optionally, the apparatus further includes a special effects module, configured to:
displaying an interactive panel in a fourth area of the live broadcast interface, wherein the interactive panel comprises at least one virtual resource;
and displaying a corresponding special effect on the virtual object based on the triggering of the virtual resource by the user, and displaying a duration time identifier of the special effect in a fifth area of the live broadcast interface.
The live broadcast interaction apparatus provided by the embodiments of the present disclosure can execute the live broadcast interaction method provided by any embodiment of the present disclosure, and has functional modules and beneficial effects corresponding to the executed method.
Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. Referring specifically to fig. 9, a schematic diagram of an electronic device 400 suitable for use in implementing embodiments of the present disclosure is shown. The electronic device 400 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle-mounted terminal (e.g., a car navigation terminal), and the like, and fixed terminals such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 9 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 9, the electronic device 400 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 401 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)402 or a program loaded from a storage means 408 into a Random Access Memory (RAM) 403. In the RAM 403, various programs and data necessary for the operation of the electronic apparatus 400 are also stored. The processing device 401, the ROM 402, and the RAM 403 are connected to each other via a bus 404. An input/output (I/O) interface 405 is also connected to bus 404.
Generally, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 407 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 408 including, for example, tape, hard disk, etc.; and a communication device 409. The communication means 409 may allow the electronic device 400 to communicate wirelessly or by wire with other devices to exchange data. While fig. 9 illustrates an electronic device 400 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 409, or from the storage device 408, or from the ROM 402. The computer program performs the above-mentioned functions defined in the live interaction method of the embodiments of the present disclosure when executed by the processing device 401.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with any form or medium of digital data communication (e.g., a communications network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: play live broadcast content corresponding to a drawing topic, respond to a first drawing action of a virtual object, display a drawing board in a first area of a live broadcast interface, and present a drawing track graph corresponding to the first drawing action on the drawing board; display a plurality of comment information of a live audience in a second area of the live interface; play the interactive content of the virtual object for the target comment information in the live broadcast interface, and hide the drawing board; wherein the target comment information is one or more of the comment information.
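The sketch below strings these steps together into one round of the method that the stored program performs; the LiveClient interface and all method names are hypothetical stand-ins for the real client-side APIs and are provided only to make the sequence concrete.

```typescript
// Hypothetical end-to-end sketch of one round of the live interaction method.

interface LiveClient {
  playTopicContent(topicId: string): Promise<void>;          // live content for the drawing topic
  showDrawingBoard(): void;                                   // first area
  renderDrawingTrace(strokes: Array<[number, number]>): void; // drawing track graph
  showComments(comments: string[]): void;                     // second area
  playInteractiveReply(targetComment: string): Promise<void>;
  hideDrawingBoard(): void;                                   // board hidden during the reply
}

async function runDrawAndGuessRound(
  client: LiveClient,
  topicId: string,
  firstDrawingStrokes: Array<[number, number]>,
  comments: string[],
  targetComment: string,
): Promise<void> {
  await client.playTopicContent(topicId);           // play live content for the drawing topic
  client.showDrawingBoard();                        // in response to the first drawing action
  client.renderDrawingTrace(firstDrawingStrokes);   // present the drawing trace on the board
  client.showComments(comments);                    // audience comments in the second area
  await client.playInteractiveReply(targetComment); // virtual object replies to the target comment
  client.hideDrawingBoard();
}
```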
Computer program code for carrying out operations for the present disclosure may be written in any combination of one or more programming languages, including but not limited to an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. In some cases, the name of a unit does not constitute a limitation on the unit itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, the present disclosure provides a live broadcast interaction method, including:
playing live broadcast content corresponding to a drawing topic, responding to a first drawing action of a virtual object, displaying a drawing board in a first area of a live broadcast interface, and presenting a drawing track graph corresponding to the first drawing action on the drawing board;
displaying a plurality of comment information of a live audience in a second area of the live interface;
playing the interactive content of the virtual object aiming at the target comment information in the live broadcast interface, and hiding the drawing board; wherein the target comment information is one or more of the comment information.
According to one or more embodiments of the present disclosure, in a live broadcast interaction method provided by the present disclosure, the playing of the interactive content of the virtual object for the target comment information in the live broadcast interface includes:
receiving reply audio data and reply text information corresponding to the target comment information;
displaying the target comment information and the reply text information in a third area of the live broadcast interface;
and generating interactive content of the virtual object aiming at the target comment information based on the reply audio data and playing the interactive content in the live broadcast interface.
According to one or more embodiments of the present disclosure, in the live broadcast interaction method provided by the present disclosure, the method further includes:
and responding to a second drawing action of the virtual object, displaying the drawing track diagram on the drawing board, and updating the drawing track diagram according to the second drawing action.
According to one or more embodiments of the present disclosure, in the live broadcast interaction method provided by the present disclosure, the comment information includes answer information for the drawing track graph, and the second drawing action is triggered when no correct answer corresponding to the drawing topic exists in the answer information.
According to one or more embodiments of the present disclosure, in a live broadcast interaction method provided by the present disclosure, the playing of the interactive content of the virtual object for the target comment information in the live broadcast interface includes:
receiving audio data corresponding to the correct answer, wherein the correct answer is received when the game time of the virtual object reaches a preset time threshold;
and playing interactive live content, generated based on the audio data, in which the virtual object announces the correct answer.
According to one or more embodiments of the present disclosure, in the live broadcast interaction method provided by the present disclosure, the method further includes:
and displaying, on the live broadcast interface, feedback text information corresponding to the answer information, where the feedback text information represents the answer result of the answer information and is determined according to a comparison between the answer information and the correct answer.
According to one or more embodiments of the present disclosure, in a live broadcast interaction method provided by the present disclosure, the playing of live broadcast content corresponding to a drawing topic includes:
acquiring video data and audio data of at least one drawing topic of the virtual object, wherein the video data comprises action image data corresponding to the drawing topic and scene image data of a plurality of scenes, and the action image data is used for generating the first drawing action, the drawing track graph and the interaction action;
and generating, based on the video data and the audio data, live broadcast content corresponding to the drawing topic and playing the live broadcast content.
According to one or more embodiments of the present disclosure, in a live broadcast interaction method provided by the present disclosure, in the process of playing the live broadcast content corresponding to the drawing topic, based on the motion image data and the scene image data of the multiple scenes, the scene of the live broadcast content and the action of the virtual object are switched as the live broadcast content changes.
According to one or more embodiments of the present disclosure, in the live broadcast interaction method provided by the present disclosure, the method further includes:
acquiring interactive video data of the virtual object;
and playing the interactive live broadcast content of the virtual object and the live broadcast audience in the live broadcast interface based on the interactive video data.
According to one or more embodiments of the present disclosure, in the live broadcast interaction method provided by the present disclosure, the method further includes:
displaying an interactive panel in a fourth area of the live broadcast interface, wherein the interactive panel comprises at least one virtual resource;
and displaying a corresponding special effect on the virtual object based on the triggering of the virtual resource by the user, and displaying a duration time identifier of the special effect in a fifth area of the live broadcast interface.
According to one or more embodiments of the present disclosure, there is provided a live interaction apparatus, including:
the drawing live broadcasting module is used for playing live broadcasting content corresponding to a drawing topic, responding to a first drawing action of a virtual object, displaying a drawing board in a first area of a live broadcasting interface, and presenting a drawing track graph corresponding to the first drawing action on the drawing board;
the comment display module is used for displaying a plurality of comment information of the live audience in a second area of the live interface;
the reply live broadcast module is used for playing the interactive content of the virtual object aiming at the target comment information in the live broadcast interface and hiding the drawing board; wherein the target comment information is one or more of the comment information.
According to one or more embodiments of the present disclosure, in a live broadcast interaction apparatus provided by the present disclosure, the reply live broadcast module is specifically configured to:
receiving reply audio data and reply text information corresponding to the target comment information;
displaying the target comment information and the reply text information in a third area of the live broadcast interface;
and generating interactive content of the virtual object aiming at the target comment information based on the reply audio data and playing the interactive content in the live broadcast interface.
According to one or more embodiments of the present disclosure, in a live interactive apparatus provided by the present disclosure, the apparatus further includes a drawing update module, configured to:
and responding to a second drawing action of the virtual object, displaying the drawing track diagram on the drawing board, and updating the drawing track diagram according to the second drawing action.
According to one or more embodiments of the present disclosure, in the live broadcast interaction device provided by the present disclosure, the comment information includes answer information for the drawing track graph, and the second drawing action is triggered when no correct answer corresponding to the drawing topic exists in the answer information.
According to one or more embodiments of the present disclosure, in a live broadcast interaction apparatus provided by the present disclosure, the reply live broadcast module is specifically configured to:
receiving audio data corresponding to the correct answer, wherein the correct answer is received when the game time of the virtual object reaches a preset time threshold;
and playing interactive live content, generated based on the audio data, in which the virtual object announces the correct answer.
According to one or more embodiments of the present disclosure, in a live interactive apparatus provided by the present disclosure, the apparatus further includes a feedback module, configured to:
and displaying, on the live broadcast interface, feedback text information corresponding to the answer information, where the feedback text information represents the answer result of the answer information and is determined according to a comparison between the answer information and the correct answer.
According to one or more embodiments of the present disclosure, in a live broadcast interaction apparatus provided by the present disclosure, the drawing live broadcast module is specifically configured to:
acquiring video data and audio data of at least one drawing topic of the virtual object, wherein the video data comprises action image data corresponding to the drawing topic and scene image data of a plurality of scenes, and the action image data is used for generating the first drawing action, the drawing track graph and the interaction action;
and generating, based on the video data and the audio data, live broadcast content corresponding to the drawing topic and playing the live broadcast content.
According to one or more embodiments of the present disclosure, in a live broadcast interaction apparatus provided by the present disclosure, in the process of playing the live broadcast content corresponding to the drawing topic, based on the motion image data and the scene image data of the plurality of scenes, the scene of the live broadcast content and the action of the virtual object are switched as the live broadcast content changes.
According to one or more embodiments of the present disclosure, in a live broadcast interaction apparatus provided by the present disclosure, the apparatus further includes an interactive live broadcast module, configured to:
acquiring interactive video data of the virtual object;
and playing the interactive live broadcast content of the virtual object and the live broadcast audience in the live broadcast interface based on the interactive video data.
According to one or more embodiments of the present disclosure, in a live broadcast interaction apparatus provided by the present disclosure, the apparatus further includes a special effect module, configured to:
displaying an interactive panel in a fourth area of the live broadcast interface, wherein the interactive panel comprises at least one virtual resource;
and displaying a corresponding special effect on the virtual object based on the triggering of the virtual resource by the user, and displaying a duration time identifier of the special effect in a fifth area of the live broadcast interface.
In accordance with one or more embodiments of the present disclosure, there is provided an electronic device including:
a processor;
a memory for storing the processor-executable instructions;
the processor is used for reading the executable instructions from the memory and executing the instructions to realize any live interaction method provided by the present disclosure.
According to one or more embodiments of the present disclosure, there is provided a computer-readable storage medium storing a computer program for executing the live interaction method as any one of provided by the present disclosure.
The foregoing description is only illustrative of the preferred embodiments of the present disclosure and of the principles of the technology employed. Those skilled in the art will appreciate that the scope of the disclosure is not limited to technical solutions formed by the particular combination of the above features, and also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, technical solutions formed by replacing the above features with (but not limited to) features having similar functions disclosed in the present disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (13)

1. A live interaction method is characterized by comprising the following steps:
playing live broadcast content corresponding to a drawing topic, responding to a first drawing action of a virtual object, displaying a drawing board in a first area of a live broadcast interface, and presenting a drawing track graph corresponding to the first drawing action on the drawing board;
displaying a plurality of comment information of a live audience in a second area of the live interface;
playing the interactive content of the virtual object aiming at the target comment information in the live broadcast interface, and hiding the drawing board; wherein the target comment information is one or more of the comment information.
2. The method of claim 1, wherein the playing of the interactive content of the virtual object for the target comment information in the live interface comprises:
receiving reply audio data and reply text information corresponding to the target comment information;
displaying the target comment information and the reply text information in a third area of the live broadcast interface;
and generating interactive content of the virtual object aiming at the target comment information based on the reply audio data and playing the interactive content in the live broadcast interface.
3. The method of claim 1, further comprising:
and responding to a second drawing action of the virtual object, displaying the drawing track diagram on the drawing board, and updating the drawing track diagram according to the second drawing action.
4. The method according to claim 3, wherein answer information for the drawing track diagram is included in the comment information, and the second drawing action is triggered when no correct answer corresponding to the drawing topic exists in the answer information.
5. The method of claim 4, wherein the playing of the interactive content of the virtual object for the target comment information in the live interface comprises:
receiving audio data corresponding to the correct answer, wherein the correct answer is received when the game time of the virtual object reaches a preset time threshold;
and playing interactive live content, generated based on the audio data, in which the virtual object announces the correct answer.
6. The method of claim 4, further comprising:
and displaying, on the live broadcast interface, feedback text information corresponding to the answer information, wherein the feedback text information represents the answer result of the answer information and is determined according to a comparison between the answer information and the correct answer.
7. The method according to claim 1, wherein the playing of the live content corresponding to the drawing topic comprises:
acquiring video data and audio data of at least one drawing topic of the virtual object, wherein the video data comprises action image data corresponding to the drawing topic and scene image data of a plurality of scenes, and the action image data is used for generating the first drawing action, the drawing track graph and the interaction action;
and generating, based on the video data and the audio data, live broadcast content corresponding to the drawing topic and playing the live broadcast content.
8. The method according to claim 7, wherein during playback of the live content corresponding to the drawing topic, based on the motion image data and the scene image data of the plurality of scenes, the scene of the live content and the action of the virtual object are switched as the live content changes.
9. The method of claim 7, further comprising:
acquiring interactive video data of the virtual object;
and playing the interactive live broadcast content of the virtual object and the live broadcast audience in the live broadcast interface based on the interactive video data.
10. The method of claim 1, further comprising:
displaying an interactive panel in a fourth area of the live broadcast interface, wherein the interactive panel comprises at least one virtual resource;
and displaying a corresponding special effect on the virtual object based on the triggering of the virtual resource by the user, and displaying a duration time identifier of the special effect in a fifth area of the live broadcast interface.
11. A live interaction device, comprising:
the drawing live broadcasting module is used for playing live broadcasting content corresponding to a drawing topic, responding to a first drawing action of a virtual object, displaying a drawing board in a first area of a live broadcasting interface, and presenting a drawing track graph corresponding to the first drawing action on the drawing board;
the comment display module is used for displaying a plurality of comment information of the live audience in a second area of the live interface;
the reply live broadcast module is used for playing the interactive content of the virtual object aiming at the target comment information in the live broadcast interface and hiding the drawing board; wherein the target comment information is one or more of the comment information.
12. An electronic device, characterized in that the electronic device comprises:
a processor;
a memory for storing the processor-executable instructions;
the processor is used for reading the executable instructions from the memory and executing the instructions to realize the live interaction method of any one of the claims 1-10.
13. A computer-readable storage medium, characterized in that the storage medium stores a computer program for executing the live interaction method of any of the preceding claims 1-10.
CN202011460121.6A 2020-12-11 2020-12-11 Live broadcast interaction method, device, equipment and medium Pending CN112601100A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011460121.6A CN112601100A (en) 2020-12-11 2020-12-11 Live broadcast interaction method, device, equipment and medium
PCT/CN2021/128072 WO2022121557A1 (en) 2020-12-11 2021-11-02 Live streaming interaction method, apparatus and device, and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011460121.6A CN112601100A (en) 2020-12-11 2020-12-11 Live broadcast interaction method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN112601100A true CN112601100A (en) 2021-04-02

Family

ID=75192624

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011460121.6A Pending CN112601100A (en) 2020-12-11 2020-12-11 Live broadcast interaction method, device, equipment and medium

Country Status (2)

Country Link
CN (1) CN112601100A (en)
WO (1) WO2022121557A1 (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113115061A (en) * 2021-04-07 2021-07-13 北京字跳网络技术有限公司 Live broadcast interaction method and device, electronic equipment and storage medium
CN113504853A (en) * 2021-07-08 2021-10-15 维沃移动通信(杭州)有限公司 Comment generation method and device
CN113507620A (en) * 2021-07-02 2021-10-15 腾讯科技(深圳)有限公司 Live broadcast data processing method, device, equipment and storage medium
CN114125569A (en) * 2022-01-27 2022-03-01 阿里巴巴(中国)有限公司 Live broadcast processing method and device
CN114115528A (en) * 2021-11-02 2022-03-01 深圳市雷鸟网络传媒有限公司 Virtual object control method and device, computer equipment and storage medium
CN114125492A (en) * 2022-01-24 2022-03-01 阿里巴巴(中国)有限公司 Live content generation method and device
WO2022121557A1 (en) * 2020-12-11 2022-06-16 北京字跳网络技术有限公司 Live streaming interaction method, apparatus and device, and medium
CN115314749A (en) * 2022-06-15 2022-11-08 网易(杭州)网络有限公司 Interactive information response method and device and electronic equipment
WO2023134558A1 (en) * 2022-01-14 2023-07-20 北京字跳网络技术有限公司 Interaction method and apparatus, electronic device, storage medium, and program product
CN116527956A (en) * 2023-07-03 2023-08-01 世优(北京)科技有限公司 Virtual object live broadcast method, device and system based on target event triggering
WO2024099451A1 (en) * 2022-11-10 2024-05-16 北京字跳网络技术有限公司 Method and apparatus for online live streaming, and device and storage medium

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115174953B (en) * 2022-07-19 2024-04-26 广州虎牙科技有限公司 Event virtual live broadcast method, system and event live broadcast server
CN115278336B (en) * 2022-07-20 2024-03-29 北京字跳网络技术有限公司 Information processing method and device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106873893A (en) * 2017-02-13 2017-06-20 北京光年无限科技有限公司 For the multi-modal exchange method and device of intelligent robot
CN107423809A (en) * 2017-07-07 2017-12-01 北京光年无限科技有限公司 The multi-modal exchange method of virtual robot and system applied to net cast platform
CN108322832A (en) * 2018-01-22 2018-07-24 广州市动景计算机科技有限公司 Comment on method, apparatus and electronic equipment
CN110557625A (en) * 2019-09-17 2019-12-10 北京达佳互联信息技术有限公司 live virtual image broadcasting method, terminal, computer equipment and storage medium
CN110662083A (en) * 2019-09-30 2020-01-07 北京达佳互联信息技术有限公司 Data processing method and device, electronic equipment and storage medium
CN111010586A (en) * 2019-12-19 2020-04-14 腾讯科技(深圳)有限公司 Live broadcast method, device, equipment and storage medium based on artificial intelligence

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140123014A1 (en) * 2012-11-01 2014-05-01 Inxpo, Inc. Method and system for chat and activity stream capture and playback
CN106878820B (en) * 2016-12-09 2020-10-16 北京小米移动软件有限公司 Live broadcast interaction method and device
CN107750005B (en) * 2017-09-18 2020-10-30 迈吉客科技(北京)有限公司 Virtual interaction method and terminal
CN109271553A (en) * 2018-08-31 2019-01-25 乐蜜有限公司 A kind of virtual image video broadcasting method, device, electronic equipment and storage medium
CN110035325A (en) * 2019-04-19 2019-07-19 广州虎牙信息科技有限公司 Barrage answering method, barrage return mechanism and live streaming equipment
CN110139142A (en) * 2019-05-16 2019-08-16 北京达佳互联信息技术有限公司 Virtual objects display methods, device, terminal and storage medium
CN111277849B (en) * 2020-02-11 2021-10-15 腾讯科技(深圳)有限公司 Image processing method and device, computer equipment and storage medium
CN112601100A (en) * 2020-12-11 2021-04-02 北京字跳网络技术有限公司 Live broadcast interaction method, device, equipment and medium


Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022121557A1 (en) * 2020-12-11 2022-06-16 北京字跳网络技术有限公司 Live streaming interaction method, apparatus and device, and medium
CN113115061B (en) * 2021-04-07 2023-03-10 北京字跳网络技术有限公司 Live broadcast interaction method and device, electronic equipment and storage medium
CN113115061A (en) * 2021-04-07 2021-07-13 北京字跳网络技术有限公司 Live broadcast interaction method and device, electronic equipment and storage medium
WO2022213727A1 (en) * 2021-04-07 2022-10-13 北京字跳网络技术有限公司 Live broadcast interaction method and apparatus, and electronic device and storage medium
CN113507620A (en) * 2021-07-02 2021-10-15 腾讯科技(深圳)有限公司 Live broadcast data processing method, device, equipment and storage medium
CN113504853A (en) * 2021-07-08 2021-10-15 维沃移动通信(杭州)有限公司 Comment generation method and device
CN114115528A (en) * 2021-11-02 2022-03-01 深圳市雷鸟网络传媒有限公司 Virtual object control method and device, computer equipment and storage medium
CN114115528B (en) * 2021-11-02 2024-01-19 深圳市雷鸟网络传媒有限公司 Virtual object control method, device, computer equipment and storage medium
WO2023134558A1 (en) * 2022-01-14 2023-07-20 北京字跳网络技术有限公司 Interaction method and apparatus, electronic device, storage medium, and program product
CN114125492B (en) * 2022-01-24 2022-07-15 阿里巴巴(中国)有限公司 Live content generation method and device
CN114125492A (en) * 2022-01-24 2022-03-01 阿里巴巴(中国)有限公司 Live content generation method and device
CN114125569A (en) * 2022-01-27 2022-03-01 阿里巴巴(中国)有限公司 Live broadcast processing method and device
CN115314749A (en) * 2022-06-15 2022-11-08 网易(杭州)网络有限公司 Interactive information response method and device and electronic equipment
CN115314749B (en) * 2022-06-15 2024-03-22 网易(杭州)网络有限公司 Response method and device of interaction information and electronic equipment
WO2024099451A1 (en) * 2022-11-10 2024-05-16 北京字跳网络技术有限公司 Method and apparatus for online live streaming, and device and storage medium
CN116527956A (en) * 2023-07-03 2023-08-01 世优(北京)科技有限公司 Virtual object live broadcast method, device and system based on target event triggering
CN116527956B (en) * 2023-07-03 2023-08-22 世优(北京)科技有限公司 Virtual object live broadcast method, device and system based on target event triggering

Also Published As

Publication number Publication date
WO2022121557A1 (en) 2022-06-16

Similar Documents

Publication Publication Date Title
CN112601100A (en) Live broadcast interaction method, device, equipment and medium
CN112616063B (en) Live broadcast interaction method, device, equipment and medium
US10834479B2 (en) Interaction method based on multimedia programs and terminal device
US11247134B2 (en) Message push method and apparatus, device, and storage medium
RU2527199C2 (en) Avatar integrated shared media selection
CN110536166B (en) Interactive triggering method, device and equipment of live application program and storage medium
CN110570698A (en) Online teaching control method and device, storage medium and terminal
CN107294837A (en) Engaged in the dialogue interactive method and system using virtual robot
CN112637622A (en) Live broadcasting singing method, device, equipment and medium
WO2004111901A1 (en) Intelligent collaborative media
CN111052107A (en) Topic guidance in conversations
CN111404808B (en) Song processing method
CN113301358A (en) Content providing and displaying method and device, electronic equipment and storage medium
CN114173139B (en) Live broadcast interaction method, system and related device
US20090158171A1 (en) Computer method and system for creating spontaneous icebreaking activities in a shared synchronous online environment using social data
CN112954426B (en) Video playing method, electronic equipment and storage medium
CN110446090A (en) A kind of virtual auditorium spectators bus connection method, system, device and storage medium
KR20130025277A (en) Method and server for providing message service
CN115079876A (en) Interactive method, device, storage medium and computer program product
CN110166351A (en) A kind of exchange method based on instant messaging, device and electronic equipment
CN112820265B (en) Speech synthesis model training method and related device
US11526269B2 (en) Video playing control method and apparatus, device, and storage medium
CN114398135A (en) Interaction method, interaction device, electronic device, storage medium, and program product
CN113301352A (en) Automatic chat during video playback
CN112752159B (en) Interaction method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210402)