CN108521584B - Interactive information processing method, device, anchor side equipment and medium


Info

Publication number
CN108521584B
CN108521584B (application CN201810361423.4A)
Authority
CN
China
Prior art keywords
audio
interactive
current
video
video stream
Prior art date
Legal status
Active
Application number
CN201810361423.4A
Other languages
Chinese (zh)
Other versions
CN108521584A (en)
Inventor
李奇文
Current Assignee
Guangzhou Huya Information Technology Co Ltd
Original Assignee
Guangzhou Huya Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Huya Information Technology Co Ltd filed Critical Guangzhou Huya Information Technology Co Ltd
Priority to CN201810361423.4A priority Critical patent/CN108521584B/en
Publication of CN108521584A publication Critical patent/CN108521584A/en
Application granted granted Critical
Publication of CN108521584B publication Critical patent/CN108521584B/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/4302 Content synchronisation processes, e.g. decoder synchronisation
    • H04N 21/4305 Synchronising client clock from received content stream, e.g. locking decoder clock with encoder clock, extraction of the PCR packets
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/242 Synchronization processes, e.g. processing of PCR [Program Clock References]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/4302 Content synchronisation processes, e.g. decoder synchronisation
    • H04N 21/4307 Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/488 Data services, e.g. news ticker
    • H04N 21/4884 Data services, e.g. news ticker for displaying subtitles
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/85 Assembly of content; Generation of multimedia applications
    • H04N 21/854 Content authoring
    • H04N 21/8547 Content authoring involving timestamps for synchronizing content

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

Embodiments of the invention disclose an interactive information processing method and apparatus, anchor-side equipment, and a medium. The method comprises: in response to a current interactive operation of the anchor, generating current interactive information corresponding to the current interactive operation; inserting the current interactive information into the current audio and video stream to generate an interactive audio and video file; and sending the interactive audio and video file to a video network, so that viewer-side equipment receives the interactive audio and video file through the video network and displays the current interactive information contained in it. The method keeps the interactive information synchronized with the audio and video stream and improves real-time performance.

Description

Interactive information processing method, device, anchor side equipment and medium
Technical Field
Embodiments of the invention relate to internet technologies, and in particular to an interactive information processing method and apparatus, anchor-side equipment, and a medium.
Background
With the rapid development of internet technology, live webcasting has entered the public view as a new technical field: viewers can watch an anchor's performance on their own devices and interact with the anchor in real time.
Currently, when an anchor interacts with viewers in real time, the anchor-side equipment needs to send both an audio and video stream and interaction information to the viewer-side devices. As shown in fig. 1, the audio/video stream is sent to the viewer-side device through a video network, while the interactive information is sent to the viewer-side device through a signaling server. Because the number of nodes, the network load, the bandwidth, and so on may differ between the video network channel and the signaling server channel, the interactive information may arrive at the viewer-side device earlier or later than the corresponding audio/video stream; that is, the timestamp of the audio/video stream easily becomes misaligned with the timestamp of the interactive information. As a result, the audio/video stream and the interactive information are not presented synchronously on the viewer-side device, which the viewer readily perceives.
Disclosure of Invention
Embodiments of the invention provide an interactive information processing method and apparatus, anchor-side equipment, and a medium, so as to keep interactive information synchronized with the audio and video stream and improve real-time performance.
In a first aspect, an embodiment of the present invention provides an interactive information processing method, including:
responding to the current interactive operation of the anchor, and generating current interactive information corresponding to the current interactive operation;
inserting the current interactive information into the current audio and video stream to generate an interactive audio and video file;
and sending the interactive audio and video file to a video network so that the audience side equipment receives the interactive audio and video file through the video network and displays the current interactive information in the interactive audio and video file.
In a second aspect, an embodiment of the present invention further provides another interactive information processing method, including:
receiving an interactive audio and video file from a video network, wherein the interactive audio and video file is generated by the anchor-side equipment inserting current interactive information into the current audio and video stream, and the current interactive information corresponds to a current interactive operation of the anchor;
identifying the current interactive information from the interactive audio and video file;
and displaying the current interaction information.
In a third aspect, an embodiment of the present invention further provides an interactive information processing apparatus, including:
the first generation module is used for responding to the current interactive operation of the anchor and generating current interactive information corresponding to the current interactive operation;
the second generation module is used for inserting the current interactive information into the current audio and video stream to generate an interactive audio and video file;
and the sending module is used for sending the interactive audio and video file to a video network so that the audience side equipment can receive the interactive audio and video file through the video network and display the current interactive information in the interactive audio and video file.
In a fourth aspect, an embodiment of the present invention further provides an anchor side device, including one or more processors;
a memory for storing one or more programs,
when the one or more programs are executed by the one or more processors, the one or more processors implement the interactive information processing method according to any of the embodiments.
In a fifth aspect, the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the interactive information processing method according to any embodiment.
In these embodiments, the current interactive information corresponding to the anchor's current interactive operation is inserted into the current audio and video stream, combining the two into a single interactive audio and video file. The interactive audio and video file is then sent to the video network and delivered through it to the viewer-side equipment, so the current interactive information and the current audio and video stream travel together: there is no need to stamp the current interactive information with a timestamp or to align timestamps. This solves the prior-art problem that the current interactive information and the current audio and video stream are difficult to deliver to the viewer-side equipment synchronously, i.e., that the timestamps become misaligned; it achieves synchronization between the two and improves accuracy and real-time performance. Furthermore, because the current interactive information is carried over the video network channel, no signaling server is needed to transmit it, which reduces the signaling server's overhead.
Drawings
Fig. 1 is a signal flow diagram between a broadcaster-side device and a viewer-side device in the prior art;
fig. 2 is a flowchart of an interactive information processing method according to an embodiment of the present invention;
fig. 3 is a flowchart of an interactive information processing method according to a second embodiment of the present invention;
fig. 4 is a flowchart of an interactive information processing method according to a third embodiment of the present invention;
fig. 5 is a flowchart of an interactive information processing method according to a fourth embodiment of the present invention;
fig. 6a is a signal flow chart between a broadcaster-side device and a viewer-side device according to an embodiment of the present invention;
fig. 6b is a signal flow diagram between a device on the anchor side and a device on the viewer side according to another embodiment of the present invention;
fig. 7 is a schematic structural diagram of an interactive information processing apparatus according to a fifth embodiment of the present invention;
fig. 8 is a schematic structural diagram of an anchor-side device according to a sixth embodiment of the present invention;
fig. 9 is a schematic structural diagram of an interactive information processing apparatus according to a seventh embodiment of the present invention;
fig. 10 is a schematic structural diagram of an audience side device according to an eighth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 2 is a flowchart of an interactive information processing method according to an embodiment of the present invention. This embodiment is applicable to the case where anchor-side equipment sends interactive information to viewer-side equipment. The method may be executed by an interactive information processing apparatus, which may be composed of hardware and/or software and is generally integrated in the anchor-side equipment. The method specifically includes the following steps:
and S110, responding to the current interactive operation of the anchor, and generating current interactive information corresponding to the current interactive operation.
A live broadcast application program is installed on the anchor-side equipment. To facilitate interaction between the anchor and viewers, the application provides various interactive activities, such as guessing activities, lottery activities, and bullet-screen activities. When the anchor wants to interact with the viewers, an interactive activity provided by the live application can be started on the anchor-side equipment, and a series of subsequent operations can be performed on the started activity.
Among the anchor's operations, some need to be correspondingly displayed on the viewer-side devices so that viewers can learn about and join the anchor's interactive activity in time. For example, when the anchor clicks the "bullet-screen activity" control to open a bullet-screen activity, the prompt "the anchor opened a bullet-screen activity" needs to be displayed on the viewer-side devices to attract viewers to participate.
For convenience of description and distinction, an operation of the anchor that needs to be correspondingly presented on the viewer-side device is referred to as an interactive operation. Optionally, a set of interactive operations is pre-stored on the anchor-side equipment; when an operation of the anchor at the current moment matches one of them, current interactive information corresponding to the current interactive operation is generated in response to that operation. For example, the winning list is generated in response to the anchor clicking the "winning list" control; as another example, the prompt "guess activity ended" is generated in response to the anchor clicking the "end guess activity" control.
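For illustration only, the matching from pre-stored interactive operations to generated interactive information can be pictured as a simple lookup table. The following minimal Python sketch shows the idea; the operation keys, the information strings, and the function name generate_current_info are invented for this example rather than taken from the patent.

```python
# Pre-stored interactive operations and the interactive information
# generated for each of them (all entries are illustrative).
INTERACTIVE_OPERATIONS = {
    "open_bullet_screen_activity": "the anchor opened a bullet-screen activity",
    "end_guess_activity": "guess activity ended",
    "show_winning_list": "winning list",
}

def generate_current_info(current_operation: str) -> str | None:
    """Return interactive information when the anchor's current operation
    matches a pre-stored interactive operation; otherwise None, meaning
    nothing needs to be shown on the viewer side."""
    return INTERACTIVE_OPERATIONS.get(current_operation)
```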
S120: inserting the current interactive information into the current audio and video stream to generate an interactive audio and video file.
The current audio and video stream refers to the audio stream and the video stream recorded by the anchor side equipment at the current moment, and the anchor side equipment inserts the current interactive information into the current audio and video stream to generate an interactive audio and video file. Based on this, the interactive audio and video file contains the current audio and video and the current interactive information.
S130: sending the interactive audio and video file to a video network, so that the audience side equipment receives the interactive audio and video file through the video network and displays the current interactive information in it.
After the anchor-side equipment generates the interactive audio and video file, it sends the file to the video network by stream pushing. Stream pushing refers to the process of transmitting the interactive audio/video file to the video network with content-capture software such as a push-stream tool. Optionally, the video network may be a CDN (Content Delivery Network).
The audience side equipment can receive the interactive audio and video file from the video network by stream pulling, identify the current interactive information in the file, and display that information. Stream pulling refers to fetching an interactive audio and video file that exists in the video network by using a specified address.
In this embodiment, the current interactive information corresponding to the anchor's current interactive operation is inserted into the current audio and video stream, combining the two into a single interactive audio and video file. The file is then sent to the video network and delivered through it to the viewer-side equipment; that is, the current interactive information and the current audio and video stream are transmitted over the same video network channel. They therefore arrive at the viewer-side equipment together, without stamping the current interactive information with a timestamp or aligning timestamps, which solves the prior-art technical problem that the two are difficult to deliver synchronously, achieves their synchronization, and improves real-time performance. Furthermore, because the current interactive information travels over the video network channel, no signaling server is needed to transmit it, which reduces the signaling server's overhead.
Example two
This embodiment further optimizes the embodiment above. Fig. 3 is a flowchart of an interactive information processing method provided in the second embodiment of the present invention; as shown in fig. 3, the method includes the following steps:
and S210, responding to the current interactive operation of the anchor, and generating current interactive information corresponding to the current interactive operation.
S220: generating custom metadata corresponding to the current interaction information according to the current interaction information.
The custom metadata describes the current interaction information. In this embodiment, a description rule for the custom metadata can be formulated in advance, and a correspondence between custom metadata and interaction information can be established. The custom metadata is essentially an array element consisting of an element name and a value. The element name indicates the type of interactive information; for example, an activity can be described with the string "activity". The value carries the content of the interactive information; for example, a lottery can be described with the string "lottery". The custom metadata is then the string "activity lottery", and a correspondence is established between this string and the lottery-activity interaction information.
After the current interactive information is generated, it can be looked up in the correspondence between custom metadata and interaction information to obtain the custom metadata corresponding to the current interactive information.
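As a rough sketch of this correspondence (the table layout, the strings, and the helper name custom_metadata_for are assumptions made for the example, not part of the patent), the lookup can be a plain table keyed by interaction information:

```python
# Correspondence between interaction information and custom metadata,
# where each metadata entry is an (element name, value) pair.
METADATA_FOR_INFO = {
    "activity lottery": ("activity", "lottery"),  # illustrative entry
}

def custom_metadata_for(current_info: str) -> tuple[str, str] | None:
    """Look up the custom metadata describing a piece of current
    interaction information, per the pre-formulated description rule."""
    return METADATA_FOR_INFO.get(current_info)
```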
S230: inserting the custom metadata into the current audio and video stream to generate an interactive audio and video file.
The custom metadata can be written into a metadata file of the current audio and video stream, thereby generating the interactive audio and video file.
Alternatively, the custom metadata can be added to a metadata tag to generate a metadata data unit, such as a metadata frame, so that the audio/video stream carries the custom metadata describing the current interactive information.
In one embodiment, S230 may include:
and determining a first audio/video stream according to the generation time point of the custom metadata, wherein the generation time point of the custom metadata is positioned in the generation time period of the first audio/video stream.
The first audio and video stream comprises a plurality of frames of video frames and a plurality of frames of audio frames. It is assumed that the generation periods of the plurality of frames of video frames and the plurality of frames of audio frames are from time a to time B. And the current interactive operation triggered by the anchor is also within the time A and the time B, and the generation time point of the self-defined metadata is within the time A to the time B. The time difference between the time A and the time B is less than or equal to the preset time interval.
When the anchor triggers the current interactive operation, the timestamp of the current video frame can be recorded, and after the custom metadata is generated, the first audio and video stream can be determined according to the timestamp. At this time, a video frame corresponding to the timestamp when the anchor triggers the current interactive operation can be obtained; and determining the video stream section where the video frame is located and the corresponding audio stream section as a first audio and video stream.
The custom metadata is stored in the metadata file corresponding to the first audio and video stream.
After it is determined that the first audio and video stream has been generated, the metadata file and the first audio and video stream are encapsulated to generate the interactive audio and video file. Here the first audio and video stream is the current audio and video stream.
The metadata file may contain various metadata describing different information, such as metadata describing the bit rate, the resolution, or author information. Storing the custom metadata in the metadata file and encapsulating the metadata file together with the current audio and video stream implements the insertion of the current interactive information into the current audio and video stream. For example, the custom metadata is stored in a metadata file, the metadata file and the current audio/video stream are encapsulated in a Flash Video (FLV) container, and the resulting encapsulated file is called the interactive audio/video file.
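To make the FLV example concrete, here is a minimal Python sketch that packs custom metadata into an FLV script-data (onMetaData) Tag. It is a sketch under the assumption that the custom metadata is carried as string name/value pairs in the onMetaData ECMA array; the helper names and the "activity"/"lottery" strings are illustrative, and a production muxer would normally handle this framing.

```python
import struct

def amf0_string(s: str) -> bytes:
    """AMF0 string: marker 0x02, big-endian u16 length, UTF-8 bytes."""
    data = s.encode("utf-8")
    return b"\x02" + struct.pack(">H", len(data)) + data

def amf0_ecma_array(props: dict[str, str]) -> bytes:
    """AMF0 ECMA array: marker 0x08, u32 count, name/value pairs, end marker."""
    body = struct.pack(">I", len(props))
    for name, value in props.items():
        key = name.encode("utf-8")
        body += struct.pack(">H", len(key)) + key + amf0_string(value)
    return b"\x08" + body + b"\x00\x00\x09"  # empty name + object-end marker

def flv_script_tag(props: dict[str, str], timestamp_ms: int = 0) -> bytes:
    """Build one FLV script-data Tag (type 18) carrying onMetaData."""
    data = amf0_string("onMetaData") + amf0_ecma_array(props)
    tag = bytes([0x12])                                    # tag type: script data
    tag += struct.pack(">I", len(data))[1:]                # 3-byte data size
    tag += struct.pack(">I", timestamp_ms & 0xFFFFFF)[1:]  # 3-byte timestamp
    tag += bytes([(timestamp_ms >> 24) & 0xFF])            # extended timestamp
    tag += b"\x00\x00\x00"                                 # stream ID, always 0
    return tag + data + struct.pack(">I", 11 + len(data))  # PreviousTagSize

# e.g. a Tag carrying the custom metadata from the lottery example:
tag = flv_script_tag({"activity": "lottery"})
```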
S240: sending the interactive audio and video file to a video network, so that the audience side equipment receives the interactive audio and video file through the video network and displays the current interactive information in it.
In this embodiment, custom metadata corresponding to the current interactive information is generated from the current interactive information; the custom metadata is stored in the metadata file, and the metadata file and the current audio and video stream are encapsulated to generate the interactive audio and video file. By encapsulating the custom metadata, the current interactive information is merged into the video network channel and stays synchronized with the current audio and video stream, so viewers see the current interactive information while watching the current audio and video, which improves the user experience.
In the embodiment above, the custom metadata is inserted into the current audio/video stream by storing it in the metadata file and encapsulating the metadata file with the current audio/video stream. In other embodiments, a custom field corresponding to the current interactive information is generated from the current interactive information, and the custom field is inserted into the current audio/video stream to generate the interactive audio/video file: the custom field is added to the current video stream and/or the current audio stream, and the current video stream and current audio stream are then encapsulated. The current interactive information may also be divided into two parts, one part generating a custom field and the other generating custom metadata; the custom metadata is stored in a metadata file, the custom field is added to the current video stream and/or the current audio stream, and the metadata file, the current video stream, and the current audio stream are encapsulated to generate the interactive audio/video file.
The following describes in detail a process of adding a custom field to a current audio stream and/or a current video stream, and encapsulating the current audio stream and the current video stream to generate an interactive audio/video file.
Inserting the current interactive information into the current audio and video stream to generate an interactive audio and video file, comprising: generating a custom field corresponding to the current interactive information according to the current interactive information; determining a target video frame and/or a target audio frame according to the generation time point of the custom field, wherein the difference between the custom field and the generation time points of the target video frame and the target audio frame is within a preset time length; adding a custom field in a target video frame and/or a target audio frame; and after determining that the second audio and video stream corresponding to the target video frame and the target audio frame is generated, packaging the second audio and video stream to generate an interactive audio and video file.
Suppose the generation time point of a video frame is time M, that of an audio frame is time N, and that of the custom field is time P. If the difference between time P and time M is within the preset duration, and the difference between time P and time N is also within the preset duration, the video frame is determined to be the target video frame and the audio frame the target audio frame. In this case the target video frame is the current video frame and the target audio frame is the current audio frame.
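The selection rule can be sketched as follows; the Frame record, its field names, and the preset value are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    kind: str            # "video" or "audio"
    generated_at: float  # generation time point, in seconds

PRESET_DURATION = 0.040  # the preset duration; the value is illustrative

def find_target_frames(frames: list[Frame], field_time: float) -> list[Frame]:
    """Return the frames whose generation time points differ from the
    custom field's generation time point (time P) by no more than the
    preset duration; these become the target video/audio frames."""
    return [f for f in frames
            if abs(f.generated_at - field_time) <= PRESET_DURATION]
```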
Optionally, adding the custom field to the target video frame and/or the target audio frame can be done in any of the following three ways:
First embodiment: the custom field is added to the target video frame. In one example, the interactive audio/video file is an FLV file. An FLV file consists of a header and a body; the body is a series of Tags, which include audio Tags, video Tags, and script Tags.
A custom field, such as the string "activity lottery", is added to the video Tag of the target video frame.
Second embodiment: the custom field is added to the target audio frame. In one example, the interactive audio and video file is an FLV file, and the custom field may be added to the audio Tag of the target audio frame.
Third embodiment: when the custom field occupies many bytes, it can be divided into two parts, one added to the target video frame and the other to the target audio frame. For example, the custom field is "activitymemberlottery" and the corresponding interactive information is "active member lottery": "activity" is added to the video Tag of the target video frame, and "memberlottery" is added to the audio Tag of the target audio frame.
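A minimal sketch of this split, assuming the field is cut at a chosen offset (the function name and the split position are illustrative):

```python
def split_custom_field(field: str, split_at: int) -> tuple[str, str]:
    """Split a long custom field: the first part goes into the video Tag
    of the target video frame, the second into the audio Tag of the
    target audio frame."""
    return field[:split_at], field[split_at:]

# Matches the example above:
video_part, audio_part = split_custom_field("activitymemberlottery", len("activity"))
# video_part == "activity", audio_part == "memberlottery"
```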
The second audio and video stream comprises a plurality of frames of audio frames and a plurality of frames of video frames, wherein the target video frame is one of the plurality of frames of video frames, and the target audio frame is one of the plurality of frames of audio frames. And after the second audio and video stream is generated, packaging the multiframe audio frames and the multiframe video frames in the second audio and video stream to generate an interactive audio and video file.
In this embodiment, the custom field is added to the target video frame and/or the target audio frame, and the second audio/video stream corresponding to those frames is encapsulated to generate the interactive audio/video file; the current interactive information is thereby inserted into the current audio/video stream. The interactive audio and video file is sent to the video network and delivered through it to the viewer-side equipment, so the current interactive information and the current audio and video stream travel together, without stamping the current interactive information with a timestamp or aligning timestamps. This solves the prior-art technical problem that the two are difficult to deliver synchronously, achieves their synchronization, and improves real-time performance. In addition, because the current interactive information is also carried over the video network channel, no signaling server is needed to transmit it, which reduces the signaling server's overhead.
Example three
Fig. 4 is a flowchart of an interactive information processing method according to a third embodiment of the present invention. This embodiment is applicable to the case where viewer-side equipment displays interactive information. The method may be executed by an interactive information processing apparatus, which may be composed of hardware and/or software and is generally integrated in the viewer-side equipment. The method specifically includes the following steps:
and S310, receiving an interactive audio and video file from a video network, wherein the interactive audio and video file is generated by inserting current interactive information into current audio and video stream through anchor side equipment, and the current interactive information corresponds to the current interactive operation of an anchor.
The anchor-side equipment responds to the current interactive operation of the anchor and generates the current interactive information corresponding to it. The anchor-side equipment then inserts the current interactive information into the current audio and video stream to generate an interactive audio and video file and sends the file to a video network. Optionally, the video network may be a CDN.
The audience side equipment receives the interactive audio and video files from the video network in a pull stream mode.
S320: identifying the current interactive information from the interactive audio and video file.
The viewer-side equipment reads the current interactive information from the interactive audio and video file. Optionally, the viewer-side equipment stores legal interaction information in advance and matches the read information against it; if the read information matches one of the legal entries, the current interaction information is identified.
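The matching step might look like the following sketch; the legal entries and the function name are invented for the example.

```python
# Legal interaction information pre-stored on the viewer side
# (entries are illustrative).
LEGAL_INTERACTION_INFO = {
    "activity lottery",
    "guess activity ended",
    "winning list",
}

def identify_interactive_info(read_info: str) -> str | None:
    """Accept the information read from the interactive audio/video file
    only if it matches one of the pre-stored legal entries."""
    return read_info if read_info in LEGAL_INTERACTION_INFO else None
```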
S330: displaying the current interaction information.
The audience side equipment displays the current interactive information in the forms of characters, pictures, sound, video and the like so that the audience can perceive the current interactive operation of the anchor.
The viewer-side equipment also demultiplexes the audio and video stream in the interactive audio and video file and plays it so that viewers perceive the live content. In some cases, the anchor's current interactive operation is recorded into the current audio and video stream, so viewers perceive the interactive operation from the live content itself. Displaying the current interactive information at that moment keeps it synchronized with the interactive operation in the live content and makes the viewing experience feel more real.
In this embodiment, the interactive audio and video file is received from the video network, the current interactive information is identified from it, and the information is displayed. The audio and video stream and the current interactive information are thus obtained from the video network at the same time; the current interactive information no longer needs to be obtained from a signaling server, which reduces the signaling server's overhead and saves cost. Meanwhile, because the audio and video stream and the current interactive information are obtained from the same interactive audio and video file, the information stays synchronized with the interactive operation in the live content, which makes the viewing experience feel more real.
Example four
This embodiment further optimizes the embodiments above. Fig. 5 is a flowchart of an interactive information processing method according to a fourth embodiment of the present invention. As shown in fig. 5, the method specifically includes the following steps:
and S410, receiving an interactive audio and video file from a video network, wherein the interactive audio and video file is generated by inserting current interactive information into current audio and video stream through anchor side equipment, and the current interactive information corresponds to the current interactive operation of an anchor.
S420: identifying custom metadata in the metadata file of the interactive audio and video file, where the generation time point of the custom metadata falls within the generation time period of the first audio and video stream in the interactive audio and video file.
The interactive audio and video file encapsulates the current audio and video stream together with metadata, which includes the custom metadata. The generation time point of the custom metadata falls within the generation time period of the first audio/video stream in the file, which means the custom metadata is almost synchronous with the first audio/video stream. When a viewer watches the audio/video stream on the viewer-side equipment, persistence of vision makes the tiny time difference between the custom metadata and the first audio/video stream practically imperceptible, which makes the viewing experience feel more real.
The custom metadata corresponds to, and describes, the current interaction information; legal interaction information likewise corresponds to legal metadata. Based on this, the viewer-side equipment stores legal metadata in advance and matches the custom metadata against it; if the custom metadata matches one of the legal entries, it is identified.
S430: determining the current interaction information corresponding to the custom metadata according to the custom metadata.
The viewer-side equipment stores a correspondence between legal metadata and legal interaction information, and looks up the custom metadata in this correspondence to obtain the current interaction information it corresponds to.
For example, if the viewer-side equipment stores a correspondence between the legal metadata "activity lottery" and the legal interaction information describing a lottery activity, then once the custom metadata "activity lottery" is identified, the corresponding interaction information is obtained.
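For illustration, the viewer-side correspondence can again be a simple table; the keys, values, and helper name are assumptions made for the example.

```python
# Correspondence between legal metadata and legal interaction
# information, pre-stored on the viewer side (entries are illustrative).
LEGAL_METADATA_TO_INFO = {
    "activity lottery": "activity lottery",  # metadata string -> display text
    "activityvote": "voting activity",
}

def info_from_metadata(custom_metadata: str) -> str | None:
    """Match the identified custom metadata against the pre-stored legal
    set and return the corresponding current interaction information."""
    return LEGAL_METADATA_TO_INFO.get(custom_metadata)
```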
S440: displaying the current interactive information.
In this embodiment, the custom metadata in the metadata file is identified from the interactive audio/video file, and the current interactive information corresponding to it is determined from the custom metadata. The current interactive information is thus obtained from the metadata file rather than from a signaling server, which reduces the signaling server's overhead and saves cost; meanwhile, the current interactive information stays synchronized with the interactive operation in the live content, which makes the viewing experience feel more real.
In some embodiments, the custom metadata corresponding to the current interaction information may be stored in a metadata file, and/or a custom field corresponding to the current interaction information may be stored in a target video frame and/or a target audio frame; the viewer-side equipment then identifies the custom metadata and/or the custom field from wherever they are stored: the metadata file, the target video frame, and/or the target audio frame.
In the following, a method for identifying current interaction information from a target video frame and/or a target audio frame of an interactive audio/video file is described in detail.
Identifying current interactive information from the interactive audio and video file, including: acquiring a target video frame and/or a target audio frame from a second audio/video stream of the interactive audio/video file; identifying a custom field in a target video frame and/or a custom field in a target audio frame, wherein the difference between the custom field and the generation time points of the target video frame and the target audio frame is within a preset time length; and determining the current interaction information corresponding to the custom field according to the custom field.
Each video frame and each audio frame in the second audio and video stream is traversed to obtain the target video frame and/or target audio frame to which the custom field was added.
Optionally, identifying the custom field in the target video frame and/or the custom field in the target audio frame covers the following three cases:
First embodiment: the custom field is present only in the target video frame. Specifically, the custom field, for example "activity lottery", is read from the video Tag; if the viewer-side equipment stores the legal field "activity lottery", the read data matches it and the custom field in the target video frame is identified.
Second embodiment: the custom field is present only in the target audio frame. Specifically, the custom field is read from the audio Tag; if it matches a legal field stored on the viewer-side equipment, the custom field in the target audio frame is identified.
Third embodiment: the custom field is present in both the target audio frame and the target video frame. Specifically, one part of the custom field is read from the video Tag and the other part from the audio Tag, and the two parts are combined into the complete custom field, such as "activitymemberlottery". If the complete field matches a legal field stored on the viewer-side equipment, the custom field in the target audio frame and target video frame is identified.
Then, the current interaction information corresponding to the custom field is determined from the custom field. The viewer-side equipment stores a correspondence between legal fields and legal interaction information, for example between "activitymemberlottery" and "active member lottery". The custom field, here "activitymemberlottery", is looked up in this correspondence to obtain the corresponding current interaction information, i.e., "active member lottery".
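The three cases can be folded into one lookup, as in the sketch below; the Tag-reading step is abstracted away, and the field and information strings follow the example above but remain illustrative.

```python
# Correspondence between legal fields and legal interaction information
# pre-stored on the viewer side (the entry is illustrative).
LEGAL_FIELD_TO_INFO = {"activitymemberlottery": "active member lottery"}

def recover_interaction_info(video_tag_field: str,
                             audio_tag_field: str) -> str | None:
    """Try the three cases above: the field carried in the video Tag
    only, in the audio Tag only, or split across both (video part first),
    and return the corresponding current interaction information."""
    for candidate in (video_tag_field, audio_tag_field,
                      video_tag_field + audio_tag_field):
        if candidate in LEGAL_FIELD_TO_INFO:
            return LEGAL_FIELD_TO_INFO[candidate]
    return None

# e.g. recover_interaction_info("activity", "memberlottery")
# returns "active member lottery"
```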
With reference to the foregoing embodiments, fig. 6a is a signal flow chart between the anchor-side equipment and the viewer-side equipment according to an embodiment of the present invention. The viewer-side equipment receives the interactive audio and video file from the video network and displays the current interactive information in it.
In some embodiments, as shown in fig. 6a, the viewer-side equipment generates, in response to a viewer's participation operation, participation interaction information corresponding to that operation and sends it to the service server; the service server processes the participation interaction information and feeds the processing result back to the viewer-side equipment.
The following describes the interaction flow between the anchor-side equipment and the viewer-side equipment in a practical application scenario with reference to fig. 6b.
The anchor tells the viewers via the camera and microphone that she wants to start a voting activity. The camera records the current video, the microphone records the current audio, and the anchor-side equipment encodes them into the current video stream and current audio stream. Meanwhile, the anchor clicks the "vote" control on the anchor's live interface, and the anchor-side equipment generates the custom metadata "activityvote".
The anchor-side equipment stores the custom metadata "activityvote" in a metadata file, encapsulates the metadata file with the current audio stream and current video stream to generate an interactive audio and video file, and pushes the file to the video network.
The viewer-side equipment pulls the interactive audio and video file from the video network, then demultiplexes and decodes it to obtain the current video and current audio; meanwhile, it identifies the metadata in the file and obtains the custom metadata "activityvote". The viewer-side equipment then plays the current video and current audio while displaying the voting text "Option A; Option B" corresponding to "activityvote" on the viewer's live interface. The viewers thus perceive the voting activity from the picture and the anchor's voice in the live room while the voting options appear on the live-room interface at the same time, which feels more real and gives a better experience.
The viewers then vote, for example by selecting "Option A", and each selection is synchronized to the service server. The service server receives the voting options sent by many viewers, tallies them, and feeds back the option with the most votes, for example "Option B", to the viewer-side equipment, which then displays "Option B".
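The tallying on the service server reduces to picking the most frequent option; a minimal sketch (the function name and sample votes are illustrative):

```python
from collections import Counter

def tally_votes(votes: list[str]) -> str:
    """Summarize the voting options received from many viewers and
    return the option with the most votes."""
    return Counter(votes).most_common(1)[0][0]

print(tally_votes(["Option A", "Option B", "Option B"]))  # Option B
```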
Example five
Fig. 7 is a schematic structural diagram of an interactive information processing apparatus according to a fifth embodiment of the present invention, as shown in fig. 7, the apparatus includes: a first generating module 51, a second generating module 52 and a transmitting module 53.
The first generating module 51 is configured to generate, in response to the current interactive operation of the anchor, current interaction information corresponding to the current interactive operation.
The second generating module 52 is configured to insert the current interaction information into the current audio/video stream to generate an interactive audio/video file.
The sending module 53 is configured to send the interactive audio/video file to a video network, so that the viewer-side equipment receives the interactive audio/video file through the video network and displays the current interactive information in it.
In this embodiment, the current interactive information corresponding to the anchor's current interactive operation is inserted into the current audio and video stream, combining the two into a single interactive audio and video file. The file is then sent to the video network and delivered through it to the viewer-side equipment, so the current interactive information and the current audio and video stream travel together, without stamping the current interactive information with a timestamp or aligning timestamps. This solves the prior-art technical problem that the two are difficult to deliver synchronously, achieves their synchronization, and improves real-time performance. In addition, because the current interactive information is also carried over the video network channel, no signaling server is needed to transmit it, which reduces the signaling server's overhead.
In an optional embodiment, when the second generating module 52 inserts the current interaction information into the current audio/video stream to generate the interactive audio/video file, it is specifically configured to: generating custom metadata corresponding to the current interaction information according to the current interaction information; and inserting the custom metadata into the current audio and video stream to generate an interactive audio and video file.
In an optional embodiment, when the second generating module 52 inserts the custom metadata into the current audio/video stream to generate the interactive audio/video file, specifically configured to: generating custom metadata corresponding to the current interactive information according to the current interactive information; determining a first audio/video stream according to the generation time point of the custom metadata, wherein the generation time point of the custom metadata is positioned in the generation time period of the first audio/video stream; storing user-defined metadata in a metadata file corresponding to the first audio and video stream; and after the first audio and video stream is determined to be generated, packaging the metadata file and the first audio and video stream to generate an interactive audio and video file.
In an optional embodiment, when determining the first audio/video stream according to the generation time point of the custom metadata, the second generation module 52 is specifically configured to: acquiring a video frame corresponding to a timestamp when the anchor triggers the current interactive operation; and determining the video stream section where the video frame is located and the corresponding audio stream section as a first audio and video stream.
In an optional embodiment, when the second generating module 52 inserts the current interaction information into the current audio/video stream to generate the interactive audio/video file, it is specifically configured to: generating a custom field corresponding to the current interactive information according to the current interactive information; and inserting the custom field into the current audio and video stream to generate an interactive audio and video file.
In an optional embodiment, when the second generating module 52 inserts the custom field into the current audio/video stream to generate the interactive audio/video file, specifically configured to: generating a custom field corresponding to the current interactive information according to the current interactive information; determining a target video frame and/or a target audio frame according to the generation time point of the custom field, wherein the difference between the custom field and the generation time points of the target video frame and the target audio frame is within a preset time length; adding a custom field in a target video frame and/or a target audio frame; and after determining that the second audio and video stream corresponding to the target video frame and the target audio frame is generated, packaging the second audio and video stream to generate an interactive audio and video file.
The interactive information processing device provided by the embodiment of the invention can execute the interactive information processing method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
Example six
Fig. 8 is a schematic structural diagram of an anchor side apparatus according to a sixth embodiment of the present invention, and as shown in fig. 8, the anchor side apparatus includes a processor 60, a memory 61, an input device 62, and an output device 63; the number of processors 60 in the anchor side device may be one or more, and one processor 60 is taken as an example in fig. 8; the processor 60, the memory 61, the input device 62, and the output device 63 in the anchor-side apparatus may be connected by a bus or other means, and fig. 8 illustrates an example of connection by a bus.
The memory 61 is a computer-readable storage medium, and can be used for storing software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the interactive information processing method in the embodiment of the present invention (for example, the first generating module 51, the second generating module 52, and the sending module 53 in the interactive information processing apparatus). The processor 60 executes various functional applications and data processing of the anchor-side device by running software programs, instructions and modules stored in the memory 61, that is, implements the above-described interactive information processing method.
The memory 61 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal, and the like. Further, the memory 61 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, memory 61 may further include memory located remotely from processor 60, which may be connected to the anchor-side device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 62 may be used to receive input numeric or character information and generate key signal inputs related to the anchor interaction operations and function controls of the anchor-side equipment. The output device 63 may include a display device such as a display screen.
Example seven
Fig. 9 is a schematic structural diagram of an interactive information processing apparatus according to a seventh embodiment of the present invention, where the apparatus includes: a receiving module 71, an identification module 72 and a presentation module 73.
The receiving module 71 is configured to receive an interactive audio/video file from a video network, where the interactive audio/video file is generated by the anchor-side equipment inserting current interactive information into the current audio/video stream, and the current interactive information corresponds to the current interactive operation of the anchor.
The identifying module 72 is configured to identify the current interactive information from the interactive audio/video file.
The display module 73 is configured to display the current interactive information.
In this embodiment, the interactive audio and video file is received from the video network, the current interactive information is identified from it, and the information is displayed. The audio and video stream and the current interactive information are thus obtained from the video network at the same time; the current interactive information no longer needs to be obtained from a signaling server, which reduces the signaling server's overhead and saves cost. Meanwhile, because both are obtained from the same interactive audio and video file, the current interactive information stays synchronized with the interactive operation in the live content, which makes the viewing experience feel more real.
In an optional embodiment, when the identifying module 72 identifies the current interactive information from the interactive audio/video file, it is specifically configured to: identifying custom metadata in a metadata file from an interactive audio and video file, wherein the generation time point of the custom metadata is positioned in the generation time period of a first audio and video stream in the interactive audio and video file; and determining the current interaction information corresponding to the custom metadata according to the custom metadata.
In another optional embodiment, when identifying the current interactive information from the interactive audio/video file, the identifying module 72 is specifically configured to: acquire a target video frame and/or a target audio frame from a second audio/video stream of the interactive audio/video file; identify a custom field in the target video frame and/or the target audio frame, where the difference between the generation time point of the custom field and that of the target video frame or target audio frame is within a preset time length; and determine, according to the custom field, the current interactive information corresponding to the custom field.
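Again purely as a sketch, the per-frame variant might look as follows. The Frame structure and the 0.5-second preset time length are assumptions made for the example, since the disclosure fixes neither the container format nor the tolerance (in a real container the custom field could travel in, for instance, an H.264 SEI message).

from dataclasses import dataclass
from typing import Optional

PRESET_TIME_LENGTH = 0.5  # assumed tolerance in seconds; not specified by the disclosure

@dataclass
class Frame:
    pts: float                    # generation time point of the target frame
    custom_field: Optional[dict]  # e.g. {"t": field_time, "info": {...}}, or None

def identify_from_frames(frames):
    # Scan the target video/audio frames of the second audio/video stream and
    # collect every custom field whose generation time point differs from the
    # frame's generation time point by no more than the preset time length.
    found = []
    for frame in frames:
        field = frame.custom_field
        if field is not None and abs(field["t"] - frame.pts) <= PRESET_TIME_LENGTH:
            found.append(field["info"])
    return found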
The interactive information processing apparatus provided by this embodiment of the present invention can execute the interactive information processing method provided by any embodiment of the present invention, and has the functional modules and beneficial effects corresponding to the executed method.
Example eight
Fig. 10 is a schematic structural diagram of a viewer-side device according to an eighth embodiment of the present invention. As shown in fig. 10, the viewer-side device includes a processor 80, a memory 81, an input device 82, and an output device 83. The number of processors 80 in the viewer-side device may be one or more, and one processor 80 is taken as an example in fig. 10. The processor 80, the memory 81, the input device 82, and the output device 83 in the viewer-side device may be connected by a bus or other means; a bus connection is taken as the example in fig. 10.
As a computer-readable storage medium, the memory 81 is used to store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the interactive information processing method in the embodiments of the present invention (for example, the receiving module 71, the identifying module 72, and the displaying module 73 in the interactive information processing apparatus). By running the software programs, instructions, and modules stored in the memory 81, the processor 80 executes the various functional applications and data processing of the viewer-side device, that is, implements the interactive information processing method described above.
The memory 81 may mainly include a program storage area and a data storage area. The program storage area may store an operating system and an application program required for at least one function; the data storage area may store data created through use of the terminal, and the like. Further, the memory 81 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, the memory 81 may further include memory located remotely from the processor 80, which may be connected to the viewer-side device via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 82 may be used to receive input numeric or character information and to generate key signal inputs related to viewer operations and the function controls of the viewer-side device. The output device 83 may include a display device such as a display screen.
Example nine
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program which, when executed by a computer processor, performs an interactive information processing method, the method including:
in response to a current interactive operation of an anchor, generating current interactive information corresponding to the current interactive operation;
inserting the current interactive information into a current audio/video stream to generate an interactive audio/video file;
and sending the interactive audio/video file to a video network, so that a viewer-side device receives the interactive audio/video file through the video network and displays the current interactive information in the interactive audio/video file.
Of course, the computer program provided by the embodiments of the present invention is not limited to the above method operations, and may also perform related operations in the interactive information processing method provided by any embodiment of the present invention.
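For concreteness, the three steps above could be sketched in Python as follows, using the metadata-file insertion variant and a deliberately toy container format. Here package and push_to_video_network are hypothetical stand-ins, not functions defined by this disclosure.

import json
import time

def on_current_interactive_operation(operation, stream_segment):
    # Step 1: generate current interactive information for the anchor's operation.
    interactive_info = {"t": time.time(), "info": operation}
    # Step 2: insert it into the current audio/video stream by packaging a
    # metadata file together with the stream segment.
    metadata_file = json.dumps([interactive_info]).encode("utf-8")
    interactive_av_file = package(stream_segment, metadata_file)
    # Step 3: send the interactive audio/video file to the video network.
    push_to_video_network(interactive_av_file)

def package(segment, metadata):
    # Toy container: 4-byte big-endian metadata length, then metadata, then the segment.
    return len(metadata).to_bytes(4, "big") + metadata + segment

def push_to_video_network(data):
    pass  # stand-in for a real push (e.g., RTMP or HTTP); deliberately abstract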
Example ten
An embodiment of the present invention also provides a computer-readable storage medium storing a computer program which, when executed by a computer processor, performs an interactive information processing method, the method including:
receiving an interactive audio/video file from a video network, where the interactive audio/video file is generated by an anchor-side device inserting current interactive information into a current audio/video stream, and the current interactive information corresponds to the current interactive operation of an anchor;
identifying the current interactive information from the interactive audio/video file;
and displaying the current interactive information.
Of course, the computer program provided by the embodiments of the present invention is not limited to the above method operations, and may also perform related operations in the interactive information processing method provided by any embodiment of the present invention.
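A matching viewer-side sketch, unpacking the toy container from the anchor-side example above, is shown below. The container format remains an assumption of the example, and display is a placeholder for a real on-screen overlay.

import json

def on_interactive_av_file(data):
    # The interactive audio/video file has been received from the video network.
    # Identify the current interactive information by unpacking the toy container:
    # 4-byte metadata length, then the metadata file, then the audio/video segment.
    meta_len = int.from_bytes(data[:4], "big")
    metadata = json.loads(data[4:4 + meta_len].decode("utf-8"))
    av_segment = data[4 + meta_len:]  # handed to the player for normal decoding
    # Display the current interactive information alongside playback.
    for entry in metadata:
        display(entry["info"])
    return av_segment

def display(info):
    print("[interaction]", info)  # stand-in for a real overlay in the viewer UI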
From the above description of the embodiments, those skilled in the art will clearly understand that the present invention may be implemented by software plus necessary general-purpose hardware, and certainly may also be implemented by hardware alone, although the former is generally the preferred implementation. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as a floppy disk, a read-only memory (ROM), a random access memory (RAM), a flash memory (FLASH), a hard disk, or an optical disk, and which includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the methods of the embodiments of the present invention.
It should be noted that, in the above embodiment of the interactive information processing apparatus, the included units and modules are divided only according to functional logic; the division is not limited thereto, as long as the corresponding functions can be implemented. In addition, the specific names of the functional units are only for ease of distinguishing them from one another and do not limit the protection scope of the present invention.
It is to be noted that the foregoing is merely illustrative of the preferred embodiments of the present invention and of the technical principles employed. Those skilled in the art will understand that the present invention is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements, and substitutions can be made without departing from the scope of the invention. Therefore, although the present invention has been described in some detail through the above embodiments, it is not limited to them and may include other equivalent embodiments without departing from its spirit; the scope of the present invention is determined by the appended claims.

Claims (8)

1. An interactive information processing method, comprising:
in response to a current interactive operation of an anchor, generating current interactive information corresponding to the current interactive operation;
inserting the current interactive information into a current audio/video stream to generate an interactive audio/video file;
sending the interactive audio/video file to a video network, so that a viewer-side device receives the interactive audio/video file through the video network and displays the current interactive information in the interactive audio/video file;
wherein inserting the current interactive information into the current audio/video stream to generate the interactive audio/video file comprises:
generating custom metadata corresponding to the current interactive information according to the current interactive information;
determining a first audio/video stream according to the generation time point of the custom metadata, wherein the generation time point of the custom metadata falls within the generation time period of the first audio/video stream;
storing the custom metadata in a metadata file corresponding to the first audio/video stream;
and after determining that the first audio/video stream has been generated, packaging the metadata file and the first audio/video stream to generate the interactive audio/video file.
2. The method according to claim 1, wherein determining the first audio/video stream according to the generation time point of the custom metadata comprises:
acquiring a video frame corresponding to the timestamp at which the anchor triggers the current interactive operation;
and determining the video stream section where the video frame is located, together with the corresponding audio stream section, as the first audio/video stream.
3. The method of claim 1, wherein inserting the current interactive information into the current audio/video stream to generate the interactive audio/video file comprises:
generating a custom field corresponding to the current interactive information according to the current interactive information;
and inserting the custom field into the current audio/video stream to generate the interactive audio/video file.
4. The method of claim 3, wherein inserting the custom field into the current audio/video stream to generate the interactive audio/video file comprises:
determining a target video frame and/or a target audio frame according to the generation time point of the custom field, wherein the difference between the generation time point of the custom field and the generation time points of the target video frame and the target audio frame is within a preset time length;
adding the custom field to the target video frame and/or the target audio frame;
and after determining that generation of a second audio/video stream corresponding to the target video frame and the target audio frame is finished, packaging the second audio/video stream to generate the interactive audio/video file.
5. An interactive information processing method, comprising:
receiving an interactive audio/video file from a video network, wherein the interactive audio/video file is generated by an anchor-side device inserting current interactive information into a current audio/video stream, and the current interactive information corresponds to a current interactive operation of an anchor;
identifying the current interactive information from the interactive audio/video file;
displaying the current interactive information;
wherein identifying the current interactive information from the interactive audio/video file comprises:
identifying custom metadata in a metadata file of the interactive audio/video file, wherein the generation time point of the custom metadata falls within the generation time period of a first audio/video stream in the interactive audio/video file;
and determining, according to the custom metadata, the current interactive information corresponding to the custom metadata.
6. An interactive information processing apparatus, comprising:
the first generation module is used for responding to the current interactive operation of the anchor and generating current interactive information corresponding to the current interactive operation;
the second generation module is used for inserting the current interactive information into the current audio/video stream to generate an interactive audio/video file;
the transmitting module is used for transmitting the interactive audio/video file to a video network, so that a viewer-side device receives the interactive audio/video file through the video network and displays the current interactive information in the interactive audio/video file;
the second generation module, when inserting the current interactive information into the current audio/video stream to generate the interactive audio/video file, is specifically configured to: generate custom metadata corresponding to the current interactive information according to the current interactive information; and insert the custom metadata into the current audio/video stream to generate the interactive audio/video file;
the second generation module, when inserting the custom metadata into the current audio/video stream to generate the interactive audio/video file, is specifically configured to: determine a first audio/video stream according to the generation time point of the custom metadata, wherein the generation time point of the custom metadata falls within the generation time period of the first audio/video stream; store the custom metadata in a metadata file corresponding to the first audio/video stream; and, after determining that the first audio/video stream has been generated, package the metadata file and the first audio/video stream to generate the interactive audio/video file.
7. An anchor-side device, comprising:
one or more processors; and
a memory for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the interactive information processing method according to any one of claims 1-4.
8. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the interactive information processing method according to any one of claims 1-4.
CN201810361423.4A 2018-04-20 2018-04-20 Interactive information processing method, device, anchor side equipment and medium Active CN108521584B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810361423.4A CN108521584B (en) 2018-04-20 2018-04-20 Interactive information processing method, device, anchor side equipment and medium

Publications (2)

Publication Number Publication Date
CN108521584A (en) 2018-09-11
CN108521584B (en) 2020-08-28

Family

ID=63428936

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810361423.4A Active CN108521584B (en) 2018-04-20 2018-04-20 Interactive information processing method, device, anchor side equipment and medium

Country Status (1)

Country Link
CN (1) CN108521584B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109714622B (en) * 2018-11-15 2021-04-16 北京奇艺世纪科技有限公司 Video data processing method and device and electronic equipment
CN110225384A (en) * 2019-06-18 2019-09-10 北京字节跳动网络技术有限公司 The method for pushing of status message, the switching method of interaction content, device and equipment
CN112492401B (en) * 2019-09-11 2022-08-12 腾讯科技(深圳)有限公司 Video-based interaction method and device, computer-readable medium and electronic equipment
CN112969093B (en) * 2019-12-13 2023-09-08 腾讯科技(北京)有限公司 Interactive service processing method, device, equipment and storage medium
CN111083515B (en) * 2019-12-31 2021-07-23 广州华多网络科技有限公司 Method, device and system for processing live broadcast content
CN113099309A (en) * 2021-03-30 2021-07-09 上海哔哩哔哩科技有限公司 Video processing method and device
CN113473163B (en) * 2021-05-24 2023-04-07 康键信息技术(深圳)有限公司 Data transmission method, device, equipment and storage medium in network live broadcast process

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101742246A (en) * 2009-12-01 2010-06-16 中广传播有限公司 System and method for realizing interactive service of mobile multimedia broadcast
CN105704504A (en) * 2016-01-28 2016-06-22 腾讯科技(深圳)有限公司 A method and apparatus for inserting push information in video direct broadcast
CN106162230A (en) * 2016-07-28 2016-11-23 北京小米移动软件有限公司 The processing method of live information, device, Zhu Boduan, server and system
CN106534954A (en) * 2016-12-19 2017-03-22 广州虎牙信息科技有限公司 Information interaction method and device based on live broadcast video streams and terminal device
CN106537929A (en) * 2014-05-28 2017-03-22 弗劳恩霍夫应用研究促进协会 Data processor and transport of user control data to audio decoders and renderers
CN106792229A (en) * 2016-12-19 2017-05-31 广州虎牙信息科技有限公司 Ballot exchange method and its device based on direct broadcasting room video flowing barrage
CN107360160A (en) * 2017-07-12 2017-11-17 广州华多网络科技有限公司 live video and animation fusion method, device and terminal device
CN107666619A (en) * 2017-06-15 2018-02-06 北京金山云网络技术有限公司 Live data transmission method, device, electronic equipment, server and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9043829B2 * 2009-10-07 2015-05-26 AT&T Intellectual Property I, LP Synchronization of user interactive events with on-screen events during playback of multimedia stream

Similar Documents

Publication Publication Date Title
CN108521584B (en) Interactive information processing method, device, anchor side equipment and medium
US10462496B2 (en) Information processor, information processing method and program
JP6219269B2 (en) Terminal device, information processing method, program, and linked application supply system
US10616647B2 (en) Terminal apparatus, server apparatus, information processing method, program, and linking application supply system
CN109089154B (en) Video extraction method, device, equipment and medium
KR102190278B1 (en) Information processing device, information processing method and program
CN109089127B (en) Video splicing method, device, equipment and medium
US20160295269A1 (en) Information pushing method, device and system
CN109714622B (en) Video data processing method and device and electronic equipment
KR102019286B1 (en) Terminal device, server device, information processing method, program, and collaborative application supply system
CN112203106B (en) Live broadcast teaching method and device, computer equipment and storage medium
KR102110623B1 (en) Transmission device, information processing method, program, reception device, and application linking system
CN105103566A (en) Systems and methods for identifying video segments for displaying contextually relevant content
US20180091870A1 (en) Digital Channel Integration System
CN111107390A (en) Live broadcast service system and live broadcast connection establishment method
CN109756744B (en) Data processing method, electronic device and computer storage medium
CN113115066A (en) Live broadcast room display system, method, terminal and medium for live broadcast application
CN104429092B (en) Reception device handles the method for information, program, sending device and applies linked system
CN111835988B (en) Subtitle generation method, server, terminal equipment and system
JP2016536826A (en) Method for synchronizing and generating stream, and corresponding computer program, storage medium, and playback device, execution device, and generation device
JP2002112224A (en) Broadcast program link information providing system, device, method and recording medium
TW201817245A (en) Multimedia rendering method adapted to multivariate audience and a system thereof
CN112839236A (en) Video processing method, device, server and storage medium
EP2894868B1 (en) Information processing device, information processing method, program, and content sharing system
KR100900153B1 (en) Method and system for servicing interactive data using broadcasting in dmb

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant