CN108260016A - Live broadcast processing method, device, equipment, system and storage medium - Google Patents
Live broadcast processing method, device, equipment, system and storage medium
- Publication number
- CN108260016A CN108260016A CN201810206343.1A CN201810206343A CN108260016A CN 108260016 A CN108260016 A CN 108260016A CN 201810206343 A CN201810206343 A CN 201810206343A CN 108260016 A CN108260016 A CN 108260016A
- Authority
- CN
- China
- Prior art keywords
- target information
- main
- frame
- video frame
- video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/475—End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/254—Management at additional data server, e.g. shopping server, rights management server
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/4302—Content synchronisation processes, e.g. decoder synchronisation
- H04N21/4307—Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/478—Supplemental services, e.g. displaying phone caller identification, shopping application
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/478—Supplemental services, e.g. displaying phone caller identification, shopping application
- H04N21/47815—Electronic shopping
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/478—Supplemental services, e.g. displaying phone caller identification, shopping application
- H04N21/4788—Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- General Engineering & Computer Science (AREA)
- Databases & Information Systems (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
The disclosure provides a live broadcast processing method, device, equipment, system, and storage medium. For the anchor side, the method includes: determining a main frame identifier of a main video frame, where the main video frame is the first frame in the set of video frames in the video stream data that is expected to be played simultaneously with target information; and sending the video stream data carrying the main frame identifier, together with the target information, to the viewer side, so that the viewer side plays the main video frame corresponding to the main frame identifier and the target information simultaneously. With the embodiments of the present disclosure, the video frame and the target information can be played simultaneously.
Description
Technical Field
The present application relates to the field of live broadcast technologies, and in particular, to a live broadcast processing method, apparatus, device, system, and storage medium.
Background
In network video live broadcast, a user at the viewer side watches, through a network, live audio and video of activities carried out by a user at the anchor side, such as sports events, meetings, teaching, and surgical operations. In a live information-push scenario, the viewer side not only plays the video stream data from the anchor side but also plays other target information. For example, in a live quiz scenario, the viewer side can display an answer box containing a question over the live video, so that when a viewer hears the host announce the question in the live video, the viewer can see the answer box on the terminal screen and answer the question.
However, due to the network and other reasons, there may be a certain delay between the two paths of data, which may cause the video frame to be out of sync with the target information. For example, the host may not yet have announced the question while an answer box containing the question is already shown on the screen. Because live quizzing has strict real-time requirements (an answer is judged valid only if it is completed within the specified answer time), such out-of-sync display of the announced question and the answer box can affect the judgment of answer validity.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a live broadcast processing method, apparatus, device, system, and storage medium.
According to a first aspect of the embodiments of the present disclosure, there is provided a live broadcast processing method applied to an anchor side, the method including:
determining a main frame identifier of a main video frame, wherein the main video frame is a first frame in a video frame set expected to be played simultaneously with target information in video stream data;
and sending the video stream data carrying the main frame identification and the target information to a spectator end so that the spectator end simultaneously plays the main video frame corresponding to the main frame identification and the target information.
In an optional implementation manner, the determining a primary frame identifier of the primary video frame includes:
when target information is issued to a spectator, the currently generated video frame is determined as a main video frame, and a main frame identifier of the main video frame is obtained.
In an optional implementation manner, the target information includes a question or a question-answering statistical result; and/or, the primary frame identification includes a time of the primary video frame.
In an optional implementation manner, the anchor generates and pushes video stream data carrying a main frame identifier through a stream pushing system, a code rate parameter in the stream pushing system is determined based on the number of online people in a live broadcast room, and a frame interval parameter in the stream pushing system is determined based on the code rate parameter.
In an alternative implementation, the push flow system comprises an OBS push flow system.
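The anchor-side flow above (tag the currently generated frame as the main video frame when target information is issued, then send both paths of data carrying the same identifier) can be sketched minimally in Python. The message formats and function names here are assumptions for illustration, not from the patent:

```python
import json

def make_main_frame_id(frame_timestamp_ms):
    # The optional implementation above notes that the main frame identifier
    # may include the time of the main video frame; this format is an assumption.
    return {"main_frame_ts": frame_timestamp_ms}

def publish_target_info(current_frame_ts_ms, target_info):
    # When target information is issued to the viewer side, the currently
    # generated frame is taken as the main video frame; both downstream
    # payloads (stream metadata and the target-info message) carry its identifier.
    frame_id = make_main_frame_id(current_frame_ts_ms)
    stream_metadata = {"type": "main_frame_mark", **frame_id}
    info_message = {"type": "target_info", **frame_id, "payload": target_info}
    return json.dumps(stream_metadata), json.dumps(info_message)
```

Because both payloads carry the same identifier, the viewer side can match them regardless of which path arrives first.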
According to a second aspect of the embodiments of the present disclosure, there is provided a live broadcast processing method for a viewer side, the method including:
acquiring target information and video stream data carrying a main frame identifier, wherein the main video frame is a first frame in a video frame set which is expected to be played simultaneously with the target information in the video stream data;
and when the main video frame corresponding to the main frame identification is played, the target information is played at the same time.
In an optional implementation manner, before playing the target information, the method further includes:
determining a receiving time of the video stream data and a receiving time of the target information;
comparing the difference value of the two receiving times with a preset allowable delay time threshold value;
and determining the target information to be expected to be played synchronously with the main video frame in the video stream data according to the comparison result.
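The receiving-time comparison described above can be sketched as a single check; the 2-second default threshold is an assumption, not a value from the patent:

```python
def is_sync_candidate(video_recv_time_s, info_recv_time_s, allowed_delay_s=2.0):
    # Target information is treated as expected to play synchronously with the
    # main video frame only when the difference between the two receiving
    # times is within the preset allowable delay threshold.
    return abs(video_recv_time_s - info_recv_time_s) <= allowed_delay_s
```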
In an optional implementation, the method further includes:
starting countdown from the moment of simultaneously playing the target information and the main video frame corresponding to the main frame identifier, and stopping playing the target information or forbidding an operation control related to the target information when the countdown is finished;
the countdown time of the countdown is determined based on the difference between the standard countdown time and the terminal network delay, and the terminal network delay is the network delay between the audience terminal and the communication service terminal.
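The countdown adjustment above (standard countdown time minus the viewer-terminal-to-communication-server network delay) amounts to a small calculation; the clamp at zero is an added assumption so a very slow link cannot yield a negative answer window:

```python
def effective_countdown_s(standard_countdown_s, terminal_network_delay_s):
    # Countdown time = standard countdown time minus the network delay
    # between the viewer terminal and the communication server,
    # clamped at zero (the clamp is an assumption, not from the patent).
    return max(0.0, standard_countdown_s - terminal_network_delay_s)
```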
According to a third aspect of the embodiments of the present disclosure, there is provided a live broadcast processing apparatus, provided at an anchor side, the apparatus including:
the identification determining module is configured to determine a main frame identification of a main video frame, wherein the main video frame is a first frame in a video frame set expected to be played simultaneously with target information in video stream data;
and the information sending module is configured to send the video stream data carrying the main frame identifier and the target information to a spectator end so that the spectator end can simultaneously play the main video frame corresponding to the main frame identifier and the target information.
In an optional implementation manner, the identifier determining module is specifically configured to: when target information is issued to a spectator, the currently generated video frame is determined as a main video frame, and a main frame identifier of the main video frame is obtained.
In an alternative implementation, the target information includes question questions or answer statistics.
In an alternative implementation, the primary frame identifies a time that includes the primary video frame.
In an optional implementation manner, the anchor generates and pushes video stream data carrying a main frame identifier through a stream pushing system, a code rate parameter in the stream pushing system is determined based on the number of online people in a live broadcast room, and a frame interval parameter in the stream pushing system is determined based on the code rate parameter.
In an alternative implementation, the push flow system comprises an OBS push flow system.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a live broadcast processing apparatus, provided at a viewer side, the apparatus including:
the information acquisition module is configured to acquire target information and video stream data carrying a main frame identifier, wherein the main video frame is a first frame in a video frame set which is expected to be played simultaneously with the target information in the video stream data;
and the information playing module is configured to play the target information simultaneously when playing the main video frame corresponding to the main frame identifier.
In an optional implementation manner, the apparatus further includes an information determining module configured to:
determining the receiving time of the video stream data and the receiving time of the target information before playing the target information;
comparing the difference value of the two receiving times with a preset allowable delay time threshold value;
and determining the target information to be expected to be played synchronously with the main video frame in the video stream data according to the comparison result.
In an optional implementation, the apparatus further comprises a countdown module configured to: starting countdown from the moment of simultaneously playing the target information and the main video frame corresponding to the main frame identifier, and stopping playing the target information or forbidding an operation control related to the target information when the countdown is finished;
the countdown time of the countdown is determined based on the difference between the standard countdown time and the terminal network delay, and the terminal network delay is the network delay between the audience terminal and the communication service terminal.
According to a fifth aspect of the embodiments of the present disclosure, there is provided a live broadcast system, including a main broadcast end and a viewer end;
a main frame identifier of a main video frame is determined by a main broadcasting terminal, wherein the main video frame is the first frame in a video frame set expected to be played simultaneously with target information in video stream data; sending the video stream data carrying the main frame identification and the target information to a viewer;
the audience terminal obtains the target information and video stream data carrying the main frame identification; and when the main video frame corresponding to the main frame identification is played, the target information is played at the same time.
According to a sixth aspect of embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
determining a main frame identifier of a main video frame, wherein the main video frame is a first frame in a video frame set expected to be played simultaneously with target information in video stream data;
and sending the video stream data carrying the main frame identification and the target information to a spectator end so that the spectator end simultaneously plays the main video frame corresponding to the main frame identification and the target information.
According to a seventh aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquiring target information and video stream data carrying a main frame identifier, wherein the main video frame is a first frame in a video frame set which is expected to be played simultaneously with the target information in the video stream data;
and when the main video frame corresponding to the main frame identification is played, the target information is played at the same time.
According to an eighth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium, on which a computer program is stored, which when executed by a processor, implements the steps of any of the methods described above.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
according to the embodiment of the disclosure, since the main video frame is the first frame in the video frame set expected to be played simultaneously with the target information in the video stream data, the main frame identifier of the main video frame can be determined, so that when the main video frame corresponding to the main frame identifier is played by the audience, the target information can be played simultaneously, thereby realizing the synchronous playing of the main video frame and the target information.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a schematic diagram illustrating an application scenario for implementing live broadcasting according to an exemplary embodiment of the present disclosure.
Fig. 2A is a flow diagram illustrating a live processing method according to an example embodiment of the present disclosure.
Fig. 2B is a flow diagram illustrating another live processing method according to an example embodiment of the present disclosure.
Fig. 2C is a schematic diagram of a live answer interface shown in the present disclosure according to an exemplary embodiment.
Fig. 3A is a flow diagram illustrating another live processing method of the present disclosure according to an example embodiment.
Fig. 3B is a schematic diagram of a live system shown in accordance with an exemplary embodiment of the present disclosure.
Fig. 3C is an application scenario diagram illustrating a live processing method according to an exemplary embodiment of the present disclosure.
Figs. 4-8 are block diagrams of live broadcast processing apparatuses shown in the present disclosure according to exemplary embodiments.
Fig. 9 is a block diagram illustrating an apparatus for live processing according to an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The word "if" as used herein may be interpreted as "upon", "when", or "in response to determining", depending on the context.
In network video live broadcast, a user at the viewer side watches, through a network, live audio and video of activities carried out by a user at the anchor side, such as sports events, meetings, teaching, and surgical operations. In some live scenarios, the viewer side plays not only the video stream data from the anchor side but also other target information. For example, in a live quiz scenario, the target information may include a quiz question. In the live quiz mode, a user (viewer) logs into the live broadcast room within a specified time and answers questions online under the guidance of the host; those who correctly answer a specified number of questions (such as 12) share the prize pool set for that round. On the viewer side, an answer box containing the question can be displayed over the live video, so that when a viewer hears the host announce the question in the live video, the viewer can see the answer box on the terminal screen and answer the question. As another example, in a live flash-sale scenario, the target information may include information on goods for sale. In the live shopping mode, a user logs into the live broadcast room within a specified time and shops online under the guidance of the host. On the viewer side, the information on the goods for sale can be displayed over the live video; while listening to the host introduce the goods, the viewer can also operate the screen, select a preferred item, and place an order or purchase it.
It can be understood that the disclosed embodiments can be applied in the following scenario: during live broadcast, the viewer side receives two paths of data, one being the live video stream and the other being the target information, and the video frames in the live video stream that are related to the target information need to be played synchronously with the target information. The target information has an association relationship with at least one video frame in the live video stream; for example, the association may be that the live video stream contains a video frame in which the target information is announced. The target information may be a quiz question, answer statistics, information on goods for sale, information to be memorized, and other data, which are not listed one by one here.
As shown in fig. 1, fig. 1 is a schematic diagram of an application scenario for implementing live broadcast according to an exemplary embodiment of the present disclosure. This scenario may include an anchor side 110, a server side 120, a first viewer side 131, and a second viewer side 132. It is understood that the scenario is illustrated with two viewer sides; in practice there may be one or more viewer sides. The anchor side can be the video stream initiating end and the viewer side the video stream receiving end; in some scenarios the anchor side may be called the host side and the viewer side the user side. The anchor side and the viewer sides (such as the first and second viewer sides) can be software installed on electronic equipment, or terminal equipment. The anchor side can call a camera to record video, take photos, and so on to produce live broadcast data, and then send the live broadcast data to the server side through the network. Both the viewer side and the anchor side can send input interactive messages (such as bullet-screen messages) to the server side, so that the interactive messages can be displayed at every terminal in the same live broadcast room. Target information is usually sent to the server side by the anchor side and distributed by the server side to each viewer side. The server side provides background services for internet live broadcast, such as storing the correspondence between the anchor side and the viewer sides, distributing live broadcast data, distributing interactive messages, and distributing target information. In one example, to adapt the received video stream data to various live platforms and protocols, it may be transcoded at the server side to support formats such as RTMP, HLS, and FLV, for pulling by the corresponding viewer sides.
The server may be a generic term for a plurality of server devices, or may be a generic term for at least one piece of software installed on a server device. In one example, live video stream distribution and data stream distribution may be implemented by the same server. In another example, in order to distinguish different services and improve the timeliness of information processing, the server may include a live broadcast server and a communication server, and the like, the live broadcast server may be used for distributing video streams, and the communication server is used for distributing data, in particular, target information. It can be understood that, according to actual requirements, the server may further include a bullet screen server for distributing bullet screens, a gift server for distributing gifts, and the like, which are not described herein in detail. The electronic device may be any electronic device that can run software, and the electronic device may be a handheld electronic device or other electronic device. For example, it may be a cellular phone, media player, or other handheld portable device, a slightly smaller portable device such as a wristwatch device or other wearable or miniaturized device, but may also be a PDA (Personal Digital Assistant), tablet computer, notebook computer, desktop computer, television, computer integrated into a computer display or other electronic equipment, and the like.
The live broadcast room can be a social network platform, an instant messaging platform and the like which are aggregated by a plurality of users, the users enter the live broadcast room in a client logging mode, the users exist in the live broadcast room in the identity of the members, and the members with various identities, such as audiences, anchor broadcasters and the like, are contained in the same live broadcast room. The user can arbitrarily join or quit the live broadcast room. For users with certain authority, the users can add or delete members in the live broadcast room and can also create or release the live broadcast room.
The embodiment of the present disclosure is first exemplified from the perspective of the anchor side. As shown in fig. 2A, fig. 2A is a flow chart illustrating a live broadcast processing method according to an exemplary embodiment of the present disclosure. The method is used for an anchor terminal, and comprises the following steps:
in step 201, determining a main frame identifier of a main video frame, where the main video frame is a first frame in a video frame set expected to be played simultaneously with target information in video stream data;
in step 202, the video stream data carrying the main frame identifier and the target information are sent to the viewer side, so that the viewer side simultaneously plays the main video frame corresponding to the main frame identifier and the target information.
Next, the disclosed embodiments are exemplified from the perspective of the viewer side. As shown in fig. 2B, fig. 2B is a flow diagram illustrating another live processing method according to an example embodiment of the present disclosure. The method is used for a viewer side and includes:
in step 203, obtaining target information and video stream data carrying a main frame identifier, where the main video frame is a first frame in a video frame set expected to be played simultaneously with the target information in the video stream data;
in step 204, when the main video frame corresponding to the main frame identifier is played, the target information is played at the same time.
Regarding the video stream data, the anchor terminal may call a camera to record video and capture images, or may directly obtain audio and video data from an audio/video acquisition terminal and produce the video stream data from it. In one example, to achieve audio-video synchronization, after audio data and image data are obtained at the anchor side, they may be synthesized together so that synchronization between them is guaranteed, avoiding picture/audio desynchronization caused during transmission.
Regarding the target information, the target information may be information associated with at least one video frame in the live video stream; for example, the associated frames may be the video frames during which the target information is broadcast in the live video stream. Divided by type, the target information may be text information, image information, audio information, video information, and the like. Divided by content, the target information may be quiz questions, answer statistics, information on goods to be sold, and the like. During the live broadcast, the user at the anchor side (such as a host) may broadcast the target information. If the target information is text or image information, broadcasting it may mean reading out its specific content, or announcing indication information that points to it; if the target information is text, image, audio, or video information, broadcasting it may also mean playing the target information while introducing it. Here, playing is not limited to playing audio and video; it may also include displaying information. Taking a quiz question as an example of the target information, in one example the host may read out the specific content of the question, for example: "In which country is the famous Manneken Pis ('urinating child') bronze statue located?", and may even read out the answer options, such as: A. the Netherlands; B. Belgium; C. Ireland. In another example, the host may only announce the question number, such as: "Please look at the first question."
With respect to the primary video frame, the primary video frame may be the first frame in the set of video frames in the video stream data that is expected to be played simultaneously with the target information. This set may also be referred to as the associated video frame set; thus, the primary video frame is the first frame in the associated video frame set, which may contain one or more associated video frames. To avoid stuttering and similar phenomena, the video stream data is played without interruption at the viewer side; therefore, the main video frame is determined from the main frame identifier, and the target information is played at the moment the main video frame is played, realizing synchronized playback of the two.
In practical application, when the currently acquired audio and video is associated with the target information, the issuing instruction of the target information can be triggered and executed. For example, the acquisition end shoots a picture containing a host and acquires corresponding audio, and when the host broadcasts target information, the target information is issued, so that the audience end can receive video stream data and the target information, and play the target information while playing a main video frame.
In view of this, as one way of determining the main video frame, when the anchor terminal issues the target information to the viewer side through the communication server, the anchor terminal determines the currently generated video frame as the main video frame and obtains the main frame identifier of that frame. The issuing of the target information is triggered when needed at the anchor side; that is, at the moment the anchor terminal sends the target information, the video frame being generated is the frame associated with the target information, so that this frame and the target information can be played at the same time at the viewer side.
Therefore, the main video frame is determined from the video stream data currently being generated at the time the target information is sent, which is easy to implement and relatively accurate. Further, if "currently" denotes a time period within which more than one video frame is generated, one of these frames may be taken as the main video frame; for example, the first frame or the middle frame of the frames generated within the current 1 s may be used.
Regarding the target information being delivered to the audience through the server, in an example, it may be determined that the anchor terminal is delivering the target information to the audience through the server (e.g., a communication server) when receiving a delivery instruction of the target information, and therefore, when receiving the delivery instruction of the target information, the currently generated video frame may be determined as a main video frame, and a main frame identifier of the main video frame may be obtained.
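As a minimal sketch of the anchor-side logic just described — the class and method names are assumptions for illustration, since the disclosure specifies no API — the currently generated frame can be tagged as the main frame at the moment the delivery instruction arrives:

```python
class AnchorStream:
    """Hypothetical anchor-side state: remembers the frame currently
    being generated so it can be tagged as the main video frame when
    a delivery instruction for target information arrives."""

    def __init__(self):
        self.current_frame_time = 0.0  # timestamp of the frame being generated

    def on_new_frame(self, frame_time):
        # called each time the encoder produces a new video frame
        self.current_frame_time = frame_time

    def on_issue_instruction(self, target_info):
        # the frame generated at the moment the instruction arrives
        # becomes the main video frame; its time serves as the identifier
        return {"main_frame_id": self.current_frame_time,
                "target_info": target_info}

anchor = AnchorStream()
anchor.on_new_frame(12.34)
msg = anchor.on_issue_instruction({"question": "Q1"})
```

The main frame identifier here is the frame's time, matching the variant discussed below in which the viewer-side program locates frames by time.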
The issuing instruction is an instruction for sending the target information. It may be generated by a trigger at the moment the anchor side is generating (or about to generate) the associated video frames and is about to play the target information. For example, the issuing instruction may be generated when broadcasting of the target information begins. This embodiment takes the start of broadcasting the target information as the condition for generating the issuing instruction, so that at the viewer side the target information is played in sync with the video frames in which it is broadcast.
In one example, the issuing instruction may be generated when a designated person presses a key at the moment broadcasting of the target information begins. The key may be a physical key or a touch-sensitive virtual key. For example, when the host starts broadcasting the target information, say reciting "The first question is …", a staff member working with the host clicks a key on a device to trigger generation of the issuing instruction. The sending time of the issuing instruction is thus controlled manually, which is easy to implement.
In another example, the issuing instruction may be generated when the device recognizes that the live video frames contain a preset keyword or preset key audio. For example, when the host starts broadcasting the target information, say reciting "The first question is …", the anchor terminal recognizes the speech, matches the preset key phrase, and triggers generation of the issuing instruction. The issuing instruction can thus be triggered automatically through speech recognition or image recognition, saving labor cost.
It can be understood that the issuing instruction may also be generated by triggering other conditions, which is not described herein.
Regarding the primary frame identification, it is an identifier indicating the primary video frame, from which the corresponding primary video frame can be determined. In one example, the primary frame identification may be tag information of the primary video frame, such as a name, number, or ID. In practice, the viewer-side program often locates video frames by time; to reduce modifications to the viewer-side program, in another example the main frame identifier may be the time of the main video frame, i.e., the time of that frame within the video stream.
Sending the video stream data carrying the main frame identifier and the target information to the viewer side may mean sending them through a server, and the servers used for the two may be the same or different. In one example, the video stream data carrying the main frame identifier may be sent to the viewer side through a live broadcast server, while the target information is sent through a communication server. The anchor side delivers the target information to the viewer side via the communication server over a communication channel, which may be implemented as a long (persistent) connection.
In the embodiment of the present disclosure, the anchor terminal may send target information to the viewer terminal, and the target information may be information containing specific content. In order to improve the security of the target information, the transmission target information may be information obtained by encrypting the content of the target information. Therefore, the encrypted target information is sent to the audience, so that the timeliness of the target information is guaranteed, and the safety of the specific content of the target information can be guaranteed.
In one example, the target information may be directly sent to a server (e.g., a communication server), and the server sends the target information to the viewer. In another example, the server may store the target information in advance, and the anchor may send an identifier (e.g., ID) of the target information to the server, and the server determines the target information according to the identifier and sends the determined target information to the viewer.
It is understood that the embodiments of the present disclosure are described with sending the target information itself to the viewer side as an example; in other alternative implementations, the anchor side may send only the ID of the target information. In one example, a database containing the target information may be preloaded at the viewer side, and when the viewer side receives the ID, it retrieves the corresponding target information from this pre-stored database.
However, to avoid the risk of the database stored at the viewer side being cracked, in another example the viewer side, after obtaining the target information ID, may fetch the corresponding target information from a business server that stores it, thereby improving the security of the target information.
After the audience obtains the video stream data carrying the main frame identification and the target information, the target information can be played simultaneously when the main video frame corresponding to the main frame identification is played.
The playing includes both presentation and playback, for example presenting the target information and playing the video stream data. In practical applications, since data is often transmitted faster than the video stream, the target information may arrive at the viewer side earlier than its associated video frames; therefore, the target information is not displayed immediately upon receipt but stored locally. When the main video frame corresponding to the main frame identifier is played, the target information is played at the same time, ensuring consistency among the picture, the audio, and the target information.
In practical applications, the transmission speed of data is often faster than that of the video stream, i.e., the target information is obtained earlier than the main video frame; however, if the gap between the reception times of the two exceeds a certain threshold, the target information may be invalid. In view of this, whether the target information is information expected to be played in sync with the main video frame in the video stream data can be judged from the time difference between the reception time of the video stream data and that of the target information. Specifically, at the viewer side: determine the reception time of the video stream data and the reception time of the target information; compare the difference between the two with a preset allowable delay threshold; and when the comparison result shows that the target information is expected to be played in sync with the main video frame, play the target information at the moment the main video frame corresponding to the main frame identifier is played. In one example, if the comparison result shows otherwise, the target information is judged to be invalid and is not played.
Therefore, whether the target information is expected to be played synchronously with the main video frame in the video stream data or not is judged according to the time difference between the receiving time of the video stream data and the receiving time of the target information, and the accuracy of the playing information can be improved.
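The viewer-side delay check described above can be sketched as follows; this is a simplified illustration, and the threshold value and function name are assumptions rather than values from the disclosure:

```python
def is_expected_sync(stream_recv_time, info_recv_time, allowed_delay=5.0):
    """Return True if the target information should be treated as
    expected to play in sync with the main video frame: the gap between
    the reception time of the video stream data and the reception time
    of the target information must not exceed the preset allowable
    delay threshold (seconds)."""
    return abs(stream_recv_time - info_recv_time) <= allowed_delay

valid = is_expected_sync(100.0, 97.5)   # 2.5 s gap: within threshold
stale = is_expected_sync(100.0, 90.0)   # 10 s gap: treated as invalid
```

When the check fails, the target information is simply discarded rather than displayed late against unrelated frames.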
Regarding playing the target information at the viewer side: if the target information is display information, it may be displayed over the picture of the main video frame, either directly or after processing, so as to provide controls the viewer can operate. Taking a quiz question as the target information, an answer box containing the question may be displayed over the video frame. In one example, the answer box contains not only the question content but also an answer input control through which the viewer enters an answer. In another example, to improve answering efficiency, each option may itself be a trigger control: when the viewer clicks or touches an option, the triggered option serves as the answer submitted by the viewer. As shown in fig. 2C, fig. 2C is a schematic view of a live quiz interface according to an exemplary embodiment of the present disclosure; in this illustration, each option is a trigger control, and when the user clicks one of the options, the triggered option (Belgium) serves as the answer submitted by the viewer.
In order to realize synchronous playing of the target information and the video frames related to the target information at the audience, a main frame of the video frames related to the target information may be determined first, and then the target information is played simultaneously when the main video frame corresponding to the main frame identifier is played.
In one example, if only one video frame is included in the video frame set that is expected to be played simultaneously with the target information, the target information may be presented only on the screen of the main video frame. Namely: and the target information is played only when the main video frame is played, and the playing of the target information is finished when the playing of the main video frame is finished.
In another example, if the video frame set expected to be played simultaneously with the target information includes more than one video frame, playback of the target information starts when the main video frame corresponding to the main frame identifier is played and ends when a stop condition is reached. That is, the target information is played together with the main video frame, remains displayed as playback switches from the main video frame to the other associated frames, and stops only when the stop condition is met. For example, the playing time of the target information may be limited by a countdown: the countdown starts when the target information begins to play, and when it ends, the target information stops playing or its associated operation control is disabled, prohibiting the viewer from operating further.
The countdown time may be a predetermined fixed time, and for example, the countdown time may be 10 s. Therefore, the countdown time is directly set, and the method is easy to realize.
However, in practical applications, due to differences among viewer-side devices and networks, the target information is received at different times. With a fixed countdown that starts from the moment the target information is received and displayed, the following situation may occur: because there may be a time difference in receiving the target information across devices, a viewer using a device with slow reception can learn the target information in advance by watching a device with fast reception. To avoid this, the countdown time is determined based on the difference between a standard countdown time and the terminal network delay, which may be the network delay between the viewer side and the communication server.
In one example, the terminal network delay may be determined based on the transmission time of a time-synchronization request initiated by the communication server and the time at which the viewer side receives it. For example, the communication server periodically sends the viewer side a time-synchronization request carrying its transmission time, so that the viewer side determines the terminal network delay from the difference between the reception time of the request and the transmission time it carries.
In another example, the network delay of the terminal may also be determined based on the sending time of the communication service end sending the target information and the receiving time of the audience end receiving the target information, so that the network delay can be determined each time the target information is sent, and the real-time update of the network delay is realized.
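Putting the delay measurement and the fair countdown together, the computation might look like this sketch (function names are hypothetical; the numbers are only an example):

```python
def terminal_network_delay(send_time, recv_time):
    """Delay between the communication server and the viewer side,
    taken from a time-sync request (or a target-information message)
    that carries its own transmission time."""
    return recv_time - send_time

def fair_countdown(standard_countdown, delay):
    """Countdown shortened by the terminal's network delay, so that
    fast and slow clients stop accepting answers at the same moment."""
    return max(standard_countdown - delay, 0.0)

delay = terminal_network_delay(100.0, 101.5)  # 1.5 s network delay
remaining = fair_countdown(10.0, delay)       # 8.5 s left to answer
```

A client that received the question 1.5 s late thus counts down only 8.5 s, ending at the same wall-clock moment as a client that received it instantly.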
To indicate the progress of the countdown, a reminder may be provided, by voice or by display. When displayed, the remaining countdown time may be shown as a number, or represented by a progress bar or similar means. For example, as shown in fig. 2C, the countdown progress is displayed as a bar-shaped progress bar; it is understood that the progress bar may also be circular or of another shape, without limitation.
The viewer side can return the operation result to the server side. For example, the operation result may be returned to the communication server or other business servers. The server side can perform statistical analysis on the operation result and feed back the analysis result to the audience side and the live broadcast side. Taking target information as an example of question questions, the server side can count the selection proportion of each option, and then feeds back the answer counting result to the audience side and the anchor side.
The embodiment shows that the anchor terminal sends the video stream data carrying the main frame identifier to the audience terminal through the live broadcast server terminal, and sends the target information to the audience terminal through the communication server terminal, so that the target information is played simultaneously when the main video frame corresponding to the main frame identifier is played in the audience terminal, and synchronous playing is realized.
In a live scene hosted by a presenter, recording the program involves very complicated links such as multi-camera management, lighting, and director control of the cameras, and the console involves very large machines and equipment. A broadcast station occupies space, is complex to operate, requires a specially trained director, has a high learning cost, and is inconvenient for operations such as picture composition and captioning. In view of this, in an alternative implementation, the anchor side generates and pushes video stream data through a stream-pushing system, in particular an OBS (Open Broadcaster Software) stream-pushing system. Open Broadcaster Software is open-source live streaming content production software; it supports operating systems such as OS X, Windows, and Linux and is suitable for various live scenes. The OBS tool supports fast editing, clipping, and fast switching of scenes and source material, enabling quick and simple content editing for live events. A director can thus complete operations such as camera switching, matting, captioning, and question issuing with a single device.
In one example, a plug-in for the OBS stream-pushing system may be developed. The work to be performed by the plug-in includes setting push parameters, such as the push address, encoder (x264), output resolution, video bitrate (which affects the final live image quality), audio bitrate, audio sample rate, canvas resolution, scaling filter, FPS, and other parameters. These parameters may require the plug-in to obtain the latest configuration. With the parameters fetched by the plug-in applied to the pusher, the director only needs to issue the questions and record the main frames via the stream push. This replaces large-scale broadcast equipment: a single notebook computer can complete all the work the broadcast equipment used to do.
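The parameter set the plug-in might fetch can be pictured as a configuration like the following; all key names and values are illustrative assumptions, not the plug-in's actual schema, and the push address is a placeholder:

```python
# Hypothetical push-parameter configuration an OBS plug-in could fetch
# from the cloud and apply to the pusher. Names/values are illustrative.
push_params = {
    "push_address": "rtmp://example.invalid/live/room1",  # placeholder URL
    "encoder": "x264",
    "output_resolution": "1280x720",
    "video_bitrate_kbps": 2500,     # affects final live image quality
    "audio_bitrate_kbps": 128,
    "audio_sample_rate_hz": 44100,
    "canvas_resolution": "1920x1080",
    "scaling_filter": "bicubic",
    "fps": 30,
}

def apply_to_pusher(params):
    """Stand-in for handing the fetched configuration to the pusher;
    returns the settings that would take effect."""
    return dict(params)

applied = apply_to_pusher(push_params)
```

Fetching the configuration rather than hard-coding it is what lets parameters like bitrate be adjusted centrally, as described next.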
Furthermore, the code rate parameter in the stream pushing system is determined based on the number of on-line people in the live broadcast room. For example, the bitrate parameter can be adjusted in a stepwise manner according to the number of online people in a live broadcast room, and the bitrate value can be reduced as the number of online people increases.
Further, a frame interval parameter in the push stream system is determined based on the code rate parameter. The frame interval parameter is the interval time of each frame of video in the video stream. The embodiments of the present disclosure may adjust the frame interval according to the code rate parameter, for example, a low code rate may correspond to a larger frame interval, and a high code rate may correspond to a smaller frame interval.
It is understood that other parameters may be adjusted accordingly, which are not listed here.
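A possible step-wise mapping from online viewer count to bitrate, and from bitrate to frame interval, is sketched below; the tier boundaries and concrete values are assumptions for illustration, as the disclosure does not specify them:

```python
def bitrate_for_room(online_count):
    """Reduce the bitrate (kbps) step-wise as the number of online
    viewers in the live room grows, easing distribution load."""
    if online_count < 10_000:
        return 2500
    if online_count < 100_000:
        return 1800
    return 1200

def frame_interval_for_bitrate(bitrate_kbps):
    """Lower bitrate -> larger frame interval (seconds);
    higher bitrate -> smaller frame interval."""
    return 4.0 if bitrate_kbps < 1500 else 2.0

small_room = bitrate_for_room(5_000)              # 2500 kbps
huge_room = bitrate_for_room(500_000)             # 1200 kbps
interval = frame_interval_for_bitrate(huge_room)  # larger interval
```

The two-stage dependency (viewer count drives bitrate, bitrate drives frame interval) mirrors the relationship stated in the text.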
The various technical features in the above embodiments can be combined arbitrarily as long as there is no conflict or contradiction between them; for reasons of space, the combinations are not described one by one, but any combination of these technical features also belongs to the scope disclosed in this specification.
A live system is exemplified below. Fig. 3A is a flow diagram illustrating another live processing method of the present disclosure according to an example embodiment. The method is applied to a live system, which may comprise an anchor side and a viewer side. The method comprises the following steps:
the steps performed by the anchor may include steps 301 and 302:
in step 301, a main frame identifier of a main video frame is determined, where the main video frame is a first frame in a video frame set that is expected to be played simultaneously with target information in video stream data;
in step 302, sending the video stream data carrying the main frame identifier and the target information to a viewer;
the steps performed at the viewer side may include steps 303 and 304:
in step 303, the target information and the video stream data carrying the main frame identifier are obtained;
in step 304, when the main video frame corresponding to the main frame identifier is played, the target information is played at the same time.
It is understood that 301 and 302 in fig. 3A are the same as 201 and 202 in fig. 2A, and are not repeated herein. 303 and 304 in fig. 3A are the same as 203 and 204 in fig. 2B, and are not repeated herein.
The anchor side can send the video stream data carrying the main frame identifier and the target information to the viewer side through a server. In one example, the anchor side may send both through the same server, or through different servers. For example, the server side may include a live broadcast server and a communication server, where the live broadcast server performs video stream distribution and the communication server performs data distribution. As shown in fig. 3B, fig. 3B is a schematic diagram of a live system shown in the present disclosure according to an exemplary embodiment. The live system may include an anchor side 310, a live broadcast server 320, a communication server 330, and a viewer side 340. The anchor side 310 sends the video stream data carrying the main frame identifier to the viewer side through the live broadcast server 320, and sends the target information to the viewer side through the communication server 330.
As shown in fig. 3C, fig. 3C is an application scenario diagram of a live broadcast processing method according to an exemplary embodiment of the present disclosure, where the method is applied to a live broadcast answering system, and the live broadcast answering system may include a main broadcast terminal, a live broadcast server terminal, a communication server terminal, and an audience terminal. The audience members may include a first audience member and an Nth audience member, where N is an integer greater than 1.
At the anchor terminal, a set of OBS plug-ins for the stream-pushing system is developed; plug-in parameters can be determined according to information such as the number of online users counted by the cloud server, and the parameters are set accordingly. When the target information is issued to the viewer side through the communication server, the currently generated video frame is determined as the main video frame, the main frame identifier of the main video frame is obtained, and the video stream data carrying the main frame identifier is sent to the viewer side through the live broadcast server.
Regarding sending the question to the viewer through the communication server, in one example, the question may be directly sent to the communication server, and the communication server sends the question to the viewer. In another example, the communication server may store a question topic in advance, or even store an answer to the question topic, the anchor terminal may send a topic identifier (e.g., ID) of the question topic to the communication server, and the communication server determines the question topic according to the topic identifier and sends the determined question topic to the viewer terminal. Furthermore, the communication service end can encrypt the problem topic and send the encrypted problem topic to the audience end so as to improve the safety of the problem topic.
The viewer side obtains the target information and the video stream data carrying the main frame identifier, and can play the question simultaneously when the main video frame corresponding to the main frame identifier is played. For example, after a question is received, it can be saved locally; when the main video frame corresponding to the main frame identifier is played, the question is displayed on the screen, ensuring consistency among the picture, the audio, and the question.
Further, the playing time of the target information can be determined by setting a countdown mode. For example, starting a countdown from the beginning of playing the question topic, and stopping playing the target information or disabling the operation control associated with the target information when the countdown is finished, so as to prohibit the viewer from continuing to operate when the countdown is finished. The countdown time for the countdown may be determined based on a difference between the standard countdown time and a terminal network delay, which may be a network delay between the viewer side and the communication server side.
After receiving an answer from the viewer side, the communication server may judge the validity of the answer. The answering time may be a preset judgment time, for example 10 s, equal to the standard countdown time. In one example, considering the time spent on communication, a grace allowance is permitted on top of the answering time. When judging validity, the difference between the sending time of the question and the receiving time of the answer may be compared with the sum of the standard countdown time and the grace value; the validity of the answer is determined from the comparison result, and invalid answers may be deleted. The grace value may be a time value obtained from historical communication-time statistics — for example, with a grace of 1 s on top of the original 10 s answering time, answers received within 11 s are regarded as valid. The grace value may also be obtained from the difference between the time at which the communication server issues the question and the time at which the video frame associated with the first frame is issued.
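The validity judgment with a grace allowance can be sketched as follows; the numbers follow the example in the text, and the function name is hypothetical:

```python
def answer_is_valid(question_sent_at, answer_received_at,
                    standard_countdown=10.0, grace=1.0):
    """An answer is valid if it arrives within the standard countdown
    plus a grace value that absorbs communication delay: e.g., with a
    10 s countdown and 1 s grace, answers within 11 s count."""
    elapsed = answer_received_at - question_sent_at
    return elapsed <= standard_countdown + grace

on_time = answer_is_valid(0.0, 10.8)   # within 11 s: valid
too_late = answer_is_valid(0.0, 11.5)  # beyond 11 s: deleted as invalid
```

Server-side validation of this kind is what makes the client-side fair countdown enforceable even against modified clients.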
It is understood that in this embodiment question sending and result analysis/statistics are implemented in the same server (the communication server); in other embodiments, question sending may be performed by the communication server while result analysis and statistics are performed by an answer server (a business server). Other arrangements are not listed here.
According to this embodiment, by recording the main frame identifier of the main video frame, the question can be played when the main video frame corresponding to that identifier is played, ensuring consistency among the picture, the audio, and the question and improving user experience; meanwhile, a fair timing strategy is introduced to achieve a fair answer countdown.
Corresponding to the embodiment of the live broadcast processing method, the disclosure also provides embodiments of a live broadcast processing device, equipment and a system applied by the device, and a storage medium.
Fig. 4 is a block diagram of a live broadcast processing apparatus according to an exemplary embodiment. The apparatus is provided at the anchor end and includes:
an identification determination module 410 configured to determine a primary frame identification of a primary video frame, the primary video frame being a first frame in a set of video frames in the video stream data that is expected to be played simultaneously with the target information.
The information sending module 420 is configured to send the video stream data carrying the main frame identifier and the target information to the viewer, so that the viewer simultaneously plays the main video frame corresponding to the main frame identifier and the target information.
In an optional implementation, the identification determining module 410 is specifically configured to: when target information is issued to the viewer end, determine the currently generated video frame as the main video frame, and obtain the main frame identifier of the main video frame.
In an alternative implementation, the target information includes question questions or answer statistics.
In an alternative implementation, the main frame identifier includes the time of the main video frame.
In an optional implementation, the anchor end generates and pushes the video stream data carrying the main frame identifier through a stream pushing system; a bitrate parameter in the stream pushing system is determined based on the number of online viewers in the live room, and a frame interval parameter in the stream pushing system is determined based on the bitrate parameter.
In an alternative implementation, the stream pushing system includes an OBS (Open Broadcaster Software) stream pushing system.
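The parameter chain just described (online viewers → bitrate → frame interval) can be sketched as follows. The concrete thresholds and values here are illustrative assumptions, not figures from the patent:

```python
def push_params(online_viewers: int) -> dict:
    """Choose push-stream parameters from the live-room audience size.

    The bitrate is stepped down as the live room grows, to keep delivery
    cost and delay bounded, and the keyframe interval is then derived
    from the chosen bitrate: lower bitrates tolerate a longer interval
    between keyframes.
    """
    if online_viewers < 1_000:
        bitrate_kbps = 1_500
    elif online_viewers < 100_000:
        bitrate_kbps = 1_000
    else:
        bitrate_kbps = 600
    keyframe_interval_s = 2 if bitrate_kbps >= 1_000 else 4
    return {"bitrate_kbps": bitrate_kbps, "keyframe_interval_s": keyframe_interval_s}
```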
Fig. 5 is a block diagram of another live broadcast processing apparatus according to an exemplary embodiment of the present disclosure. The apparatus is provided at the viewer end and includes:
the information obtaining module 510 is configured to obtain target information and video stream data carrying a main frame identifier, where the main video frame is the first frame in a video frame set that is expected to be played simultaneously with the target information in the video stream data.
And an information playing module 520 configured to play the target information simultaneously when playing the main video frame corresponding to the main frame identifier.
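The cooperation of the information obtaining and information playing modules can be sketched as a buffer-and-match loop: target information is held until the video frame carrying the matching main frame identifier is rendered. This is a minimal illustration; the class and method names are assumptions, not from the patent.

```python
class ViewerPlayer:
    """Viewer-side synchronization of target information with video frames.

    Target information arriving over the message channel is buffered,
    keyed by its main frame identifier; when the player renders a frame
    whose identifier matches, the buffered information is released for
    display at the same moment.
    """

    def __init__(self) -> None:
        self.pending: dict = {}  # main frame identifier -> target information

    def on_target_info(self, main_frame_id, info) -> None:
        # Buffer target information until its main video frame arrives.
        self.pending[main_frame_id] = info

    def on_frame_rendered(self, frame_id):
        # Return the target information to show with this frame, if any.
        return self.pending.pop(frame_id, None)
```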
Fig. 6 is a block diagram of another live broadcast processing apparatus according to an exemplary embodiment of the present disclosure. The apparatus further includes an information determining module 530 configured to:
determining the receiving time of the video stream data and the receiving time of the target information before playing the target information;
comparing the difference value of the two receiving times with a preset allowable delay time threshold value;
and determining the target information to be expected to be played synchronously with the main video frame in the video stream data according to the comparison result.
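The three steps above amount to comparing the two arrival times against an allowed delay threshold. A minimal sketch, assuming timestamps in seconds; the function name and the 2 s default threshold are illustrative:

```python
def is_associated(stream_received_at: float, info_received_at: float,
                  allowed_delay: float = 2.0) -> bool:
    """Decide whether target information belongs with the main video frame.

    The target information is treated as expected to play synchronously
    with the main video frame only when the difference between the two
    receiving times is within the preset allowed delay threshold.
    """
    return abs(stream_received_at - info_received_at) <= allowed_delay
```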
Fig. 7 is a block diagram of another live broadcast processing apparatus according to an exemplary embodiment of the present disclosure. The apparatus further includes a countdown module 540 configured to:
starting countdown from the moment of simultaneously playing the target information and the main video frame corresponding to the main frame identifier, and stopping playing the target information or forbidding an operation control related to the target information when the countdown is finished;
the countdown time of the countdown is determined based on the difference between the standard countdown time and the terminal network delay, and the terminal network delay is the network delay between the audience terminal and the communication service terminal.
Fig. 8 is a block diagram of a live broadcast processing system according to an exemplary embodiment of the present disclosure. The system includes an identifier determining module 810 and an information sending module 820 provided at the anchor end, and an information obtaining module 830 and an information playing module 840 provided at the viewer end.
The identifier determining module 810 is configured to determine a primary frame identifier of a primary video frame, where the primary video frame is a first frame in a video frame set that is expected to be played simultaneously with target information in video stream data;
the information sending module 820 is configured to send the video stream data carrying the main frame identifier and the target information to a viewer;
the information obtaining module 830 is configured to obtain the target information and video stream data carrying a main frame identifier;
the information playing module 840 is configured to play the target information simultaneously when playing the main video frame corresponding to the main frame identifier.
Correspondingly, the present disclosure also provides a live broadcast system, which includes a main broadcast end and a spectator end;
a main frame identifier of a main video frame is determined by a main broadcasting terminal, wherein the main video frame is the first frame in a video frame set expected to be played simultaneously with target information in video stream data; sending the video stream data carrying the main frame identification and the target information to a viewer;
the audience terminal obtains the target information and video stream data carrying the main frame identification; and when the main video frame corresponding to the main frame identification is played, the target information is played at the same time.
Correspondingly, the present disclosure also provides an electronic device, which includes a processor; a memory for storing processor-executable instructions; wherein the processor is configured to:
determining a main frame identifier of a main video frame, wherein the main video frame is a first frame in a video frame set expected to be played simultaneously with target information in video stream data;
and sending the video stream data carrying the main frame identification and the target information to a spectator end so that the spectator end simultaneously plays the main video frame corresponding to the main frame identification and the target information.
Correspondingly, the present disclosure also provides an electronic device, which includes a processor; a memory for storing processor-executable instructions; wherein the processor is configured to:
acquiring target information and video stream data carrying a main frame identifier, wherein the main video frame is a first frame in a video frame set which is expected to be played simultaneously with the target information in the video stream data;
and when the main video frame corresponding to the main frame identification is played, the target information is played at the same time.
Accordingly, the present disclosure also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of any of the methods described above.
The present disclosure may take the form of a computer program product embodied on one or more storage media including, but not limited to, disk storage, CD-ROM, optical storage, and the like, having program code embodied therein. Computer-usable storage media include permanent and non-permanent, removable and non-removable media, and information storage may be implemented by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of the storage medium of the computer include, but are not limited to: phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technologies, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic tape storage or other magnetic storage devices, or any other non-transmission medium, may be used to store information that may be accessed by a computing device.
The specific details of the implementation process of the functions and actions of each module in the device are referred to the implementation process of the corresponding step in the method, and are not described herein again.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, wherein the modules described as separate parts may or may not be physically separate, and the parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules can be selected according to actual needs to achieve the purpose of the disclosed solution. One of ordinary skill in the art can understand and implement it without inventive effort.
Fig. 9 is a block diagram of an apparatus for live broadcast processing according to an exemplary embodiment of the present disclosure. The apparatus 900 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like.
Referring to fig. 9, apparatus 900 may include one or more of the following components: processing component 902, memory 904, power component 906, multimedia component 908, audio component 910, input/output (I/O) interface 912, sensor component 914, and communication component 916.
The processing component 902 generally controls overall operation of the device 900, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. Processing component 902 may include one or more processors 920 to execute instructions to perform all or a portion of the steps of the methods described above. Further, processing component 902 can include one or more modules that facilitate interaction between processing component 902 and other components. For example, the processing component 902 can include a multimedia module to facilitate interaction between the multimedia component 908 and the processing component 902.
The memory 904 is configured to store various types of data to support operation at the apparatus 900. Examples of such data include instructions for any application or method operating on device 900, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 904 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 906 provides power to the various components of the device 900. The power components 906 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 900.
The multimedia component 908 comprises a screen providing an output interface between the device 900 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 908 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 900 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 910 is configured to output and/or input audio signals. For example, audio component 910 includes a Microphone (MIC) configured to receive external audio signals when apparatus 900 is in an operating mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 904 or transmitted via the communication component 916. In some embodiments, audio component 910 also includes a speaker for outputting audio signals.
I/O interface 912 provides an interface between processing component 902 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 914 includes one or more sensors for providing status assessment of various aspects of the apparatus 900. For example, sensor assembly 914 may detect an open/closed state of device 900, the relative positioning of components, such as a display and keypad of device 900, the change in position of device 900 or one of the components of device 900, the presence or absence of user contact with device 900, the orientation or acceleration/deceleration of device 900, and the change in temperature of device 900. The sensor assembly 914 may include a proximity sensor configured to detect the presence of a nearby object in the absence of any physical contact. The sensor assembly 914 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 914 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 916 is configured to facilitate communications between the apparatus 900 and other devices in a wired or wireless manner. The apparatus 900 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 916 receives a broadcast signal or broadcast associated information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 916 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 900 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as the memory 904 comprising instructions, executable by the processor 920 of the apparatus 900 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Wherein the instructions in the storage medium, when executed by the processor, enable the apparatus 900 to perform a live processing method comprising:
determining a main frame identifier of a main video frame, wherein the main video frame is a first frame in a video frame set expected to be played simultaneously with target information in video stream data; and sending the video stream data carrying the main frame identification and the target information to a spectator end so that the spectator end simultaneously plays the main video frame corresponding to the main frame identification and the target information.
Or,
acquiring target information and video stream data carrying a main frame identifier, wherein the main video frame is a first frame in a video frame set which is expected to be played simultaneously with the target information in the video stream data; and when the main video frame corresponding to the main frame identification is played, the target information is played at the same time.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
The above description is only exemplary of the present disclosure and should not be taken as limiting the disclosure, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.
Claims (19)
1. A live broadcast processing method, applied to an anchor end, the method comprising:
determining a main frame identifier of a main video frame, wherein the main video frame is a first frame in a video frame set expected to be played simultaneously with target information in video stream data;
and sending the video stream data carrying the main frame identification and the target information to a spectator end so that the spectator end simultaneously plays the main video frame corresponding to the main frame identification and the target information.
2. The method of claim 1, wherein determining the primary frame identification of the primary video frame comprises:
when target information is issued to a spectator, the currently generated video frame is determined as a main video frame, and a main frame identifier of the main video frame is obtained.
3. The method of claim 1, wherein the target information comprises a question or answer statistic; and/or, the primary frame identification includes a time of the primary video frame.
4. The method according to claim 1, wherein the anchor generates and pushes video stream data carrying a main frame identifier through a stream pushing system, wherein a bitrate parameter in the stream pushing system is determined based on the number of on-line people in a live broadcast room, and a frame interval parameter in the stream pushing system is determined based on the bitrate parameter.
5. The method of claim 4, wherein the stream pushing system comprises an OBS stream pushing system.
6. A live broadcast processing method, for use on a viewer side, the method comprising:
acquiring target information and video stream data carrying a main frame identifier, wherein the main video frame is a first frame in a video frame set which is expected to be played simultaneously with the target information in the video stream data;
and when the main video frame corresponding to the main frame identification is played, the target information is played at the same time.
7. The method of claim 6, wherein before playing the target information, further comprising:
determining a receiving time of the video stream data and a receiving time of the target information;
comparing the difference value of the two receiving times with a preset allowable delay time threshold value;
and determining the target information to be expected to be played synchronously with the main video frame in the video stream data according to the comparison result.
8. The method according to claim 6 or 7, characterized in that the method further comprises:
starting countdown from the moment of simultaneously playing the target information and the main video frame corresponding to the main frame identifier, and stopping playing the target information or forbidding an operation control related to the target information when the countdown is finished;
the countdown time of the countdown is determined based on the difference between the standard countdown time and the terminal network delay, and the terminal network delay is the network delay between the audience terminal and the communication service terminal.
9. A live broadcast processing apparatus, provided at a host side, the apparatus comprising:
the identification determining module is configured to determine a main frame identification of a main video frame, wherein the main video frame is a first frame in a video frame set expected to be played simultaneously with target information in video stream data;
and the information sending module is configured to send the video stream data carrying the main frame identifier and the target information to a spectator end so that the spectator end can simultaneously play the main video frame corresponding to the main frame identifier and the target information.
10. The apparatus of claim 9, wherein the identity determination module is specifically configured to: when target information is issued to a spectator, the currently generated video frame is determined as a main video frame, and a main frame identifier of the main video frame is obtained.
11. The apparatus of claim 9, wherein:
the target information comprises a question or an answer statistical result; and/or,
the main frame identifier comprises a time of the main video frame; and/or,
the anchor end generates and pushes video stream data carrying the main frame identifier through a stream pushing system, wherein a bitrate parameter in the stream pushing system is determined based on the number of online viewers in the live room, and a frame interval parameter in the stream pushing system is determined based on the bitrate parameter.
12. The apparatus of claim 11, wherein the stream pushing system comprises an OBS stream pushing system.
13. A live broadcast processing apparatus, provided at a viewer side, the apparatus comprising:
the information acquisition module is configured to acquire target information and video stream data carrying a main frame identifier, wherein the main video frame is a first frame in a video frame set which is expected to be played simultaneously with the target information in the video stream data;
and the information playing module is configured to play the target information simultaneously when playing the main video frame corresponding to the main frame identifier.
14. The apparatus of claim 13, further comprising an information determination module configured to:
determining the receiving time of the video stream data and the receiving time of the target information before playing the target information;
comparing the difference value of the two receiving times with a preset allowable delay time threshold value;
and determining the target information to be expected to be played synchronously with the main video frame in the video stream data according to the comparison result.
15. The apparatus of claim 13 or 14, further comprising a countdown module configured to:
starting countdown from the moment of simultaneously playing the target information and the main video frame corresponding to the main frame identifier, and stopping playing the target information or forbidding an operation control related to the target information when the countdown is finished;
the countdown time of the countdown is determined based on the difference between the standard countdown time and the terminal network delay, and the terminal network delay is the network delay between the audience terminal and the communication service terminal.
16. A live broadcast system, comprising a main broadcast end and a viewer end;
a main frame identifier of a main video frame is determined by a main broadcasting terminal, wherein the main video frame is the first frame in a video frame set expected to be played simultaneously with target information in video stream data; sending the video stream data carrying the main frame identification and the target information to a viewer;
the audience terminal obtains the target information and video stream data carrying the main frame identification; and when the main video frame corresponding to the main frame identification is played, the target information is played at the same time.
17. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
determining a main frame identifier of a main video frame, wherein the main video frame is a first frame in a video frame set expected to be played simultaneously with target information in video stream data;
and sending the video stream data carrying the main frame identification and the target information to a spectator end so that the spectator end simultaneously plays the main video frame corresponding to the main frame identification and the target information.
18. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquiring target information and video stream data carrying a main frame identifier, wherein the main video frame is a first frame in a video frame set which is expected to be played simultaneously with the target information in the video stream data;
and when the main video frame corresponding to the main frame identification is played, the target information is played at the same time.
19. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810206343.1A CN108260016B (en) | 2018-03-13 | 2018-03-13 | Live broadcast processing method, device, equipment, system and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108260016A true CN108260016A (en) | 2018-07-06 |
CN108260016B CN108260016B (en) | 2020-07-28 |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109714622A (en) * | 2018-11-15 | 2019-05-03 | 北京奇艺世纪科技有限公司 | A kind of video data handling procedure, device and electronic equipment |
CN110139128A (en) * | 2019-03-25 | 2019-08-16 | 北京奇艺世纪科技有限公司 | A kind of information processing method, blocker, electronic equipment and storage medium |
CN110876089A (en) * | 2018-09-03 | 2020-03-10 | 阿里巴巴集团控股有限公司 | Online answer processing method and device |
CN111726650A (en) * | 2020-06-30 | 2020-09-29 | 广州繁星互娱信息科技有限公司 | Video live broadcast method and device and computer storage medium |
WO2020253452A1 (en) * | 2019-06-18 | 2020-12-24 | 北京字节跳动网络技术有限公司 | Status message pushing method, and method, device and apparatus for switching interaction content in live broadcast room |
CN112330997A (en) * | 2020-11-13 | 2021-02-05 | 北京安博盛赢教育科技有限责任公司 | Method, device, medium and electronic equipment for controlling demonstration video |
CN112330996A (en) * | 2020-11-13 | 2021-02-05 | 北京安博盛赢教育科技有限责任公司 | Control method, device, medium and electronic equipment for live broadcast teaching |
CN112437316A (en) * | 2020-10-15 | 2021-03-02 | 北京三快在线科技有限公司 | Method and device for synchronously playing instant message and live video stream |
CN112839235A (en) * | 2020-12-30 | 2021-05-25 | 北京达佳互联信息技术有限公司 | Display method, comment sending method, video frame pushing method and related equipment |
CN112966674A (en) * | 2020-12-07 | 2021-06-15 | 北京字节跳动网络技术有限公司 | Topic explaining method and device and electronic equipment |
CN113678137A (en) * | 2019-08-18 | 2021-11-19 | 聚好看科技股份有限公司 | Display device |
CN114268806A (en) * | 2021-12-24 | 2022-04-01 | 南京纳加软件股份有限公司 | Signal processing method of high-smoothness live broadcast control system |
CN115883918A (en) * | 2021-09-22 | 2023-03-31 | 北京百度网讯科技有限公司 | Method, apparatus, device and storage medium for processing video stream |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101022353A (en) * | 2006-10-10 | 2007-08-22 | 鲍东山 | Directional stream media advertisement insert-cut system |
CN102595206A (en) * | 2012-02-24 | 2012-07-18 | 央视国际网络有限公司 | Data synchronization method and device based on sport event video |
US20150121436A1 (en) * | 2013-10-25 | 2015-04-30 | Broadcom Corporation | Presentation timeline synchronization across audio-video (av) streams |
CN104918016A (en) * | 2015-06-09 | 2015-09-16 | 柳州桂通科技股份有限公司 | Multimedia multi-information synchronized reproducing system |
CN106162354A (en) * | 2015-04-08 | 2016-11-23 | 许怡详 | Video instant scene service-Engine |
CN106162230A (en) * | 2016-07-28 | 2016-11-23 | 北京小米移动软件有限公司 | The processing method of live information, device, Zhu Boduan, server and system |
CN106331830A (en) * | 2016-09-06 | 2017-01-11 | 北京小米移动软件有限公司 | Method, device, equipment and system for processing live broadcast |
CN106488291A (en) * | 2016-11-17 | 2017-03-08 | 百度在线网络技术(北京)有限公司 | The method and apparatus of simultaneous display file in net cast |
CN107071502A (en) * | 2017-01-24 | 2017-08-18 | 百度在线网络技术(北京)有限公司 | Video broadcasting method and device |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110876089A (en) * | 2018-09-03 | 2020-03-10 | 阿里巴巴集团控股有限公司 | Online answer processing method and device |
CN109714622A (en) * | 2018-11-15 | 2019-05-03 | 北京奇艺世纪科技有限公司 | Video data processing method and device, and electronic equipment |
CN109714622B (en) * | 2018-11-15 | 2021-04-16 | 北京奇艺世纪科技有限公司 | Video data processing method and device and electronic equipment |
CN110139128A (en) * | 2019-03-25 | 2019-08-16 | 北京奇艺世纪科技有限公司 | Information processing method, interceptor, electronic equipment, and storage medium |
WO2020253452A1 (en) * | 2019-06-18 | 2020-12-24 | 北京字节跳动网络技术有限公司 | Status message pushing method, and method, device and apparatus for switching interaction content in live broadcast room |
CN113678137A (en) * | 2019-08-18 | 2021-11-19 | 聚好看科技股份有限公司 | Display device |
CN113678137B (en) * | 2019-08-18 | 2024-03-12 | 聚好看科技股份有限公司 | Display apparatus |
CN111726650A (en) * | 2020-06-30 | 2020-09-29 | 广州繁星互娱信息科技有限公司 | Video live broadcast method and device and computer storage medium |
CN112437316A (en) * | 2020-10-15 | 2021-03-02 | 北京三快在线科技有限公司 | Method and device for synchronously playing instant message and live video stream |
CN112330997A (en) * | 2020-11-13 | 2021-02-05 | 北京安博盛赢教育科技有限责任公司 | Method, device, medium and electronic equipment for controlling demonstration video |
CN112330996A (en) * | 2020-11-13 | 2021-02-05 | 北京安博盛赢教育科技有限责任公司 | Control method, device, medium and electronic equipment for live broadcast teaching |
CN112966674A (en) * | 2020-12-07 | 2021-06-15 | 北京字节跳动网络技术有限公司 | Question explanation method and device, and electronic equipment |
CN112839235A (en) * | 2020-12-30 | 2021-05-25 | 北京达佳互联信息技术有限公司 | Display method, comment sending method, video frame pushing method and related equipment |
CN112839235B (en) * | 2020-12-30 | 2023-04-07 | 北京达佳互联信息技术有限公司 | Display method, comment sending method, video frame pushing method and related equipment |
CN115883918A (en) * | 2021-09-22 | 2023-03-31 | 北京百度网讯科技有限公司 | Method, apparatus, device and storage medium for processing video stream |
CN114268806A (en) * | 2021-12-24 | 2022-04-01 | 南京纳加软件股份有限公司 | Signal processing method of high-smoothness live broadcast control system |
Also Published As
Publication number | Publication date |
---|---|
CN108260016B (en) | 2020-07-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108260016B (en) | Live broadcast processing method, device, equipment, system and storage medium | |
US20210281909A1 (en) | Method and apparatus for sharing video, and storage medium | |
CN108449605B (en) | Information synchronous playing method, device, equipment, system and storage medium | |
US10455291B2 (en) | Live video stream sharing | |
CN106303590B (en) | Method and device for inviting others to watch a video | |
CN104168503B (en) | Method and device for sharing video information | |
CN110166788B (en) | Information synchronous playing method, device and storage medium | |
CN111970533A (en) | Interaction method and device for live broadcast room and electronic equipment | |
CN106937131B (en) | Video stream switching method, device and equipment | |
CN109151565B (en) | Method and device for playing voice, electronic equipment and storage medium | |
US9736518B2 (en) | Content streaming and broadcasting | |
US9756373B2 (en) | Content streaming and broadcasting | |
CN111031332B (en) | Data interaction method, device, server and storage medium | |
CN111432284B (en) | Bullet screen interaction method of multimedia terminal and multimedia terminal | |
CN104023263A (en) | Video selection providing method and device thereof | |
CN111866531A (en) | Live video processing method and device, electronic equipment and storage medium | |
US20230007312A1 (en) | Method and apparatus for information interaction in live broadcast room | |
CN111343477B (en) | Data transmission method and device, electronic equipment and storage medium | |
CN110191367B (en) | Information synchronization processing method and device and electronic equipment | |
US20220078221A1 (en) | Interactive method and apparatus for multimedia service | |
WO2018076358A1 (en) | Multimedia information playback method and system, standardized server and broadcasting terminal | |
CN114025180A (en) | Game operation synchronization system, method, device, equipment and storage medium | |
CN106331830A (en) | Method, device, equipment and system for processing live broadcast | |
CN109729367B (en) | Method and device for providing live media content information and electronic equipment | |
CN113301363A (en) | Live broadcast information processing method and device and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||