CN112788360B - Live broadcast method, live broadcast device and computer program product - Google Patents


Info

Publication number
CN112788360B
CN112788360B (application CN202011631129.4A)
Authority
CN
China
Prior art keywords
data
information
time
playing
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011631129.4A
Other languages
Chinese (zh)
Other versions
CN112788360A (en)
Inventor
刘洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202011631129.4A priority Critical patent/CN112788360B/en
Publication of CN112788360A publication Critical patent/CN112788360A/en
Application granted granted Critical
Publication of CN112788360B publication Critical patent/CN112788360B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/21 Server components or server architectures
    • H04N 21/218 Source of audio or video content, e.g. local disk arrays
    • H04N 21/2187 Live feed
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/231 Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • H04N 21/23106 Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion involving caching operations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/234 Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N 21/2343 Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N 21/234345 Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements the reformatting operation being performed only on part of the stream, e.g. a region of the image or a time segment
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/4302 Content synchronisation processes, e.g. decoder synchronisation

Abstract

The present disclosure relates to a live broadcast method, a live broadcast apparatus and a computer program product. The method includes: receiving a stutter message, where the stutter message includes a start time of the stutter and a stutter duration; and extracting cached stutter data from a cache area according to the stutter message and delivering the stutter data to a client device for playback by the client device, where the stutter data includes audio and video data starting from the start time of the stutter and lasting a preset time, the preset time is greater than or equal to the sum of the stutter duration and a catch-up duration, and the catch-up duration is the time from when the client device starts playing the stutter data until playback is synchronized with the current live content. With this scheme, once a stutter during a live broadcast is resolved, playback can resume from the audio and video data at the point where the stutter occurred, giving a better experience.

Description

Live broadcast method, live broadcast device and computer program product
Technical Field
The present disclosure relates to the field of audio and video playback, and in particular, to a live broadcast method, a live broadcast apparatus, and a computer program product.
Background
In the related art, when watching a live broadcast on a live streaming platform under weak network conditions, long stutters can occur; when the network recovers, the live content jumps directly to the anchor's current live progress, skipping everything between the stutter and the recovery. This results in the loss of live content.
The existing live broadcast pipeline mainly comprises audio and video capture, encoding, stream pushing, stream pulling, decoding and playback. Because capture, pushing and pulling are all real-time, the server pushes data to the client synchronously; if a long stutter occurs, much of the live content is lost.
At present, after a stutter occurs, the user can only review the missed content in the replay after the live broadcast ends, which is inconvenient for obtaining the corresponding information in time. Moreover, because regions differ in network infrastructure coverage, bandwidth and so on, stutters caused by network problems during live broadcasts are common. The measure adopted by most platforms at present is to jump directly to the current live content and discard the content missed during the stutter.
Disclosure of Invention
The disclosure provides a live broadcast method, a live broadcast apparatus and a computer program product, which at least solve the problem that, during a live broadcast, the content missed during a stutter cannot be played directly once the stutter ends, so that the user has to review that video after the live broadcast is finished. The technical scheme of the present disclosure is as follows:
According to a first aspect of the embodiments of the present disclosure, there is provided a live broadcast method, including: receiving stutter information, where the stutter information includes a start time of the stutter and a stutter duration; and extracting cached stutter data from a cache area according to the stutter information and delivering the stutter data to a client device for playback by the client device, where the stutter data includes audio and video data starting from the start time of the stutter and lasting a preset time, the preset time is greater than or equal to the sum of the stutter duration and a catch-up duration, and the catch-up duration is the time from when the client device starts playing the stutter data until the current live content is played synchronously.
Optionally, before extracting the cached stutter data from the cache area according to the stutter information, the method further includes: encoding the audio and video data to obtain encoded audio and video data; and dividing the encoded audio and video data into a plurality of data segments and caching the data segments in the cache area.
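Purely as an illustrative sketch (not part of the claimed scheme), the segment-and-cache step above can be modeled as a ring buffer of timed segments; the segment length, cache capacity and class name here are assumptions for illustration:

```python
from collections import deque

class SegmentCache:
    """Ring buffer of encoded A/V segments, keyed by start time.

    Each entry records the segment's start time and duration, like the
    'data segment information' in the description; the oldest segments
    are evicted once the cache exceeds its capacity.
    """

    def __init__(self, capacity_seconds: float = 300.0):
        self.capacity = capacity_seconds
        self.segments = deque()  # (start_time, duration, payload)

    def add(self, start_time: float, duration: float, payload: bytes):
        self.segments.append((start_time, duration, payload))
        # Evict oldest segments until total cached duration fits capacity
        while sum(d for _, d, _ in self.segments) > self.capacity:
            self.segments.popleft()

    def extract(self, stutter_start: float, preset_time: float):
        """Return cached segments overlapping [stutter_start, stutter_start + preset_time)."""
        end = stutter_start + preset_time
        return [s for s in self.segments
                if s[0] + s[1] > stutter_start and s[0] < end]
```

On a stutter report, the server would call `extract` with the reported start time and the computed preset time, then deliver the returned segments to the client.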
Optionally, the step of encoding the audio and video data includes: encoding the audio and video data with a plurality of sets of encoding parameters to obtain a plurality of encoded audio and video streams of different definitions, where the encoding parameters include audio bit rate and video bit rate.
Optionally, the step of extracting the cached stutter data from the cache area according to the stutter information and delivering the stutter data to the client device for playback by the client device includes: determining video information at least according to the stutter information, where the video information at least includes the number of data segments in the stutter data and the start time of the first data segment in the stutter data; determining the corresponding stutter data according to the video information; and sending the video information and the corresponding stutter data to the client device.
Optionally, the step of sending the video information and the corresponding stutter data to the client device further includes: sending the video information to the client device; and sending the stutter data to the client device upon receiving predetermined information from the client device, where the predetermined information indicates that caching the stutter segments is allowed.
Optionally, before determining the video information at least according to the stutter information, the method further includes: recording data segment information corresponding to each data segment, where the data segment information at least includes the start time and the duration of the data segment. The step of determining the video information at least according to the stutter information includes: calculating the catch-up duration according to the stutter duration, the live broadcast speed and a predetermined playback speed; and determining, according to the start time of the stutter, the live broadcast speed, the catch-up duration, a preset parameter and each piece of data segment information, the number of data segments in the stutter data, the start time of the first data segment, and the data segment information corresponding to those data segments, where the preset parameter is the predetermined playback speed or the stutter duration.
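The patent does not spell out the catch-up arithmetic. Under the assumption of a constant predetermined playback speed, it follows from a simple balance: while the client replays cached data faster than real time, the live broadcast keeps advancing, so the sped-up playback must cover both the stuttered content and the content produced during the catch-up itself. A minimal sketch:

```python
def catch_up_duration(stutter_duration: float, live_speed: float = 1.0,
                      playback_speed: float = 1.5) -> float:
    """Wall-clock time needed to re-synchronize with the live edge.

    While the client plays at `playback_speed`, the broadcast advances at
    `live_speed`, so over a catch-up time t:
        playback_speed * t = stutter_duration + live_speed * t
    which gives t = stutter_duration / (playback_speed - live_speed).
    """
    if playback_speed <= live_speed:
        raise ValueError("playback speed must exceed live speed to catch up")
    return stutter_duration / (playback_speed - live_speed)
```

For example, under these assumptions a 30-second stutter replayed at 1.5x takes 60 seconds of wall-clock time to catch up, during which 90 seconds of content (the 30 missed plus 60 more of live) are consumed.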
Optionally, the preset time is greater than or equal to the sum of the stutter duration, the catch-up duration and a buffering time, where the buffering time is the time required by the client device to buffer the stutter data.
Optionally, the video information further includes the definition of the data segments, and determining the video information at least according to the stutter information includes: determining the definition of the stutter segments according to the catch-up duration.
Optionally, the step of determining the definition of the stutter segments according to the catch-up duration includes: when the catch-up duration is greater than a first threshold and less than or equal to a second threshold, determining the definition of the stutter segments to be a first definition, which is lower than the definition of the live broadcast; when the catch-up duration is less than or equal to the first threshold, determining the definition of the stutter segments to be the definition of the live broadcast; and when the catch-up duration is greater than the second threshold, determining the definition of the stutter segments to be a second definition, which is lower than the first definition.
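The threshold logic above can be sketched as follows; the threshold values and the concrete definition labels are illustrative assumptions, since the patent only fixes the ordering (live definition, then a lower first definition, then a still lower second definition):

```python
def pick_definition(catch_up: float, live_definition: str,
                    first_threshold: float = 10.0,
                    second_threshold: float = 60.0) -> str:
    """Longer catch-ups get lower definitions so the cached segments can be
    delivered and played back faster; short catch-ups keep live quality."""
    if catch_up <= first_threshold:
        return live_definition  # short catch-up: keep the live definition
    if catch_up <= second_threshold:
        return "720p"           # first definition (lower than live)
    return "480p"               # second definition (lower still)
```

The 10 s / 60 s thresholds and the "720p"/"480p" labels are placeholders; a real deployment would tune them against bandwidth and encoder ladders.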
Optionally, before determining the stutter data according to the stutter information and delivering it to the client device for playback, the method further includes: acquiring first bullet-screen comment information generated during the live broadcast; and integrating the first bullet-screen comment information into the corresponding data segment information.
Optionally, the method further includes: acquiring second bullet-screen comment information received by the client device while it plays the stutter segments; and integrating the second bullet-screen comment information into the data segment information.
According to a second aspect of the embodiments of the present disclosure, there is provided a live broadcast method, including: acquiring stutter information and sending it to a server device, where the stutter information includes the start time of the stutter and the stutter duration; and receiving and playing stutter data sent by the server device, where the stutter data is audio and video data of a preset time determined at least according to the stutter information, the start of the preset time is the start time of the stutter, the audio and video data is cached during the live broadcast, the preset time is greater than or equal to the sum of the stutter duration and a catch-up duration, and the catch-up duration is the time from the start of playing the stutter data until the current live content is played synchronously.
Optionally, the step of receiving and playing the stutter data sent by the server device includes: receiving video information sent by the server device, where the video information at least includes the number of data segments in the stutter data and the start time of the first data segment in the stutter data, the video information is determined by the server device at least according to the stutter information, and the data segments are obtained by the server device segmenting the audio and video data; determining, according to the video information, to cache the stutter data; and judging whether the cached amount of the stutter data meets the condition for continuous playback, and if so, playing the stutter data.
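The patent leaves the "condition for continuous playback" unspecified. One plausible sufficient condition, assuming roughly constant download and consumption rates, is that the buffered amount bridges the gap between consumption and download for the remaining segments:

```python
def can_start_playback(buffered: float, remaining: float,
                       download_rate: float, playback_speed: float) -> bool:
    """All quantities in content-seconds; rates are content-seconds per
    wall-clock second. Under constant rates, the buffer drains at
    (playback_speed - download_rate), so it must last until the remaining
    download finishes for playback to proceed without rebuffering."""
    if download_rate >= playback_speed:
        return buffered > 0 or remaining == 0.0
    return buffered >= remaining * (playback_speed - download_rate) / download_rate
```

This is only a sketch of one possible check; a production player would also account for rate variance and keep a safety margin.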
Optionally, the step of determining to cache the stutter data according to the video information includes: determining, according to the video information, whether caching the stutter data is allowed; and caching the stutter data if it is allowed.
Optionally, the step of judging whether the cached amount of the stutter data meets the condition for continuous playback and, if so, playing the stutter data includes: when the cached amount of the stutter data meets the condition for continuous playback, determining a playback policy, where the playback policy includes the playback start time within the first data segment of the stutter data and the playback speed; and playing the stutter data according to the playback policy.
Optionally, the video information further includes the catch-up duration and the data amount of the stutter data, and the step of determining the playback policy when the cached amount of the stutter data meets the condition for continuous playback includes: determining the playback speed of the stutter data according to the data amount of the stutter data and the catch-up duration.
Optionally, the step of determining the playback speed of the stutter segments according to the data amount and the catch-up duration includes: determining an increasing playback speed within a first time period according to the data amount of the first time period and the first time period; and determining a decreasing playback speed within a second time period according to the data amount of the second time period and the second time period, where the catch-up duration is divided chronologically into the first time period and the second time period, and the data amount consists of the data amount of the first time period and the data amount of the second time period.
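The increase-then-decrease profile can be illustrated with a triangular speed curve; the exact shape is an assumption, since the patent only requires the speed to rise during the first period and fall during the second so the hand-off back to live playback is smooth:

```python
def speed_at(t: float, catch_up: float, peak_speed: float,
             live_speed: float = 1.0) -> float:
    """Triangular profile over [0, catch_up]: ramp from live speed up to
    `peak_speed` at the midpoint, then back down to live speed, avoiding
    an abrupt speed jump when rejoining the live stream."""
    half = catch_up / 2.0
    if t < half:
        return live_speed + (peak_speed - live_speed) * t / half
    return peak_speed - (peak_speed - live_speed) * (t - half) / half

def peak_for(stutter: float, catch_up: float, live_speed: float = 1.0) -> float:
    """Peak that makes the triangle consume exactly the stuttered content
    plus the live content produced while catching up: the average speed
    (live + peak) / 2 over `catch_up` must equal live + stutter / catch_up."""
    return live_speed + 2.0 * stutter / catch_up
```

For example, a 30 s stutter caught up over 60 s peaks at 2.0x: the average speed is 1.5x, consuming 90 s of content (30 missed plus 60 live) in 60 s of wall-clock time.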
According to a third aspect of the embodiments of the present disclosure, there is provided a live broadcast method, including: during the live broadcast, the server device caches the live audio and video data in a cache area; the client device records stutter information and sends it to the server device, where the stutter information includes the start time of the stutter and the stutter duration; the server device extracts cached stutter data from the cache area and delivers it to the client device, where the stutter data includes audio and video data starting from the start time of the stutter and lasting a preset time, the preset time is greater than or equal to the sum of the stutter duration and a catch-up duration, and the catch-up duration is the time from when the client device starts playing the stutter data until the current live content is played synchronously; and the client device receives the stutter data and plays the stutter segments.
Optionally, before the server device extracts the cached stutter data from the cache area, the method further includes: the server device encodes the audio and video data to obtain encoded audio and video data; and the server device divides the encoded audio and video data into a plurality of data segments and caches them in the cache area.
Optionally, the step of the server device encoding the audio and video data includes: the server device encodes the audio and video data with a plurality of sets of encoding parameters to obtain encoded audio and video streams of different definitions.
Optionally, the step of the server device extracting the cached stutter data from the cache area and delivering it to the client device includes: the server device determines video information at least according to the stutter information, where the video information at least includes the number of data segments in the stutter data and the start time of the first data segment; the server device determines the corresponding stutter data according to the video information; and the server device sends the video information and the corresponding stutter data to the client device.
Optionally, the step of the client device receiving and playing the stutter data includes: receiving the video information sent by the server device; determining, according to the video information, whether caching the stutter segments is allowed; if so, sending predetermined information to the server device, where the predetermined information indicates that caching the stutter segments is allowed; caching the stutter data; and judging whether the cached amount of the stutter data meets the condition for continuous playback, and if so, playing the stutter data.
Optionally, the step of judging whether the cached amount of the stutter data meets the condition for continuous playback and, if so, playing the stutter data includes: when the cached amount of the stutter data meets the condition for continuous playback, the client device determines a playback policy, where the playback policy includes the playback start time within the first data segment of the stutter data and the playback speed; and the client device plays the stutter data according to the playback policy.
Optionally, the video information further includes the catch-up duration and the data amount of the stutter data, and the step of the client device determining the playback policy includes: the client device determines the playback speed of the stutter segments according to the received data amount of the stutter data and the catch-up duration.
Optionally, the step of the client device determining the playback speed of the stutter segments according to the data amount and the catch-up duration includes: the client device determines an increasing playback speed within a first time period according to the data amount of the first time period and the first time period; and the client device determines a decreasing playback speed within a second time period according to the data amount of the second time period and the second time period, where the catch-up duration is divided chronologically into the first time period and the second time period, and the data amount consists of the data amount of the first time period and the data amount of the second time period.
Optionally, before the server device determines the video information at least according to the stutter information, the method further includes: the server device records data segment information corresponding to each data segment, where the data segment information at least includes the start time and the duration of the data segment. The step of the server device determining the video information at least according to the stutter information includes: the server device calculates the catch-up duration according to the stutter duration, the live broadcast speed and a predetermined playback speed; and the server device determines, according to the start time of the stutter, the live broadcast speed, the catch-up duration, a preset parameter and each piece of data segment information, the number of data segments in the stutter data, the start time of the first data segment, and the data segment information corresponding to those data segments, where the preset parameter is the predetermined playback speed or the stutter duration.
Optionally, the video information further includes the definition of the data segments, and the step of the server device determining the video information at least according to the stutter information includes: the server device determines the definition of the stutter segments according to the catch-up duration.
Optionally, before the server device determines the stutter data according to the stutter information and delivers it to the client device, where the stutter data includes stutter segments, the method further includes: the server device acquires first bullet-screen comment information generated during the live broadcast; and the server device integrates the first bullet-screen comment information into the corresponding data segment information.
Optionally, the method further includes: the server device acquires second bullet-screen comment information received while the client device plays the stutter segments; and the server device integrates the second bullet-screen comment information into the corresponding data segment information.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a live broadcast apparatus, including: a first acquisition unit configured to receive stutter information, where the stutter information includes the start time of the stutter and the stutter duration; and a sending unit configured to extract cached stutter data from a cache area according to the stutter information and deliver the stutter data to a client device for playback by the client device, where the stutter data includes audio and video data starting from the start time of the stutter and lasting a preset time, the preset time is greater than or equal to the sum of the stutter duration and a catch-up duration, and the catch-up duration is the time from when the client device starts playing the stutter data until the current live content is played synchronously.
Optionally, the live broadcast apparatus further includes: an encoding unit configured to encode the audio and video data before the cached stutter data is extracted from the cache area according to the stutter information, to obtain encoded audio and video data; and a cache unit configured to divide the encoded audio and video data into a plurality of data segments and cache them in the cache area.
Optionally, the encoding unit is further configured to: encode the audio and video data with a plurality of sets of encoding parameters to obtain a plurality of encoded audio and video streams of different definitions, where the encoding parameters include audio bit rate and video bit rate.
Optionally, the sending unit includes: a first determining module configured to determine video information at least according to the stutter information, where the video information at least includes the number of data segments in the stutter data and the start time of the first data segment in the stutter data; a second determining module configured to determine the corresponding stutter data according to the video information; and a sending module configured to send the video information and the corresponding stutter data to the client device.
Optionally, the sending module includes: a first sending submodule configured to send the video information to the client device; and a second sending submodule configured to send the stutter data to the client device upon receiving predetermined information from the client device, where the predetermined information indicates that caching the stutter data is allowed.
Optionally, the live broadcast apparatus further includes: a recording unit configured to record, before the video information is determined at least according to the stutter information, data segment information corresponding to each data segment, where the data segment information at least includes the start time and the duration of the data segment; and the first determining module includes: a first determining submodule configured to determine the catch-up duration according to the stutter duration, the live broadcast speed and a predetermined playback speed; and a second determining submodule configured to determine, according to the start time of the stutter, the live broadcast speed, the catch-up duration, a preset parameter and each piece of data segment information, the number of data segments in the stutter data, the start time of the first data segment, and the data segment information corresponding to those data segments, where the preset parameter is the predetermined playback speed or the stutter duration.
Optionally, the preset time is greater than or equal to the sum of the stutter duration, the catch-up duration and a buffering time, where the buffering time is the time required by the client device to buffer the stutter data.
Optionally, the video information further includes the definition of the data segments, and the first determining module further includes: a third determining submodule configured to determine the definition of the stutter data according to the catch-up duration.
Optionally, the third determining submodule includes: a fourth determining submodule configured to determine, when the catch-up duration is greater than a first threshold and less than or equal to a second threshold, the definition of the stutter data to be a first definition, which is lower than the definition of the live broadcast; a fifth determining submodule configured to determine, when the catch-up duration is less than or equal to the first threshold, the definition of the stutter data to be the definition of the live broadcast; and a sixth determining submodule configured to determine, when the catch-up duration is greater than the second threshold, the definition of the stutter data to be a second definition, which is lower than the first definition.
Optionally, the live broadcast apparatus further includes: a second acquisition unit configured to acquire first bullet-screen comment information generated during the live broadcast, before the cached stutter data is extracted from the cache area according to the stutter information and delivered to the client device for playback; and a first integration unit configured to integrate the first bullet-screen comment information into the corresponding data segment information.
Optionally, the live broadcast apparatus further includes: a third acquisition unit configured to acquire second bullet-screen comment information received while the client device plays the stutter data; and a second integration unit configured to integrate the second bullet-screen comment information into the data segment information.
According to a fifth aspect of embodiments of the present disclosure, there is provided a live broadcast apparatus, including: the second acquisition unit is configured to acquire the jamming information and send the jamming information to the server equipment, wherein the jamming information comprises the starting time and the jamming duration of the jamming; the receiving unit is configured to receive and play the click data sent by the server device, wherein the click data is audio and video data with preset time determined at least according to the click information, the starting time of the preset time is the starting time of the click, the audio and video data is cached in a live broadcast process, the preset time is greater than or equal to the sum of the click duration and the catch-up duration, and the catch-up duration is the time from the start of playing the click data until the current live broadcast content is synchronously played.
Optionally, the receiving unit includes: the receiving module is configured to receive video information sent by the server-side equipment, and the video information at least comprises: the number of the data fragments in the cartoon data and the starting time of the first data fragment in the cartoon data, wherein the video information is determined at least according to the cartoon information, and the data fragments are obtained by dividing the audio and video data; a determining module configured to perform determining to cache the katon data according to the video information; and the playing module is configured to execute the step of judging whether the data volume of the cached cartoon data accords with the condition of continuous playing, and if so, playing the cartoon data.
Optionally, the determining module includes: a determining submodule configured to determine, according to the video information, whether caching of the stutter data is allowed; and a caching submodule configured to cache the stutter data if caching is allowed.
Optionally, the playing module includes: a first determining submodule configured to determine a playing policy when the cached amount of the stutter data meets the condition of continuous playing, the playing policy including the play start time and the playback speed of the first data segment in the stutter data; and a playing submodule configured to play the stutter data according to the playing policy.
Optionally, the video information further includes the catch-up duration and the data amount of the stutter data, and the first determining submodule is configured to determine the playback speed of the stutter data according to the data amount of the stutter data and the catch-up duration.
Optionally, the first determining submodule includes: a second determining submodule configured to determine the playback speed increase within a first time period according to the data amount of the first time period and the length of the first time period; and a third determining submodule configured to determine the playback speed decrease within a second time period according to the data amount of the second time period and the length of the second time period, where the catch-up duration is divided in chronological order into the first time period and the second time period, and the data amount consists of the data amount of the first time period and the data amount of the second time period.
According to a sixth aspect of embodiments of the present disclosure, there is provided an electronic device, including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the instructions to implement any one of the live broadcast methods described above.
According to a seventh aspect of embodiments of the present disclosure, there is provided a system, including: a server device configured to perform any one of the live broadcast methods; and a client device configured to perform any one of the live broadcast methods.
According to an eighth aspect of embodiments of the present disclosure, there is provided a storage medium storing instructions which, when executed by a processor of an electronic device, enable the electronic device to perform any one of the live broadcast methods.
According to a ninth aspect of embodiments of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements any one of the live methods.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
in the above scheme, the stutter information is first received, then the cached stutter data is extracted from the cache area according to the stutter information, and the stutter data is issued to the client device for playing. This solves the problem in the prior art that when the stutter disappears, the content of the stutter period cannot be played directly and the user must review the video of the stutter period after the live broadcast ends. In addition, in this scheme, the stutter data includes audio/video data starting from the start time of the stutter and lasting for a preset time; it includes not only the data of the stutter period but also the live data corresponding to the catch-up duration, so that after the stutter disappears and the stutter data has been played, playback is synchronized with the current live broadcast.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure and do not constitute an undue limitation on the disclosure.
Fig. 1 is a block diagram illustrating an application scenario of a live broadcast method according to an exemplary embodiment.
Fig. 2 is a flow diagram illustrating a live method according to an example embodiment.
Fig. 3 is a flow chart illustrating a live method according to yet another exemplary embodiment.
Fig. 4 is a flow diagram illustrating a live method according to another exemplary embodiment.
Fig. 5 is a flow chart illustrating a live method according to yet another exemplary embodiment.
Fig. 6 is a block diagram illustrating a live device according to an exemplary embodiment.
Fig. 7 is a block diagram illustrating a live device according to an exemplary embodiment.
Fig. 8 is a block diagram of another electronic device, shown in accordance with an exemplary embodiment.
Fig. 9 is a block diagram illustrating a live system in accordance with an exemplary embodiment.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
As described in the background, in the prior art, when the stutter disappears during a live broadcast, the live content is played directly, and there is the problem that the user needs to review the video of the stutter period after the live broadcast ends.
Fig. 1 is a block diagram illustrating an application scenario of a live broadcast method according to an exemplary embodiment. As shown in fig. 1, the following live method may be applied in this implementation environment. The implementation environment includes a client device 01 and a server device 02, and the client device 01 and the server device 02 may be interconnected and communicate through a network.
The client device 01 is a device for playing audio/video data; the server device 02 caches, in real time, the audio/video data played by the client device 01, and when a stutter occurs at the client device 01, the server device 02 acquires the stutter data and issues it to the client device for playing.
The client device 01 may be any electronic product that can perform man-machine interaction with a user through one or more modes of a keyboard, a touch pad, a touch screen, a remote controller, a voice interaction or handwriting device, such as a mobile phone, a tablet computer, a palmtop computer, a personal computer (Personal Computer, PC), a wearable device, a smart tv, etc.
The server device 02 may be a server, a server cluster formed by a plurality of servers, or a cloud computing service center. The server device 02 may include a processor, memory, network interface, and the like.
Those skilled in the art will appreciate that the above-described client devices and server devices are merely examples; other client devices or server devices that exist now or may appear hereafter are, where applicable to the present disclosure, also intended to be within the scope of protection of the present disclosure and are incorporated herein by reference.
Based on this, embodiments of the present disclosure provide a live broadcast method, a live broadcast apparatus, and a computer program product.
The execution body of the live broadcast method provided by the embodiments of the present disclosure may be the client device or the server device, or a functional module and/or functional entity in the client device or the server device capable of implementing the live broadcast method, which may be determined according to actual use requirements; the embodiments of the present disclosure are not limited in this respect. The live broadcast method provided by the embodiments of the present disclosure is described below by taking the server device as the execution body as an example.
Fig. 2 is a schematic flow chart of a live broadcast method according to an exemplary embodiment, and as shown in fig. 2, the live broadcast method is used in a server device, and includes the following steps S11-S12.
In step S11, stutter information is received, where the stutter information includes the start time and duration of the stutter;
in step S12, the cached stutter data is extracted from the cache area according to the stutter information, and the stutter data is issued to the client device for playing, where the stutter data includes audio/video data starting from the start time of the stutter and lasting for a preset time, the preset time is greater than or equal to the sum of the stutter duration and the catch-up duration, and the catch-up duration is the time from when the client device starts playing the stutter data until the current live content is played synchronously.
In the above embodiment, the stutter information is first received, the cached stutter data is extracted from the cache area according to the stutter information, and the stutter data is transmitted to the client device for playing. This solves the problem in the prior art that when the stutter disappears, the content of the stutter period cannot be played directly and the user must review the video of the stutter period after the live broadcast ends. In addition, in this scheme, the stutter data includes audio/video data starting from the start time of the stutter and lasting for a preset time; it includes not only the data of the stutter period but also the live data corresponding to the catch-up duration, so that after the stutter disappears and the stutter data has been played, playback is synchronized with the current live broadcast.

In order to buffer the audio/video data during the live broadcast, in an embodiment of the present application, the method further includes, before step S12: encoding the audio/video data acquired in real time during the live broadcast to obtain encoded audio/video data; specifically, the audio/video data may be encoded according to video compression standards such as H.261, H.263, and H.264. The encoded audio/video data is then divided into a plurality of data segments, and the data segments are cached in the cache area.
The audio/video data may be divided into a plurality of data segments according to the size of the audio/video data, or according to the playing time of the audio/video data; for example, in the playing order of the video, every 50 KB of audio/video data forms one data segment, or every 30 s of playing time forms one data segment. The audio/video may be cached at different definitions, for example 720P, 480P, or 270P, and may of course also be cached in audio-only form; the space required for caching differs with the cached definition. Because the audio/video data is cached in the cache area in advance as a plurality of data segments, that is, all audio/video data during the live broadcast is cached in advance, the stutter data determined according to the start time and duration of the stutter is also already in the cache area. The stutter data is read out of the cache area and then issued to the client device, and the client device can play the received stutter data while still receiving it, that is, it does not need to wait for all of the stutter data before starting playback, which saves resources. Compared with the prior art, caching is not performed only after the stutter occurs, so network delay cannot make the cached data inaccurate, which ensures that the user can see the complete video content of the stutter period after the stutter is eliminated.
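The division into segments described above can be sketched as follows. This is an illustrative sketch only, not the patent's implementation: the names (`Segment`, `segment_by_duration`) and the frame-based model are assumptions, and it uses the text's example of one segment per 30 s of play time.

```python
from dataclasses import dataclass

SEGMENT_SECONDS = 30  # the text's example: one segment per 30 s of play time

@dataclass
class Segment:
    index: int
    start_time: float  # seconds since the start of the broadcast
    duration: float
    payload: bytes

def segment_by_duration(frames, fps=30):
    """Group encoded frames into segments of roughly SEGMENT_SECONDS each."""
    segments, buf, seg_start = [], [], 0.0
    for i, frame in enumerate(frames):
        buf.append(frame)
        elapsed = (i + 1) / fps
        if elapsed - seg_start >= SEGMENT_SECONDS:
            segments.append(Segment(len(segments), seg_start,
                                    elapsed - seg_start, b"".join(buf)))
            buf, seg_start = [], elapsed
    if buf:  # flush the trailing partial segment
        segments.append(Segment(len(segments), seg_start,
                                len(frames) / fps - seg_start, b"".join(buf)))
    return segments
```

Splitting by size (e.g. every 50 KB) would work the same way, accumulating bytes instead of elapsed play time.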
In addition, in this method, the corresponding data is not cached by the client device; instead, the live audio/video data is acquired and cached in real time, and the corresponding stutter data is sent to the client device, which avoids the problem of incomplete client-side cached data caused by the stutter itself.
In a more specific embodiment of the present application, encoding the audio/video data includes: encoding the audio/video data with a plurality of sets of encoding parameters to obtain a plurality of pieces of encoded audio/video data of different definitions, where the encoding parameters include the bit rate and the video code rate. Because the encoding parameters differ, the definition of the encoded audio/video data also differs. In addition, the definition is related not only to the encoding mode but also to the source video; the definition of the source video is influenced by the acquisition source, which may be a device such as a camera that captures audio/video data, so the definition of the source video is related to the resolution of the camera. A person skilled in the art can select suitable encoding parameters according to the actual situation.
In one embodiment of the present application, as shown in fig. 3, step S12 is implemented through step S120, step S121, and step S122.
In step S120, video information is determined at least according to the stutter information, where the video information at least includes: the number of data segments in the stutter data and the start time of the first data segment;
The video information corresponding to the stutter segment can be accurately determined by acquiring the number of data segments and the start time of the first data segment, for playback by the client device. For example, if the number of data segments is 10 and the start time of the first data segment is 10:25 on September 20, 2020, the video information corresponding to the stutter segment can be accurately determined given the size of each data segment.
In step S121, the corresponding stutter data is determined according to the video information;

in step S122, the video information and the corresponding stutter data are transmitted to the client device.
In one embodiment of the present application, step S122 includes: transmitting the video information to the client device; and transmitting the katon data to the client device under the condition that the predetermined information transmitted by the client device is received, wherein the predetermined information is information which characterizes that the katon fragments are allowed to be cached. That is, under the condition that the client device allows the information of the katon fragments to be cached, the video information corresponding to the katon fragments is sent to the client device for the client device to play, so that resources are saved. Under the condition that a plurality of client devices exist, corresponding jamming data can be sent to the client devices according to the specific jamming condition of each client device, so that smooth playing of the jamming data by all the client devices is realized.
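The two-phase delivery in step S122 can be sketched with a toy message exchange. The wire format and the `FakeConn` helper are hypothetical (the patent specifies no protocol); only the ordering matters: video information first, stutter data only after the client's "caching allowed" reply.

```python
import queue

class FakeConn:
    """Minimal in-memory stand-in for a client connection (illustrative only)."""
    def __init__(self):
        self.sent, self.replies = [], queue.Queue()
    def send(self, msg):
        self.sent.append(msg)
    def recv(self):
        return self.replies.get()

def deliver_stutter_data(conn, video_info, segments):
    """Send the video information first; send the segments only if the client
    answers with the predetermined information allowing caching."""
    conn.send({"type": "video_info", **video_info})
    if conn.recv().get("type") != "cache_allowed":
        return False  # client declined; no data is pushed, saving resources
    for seg in segments:
        conn.send({"type": "segment", **seg})
    return True
```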
In one embodiment of the present application, before step S120, the method further includes: recording data segment information corresponding to each data segment, wherein the data segment information at least comprises the starting time of the data segment and the duration of the data segment.
Of course, the data segment information may also include the end time, definition, size, and bullet-screen comment information of the data segment; the data segment can then be accurately determined according to its start time, duration, end time, definition, size, and bullet-screen comment information.
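A minimal record for the data segment information just listed might look like the following. The field names and default values are illustrative assumptions, not part of the patent; it only shows that the end time is derivable from the two required fields.

```python
from dataclasses import dataclass, field

@dataclass
class SegmentInfo:
    start_time: float                # start time of the segment (s) -- required
    duration: float                  # duration of the segment (s)   -- required
    definition: str = "720P"         # optional fields named in the text
    size_bytes: int = 0
    bullet_comments: list = field(default_factory=list)

    @property
    def end_time(self) -> float:
        # derivable, so it need not be stored separately
        return self.start_time + self.duration
```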
In one embodiment of the present application, step S120 includes: calculating the catch-up duration according to the stutter duration, the live speed, and the predetermined playback speed; and determining the number of data segments in the stutter segment, the start time of the first data segment, and the data segment information corresponding to the data segments according to the start time of the stutter, the live speed, the catch-up duration, a preset parameter, and the data segment information, where the preset parameter is the predetermined playback speed or the stutter duration. In practice, the audio/video data corresponding to the preset time of live broadcast starting from the start time of the stutter can be determined according to the start time of the stutter, the live speed, the catch-up duration, and the preset parameter; then, according to the data segment information and the corresponding audio/video data, the number of data segments in the stutter segment, the start time of the first data segment, and the corresponding data segment information are determined.
In an embodiment of the present application, the preset time is greater than or equal to the sum of the stutter duration, the catch-up duration, and a buffering time, where the buffering time is the time required for the client device to buffer the stutter data, so that playback catches up with the live broadcast while the stutter segments are played.
In a specific embodiment, equation 1 is used to calculate the catch-up duration: t1·f + t·f = 1.5·f·t, where t1 is the stutter duration, f is the live speed, 1.5 is the predetermined playback speed (that is, the stutter data is assumed to be played at 1.5 times the normal speed), and t is the catch-up duration. From equation 1, t = 2·t1; that is, assuming the stutter data is played at 1.5 times the normal speed, after the stutter is recovered the audio/video data of twice the stutter duration is played before playback is synchronized with the current live content. Of course, the catch-up duration changes with the stutter duration, the live speed, and the predetermined playback speed. The number of data segments in the stutter segment, the start time of the first data segment, and the corresponding data segment information are then determined according to the start time of the stutter, the live speed, the catch-up duration, the preset parameter, and the data segment information, so that the audio/video data within the stutter segment is accurately determined.
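Equation 1 (and its variant with a buffering time, equation 2 below) can be solved generically for the catch-up duration. This is a sketch under the stated assumptions; the function name and the `speed_ratio` parameterization are illustrative.

```python
def catch_up_duration(stutter_s, speed_ratio=1.5, buffer_s=0.0):
    """Solve (t1 + t2)*f + t*f = speed_ratio*f*t for the catch-up duration t.

    The live speed f cancels, leaving t = (t1 + t2) / (speed_ratio - 1);
    with speed_ratio = 1.5 this reduces to t = 2*(t1 + t2), matching the text.
    """
    if speed_ratio <= 1.0:
        raise ValueError("playback must be faster than live speed to catch up")
    return (stutter_s + buffer_s) / (speed_ratio - 1.0)
```

For example, `catch_up_duration(5.0)` gives 10.0 (equation 1, t = 2·t1), and `catch_up_duration(5.0, buffer_s=1.0)` gives 12.0 (equation 2, t = 2·(t1 + t2)).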
In another embodiment of the present application, the preset time further includes the time required for the client device to buffer the stutter segments. Specifically, taking this buffering time into account, the catch-up duration is determined according to equation 2: (t1 + t2)·f + t·f = 1.5·f·t, where t2 is the time the client device needs to buffer the stutter segments before playing them. The corresponding catch-up duration, t = 2·(t1 + t2), can likewise be calculated from this formula; as equation 2 shows, the larger t2 is, the larger t is.
If a gradual speed-up and slow-down are considered as well, the catch-up duration can be determined using equation 3: (t1 + t2)·f + (t3 + t4)·f = 1.5·f·t3 + 2·f·t4; that is, playback runs for a period t3 at a speed of 1.5f and for a period t4 at a speed of 2f, and the playback speed is adjusted adaptively according to the actual situation of the stutter segments.
In another embodiment of the present application, the video information further includes the definition of the data segments, and step S120 further includes: determining the definition of the stutter segment according to the catch-up duration.
In an embodiment of the present application, the step of determining the definition of the stutter segment according to the catch-up duration includes: determining the definition of the stutter segment to be a first definition, lower than the definition of the live broadcast, when the catch-up duration is greater than a first threshold and less than or equal to a second threshold; determining the definition of the stutter segment to be the definition of the live broadcast when the catch-up duration is less than or equal to the first threshold; and determining the definition of the stutter segment to be a second definition, lower than the first definition, when the catch-up duration is greater than the second threshold. Specifically, the relationship between the catch-up duration t and the definition p can be expressed by equation 4:

p = p_z if t ≤ x_1;  p = p_1 if x_1 < t ≤ x_2;  p = p_2 if t > x_2,

where x_1 is the first threshold, x_2 is the second threshold, p_1 is the first definition, p_z is the definition of the live broadcast, and p_2 is the second definition. That is, the larger the catch-up duration, the lower the definition; when the catch-up duration is long, audio-only playback may be adopted to catch up with the live broadcast.
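The piecewise rule of equation 4 maps directly to a small selection function. The concrete definition labels here are illustrative assumptions; the text only requires that the second definition be lower than the first, which in turn is lower than the live definition.

```python
def choose_definition(catch_up_s, x1, x2,
                      live_def="720P", first_def="480P", second_def="audio-only"):
    """Apply equation 4's thresholds to pick a playback definition."""
    if catch_up_s <= x1:
        return live_def      # short catch-up: keep the live definition p_z
    if catch_up_s <= x2:
        return first_def     # medium catch-up: lower definition p_1
    return second_def        # long catch-up: lowest definition p_2 (or audio only)
```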
In another embodiment of the present application, the playback speed may be determined according to the content of the played audio/video data; for example, when the content is a learning video, the playback speed should be reduced appropriately, and when the content is a purely entertaining video, the playback speed may be increased appropriately, so as to ensure a better viewing experience. Specifically, the playback speed, the play start time, and the preset time may be determined by a compensation SDK (software development kit).
In another embodiment of the present application, before step S12, the method further includes: acquiring first bullet-screen comment information during the live broadcast; and integrating the first bullet-screen comment information into the corresponding data segment information. Integrating the first bullet-screen comment information into the corresponding data segment information facilitates communication among viewers and ensures a better experience.
In yet another embodiment of the present application, the method further includes: acquiring second bullet-screen comment information received by the client device while playing the stutter segment; and integrating the second bullet-screen comment information into the data segment information. Integrating the second bullet-screen comment information into the data segment information facilitates communication among viewers and ensures a better experience.
Fig. 4 is a flowchart of a live method, as shown in fig. 4, for use in a client device, according to an exemplary embodiment, including the following steps S21-S22.
In step S21, stutter information is acquired and sent to the server device, where the stutter information includes the start time and duration of the stutter; of course, the stutter information may also include the end time of the stutter, its size, its definition, and bullet-screen comment information;
in step S22, the stutter data sent by the server device is received and played, where the stutter data is audio/video data of a preset time determined at least according to the stutter information, the start time of the preset time is the start time of the stutter, the audio/video data is cached during the live broadcast, the preset time is greater than or equal to the sum of the stutter duration and the catch-up duration, and the catch-up duration is the time from the start of playing the stutter data until playback is synchronized with the current live content.
In this scheme, the stutter information is first acquired and sent to the server device, and the stutter data sent by the server device is then received and played. With this scheme, when a stutter during the live broadcast is eliminated, the audio/video data from the moment the stutter occurred can continue to be played; the user does not need to review the audio/video of the stutter period after the live broadcast, so a user watching the video can see the content played during the stutter period, achieving a better experience. This solves the problem in the prior art that when the stutter disappears, the live content is directly continued and the user must go back after the live broadcast to watch the video of the stutter period in full. In addition, in this method, the audio/video data of the live broadcast is acquired and cached in real time, and the cached stutter data is extracted from the cache area according to the stutter information. Moreover, the corresponding data is not cached by the client device; instead, the live audio/video data is acquired and cached in real time, and the corresponding stutter data is sent to the client device, which avoids the problem of incomplete client-side cached data caused by the stutter.
In another embodiment of the present application, step S22 includes: receiving video information sent by the server device, where the video information at least includes the number of data segments in the stutter data and the start time of the first data segment in the stutter segments, the video information is determined by the server device at least according to the stutter information, and the data segments are obtained by the server through division of the audio/video data; determining to cache the stutter data according to the video information; and judging whether the amount of cached stutter data meets the condition of continuous playing, and if so, playing the stutter data. Specifically, the condition of continuous playing may be that the sum of the cached amount and the amount downloaded during playing is greater than or equal to the amount played (the product of playing time and playback speed), where the download time equals the play time. When there are multiple client devices, the corresponding stutter data can be sent to each client device according to its specific stutter situation, so that all client devices play their stutter data smoothly.
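The continuous-playing condition just described is a one-line inequality. A sketch under the text's assumptions (download time equals play time; rates and amounts in consistent byte units — the parameter names are illustrative):

```python
def meets_continuous_play(buffered_bytes, download_rate, play_rate, play_seconds):
    """Continuous-play condition from the text: the cached amount plus the
    amount downloaded while playing must be at least the amount playback
    consumes over the same interval."""
    downloaded = download_rate * play_seconds  # bytes fetched during playback
    consumed = play_rate * play_seconds        # bytes consumed by playback
    return buffered_bytes + downloaded >= consumed
```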
In still another embodiment of the present application, determining to cache the stutter data according to the video information includes: determining whether caching of the stutter data is allowed according to the video information; and caching the stutter data if caching is allowed. That is, the stutter data is sent to and played by the client device only when the client device allows the stutter segments to be cached, which saves resources.
In still another embodiment of the present application, the step of judging whether the amount of cached stutter data meets the condition of continuous playing and, if so, playing the stutter data includes: determining a playing policy when the cached amount of the stutter data meets the condition of continuous playing, where the playing policy includes the play start time and the playback speed of the first data segment in the stutter data; and playing the stutter data according to the playing policy. For example, when the played content is important, the playback speed is reduced, and when the played content is not important, the playback speed is increased. That is, different playing policies are formulated for different stutter data to ensure smooth playback of the stutter data.
In another embodiment of the present application, the video information further includes the catch-up duration and the data amount of the stutter data, and the step of determining the playing policy includes: determining the playback speed of the stutter segment according to the received data amount of the stutter data and the catch-up duration, where the catch-up duration is the time required from the start of playing the stutter segment until playback is synchronized with the current live content. Specifically, equation 1 is employed to calculate the catch-up duration: t1·f + t·f = 1.5·f·t, where t1 is the stutter duration, f is the live speed, 1.5 is the predetermined playback speed (that is, the stutter data is assumed to be played at 1.5 times the normal speed), and t is the catch-up duration; from equation 1, t = 2·t1. That is, assuming the stutter data is played at 1.5 times the normal speed, after the stutter is recovered the audio/video data of twice the stutter duration is played before playback is synchronized with the current live content. Of course, the catch-up duration changes with the stutter duration, the live speed, and the predetermined playback speed.
In still another embodiment of the present application, the step of determining the playback speed of the stutter segment according to the data amount and the catch-up duration includes: determining the playback speed increase within a first time period according to the data amount of the first time period and the length of the first time period; and determining the playback speed decrease within a second time period according to the data amount of the second time period and the length of the second time period, where the catch-up duration is divided in chronological order into the first time period and the second time period, and the data amount consists of the data amount of the first time period and the data amount of the second time period. Determining the playback speed of the stutter segment in each time period from the length of the time period and the corresponding data amount realizes smooth playback of the stutter segment: the live progress is caught up by fast playback, and once it has been caught up, the playback speed is changed back so that the live broadcast is watched normally. Specifically, if the stutter segment were played at 3 times the normal speed and the speed were then dropped directly to the normal speed, the user would feel uncomfortable, so the speed is reduced gradually to ensure the user experience; likewise, when the stutter segment starts to play, the playback speed is increased gradually so that the user adapts to it within a short time.
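The per-period speed derivation above can be sketched as a simple ratio: the speed of each period is the amount of content (in seconds of play time) assigned to it divided by the wall-clock length of the period. This is an illustrative assumption about how "data amount over time period" yields a speed; a higher ratio in the first period and a lower one in the second gives the ramp toward normal speed that the text describes.

```python
def period_speeds(content_s_first, content_s_second, t_first, t_second):
    """Playback speed for each period of the catch-up window:
    speed = seconds of content to consume / wall-clock seconds available."""
    if t_first <= 0 or t_second <= 0:
        raise ValueError("period lengths must be positive")
    return content_s_first / t_first, content_s_second / t_second
```

For example, consuming 20 s of content in the first 10 s and 12 s of content in the next 10 s gives speeds of 2.0x and 1.2x, falling back toward 1x.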
Fig. 5 is a flowchart of a live broadcast method, as shown in fig. 5, according to an exemplary embodiment, where the live broadcast method is used in a client device and a server device, and includes the following steps S31-S34.
In step S31, in the live broadcast process, the server device caches live broadcast audio and video data in a cache region;
in step S32, the client device records the stutter information and sends it to the server device, where the stutter information includes the start time and duration of the stutter;
in step S33, the server device determines the stutter data according to the stutter information and issues the stutter data to the client device, where the stutter data includes audio and video data starting from the start time of the stutter and lasting for a preset time, the preset time being greater than or equal to the sum of the stutter duration and a catch-up duration, and the catch-up duration being the time from when the client device starts playing the stutter data until playback is synchronized with the current live content;
in step S34, the client device receives the stutter data and plays the stutter segment.
In the above embodiment, the server device first caches the acquired audio and video data in the cache region. When a stutter at the client device is eliminated, the server device acquires the stutter information, including the start time of the stutter and the stutter duration, then determines the stutter data according to the stutter information and issues the stutter data to the client device for playback. With this scheme, once a stutter during live broadcast is eliminated, playback can continue from the audio and video data at which the stutter occurred, and the user does not have to replay the audio and video data of the stutter period after the live broadcast ends; a viewer can watch the content that was playing when the stutter struck, yielding a better experience. This solves the problem in the prior art that, when a stutter disappears, the live content simply continues playing and the user must go back and review the video of the stutter period after the live broadcast ends. In addition, in this scheme the stutter data includes audio and video data starting from the start time of the stutter and lasting for the preset time, so it covers not only the data of the stutter period but also the live data corresponding to the catch-up duration, and playback is synchronized with the current live broadcast after the stutter disappears and the stutter data has been played.
In still another embodiment of the present application, before the server device extracts the cached stutter data from the cache region, the method further includes: the server device encodes the audio and video data to obtain encoded audio and video data; the server device divides the encoded audio and video data into a plurality of data segments and caches them in the cache region. Specifically, the audio and video data can be encoded according to video compression standards such as H.261, H.263, and H.264 to obtain the encoded audio and video data.
In another embodiment of the present application, the step of the server device encoding the audio and video data includes: encoding the audio and video data with various encoding parameters to obtain a plurality of encoded audio and video data of different definitions, the encoding parameters including bit rate and video code rate. Because the encoding parameters differ, the definition of the encoded audio and video data also differs, and a person skilled in the art can select suitable encoding parameters according to the actual situation.
In yet another embodiment of the present application, step S33 includes: the server device determines video information at least according to the stutter information, where the video information includes at least the number of data segments in the stutter data and the start time of the first data segment in the stutter segment; the server device determines the corresponding stutter data according to the video information; and the server device transmits the stutter data, comprising the video information and the corresponding stutter segments, to the client device. By acquiring the number of data segments and the start time of the first data segment, the video information corresponding to the stutter segment can be determined accurately for playback by the client device. For example, if the number of data segments is 10 and the start time of the first data segment is 10:25 on September 20, 2020, the video information corresponding to the stutter segment can then be determined accurately from the size of each data segment.
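A minimal sketch of how a server might derive the two fields of the video information above (segment count and first-segment start time) from its recorded per-segment information. The 30-second segment list, the record layout, and the function name are hypothetical; the embodiment does not prescribe them:

```python
from datetime import datetime, timedelta

# Hypothetical per-segment records kept by the server (start time, duration).
segments = [
    {"start": datetime(2020, 9, 20, 10, 25, 0) + timedelta(seconds=30 * i),
     "duration": 30}
    for i in range(20)
]

def select_stutter_segments(stutter_start: datetime, preset_seconds: int):
    """Pick every cached segment overlapping [stutter_start,
    stutter_start + preset_seconds); return the segment count and the
    first segment's start time, i.e. the two video-information fields."""
    end = stutter_start + timedelta(seconds=preset_seconds)
    chosen = [s for s in segments
              if s["start"] < end
              and s["start"] + timedelta(seconds=s["duration"]) > stutter_start]
    return len(chosen), chosen[0]["start"]

count, first = select_stutter_segments(datetime(2020, 9, 20, 10, 25, 0), 300)
print(count, first)  # 10 segments of 30 s cover the 300 s preset time
```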
In one embodiment of the present application, step S34 includes: receiving the video information sent by the server device, where the video information includes at least the number of data segments in the stutter data and the start time of the first data segment in the stutter segment, the video information being determined by the server device at least according to the stutter information; determining, according to the video information, whether caching of the stutter segment is allowed; when caching of the stutter segment is allowed, sending predetermined information to the server device, the predetermined information indicating that caching of the stutter segment is allowed; caching the stutter data; and judging whether the data amount of the cached stutter data meets the condition for continuous playing, and if so, playing the stutter data. That is, only when the client device allows caching of the stutter segment is the corresponding stutter data sent to the client device for playback, thereby saving resources.
In still another embodiment of the present application, the step of judging whether the data amount of the cached stutter data meets the condition for continuous playing and, if so, playing the stutter data includes: when the data amount of the cached stutter data meets the condition for continuous playing, the client device determines a playing policy, the playing policy including the playing start time of the first data segment in the stutter segment and the playing speed; and the client device plays the stutter data according to the playing policy. That is, different playing policies are formulated for different stutter data so as to ensure smooth playback of the stutter data.
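The "condition for continuous playing" on the cached data amount is not spelled out in the text. One simplified sufficient condition, with all names and the byte-rate model assumed purely for illustration, is that the already-buffered data outlasts the time needed to download the remainder:

```python
def can_play_continuously(buffered: int, total: int, media_rate: float,
                          speed: float, net_rate: float) -> bool:
    """Hypothetical continuous-playing check. Playback at `speed`
    consumes media_rate * speed bytes per second; start playback only
    if the buffered bytes last at least as long as fetching the
    remaining bytes at net_rate bytes per second would take."""
    remaining = total - buffered
    time_to_fetch = remaining / net_rate           # seconds to finish download
    time_buffer_lasts = buffered / (media_rate * speed)
    return time_buffer_lasts >= time_to_fetch

# 6 MB buffered of 10 MB total, 125 kB/s media at 1.5x, 250 kB/s network:
print(can_play_continuously(6_000_000, 10_000_000, 125_000, 1.5, 250_000))
```

This ignores that new data keeps arriving while the buffer drains, so it is a conservative start condition, not the only reasonable one.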
In an embodiment of the present application, the video information further includes the catch-up duration and the data amount of the stutter data, and the step of the client device determining the playing policy includes: the client device determines the playing speed of the stutter data according to the data amount of the stutter data and the catch-up duration, where the catch-up duration is the time required from starting to play the stutter segment until playback is synchronized with the current live content. Determining the playing speed of the stutter segment from the received data amount of the stutter data and the catch-up duration achieves smooth playback of the stutter data.
In another embodiment of the present application, the step of the client device determining the playing speed of the stutter segment according to the data amount and the catch-up duration includes: the client device determines a playing-speed increase within a first time period according to the data amount of the first time period and the first time period; and the client device determines a playing-speed decrease within a second time period according to the data amount of the second time period and the second time period, where the catch-up duration is divided chronologically into the first time period and the second time period, and the data amount consists of the data amount of the first time period and the data amount of the second time period. The playing speed of the stutter segment within each time period is determined according to the length of that time period and its corresponding data amount, so that the stutter segment plays smoothly: playback catches up with the live progress at an increased speed, and once the live progress is reached, the speed is changed back and the live broadcast is watched normally. Specifically, suppose the user plays the stutter segment at 3 times the normal playing speed; dropping directly back to normal speed would feel jarring to the user, so the playing speed is reduced gradually to preserve the user experience. Likewise, when the stutter segment begins playing, the playing speed is increased gradually so that the user adapts to it within a short time.
In still another embodiment of the present application, before the server device determines the video information corresponding to the stutter segment at least according to the stutter information, the method further includes: the server device records data segment information corresponding to each data segment, the data segment information including at least the start time of the data segment and the duration of the data segment. The step of the server device determining the video information at least according to the stutter information includes: the server device determines the catch-up duration according to the stutter duration, the live speed, and the predetermined playback multiplier; and the server device determines the number of data segments in the stutter data, the start time of the first data segment, and the data segment information corresponding to each data segment according to the start time of the stutter, the live speed, the catch-up duration, a predetermined parameter, and each piece of data segment information, where the predetermined parameter is the predetermined playback multiplier or the stutter duration.
In an embodiment of the present application, the video information further includes the definition of the data segments, and the step of the server device determining the video information at least according to the stutter information includes: the server device determines the definition of the stutter data according to the catch-up duration.
In still another embodiment of the present application, the video information further includes the definition of the data segments, and the step of the server device determining the video information corresponding to the stutter segment at least according to the stutter information includes: the server device determines the definition of the stutter segment according to the catch-up duration. The step of determining the definition of the stutter segment according to the catch-up duration includes: determining the definition of the stutter segment to be a first definition, lower than the definition of the live broadcast, when the catch-up duration is greater than a first threshold and less than or equal to a second threshold; determining the definition of the stutter segment to be the definition of the live broadcast when the catch-up duration is less than or equal to the first threshold; and determining the definition of the stutter segment to be a second definition, lower than the first definition, when the catch-up duration is greater than the second threshold. Specifically, the relationship between the catch-up duration t and the definition p can be expressed by equation 4:

p = p_z, when t ≤ x_1; p = p_1, when x_1 < t ≤ x_2; p = p_2, when t > x_2,

where x_1 represents the first threshold, x_2 represents the second threshold, p_1 represents the first definition, p_z represents the definition of the live broadcast, and p_2 represents the second definition. That is, the larger the catch-up duration, the lower the definition; when the catch-up duration is very long, audio-only playback can even be adopted to catch up with the live broadcast.
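The threshold logic of equation 4 translates directly into code. The concrete definition labels (720p/480p/270p) and threshold values below are placeholders for illustration, not values from the embodiment:

```python
def stutter_definition(catch_up: float, x1: float, x2: float,
                       live_def: str = "720p", first_def: str = "480p",
                       second_def: str = "270p") -> str:
    """Equation 4 as code: the longer the catch-up duration, the lower
    the definition chosen for the stutter segment (thresholds x1 < x2)."""
    if catch_up <= x1:
        return live_def      # short catch-up: keep the live definition p_z
    if catch_up <= x2:
        return first_def     # moderate catch-up: first (lower) definition p_1
    return second_def        # long catch-up: second, lowest definition p_2

print(stutter_definition(10, x1=15, x2=60))  # 720p
print(stutter_definition(30, x1=15, x2=60))  # 480p
print(stutter_definition(90, x1=15, x2=60))  # 270p
```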
In an embodiment of the present application, before the server device determines the stutter data according to the stutter information and issues the stutter data to the client device, the method further includes: the server device acquires first bullet-screen information during the live broadcast; and the server device integrates the first bullet-screen information into the corresponding data segment information. Integrating the first bullet-screen information into the corresponding data segment information facilitates communication among viewers and ensures a better experience.
In one embodiment of the present application, the method further includes: the server device acquires second bullet-screen information received while the client device plays the stutter segment; and the server device integrates the second bullet-screen information into the corresponding data segment information. Integrating the second bullet-screen information into the data segment information facilitates communication among viewers and ensures a better experience.
Fig. 6 is a block diagram illustrating a live device according to an exemplary embodiment, the live device including:
a first obtaining unit 10 configured to receive stutter information, where the stutter information includes the start time of the stutter and the stutter duration;
and a transmitting unit 20 configured to extract the cached stutter data from the cache region according to the stutter information and issue the stutter data to the client device for playback by the client device, where the stutter data includes audio and video data starting from the start time of the stutter and lasting for a preset time, the preset time being greater than or equal to the sum of the stutter duration and a catch-up duration, and the catch-up duration being the time from when the client device starts playing the stutter data until playback is synchronized with the current live content.
In the above scheme, during the live broadcast the first obtaining unit receives the stutter information, and the transmitting unit extracts the cached stutter data from the cache region according to the stutter information and issues it to the client device for playback. Because the preset time is greater than or equal to the sum of the stutter duration and the catch-up duration, once a stutter during the live broadcast is eliminated, playback can continue from the audio and video data at which the stutter occurred, and the user does not have to replay the audio and video data of the stutter period after the live broadcast; a viewer can watch the content that was playing when the stutter struck, yielding a better experience. This scheme solves the problem in the prior art that, after a stutter disappears, the missed content is not played and the user must review the video of the stutter period after the live broadcast ends. In addition, in this scheme the stutter data includes audio and video data starting from the start time of the stutter and lasting for the preset time, so it covers not only the data of the stutter period but also the live data corresponding to the catch-up duration, and playback is synchronized with the current live broadcast after the stutter disappears and the stutter data has been played.
In an embodiment of the present application, the live broadcast apparatus further includes an encoding unit and a buffer unit. The encoding unit is configured to encode the audio and video data to obtain encoded audio and video data before the cached stutter data is extracted from the cache region according to the stutter information; specifically, the audio and video data can be encoded according to video compression standards such as H.261, H.263, and H.264. The buffer unit is configured to divide the encoded audio and video data into a plurality of data segments and cache them in the cache region; that is, the encoded audio and video data is divided into a plurality of data segments, which are then cached. The audio and video data may be divided into data segments by size or by playing time: for example, every 50 KB of audio and video data forms one data segment in the playing order of the video, or every 30 s of playing time forms one data segment in the playing order of the video. The audio and video may also be cached at different definitions, for example 720P, 480P, or 270P, or even in audio-only form; the space required for caching differs with the cached definition.
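The fixed-size segmentation mentioned above (one data segment per 50 KB, in playing order) might look like the following sketch; the helper name is hypothetical:

```python
def split_by_size(data: bytes, chunk_size: int = 50 * 1024) -> list[bytes]:
    """Divide an encoded stream into fixed-size data segments
    (50 KB each in the example above); the last segment may be shorter."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

# A 160 KB stream yields three full 50 KB segments plus a 10 KB tail:
chunks = split_by_size(bytes(160 * 1024))
print([len(c) // 1024 for c in chunks])  # [50, 50, 50, 10]
```

Splitting by playing time (for example every 30 s) would work the same way, except the boundaries would come from the container's timestamps rather than byte offsets.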
Because the audio and video data are cached in the cache region in advance as a plurality of data segments, that is, all audio and video data are pre-cached during the live broadcast, the stutter data determined from the start time and duration of the stutter are also already in the cache region. The stutter data are read from the cache region and issued to the client device, and the client device can play the received stutter data while it is still being received, rather than waiting for all of it to arrive, thereby saving resources.
In a more specific embodiment of the present application, the encoding unit is further configured to encode the audio and video data with a plurality of encoding parameters to obtain a plurality of encoded audio and video data of different definitions, the encoding parameters including bit rate and video code rate. Because the encoding parameters differ, the definition of the encoded audio and video data also differs. In addition, the definition is related not only to the encoding mode but also to the source video: the definition of the source video is affected by the acquisition source, which may be a device capable of capturing audio and video data, such as a camera, so the definition of the source video is related to the resolution of the camera. A person skilled in the art can select suitable encoding parameters according to the actual situation.
In one embodiment of the present application, the transmitting unit includes a first determining module, a second determining module, and a sending module. The first determining module is configured to determine video information at least according to the stutter information, where the video information includes at least the number of data segments in the stutter data and the start time of the first data segment in the stutter segment; by acquiring the number of data segments and the start time of the first data segment, the video information corresponding to the stutter segment can be determined accurately for playback by the client device. For example, if the number of data segments is 10 and the start time of the first data segment is 10:25 on September 20, 2020, the video information corresponding to the stutter segment can then be determined accurately from the size of each data segment. The second determining module is configured to determine the corresponding stutter data according to the video information. The sending module is configured to send the video information and the corresponding stutter data of the stutter segment to the client device, and to transmit the stutter data to the client device upon receiving predetermined information sent by the client device, the predetermined information indicating that caching of the stutter segment is allowed. That is, only when the client device allows caching of the stutter segment is the corresponding stutter data sent to the client device for playback, thereby saving resources. When there are multiple client devices, the corresponding stutter data can be sent to each client device according to its specific stutter condition, so that all client devices play their stutter data smoothly.
In one embodiment of the present application, the sending module includes a first sending sub-module and a second sending sub-module. The first sending sub-module is configured to send the video information to the client device; the second sending sub-module is configured to transmit the stutter data to the client device upon receiving predetermined information sent by the client device, the predetermined information indicating that caching of the stutter data is allowed. That is, only when the client device allows caching of the stutter segment is the corresponding stutter data sent to the client device for playback, thereby saving resources. When there are multiple client devices, the corresponding stutter data can be sent to each client device according to its specific stutter condition, so that all client devices play their stutter data smoothly.
In an embodiment of the present application, the live broadcast apparatus further includes a recording unit configured to record, before the video information corresponding to the stutter segment is determined at least according to the stutter information, data segment information corresponding to each data segment, the data segment information including at least the start time of the data segment and the duration of the data segment. Of course, the end time, definition, size, and bullet-screen information of the data segment can also be recorded, so that the data segment can be determined accurately from its start time, duration, end time, definition, size, and bullet-screen information. The first determining module includes: a first determining sub-module configured to determine the catch-up duration according to the stutter duration, the live speed, and the predetermined playback multiplier; and a second determining sub-module configured to determine the number of data segments in the stutter data, the start time of the first data segment, and the data segment information corresponding to each data segment according to the start time of the stutter, the live speed, the catch-up duration, a predetermined parameter, and each piece of data segment information, where the predetermined parameter is the predetermined playback multiplier or the stutter duration. Specifically, equation 1 is employed: t1·f + t·f = 1.5·f·t, where t1 represents the stutter duration, f represents the live speed, 1.5 represents the predetermined playback multiplier (that is, the stutter data is assumed to be played at 1.5 times the normal playing speed), and t represents the catch-up duration. Solving equation 1 gives t = 2·t1; in other words, if the stutter data is played at 1.5 times the normal playing speed after the stutter is eliminated, playing audio and video data amounting to twice the stutter duration brings playback into synchronization with the current live content. Naturally, the catch-up duration changes as the stutter duration, the live speed, and the predetermined playback multiplier change. The number of data segments in the stutter data, the start time of the first data segment, and the corresponding data segment information are then determined according to the start time of the stutter, the live speed, the catch-up duration, the predetermined parameter, and each piece of data segment information, so that the audio and video data in the stutter data are determined accurately.
In another embodiment of the present application, the preset time is greater than or equal to the sum of the stutter duration, the catch-up duration, and a buffering time, where the buffering time is the time the client device requires to buffer the stutter data.
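A small sketch of the preset-time lower bound from this embodiment; the optional safety `margin` factor is an assumption not present in the text:

```python
def preset_time(stutter: float, catch_up: float,
                buffering: float = 0.0, margin: float = 1.0) -> float:
    """Minimum preset time per the embodiments above: at least the
    stutter duration plus the catch-up duration, optionally plus the
    client's buffering time; `margin` scales it upward as a safety factor."""
    return (stutter + catch_up + buffering) * margin

print(preset_time(10, 20))        # 30.0 (stutter + catch-up only)
print(preset_time(10, 20, 5, 1.1))  # with buffering and a 10% margin
```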
In another embodiment of the present application, the video information further includes the definition of the data segments, and the first determining module further includes a third determining sub-module configured to determine the definition of the stutter segment according to the catch-up duration. Specifically, for a given number of data segments, the faster the network speed, the higher the definition of the stutter segment, and the slower the network speed, the lower its definition.
In an embodiment of the present application, the third determining sub-module includes a fourth determining sub-module, a fifth determining sub-module, and a sixth determining sub-module. The fourth determining sub-module is configured to determine the definition of the stutter segment to be a first definition, lower than the definition of the live broadcast, when the catch-up duration is greater than a first threshold and less than or equal to a second threshold; the fifth determining sub-module is configured to determine the definition of the stutter segment to be the definition of the live broadcast when the catch-up duration is less than or equal to the first threshold; and the sixth determining sub-module is configured to determine the definition of the stutter segment to be a second definition, lower than the first definition, when the catch-up duration is greater than the second threshold. Specifically, the relationship between the catch-up duration t and the definition p can be expressed by equation 4:

p = p_z, when t ≤ x_1; p = p_1, when x_1 < t ≤ x_2; p = p_2, when t > x_2,

where x_1 represents the first threshold, x_2 represents the second threshold, p_1 represents the first definition, p_z represents the definition of the live broadcast, and p_2 represents the second definition. That is, the larger the catch-up duration, the lower the definition; when the catch-up duration is very long, audio-only playback can even be adopted to catch up with the live broadcast.
In another embodiment of the present application, the live broadcast apparatus further includes a second obtaining unit and a first integrating unit. The second obtaining unit is configured to acquire first bullet-screen information during the live broadcast before the cached stutter data are extracted from the cache region according to the stutter information and issued to the client device for playback; the first integrating unit is configured to integrate the first bullet-screen information into the corresponding data segment information. Integrating the first bullet-screen information into the corresponding data segment information facilitates communication among viewers and ensures a better experience.
In still another embodiment of the present application, the live broadcast apparatus further includes a third obtaining unit and a second integrating unit. The third obtaining unit is configured to acquire second bullet-screen information received while the client device plays the stutter data; the second integrating unit is configured to integrate the second bullet-screen information into the data segment information. Integrating the second bullet-screen information into the data segment information facilitates communication among viewers and ensures a better experience.
Fig. 7 is a block diagram illustrating a live device according to an exemplary embodiment. The live broadcast device comprises:
a second obtaining unit 30 configured to acquire stutter information and send it to the server device, where the stutter information includes the start time and duration of a stutter occurring while the live video is played;
a receiving unit 40 configured to receive and play the stutter data sent by the server device, where the stutter data are audio and video data of a preset time determined at least according to the stutter information, the start point of the preset time is the start time of the stutter, the audio and video data are cached during the live broadcast, the preset time is greater than or equal to the sum of the stutter duration and a catch-up duration, and the catch-up duration is the time from starting to play the stutter data until playback is synchronized with the current live content.
In the above scheme, the second obtaining unit acquires the stutter information and sends it to the server device, and the receiving unit receives the stutter data sent by the server device and plays it.
In an embodiment of the present application, the receiving unit includes a receiving module, a determining module, and a playing module. The receiving module is configured to receive the video information sent by the server device, where the video information includes at least the number of data segments in the stutter data and the start time of the first data segment in the stutter data, the video information being determined at least according to the stutter information and the data segments being obtained by dividing the audio and video data; the determining module is configured to determine, according to the video information, to cache the stutter data; the playing module is configured to judge whether the data amount of the cached stutter data meets the condition for continuous playing and, if so, to play the stutter data.
In one embodiment of the present application, the determining module includes a judging sub-module and a caching sub-module. The judging sub-module is configured to judge, according to the video information, whether caching of the stutter data is allowed; the caching sub-module is configured to cache the stutter data if caching is allowed. That is, the corresponding stutter data are cached only when caching of the stutter segment is allowed.
In an embodiment of the present application, the playing module includes a first determining sub-module and a playing sub-module. The first determining sub-module is configured to determine a playing policy when the data amount of the cached stutter data meets the condition for continuous playing, the playing policy including the playing start time of the first data segment in the stutter data and the playing speed; the playing sub-module is configured to play the stutter data according to the playing policy. For example, the playback multiplier can be reduced when the played content is important and increased when it is not. That is, different playing policies are formulated for different stutter data so as to ensure smooth playback of the stutter data.
In an embodiment of the present application, the video information further includes the catch-up duration and the data amount of the stutter data, and the first determining submodule is configured to determine the play speed multiplier of the stutter data according to the data amount of the stutter data and the catch-up duration. The catch-up duration is the time required from the start of playing the stutter segment until playback is synchronized with the current live content. Specifically, Equation 1 is used to calculate the catch-up duration: t1 × f + t × f = 1.5 × f × t, where t1 is the stutter duration, f is the live speed, 1.5 is the predetermined play speed multiplier (that is, it is assumed that the stutter data is played at 1.5 times the normal play speed), and t is the catch-up duration. From Equation 1 it follows that t = 2 × t1: assuming the stutter data is played at 1.5 times normal speed, then after the stutter ends, playing audio and video data for twice the stutter duration brings playback back in sync with the current live content. Of course, the catch-up duration changes with the stutter duration, the live speed, and the predetermined play speed multiplier.
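Equation 1 can be solved for the catch-up duration in general: the live-speed factor f cancels, leaving t = t1 / (multiplier − 1). A short sketch of that calculation:

```python
def catch_up_duration(stutter_duration: float, play_multiplier: float = 1.5) -> float:
    """Time needed, at `play_multiplier` times normal speed, to re-synchronize
    with the live stream after a stutter of `stutter_duration` seconds.

    From Equation 1: t1*f + t*f = multiplier*f*t; the live speed f cancels,
    giving t = t1 / (multiplier - 1). With multiplier = 1.5 this is t = 2*t1,
    matching the worked example in the text.
    """
    if play_multiplier <= 1.0:
        raise ValueError("play multiplier must exceed 1.0 to catch up")
    return stutter_duration / (play_multiplier - 1.0)
```

For example, a 10-second stutter replayed at 1.5x needs 20 seconds of catch-up playback, while at 2x it needs only 10 seconds.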
In one embodiment of the present application, the first determining submodule includes a second determining submodule and a third determining submodule. The second determining submodule is configured to determine an increasing play speed in a first time period according to the data amount of the first time period and the first time period; the third determining submodule is configured to determine a decreasing play speed in a second time period according to the data amount of the second time period and the second time period, where the catch-up duration is divided chronologically into the first time period and the second time period, and the data amount is composed of the data amount of the first time period and the data amount of the second time period. The play speed of the stutter segment in each time period is determined according to the length of that time period (the first or second time period) and the corresponding data amount, so that the stutter segment is played smoothly and the live progress is caught up by fast playback; once the live progress is caught up, the play speed is changed and normal live viewing resumes. Specifically, suppose the user is playing the stutter segment at 3 times the normal play speed: dropping directly back to normal speed would feel abrupt, so the speed is reduced gradually to preserve the user experience. Likewise, when the stutter segment starts playing, the play speed is increased gradually so that the user adapts to it within a short time.
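The two-phase ramp described above can be sketched as a simple speed schedule. The linear ramp and the step count are illustrative assumptions; the patent only requires an increasing first phase and a decreasing second phase.

```python
# Hypothetical sketch of the two-phase speed schedule: ramp up at the start of
# the stutter segment, then ramp back down to 1.0x before rejoining the live
# stream. Linear steps are an illustrative assumption.
def speed_schedule(peak_multiplier: float = 1.5, steps: int = 3) -> tuple[list[float], list[float]]:
    """Return (first-phase speeds, second-phase speeds): the first phase rises
    from just above 1.0x to the peak, the second mirrors it back down to 1.0x."""
    up = [1.0 + (peak_multiplier - 1.0) * (i + 1) / steps for i in range(steps)]
    down = up[-2::-1] + [1.0]  # mirror the ramp, ending at normal speed
    return up, down
```

A real player would apply each speed for a sub-interval of the catch-up duration sized so that the total data consumed matches the cached stutter data.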
The specific manner in which the various modules perform operations in the apparatus of the above embodiments has been described in detail in the embodiments of the method and will not be repeated here.
In one embodiment of the present application, there is provided an electronic device, as shown in Fig. 8, including: a processor 100; and a memory 200 for storing instructions executable by the processor; wherein the processor is configured to execute the instructions to implement the corresponding live broadcast method. The specific architecture shown in Fig. 8 also includes a memory controller 300 and a peripheral interface 400.
An embodiment of the present application provides a system including a server device and a client device, the server device being configured to perform the corresponding server-side live broadcast method, and the client device being configured to perform the corresponding client-side live broadcast method.
One embodiment of the present application provides a live broadcast system, and Fig. 9 shows a block diagram of the live broadcast system. The live broadcast system comprises a capture end, a server end, and a playing end, where the server end is the server device, and the capture end and the playing end are client devices. Fig. 9 shows the data interaction between the capture end and the server end, between the capture end and the playing end, and between the server end and the playing end; the capture end interacts directly with the anchor, and the playing end interacts with the audience.
Another embodiment of the present application provides a storage medium storing instructions which, when executed by a processor of a server device, enable the server device to perform the corresponding live broadcast method.
Yet another embodiment of the present application provides a storage medium storing instructions which, when executed by a processor of a client device, enable the client device to perform the corresponding live broadcast method.
Yet another embodiment of the present application provides a computer program product comprising a computer program which, when executed by a processor, implements any of the live methods described above.
Alternatively, the storage medium may be a non-transitory computer readable storage medium, for example, the above-described non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (29)

1. A live broadcast method, comprising:
receiving stutter information, wherein the stutter information comprises a start time of a stutter and a stutter duration;
extracting cached stutter data from a cache area according to the stutter information, and transmitting the stutter data to a client device for playing by the client device, wherein the stutter data comprises audio and video data starting from the start time of the stutter and lasting for a preset time, the preset time is greater than or equal to the sum of the stutter duration and a catch-up duration, and the catch-up duration is the time from when the client device starts playing the stutter data until the current live content is played synchronously,
wherein before extracting the cached stutter data from the cache area according to the stutter information, the method further comprises:
encoding the audio and video data to obtain encoded audio and video data;
dividing the encoded audio and video data into a plurality of data segments, and caching the data segments in the cache area,
wherein the step of extracting the cached stutter data from the cache area according to the stutter information and transmitting the stutter data to the client device for playing comprises:
determining video information at least according to the stutter information, wherein the video information at least comprises: the number of the data segments in the stutter data and the start time of the first of the data segments in the stutter data;
determining the corresponding stutter data according to the video information;
transmitting the video information and the corresponding stutter data to the client device,
wherein before determining the video information at least according to the stutter information, the method further comprises:
recording data segment information corresponding to each data segment, wherein the data segment information at least comprises the start time of the data segment and the duration of the data segment,
and wherein the step of determining the video information at least according to the stutter information comprises:
determining the catch-up duration according to the stutter duration, the live speed and a predetermined play speed multiplier;
determining the number of the data segments in the stutter data, the start time of the first data segment, and the data segment information corresponding to the data segments according to the start time of the stutter, the live speed, the catch-up duration, a preset parameter and each piece of data segment information, wherein the preset parameter is the predetermined play speed multiplier or the stutter duration.
2. The method of claim 1, wherein the step of encoding the audio and video data comprises:
encoding the audio and video data with a plurality of encoding parameters to obtain a plurality of pieces of encoded audio and video data with different definitions, wherein the encoding parameters comprise bit rate and video code rate.
3. The method of claim 1, wherein the step of transmitting the video information and the corresponding stutter data to the client device further comprises:
transmitting the video information to the client device; and transmitting the stutter data to the client device upon receiving predetermined information sent by the client device, wherein the predetermined information is information indicating that caching of the stutter segment is allowed.
4. The method of claim 1, wherein the preset time is greater than or equal to the sum of the stutter duration, the catch-up duration and a buffering time, the buffering time being the time required for the client device to buffer the stutter data.
5. The method of claim 4, wherein the video information further comprises the definition of the data segment, and the step of determining the video information at least according to the stutter information comprises:
determining the definition of the stutter data according to the catch-up duration.
6. The method of claim 5, wherein the step of determining the definition of the stutter data according to the catch-up duration comprises:
determining that the definition of the stutter data is a first definition if the catch-up duration is greater than a first threshold and less than or equal to a second threshold, the first definition being lower than the definition of the live broadcast;
determining that the definition of the stutter data is the definition of the live broadcast if the catch-up duration is less than or equal to the first threshold;
determining that the definition of the stutter data is a second definition if the catch-up duration is greater than the second threshold, the second definition being lower than the first definition.
7. The method of any one of claims 1 to 6, wherein before extracting the cached stutter data from the cache area according to the stutter information and transmitting the stutter data to the client device for playing by the client device, the method further comprises:
acquiring first bullet-screen information during the live broadcast;
integrating the first bullet-screen information into the corresponding data segment information.
8. The method of any one of claims 1 to 6, further comprising:
acquiring second bullet-screen information received by the client device while the client device plays the stutter data;
integrating the second bullet-screen information into the data segment information.
9. A live broadcast method, comprising:
acquiring stutter information and sending the stutter information to a server device, wherein the stutter information comprises a start time of a stutter occurring during playing of a live video and a stutter duration;
receiving and playing stutter data sent by the server device, wherein the stutter data is audio and video data of a preset time determined at least according to the stutter information, the start time of the preset time is the start time of the stutter, the audio and video data is cached during the live broadcast, the preset time is greater than or equal to the sum of the stutter duration and a catch-up duration, and the catch-up duration is the time from the start of playing the stutter data until the current live content is played synchronously,
wherein the step of receiving and playing the stutter data sent by the server device comprises:
receiving video information sent by the server device, wherein the video information at least comprises: the number of data segments in the stutter data and the start time of the first data segment in the stutter data, the video information is determined at least according to the stutter information, and the data segments are obtained by dividing the audio and video data;
determining, according to the video information, to cache the stutter data;
judging whether the amount of cached stutter data meets a condition for continuous playing, and if so, playing the stutter data,
wherein the step of playing the stutter data if the condition is met comprises:
determining a playing policy when the amount of cached stutter data meets the condition for continuous playing, the playing policy comprising: the play start time and play speed of the first data segment in the stutter data;
playing the stutter data according to the playing policy,
wherein the video information further comprises the catch-up duration and the data amount of the stutter data,
and the step of determining the playing policy when the amount of cached stutter data meets the condition for continuous playing comprises:
determining a play speed multiplier of the stutter data according to the data amount of the stutter data and the catch-up duration,
wherein the step of determining the play speed multiplier of the stutter data according to the data amount and the catch-up duration comprises:
determining an increasing play speed in a first time period according to the data amount of the first time period and the first time period;
determining a decreasing play speed in a second time period according to the data amount of the second time period and the second time period, wherein the catch-up duration is divided chronologically into the first time period and the second time period, and the data amount is composed of the data amount of the first time period and the data amount of the second time period.
10. The method of claim 9, wherein the step of determining to cache the stutter data according to the video information comprises:
determining, according to the video information, whether caching of the stutter data is allowed;
caching the stutter data if caching of the stutter data is allowed.
11. A live broadcast method, comprising:
during a live broadcast, the server device caching live audio and video data in a cache area;
the client device recording stutter information and sending the stutter information to the server device, wherein the stutter information comprises a start time of a stutter and a stutter duration;
the server device extracting cached stutter data from the cache area and transmitting the stutter data to the client device, wherein the stutter data comprises audio and video data starting from the start time of the stutter and lasting for a preset time, the preset time is greater than or equal to the sum of the stutter duration and a catch-up duration, and the catch-up duration is the time from when the client device starts playing the stutter data until the current live content is played synchronously;
the client device receiving the stutter data and playing the stutter data;
wherein before the server device extracts the cached stutter data from the cache area, the method further comprises:
the server device encoding the audio and video data to obtain encoded audio and video data;
the server device dividing the encoded audio and video data into a plurality of data segments and caching the data segments in the cache area,
wherein the step of the server device extracting the cached stutter data from the cache area and transmitting the stutter data to the client device comprises:
the server device determining video information at least according to the stutter information, wherein the video information at least comprises: the number of data segments in the stutter data and the start time of the first of the data segments;
the server device determining the corresponding stutter data according to the video information;
the server device sending the video information and the corresponding stutter data to the client device,
wherein the step of the client device receiving the stutter data and playing the stutter data comprises:
receiving the video information sent by the server device;
determining, according to the video information, whether caching of the stutter segment is allowed;
sending predetermined information to the server device if caching of the stutter segment is allowed, wherein the predetermined information is information indicating that caching of the stutter segment is allowed;
caching the stutter data;
judging whether the amount of cached stutter data meets a condition for continuous playing, and if so, playing the stutter data,
wherein the video information further comprises the catch-up duration and the data amount of the stutter data,
and the step of playing the stutter data if the condition is met comprises:
the client device determining a playing policy when the amount of cached stutter data meets the condition for continuous playing, the playing policy comprising: the play start time and play speed of the first data segment in the stutter data;
the client device playing the stutter data according to the playing policy,
wherein the step of the client device determining the playing policy when the amount of cached stutter data meets the condition for continuous playing comprises:
the client device determining a play speed multiplier of the stutter segment according to the received data amount of the stutter data and the catch-up duration,
and the step of the client device determining the play speed multiplier of the stutter segment according to the received data amount of the stutter data and the catch-up duration comprises:
the client device determining an increasing play speed in a first time period according to the data amount of the first time period and the first time period;
the client device determining a decreasing play speed in a second time period according to the data amount of the second time period and the second time period, wherein the catch-up duration is divided chronologically into the first time period and the second time period, and the data amount is composed of the data amount of the first time period and the data amount of the second time period.
12. The method of claim 11, wherein the step of the server device encoding the audio and video data comprises:
the server device encoding the audio and video data with a plurality of encoding parameters to obtain encoded audio and video data with different definitions.
13. The method of claim 11, wherein before the server device determines the video information at least according to the stutter information, the method further comprises:
the server device recording data segment information corresponding to each data segment, the data segment information at least comprising the start time of the data segment and the duration of the data segment, and the step of the server device determining the video information at least according to the stutter information comprises: the server device calculating the catch-up duration according to the stutter duration, the live speed and a predetermined play speed multiplier;
the server device determining the number of the data segments in the stutter data, the start time of the first data segment and the data segment information corresponding to the data segments according to the start time of the stutter, the live speed, the catch-up duration, a preset parameter and each piece of data segment information, wherein the preset parameter is the predetermined play speed multiplier or the stutter duration.
14. The method of any one of claims 11 to 13, wherein the video information further comprises the definition of the data segment, and the step of the server device determining the video information at least according to the stutter information comprises:
the server device determining the definition of the stutter segment according to the catch-up duration.
15. The method of claim 11, wherein the stutter data comprises a stutter segment, and before the server device determines the stutter data according to the stutter information and transmits the stutter data to the client device, the method further comprises:
the server device acquiring first bullet-screen information during the live broadcast;
the server device integrating the first bullet-screen information into the corresponding data segment information.
16. The method of claim 15, further comprising:
the server device acquiring second bullet-screen information received while the client device plays the stutter segment;
the server device integrating the second bullet-screen information into the corresponding data segment information.
17. A live broadcast device, comprising:
a first acquisition unit configured to receive stutter information, wherein the stutter information comprises a start time of a stutter and a stutter duration;
a sending unit configured to extract cached stutter data from a cache area according to the stutter information and send the stutter data to a client device for playing by the client device, wherein the stutter data comprises audio and video data starting from the start time of the stutter and lasting for a preset time, the preset time is greater than or equal to the sum of the stutter duration and a catch-up duration, and the catch-up duration is the time from when the client device starts playing the stutter data until the current live content is played synchronously,
the live broadcast device further comprising:
an encoding unit configured to encode the audio and video data before the cached stutter data is extracted from the cache area according to the stutter information, to obtain encoded audio and video data;
a caching unit configured to divide the encoded audio and video data into a plurality of data segments and cache the data segments in the cache area,
wherein the sending unit comprises:
a first determining module configured to determine video information at least according to the stutter information, the video information at least comprising: the number of the data segments in the stutter data and the start time of the first of the data segments in the stutter data;
a second determining module configured to determine the corresponding stutter data according to the video information;
a sending module configured to send the video information and the corresponding stutter data to the client device,
the live broadcast device further comprising:
a recording unit configured to record data segment information corresponding to each of the data segments before the video information is determined at least according to the stutter information, the data segment information at least comprising the start time of the data segment and the duration of the data segment,
wherein the first determining module comprises:
a first determining submodule configured to determine the catch-up duration according to the stutter duration, the live speed and a predetermined play speed multiplier;
a second determining submodule configured to determine the number of the data segments in the stutter data, the start time of the first data segment and the data segment information corresponding to the data segments according to the start time of the stutter, the live speed, the catch-up duration, a preset parameter and each piece of data segment information, wherein the preset parameter is the predetermined play speed multiplier or the stutter duration.
18. The live broadcast device of claim 17, wherein the encoding unit is further configured to:
encode the audio and video data with a plurality of encoding parameters to obtain a plurality of pieces of encoded audio and video data with different definitions, wherein the encoding parameters comprise bit rate and video code rate.
19. The live broadcast device of claim 17, wherein the sending module comprises:
a first sending submodule configured to send the video information to the client device;
a second sending submodule configured to send the stutter data to the client device upon receiving predetermined information sent by the client device, wherein the predetermined information is information indicating that caching of the stutter data is allowed.
20. The live broadcast device of claim 17, wherein the preset time is greater than or equal to the sum of the stutter duration, the catch-up duration and a buffering time, the buffering time being the time required for the client device to buffer the stutter data.
21. The live broadcast device of claim 20, wherein the video information further comprises the definition of the data segment, and the first determining module further comprises:
a third determining submodule configured to determine the definition of the stutter data according to the catch-up duration.
22. The live broadcast device of claim 21, wherein the third determining submodule comprises:
a fourth determining submodule configured to determine that the definition of the stutter data is a first definition if the catch-up duration is greater than a first threshold and less than or equal to a second threshold, the first definition being lower than the definition of the live broadcast;
a fifth determining submodule configured to determine that the definition of the stutter data is the definition of the live broadcast if the catch-up duration is less than or equal to the first threshold;
a sixth determining submodule configured to determine that the definition of the stutter data is a second definition if the catch-up duration is greater than the second threshold, the second definition being lower than the first definition.
23. The live broadcast device of any one of claims 17 to 22, further comprising:
a second acquisition unit configured to acquire first bullet-screen information during the live broadcast, before the cached stutter data is extracted from the cache area according to the stutter information and sent to the client device for playing;
a first integration unit configured to integrate the first bullet-screen information into the corresponding data segment information.
24. The live broadcast device of any one of claims 17 to 22, further comprising:
a third acquisition unit configured to acquire second bullet-screen information received while the client device plays the stutter data;
a second integration unit configured to integrate the second bullet-screen information into the data segment information.
25. A live broadcast device, comprising:
a second acquisition unit configured to acquire stutter information and send the stutter information to a server device, wherein the stutter information comprises a start time of a stutter occurring during playing of a live video and a stutter duration;
a receiving unit configured to receive and play stutter data sent by the server device, wherein the stutter data is audio and video data of a preset time determined at least according to the stutter information, the start time of the preset time is the start time of the stutter, the audio and video data is cached during the live broadcast, the preset time is greater than or equal to the sum of the stutter duration and a catch-up duration, and the catch-up duration is the time from the start of playing the stutter data until the current live content is played synchronously,
wherein the receiving unit comprises:
a receiving module configured to receive video information sent by the server device, the video information at least comprising: the number of data segments in the stutter data and the start time of the first data segment in the stutter data, wherein the video information is determined at least according to the stutter information, and the data segments are obtained by dividing the audio and video data;
a determining module configured to determine, according to the video information, to cache the stutter data;
a playing module configured to judge whether the amount of cached stutter data meets a condition for continuous playing and, if so, play the stutter data,
wherein the playing module comprises:
a first determining submodule configured to determine a playing policy when the amount of cached stutter data meets the condition for continuous playing, the playing policy comprising: the play start time and play speed of the first data segment in the stutter data; a playing submodule configured to play the stutter data according to the playing policy,
wherein the video information further comprises the catch-up duration and the data amount of the stutter data, and the first determining submodule is configured to:
determine a play speed multiplier of the stutter data according to the data amount of the stutter data and the catch-up duration,
wherein the first determining submodule comprises:
a second determining submodule configured to determine an increasing play speed in a first time period according to the data amount of the first time period and the first time period;
a third determining submodule configured to determine a decreasing play speed in a second time period according to the data amount of the second time period and the second time period, wherein the catch-up duration is divided chronologically into the first time period and the second time period, and the data amount is composed of the data amount of the first time period and the data amount of the second time period.
26. The live broadcast device of claim 25, wherein the determining module comprises:
a determining submodule configured to determine, according to the video information, whether caching of the stutter data is allowed;
a caching submodule configured to cache the stutter data if caching of the stutter data is allowed.
27. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the live broadcast method of any one of claims 1 to 8, the live broadcast method of claim 9 or 10, or the live broadcast method of any one of claims 11 to 16.
28. A system, comprising:
a server device configured to perform the live broadcast method of any one of claims 1 to 8; and
a client device configured to perform the live broadcast method of claim 9 or 10.
29. A storage medium, wherein instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the live broadcast method of any one of claims 1 to 8, the live broadcast method of claim 9 or 10, or the live broadcast method of any one of claims 11 to 16.
CN202011631129.4A 2020-12-30 2020-12-30 Live broadcast method, live broadcast device and computer program product Active CN112788360B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011631129.4A CN112788360B (en) 2020-12-30 2020-12-30 Live broadcast method, live broadcast device and computer program product

Publications (2)

Publication Number Publication Date
CN112788360A CN112788360A (en) 2021-05-11
CN112788360B true CN112788360B (en) 2023-06-20

Family

ID=75754732

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011631129.4A Active CN112788360B (en) 2020-12-30 2020-12-30 Live broadcast method, live broadcast device and computer program product

Country Status (1)

Country Link
CN (1) CN112788360B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113038166A (en) * 2021-03-29 2021-06-25 读书郎教育科技有限公司 Intelligent classroom missed course playing control system and method
CN113434561A (en) * 2021-06-24 2021-09-24 北京金山云网络技术有限公司 Live broadcast data verification method and system, electronic device and storage medium
CN114401447A (en) * 2021-12-20 2022-04-26 北京字节跳动网络技术有限公司 Video stuck prediction method, device, equipment and medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104639977A (en) * 2015-02-05 2015-05-20 小米科技有限责任公司 Program playing method and device
CN106101146A (en) * 2016-08-12 2016-11-09 暴风集团股份有限公司 The method and system that Flash peer-to-peer network is live are carried out based on block style
CN107396171A (en) * 2017-07-24 2017-11-24 广州酷狗计算机科技有限公司 Live network broadcast method, device and storage medium
CN110166834A (en) * 2018-02-11 2019-08-23 腾讯科技(深圳)有限公司 A kind of data playing method, device and storage medium
CN110198495A (en) * 2019-06-28 2019-09-03 广州市百果园信息技术有限公司 A kind of method, apparatus, equipment and the storage medium of video download and broadcasting
CN110401869A (en) * 2019-07-26 2019-11-01 歌尔股份有限公司 A kind of net cast method, system and electronic equipment and storage medium
CN111885334A (en) * 2020-08-26 2020-11-03 杭州速递科技有限公司 Method for reducing delay of real-time frame pursuit of audio and video
CN111918093A (en) * 2020-08-13 2020-11-10 腾讯科技(深圳)有限公司 Live broadcast data processing method and device, computer equipment and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10530825B2 (en) * 2016-06-01 2020-01-07 Amazon Technologies, Inc. Catching up to the live playhead in live streaming
CN110248204B (en) * 2019-07-16 2021-12-24 广州虎牙科技有限公司 Processing method, device, equipment and storage medium for live broadcast cache
CN111294634B (en) * 2020-02-27 2022-02-18 腾讯科技(深圳)有限公司 Live broadcast method, device, system, equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant