CN114501052A - Live broadcast data processing method, cloud platform, computer equipment and storage medium - Google Patents


Info

Publication number
CN114501052A
CN114501052A (application CN202210092864.5A)
Authority
CN
China
Prior art keywords
live broadcast
node
media
stream
buffer queue
Prior art date
Legal status
Granted
Application number
CN202210092864.5A
Other languages
Chinese (zh)
Other versions
CN114501052B (en)
Inventor
李志成
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202210092864.5A
Publication of CN114501052A
Application granted
Publication of CN114501052B
Status: Active

Classifications

    • H ELECTRICITY › H04 ELECTRIC COMMUNICATION TECHNIQUE › H04N PICTORIAL COMMUNICATION, e.g. TELEVISION › H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD] › H04N 21/20 Servers specifically adapted for the distribution of content
    • H04N 21/2187 — Live feed (under H04N 21/218 Source of audio or video content, e.g. local disk arrays)
    • H04N 21/23106 — Content storage operation involving caching operations (under H04N 21/231 Content storage operation, e.g. caching movies for short term storage)
    • H04N 21/2393 — Interfacing the upstream path of the transmission network involving handling client requests (under H04N 21/239)

Abstract

The application relates to a live broadcast data processing method, a cloud platform, computer equipment and a storage medium, which can be applied to live video. The method comprises the following steps: at an uplink access node, pushing the live broadcast data stream pushed by the live broadcast terminal to a media processing node; at the media processing node, performing media processing on the obtained live broadcast data stream of the live broadcast terminal to obtain a media stream, and distributing the media stream to a content distribution node; at the content distribution node, responding to a live broadcast viewing request sent by a viewing terminal by sending the media stream to the viewing terminal. The first buffer queue in the stream-pushing direction releases the live broadcast data to the next service node frame by frame; the last buffer queue in the stream-pushing direction sends all of its buffered live broadcast data to the next service node, and its buffer unit is a group of pictures (GOP). The method can reduce the playback stalling experienced by downlink viewers when the uplink stream-pushing network or the audio and video acquisition equipment is unstable.

Description

Live broadcast data processing method, cloud platform, computer equipment and storage medium
Technical Field
The application relates to the technical field of cloud platforms and live broadcast, in particular to a live broadcast data processing method, a cloud platform, computer equipment and a storage medium.
Background
With the development of the cloud platform technology and the live broadcast technology, the audio and video service performed through the cloud platform has the advantages of high concurrency, low delay, easy access and the like, and can be applied to various scene applications such as live broadcast e-commerce, live entertainment, online education and audio and video interaction.
During live broadcasting, audio and video data are delivered to viewers in real time; when the uplink network at the stream-pushing end is unstable or the audio and video equipment captures unevenly, the downlink viewing terminal is prone to playback stalling.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a live data processing method, cloud platform, computer device, and storage medium capable of reducing playback stalling.
In a first aspect, the present application provides a live data processing method, where the method includes:
pushing the live broadcast data stream pushed by the live broadcast terminal to a media processing node at an uplink access node;
at a media processing node, performing media processing on the obtained live broadcast data stream of the live broadcast terminal to obtain a media stream, and distributing the media stream to a content distribution node;
at the content distribution node, responding to a live broadcast watching request sent by a watching terminal, and sending the media stream to the watching terminal;
the first buffer queue in the stream-pushing direction releases the live broadcast data to the next service node frame by frame; the last buffer queue in the stream-pushing direction sends all of its buffered live broadcast data to the next service node; and the buffer unit of the last buffer queue is a group of pictures (GOP).
In a second aspect, the present application further provides a live data processing cloud platform, including:
the uplink access node is used for pushing the live data stream pushed by the live terminal to the media processing node;
the media processing node is used for carrying out media processing on the obtained live broadcast data stream of the live broadcast terminal to obtain a media stream and distributing the media stream to the content distribution node;
the content distribution node is used for responding to a live broadcast watching request sent by a watching terminal and sending the media stream to the watching terminal;
the first buffer queue in the stream-pushing direction releases the live broadcast data to the next service node frame by frame; the last buffer queue in the stream-pushing direction sends all of its buffered live broadcast data to the next service node; and the buffer unit of the last buffer queue is a group of pictures (GOP).
In a third aspect, the present application further provides a computer device, including a memory and a processor, where the memory stores a computer program, and the processor implements the following steps when executing the computer program:
pushing the live broadcast data stream pushed by the live broadcast terminal to a media processing node at an uplink access node;
at a media processing node, performing media processing on the obtained live broadcast data stream of the live broadcast terminal to obtain a media stream, and distributing the media stream to a content distribution node;
at the content distribution node, responding to a live broadcast watching request sent by a watching terminal, and sending the media stream to the watching terminal;
the first buffer queue in the stream-pushing direction releases the live broadcast data to the next service node frame by frame; the last buffer queue in the stream-pushing direction sends all of its buffered live broadcast data to the next service node; and the buffer unit of the last buffer queue is a group of pictures (GOP).
In a fourth aspect, the present application further provides a computer readable storage medium having a computer program stored thereon, the computer program when executed by a processor implementing the steps of:
pushing the live broadcast data stream pushed by the live broadcast terminal to a media processing node at an uplink access node;
at a media processing node, performing media processing on the obtained live broadcast data stream of the live broadcast terminal to obtain a media stream, and distributing the media stream to a content distribution node;
at the content distribution node, responding to a live broadcast watching request sent by a watching terminal, and sending the media stream to the watching terminal;
the first buffer queue in the stream-pushing direction releases the live broadcast data to the next service node frame by frame; the last buffer queue in the stream-pushing direction sends all of its buffered live broadcast data to the next service node; and the buffer unit of the last buffer queue is a group of pictures (GOP).
In a fifth aspect, the present application further provides a computer program product comprising a computer program, wherein the computer program when executed by a processor implements the steps of:
pushing the live broadcast data stream pushed by the live broadcast terminal to a media processing node at an uplink access node;
at a media processing node, performing media processing on the obtained live broadcast data stream of the live broadcast terminal to obtain a media stream, and distributing the media stream to a content distribution node;
at the content distribution node, responding to a live broadcast watching request sent by a watching terminal, and sending the media stream to the watching terminal;
the first buffer queue in the stream-pushing direction releases the live broadcast data to the next service node frame by frame; the last buffer queue in the stream-pushing direction sends all of its buffered live broadcast data to the next service node; and the buffer unit of the last buffer queue is a group of pictures (GOP).
According to the live broadcast data processing method, the cloud platform, the computer equipment, the storage medium and the computer program product, a second level of data caching is added on top of the platform's primary cache. The second cache and the primary cache each introduce a time difference, so live playback at the viewing end lags the live broadcast terminal by two time differences. Even if redundant cached media stream data must be discarded because of the cache size configured by the viewing terminal, the cloud platform still performs one level of caching, so the delay of the viewing terminal relative to the live broadcast terminal equals the time difference of that cache plus the terminal-configured cache size. When the accumulated interruption caused by an unstable uplink network or unstable audio and video equipment at the stream-pushing end exceeds the duration of the terminal's receive buffer, the extra time difference of the platform-side cache adds tolerance before a stall occurs, reducing the playback stalling experienced by downlink viewers due to an unstable live uplink stream-pushing network or unstable audio and video acquisition equipment.
Drawings
FIG. 1 is a diagram of an application environment of a live data processing method in one embodiment;
FIG. 2 is a flow diagram of a live data processing method in one embodiment;
FIG. 3 is a schematic diagram of the workflow of a live broadcast cloud platform in one embodiment;
FIG. 4 is a schematic diagram of the data flow of a live broadcast cloud platform in another embodiment;
FIG. 5 is a schematic diagram of the data flow of a live broadcast cloud platform in another embodiment;
FIG. 6 is a schematic diagram of the data flow of a live broadcast cloud platform in another embodiment;
FIG. 7 is a schematic diagram of the data flow of a live broadcast cloud platform in another embodiment;
FIG. 8 is a schematic diagram of the data flow of a live broadcast cloud platform in another embodiment;
FIG. 9 is a schematic diagram of the data flow of a live broadcast cloud platform in another embodiment;
FIG. 10 is a schematic diagram of the data flow of a live broadcast cloud platform in another embodiment;
FIG. 11 is a schematic diagram of the data flow of a live broadcast cloud platform in another embodiment;
FIG. 12 is a schematic diagram of the data flow of a live broadcast cloud platform in another embodiment;
FIG. 13 is a schematic diagram of the data flow of a live broadcast cloud platform in another embodiment;
FIG. 14 is a diagram illustrating live broadcast effects in one embodiment;
FIG. 15 is an architecture diagram of a live broadcast cloud platform in one embodiment;
FIG. 16 is a diagram illustrating the internal structure of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the application and are not intended to limit it.
The live data processing method provided by the application can be applied to the application environment shown in fig. 1. The live broadcast terminal 102 communicates with the cloud platform 104, and the viewing terminal 106 communicates with the cloud platform 104. The cloud platform has an uplink access node 1041, a media processing node 1042 and a content distribution node 1043. The live broadcast terminal 102 is a terminal with a camera, such as a desktop PC or a smart phone; it acquires, quantizes, encodes and encapsulates audio and video data to obtain a live broadcast data stream, and transmits the stream to the uplink access module of the cloud platform through a media transport container format protocol such as RTMP. At the uplink access node 1041, the live data stream pushed by the live terminal is pushed to the media processing node. At the media processing node 1042, media processing is performed on the obtained live data stream of the live terminal to obtain a media stream, and the media stream is distributed to the content distribution node. At the content distribution node 1043, in response to a live viewing request sent by a viewing terminal, the media stream is sent to the viewing terminal. At least two buffer queues are set among the uplink access node 1041, the media processing node 1042 and the content distribution node 1043, so that the data of the media stream sent to the viewing terminal is cached twice on the cloud platform.
The cloud platform can be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, and can also be a cloud server for providing basic cloud computing services such as cloud service, a cloud database, cloud computing, cloud functions, cloud storage, network service, cloud communication, middleware service, domain name service, security service, CDN, big data and artificial intelligence platform and the like. The terminal may be, but is not limited to, a smart phone, a tablet computer, a laptop computer, a desktop computer, a smart speaker, a smart watch, a smart voice interaction device, a smart home appliance, a vehicle-mounted terminal, and the like. The terminal and the server may be directly or indirectly connected through wired or wireless communication, and the application is not limited herein.
In an embodiment, as shown in fig. 2, a live data processing method is provided. The method is described by taking its application to the cloud platform in fig. 1 as an example, and includes the following steps:
Step 202, at the uplink access node, pushing the live data stream pushed by the live terminal to the media processing node.
In an embodiment, a basic architecture of a live broadcast cloud platform is shown in fig. 3, and based on the basic architecture of fig. 3, a live broadcast process includes:
1. On the anchor's uplink, the live broadcast terminal (a terminal with a camera, such as a desktop PC or a smart phone) acquires, quantizes, encodes and encapsulates audio and video data, and then uploads them to the platform's uplink access module through a media stream transport container format protocol, such as RTMP/TS/WebRTC.
2. The uplink (upload) access node forwards the authentication parameters carried by the live broadcast user to the authentication center to verify whether the user has live broadcast permission.
3. The media processing node performs media processing according to the audio and video format watched by the downlink user, including format conversion and repackaging of the audio and video media container and audio and video transcoding, and distributes the result to each content distribution access cluster. Depending on terminal requirements, the audio and video format can be FLV, HLS, DASH, CMAF, and the like.
4. The media node records the stream and stores screenshots of the audio and video file to the distributed file system.
5. The auditing and monitoring module reviews and identifies the screenshots; if a violation is found, it notifies the authentication center in real time to prohibit the live broadcast and block users from watching.
6. The viewing terminal selects an audio and video format as required and watches the live broadcast from the nearest node of the CDN distribution center.
Based on the live broadcast cloud platform, the uplink access node receives live broadcast data streams pushed by the live broadcast terminal.
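The six-step workflow above can be sketched as a short chain of service functions. This is an illustrative sketch only; the function names, the token check, and the format tag are hypothetical and not part of the patent.

```python
# Illustrative sketch of the live workflow described above (steps 1-6).
# All names and the "token" authentication rule are hypothetical.

def authenticate(auth_params):
    """Step 2: the uplink access node checks live-broadcast permission."""
    return auth_params.get("token") == "valid"

def media_process(frames, target_format):
    """Step 3: container conversion / transcoding for the downlink format."""
    return [{"data": f, "format": target_format} for f in frames]

def live_pipeline(frames, auth_params, target_format="FLV"):
    if not authenticate(auth_params):
        return []                      # live broadcast forbidden (step 5)
    media_stream = media_process(frames, target_format)
    return media_stream                # step 6: delivered to viewers via CDN

stream = live_pipeline(["frame0", "frame1"], {"token": "valid"}, "HLS")
```

The sketch compresses recording, auditing and CDN delivery into return values; in the platform these are separate nodes.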
Step 204, at a media processing node, performing media processing on the acquired live broadcast data stream of the live broadcast terminal to obtain a media stream, and distributing the media stream to the content distribution node.
Media processing is a multimedia data processing service that can transcode multimedia data into formats suitable for playback on all platforms, such as FLV, HLS, DASH and CMAF. It includes audio/video media container format conversion and repackaging and/or audio/video transcoding, and is performed by the media processing node of the cloud platform.
Through media processing, the live data stream is processed into the audio and video formats supported by different terminals, i.e. media streams such as FLV, HLS, DASH and CMAF. For example, the audio/video format of the anchor's uplink push stream may be RTMP (H.264/AAC), while a viewer may select FLV/HLS/DASH or H.264/H.265/AV1 according to the terminal device and network conditions. Media processing is performed according to the user's downlink selection: encapsulation processing converts the stream into the media container required by the user, and transcoding converts it into the required coding format, yielding the corresponding media stream.
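A minimal sketch of this downlink format selection might look as follows. The format names come from the text above, but the selection rules (which terminal gets which format) are purely illustrative assumptions, not the patent's logic.

```python
# Hypothetical mapping from a viewer's terminal type and network quality to a
# delivery format. FLV/HLS/DASH are the formats named in the text; the
# decision rules below are illustrative only.

def select_media_format(terminal, network):
    if terminal == "web" and network == "good":
        return "FLV"      # low-latency HTTP-FLV playback in a browser
    if terminal == "ios":
        return "HLS"      # natively supported on Apple devices
    return "DASH"         # adaptive bitrate for everything else

fmt = select_media_format("ios", "poor")
```

In the platform, this choice drives which transcoded/repackaged media stream the content distribution node serves.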
Step 206, at the content distribution node, responding to the live viewing request sent by the viewing terminal by sending the media stream to the viewing terminal.
A content delivery node belongs to the Content Delivery Network (CDN) of the cloud platform. The basic idea of a CDN is to avoid, as much as possible, the bottlenecks and links on the Internet that may affect data transmission speed and stability, so that content is delivered faster and more reliably. By placing node servers throughout the network, a layer of intelligent virtual network is formed on top of the existing Internet; the CDN system can redirect a user's request in real time to the service node closest to the user according to comprehensive information such as network traffic, the connections and load of each node, the distance to the user and response time. The aim is to let users obtain the required content nearby, relieving Internet congestion and improving the response speed of user access.
When the viewing terminal triggers a live viewing request, the request is sent to the content distribution network; the CDN node closest to the user responds to the request and sends the media stream to the viewing terminal, so that the user obtains the requested live broadcast nearby.
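The nearest-node redirection described above can be sketched as a simple minimum over measured response times. The node names and latency figures are invented for illustration; a real CDN combines many more signals (load, traffic, distance) as the text notes.

```python
# Illustrative nearest-node selection: redirect the viewer to the CDN
# service node with the lowest measured response time.

def nearest_cdn_node(nodes):
    """nodes: {node_name: response_time_ms}; pick the fastest responder."""
    return min(nodes, key=nodes.get)

edge = nearest_cdn_node({"node-a": 12.5, "node-b": 48.0, "node-c": 30.1})
```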
In this embodiment, at least two buffer queues are set among the uplink access node, the media processing node and the content distribution node. The first buffer queue in the stream-pushing direction releases the live broadcast data to the next service node frame by frame; the last buffer queue in the stream-pushing direction sends all of its buffered live broadcast data to the next service node, and its buffer unit is a group of pictures.
The last buffer queue in the stream-pushing direction is the buffer queue closest to the viewing terminal. Its buffer unit is a Group of Pictures (GOP), which is composed of a series of I frames, P frames and B frames in a fixed pattern, and it sends all of its buffered live broadcast data to the next service node. For example, if the last buffer queue is set in the content distribution node, the next service node is the viewing terminal; if it is set at the media processing node, the next service node is the content distribution node.
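A GOP-unit buffer queue of this kind can be sketched as follows: frames are grouped into GOPs at each I-frame boundary, and flushing hands everything buffered to the next service node at once. The class and its simplified frame-type handling are illustrative assumptions, not the patent's implementation.

```python
# Sketch of a GOP-unit buffer queue. An I frame opens a new group of
# pictures; flush() sends *all* buffered data downstream in one shot,
# as the last buffer queue in the text does.

class GopBufferQueue:
    def __init__(self):
        self.gops = []                    # completed and in-progress GOPs

    def push(self, frame_type, payload):
        if frame_type == "I" or not self.gops:
            self.gops.append([])          # an I frame starts a new GOP
        self.gops[-1].append((frame_type, payload))

    def flush(self):
        """Hand everything buffered to the next service node."""
        out, self.gops = self.gops, []
        return out

q = GopBufferQueue()
for ft in ["I", "P", "B", "P", "I", "P"]:
    q.push(ft, b"")
gops = q.flush()                          # two GOPs: IPBP and IP
```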
Because the last buffer queue in the stream-pushing direction is the closest to the viewing terminal, sending all the live broadcast data it has buffered to the next service node guarantees a time difference between playback at the viewing terminal and the live broadcast terminal. The viewing terminal therefore has a certain buffer, which reduces playback stalls caused by network jitter, instability of the anchor's uplink audio and video data, or back-to-source network abnormalities of a CDN node during viewing. A larger buffer is not always better: the larger the buffer, the greater the viewer's live delay, so in practice the size should be set according to service requirements. For example, for interactive live broadcast with strict delay requirements, the cache size can be reduced; for entertainment live broadcast with looser delay requirements, it can be increased appropriately.
Although the configuration of the last buffer queue introduces a time difference between the viewing terminal and the live broadcast terminal, and to some extent absorbs network jitter during viewing, viewing terminals allocate different amounts of buffer data depending on hardware configuration (memory/CPU/GPU) and delay considerations, and any redundant cached media stream exceeding the buffer size configured by the viewing terminal is discarded. For example, if the platform-side cache is 5 seconds and the terminal-configured buffer is 2 seconds, then the 3 seconds of cached data beyond the terminal's configured 2 seconds is discarded, and the terminal effectively receives only a 2-second buffer. When the accumulated interruption caused by an unstable uplink network or unstable audio and video equipment at the stream-pushing end exceeds the terminal's receive buffer duration, the viewing terminal still stalls. For example, once that accumulated interruption exceeds 2 seconds (the terminal-configured buffer size), the downlink viewer experiences a stall.
To address this problem, a second level of data caching is added on top of the platform's primary cache. The second cache and the primary cache each introduce a time difference, so playback at the viewing end lags the live broadcast terminal by two time differences. Specifically, the first buffer queue in the stream-pushing direction releases the live broadcast data to the next service node frame by frame.
Even if redundant cached media stream data must be discarded because of the cache size configured by the viewing terminal, at least one caching step exists before the cloud platform's last cache, so the delay of the viewing terminal relative to the live broadcast terminal equals the time difference of the first cache plus the terminal-configured cache size. The first buffer queue in the stream-pushing direction is driven by a timer, which causes the queue to release data to the next processing node one frame at a time, i.e. frame by frame. The FPS (Frames Per Second) at which the buffer queue is driven is the same as the FPS of the upstream anchor's source. Each buffer queue in the cloud platform is therefore equivalent to a playback buffer placed at the server. Setting the first buffer queue ahead of the last one prevents the viewing terminal from discarding large amounts of cached data due to its buffer-size limit, increases resistance to jitter caused by an unstable uplink stream-pushing network or unstable audio and video acquisition equipment, and thereby reduces, as much as possible, the playback stalling experienced by downlink viewers. Specifically, the data may be cached once at any two of the uplink access node, the media processing node and the content distribution node of the cloud platform, or cached twice at the uplink access node, twice at the media processing node, or twice at the content distribution node.
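The timer-driven, frame-by-frame release of the first buffer queue can be sketched as follows. To keep the example runnable, a simulated clock value is passed in rather than a real timer thread; that simplification, and all names, are assumptions for illustration.

```python
# Sketch of the first buffer queue: a timer drives the queue to release
# one frame per 1/FPS interval, matching the upstream source FPS. A
# simulated clock replaces a real timer so the example runs instantly.

from collections import deque

class FrameBufferQueue:
    def __init__(self, fps):
        self.interval = 1.0 / fps     # seconds between released frames
        self.frames = deque()
        self.next_release = 0.0

    def push(self, frame):
        self.frames.append(frame)

    def tick(self, now):
        """Release at most one frame per interval, i.e. frame by frame."""
        if self.frames and now >= self.next_release:
            self.next_release = now + self.interval
            return self.frames.popleft()
        return None

q = FrameBufferQueue(fps=25)          # 25 fps -> one frame every 40 ms
for i in range(3):
    q.push(f"frame{i}")
released = [q.tick(t) for t in (0.00, 0.01, 0.04, 0.08)]
# the 0.01 s tick arrives before the 40 ms interval elapses, so it yields None
```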
It can be understood that the processing of the live broadcast data at the uplink access node, the media processing node and the content distribution node of the cloud platform is streaming processing, so the two caching steps occur in sequence. The live data stream may be cached twice in succession at the uplink access node and then pushed to the media processing node. Alternatively, the live data stream may be cached a first time at the uplink access node, pushed to the media processing node, and cached a second time there after media processing produces the media stream. Alternatively, the media processing node may process the live data stream, cache the resulting media stream a first time, distribute the cached data to the content distribution node, and cache it a second time there before delivering it to the viewing terminal. The media processing node may also distribute the media stream to the content distribution node directly after media processing, with the content distribution node caching the media stream twice in succession before delivering it to the viewing terminal.
Therefore, in this embodiment, a second level of data caching is added on top of the platform's primary cache; the second cache and the primary cache each introduce a time difference, so playback at the viewing end lags the live broadcast terminal by two time differences. Even if redundant cached media stream data must be discarded because of the cache size configured by the viewing terminal, the cloud platform still performs one caching step, so the delay of the viewing terminal relative to the live broadcast terminal equals the time difference of that cache plus the terminal-configured cache size. When the accumulated interruption caused by an unstable uplink network or unstable audio and video equipment at the stream-pushing end exceeds the terminal's receive buffer duration, the extra time difference of the platform-side cache adds tolerance before a stall occurs, reducing the playback stalling experienced by downlink viewers.
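The delay accounting in this paragraph can be restated numerically: with the platform-side cache added, the viewer's stall tolerance becomes the platform cache's time difference plus the terminal's own buffer. The concrete numbers below are illustrative, chosen to match the 5-second/2-second example earlier in the text.

```python
# With the extra platform-side cache, stall tolerance is the sum of the
# platform cache time difference and the terminal-configured buffer.
# Numbers are illustrative (5 s platform cache, 2 s terminal buffer).

PLATFORM_CACHE_S = 5    # time difference contributed by the platform cache
TERMINAL_BUFFER_S = 2   # buffer the viewing terminal is configured with

def stalls_two_level(upstream_gap_s):
    return upstream_gap_s > PLATFORM_CACHE_S + TERMINAL_BUFFER_S

# A 4-second upstream interruption would stall a terminal-buffer-only
# viewer (4 > 2) but not a two-level-cache viewer (4 < 7).
```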
In one embodiment, the processing module of the cloud platform is shown in fig. 4 and includes: an uplink access node 401, a media processing node 402 and a content distribution node 403.
And the uplink access node 401 pushes the live data stream pushed by the live terminal to the media processing node 402. In the media processing node 402, media processing is performed on the obtained live data stream of the live terminal to obtain a media stream, and the media stream is distributed to the content distribution node 403. In the content distribution node 403, in response to a live viewing request sent by a viewing terminal, the media stream is sent to the viewing terminal.
In this embodiment, the cloud platform is provided with a first buffer queue and a second buffer queue. And in the process that the live broadcast data is processed by the cloud platform through the uplink access node, the media processing node and the content distribution node, the relevant data is cached by utilizing the first cache queue and the second cache queue.
In one embodiment, as shown in fig. 5, a first buffer queue 5011 is disposed in the uplink access node 501, and a second buffer queue 5022 is disposed in the media processing node 502. The media processing node 502 performs media processing on the live data stream using the media processing module 5021. There are two different ways to set the second buffer queue at the media processing node, as shown in fig. 5 and fig. 6.
As shown in fig. 5, the uplink access node 501 obtains the live data stream pushed by the live broadcast terminal, stores it in the first buffer queue 5011, and pushes the live data stream in the first buffer queue 5011 to the media processing node 502 in units of frames. At the media processing node 502, the media processing module 5021 is arranged before the second buffer queue 5022: the media processing module 5021 performs media processing on the obtained live data stream to obtain a media stream, stores the media stream in the second buffer queue 5022, and distributes all the media stream buffered in the second buffer queue to the content distribution node 503. The content distribution node 503 responds to a live viewing request sent by a viewing terminal and sends the media stream to the viewing terminal.
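The fig. 5 data flow — buffer at the uplink access node, forward frame by frame, process, then buffer again before distribution — can be modeled as a minimal sketch. The class and method names are illustrative, not from the patent, and `str.upper` stands in for real media processing such as remuxing or transcoding.

```python
from collections import deque

class UplinkAccessNode:
    # First-level cache: incoming frames are buffered and then
    # forwarded toward the media processing node one frame at a time.
    def __init__(self):
        self.first_queue = deque()

    def ingest(self, frame):
        self.first_queue.append(frame)

    def push_one_frame(self):
        return self.first_queue.popleft() if self.first_queue else None

class MediaProcessingNode:
    # Second-level cache placed after the media processing module,
    # as in the fig. 5 arrangement: process first, then buffer.
    def __init__(self, process):
        self.process = process            # stand-in for remux/transcode
        self.second_queue = deque()

    def receive(self, frame):
        self.second_queue.append(self.process(frame))

    def distribute_all(self):
        # All buffered media data is handed to the content distribution node.
        out = list(self.second_queue)
        self.second_queue.clear()
        return out

uplink = UplinkAccessNode()
media = MediaProcessingNode(process=str.upper)  # illustrative "media processing"
for f in ["f1", "f2", "f3"]:
    uplink.ingest(f)
while (frame := uplink.push_one_frame()) is not None:
    media.receive(frame)
assert media.distribute_all() == ["F1", "F2", "F3"]
```

Moving the second queue before the processing module, as in fig. 6, would simply swap the order of `process` and the buffering step without changing the two-level structure.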
It is understood that there may be multiple media processing modules 5021, each performing a different type of media processing, such as audio/video container format trans-encapsulation (remuxing) and audio/video transcoding.
As shown in fig. 6, the uplink access node 601 obtains the live data stream pushed by the live broadcast terminal and stores it in the first buffer queue 6011, then pushes the live data stream in the first buffer queue 6011 to the media processing node 602 in units of frames. At the media processing node 602, the media processing module 6022 is arranged after the second buffer queue 6021: the obtained live data stream is stored in the second buffer queue 6021, media processing is performed on all the live data buffered in the second buffer queue 6021 to obtain a media stream, and the media stream is distributed to the content distribution node 603. The content distribution node 603 responds to a live viewing request sent by a viewing terminal and sends the media stream to the viewing terminal.
It is likewise understood that there may be multiple media processing modules 6022, each performing a different type of media processing, such as audio/video container format trans-encapsulation and audio/video transcoding.
The second buffer queue arranged at the media processing node may also be placed between two media processing modules: after one kind of media processing is finished, the data is buffered, and then the second kind of media processing is performed. Taking container trans-encapsulation and audio/video transcoding as the two kinds of media processing, the media processing node uses a container trans-encapsulation module for the trans-encapsulation and an audio/video transcoding module for the transcoding. As shown in fig. 7, a second buffer queue 7023 may be disposed between the container trans-encapsulation module 7021 and the audio/video transcoding module 7022.
As shown in fig. 7, the uplink access node 701 obtains the live data stream pushed by the live broadcast terminal, stores it in the first buffer queue 7011, and pushes the live data stream in the first buffer queue to the media processing node 702 in units of frames. At the media processing node 702, the container trans-encapsulation module 7021 performs trans-encapsulation on the obtained live data stream, the trans-encapsulated data is stored in the second buffer queue 7023, and audio/video transcoding is performed on all the live data buffered in the second buffer queue 7023 to obtain a media stream, which is distributed to the content distribution node 703.
In all three of the above arrangements, the first buffer queue is set at the uplink access node and the second buffer queue at the media processing node; they differ only in whether the second buffer queue sits before the media processing module, after it, or between two different media processing modules. In the first case the live data stream is buffered once at the media processing node and then handled by the media processing module; in the second, the processed media stream is buffered once after media processing; in the third, the data is buffered between one kind of media processing and the next. All three achieve the same two-level caching effect and can reduce stalls; they differ only in processing order.
In another embodiment, as shown in fig. 8, the first buffer queue is disposed at the uplink access node 801 and the second buffer queue is disposed at the content distribution node 803. The uplink access node 801 obtains the live data stream pushed by the live broadcast terminal, stores it in the first buffer queue 8011, and pushes the live data stream in the first buffer queue 8011 to the media processing node 802 in units of frames. The media processing node 802 performs media processing on the obtained live data stream to obtain a media stream and distributes it to the content distribution node 803. The content distribution node stores the distributed media stream in the second buffer queue 8031 and, in response to a live viewing request sent by a viewing terminal, sends all the media stream buffered in the second buffer queue to the viewing terminal.
In this arrangement, the first buffer queue at the uplink access node and the second buffer queue at the content distribution node together form a two-level cache that reduces stalls.
In another embodiment, a first buffer queue is disposed at the media processing node; the second buffer queue is arranged at the content distribution node.
At the media processing node, the first buffer queue may be placed in the same ways described above for the second buffer queue: before the media processing module, after the media processing module, or between two different media processing modules.
In this embodiment, as shown in fig. 9 for example, the first buffer queue is arranged after the media processing module of the media processing node. The uplink access node 901 pushes the live data stream from the live broadcast terminal to the media processing node 902. At the media processing node 902, the media processing module 9021 performs media processing on the obtained live data stream to obtain a media stream, stores it in the first buffer queue 9022, and distributes the media stream in the first buffer queue 9022 to the content distribution node 903 in units of frames. The content distribution node 903 stores the distributed media stream in the second buffer queue 9031 and, in response to a live viewing request sent by the viewing terminal, sends all the media stream buffered in the second buffer queue 9031 to the viewing terminal.
In this arrangement, the first buffer queue at the media processing node and the second buffer queue at the content distribution node together form a two-level cache that reduces stalls.
In another embodiment, the two buffer queues may both be set in the same processing node; this likewise achieves two-level caching and thus the stall-reduction effect. Specifically, the first buffer queue and the second buffer queue may both be disposed at the uplink access node, both at the media processing node, or both at the content distribution node.
Fig. 10 shows the case in which the first buffer queue 9011 and the second buffer queue 9012 are both set in the uplink access node 901. The uplink access node 901 obtains the live data stream pushed by the live broadcast terminal, stores it in the first buffer queue 9011, sends the live data stream in the first buffer queue to the second buffer queue 9012 in units of frames, and pushes all the live data buffered in the second buffer queue to the media processing node 902. The media processing node 902 performs media processing on the obtained live data stream to obtain a media stream and distributes it to the content distribution node 903, which, in response to a live viewing request sent by a viewing terminal, sends the media stream to the viewing terminal.
As shown in fig. 11, the first buffer queue 1031 and the second buffer queue 1032 are both set in the content distribution node 1003. The uplink access node 1001 pushes the live data stream from the live broadcast terminal to the media processing node 1002, which performs media processing on the obtained live data stream to obtain a media stream and distributes it to the content distribution node 1003. The content distribution node 1003 stores the distributed media stream in the first buffer queue 1031 and sends the media stream in the first buffer queue to the second buffer queue 1032 in units of frames; in response to a live viewing request sent by a viewing terminal, all the media stream buffered in the second buffer queue 1032 is sent to the viewing terminal.
When the first buffer queue and the second buffer queue are both set at the media processing node, as shown in fig. 12, the first buffer queue 1222 and the second buffer queue 1223 may both be arranged after the media processing module 1221. The uplink access node 121 pushes the live data stream from the live broadcast terminal to the media processing node. At the media processing node 122, the media processing module 1221 performs media processing on the obtained live data stream to obtain a media stream, stores it in the first buffer queue 1222, sends the media stream in the first buffer queue to the second buffer queue 1223 in units of frames, and distributes all the media stream buffered in the second buffer queue 1223 to the content distribution node 123. The content distribution node 123 sends the media stream to the viewing terminal in response to a live viewing request sent by the viewing terminal.
As a variation, the first buffer queue and the second buffer queue may both be arranged before the media processing module: at the media processing node, the obtained live data stream is stored in the first buffer queue, the data in the first buffer queue is then moved into the second buffer queue, and the media processing module performs media processing on the buffered live data stream to obtain a media stream, which is distributed to the content distribution node. Alternatively, the two buffer queues may be arranged between two different media processing modules: the obtained live data stream first undergoes container trans-encapsulation, is then buffered in the two queues in turn, and is finally processed by the audio/video transcoding module to obtain the media stream. Setting both buffer queues at the media processing node likewise achieves the stall-reduction effect of two-level caching; the process is similar to setting a single buffer queue at the media processing node and is not described again here.
In addition, the technical solution of the present application also mitigates first-frame stalls. If only one cache is set at the content distribution node, the CDN has not yet accumulated enough cache for the first viewer of a live broadcast room: audio/video data is delivered in real time during live broadcast, so any jitter in the push-stream uplink network or the capture device immediately causes the viewer to stall. Concretely, when a viewer connects immediately after the anchor starts pushing the stream, the cache that can be issued to that first viewer is limited to whatever has actually been accumulated, which is often insufficient, so stalls are likely. In the present solution, data is cached once in the first buffer queue and again in the second buffer queue; with this two-level caching strategy, as long as the first buffer queue holds enough data it can be issued according to the cache configuration, the total amount of cached data is increased, and the first-frame stall experienced by the first viewer in a live broadcast room is reduced.
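The first-viewer benefit can be illustrated with a toy model of a distribution node whose cache is flushed to the first viewer that connects. The class and method names are hypothetical and the model ignores ongoing delivery; it only shows that a pre-filled platform cache gives the first viewer an initial buffer instead of an empty one.

```python
from collections import deque

class ContentDistributionNode:
    # Illustrative model of a CDN edge whose cache unit is the group of
    # pictures: everything cached before the first viewer connects is
    # flushed to that viewer at once, giving the player a starting buffer.
    def __init__(self):
        self.cache = deque()

    def on_media(self, gop):
        self.cache.append(gop)

    def on_first_viewer(self):
        burst = list(self.cache)
        self.cache.clear()
        return burst

edge = ContentDistributionNode()
for gop in ["gop0", "gop1", "gop2"]:
    edge.on_media(gop)
# The first viewer in the room receives all cached picture groups
# immediately, rather than starting with an empty buffer.
assert edge.on_first_viewer() == ["gop0", "gop1", "gop2"]
```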
The present application further provides an application scenario. Specifically, the CDN is configured with one buffer queue acting as the second buffer queue, whose cache unit is the group of pictures, and the media processing node is configured with the first buffer queue. At the uplink access node, the live data stream pushed by the live broadcast terminal is forwarded to the media processing node; at the media processing node, media processing is performed on the obtained live data stream to obtain a media stream, the media stream is placed in the first buffer queue, and the first buffer queue sends the media stream to the content distribution node in units of frames; at the content distribution node provided with the second buffer queue, in response to a live viewing request sent by the viewing terminal, all the media stream cached in the second buffer queue is issued to the viewing terminal.
Through the group-of-pictures cache of the second buffer queue in the CDN, the viewing terminal obtains a certain amount of buffer, which reduces stalls during viewing caused by network jitter, unstable anchor uplink audio/video data, an abnormal back-to-source network at the CDN node, and the like. On this basis, as shown in fig. 13, a static buffer queue is added in the media processing node. This buffer queue is configured in units of frames and is driven by a timer that periodically sends data to the CDN, with the timer aligned to the frame rate (fps) of the upstream anchor push-stream source. Its basic operation is equivalent to pulling the stream back from the uplink access node, performing container format trans-encapsulation or transcoding, and pushing the result into the buffer queue; the timer then fires once per source frame interval (the interval is 1000 ms / fps, kept to two decimal places of a millisecond) and sends the data to the CDN frame by frame, which amounts to providing a play buffer inside the media processing node. Therefore, even if the viewing terminal has to discard part of the cache issued by the second buffer queue because of its configured cache size, a play cache still exists in the media processing node and issues data to the CDN in units of frames, so the final delay of the viewing terminal relative to the live broadcast terminal is the time difference of that cache plus the cache size configured by the terminal.
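The timer interval described above (1000 ms / fps, to two decimal places of a millisecond) can be computed directly. This is a minimal sketch; the function names and the deadline helper are illustrative, not from the patent.

```python
def timer_interval_ms(fps: float) -> float:
    # Timer period aligning the play buffer with the source frame rate:
    # 1000 ms / fps, kept to two decimal places of a millisecond.
    return round(1000.0 / fps, 2)

def send_deadlines_ms(start_ms: float, fps: float, n_frames: int) -> list:
    # Timestamps at which the timer-driven queue emits each frame
    # toward the CDN, one frame per timer tick.
    step = timer_interval_ms(fps)
    return [round(start_ms + i * step, 2) for i in range(n_frames)]

# Common source frame rates.
assert timer_interval_ms(25) == 40.0
assert timer_interval_ms(30) == 33.33
assert timer_interval_ms(60) == 16.67
assert send_deadlines_ms(0, 25, 4) == [0.0, 40.0, 80.0, 120.0]
```

Pacing the drain at exactly the source frame rate keeps the queue depth stable: a steady upstream neither grows nor empties it, while short upstream gaps are absorbed by the frames already queued.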
When the accumulated interruption caused by instability of the upstream network or of the audio/video capture device at the push-stream end exceeds the receive buffer of the terminal, the extra cache time difference in the cloud platform provides additional tolerance, thereby reducing the stalls experienced by downstream viewers that are caused by an unstable live upstream push-stream network or unstable audio/video capture equipment.
With this approach, a cache of a certain size is configured in the cloud media processing layer, which increases resistance to viewer-side stalls caused by jitter in the push-stream uplink network or the audio/video equipment and improves the user's live viewing experience. As shown in fig. 14, configuring a cache of a certain size at the cloud media processing node reduces the number of live broadcast stalls by about 5%.
It should be understood that, although the steps in the flowchart of fig. 2 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, there is no strict order restriction on these steps, and they may be performed in other orders. Moreover, at least some of the steps in fig. 2 may include multiple sub-steps or stages, which are not necessarily completed at the same time but may be performed at different moments, and which are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with sub-steps or stages of other steps.
In one embodiment, a live data processing cloud platform, as in fig. 15, includes:
the uplink access node 151 is configured to push a live data stream pushed by a live terminal to a media processing node;
the media processing node 152 is configured to perform media processing on the obtained live data stream of the live terminal to obtain a media stream, and distribute the media stream to the content distribution node;
the content distribution node 153 is configured to respond to a live viewing request sent by a viewing terminal, and send the media stream to the viewing terminal;
wherein at least two buffer queues are arranged among the uplink access node, the media processing node and the content distribution node; the first buffer queue in the stream pushing direction issues the live broadcast data to the next service node in units of frames; the last buffer queue in the stream pushing direction sends all the buffered live broadcast data to the next service node; and the cache unit of the last buffer queue is a group of pictures.
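The statement that the cache unit of the last buffer queue is a group of pictures can be illustrated with a small grouping routine. This is an illustrative model only: the frame representation is hypothetical, and real container formats carry keyframe flags differently.

```python
def split_into_gops(frames):
    # Group a frame sequence into picture groups: each group starts at a
    # keyframe ("I") and runs until the next keyframe. Frames are
    # (type, payload) tuples; this is an illustrative model only.
    gops, current = [], []
    for frame in frames:
        ftype = frame[0]
        if ftype == "I" and current:
            gops.append(current)
            current = []
        current.append(frame)
    if current:
        gops.append(current)
    return gops

stream = [("I", 0), ("P", 1), ("P", 2), ("I", 3), ("P", 4)]
assert split_into_gops(stream) == [
    [("I", 0), ("P", 1), ("P", 2)],
    [("I", 3), ("P", 4)],
]
```

Caching in GOP units matters because a viewer can only start decoding from a keyframe; issuing whole picture groups guarantees that the burst delivered to a new viewer is immediately playable.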
In another embodiment, the at least two buffer queues include a first buffer queue and a second buffer queue;
the setting mode of the first buffer queue and the second buffer queue comprises any one of the following modes:
the first method comprises the following steps: the first buffer queue is arranged at the uplink access node, and the second buffer queue is arranged at the media processing node;
and the second method comprises the following steps: the first buffer queue is arranged at the uplink access node, and the second buffer queue is arranged at the content distribution node;
and the third is that: the first buffer queue is arranged at the media processing node; the second buffer queue is arranged at the content distribution node.
In another embodiment, when the media processing node is provided with a cache queue, the media processing node is configured to perform either of the following:
storing the obtained live broadcast data stream of the live broadcast terminal in the cache queue, performing media processing on the live broadcast data stream in the cache queue to obtain a media stream, and distributing the media stream to the content distribution node; or
performing media processing on the acquired live broadcast data stream of the live broadcast terminal to obtain a media stream, storing the media stream in the cache queue, and distributing the media stream in the cache queue to the content distribution node.
In another embodiment, the at least two buffer queues include a first buffer queue and a second buffer queue; the first buffer queue and the second buffer queue are simultaneously arranged at the uplink access node, or at the media processing node, or at the content distribution node.
In another embodiment, when the first buffer queue and the second buffer queue are simultaneously disposed at the uplink access node, the uplink access node is configured to obtain a live broadcast data stream pushed by a live broadcast terminal, store the live broadcast data stream in the first buffer queue, send the live broadcast data stream in the first buffer queue to the second buffer queue in units of frames, and push all live broadcast data streams buffered in the second buffer queue to the media processing node.
In another embodiment, when the first buffer queue and the second buffer queue are simultaneously arranged at the content distribution node, the content distribution node is configured to store the distributed media stream into the first buffer queue, and send the media stream in the first buffer queue to the second buffer queue in units of frames; and responding to a live broadcast watching request sent by a watching terminal, and sending all the media streams cached in the second cache queue to the watching terminal.
In another embodiment, when the first buffer queue and the second buffer queue are both at the media processing node, the media processing node is configured to perform any one of the following:
storing the acquired live broadcast data stream of the live broadcast terminal in the first buffer queue, performing media processing on the live broadcast data stream in the first buffer queue to obtain a media stream, sending the media stream to the second buffer queue in units of frames, and distributing all the media stream cached in the second buffer queue to the content distribution node; or
storing the acquired live broadcast data stream of the live broadcast terminal in the first buffer queue, sending the live broadcast data stream in the first buffer queue to the second buffer queue in units of frames, performing media processing on the live broadcast data stream in the second buffer queue to obtain a media stream, and distributing all the media stream in the second buffer queue to the content distribution node; or
performing media processing on the acquired live broadcast data stream of the live broadcast terminal to obtain a media stream, storing the media stream in the first buffer queue, sending the media stream in the first buffer queue to the second buffer queue in units of frames, and distributing all the media stream cached in the second buffer queue to the content distribution node.
For specific limitations of the live data processing cloud platform, reference may be made to the limitations of the live data processing method above, which are not repeated here. Each module in the live data processing cloud platform may be implemented in whole or in part by software, hardware, or a combination of the two. The modules may be embedded in or independent of a processor in the computer device in hardware form, or stored in a memory in the computer device in software form, so that the processor can call and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 16. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing live data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a live data processing method.
Those skilled in the art will appreciate that the structure shown in fig. 16 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer device to which the solution is applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is further provided, which includes a memory and a processor, the memory stores a computer program, and the processor implements the steps of the above method embodiments when executing the computer program.
In an embodiment, a computer-readable storage medium is provided, in which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
In one embodiment, a computer program product or computer program is provided that includes computer instructions stored in a computer-readable storage medium. The computer instructions are read by a processor of a computer device from a computer-readable storage medium, and the computer instructions are executed by the processor to cause the computer device to perform the steps in the above-mentioned method embodiments.
It will be understood by those skilled in the art that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing the relevant hardware. The computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database or another medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical storage and the like. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static random access memory (SRAM) and dynamic random access memory (DRAM).
All possible combinations of the technical features in the above embodiments are not described for the sake of brevity; however, as long as there is no contradiction among the combinations of these technical features, they should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention patent. It should be noted that those of ordinary skill in the art can make several variations and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A live data processing method, characterized in that the method comprises:
pushing the live broadcast data stream pushed by the live broadcast terminal to a media processing node at an uplink access node;
at a media processing node, performing media processing on the obtained live broadcast data stream of the live broadcast terminal to obtain a media stream, and distributing the media stream to a content distribution node;
at the content distribution node, responding to a live broadcast watching request sent by a watching terminal, and sending the media stream to the watching terminal;
at least two buffer queues are arranged at the uplink access node, the media processing node and the content distribution node, and the first buffer queue in the stream pushing direction sends the live broadcast data to the next service node by taking a frame as a unit; the last buffer queue in the stream pushing direction sends all the buffered live broadcast data to the next service node; and the buffer unit of the last buffer queue is a picture group.
2. The method of claim 1, wherein the at least two buffer queues comprise a first buffer queue and a second buffer queue;
the setting mode of the first buffer queue and the second buffer queue comprises any one of the following modes:
the first method comprises the following steps: the first buffer queue is arranged at the uplink access node, and the second buffer queue is arranged at the media processing node;
and the second method comprises the following steps: the first buffer queue is arranged at the uplink access node, and the second buffer queue is arranged at the content distribution node;
and the third is that: the first buffer queue is arranged at the media processing node; the second buffer queue is arranged at the content distribution node.
3. The method according to claim 2, wherein when the media processing node is provided with a cache queue, the media processing node performs media processing on the obtained live data stream of the live terminal to obtain a media stream, and distributes the media stream to the content distribution node, and the method includes any one of the following manners:
the first method comprises the following steps: storing the obtained live broadcast data stream of the live broadcast terminal in a cache queue at a media processing node, and carrying out media processing on the live broadcast data stream in the cache queue to obtain a media stream; distributing the media stream to a content distribution node;
and the second method comprises the following steps: and at the media processing node, performing media processing on the acquired live broadcast data stream of the live broadcast terminal to obtain a media stream, storing the media stream into a cache queue, and distributing the media stream in the cache queue to the content distribution node.
4. The method of claim 1, wherein the at least two buffer queues comprise a first buffer queue and a second buffer queue; the first buffer queue and the second buffer queue are simultaneously arranged at the uplink access node, or at the media processing node, or at the content distribution node.
5. The method of claim 4, wherein when the first buffer queue and the second buffer queue are simultaneously set in the uplink access node, the pushing, by the uplink access node, the live data stream pushed by the live terminal to the media processing node comprises:
and at the uplink access node, acquiring a live broadcast data stream pushed by a live broadcast terminal, storing the live broadcast data stream into a first cache queue, sending the live broadcast data stream in the first cache queue to a second cache queue by taking a frame as a unit, and pushing all the live broadcast data streams cached in the second cache queue to the media processing node.
6. The method of claim 4, wherein when the first buffer queue and the second buffer queue are simultaneously set at the content distribution node, in response to a live viewing request sent by a viewing terminal, the sending the media stream to the viewing terminal at the content distribution node comprises: storing the distributed media stream into a first cache queue at the content distribution node, and sending the media stream in the first cache queue to a second cache queue by taking a frame as a unit; and responding to a live broadcast watching request sent by a watching terminal, and sending all the media streams cached in the second cache queue to the watching terminal.
7. The method of claim 4, wherein when the first buffer queue and the second buffer queue are both arranged at the media processing node, the performing, at the media processing node, media processing on the acquired live data stream of the live terminal to obtain a media stream and distributing the media stream to the content distribution node comprises any one of the following modes:
a first mode: at the media processing node, storing the acquired live data stream of the live terminal in the first buffer queue, and performing media processing on the live data stream in the first buffer queue to obtain a media stream; sending the media stream to the second buffer queue frame by frame, and distributing all the media streams buffered in the second buffer queue to the content distribution node;
a second mode: at the media processing node, storing the acquired live data stream of the live terminal in the first buffer queue; sending the live data stream in the first buffer queue to the second buffer queue frame by frame; performing media processing on the live data stream in the second buffer queue to obtain a media stream; and distributing all the media streams in the second buffer queue to the content distribution node;
a third mode: at the media processing node, performing media processing on the acquired live data stream of the live terminal to obtain a media stream; storing the media stream in the first buffer queue, and sending the media stream in the first buffer queue to the second buffer queue frame by frame; and distributing all the media streams buffered in the second buffer queue to the content distribution node.
8. A live broadcast data processing cloud platform, comprising:
an uplink access node, configured to push the live data stream pushed by a live terminal to a media processing node;
the media processing node, configured to perform media processing on the acquired live data stream of the live terminal to obtain a media stream, and distribute the media stream to a content distribution node;
the content distribution node, configured to send the media stream to a viewing terminal in response to a live viewing request sent by the viewing terminal;
wherein the first buffer queue in the stream pushing direction issues live data to the next service node frame by frame; the last buffer queue in the stream pushing direction sends all of its buffered live data to the next service node; and the buffer unit of the last buffer queue is a group of pictures.
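The end-to-end behavior of the claim-8 platform can be sketched as follows. This is an illustrative Python sketch under stated assumptions, not the patent's implementation: the names `LivePipeline` and `process_media` are hypothetical, `process_media` stands in for whatever transcoding or mixing the media processing node performs, and the group-of-pictures buffer is modeled as a simple deque that each new viewer receives in full.

```python
from collections import deque

def process_media(frame):
    # Hypothetical stand-in for the media processing node's work
    # (e.g. transcoding or stream mixing).
    return f"enc({frame})"

class LivePipeline:
    """Illustrative sketch of the claim-8 cloud platform: uplink access ->
    media processing -> content distribution. The first buffer queue in the
    push direction forwards live data frame by frame; the last buffer queue
    releases everything it has buffered (a group of pictures) at once."""

    def __init__(self):
        self.first_queue = deque()  # at the uplink access node: per-frame forwarding
        self.last_queue = deque()   # at the content distribution node: per-GOP delivery

    def push(self, frame):
        # First queue issues live data downstream one frame at a time;
        # each frame is media-processed on its way to the last queue.
        self.first_queue.append(frame)
        self.last_queue.append(process_media(self.first_queue.popleft()))

    def on_view_request(self, viewer):
        # The last queue sends ALL buffered media to the requesting viewer,
        # so playback can begin from a decodable keyframe immediately.
        # The buffer is kept so later viewers receive the same group.
        gop = list(self.last_queue)
        viewer.extend(gop)
        return gop
```

Serving the whole buffered group of pictures to each new viewer is what lets a joining viewer start decoding without waiting for the next keyframe in the live stream.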
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 7.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN202210092864.5A 2022-01-26 2022-01-26 Live broadcast data processing method, cloud platform, computer equipment and storage medium Active CN114501052B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210092864.5A CN114501052B (en) 2022-01-26 2022-01-26 Live broadcast data processing method, cloud platform, computer equipment and storage medium


Publications (2)

Publication Number Publication Date
CN114501052A true CN114501052A (en) 2022-05-13
CN114501052B CN114501052B (en) 2022-10-25

Family

ID=81474493

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210092864.5A Active CN114501052B (en) 2022-01-26 2022-01-26 Live broadcast data processing method, cloud platform, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114501052B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115086285A (en) * 2022-06-02 2022-09-20 深圳市欢太科技有限公司 Data processing method and device, storage medium and electronic equipment
CN117221617A (en) * 2023-09-28 2023-12-12 杭州星犀科技有限公司 Live broadcast push flow system, method and computer storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180167486A1 (en) * 2016-12-12 2018-06-14 Verizon Patent And Licensing Inc. User device ad-hoc distributed caching of content
CN108235120A (en) * 2018-03-23 2018-06-29 北京潘达互娱科技有限公司 Live video stream method for pushing, device and electronic equipment
CN108347622A (en) * 2018-03-06 2018-07-31 腾讯科技(深圳)有限公司 Multi-medium data method for pushing, device, storage medium and equipment
CN109348279A (en) * 2018-09-26 2019-02-15 广州虎牙信息科技有限公司 A kind of plug-flow method, apparatus, equipment and storage medium
CN113382278A (en) * 2021-06-11 2021-09-10 中国电信股份有限公司 Video pushing method and device, electronic equipment and readable storage medium
US20210352336A1 (en) * 2019-04-23 2021-11-11 Huawei Technologies Co., Ltd. Media Stream Sending Method and Apparatus, and Device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Shen Junxiang: "Implementing Mainstream Internet Live Streaming Services in a Cache System", Computer Programming Skills & Maintenance *


Also Published As

Publication number Publication date
CN114501052B (en) 2022-10-25

Similar Documents

Publication Publication Date Title
US11470405B2 (en) Network video streaming with trick play based on separate trick play files
US8301732B2 (en) Live media delivery over a packet-based computer network
US8776150B2 (en) Implementation method and system for a media-on-demand frame-spanning playing mode in a peer-to-peer network
US9615119B2 (en) Method and apparatus for providing timeshift service in digital broadcasting system and system thereof
CN114501052B (en) Live broadcast data processing method, cloud platform, computer equipment and storage medium
US20140359678A1 (en) Device video streaming with trick play based on separate trick play files
US20140297804A1 (en) Control of multimedia content streaming through client-server interactions
US11863841B2 (en) Video playing control method and system
US20190166395A1 (en) Fast Channel Change In A Video Delivery Network
CN108881931B (en) Data buffering method and network equipment
US9049481B2 (en) Fine-tuning the time for leaving/joining a multicast session during channel changes
US20110082943A1 (en) P2p network system and data transmitting and receiving method thereof
CN113141522B (en) Resource transmission method, device, computer equipment and storage medium
US9338204B2 (en) Prioritized side channel delivery for download and store media
CN112312162B (en) Video server for transmitting video stream
US10972761B2 (en) Minimizing stall duration tail probability in over-the-top streaming systems
US20220295127A1 (en) Consolidating content streams to conserve bandwidth
WO2009103351A1 (en) Method and apparatus for obtaining media over a communications network
US10893338B1 (en) Method for unified ad delivery to consumer devices within service provider networks
US9924239B2 (en) Video on demand over satellite
CN111405325B (en) Video content distribution method and device and electronic equipment
CN112565906A (en) On-demand video processing method and system
JP7419151B2 (en) Server device, information processing method and program
US11949945B2 (en) Dynamic creation of low latency video streams in a live event
JP7438835B2 (en) Server device, communication system, program and information processing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant