CN117119208A - Live broadcast return source scheduling method and device, storage medium and electronic equipment - Google Patents

Live broadcast return source scheduling method and device, storage medium and electronic equipment

Info

Publication number
CN117119208A
CN117119208A
Authority
CN
China
Prior art keywords
node
live stream
target live
source
stream
Prior art date
Legal status
Pending
Application number
CN202310839072.4A
Other languages
Chinese (zh)
Inventor
刘勇江
张建锋
杨成进
Current Assignee
Shanghai Bilibili Technology Co Ltd
Original Assignee
Shanghai Bilibili Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Bilibili Technology Co Ltd filed Critical Shanghai Bilibili Technology Co Ltd
Priority to CN202310839072.4A priority Critical patent/CN117119208A/en
Publication of CN117119208A publication Critical patent/CN117119208A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/239Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests
    • H04N21/2393Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests involving handling client requests
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/262Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists
    • H04N21/26208Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists the scheduling operation being performed under constraints
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present disclosure provides a scheduling method, apparatus, computer program product, non-transitory computer-readable storage medium, and electronic device for live-streaming back-to-source. The method comprises: receiving a back-to-source request for a target live stream sent by a lower-level node; determining, according to the popularity information of the target live stream, the state information of a plurality of downstream nodes, and the source-station information of the target live stream, an upper-level node for providing the target live stream to the lower-level node; and sending the identifier of the upper-level node to the lower-level node. Embodiments of the disclosure avoid degrading back-to-source quality when a downstream node performs abnormally, which helps improve the stability and reliability of live-streaming back-to-source.

Description

Live broadcast return source scheduling method and device, storage medium and electronic equipment
Technical Field
The present disclosure relates generally to the field of computer technology, and more particularly, to a method, apparatus, computer program product, non-transitory computer-readable storage medium, and electronic device for live-streaming back-to-source scheduling.
Background
This section is intended to introduce a few aspects of the art that may be related to various aspects of the present disclosure that are described and/or claimed below. This section is believed to help provide background information to facilitate a better understanding of various aspects of the disclosure. It should therefore be understood that these statements are to be read in this light, and not as admissions of prior art.
CDN stands for content delivery network, a content-service system built on top of the IP network. By strategically constructing and deploying widely distributed edge servers, together with load-balancing and central scheduling strategies, users can access a nearby available edge server and obtain the required content, improving cache hit rate and response speed. A typical CDN delivery system has a tree structure, and the load capacity of the whole system is increased through layering and port multiplexing. According to the decisions of the scheduling system, an edge CDN node close to the user obtains the real-time media stream or static file from the source station (origin) through layer-by-layer back-to-source, and then serves it to the user.
In existing back-to-source schemes, once a node that aggregates back-to-source traffic saturates its bandwidth, abnormal conditions such as excessive CPU load or network fluctuation can degrade the response time of the whole back-to-source link, the back-to-source quality, and the user's playback experience.
Therefore, there is a need to propose a new solution to alleviate or solve at least one of the above-mentioned problems.
Disclosure of Invention
The application aims to provide a live-streaming back-to-source scheduling method, apparatus, computer program product, non-transitory computer-readable storage medium, and electronic device, so as to avoid degrading back-to-source quality when a downstream node performs abnormally, and to improve the stability and reliability of live-streaming back-to-source.
According to a first aspect of the present disclosure, there is provided a live-streaming back-to-source scheduling method, implemented based on a content delivery network that includes a plurality of downstream nodes, the method comprising: receiving a back-to-source request for a target live stream sent by a lower-level node; determining, according to the popularity information of the target live stream, the state information of the plurality of downstream nodes, and the source-station information of the target live stream, an upper-level node for providing the target live stream to the lower-level node; and sending the identifier of the upper-level node to the lower-level node.
According to a second aspect of the present disclosure, there is provided a live-streaming back-to-source scheduling apparatus, implemented based on a content delivery network that includes a plurality of downstream nodes, the apparatus comprising: a receiving module for receiving a back-to-source request for a target live stream sent by a lower-level node; a determining module for determining, according to the popularity information of the target live stream, the state information of the plurality of downstream nodes, and the source-station information of the target live stream, an upper-level node for providing the target live stream to the lower-level node; and a sending module for sending the identifier of the upper-level node to the lower-level node.
According to a third aspect of the present disclosure, there is provided a computer program product comprising program code instructions which, when the program product is executed by a computer, cause the computer to perform the method according to the first aspect of the present disclosure.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform the method according to the first aspect of the present disclosure.
According to a fifth aspect of the present disclosure, there is provided an electronic device comprising: a processor, a memory in electronic communication with the processor; and instructions stored in the memory and executable by the processor to cause the electronic device to perform the method according to the first aspect of the present disclosure.
In embodiments of the present disclosure, the upper-level node that provides the target live stream to the lower-level node is determined according to the popularity information of the target live stream, the state information of a plurality of downstream nodes, and the source-station information of the target live stream. This avoids degrading back-to-source quality when a downstream node performs abnormally, and improves the reliability and stability of live-streaming back-to-source.
It should be understood that what is described in this section is not intended to identify key or essential features of the claimed subject matter, nor is it intended to be used solely to determine the scope of the claimed subject matter.
Drawings
In order to more clearly illustrate the embodiments of the present disclosure or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present disclosure, and other drawings may be obtained according to these drawings without inventive effort to a person of ordinary skill in the art. Throughout the drawings, identical reference numerals designate similar, but not necessarily identical, elements.
FIG. 1 illustrates a system architecture diagram for one embodiment of the live-streaming back-to-source scheduling method according to the present disclosure;
FIG. 2 illustrates a flowchart of one embodiment of the live-streaming back-to-source scheduling method according to the present disclosure;
FIG. 3A illustrates a schematic diagram of a specific example of one embodiment of the live-streaming back-to-source scheduling method according to the present disclosure;
FIG. 3B illustrates a schematic diagram of a hot-stream back-to-source path according to one embodiment of the live-streaming back-to-source scheduling method of the present disclosure;
FIG. 3C illustrates a schematic diagram of a cold-stream back-to-source path according to one embodiment of the live-streaming back-to-source scheduling method of the present disclosure;
FIG. 4 illustrates an exemplary block diagram of one embodiment of the live-streaming back-to-source scheduling apparatus according to the present disclosure;
FIG. 5 shows a schematic diagram of an example electronic device 500 that may be used to implement embodiments of the present disclosure.
Detailed description of the preferred embodiments
The present disclosure will be described more fully hereinafter with reference to the accompanying drawings. However, the present disclosure may be embodied in many alternate forms and should not be construed as limited to the embodiments set forth herein. Thus, while the disclosure is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that there is no intention to limit the disclosure to the specific forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the disclosure as defined by the appended claims.
It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another element. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the teachings of the present disclosure.
Some examples are described herein in connection with block diagrams and/or flow charts, wherein each block represents a module, segment, or portion of code that comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in other implementations, the functions noted in the blocks may occur out of the order noted. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
Reference herein to "an embodiment according to … …" or "in an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one implementation of the disclosure. The appearances of the phrase "in accordance with an embodiment" or "in an embodiment" in various places herein are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments necessarily mutually exclusive of other embodiments.
First, terms related to one or more embodiments of the present application will be explained.
Back-to-source: after a user accesses a nearby node, if that node does not hold the required live stream, it must request (pull) the stream from its upper-level node; this action is called back-to-source (origin pull). In embodiments of the present disclosure, the source station (origin) is the server to which the streamer uploads live content in real time, the edge node is the server from which users ultimately watch the live broadcast, and one or more layers of secondary source nodes, i.e., relay servers, sit between the source station and the edge node. When a user accesses an edge node that does not hold the required live stream, the edge node requests the stream level by level from the secondary source nodes above it, up to the source station; this process is back-to-source.
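The layer-by-layer pull described above can be sketched as follows. This is a minimal illustration under assumed data structures (a node as a dict with a local cache and an `upper` pointer), not the patented scheduling logic:

```python
def back_to_source(node, stream_id):
    """Fetch a stream by pulling level by level toward the origin.

    Each node first checks its local cache; on a miss it requests
    (pulls) the stream from its upper-level node, recursively, until
    the source station is reached. Nodes cache the stream on the way
    back so later requests hit closer to the edge.
    """
    if stream_id in node["cache"]:
        return node["cache"][stream_id]
    if node["upper"] is None:
        raise LookupError(f"stream {stream_id} not found at origin")
    data = back_to_source(node["upper"], stream_id)
    node["cache"][stream_id] = data  # populate cache on the return path
    return data

# Origin holds the stream; the edge node pulls it through the relay.
origin = {"cache": {"room42": "live-bytes"}, "upper": None}
relay = {"cache": {}, "upper": origin}
edge = {"cache": {}, "upper": relay}
```

After one pull through the chain, the relay also holds a cached copy, which is exactly the aggregation effect the tree structure relies on.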
Stream pushing: a terminal device captures video data and, after encoding, transmits it over the network to a streaming-media server, which is either self-built or provided by a CDN.
Stream pulling: a terminal device or player downloads, i.e., pulls, a designated media stream from a server to the local machine. In embodiments of the present disclosure, an edge CDN node performing back-to-source toward a push node corresponds to that edge CDN node pulling the stream from the push node.
In implementations of the present disclosure, a source station may also be referred to as an upstream push node.
Fig. 1 illustrates an exemplary system architecture 100 to which embodiments of the live-streaming back-to-source scheduling methods, apparatus, terminal devices, and storage media of the present disclosure may be applied.
As shown in fig. 1, a system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 is used as a medium to provide communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The user may interact with the server 105 via the network 104 using the terminal devices 101, 102, 103 to receive or send messages or the like. Various communication client applications, such as a voice interaction type application, a video conference type application, a short video social type application, a web browser application, a shopping type application, a search type application, an instant messaging tool, a mailbox client, social platform software, etc., may be installed on the terminal devices 101, 102, 103.
The terminal devices 101, 102, 103 may be hardware or software. When they are hardware, they may be various electronic devices with microphones and speakers, including but not limited to smartphones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, portable computers, desktop computers, and the like. When the terminal devices 101, 102, 103 are software, they can be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules, or as a single piece of software or software module. This is not specifically limited herein.
The server 105 may be a server providing various services; for example, the server 105 may be a background server processing a live-streaming back-to-source scheduling request transmitted by the terminal devices 101, 102, 103.
In some cases, the live-streaming back-to-source scheduling method provided by the present disclosure may be executed by the server 105; correspondingly, the live-streaming back-to-source scheduling apparatus may be set in the server 105, in which case the system architecture 100 may not include the terminal devices 101, 102, 103.
In some cases, the live-streaming back-to-source scheduling method provided by the present disclosure may be performed jointly by the terminal devices 101, 102, 103 and the server 105. Accordingly, the live-streaming back-to-source scheduling apparatus may be provided in the terminal devices 101, 102, 103 and the server 105, respectively.
The server 105 may be hardware or software. When the server 105 is hardware, it may be implemented as a distributed server cluster formed by a plurality of servers, or as a single server. When server 105 is software, it may be implemented as a plurality of software or software modules (e.g., to provide distributed services), or as a single software or software module. The present application is not particularly limited herein.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Fig. 2 illustrates a flowchart of one embodiment of the live-streaming back-to-source scheduling method according to the present disclosure.
The method in this embodiment is implemented based on a CDN that includes a plurality of downstream nodes. Downstream nodes can be further divided into downstream edge nodes, single-line relay nodes, and multi-line relay nodes. A downstream edge node handles user requests nearby and provides the live stream directly to the user's terminal device. A single-line relay node is an intermediate node between the source station and a downstream edge node that supports only a single network operator. A multi-line relay node is an intermediate node between the source station and a downstream edge node that supports multiple operators.
The method in this embodiment may be implemented by the server in fig. 1. Further, the server may be a server dedicated to central scheduling or path planning in the CDN.
As shown in fig. 2, the method comprises the steps of:
step 210, receiving a source-returning request for the target live stream sent by the subordinate node.
The target live stream in this embodiment may be a real-time live stream or a past time-shifted stream (also referred to as a non-real-time live stream). A time-shifted stream can be obtained by recording a real-time live stream.
In this embodiment, the back-to-source request may carry a stream identifier of the target live stream and a node identifier of the lower-level node.
Step 220: determine, according to the popularity information of the target live stream, the state information of a plurality of downstream nodes, and the source-station information of the target live stream, an upper-level node for providing the target live stream to the lower-level node.
In this embodiment, the popularity of a live stream characterizes how much attention or interest it attracts. It can be determined from information such as the current number of viewers, the current number of followers of the live room, the historical maximum number of viewers, and the historical maximum number of followers of the live room.
In an alternative embodiment, live streams may be split into hot streams and cold streams according to their popularity information. Note that the classification is not limited to two tiers; streams may, for example, be divided into hot, warm, and cold.
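A hot/cold split over the popularity signals listed above might look like the sketch below. The patent does not specify a formula; the weights, threshold, and function names here are illustrative assumptions only:

```python
def popularity_score(current_viewers, current_followers,
                     peak_viewers, peak_followers):
    """Combine the four signals mentioned in the text into one score.

    The weights are assumptions for illustration; the disclosure only
    says popularity is determined from these kinds of signals.
    """
    return (0.5 * current_viewers + 0.2 * current_followers
            + 0.2 * peak_viewers + 0.1 * peak_followers)

def classify(score, hot_threshold=10_000):
    """Binary hot/cold split; a 'warm' tier could be added with a
    second, lower threshold, as the text notes."""
    return "hot" if score >= hot_threshold else "cold"
```

A stream with 30,000 current viewers classifies as hot under these assumed weights, while a near-empty room classifies as cold.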
In this embodiment, the state information of a downstream node describes its performance status.
In an alternative embodiment, the state information of the downstream nodes may include fault information and load information of each downstream node. Fault information covers, for example, disk damage or network jitter; load information covers, for example, bandwidth load and CPU load.
In an alternative embodiment, a failed downstream node is deleted from the candidate list of upper-level nodes, and a downstream node whose load index exceeds a preset threshold has a reduced probability of being selected as the upper-level node. In this way, the influence of abnormal nodes on the back-to-source path is reduced.
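The filter-then-downweight rule above can be sketched as a weighted random selection. The field names (`faulty`, `load`) and the reduced weight of 0.2 are assumptions; the disclosure says only that faulty nodes are removed and overloaded nodes become less likely to be chosen:

```python
import random

def select_upper_node(candidates, load_threshold=0.8, seed=None):
    """Pick an upper-level node from candidate downstream nodes.

    Faulty nodes are removed outright; nodes whose load exceeds the
    threshold stay selectable but with a much smaller weight, mirroring
    the 'reduce the probability' rule in the text.
    """
    healthy = [c for c in candidates if not c["faulty"]]
    if not healthy:
        raise RuntimeError("no healthy candidate for upper-level node")
    weights = [0.2 if c["load"] > load_threshold else 1.0 for c in healthy]
    rng = random.Random(seed)  # seed only for reproducible examples
    return rng.choices(healthy, weights=weights, k=1)[0]
```

A faulty node can never be returned, while a heavily loaded node is merely unlikely to be, which keeps some capacity available without concentrating traffic on it.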
In this embodiment, the source-station information of the target live stream refers to the information of the real source station corresponding to that stream. When a real-time live stream is recorded, the recorded time-shifted stream is usually stored on other nodes to relieve pressure on the upstream push node, so the real-time live stream and the recorded time-shifted stream are located at different source stations. When the target live stream is a time-shifted stream, the source station reported by the push service is not the real source station of the time-shifted stream, and the real source station would otherwise have to be acquired through address redirection.
In this embodiment, the source-station information of the target live stream is stored in the execution body of the method (e.g., the scheduling server), so the lower-level node neither needs to request the source-station information separately from the push service nor needs to perform an address-redirection operation, which helps improve the back-to-source speed.
In this embodiment, the upper-level node is configured to provide the target live stream to the lower-level node. Here, "upper-level" and "lower-level" are relative terms. For two adjacent downstream nodes on a back-to-source path, the one closer to the source station is the upper level, and the one closer to the user's terminal device is the lower level. For example, referring to fig. 3A, on the back-to-source path "downstream edge node (operator A) - single-line relay node (operator A) - multi-line relay node (operator A line)", the single-line relay node (operator A) is the upper-level node relative to the downstream edge node (operator A) and the lower-level node relative to the multi-line relay node (operator A line). For the multi-line relay node (operator A line), its upper-level node is the upstream push node (operator A).
Step 230: send the identifier of the upper-level node to the lower-level node.
In this embodiment, a node's identifier distinguishes it from other nodes. Illustratively, the identifier may be the node's access address.
In embodiments of the present disclosure, the upper-level node that provides the target live stream to the lower-level node is determined according to the popularity information of the target live stream, the state information of a plurality of downstream nodes, and the source-station information of the target live stream. This avoids degrading back-to-source quality when a downstream node performs abnormally, and improves the reliability and stability of live-streaming back-to-source.
The method in this embodiment may be implemented based on a short-connection protocol, for example HLS (HTTP Live Streaming).
In this embodiment, each time a downstream node at any level needs to acquire the target live stream, it interacts with the execution body of the method (e.g., the central scheduling server) to determine its corresponding upper-level node.
In this embodiment, the short-connection protocol and the interaction between each downstream node and the central scheduling server enable real-time adjustment of the back-to-source path. For example, when a cold stream becomes a hot stream as its popularity increases, the length of its back-to-source path is increased in real time. For another example, when a downstream node on the back-to-source path fails, it is removed from the path in real time.
In this embodiment, the downstream nodes at each level corresponding to the target live stream form the back-to-source path of the target live stream.
In an alternative embodiment, when the target live stream is a hot stream, the length of its back-to-source path is a first length; when it is a cold stream, the length is a second length, with the first length greater than the second length.
In this embodiment, a hot stream takes a longer back-to-source path that can include more levels of relay nodes, which helps relieve the aggregation pressure on the relay nodes. A cold stream takes a shorter back-to-source path, which reduces back-to-source bandwidth cost.
In an alternative embodiment, a hot stream's back-to-source path may take a three-level structure, i.e., "downstream edge node - single-line relay node - multi-line relay node". As shown in fig. 3A, the target live stream requested by client 1 is a hot stream, and its data transmission path is "upstream push node (operator A) - multi-line relay node (operator A line) - single-line relay node (operator A) - downstream edge node (operator A) - client 1". For a cold stream, the downstream edge node can be removed, and one or more single-line relay nodes can directly serve as the back-to-source intermediary between the terminal device and the source station. As also shown in fig. 3A, the target live stream requested by client 2 is a cold stream, and its data transmission path is "upstream push node (operator A) - single-line relay node (operator A) - client 2".
Generally, data transmission across operators increases back-to-source cost and reduces back-to-source quality.
In an alternative embodiment, for a hot stream, the uppermost node in the back-to-source path (i.e., the node that interacts directly with the source station) is a multi-line relay node. As shown in fig. 3B, a three-level back-to-source path is employed to converge the stream: single-line relay nodes belonging to the same operator as their edge nodes are used, and these finally converge to a multi-line relay node. This avoids cross-operator back-to-source on the main line (i.e., the line between the source station and the uppermost downstream node), maximizes sharing of downstream bandwidth, and provides more stable transmission quality for hot streams. In fig. 3B, one hot stream may use the transmission path "upstream push node (operator A) - multi-line relay node (operator A line) - single-line relay node (operator A) - downstream edge node (operator A)", and another may use "upstream push node (operator B) - multi-line relay node (operator B line) - single-line relay node (operator B) - downstream edge node (operator B)".
In an alternative embodiment, as shown in fig. 3C, for a cold stream whose single-line relay node (operator A) and upstream push node (operator B) belong to different operators, a single-line relay node (operator B) belonging to the same operator as the upstream push node is added between them. This ensures that the uppermost node in the back-to-source path and the corresponding source station belong to the same operator, avoiding cross-operator back-to-source on the main line.
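The same-operator guarantee for the main line can be sketched as a post-processing step on a planned path. The dict fields and the relay pool are assumptions for illustration:

```python
def ensure_same_operator_top(path, origin_operator, relay_pool):
    """Make the uppermost node match the origin's operator.

    `path` is ordered edge side first, origin side last. If the
    uppermost node belongs to a different operator than the origin, a
    single-line relay of the origin's operator (drawn from an assumed
    pool) is appended, so the main line between the origin and the
    uppermost node never crosses operators.
    """
    top = path[-1]
    if top["operator"] == origin_operator:
        return path
    relay = next(r for r in relay_pool
                 if r["operator"] == origin_operator)
    return path + [relay]
```

For the fig. 3C scenario (single-line relay on operator A, origin on operator B), this inserts an operator-B relay at the top; when operators already match, the path is returned unchanged.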
Fig. 3A illustrates a specific example of one embodiment of the live-streaming back-to-source scheduling method according to the present disclosure. As shown in fig. 3A, the central service performs path planning according to the stream popularity information, the downstream CDN nodes' back-to-source information (i.e., the state information of the downstream nodes), and the origin information (i.e., the source-station information). Each downstream node interacts with the central service (dashed lines in fig. 3A) to obtain the address of its upper-level node.
For the hot stream requested by client 1, a longer back-to-source path is planned, in which the uppermost node is a multi-line relay node belonging to the same operator as the upstream push node.
For the cold stream requested by client 2, a shorter back-to-source path is planned, in which client 2 interacts directly with a single-line relay node that belongs to the same operator as the upstream push node.
Fig. 4 illustrates an exemplary block diagram of a live back-to-source scheduling apparatus according to an embodiment of the present disclosure. As shown in fig. 4, the live back-to-source scheduling apparatus 400 includes: a receiving module 410, configured to receive a back-to-source request for a target live stream sent by a lower node; a determining module 420, configured to determine, according to heat information of the target live stream, state information of the plurality of downstream nodes, and source station information of the target live stream, an upper node for providing the target live stream to the lower node; and a sending module 430, configured to send an identifier of the upper node to the lower node.
It should be appreciated that the various modules of the apparatus 400 shown in fig. 4 may correspond to the various steps of the method 200 described with reference to fig. 2. Thus, the operations, features, and advantages described above with respect to the method 200 apply equally to the apparatus 400 and the modules it comprises. For brevity, certain operations, features, and advantages are not described again here.
In an alternative embodiment, the downstream nodes of each level corresponding to the target live stream form a back-to-source path of the target live stream, where the length of the back-to-source path is a first length when the target live stream is a hot stream and a second length when the target live stream is a cold stream, the first length being greater than the second length.
In an alternative embodiment, in a case that the target live stream is a cold stream, the lowest-level node in the back-to-source path is a single-line relay node.
In an alternative embodiment, in a case that the target live stream is a hot stream, the uppermost node in the back-to-source path is a multi-line relay node.
In an alternative embodiment, the state information of the plurality of downstream nodes includes fault information and load information of each downstream node.
In an alternative embodiment, the determining module 420 is further configured to: delete a faulty downstream node from the candidate list of upper nodes; and, for a downstream node whose load index exceeds a preset threshold, reduce the probability that the node is selected as the upper node.
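The fault and load handling above can be sketched as a weighted selection. This is a hedged illustration, assuming a simple load index per node and an invented penalty weight; the patent does not specify the threshold or how much the probability is reduced:

```python
# Sketch of candidate handling: faulty downstream nodes are removed from
# the upper-node candidate list, and overloaded nodes remain eligible but
# with a reduced selection weight. The 0.8 threshold and 0.25 penalty
# weight are illustrative assumptions.
import random

def pick_upper_node(candidates, load_threshold=0.8, penalty=0.25, rng=random):
    healthy = [c for c in candidates if not c["faulty"]]  # drop faulty nodes
    if not healthy:
        return None
    # Overloaded nodes keep a lower weight, so they are chosen less often.
    weights = [penalty if c["load"] > load_threshold else 1.0 for c in healthy]
    return rng.choices(healthy, weights=weights, k=1)[0]
```

A faulty node can thus never be returned as the upper node, while an overloaded node is merely deprioritized rather than excluded outright.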
In an alternative embodiment, the source station information of the target live stream is stored by the entity executing the method.
In an alternative embodiment, the source station information of the target live stream is obtained by the entity executing the method via address redirection.
In an alternative embodiment, the uppermost node in the back-to-source path and the corresponding source station belong to the same operator.
In an alternative embodiment, the uppermost node in the back-to-source path and the corresponding lower node belong to different operators.
Fig. 5 illustrates a schematic block diagram of an example electronic device 500 that may be used to implement embodiments of the present disclosure. The electronic device 500 may be a server or a client of the present disclosure and is an example of a hardware device to which aspects of the present disclosure may be applied. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers, as well as various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only and are not meant to limit implementations of the disclosure described and/or claimed herein.

As shown in fig. 5, the electronic device 500 includes a computing unit 501 that can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 502 or loaded from a storage unit 508 into a random access memory (RAM) 503. The RAM 503 may also store various programs and data required for the operation of the device 500. The computing unit 501, the ROM 502, and the RAM 503 are connected to each other by a bus 504, and an input/output (I/O) interface 505 is also connected to the bus 504. Various components in the device 500 are connected to the I/O interface 505, including: an input unit 506 such as a keyboard or a mouse; an output unit 507 such as various types of displays and speakers; a storage unit 508 such as a magnetic disk or an optical disk; and a communication unit 509 such as a network card, a modem, or a wireless communication transceiver.
The communication unit 509 allows the device 500 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 501 may be any of a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 501 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, or microcontroller. The computing unit 501 performs the various methods and processes described above, such as the live back-to-source scheduling method. For example, in some embodiments, the live back-to-source scheduling method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 508. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 500 via the ROM 502 and/or the communication unit 509. When the computer program is loaded into the RAM 503 and executed by the computing unit 501, one or more steps of the live back-to-source scheduling method described above may be performed. Alternatively, in other embodiments, the computing unit 501 may be configured to perform the live back-to-source scheduling method in any other suitable way (e.g., by means of firmware).
The various illustrative logics, logical blocks, modules, circuits, and algorithm processes described in connection with the aspects disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. The interchangeability of hardware and software has been described generally in terms of functionality, and is illustrated in the various illustrative components, blocks, modules, circuits, and processes described above. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system.
The hardware and data processing apparatus used to implement the various illustrative logics, logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a general purpose single or multi-chip processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor or any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. In some aspects, particular processes and methods may be performed by circuitry specific to a given function.
In one or more aspects, the functions described may be implemented in hardware, digital electronic circuitry, computer software, firmware (including the structures disclosed in this specification and their equivalents), or in any combination thereof. Aspects of the subject matter described in this specification can also be implemented as one or more computer programs, i.e., one or more modules of computer program instructions, encoded on a computer storage medium for execution by, or to control the operation of, data processing apparatus.
If implemented in software, the functions may be stored on or transmitted over a computer-readable medium as one or more instructions or code. The processes of the methods or algorithms disclosed herein may be implemented in software modules executable by a processor, which may reside on a computer-readable medium. Computer-readable media include both computer storage media and communication media, including any medium that can transfer a computer program from one place to another. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. Additionally, the operations of a method or algorithm may be embodied as one or any combination or set of codes and instructions on a machine-readable medium and computer-readable medium, which may be incorporated into a computer program product.
The various embodiments in this disclosure are described in a progressive manner; identical and similar parts of the embodiments may be referred to one another, and each embodiment mainly describes its differences from the other embodiments. In particular, for the apparatus, device, computer-readable storage medium, and computer program product embodiments, the description is relatively brief; for relevant details, refer to the corresponding parts of the method embodiments.

Claims (14)

1. A live back-to-source scheduling method implemented based on a content delivery network, the content delivery network comprising a plurality of downstream nodes, the method comprising:
receiving a back-to-source request for a target live stream sent by a lower node;
determining, according to heat information of the target live stream, state information of the plurality of downstream nodes, and source station information of the target live stream, an upper node for providing the target live stream to the lower node; and
sending an identifier of the upper node to the lower node.
2. The method of claim 1, wherein the downstream nodes of each level corresponding to the target live stream form a back-to-source path of the target live stream, wherein a length of the back-to-source path is a first length if the target live stream is a hot stream and a second length if the target live stream is a cold stream, the first length being greater than the second length.
3. The method of claim 2, wherein, in the case where the target live stream is a cold stream, the lowest-level node in the back-to-source path is a single-line relay node.
4. The method of claim 2, wherein, in the case where the target live stream is a hot stream, the uppermost node in the back-to-source path is a multi-line relay node.
5. The method of claim 1, wherein the status information of the plurality of downstream nodes includes fault information and load information for each downstream node.
6. The method of claim 5, wherein a faulty downstream node is deleted from the candidate list of upper nodes; and for a downstream node whose load index exceeds a preset threshold, the probability that the downstream node is selected as the upper node is reduced.
7. The method of claim 1, wherein the source station information of the target live stream is stored by the entity executing the method.
8. The method of claim 2, wherein the source station information of the target live stream is obtained by the entity executing the method via address redirection.
9. The method of claim 2, wherein the uppermost node in the back-to-source path belongs to the same operator as the corresponding source station.
10. The method of claim 9, wherein the uppermost node in the back-to-source path and the corresponding lower node belong to different operators.
11. A live back-to-source scheduling apparatus implemented based on a content delivery network, the content delivery network comprising a plurality of downstream nodes, the apparatus comprising:
a receiving module, configured to receive a back-to-source request for a target live stream sent by a lower node;
a determining module, configured to determine, according to heat information of the target live stream, state information of the plurality of downstream nodes, and source station information of the target live stream, an upper node for providing the target live stream to the lower node; and
a sending module, configured to send an identifier of the upper node to the lower node.
12. A computer program product comprising program code instructions which, when the program product is executed by a computer, cause the computer to carry out the method of at least one of claims 1-10.
13. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of at least one of claims 1-10.
14. An electronic device, comprising:
the processor may be configured to perform the steps of,
a memory in electronic communication with the processor; and
instructions stored in the memory and executable by the processor to cause the electronic device to perform the method according to at least one of claims 1-10.
CN202310839072.4A 2023-07-10 2023-07-10 Live broadcast return source scheduling method and device, storage medium and electronic equipment Pending CN117119208A (en)


Publications (1)

Publication Number Publication Date
CN117119208A true CN117119208A (en) 2023-11-24



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination