CN115643203A - Content distribution method, content distribution device, content distribution network, device, and medium - Google Patents


Info

Publication number
CN115643203A
Authority
CN
China
Prior art keywords
node
pull
shortest
cdn
flow
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211124996.8A
Other languages
Chinese (zh)
Inventor
刘叔正
莫小琪
曾福华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba China Co Ltd
Original Assignee
Alibaba China Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba China Co Ltd filed Critical Alibaba China Co Ltd
Priority to CN202211124996.8A priority Critical patent/CN115643203A/en
Publication of CN115643203A publication Critical patent/CN115643203A/en
Pending legal-status Critical Current

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

In the embodiments of this application, path planning is performed based on network state data of the content delivery network to plan a corresponding shortest pull path from each pull node to each push node. The CDN nodes thereby interconnect with one another, so the content delivery network takes the form of a mesh network rather than a tree and, to a certain extent, performs path planning through decentralized mesh routing. Furthermore, the pull path from a pull node to a push node is selected by combining network state data such as the communication states and communication delays between CDN nodes and the current remaining bandwidth of each CDN node. This greatly improves the probability of obtaining a pull path with a small communication delay, effectively controls the bandwidth of each CDN node on the pull path, and improves content distribution efficiency.

Description

Content distribution method, content distribution device, content distribution network, device, and medium
Technical Field
The present application relates to the field of network technologies, and in particular, to a content distribution method and apparatus, a content distribution network, a device, and a medium.
Background
A Content Delivery Network (CDN) relies on CDN nodes deployed across regions so that users obtain the content they need from nearby nodes, which reduces network congestion and improves the response speed and hit rate of user access. Generally, CDN nodes can be divided into scheduling nodes and edge nodes close to the user side, and the CDN caches distributed content in a tree structure. Referring to fig. 1 and taking a live-streaming scenario as an example, any edge node may act as a push node: it receives a push request from a broadcaster, caches the pushed live stream locally, and simultaneously pushes the live stream to its parent edge node, which in turn caches it locally and continues pushing upward until the stream reaches the scheduling node, which caches it locally. Likewise, any edge node may act as a pull node and receive a pull request from a viewer. If the requested live stream is cached locally on the pull node, the cached stream is returned to the viewer; otherwise, the pull node requests it from its parent edge node. If the parent does not have it either, the request continues up the tree until it reaches an edge node that has the stream cached, or until it reaches the scheduling node. Whichever edge node or scheduling node holds the requested live stream then returns it level by level down to the pull node, which finally returns it to the viewer.
However, the whole link for pulling the live stream is relatively long, which greatly increases the communication delay of the pull path and wastes considerable resources.
Disclosure of Invention
Aspects of the present application provide a content distribution method, apparatus, content distribution network, device, and medium to improve distribution efficiency of a content distribution network.
An embodiment of the present application provides a content distribution method applied to a content delivery network that includes a plurality of CDN nodes. The method includes the following steps: acquiring network state data of the content delivery network, where the network state data includes the communication state and communication delay between every two CDN nodes among some or all CDN nodes of the network, and the current remaining bandwidth of each of those CDN nodes; selecting at least one CDN node from the plurality of CDN nodes as at least one pull node, and selecting at least one corresponding push node from the plurality of CDN nodes for each pull node; performing path planning according to the network state data to obtain the shortest pull path from each pull node to its corresponding at least one push node; and distributing content according to the shortest pull path from each pull node to its corresponding at least one push node.
An embodiment of the present application further provides a content delivery network, including a probe node, a scheduling node, and a plurality of edge nodes. The probe node detects the communication state and communication delay between every two edge nodes among some or all of the edge nodes. The scheduling node acquires the communication states and communication delays detected by the probe node and determines the current remaining bandwidth of each of those edge nodes; selects at least one edge node from the plurality of edge nodes as at least one pull node, and selects at least one corresponding push node from the plurality of edge nodes for each pull node; performs path planning according to the network state data to obtain the shortest pull path from each pull node to its corresponding at least one push node; and distributes content according to the shortest pull path from each pull node to its corresponding at least one push node.
An embodiment of the present application further provides a content distribution apparatus, including: an acquisition module, configured to acquire network state data of a content delivery network, where the network state data includes the communication state and communication delay between every two CDN nodes among some or all CDN nodes of the network, and the current remaining bandwidth of each of those CDN nodes; a selection module, configured to select at least one CDN node from the plurality of CDN nodes included in the content delivery network as at least one pull node, and to select at least one corresponding push node from the plurality of CDN nodes for each pull node; a path planning module, configured to perform path planning according to the network state data to obtain the shortest pull path from each pull node to its corresponding at least one push node; and a content distribution module, configured to distribute content according to the shortest pull path from each pull node to its corresponding at least one push node.
An embodiment of the present application further provides an electronic device, including a memory and a processor. The memory is used to store a computer program; the processor is coupled to the memory and configured to execute the computer program to perform the steps of the content distribution method.
Embodiments of the present application also provide a computer-readable storage medium storing a computer program, which, when executed by a processor, causes the processor to implement the steps in the content distribution method.
In the embodiments of the present application, path planning is performed based on network state data of the content delivery network to plan a corresponding shortest pull path from each pull node to each push node. The CDN nodes thereby interconnect, the content delivery network takes the form of a mesh network rather than a tree, and, to a certain extent, path planning is performed through decentralized mesh routing. Furthermore, network state data such as the communication states and communication delays between CDN nodes and the current remaining bandwidth of each CDN node are combined to select the pull path from a pull node to a push node. This greatly improves the probability of obtaining a pull path with a small communication delay, effectively controls the bandwidth of each CDN node on the pull path, greatly reduces the occurrence of excessive network traffic on CDN nodes, and improves content distribution efficiency; the approach is particularly suitable for point-to-point complex routing scenarios with a massive number of nodes.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a diagram of an application scenario for content distribution in an exemplary content delivery network;
FIG. 2 is a diagram of another application scenario for content distribution by an exemplary content delivery network;
FIG. 3 is a flowchart of a content distribution method provided in an embodiment of the present application;
FIG. 4 is a flowchart of another content distribution method provided in an embodiment of the present application;
FIG. 5 is a flowchart of another content distribution method provided in an embodiment of the present application;
FIG. 6 is a diagram of another application scenario for content distribution by an exemplary content delivery network;
FIG. 7 is a schematic structural diagram of a content distribution apparatus according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
In the embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone, where A and B may be singular or plural. In the written description of this application, the character "/" generally indicates that the associated objects before and after it are in an "or" relationship. In the embodiments of the present application, "first," "second," "third," "fourth," "fifth," and "sixth" are used only to distinguish different objects and carry no special meaning.
At present, the whole link for pulling a live stream is long, which greatly increases the communication delay of the pull path and wastes considerable resources. In the embodiments of the present application, path planning is performed based on network state data of the content delivery network to plan a corresponding shortest pull path from each pull node to each push node, so that the CDN nodes interconnect and the content delivery network takes the form of a mesh network rather than a tree. Furthermore, the communication states and communication delays between CDN nodes and network state data such as the current remaining bandwidth of each CDN node are combined to select the pull path from a pull node to a push node. This greatly improves the probability of obtaining a pull path with a small communication delay, effectively controls the bandwidth of each CDN node on the pull path, greatly reduces the occurrence of excessive network traffic on CDN nodes, and improves content distribution efficiency; the approach is particularly suitable for point-to-point complex routing scenarios with a large number of nodes.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
FIG. 2 is a diagram of another application scenario for content distribution by an exemplary content delivery network. As shown in FIG. 2, the content delivery network includes a plurality of CDN nodes. Some of these nodes are located near the user side and may be regarded as edge nodes; others are located on the cloud side and may be regarded as cloud-side nodes. Of course, the locations of the CDN nodes may be set flexibly according to actual application requirements. The plurality of CDN nodes may have the same function or different functions, and the function of each CDN node may likewise be set flexibly. For example, divided by function, the CDN nodes may comprise scheduling nodes, probe nodes, pull nodes, and push nodes. A scheduling node is a node with management and control functions; it may undertake tasks such as network traffic analysis, load balancing, content distribution, and node scheduling, and may also undertake the routing of pull paths from pull nodes to push nodes, but is not limited thereto. A probe node is mainly responsible for probing the network state data of the whole content delivery network, for example, whether the communication state between any two CDN nodes is communicable or incommunicable, or the communication delay between any two CDN nodes. A pull node is a CDN node that pulls the required content data from a corresponding push node according to a pull request initiated by a user; generally, a pull node is an edge node. A push node is a node that caches content data provided by a user according to a push request initiated by that user. Typically, both pull nodes and push nodes are edge nodes near the user side.
It is noted that FIG. 2 illustrates the content data distributed by the content delivery network by taking a live stream as an example only; the content data that the content delivery network can distribute includes, but is not limited to: video data, audio data, text data, and image data.
In this embodiment, any CDN node may be, for example, a terminal device or a server. Terminal devices include, for example and without limitation: tablet computers, desktop computers, wearable smart devices, and smart home devices. Servers include, for example and without limitation: a single server or a distributed server cluster composed of multiple servers. It should be understood that FIG. 2 is only a schematic diagram of the content delivery network provided in this embodiment; the embodiments of the present application limit neither the number of CDN nodes in FIG. 2 nor the positional relationships between them.
In this embodiment, path planning is performed based on network state data of the content delivery network to plan a corresponding shortest pull path from each pull node to each push node. The CDN nodes thereby interconnect, the content delivery network takes the form of a mesh network rather than a tree, and, to a certain extent, path planning is performed through decentralized mesh routing. Furthermore, network state data such as the communication states and communication delays between CDN nodes and the current remaining bandwidth of each CDN node are combined to select the pull path from a pull node to a push node, which greatly improves the probability of obtaining a pull path with a small communication delay, effectively controls the bandwidth of each CDN node on the pull path, greatly reduces the occurrence of excessive network traffic on CDN nodes, and improves content distribution efficiency; the approach is particularly suitable for point-to-point complex routing scenarios with a massive number of nodes.
FIG. 3 is a flowchart of a content distribution method according to an embodiment of the present application. The method may be performed by a content distribution apparatus, which may typically be integrated into any CDN node of the content delivery network. Referring to FIG. 3, the method may include the following steps:
300. Acquire network state data of the content delivery network, where the network state data includes the communication state and communication delay between every two CDN nodes among some or all CDN nodes of the network, and the current remaining bandwidth of each of those CDN nodes.
301. Select at least one CDN node from the plurality of CDN nodes included in the content delivery network as at least one pull node, and select at least one corresponding push node from the plurality of CDN nodes for each pull node.
302. Perform path planning according to the network state data to obtain the shortest pull path from each pull node to its corresponding at least one push node.
303. Distribute content according to the shortest pull path from each pull node to its corresponding at least one push node.
In this embodiment, a probe node with a probing function may be requested to acquire the communication state and communication delay between every two CDN nodes among some or all CDN nodes of the content delivery network. It should be understood that when all CDN nodes are considered, the communication state and communication delay are acquired for every pair of CDN nodes in the network; when only some CDN nodes are considered, they are acquired for every pair of CDN nodes within that subset.
Generally, a probe node may trigger one of two CDN nodes to send a probe request to the other, and whether the communication state between the two CDN nodes is communicable or incommunicable can be determined by whether the other CDN node returns response information. The time taken from sending the probe request to receiving the response, that is, the Round-Trip Time (RTT) of the probe request, is used as the communication delay between the two CDN nodes. Of course, more details on probing the network state between nodes can be found in the related art.
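As a rough illustration of this probing step, a single probe might be sketched as follows. The transport (a TCP connect standing in for the probe request), the `probe_rtt` helper name, and its parameters are assumptions; the patent does not specify the probe mechanism:

```python
import socket
import time

def probe_rtt(host: str, port: int, timeout: float = 2.0):
    """Probe whether a peer CDN node is reachable and measure the probe's
    round-trip time. Returns (communicable, rtt_seconds); rtt_seconds is
    None when the peer is incommunicable within the timeout."""
    start = time.monotonic()
    try:
        # A successful TCP connect stands in for "the other node returned
        # response information"; the elapsed time approximates the RTT.
        with socket.create_connection((host, port), timeout=timeout):
            return True, time.monotonic() - start
    except OSError:
        return False, None
```

A probe node would run such a measurement between every pair of nodes it is asked to cover and report the resulting (state, delay) matrix to the scheduling node.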
In this embodiment, in addition to the communication state and communication delay between every two CDN nodes among some or all CDN nodes of the content delivery network, the current remaining bandwidth of each of those CDN nodes must also be obtained. The current remaining bandwidth of each CDN node may be determined from its total available bandwidth and its currently used bandwidth: specifically, the currently used bandwidth is subtracted from the total available bandwidth. In practical applications, the total available bandwidth of each CDN node may be maintained by any CDN node in the content delivery network, the currently used bandwidth of each CDN node may be monitored in real time, and the current remaining bandwidth calculated from the two. Alternatively, the currently used bandwidth of each CDN node may be monitored in real time by an external system independent of the content delivery network; for example, a log server may monitor it, in which case the content delivery network may request the log server to send the currently used bandwidth it is monitoring for each CDN node.
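The bandwidth bookkeeping described above is a per-node subtraction. A minimal sketch (the `remaining_bandwidth` helper, the node names, and the Mbps units are illustrative, not fixed by the patent):

```python
def remaining_bandwidth(total_mbps: dict, used_mbps: dict) -> dict:
    """Current remaining bandwidth of each CDN node: the total available
    bandwidth minus the currently used bandwidth (the latter monitored in
    real time, e.g. by a log server). Nodes absent from the usage report
    are treated as idle."""
    return {node: total_mbps[node] - used_mbps.get(node, 0.0)
            for node in total_mbps}
```

For example, `remaining_bandwidth({"edge-a": 1000, "edge-b": 800}, {"edge-a": 640})` leaves 360 Mbps on edge-a and the full 800 Mbps on edge-b.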
In this embodiment, at least one CDN node is selected from the plurality of CDN nodes included in the content delivery network as at least one pull node, and at least one corresponding push node is selected from the plurality of CDN nodes for each pull node. It should be noted that either all CDN nodes or only some of them may be selected as pull nodes. For each pull node, one or more CDN nodes other than that pull node are selected from the plurality of CDN nodes as its push nodes. That is, there may be one or more pull nodes, and each pull node may correspond to one or more push nodes. In this embodiment, after the network state data of the content delivery network is obtained, path planning may be performed based on that data to obtain the shortest pull path from each pull node to its corresponding at least one push node. In practical applications, any edge-side CDN node may serve as a pull node, any other edge-side CDN node may serve as a push node, and the shortest pull path from the pull node to its corresponding at least one push node is planned. It should be noted that the shortest pull path is the pull path with the smallest communication delay; the time taken by the pull node to pull the required content data from the corresponding push node over the shortest pull path is the shortest.
In practical applications, during path planning, the communication state between two CDN nodes ensures that any two adjacent CDN nodes on a planned pull path are in a communicable state, which guarantees the validity of the planned path. The communication delay between two CDN nodes allows the delay of a planned pull path to be controlled effectively and provides the basis for planning the shortest pull path. The current remaining bandwidth of each CDN node allows nodes with insufficient bandwidth to be kept off planned pull paths, which effectively limits the network traffic of each CDN node and reduces the load pressure it bears.
As an example, during path planning, a plurality of relay nodes located between a pull node and a push node may be selected for the pull path to be planned, under the condition that the communication state between every two adjacent CDN nodes is communicable. The path passing in turn through the pull node, the relay nodes, and the push node is taken as a pull path, and the communication delays between adjacent CDN nodes on the path are accumulated to obtain the communication delay of the pull path. In general, multiple pull paths may be planned for each pair of pull and push nodes. A number of pull paths with the smallest communication delay can then be selected from them, after which any of those paths containing a CDN node with insufficient current remaining bandwidth are eliminated, and at least one of the remaining paths may be used as at least one shortest pull path between the pull node and the push node. Alternatively, the paths containing CDN nodes with insufficient remaining bandwidth may be eliminated first, and at least one path with the smallest communication delay then selected from the remainder as the at least one shortest pull path; the order is not limited. Further optionally, a screening condition may be added that no two shortest pull paths may share a relay node, and the candidate paths meeting the bandwidth condition are screened again so that every two of the selected shortest pull paths have no relay node in common.
As another example, during path planning, for each pair of pull node and push node, a plurality of pull paths with different hop counts may first be planned, where the hop count reflects the number of relay nodes the path passes through: the hop count is one more than the number of relay nodes. For example, if the number of relay nodes is 0, the hop count is 1; if the number of relay nodes is 1, the hop count is 2; and so on. Then, according to the communication delays of the different pull paths, several candidate pull paths with the smallest communication delay are selected from the paths with different hop counts. Finally, a plurality of shortest pull paths from the pull node to the push node are selected from the candidates under the screening conditions that different shortest pull paths may not share the same relay node and that the current remaining bandwidth of each CDN node on a shortest pull path must be greater than a bandwidth threshold. The bandwidth threshold is set flexibly according to actual application requirements.
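The second example above can be sketched as follows, under assumed details: the link representation, the DFS enumeration, and the function and parameter names are illustrative, not the patent's prescribed algorithm. The sketch enumerates hop-bounded candidate paths, drops those containing a node with insufficient remaining bandwidth, sorts by accumulated delay, and greedily keeps paths with pairwise-disjoint relay sets:

```python
def plan_shortest_pull_paths(links, bandwidth, pull, push,
                             max_hops=3, bw_threshold=0.0, k=2):
    """links: {(u, v): delay} for each communicable directed node pair.
    bandwidth: {node: current remaining bandwidth}.
    Returns up to k (delay, path) tuples, smallest delay first, such that
    every node on a path clears bw_threshold and no two paths share a relay."""
    neighbors = {}
    for (u, v), d in links.items():
        neighbors.setdefault(u, []).append((v, d))

    candidates = []
    def dfs(node, path, delay):
        if node == push:
            candidates.append((delay, path))
            return
        if len(path) - 1 >= max_hops:      # hop count = edges used so far
            return
        for nxt, d in neighbors.get(node, []):
            if nxt not in path:            # keep paths simple (no cycles)
                dfs(nxt, path + [nxt], delay + d)
    dfs(pull, [pull], 0)

    # Eliminate paths containing a node with insufficient remaining
    # bandwidth, then greedily pick the smallest-delay paths whose relay
    # sets are pairwise disjoint.
    viable = sorted(c for c in candidates
                    if all(bandwidth.get(n, 0) > bw_threshold for n in c[1]))
    chosen, used_relays = [], set()
    for delay, path in viable:
        relays = set(path[1:-1])
        if relays & used_relays:
            continue
        chosen.append((delay, path))
        used_relays |= relays
        if len(chosen) == k:
            break
    return chosen
```

For instance, given links P→R1→S (delays 10 and 10), P→R2→S (15 and 15), and a direct P→S link of delay 50, the planner returns the delay-20 path via R1 first and the delay-30 path via R2 second; the delays here are in arbitrary units.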
After the shortest pull path from each pull node to its corresponding at least one push node is determined, content distribution may be performed along those paths. Specifically, after receiving a user-initiated pull request targeting any push node, the pull node passes the pull request from one CDN node to the next along the shortest pull path until it reaches the push node. The push node responds to the pull request by obtaining the locally cached content data and passes it back through the CDN nodes on the shortest pull path until it reaches the pull node. The pull node then delivers the content data to the user, completing the whole pull task.
In practical applications, the scheduling node may maintain and manage the path information of the shortest pull path from each pull node to its corresponding at least one push node. The path information includes, for example and without limitation: the identification information of each CDN node on the shortest pull path and the order of those nodes on the path. After receiving a user-initiated pull request targeting any push node, the pull node sends its own identification information and that of the push node to the scheduling node so that the scheduling node can determine the path information of the shortest pull path corresponding to the request. When a CDN node on the shortest pull path receives a pull request forwarded by the previous CDN node, it may send its own identification information and the request identifier of the pull request to the scheduling node, receive from the scheduling node the identification information of the next CDN node, and forward the pull request to that node, until the request reaches the push node and the forwarding task is complete. On the return link, the content data passes through each CDN node on the shortest pull path in turn until it reaches the pull node.
In practical applications, the path information of the shortest pull path from each pull node to each push node may instead be pushed to each pull node. When a CDN node on the shortest pull path forwards the pull request, the path information of the shortest pull path may be encapsulated in the request, so that each CDN node can accurately determine the next CDN node that should receive it, ensuring that the request reaches the push node. Similarly, when a CDN node on the shortest pull path forwards content data from the push node, it may pass along the path information of the shortest pull path at the same time, so that the content data is accurately returned to the pull node.
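The encapsulated-path forwarding can be pictured as below. The trace-based simulation, the node names, and the `forward_pull_request` helper are purely illustrative stand-ins for real network transmission:

```python
def forward_pull_request(path, content_store):
    """Simulate one pull task over an encapsulated shortest pull path:
    the pull request travels hop by hop from the pull node (path[0]) to
    the push node (path[-1]); the cached content then returns along the
    reverse of the same path. Each hop is recorded in a trace."""
    trace = [("request", node) for node in path]             # pull -> push
    content = content_store[path[-1]]                        # push node cache
    trace += [("content", node) for node in reversed(path)]  # push -> pull
    return content, trace
```

For a path ["edge-a", "relay-1", "edge-b"] with the live stream cached on edge-b, the request visits edge-a, relay-1, edge-b in turn, and the content then returns via edge-b, relay-1, edge-a.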
According to the technical solution provided by the embodiments of the present application, path planning is performed based on network state data of the content delivery network, so that the shortest pull path from each pull node to each of its push nodes is planned. The CDN nodes thereby interconnect, the content delivery network takes the form of a mesh network rather than a tree, and, to a certain extent, path planning is performed through decentralized mesh routing. Furthermore, the communication states and communication delays between CDN nodes and network state data such as the current remaining bandwidth of each CDN node are combined to select the pull path from a pull node to a push node, which greatly improves the probability of obtaining a pull path with a small communication delay, effectively controls the bandwidth of each CDN node on the pull path, greatly reduces the occurrence of excessive network traffic on CDN nodes, and improves content distribution efficiency; the approach is particularly suitable for point-to-point complex routing scenarios with a large number of nodes.
In some optional embodiments, when path planning is performed according to the network state data, for any target pull flow node among the at least one pull flow node and any target push flow node among its corresponding at least one push flow node, at least one relay CDN node whose communication state with the target push flow node is communicable may be selected from the CDN nodes of the content delivery network other than the target pull flow node and the target push flow node; a shortest pull flow path set from the target pull flow node to the target push flow node is determined according to multiple shortest pull flow paths with different hop counts from the target pull flow node to each relay CDN node and the one-hop shortest pull flow path from each relay CDN node to the target push flow node; and at least one shortest pull flow path whose current remaining bandwidth is greater than a bandwidth threshold is screened out of the set as the shortest pull flow path from the target pull flow node to the target push flow node.
In this embodiment, the target pull flow node is any one of the at least one pull flow node, and the target push flow node is any one of the push flow nodes corresponding to the target pull flow node. Each CDN node whose communication state with the target push flow node is communicable serves as a relay CDN node; such a relay CDN node is the last relay CDN node on a pull flow path from the target pull flow node to the target push flow node, that is, the relay CDN node adjacent to the target push flow node. A pull flow path from the target pull flow node to the target push flow node can therefore be divided into two segments: the first from the target pull flow node to the last relay CDN node, and the second from that relay CDN node to the target push flow node. For a given relay CDN node, once the shortest pull flow path from the target pull flow node to that relay CDN node is found, a shortest pull flow path from the target pull flow node to the target push flow node is determined. With each of multiple relay CDN nodes serving in turn as the last relay CDN node adjacent to the target push flow node, multiple shortest pull flow paths from the target pull flow node to the target push flow node can be obtained; the hop counts of these shortest pull flow paths may be the same or different, depending on the communication delays between the CDN nodes.
For example, assume that the content delivery network includes CDN nodes 1, 2, 3, 4, 5, and 6. When the shortest pull flow path from CDN node 1 to CDN node 6 is planned, CDN node 2, CDN node 3, CDN node 4, and CDN node 5 each serve in turn as the last relay CDN node on the pull flow path from CDN node 1 to CDN node 6. In order of increasing hop count, the following step is repeated: splice the minimum-delay shortest pull flow path from CDN node 1 to the last relay CDN node, whose hop count is one less than the current hop count, with the one-hop pull flow path from the last relay CDN node to CDN node 6, to obtain a candidate shortest pull flow path from CDN node 1 to CDN node 6. In this way, candidate shortest pull flow paths with different hop counts from CDN node 1 through each last relay CDN node to CDN node 6 can be obtained. Taking CDN node 2 as the last relay CDN node, the candidate shortest pull flow paths of different hop counts include: a two-hop candidate CDN node 1-CDN node 2-CDN node 6, a three-hop candidate CDN node 1-CDN node 3-CDN node 2-CDN node 6, a four-hop candidate CDN node 1-CDN node 3-CDN node 4-CDN node 2-CDN node 6, and so on.
With CDN node 2, CDN node 3, CDN node 4, and CDN node 5 each serving as the last relay CDN node on the pull flow path from CDN node 1 to CDN node 6, multiple candidate shortest pull flow paths of different hop counts from CDN node 1 to CDN node 6 are obtained. From these candidates, the ones with the smallest communication delays are screened out and added, as shortest pull flow paths, to the shortest pull flow path set associated with CDN node 1 and CDN node 6.
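The splicing of first segments with the final relay-to-push hop described above can be sketched as follows. The node IDs, link delays, and helper names are illustrative assumptions, not taken from the patent:

```python
# Hypothetical link-delay table (ms) between directly communicable CDN nodes;
# node IDs and delay values are illustrative only.
delay = {
    (1, 2): 10, (1, 3): 5, (1, 6): 40,
    (3, 2): 8, (3, 4): 6, (4, 2): 7,
    (2, 6): 4, (3, 6): 20, (4, 6): 9, (5, 6): 12,
}

def path_delay(path):
    """Total communication delay of a path, summed over its links."""
    return sum(delay[(a, b)] for a, b in zip(path, path[1:]))

def splice_candidates(prefix_paths, last_relay, push_node):
    """Splice the first segments (pull node -> last relay) with the one-hop
    second segment (last relay -> push node) to form candidate paths."""
    return [p + (push_node,) for p in prefix_paths
            if p[-1] == last_relay and (last_relay, push_node) in delay]

# First segments of different hop counts from CDN node 1 to last relay node 2.
prefixes = [(1, 2), (1, 3, 2), (1, 3, 4, 2)]
candidates = splice_candidates(prefixes, last_relay=2, push_node=6)
for c in sorted(candidates, key=path_delay):
    print(c, path_delay(c))   # lowest-delay candidate first
```

Each candidate inherits the delay of its first segment plus the delay of the final hop, which is why the candidate with the smallest total delay can be picked by a plain sort.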
In this embodiment, a shortest pull flow path whose current remaining bandwidth is greater than the bandwidth threshold means that the current remaining bandwidth of every CDN node on that path is greater than the bandwidth threshold, that is, every CDN node has sufficient bandwidth. The bandwidth threshold is set flexibly according to actual application requirements. Adding this bandwidth constraint when screening the shortest pull flow path set greatly reduces the chance that a CDN node on a shortest pull flow path comes under heavy load during content delivery, and thus ensures the reliability of content delivery.
In practical applications, different shortest pull flow paths sharing the same relay node may undermine the reliability of content distribution. For example, if the same relay node simultaneously receives pull flow requests destined for different push flow nodes, it must respond to multiple pull flow requests at once, which easily creates heavy load pressure and disrupts normal content distribution. To improve the reliability of content distribution, in addition to the bandwidth constraint, a further constraint may be added: different shortest pull flow paths must not share the same relay node. Screening, from the shortest pull flow path set, at least one shortest pull flow path from the target pull flow node to the target push flow node whose current remaining bandwidth is greater than the bandwidth threshold then includes: screening, from the set, at least one shortest pull flow path whose current remaining bandwidth is greater than the bandwidth threshold; and, using the condition that different shortest pull flow paths must not share the same relay node as a screening condition, screening at least one shortest pull flow path that satisfies this condition from those whose current remaining bandwidth is greater than the bandwidth threshold.
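A minimal sketch of the two-stage screening above (bandwidth threshold first, then relay-disjointness); the bandwidth figures, threshold, and greedy selection order are assumptions for illustration:

```python
def remaining_bw_ok(path, remaining_bw, threshold):
    """A path is eligible only if every CDN node on it has current remaining
    bandwidth above the threshold (all figures are illustrative)."""
    return all(remaining_bw[node] > threshold for node in path)

def select_relay_disjoint(paths):
    """Greedily keep paths so that no relay node (interior node) is shared
    between two selected shortest pull flow paths."""
    used_relays, selected = set(), []
    for path in paths:                      # assume paths pre-sorted by delay
        relays = set(path[1:-1])            # exclude pull and push endpoints
        if relays.isdisjoint(used_relays):
            selected.append(path)
            used_relays |= relays
    return selected

# Hypothetical remaining bandwidth (Mbps) per CDN node.
remaining_bw = {1: 80, 2: 5, 3: 60, 4: 50, 5: 70, 6: 90}
paths = [(1, 2, 6), (1, 3, 6), (1, 3, 4, 6), (1, 5, 4, 6)]
eligible = [p for p in paths if remaining_bw_ok(p, remaining_bw, threshold=10)]
print(select_relay_disjoint(eligible))
```

In this sketch the path through node 2 fails the bandwidth check, and of the remaining three paths only relay-disjoint ones survive the second screen.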
In some optional embodiments, when the shortest pull flow path from a pull flow node to the last relay node and the one-hop shortest pull flow path from the last relay node to a push flow node are planned, a hop-count limit may be added to ensure content distribution efficiency. Path planning then proceeds in order of increasing hop count: each time the hop count grows by one, the shortest pull flow paths of the next hop count can be accurately planned from the shortest pull flow paths of the previous hop count together with the one-hop shortest pull flow paths, so that the shortest pull flow paths between nodes are planned efficiently and accurately. Accordingly, this embodiment of the application further provides a content distribution method. Fig. 4 is a flowchart of another content distribution method provided in this embodiment of the application. The method may be performed by a content delivery apparatus, which may typically be integrated into any CDN node in the content delivery network. Referring to fig. 4, the method may include the following steps:
400. Obtain network state data of the content delivery network, where the network state data includes the communication states and communication delays between every two CDN nodes among some or all CDN nodes of the content delivery network, and the current remaining bandwidth of each of those CDN nodes.
401. Select at least one CDN node from the multiple CDN nodes of the content delivery network as at least one pull flow node, and select at least one corresponding push flow node from the multiple CDN nodes for each pull flow node.
402. For a target pull flow node and its corresponding target push flow node, select, from the CDN nodes other than the target pull flow node and the target push flow node, at least one relay CDN node whose communication state with the target push flow node is communicable.
403. Add the one-hop shortest pull flow path from the target pull flow node to the target push flow node to the shortest pull flow path set, and set the current hop count to 2.
404. According to the shortest pull flow paths from the target pull flow node to each relay CDN node whose hop count is one less than the current hop count, and the one-hop shortest pull flow path from each relay CDN node to the target push flow node, determine multiple candidate shortest pull flow paths with the current hop count from the target pull flow node to the target push flow node, and determine the communication delay of each candidate.
405. Select the candidates with the smallest communication delays as the shortest pull flow paths of the current hop count and add them to the shortest pull flow path set.
406. Judge whether the current hop count has reached the maximum allowed hop count; if not, execute step 407; if so, execute step 408.
407. Increase the current hop count by 1 and return to step 404, until the current hop count reaches the maximum allowed hop count.
408. Screen out, from the set, at least one shortest pull flow path from the target pull flow node to the target push flow node whose current remaining bandwidth is greater than the bandwidth threshold.
409. Distribute content according to the shortest pull flow path from each pull flow node to its corresponding at least one push flow node.
For specific implementation manners of steps 400, 401, 408, and 409 in this embodiment, reference may be made to detailed descriptions of the foregoing embodiments, and details are not described herein again.
In this embodiment, the hop count of a shortest pull flow path cannot exceed the maximum allowed hop count; that is, the minimum hop count of a shortest pull flow path is 1, and the maximum is the maximum allowed hop count. In order of increasing hop count, subject to the maximum allowed hop count, and aided by the planning result of the previous hop count, path planning for the current hop count is performed in turn for each pull flow node toward each push flow node. Specifically, the current hop count grows from 1 until it reaches the maximum allowed hop count; multiple candidate shortest pull flow paths with the current hop count from the target pull flow node to the target push flow node are determined from the shortest pull flow paths from the target pull flow node to each relay CDN node whose hop count is one less than the current hop count, together with the one-hop shortest pull flow path from each relay CDN node to the target push flow node. In this way, the communication delay of each candidate can be determined from the communication delay of the corresponding shortest pull flow path whose hop count is one less than the current hop count and the communication delay of the one-hop pull flow path from each relay CDN node to the target push flow node.
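The hop-by-hop loop of steps 403 to 407 can be sketched as follows. The link table, the `keep` cap on paths retained per hop count, and the breadth-first expansion are illustrative assumptions, not the patent's exact algorithm:

```python
def plan_shortest_paths(links, pull, push, max_hops, keep=3):
    """Hop-by-hop sketch of steps 403-407: paths of the current hop count are
    built from paths one hop shorter plus one more link, and the lowest-delay
    candidates ending at the push node are kept. `links` maps (a, b) -> delay."""
    def pdelay(p):
        return sum(links[(a, b)] for a, b in zip(p, p[1:]))

    frontier, result = [(pull,)], []
    for _hop in range(1, max_hops + 1):
        # Extend every path of the previous hop count by one link, no cycles.
        nxt = [p + (n,) for p in frontier
               for (a, n) in links if a == p[-1] and n not in p]
        # Candidates of the current hop count that end at the push node.
        cands = sorted((p for p in nxt if p[-1] == push), key=pdelay)
        result.extend(cands[:keep])                   # lowest delays this hop count
        frontier = [p for p in nxt if p[-1] != push]  # only relays keep extending
    return result

links = {(1, 2): 10, (1, 3): 5, (2, 6): 4, (3, 2): 8, (3, 6): 20}
print(plan_shortest_paths(links, pull=1, push=6, max_hops=3))
```

Because paths of hop count h are built only from paths of hop count h-1 plus one known link delay, each candidate's delay follows directly from the previous planning result, mirroring the text above.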
Specifically, the communication delay and communication state between any two CDN nodes are probed by a probe node having a probing function. Some or all CDN nodes of the content delivery network may be selected as the at least one pull flow node, and for each pull flow node at least one CDN node other than that pull flow node is used as its at least one push flow node; that is, each pull flow node may correspond to one or more push flow nodes. For each pair of a pull flow node and a push flow node, each of the other CDN nodes, excluding that pull flow node and push flow node, serves as a relay CDN node adjacent to the push flow node, which is also the last relay CDN node.
Taking a maximum allowed hop count of 3 as an example, for each pair of a target pull flow node and a target push flow node, planning starts from the minimum hop count of 1: the one-hop shortest pull flow path from the target pull flow node to the target push flow node is added to the shortest pull flow path set corresponding to that pair. For example, for the pair of CDN node 1 and CDN node 6, the one-hop shortest pull flow path CDN node 1-CDN node 6 is added to the pair's shortest pull flow path set. For the pair of CDN node 1 and CDN node 2, the one-hop shortest pull flow path CDN node 1-CDN node 2 is added to that pair's set. By analogy, the one-hop shortest pull flow path of each pair of a target pull flow node and a target push flow node is added to that pair's own shortest pull flow path set.
Then the current hop count is updated from 1 to 2; that is, the shortest pull flow paths to be planned pass through one relay node. A two-hop shortest pull flow path to be planned is split into the one-hop shortest pull flow path from the target pull flow node to the relay node and the one-hop shortest pull flow path from the relay node to the target push flow node. Since the communication delay between any two CDN nodes is known, the communication delay of each two-hop shortest pull flow path is easy to determine. Then, for each pair of a target pull flow node and a target push flow node, the two-hop shortest pull flow paths with the smallest communication delays are selected from the two-hop candidates and added to the pair's shortest pull flow path set. In this way, each pair's shortest pull flow path set contains one-hop shortest pull flow paths as well as two-hop shortest pull flow paths. For example, for the pair of CDN node 1 and CDN node 6, the two-hop shortest pull flow paths are CDN node 1-CDN node 2-CDN node 6, CDN node 1-CDN node 3-CDN node 6, and CDN node 1-CDN node 4-CDN node 6, and these are added to the pair's shortest pull flow path set. By analogy, the two-hop shortest pull flow paths of each pair of a target pull flow node and a target push flow node are added to that pair's own set.
Then the current hop count is updated from 2 to 3; that is, the shortest pull flow paths to be planned pass through two relay nodes. A three-hop shortest pull flow path to be planned is split into the two-hop shortest pull flow path from the target pull flow node to the last relay node and the one-hop shortest pull flow path from the last relay node to the target push flow node. The two-hop shortest pull flow paths are taken from each shortest pull flow path set, and the communication delay of each three-hop shortest pull flow path is easily determined from the communication delay of the corresponding two-hop shortest pull flow path and the known communication delay between any two CDN nodes. Then, for each pair of a target pull flow node and a target push flow node, the three-hop shortest pull flow paths with the smallest communication delays are selected from the three-hop candidates and added to the pair's shortest pull flow path set. In this way, each pair's shortest pull flow path set contains shortest pull flow paths with hop counts of 1, 2, and 3. For example, for the pair of CDN node 1 and CDN node 6, the three-hop shortest pull flow paths are CDN node 1-CDN node 2-CDN node 3-CDN node 6, CDN node 1-CDN node 3-CDN node 4-CDN node 6, and CDN node 1-CDN node 5-CDN node 4-CDN node 6, and these are added to the pair's shortest pull flow path set. By analogy, the three-hop shortest pull flow paths of each pair of a target pull flow node and a target push flow node are added to that pair's own set.
According to the technical solution provided in this embodiment of the application, path planning is performed based on the network state data of the content delivery network, so that a shortest pull flow path from each pull flow node to each of its push flow nodes is planned; the CDN nodes thus communicate with one another, the content delivery network takes the form of a mesh network structure rather than a tree network structure, and, to a certain extent, the content delivery network performs path planning in a decentralized mesh-routing manner. Furthermore, the pull flow path from a pull flow node to a push flow node is selected by combining network state data such as the communication states and communication delays between CDN nodes and the current remaining bandwidth of each CDN node, which greatly improves the probability of obtaining a pull flow path with a small communication delay, effectively controls the bandwidth of each CDN node on the pull flow path, greatly reduces the occurrence of excessive network traffic at a CDN node, and improves content distribution efficiency; the solution is particularly suitable for point-to-point routing scenarios with massive numbers of nodes. In addition, a hop-count limit is imposed during path planning to ensure content distribution efficiency: path planning proceeds in order of increasing hop count, and each time the hop count grows by one, the shortest pull flow paths of the next hop count are accurately planned from the shortest pull flow paths of the previous hop count and the one-hop shortest pull flow paths, so that the shortest pull flow paths between nodes are planned efficiently and accurately.
In practical applications, after the candidate shortest pull flow paths with the current hop count and the smallest communication delays are selected as the shortest pull flow paths of the current hop count, these paths can be added to the shortest pull flow path set selectively, based on the number of shortest pull flow paths already in the set and their communication delays, which further improves the probability of obtaining pull flow paths with small communication delays and improves content distribution efficiency. Accordingly, this embodiment of the application further provides a content distribution method for the content delivery network.
Fig. 5 is a flowchart of another content distribution method according to an embodiment of the present application. The method may be performed by a content delivery apparatus, which may typically be integrated into any CDN node in the CDN. Referring to fig. 5, the method may include the steps of:
500. Obtain network state data of the content delivery network, where the network state data includes the communication states and communication delays between every two CDN nodes among some or all CDN nodes of the content delivery network, and the current remaining bandwidth of each of those CDN nodes.
501. Select at least one CDN node from the multiple CDN nodes of the content delivery network as at least one pull flow node, and select at least one corresponding push flow node from the multiple CDN nodes for each pull flow node.
502. For a target pull flow node and its corresponding target push flow node, select, from the CDN nodes other than the target pull flow node and the target push flow node, at least one relay CDN node whose communication state with the target push flow node is communicable.
503. Add the one-hop shortest pull flow path from the target pull flow node to the target push flow node to the shortest pull flow path set, and set the current hop count to 2.
504. According to the shortest pull flow paths from the target pull flow node to each relay CDN node whose hop count is one less than the current hop count, and the one-hop shortest pull flow path from each relay CDN node to the target push flow node, determine multiple candidate shortest pull flow paths with the current hop count from the target pull flow node to the target push flow node, and determine the communication delay of each candidate.
505. Judge whether the number of shortest pull flow paths already added to the shortest pull flow path set has reached a set number threshold; if not, execute step 506; if so, execute step 507.
506. Select, from the candidate shortest pull flow paths with the current hop count that have not yet been selected, the one with the smallest communication delay as the shortest pull flow path to be added; add it to the shortest pull flow path set, and return to step 505.
507. Select, from the candidate shortest pull flow paths with the current hop count that have not yet been selected, the one with the smallest communication delay as the shortest pull flow path to be added.
508. Judge whether the shortest pull flow path set contains an already-added shortest pull flow path whose communication delay is greater than that of the shortest pull flow path to be added; if so, execute step 509; if not, execute step 510.
509. Delete that already-added shortest pull flow path, add the shortest pull flow path to be added to the set, and return to step 507, until the set contains no already-added shortest pull flow path whose communication delay is greater than that of the shortest pull flow path to be added.
510. Judge whether the current hop count has reached the maximum allowed hop count; if not, execute step 511; if so, execute step 512.
511. Increase the current hop count by 1 and return to step 504, until the current hop count reaches the maximum allowed hop count.
512. Screen out, from the set, at least one shortest pull flow path from the target pull flow node to the target push flow node whose current remaining bandwidth is greater than the bandwidth threshold.
513. Distribute content according to the shortest pull flow path from each pull flow node to its corresponding at least one push flow node.
The implementation manners of steps 500 to 504 and steps 510 to 513 in this embodiment may refer to the related contents of the foregoing embodiments, and are not described herein again.
In this embodiment, after the candidate shortest pull flow paths with the current hop count from the target pull flow node to the target push flow node and their communication delays are determined, the addition of these candidates to the shortest pull flow path set is controlled according to the number of shortest pull flow paths already in the set and their communication delays. Specifically, the number of shortest pull flow paths in the set is capped at a set number threshold, which is chosen according to actual application requirements. The one-hop shortest pull flow path from the target pull flow node to the target push flow node is added directly to the corresponding set. As the hop count keeps increasing, the number of shortest pull flow paths in the set grows until it equals the set number threshold. For each current hop count, before the number of shortest pull flow paths in the set reaches the threshold, the candidate with the smallest communication delay among the candidates with the current hop count is, in turn, added to the set as the shortest pull flow path to be added.
After the number of shortest pull flow paths in the set reaches the threshold, an already-added shortest pull flow path with a larger communication delay is replaced by the shortest pull flow path to be added, which has a smaller delay, until the set no longer contains an already-added shortest pull flow path whose communication delay is greater than that of the path to be added. At the current hop count, once no such replacement is possible, the current hop count is updated, until it reaches the maximum allowed hop count. In this way, the communication delays of the shortest pull flow paths in the set are effectively kept at a low level.
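The capped-set maintenance of steps 505 to 509 can be sketched as follows; the path labels, delay values, and cap are hypothetical stand-ins for real pull flow paths:

```python
def add_with_cap(path_set, candidates, cap, pdelay):
    """Sketch of steps 505-509: fill the set with the lowest-delay candidates
    until it holds `cap` paths; once full, a new candidate replaces an
    already-added path only if the candidate's delay is strictly smaller."""
    for cand in sorted(candidates, key=pdelay):   # smallest delay first
        if len(path_set) < cap:
            path_set.append(cand)                 # step 506: add directly
        else:
            worst = max(path_set, key=pdelay)     # largest delay already in set
            if pdelay(worst) > pdelay(cand):      # steps 508-509: replace it
                path_set.remove(worst)
                path_set.append(cand)
    return sorted(path_set, key=pdelay)

# Paths are abbreviated to labels; delays (ms) are hypothetical.
delays = {"A": 14, "B": 17, "C": 9, "D": 25, "E": 11}
kept = add_with_cap(["A", "B"], ["C", "D", "E"], cap=3, pdelay=delays.get)
print(kept)   # the three lowest-delay paths survive
```

In this sketch candidate C fills the set, candidate E evicts the slower already-added path B, and candidate D is rejected because it is slower than everything kept, keeping the set's delays at a low level.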
According to the technical solution provided in this embodiment of the application, path planning is performed based on the network state data of the content delivery network, so that a shortest pull flow path from each pull flow node to each of its push flow nodes is planned; the CDN nodes thus communicate with one another, the content delivery network takes the form of a mesh network structure rather than a tree network structure, and, to a certain extent, the content delivery network performs path planning in a decentralized mesh-routing manner. Furthermore, the pull flow path from a pull flow node to a push flow node is selected by combining network state data such as the communication states and communication delays between CDN nodes and the current remaining bandwidth of each CDN node, which greatly improves the probability of obtaining a pull flow path with a small communication delay, effectively controls the bandwidth of each CDN node on the pull flow path, greatly reduces the occurrence of excessive network traffic at a CDN node, and improves content distribution efficiency; the solution is particularly suitable for point-to-point routing scenarios with massive numbers of nodes. In addition, a hop-count limit is imposed during path planning to ensure content distribution efficiency: path planning proceeds in order of increasing hop count, and each time the hop count grows by one, the shortest pull flow paths of the next hop count are accurately planned from the shortest pull flow paths of the previous hop count and the one-hop shortest pull flow paths, so that the shortest pull flow paths between nodes are planned efficiently and accurately.
Furthermore, based on the number of shortest pull flow paths already in the shortest pull flow path set and their communication delays, the shortest pull flow paths with the current hop count are added to the set selectively, which further improves the probability of obtaining pull flow paths with small communication delays.
In some application scenarios, the CDN nodes of the content delivery network may be divided, according to their functions, into a probe node, a scheduling node, and multiple edge nodes, where any edge node may serve as a pull flow node or a push flow node. Referring to (1) in fig. 6, the probe node probes the communication state and communication delay between every two edge nodes among some or all edge nodes, and sends the probing results to the scheduling node. If the scheduling node itself maintains and manages the currently used bandwidth of each edge node, it may determine the current remaining bandwidth of each edge node from the edge node's total available bandwidth and currently used bandwidth. If the scheduling node does not maintain the currently used bandwidth itself, then, as shown in (2) in fig. 6, it may obtain the currently used bandwidth of each edge node from a log server, and, as shown in (3) in fig. 6, determine the current remaining bandwidth of each edge node from its total available bandwidth and currently used bandwidth. The scheduling node thus obtains the communication state and communication delay between every two edge nodes among some or all edge nodes, and determines the current remaining bandwidth of each of those edge nodes. Referring to (4) in fig. 6, the scheduling node selects pull flow nodes and push flow nodes; specifically, it selects at least one edge node from the multiple edge nodes as at least one pull flow node, and selects at least one corresponding push flow node from the multiple edge nodes for each pull flow node. Referring to (5) in fig. 6, the scheduling node performs path planning according to the network state data to obtain the shortest pull flow path from each pull flow node to its corresponding at least one push flow node. Referring to (6) in fig. 6, the scheduling node performs content distribution according to the shortest pull flow path from each pull flow node to its corresponding at least one push flow node.
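The remaining-bandwidth bookkeeping described above (total available bandwidth minus currently used bandwidth, with the used figures possibly reported by a log server) can be sketched as follows; all node names and figures are invented for illustration:

```python
def remaining_bandwidth(total_bw, used_bw):
    """Current remaining bandwidth per edge node: total available bandwidth
    minus currently used bandwidth (illustrative figures, in Mbps)."""
    return {node: total_bw[node] - used_bw.get(node, 0) for node in total_bw}

total = {"edge-1": 100, "edge-2": 100, "edge-3": 80}
used = {"edge-1": 35, "edge-2": 90}   # e.g. as reported by the log server
print(remaining_bandwidth(total, used))
```

The scheduling node can then compare these per-node figures against the bandwidth threshold when screening shortest pull flow paths.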
In practical applications, the scheduling node may maintain and manage path information of the shortest pull flow path from each pull flow node to its corresponding at least one push flow node, where the path information includes, but is not limited to: identification information of each edge node on the shortest pull flow path, and the order of the edge nodes on that path. After receiving a user-initiated pull flow request for any push flow node, the pull flow node sends its own identification information and the identification information of the push flow node to the scheduling node, so that the scheduling node can determine the path information of the shortest pull flow path corresponding to the pull flow request. When an edge node on the shortest pull flow path receives the pull flow request forwarded by the previous edge node, the edge node may send its own identification information and the request identifier of the pull flow request to the scheduling node, receive the identification information of the next edge node returned by the scheduling node, and forward the pull flow request to that next edge node, and so on, until the pull flow request reaches the push flow node, completing the pull flow request forwarding task. On the return link, the content data passes through each edge node on the shortest pull flow path in turn until it reaches the pull flow node.
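The hop-by-hop forwarding described above can be sketched as follows: the scheduling node keeps, per request, the ordered node sequence of the shortest pull flow path, and each edge node queries it for the next hop. Class and method names here are illustrative assumptions, not from the patent.

```python
# Minimal sketch of scheduling-node-assisted forwarding along a
# shortest pull flow path.

class SchedulingNode:
    def __init__(self):
        # request id -> ordered list of edge-node ids,
        # pull flow node first, push flow node last
        self.paths = {}

    def register_path(self, request_id, node_sequence):
        self.paths[request_id] = node_sequence

    def next_hop(self, request_id, current_node):
        """Return the next edge node on the shortest pull flow path for this
        request, or None once the push flow node has been reached."""
        seq = self.paths[request_id]
        i = seq.index(current_node)
        return seq[i + 1] if i + 1 < len(seq) else None
```

An edge node would call `next_hop` with its own identifier and the request identifier, then forward the pull flow request to the returned node.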
In practical applications, the path information of the shortest pull flow path from each pull flow node to its corresponding at least one push flow node may alternatively be pushed to each pull flow node. When an edge node on the shortest pull flow path forwards a pull flow request, the path information of the shortest pull flow path may be encapsulated into the pull flow request, so that each edge node can accurately determine the next edge node to receive the pull flow request, ensuring that the pull flow request is accurately delivered to the push flow node. Similarly, when an edge node on the shortest pull flow path forwards content data from the push flow node, the path information of the shortest pull flow path may be passed along at the same time, so that the content data can be accurately returned to the pull flow node.
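In this source-routed variant, the full path travels inside the request itself, so each edge node determines the next hop locally without consulting the scheduling node. The sketch below uses assumed field names (`path`, `stream_id`); they are illustrative only.

```python
# Sketch of the variant where the shortest pull flow path is encapsulated
# in the pull flow request itself.

def forward(request, current_node):
    """Given a pull flow request carrying its path, return the next edge
    node to forward to (None once the push flow node has been reached)."""
    path = request["path"]
    i = path.index(current_node)
    return path[i + 1] if i + 1 < len(path) else None
```

The same path field can be echoed back with the content data so that the return link retraces the shortest pull flow path.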
For example, each edge node may serve as a push flow node to cache a live stream pushed by an anchor. Each edge node may also serve as a pull flow node: in response to a viewer's pull flow request for a live stream cached on any push flow node, the edge nodes on the shortest pull flow path are accessed in sequence until the push flow node is reached, the live stream cached on the push flow node is pulled, the pulled live stream is returned to the pull flow node along the shortest pull flow path, and the pull flow node provides it to the viewer, so that the viewer can watch the live stream pushed by the anchor. Because any edge node can serve as a pull flow node or a push flow node, any pull flow node can access any push flow node, so the whole content distribution network presents a mesh structure; content data cached on an edge node can be pushed directly to other edge nodes without passing through the scheduling node, which greatly reduces viewing delay in live-broadcast scenarios, meets the low-delay and low-cost requirements of both live and on-demand scenarios, and better adapts to increasingly diversified and complicated service scenarios.
For the detailed procedure of the steps of the content distribution method performed in the content distribution network shown in fig. 6, reference may be made to the foregoing embodiments.
Fig. 7 is a schematic structural diagram of a content distribution apparatus according to an embodiment of the present application. Referring to fig. 7, the apparatus may include:
an obtaining module 70, configured to obtain network state data of the CDN, where the network state data includes a communication state and a communication delay between every two CDN nodes in part or all CDN nodes of the content delivery network, and a current remaining bandwidth of each CDN node in part or all CDN nodes;
a selecting module 71, configured to select at least one CDN node from multiple CDN nodes included in a content delivery network as at least one pull node, and select, for each pull node, a corresponding at least one push node from multiple CDN nodes included in the content delivery network;
a path planning module 72, configured to perform path planning according to the network state data to obtain a shortest pull flow path from each pull flow node to at least one corresponding push flow node;
and the content distribution module 73 is configured to distribute content according to a shortest pull flow path from each pull flow node to at least one corresponding push flow node.
Further optionally, the path planning module 72 is specifically configured to: for any target pull flow node in the at least one pull flow node and any target push flow node in the at least one push flow node corresponding to that target pull flow node, select, from the CDN nodes included in the CDN other than the target pull flow node and the target push flow node, at least one relay CDN node whose communication state indicates that it can communicate with the target push flow node; determine a shortest pull flow path set from the target pull flow node to the target push flow node according to a plurality of shortest pull flow paths with different hop counts from the target pull flow node to each relay CDN node and the shortest pull flow path with a hop count of 1 from each relay CDN node to the target push flow node; and screen out, from the shortest pull flow path set, at least one shortest pull flow path from the target pull flow node to the target push flow node whose current remaining bandwidth is greater than a bandwidth threshold.
Further optionally, when determining the shortest pull flow path set from the target pull flow node to the target push flow node, the path planning module 72 is specifically configured to: add the shortest pull flow path with a hop count of 1 from the target pull flow node to the target push flow node into the shortest pull flow path set, and set the current hop count to 2; determine a plurality of candidate shortest pull flow paths with the current hop count from the target pull flow node to the target push flow node according to the shortest pull flow paths with a hop count one less than the current hop count from the target pull flow node to each relay CDN node and the shortest pull flow path with a hop count of 1 from each relay CDN node to the target push flow node, and determine the respective communication delays of the plurality of candidate shortest pull flow paths with the current hop count; select the candidate shortest pull flow paths with the smallest communication delays at the current hop count as a plurality of shortest pull flow paths with the current hop count and add them to the shortest pull flow path set; and judge whether the current hop count reaches the maximum allowable hop count; if not, increase the current hop count by 1 and return to the step of determining a plurality of candidate shortest pull flow paths with the current hop count from the target pull flow node to the target push flow node, until the current hop count reaches the maximum allowable hop count.
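The hop-bounded construction described above resembles a Bellman-Ford-style enumeration: candidate paths with h hops are built by extending the shortest (h-1)-hop paths from the pull flow node to each relay by the relay's 1-hop link to the push flow node, and the lowest-delay candidates at each hop count enter the set. The sketch below is a simplified illustration under assumed data structures (`delay` as a dict of 1-hop link delays, `paths_to_relay[r][h]` as the shortest h-hop paths to relay r, and the `per_hop_keep` parameter); none of these names come from the patent.

```python
# Sketch of building the shortest pull flow path set, hop count by hop count.

def build_shortest_path_set(pull, push, relays, delay, paths_to_relay,
                            max_hops, per_hop_keep=3):
    path_set = []
    # Hop count 1: the direct pull -> push link, if the two nodes can
    # communicate (present in the probed delay table).
    if (pull, push) in delay:
        path_set.append(([pull, push], delay[(pull, push)]))
    for hops in range(2, max_hops + 1):
        candidates = []
        for r in relays:
            # Extend each shortest (hops-1)-hop path pull -> r by the
            # 1-hop link r -> push.
            for nodes, d in paths_to_relay.get(r, {}).get(hops - 1, []):
                if (r, push) in delay and push not in nodes:
                    candidates.append((nodes + [push], d + delay[(r, push)]))
        # Keep only the candidates with the smallest total communication
        # delay at this hop count.
        candidates.sort(key=lambda p: p[1])
        path_set.extend(candidates[:per_hop_keep])
    return path_set
```

Note how the delay of an h-hop candidate is simply the delay of its (h-1)-hop prefix plus the delay of the final relay-to-push link, matching the delay composition stated in the next paragraph.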
Further optionally, when determining the respective communication delays of the plurality of candidate shortest pull flow paths with the current hop count, the path planning module 72 is specifically configured to: determine the communication delays of the plurality of candidate shortest pull flow paths with the current hop count from the target pull flow node to the target push flow node according to the communication delay of the shortest pull flow path with a hop count one less than the current hop count from the target pull flow node to each relay CDN node and the communication delay of the pull flow path with a hop count of 1 from each relay CDN node to the target push flow node.
Further optionally, when selecting the candidate shortest pull flow paths with the smallest communication delays at the current hop count as the plurality of shortest pull flow paths with the current hop count to be added to the shortest pull flow path set, the path planning module 72 is specifically configured to: judge whether the number of added shortest pull flow paths in the shortest pull flow path set reaches a set number threshold; if yes, select, from the unselected candidate shortest pull flow paths with the current hop count, the candidate shortest pull flow path with the smallest communication delay as the shortest pull flow path to be added; if the shortest pull flow path set contains an added shortest pull flow path whose communication delay is greater than that of the shortest pull flow path to be added, delete that added shortest pull flow path; and add the shortest pull flow path to be added into the shortest pull flow path set, and return to the step of selecting the candidate shortest pull flow path with the smallest communication delay as the shortest pull flow path to be added, until the shortest pull flow path set contains no added shortest pull flow path whose communication delay is greater than that of the shortest pull flow path to be added.
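The capacity-bounded insertion rule described above can be approximated as follows: once the set holds `limit` paths, a new candidate only enters by evicting an already-added path with a larger communication delay. This is a hedged approximation of the described loop, with assumed data shapes (lists of `(path, delay)` tuples), not a definitive rendering of the claimed procedure.

```python
# Sketch: insert candidate paths into a shortest pull flow path set of
# bounded size, evicting higher-delay paths when the set is full.

def add_candidates(path_set, candidates, limit):
    """Candidates are consumed in ascending order of communication delay."""
    for cand in sorted(candidates, key=lambda p: p[1]):
        if len(path_set) < limit:
            # Set not yet full: add the lowest-delay candidate directly.
            path_set.append(cand)
            continue
        # Set full: evict the worst existing path if the candidate beats it.
        worst = max(path_set, key=lambda p: p[1])
        if worst[1] > cand[1]:
            path_set.remove(worst)
            path_set.append(cand)
        else:
            break  # remaining candidates have even larger delays
    return path_set
```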
Further optionally, the path planning module 72 is further configured to: if the judgment result is no, select, from the candidate shortest pull flow paths with the current hop count, the candidate shortest pull flow path with the smallest communication delay as the shortest pull flow path to be added; and add the shortest pull flow path to be added into the shortest pull flow path set, and return to the step of judging whether the number of added shortest pull flow paths in the shortest pull flow path set reaches the set number threshold.
Further optionally, when screening out, from the shortest pull flow path set, at least one shortest pull flow path from the target pull flow node to the target push flow node whose current remaining bandwidth is greater than the bandwidth threshold, the path planning module 72 is specifically configured to: screen out, from the shortest pull flow path set, at least one shortest pull flow path whose current remaining bandwidth is greater than the bandwidth threshold; and, taking the condition that different shortest pull flow paths must not share the same relay node as a screening condition, screen out, from the at least one shortest pull flow path whose current remaining bandwidth is greater than the bandwidth threshold, at least one shortest pull flow path that satisfies the screening condition.
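The two-stage screening described above (a per-node bandwidth filter followed by a relay-disjointness filter) can be sketched as follows. Input shapes (`remaining_bw` as a per-node dict, paths as `(node_list, delay)` tuples) and the greedy order are assumptions for the sketch.

```python
# Sketch: screen the shortest pull flow path set by remaining bandwidth,
# then keep only paths whose relay nodes do not repeat across the selection.

def screen_paths(path_set, remaining_bw, threshold):
    # Stage 1: every node on a surviving path must have current remaining
    # bandwidth greater than the bandwidth threshold.
    feasible = [p for p in path_set
                if all(remaining_bw[n] > threshold for n in p[0])]
    # Stage 2: greedily keep relay-disjoint paths, lowest delay first
    # (relays are the interior nodes of each path).
    chosen, used_relays = [], set()
    for nodes, d in sorted(feasible, key=lambda p: p[1]):
        relays = set(nodes[1:-1])
        if relays & used_relays:
            continue
        chosen.append((nodes, d))
        used_relays |= relays
    return chosen
```

Keeping the selected paths relay-disjoint means a single overloaded relay CDN node cannot sit on every chosen path, which supports the bandwidth-control goal stated in the abstract.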
Further optionally, when obtaining the current remaining bandwidth of each CDN node in some or all of the CDN nodes, the obtaining module 70 is specifically configured to: for each CDN node in some or all of the CDN nodes, determine the current remaining bandwidth of the CDN node according to the total available bandwidth of the CDN node and the currently used bandwidth of the CDN node.
The method shown in fig. 3 may be performed by the apparatus shown in fig. 7, and details of implementation principles and technical effects are not repeated. The specific manner in which each module and unit of the apparatus shown in fig. 7 in the above-described embodiment perform operations has been described in detail in the embodiment related to the method, and will not be described in detail herein.
It should be noted that the execution subjects of the steps of the methods provided in the above embodiments may be the same device, or different devices may be used as the execution subjects of the methods. For example, the execution subjects of steps 300 to 303 may be device a; for another example, the execution subject of steps 300 to 302 may be device a, and the execution subject of step 303 may be device B; and so on.
In addition, in some of the flows described in the above embodiments and the drawings, a plurality of operations are included in a specific order, but it should be clearly understood that the operations may be executed out of the order presented herein or in parallel, and the sequence numbers of the operations, such as 301, 302, etc., are merely used for distinguishing different operations, and the sequence numbers do not represent any execution order per se. Additionally, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel. It should be noted that, the descriptions of "first", "second", etc. in this document are used for distinguishing different messages, devices, modules, etc., and do not represent a sequential order, nor do they limit the types of "first" and "second".
Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 8, the electronic apparatus includes: a memory 81 and a processor 82;
the memory 81 is used to store computer programs and may be configured to store various other data to support operations on the computing platform. Examples of such data include instructions for any application or method operating on the computing platform, contact data, phonebook data, messages, pictures, videos, and so forth.
The memory 81 may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk.
A processor 82 coupled to the memory 81 for executing the computer program in the memory 81 for: acquiring network state data of the CDN, wherein the network state data comprises a communication state and a communication time delay between every two CDN nodes in part or all CDN nodes of the content delivery network, and the current residual bandwidth of each CDN node in the part or all CDN nodes; selecting at least one CDN node from a plurality of CDN nodes included in the content delivery network as at least one pull flow node, and selecting at least one corresponding push flow node from the plurality of CDN nodes included in the content delivery network for each pull flow node; performing path planning according to the network state data to obtain the shortest pull flow path from each pull flow node to at least one corresponding push flow node; and distributing the content according to the shortest pull flow path from each pull flow node to at least one corresponding push flow node.
Further, as shown in fig. 8, the electronic device further includes: communication components 83, display 84, power components 85, audio components 86, and the like. Only some of the components are schematically shown in fig. 8, and the electronic device is not meant to include only the components shown in fig. 8. In addition, the components within the dashed line in fig. 8 are optional components, not necessary components, and may be determined according to the product form of the electronic device. The electronic device of this embodiment may be implemented as a terminal device such as a desktop computer, a notebook computer, a smart phone, or an IOT device, or may be a server device such as a conventional server, a cloud server, or a server array. If the electronic device of this embodiment is implemented as a terminal device such as a desktop computer, a notebook computer, a smart phone, etc., the electronic device may include components within a dashed line frame in fig. 8; if the electronic device of this embodiment is implemented as a server device such as a conventional server, a cloud server, or a server array, the components in the dashed box in fig. 8 may not be included.
For details of the implementation process of each action performed by the processor, reference may be made to the foregoing method embodiment or the related description in the device embodiment, and details are not described herein again.
Accordingly, the present application further provides a computer-readable storage medium storing a computer program, where the computer program is capable of implementing the steps that can be executed by the electronic device in the foregoing method embodiments when executed.
Accordingly, the present application also provides a computer program product, which includes a computer program/instruction, when the computer program/instruction is executed by a processor, the processor is enabled to implement the steps that can be executed by an electronic device in the above method embodiments.
The communication component is configured to facilitate wired or wireless communication between the device in which it is located and other devices. The device in which the communication component is located can access a wireless network based on a communication standard, such as WiFi, a mobile communication network such as 2G, 3G, 4G/LTE, or 5G, or a combination thereof. In an exemplary embodiment, the communication component receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
The display includes a screen, which may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
The power supply assembly provides power for various components of the device in which the power supply assembly is located. The power components may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device in which the power component is located.
The audio component may be configured to output and/or input an audio signal. For example, the audio component includes a Microphone (MIC) configured to receive an external audio signal when the device in which the audio component is located is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in a memory or transmitted via a communication component. In some embodiments, the audio assembly further comprises a speaker for outputting audio signals.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. As defined herein, a computer-readable medium does not include a transitory computer-readable medium such as a modulated data signal and a carrier wave.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a/an ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art to which the present application pertains. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (14)

1. A content delivery method is applied to a content delivery network, the content delivery network comprises a plurality of Content Delivery Network (CDN) nodes, and the method comprises the following steps:
obtaining network state data of a content delivery network, wherein the network state data comprises a communication state and a communication time delay between every two CDN nodes in part or all CDN nodes of the content delivery network, and a current residual bandwidth of each CDN node in part or all CDN nodes;
selecting at least one CDN node from a plurality of CDN nodes included in the content delivery network as at least one pull node, and selecting at least one corresponding push node from the plurality of CDN nodes included in the content delivery network for each pull node;
performing path planning according to the network state data to obtain the shortest pull flow path from each pull flow node to at least one corresponding push flow node;
and distributing the content according to the shortest pull flow path from each pull flow node to at least one corresponding push flow node.
2. The method of claim 1, wherein performing path planning according to the network state data to obtain a shortest pull-flow path from each pull-flow node to at least one corresponding push-flow node, respectively, comprises:
for any target pull flow node in at least one pull flow node and any target push flow node in at least one push flow node corresponding to the target pull flow node, selecting at least one relay CDN node which is communicable with the target push flow node in CDN nodes except the target pull flow node and the target push flow node;
determining a shortest pull flow path set from the target pull flow node to the target push flow node according to a plurality of shortest pull flow paths with different hop counts from the target pull flow node to each relay CDN node and a shortest pull flow path with the hop count from each relay CDN node to the target push flow node being 1;
and screening out at least one shortest pull flow path from the target pull flow node to the target push flow node, wherein the current residual bandwidth of the shortest pull flow path is greater than a bandwidth threshold value.
3. The method of claim 2, wherein determining the shortest set of pull paths from the target pull node to the target push node comprises:
adding the shortest pull flow path with the hop count from the target pull flow node to the target push flow node being 1 into the shortest pull flow path set, and setting the current hop count to be 2;
determining a plurality of candidate shortest pull flow paths with the current hop count from the target pull flow node to the target push flow node according to the shortest pull flow path with 1 hop less than the current hop count from the target pull flow node to each relay CDN node and the shortest pull flow path with 1 hop count from each relay CDN node to the target push flow node, and determining respective communication time delay of the plurality of candidate shortest pull flow paths with the current hop count;
selecting a plurality of candidate shortest pull flow paths with the minimum communication delay and the current hop number as a plurality of shortest pull flow paths with the current hop number to be added into the shortest pull flow path set;
judging whether the current hop count reaches the maximum allowable hop count, if not, adding 1 to the current hop count, and returning to execute the step of determining a plurality of candidate shortest pull flow paths with the current hop count from the target pull flow node to the target push flow node until the current hop count reaches the maximum allowable hop count.
4. The method of claim 3, wherein determining the respective communication delays of the candidate shortest pull-path paths with the current hop count comprises:
and determining communication time delays of a plurality of candidate shortest pull flow paths with current hop counts from the target pull flow node to the target push flow node according to the communication time delay of the shortest pull flow path with 1 hop less than the current hop count from the target pull flow node to each relay CDN node and the communication time delay of the pull flow path with 1 hop count from each relay CDN node to the target push flow node.
5. The method of claim 3, wherein selecting the candidate shortest pull flow paths with the current hop count and the smallest communication delay as the shortest pull flow paths with the current hop count to be added to the shortest pull flow path set comprises:
judging whether the number of added shortest pull flow paths in the shortest pull flow path set reaches a set number threshold;
if the judgment result is yes, selecting one candidate shortest pull flow path with the minimum communication time delay from a plurality of candidate shortest pull flow paths with the current hop number which are not selected as the shortest pull flow path to be added;
if the shortest pull flow path set contains an added shortest pull flow path whose communication delay is greater than that of the shortest pull flow path to be added, deleting the added shortest pull flow path whose communication delay is greater than that of the shortest pull flow path to be added;
and adding the shortest pull flow path to be added into the shortest pull flow path set, and returning to the step of selecting the candidate shortest pull flow path with the smallest communication delay as the shortest pull flow path to be added, until the shortest pull flow path set contains no added shortest pull flow path whose communication delay is greater than that of the shortest pull flow path to be added.
6. The method of claim 5, further comprising:
if the judgment result is negative, selecting one candidate shortest pull flow path with the minimum communication time delay from a plurality of candidate shortest pull flow paths with the current hop number which are not selected as the shortest pull flow path to be added;
and adding the shortest pull flow path to be added into the shortest pull flow path set, and returning to execute the step of judging whether the number of the added shortest pull flow paths in the shortest pull flow path set reaches a set number threshold value.
7. The method according to any one of claims 2 to 6, wherein the step of screening out at least one shortest pull flow path from the target pull flow node to the target push flow node from the shortest pull flow path set, where a current remaining bandwidth is greater than a bandwidth threshold, comprises:
screening out, from the shortest pull flow path set, at least one shortest pull flow path whose current remaining bandwidth is greater than a bandwidth threshold;
and taking the condition that different shortest pull flow paths must not share the same relay node as a screening condition, screening out, from the at least one shortest pull flow path whose current remaining bandwidth is greater than the bandwidth threshold, at least one shortest pull flow path that satisfies the screening condition.
8. The method of any of claims 2 to 6, wherein obtaining the current remaining bandwidth of each of some or all of the CDN nodes comprises:
and determining the current residual bandwidth of each CDN node in part or all CDN nodes according to the total available bandwidth of the CDN node and the current used bandwidth of the CDN node.
9. A content distribution network, comprising: a detection node, a scheduling node and a plurality of edge nodes;
the detection node is configured to detect the communication state and the communication delay between every two edge nodes among some or all of the edge nodes;
the scheduling node is configured to: acquire the communication state and the communication delay, detected by the detection node, between every two edge nodes among some or all of the edge nodes, and determine the current remaining bandwidth of each of some or all of the edge nodes; select at least one edge node from the plurality of edge nodes as at least one pull flow node, and select, for each pull flow node, a corresponding at least one push flow node from the plurality of edge nodes; perform path planning according to the network state data to obtain the shortest pull flow path from each pull flow node to the corresponding at least one push flow node; and distribute content according to the shortest pull flow path from each pull flow node to the corresponding at least one push flow node.
10. The content distribution network according to claim 9, wherein, when determining the current remaining bandwidth of each of some or all of the edge nodes, the scheduling node is specifically configured to:
acquire the currently used bandwidth of each of some or all of the edge nodes from a log server, and determine the current remaining bandwidth of each edge node according to the total available bandwidth of the edge node and the currently used bandwidth of the edge node.
11. A content distribution apparatus, comprising:
an acquisition module, configured to acquire network state data of a content delivery network, where the network state data includes the communication state and the communication delay between every two CDN nodes among some or all CDN nodes of the content delivery network, and the current remaining bandwidth of each of the some or all CDN nodes;
a selection module, configured to select at least one CDN node from a plurality of CDN nodes included in the content delivery network as at least one pull flow node, and to select, for each pull flow node, a corresponding at least one push flow node from the plurality of CDN nodes included in the content delivery network;
a path planning module, configured to perform path planning according to the network state data to obtain the shortest pull flow path from each pull flow node to the corresponding at least one push flow node;
and a content distribution module, configured to distribute content according to the shortest pull flow path from each pull flow node to the corresponding at least one push flow node.
12. The apparatus of claim 11, wherein the path planning module is specifically configured to:
for any target pull flow node among the at least one pull flow node and any target push flow node among the at least one push flow node corresponding to the target pull flow node, select, from CDN nodes other than the target pull flow node and the target push flow node, at least one relay CDN node whose communication state with the target push flow node is connected;
determine a shortest pull flow path set from the target pull flow node to the target push flow node according to a plurality of shortest pull flow paths with different hop counts from the target pull flow node to each relay CDN node and the shortest pull flow path, with a hop count of 1, from each relay CDN node to the target push flow node;
and screen out, from the shortest pull flow path set, at least one shortest pull flow path from the target pull flow node to the target push flow node whose current remaining bandwidth is greater than a bandwidth threshold.
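The path-set construction in claim 12 combines multi-hop shortest paths from the pull flow node to each relay CDN node with the single-hop link from each relay to the push flow node. A sketch under assumed data structures (a dict of relay → candidate paths, and a set of relays whose communication state with the push node is connected), which are illustrative rather than taken from the claim:

```python
def build_shortest_path_set(pull_paths_to_relays, relays_connected_to_push, push_node):
    """Combine pull->relay paths with the 1-hop relay->push link (claim 12 sketch).

    pull_paths_to_relays: dict mapping each relay CDN node to a list of
    paths (node lists) of different hop counts from the pull node to it.
    relays_connected_to_push: set of relays with a connected communication
    state to the push node, i.e. reachable in exactly one more hop.
    """
    path_set = []
    for relay, paths in pull_paths_to_relays.items():
        if relay not in relays_connected_to_push:
            continue  # relay cannot forward to the push node directly
        for path in paths:
            # Extend each pull->relay path by the final relay->push hop.
            path_set.append(path + [push_node])
    return path_set
```

The resulting set would then be screened by the remaining-bandwidth condition of claim 12 (and, per claim 7, by relay-node disjointness).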
13. An electronic device, comprising: a memory and a processor; the memory is configured to store a computer program; and the processor is coupled to the memory and configured to execute the computer program to perform the steps of the method of any one of claims 1 to 11.
14. A computer-readable storage medium having a computer program stored thereon, which, when executed by a processor, causes the processor to carry out the steps of the method of any one of claims 1 to 11.
CN202211124996.8A 2022-09-15 2022-09-15 Content distribution method, content distribution device, content distribution network, device, and medium Pending CN115643203A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211124996.8A CN115643203A (en) 2022-09-15 2022-09-15 Content distribution method, content distribution device, content distribution network, device, and medium


Publications (1)

Publication Number Publication Date
CN115643203A true CN115643203A (en) 2023-01-24

Family

ID=84941990

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211124996.8A Pending CN115643203A (en) 2022-09-15 2022-09-15 Content distribution method, content distribution device, content distribution network, device, and medium

Country Status (1)

Country Link
CN (1) CN115643203A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117278466A (en) * 2023-09-14 2023-12-22 清华大学 Candidate path selection method for fault-tolerant traffic engineering scene

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112804555A (en) * 2021-04-08 2021-05-14 北京新唐思创教育科技有限公司 Line scheduling method, system, electronic device and computer storage medium
CN114501073A (en) * 2022-02-16 2022-05-13 上海哔哩哔哩科技有限公司 Live broadcast source returning method and device
CN114760482A (en) * 2022-03-30 2022-07-15 上海哔哩哔哩科技有限公司 Live broadcast source returning method and device
CN114945046A (en) * 2022-05-19 2022-08-26 阿里巴巴(中国)有限公司 Return-source path determining method, content distribution network, storage medium, and program product



Similar Documents

Publication Publication Date Title
US11405310B2 (en) Method and apparatus for selecting processing paths in a software defined network
US10819606B2 (en) Method and apparatus for selecting processing paths in a converged network
CN112218100B (en) Content distribution network, data processing method, device, equipment and storage medium
US11071037B2 (en) Method and apparatus for directing wireless resources in a communication network
US20220086729A1 (en) Method and apparatus for coordinating wireless resources in a communication network
CN109348264B (en) Video resource sharing method and device, storage medium and electronic equipment
CN104185036A (en) Video file source returning method and device
CN108683730B (en) Resource scheduling method, service server, client terminal, network system and medium
CN113301364A (en) Path planning method, CDN connection establishing method, device and storage medium
CN111800285A (en) Instance migration method and device and electronic equipment
CN105786539B (en) File downloading method and device
US9553790B2 (en) Terminal apparatus and method of controlling terminal apparatus
CN115643203A (en) Content distribution method, content distribution device, content distribution network, device, and medium
CN113301397A (en) CDN-based audio and video transmission, playing and delay detection method and device
CN110875947A (en) Data processing method and edge node equipment
CN108306923A (en) A kind of live video method for uploading, device, electronic equipment and storage medium
KR20130120288A (en) Real time monitoring system and method for street based on push type communication
CN112114804A (en) Application program generation method, device and system
CN103442257A (en) Method, device and system for achieving flow resource management
CN113301098A (en) Path planning method, CDN connection establishing method, device and storage medium
CN112203063B (en) Distributed implementation method and system for video networking and electronic equipment
CN109831467A (en) Data transmission method, equipment and system
CN112149964A (en) Resource allocation method and device
CN111104575A (en) Data capture method and device and electronic equipment
KR20080086142A (en) Method of providing mobile application and computer-readable medium having thereon program performing function embodying the same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination