WO2023065893A1 - A network scheduling method, system and device


Publication number: WO2023065893A1
Authority: WO (WIPO, PCT)
Prior art keywords: network, node, cluster, nodes, parameters
Application number: PCT/CN2022/118630
Other languages: English (en), French (fr)
Inventors: 黄璐真, 路有兵, 杨昌鹏, 石翰
Original assignee: 华为云计算技术有限公司 (Huawei Cloud Computing Technologies Co., Ltd.)
Application filed by 华为云计算技术有限公司
Publication of WO2023065893A1


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00: Routing or path finding of packets in data switching networks
    • H04L 45/12: Shortest path evaluation
    • H04L 45/30: Routing of multiclass traffic
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/01: Protocols
    • H04L 67/10: Protocols in which an application is distributed across nodes in the network
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 74/00: Wireless channel access
    • H04W 74/04: Scheduled access

Definitions

  • The present application relates to the field of communications, and in particular to a network scheduling method, system, and device.
  • Cloud vendors usually use a tree network structure when providing cloud video services. That is, the network includes a certain number of central nodes and edge nodes, with more edge nodes than central nodes. Users connect to edge nodes through the network to access cloud video services. Because there are many edge nodes, all edge nodes need to frequently pull streams from the central node to obtain video data and provide it to users. Under a tree network structure, the load pressure on the central node is therefore high, which can easily cause the server hosting the central node to crash. In addition, adjustments to the network structure must be made manually, relying on experience.
  • the present application provides a network scheduling method, which can optimize the network structure in real time.
  • A first aspect of the present application provides a network scheduling method, the method comprising: obtaining service requirement parameters of a service; determining an optimization goal for the network according to the service requirement parameters; obtaining the network architecture and multiple network operating parameters of the network, where the network operating parameters include communication parameters between two nodes in the network and network operating parameters of the nodes; and dividing the network into at least one cluster according to the optimization goal and the multiple network operating parameters, where each cluster includes at least one node in the network.
  • In this way, the network architecture is optimized in real time based on the optimization goal, and the network is automatically divided into at least one cluster, where each cluster includes at least one node. Manually optimizing the network architecture based on experience is thus avoided, and the efficiency of network architecture optimization is effectively improved.
  • the method further includes: determining a routing path between two nodes within the cluster. After the network is divided into at least one cluster, the routing path between nodes in the same cluster is optimized to further optimize the network architecture.
  • the method further includes determining a routing path between the first node of the first cluster and the second node of the second cluster. After the network is divided into at least one cluster, the routing paths between nodes between the clusters are optimized to further optimize the network architecture.
  • the service requirement parameters include one or more of the following: delay, fluency, and clarity.
  • The method further includes: providing a first configuration interface, where the first configuration interface is used to obtain the service requirement parameters input by the user.
  • the communication parameters include one or more of the following: time delay, packet loss rate, and jitter.
  • The method further includes: acquiring multiple historical network operating parameters of the network, where the optimization goal, the multiple network operating parameters, and the historical network operating parameters are used to divide the at least one cluster.
  • The method further includes: providing a second configuration interface, where the second configuration interface is used to obtain constraints input by the user, and the optimization goal, the constraints, and the multiple network operating parameters are used to divide the at least one cluster in the network.
  • In this way, the constraints input by the user are obtained, so that clusters can be divided according to the user's needs and constraints, ensuring reasonable network optimization.
  • A second aspect of the present application provides a network scheduling node, which includes a communication module and a processing module:
  • the communication module is used to acquire service requirement parameters of a service;
  • the processing module is configured to determine an optimization goal for the network according to the service requirement parameters;
  • the communication module is also used to obtain multiple network operating parameters of the network, where the network operating parameters include communication parameters between two nodes in the network; according to the optimization goal and the multiple network operating parameters, the network is divided into at least one cluster, and each cluster includes at least one node in the network.
  • the processing module is also used to determine a routing path between two nodes in the cluster.
  • the processing module is further configured to determine a routing path between the first node of the first cluster and the second node of the second cluster.
  • the service requirement parameters include one or more of the following: delay, fluency, and clarity.
  • the communication module is also used to provide a first configuration interface, where the first configuration interface is used to obtain the service requirement parameters input by the user.
  • the communication parameters include one or more of the following: time delay, packet loss rate, and jitter.
  • the communication module is also used to acquire multiple historical network operating parameters of the network, where the optimization goal, the multiple network operating parameters, and the historical network operating parameters are used to divide the at least one cluster in the network.
  • the communication module is also used to provide a second configuration interface, where the second configuration interface is used to acquire constraints input by the user, and the optimization goal, the constraints, and the multiple network operating parameters are used to divide the at least one cluster in the network.
  • a third aspect of the present application provides a network scheduling system, which includes a network scheduling node and a network node, where the network scheduling node is configured to execute the method as provided in the first aspect.
  • A fourth aspect of the present application provides a network scheduling node, including a processor and a memory, where the processor is used to execute instructions in the memory so that the network scheduling node performs the method provided by the first aspect or any possible design of the first aspect.
  • a fifth aspect of the present application provides a computer program product containing instructions.
  • When the instructions are executed by a cluster of computer devices, the cluster of computer devices executes the method provided by the first aspect or any possible design of the first aspect.
  • a sixth aspect of the present application provides a computer-readable storage medium, including computer program instructions, for executing the method provided by the first aspect or any possible design of the first aspect.
  • FIG. 1 is a schematic diagram of a traditional tree network structure involved in the present application.
  • FIG. 2 is an architecture diagram of network scheduling involved in the present application.
  • FIG. 3 is a flowchart of a network scheduling method involved in the present application.
  • FIG. 4 is a schematic diagram of an interactive interface involved in the present application.
  • FIG. 5(a) is a schematic diagram of a cluster division involved in the present application.
  • FIG. 5(b) is an architecture diagram of a network involved in the present application.
  • FIG. 5(c) is an architecture diagram of another network involved in the present application.
  • FIG. 6(a) is an architecture diagram of an edge cluster involved in the present application.
  • FIG. 6(b) is an architecture diagram of another edge cluster involved in the present application.
  • FIG. 7 is a schematic diagram of a network scheduling node involved in the present application.
  • FIG. 8 is a schematic diagram of another network scheduling node involved in the present application.
  • FIG. 9 is a schematic diagram of a network scheduling node cluster involved in the present application.
  • FIG. 10 is a schematic diagram of another network scheduling node cluster involved in the present application.
  • FIG. 11 is a schematic diagram of another network scheduling node cluster involved in the present application.
  • The terms "first" and "second" in the embodiments of the present application are used for description purposes only, and shall not be understood as indicating or implying relative importance or implicitly indicating the quantity of the indicated technical features. Thus, a feature qualified as "first" or "second" may explicitly or implicitly include one or more of these features.
  • CDN: content distribution network
  • IP: Internet Protocol
  • a tree-structured network usually includes a certain number of source nodes, central nodes, and edge nodes, wherein the number of various types of nodes increases sequentially.
  • the source node is responsible for the recording and transcoding of data (such as media data streams);
  • the central node usually has a large bandwidth and stable network, and is responsible for the cross-region scheduling and forwarding of data; while the edge node is close to the user side and is responsible for the user's nearby access.
  • Pull stream: the process in which user-side devices or edge nodes pull data from other servers.
  • Fig. 1 shows a traditional tree network structure, which includes at least two central nodes C1 and C2, and four edge nodes E1, E2, E3 and E4.
  • User H is the broadcaster, and user H accesses the network through E1.
  • When anchor H starts broadcasting, the upstream video data is received by the edge node E1 closest to anchor H, and E1 then pushes the stream to its corresponding central node C1.
  • When a viewer requests to watch the anchor, the request first goes to the edge node closest to the viewer. If that edge node already has the anchor's stream, the stream is pulled and watched directly. If the node has no stream, it needs to make a pull request to the central node corresponding to that edge node.
  • Both user 1 and user 2 access the network through the edge node E2 closest to the two users.
  • The central node C2 needs to pull the stream from C1 to obtain the live data stream of anchor H. C2 will continue to obtain anchor H's live data stream from C1 until users 3 and 4 both stop requesting it.
  • Edge node E3 or E4 can then obtain the live data stream of anchor H without performing additional stream pulling operations.
  • If the edge node accessed by the user has not cached the data stream requested by the user in advance, the edge node needs to pull the stream from the central node after receiving the user's request.
  • As the number of users increases, the downlink bandwidth of the edge nodes also increases continuously, and bandwidth costs increase dramatically.
  • Moreover, the tree network structure is relatively fixed, and data streams can only be requested from upper-level nodes; that is, the network structure cannot be optimized according to network operating parameters (such as bandwidth and delay).
  • the present application proposes a network scheduling method 100, which optimizes and adjusts network topology and routing paths based on service types by collecting network operating parameters of the network.
  • Fig. 2 provides an architecture diagram of network scheduling.
  • the architecture diagram includes at least a network scheduling node 200, two central nodes C1 and C2, four edge nodes E1, E2, E3 and E4, and several users.
  • A central node or edge node usually consists of one or more devices (such as servers or server clusters) used to perform one or more of the following functions: forwarding data read/write requests, sending and receiving data, and encoding/decoding data.
  • edge nodes are also used for data transmission with devices at the user layer.
  • Generally, the computing power and storage capacity of a central node are greater than those of an edge node; for example, the central node includes more CPUs or larger storage space.
  • the central node or the edge node may also be a virtual machine or a container.
  • the user layer includes terminal devices corresponding to multiple users.
  • Terminal devices are devices that have one or more of data transmission, data processing, and display capabilities, for example, mobile phones, TVs, tablets, laptops, or smart speakers.
  • the network scheduling node 200 is used to collect multiple network operating parameters, and adjust and schedule the network topology and routing paths according to the network operating parameters.
  • The network operating parameters include two parts. One part is the network operating parameters of each node, for example, bandwidth (uplink and downlink bandwidth), central processing unit (CPU) utilization, memory occupancy, and other parameters.
  • The other part is the communication parameters between nodes, for example, inter-node delay, packet loss rate, and jitter.
  • the nodes can use their own detection devices to obtain network operating parameters. Further, the acquired network operation parameters are sent to the network scheduling node 200 by way of data transmission.
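  • As a concrete illustration of these two kinds of parameters, the following sketch models per-node and inter-node reports arriving at the scheduling node. All class, field, and method names are invented for this sketch and do not come from the present application.

```python
from dataclasses import dataclass

@dataclass
class NodeParams:
    uplink_bw_mbps: float    # uplink bandwidth
    downlink_bw_mbps: float  # downlink bandwidth
    cpu_util: float          # CPU utilisation, 0..1
    mem_util: float          # memory occupancy, 0..1

@dataclass
class LinkParams:
    delay_ms: float          # delay between the two nodes
    loss_rate: float         # packet loss rate, 0..1
    jitter_ms: float         # delay variation

class SchedulingNode:
    """Receives the reports sent by each node's parameter-collection module."""
    def __init__(self):
        self.node_params = {}  # node id -> NodeParams
        self.link_params = {}  # unordered node pair -> LinkParams

    def report_node(self, node_id, params):
        self.node_params[node_id] = params

    def report_link(self, a, b, params):
        # store the pair in a canonical order so (E1, E2) == (E2, E1)
        self.link_params[tuple(sorted((a, b)))] = params

sched = SchedulingNode()
sched.report_node("E1", NodeParams(100, 1000, 0.35, 0.5))
sched.report_link("E2", "E1", LinkParams(delay_ms=30, loss_rate=0.001, jitter_ms=2))
```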
  • modules in the network scheduling node 200 for collecting multiple network operating parameters may be arbitrarily distributed among various nodes.
  • a parameter collection module can be added to each node to collect network operating parameters.
  • a parameter collection module may be added to some nodes to collect network operating parameters.
  • the modules in the network scheduling node 200 for adjusting and scheduling the network topology and routing paths according to the network operating parameters may be deployed based on cloud services. For example, after acquiring the network operating parameters of multiple nodes, the remote server or server cluster can adjust and schedule the network topology and routing paths according to the network operating parameters. Wherein, the servers in the server cluster may not be deployed in the same computer room.
  • FIG. 3 shows a flowchart of a network scheduling method 100 .
  • the network scheduling node 200 obtains the types of services running on the network.
  • The network scheduling node 200 is configured to receive service type indication information sent by the user, where the service type indication information indicates the service types running on the network. Further, the network scheduling node 200 may determine the service requirement parameters of each type of service according to its service type. Service requirement parameters include low latency, high fluency, high clarity, and high stability. Among them, high fluency requires a low packet loss rate and low jitter, while high clarity requires large bandwidth and low bandwidth occupancy.
  • the above service type indication information may also include service requirement parameters of various types of services.
  • the service type of the user may also be obtained by the network scheduling node 200 according to the historical records of the user.
  • a network can be used to run services of one or more users. That is, part of the network can be separately provided to a certain user as a dedicated network to ensure high stability of business operation and other requirements. At the same time, part of the network can also be provided to multiple users at the same time when the bandwidth, network, and delay can meet the requirements. Regardless of whether one user or multiple users occupy a network, there may be one or more types of corresponding services.
  • FIG. 4 provides an interactive interface for setting service types and service requirement parameters.
  • the interactive interface includes a primary service type selection control 301 , a secondary service type selection control 302 and a service requirement parameter setting control 303 .
  • the first-level business types include live video services, online education video services, and conference video services.
  • the second-level business type corresponds to the classification of the first-level business type.
  • the secondary services corresponding to the live video service include game live broadcasts and song live broadcasts;
  • the secondary services corresponding to the conference video service include political meetings and technical conferences;
  • the secondary services corresponding to the online education video service include large classes, small classes, and art classes.
  • the user can click the first-level service type selection control 301 to expand its second-level service menu, and then click the second-level service type selection control 302 to select its service type. It should be noted that the user can select one or more service types according to the actual situation.
  • the user can set the service requirement parameters of this type of service through the service requirement parameter setting control 303 .
  • the service requirement parameters include one or more of the following: parameters such as time delay, fluency, and clarity.
  • Each parameter type offers three levels for users to choose from: high, medium, and low.
  • the three levels of latency parameters correspond to three types of latency ranges.
  • the way for the user to set the business requirement parameter may also be to input a specified value or range of values.
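  • The two input styles just described (a named level, or a specified value range) could be handled as in the following sketch. The concrete latency ranges are assumptions; the text says only that the three levels correspond to three ranges.

```python
# Hypothetical mapping from the high/medium/low levels on the interface
# to concrete latency ranges in ms (values are assumed, not from the text).
LATENCY_RANGES_MS = {
    "high":   (0, 300),
    "medium": (300, 1000),
    "low":    (1000, 5000),
}

def latency_requirement(level_or_range):
    """Accept either a named level or a user-supplied (min, max) range,
    matching the two input styles described in the text."""
    if isinstance(level_or_range, str):
        return LATENCY_RANGES_MS[level_or_range]
    return tuple(level_or_range)
```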
  • the network scheduling node 200 acquires service characteristic parameters of network running services.
  • the service characteristic parameters of each type of service can be determined.
  • the service characteristic parameters include business geographical distribution, node type, node distribution and node running time distribution.
  • Geographical distribution of services refers to the area covered by the service and the distribution within the covered area.
  • Take the live video broadcast service as an example: the current live broadcast service can cover most countries in the world, while anchors are mainly concentrated in a few popular cities.
  • the location of the main conference venue is relatively fixed, and the locations of other online conference participants are uncertain.
  • In cluster planning and route calculation, the geographical distribution of services can be added as a constraint to the optimization calculations to improve the stability of core service functions. For example, a city with a large number of anchors should be planned with large uplink bandwidth, while the main venue of a video conference needs backup nodes or routing paths.
  • the node type refers to the type of node used to provide services.
  • Types of nodes include servers, virtual machines, and containers. Further, node types also include servers of different specifications, virtual machines, containers, and so on. Specifically, taking a server as an example, servers of different specifications include different processor resources, storage resources, and network resources. Depending on the characteristics of the business, the types of nodes required by the business are also different.
  • Node distribution refers to the distribution of edge nodes; because edge nodes provide user access, the distribution of edge nodes also reflects the distribution of users.
  • the distribution of central nodes corresponding to edge nodes will also be affected.
  • the distribution of nodes can be determined according to the geographical distribution of viewers in history. Further, cluster planning can be performed according to the distribution of nodes.
  • Node running time distribution refers to the time distribution of users accessing edge nodes and sending requests for data reading and writing. Specifically, when a user accesses an edge node, the edge node can receive the user's data read and write requests, and perform operations such as streaming or returning to the source according to the request. However, the access time of different users is not completely the same, so some edge nodes may be idle. Planning nodes with similar runtimes in the same cluster can improve the utilization of computing resources in the nodes.
  • the network scheduling node 200 acquires multiple network operating parameters.
  • the network scheduling node 200 can acquire multiple network operating parameters through a collection module deployed in the node.
  • the network operation parameters include two parts, one part is the network operation parameters of each node. For example, parameters such as bandwidth (uplink bandwidth and downlink bandwidth), bandwidth utilization, CPU utilization, and memory usage. The other part is the parameters between nodes. For example, delay between nodes, packet loss rate between nodes, jitter between nodes, etc.
  • Among the components of delay, transmission (sending) delay and propagation delay are the main considerations: for large packets, the transmission delay is the dominant factor; for small packets, the propagation delay is the dominant factor.
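  • A small invented numeric example separates the two delay components. The formulas are the standard ones: transmission delay is packet size over link bandwidth, and propagation delay is link length over signal speed (roughly 200,000 km/s in fibre); the packet sizes, bandwidth, and distance below are assumed values.

```python
def transmission_delay_ms(packet_bits, bandwidth_bps):
    """Time to push every bit of the packet onto the link."""
    return packet_bits / bandwidth_bps * 1000

def propagation_delay_ms(distance_km, speed_km_per_s=200_000):
    """Time for a bit to travel the length of the link."""
    return distance_km / speed_km_per_s * 1000

# Large packet on a modest link: transmission delay dominates.
large = transmission_delay_ms(12_000_000, 10_000_000)  # 12 Mbit at 10 Mbps
# Small packet on a long link: propagation delay dominates.
small = transmission_delay_ms(8_000, 10_000_000)       # 1 KB packet
far = propagation_delay_ms(2_000)                      # 2000 km of fibre
```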
  • LLLL: ultra-low-latency live streaming
  • ms: milliseconds
  • RTC: real-time communication
  • RTC services have higher delay requirements: the delay is required to be less than 300 ms.
  • a live broadcast service with a real-time voice call function belongs to the RTC service.
  • Bandwidth refers to the amount of data that can pass through a link per unit time, in units of bits per second (bps), that is, the number of bits that can be transmitted per second.
  • the collection period of the network operating parameters may be determined as required.
  • Both types of parameters may be stored in the network scheduling node 200.
  • the network scheduling node 200 performs cluster planning according to service requirement parameters and network operation parameters.
  • the clusters are divided into two types, one is the center cluster and the other is the edge cluster.
  • the center cluster includes a plurality of center nodes
  • the edge cluster includes at least one center node and at least one edge node.
  • Each node in a cluster can communicate directly; that is, any two edge nodes in an edge cluster can communicate without going through the central node.
  • cluster planning refers to planning the number of clusters in the network and the number of nodes contained in each cluster according to at least one of the business requirement parameters and multiple network operating parameters.
  • the central node in the center cluster is also a part of some edge clusters.
  • Figure 5(a) provides a network with two central nodes and four edge nodes.
  • the central cluster includes central nodes C1 and C2.
  • Edge cluster 1 includes central node C1, edge nodes E1 and E2, and edge cluster 2 includes central node C2, edge nodes E3 and E4.
  • the nodes in the same cluster can communicate directly.
  • the prerequisite for communication is to detect the communication channel between each node in the cluster and another node. For example, when parameters such as delay, packet loss rate, and jitter between edge nodes E1 and E2 meet communication requirements, the two nodes E1 and E2 can be allowed to communicate.
  • the nodes in the same cluster are connected through wired cables.
  • the number and distribution of nodes in each cluster can be determined.
  • Figure 5(b) provides a network containing multiple nodes.
  • such courses usually require low latency (less than 300ms) because of the need for real-time interaction between teachers and students.
  • the edge node accessed by the teacher is E1
  • the edge nodes accessed by other students are E2, E3, and E4.
  • Assume the uplink and downlink delays between nodes are equal. As shown in the figure, the delays from E1 and E2 in edge cluster 1 to the central node C1 are both 50 ms, and the delay between E1 and E2 is 30 ms.
  • the time delays from E3 and E4 in the edge cluster 2 to the central node C2 are both 50 ms, and the time delay between E3 and E4 is 30 ms.
  • the time delay between two central nodes C1 and C2 in the central cluster is 80ms.
  • In this case, the communication delay between any two edge nodes via the central nodes is less than the required 300 ms; that is, teachers and students can interact smoothly.
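  • The quoted figures can be checked with a small helper. The delay table below reproduces the values stated above (symmetric links assumed); a student on E3 reaching the teacher on E1 traverses E3-C2-C1-E1.

```python
# Link delays (ms) from the example above; links are assumed symmetric.
DELAY_MS = {("E1", "C1"): 50, ("E2", "C1"): 50, ("E1", "E2"): 30,
            ("E3", "C2"): 50, ("E4", "C2"): 50, ("E3", "E4"): 30,
            ("C1", "C2"): 80}

def link(a, b):
    """Look up a link delay regardless of endpoint order."""
    return DELAY_MS.get((a, b)) or DELAY_MS[(b, a)]

def path_delay(path):
    """Total delay along a hop-by-hop path."""
    return sum(link(a, b) for a, b in zip(path, path[1:]))

e2e = path_delay(["E3", "C2", "C1", "E1"])  # 50 + 80 + 50 = 180 ms
assert e2e < 300  # meets the low-latency requirement of the class
```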
  • Suppose the delay between the central node C1 and the edge node E2 fluctuates from 50 ms to 500 ms, and the delay between the edge node E1 and the edge node E2 also fluctuates from 30 ms to 350 ms.
  • In this case, the delay for E2 to obtain E1's data stream via C1, or directly from E1, will be greater than the required 300 ms.
  • However, the delay between the edge node E2 and the central node C2 is only 50 ms; that is, the delay for E2 to obtain E1's data stream via the central nodes C2 and C1 in turn is 180 ms, which is less than the required 300 ms. Therefore, E2 can be removed from edge cluster 1 and added to edge cluster 2, ensuring normal service for users accessing the network through E2.
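  • The re-planning decision just described can be sketched as a small selection rule. The 300 ms requirement and the delay values come from the example; the function and variable names are illustrative only.

```python
REQUIRED_MS = 300  # delay requirement from the example

def best_cluster(node, heads, delay_to_head, rest_of_path_ms):
    """Pick the cluster head giving the smallest compliant end-to-end delay
    from `node` to the data source; return None if none complies."""
    totals = {h: delay_to_head[(node, h)] + rest_of_path_ms[h] for h in heads}
    head = min(totals, key=totals.get)
    return head if totals[head] <= REQUIRED_MS else None

# E2 -> C1 has spiked to 500 ms; E2 -> C2 is 50 ms. Reaching E1's stream
# via C1 costs 50 ms more (C1 -> E1); via C2 it costs 80 + 50 = 130 ms more.
delay_to_head = {("E2", "C1"): 500, ("E2", "C2"): 50}
rest_of_path_ms = {"C1": 50, "C2": 130}
new_head = best_cluster("E2", ["C1", "C2"], delay_to_head, rest_of_path_ms)
```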
  • cluster planning can also be performed based on neural network algorithms such as deep learning according to the cluster planning situation in historical data.
  • the historical data includes historical network operating parameters, historical delay data, historical site failure data, and delay distribution data.
  • constraints can be formed according to one or more of requirements such as fluency, clarity, and stability.
  • multiple central nodes can be provided in the edge cluster as required.
  • a backup central node can be arranged in the edge cluster to ensure that when a central node fails, the backup central node can be used to provide services.
  • the cycle for cluster planning can be determined according to needs, for example, it can be performed every hour, or when the network scheduling node 200 detects some abnormal parameters (such as large fluctuations in delay).
  • S109 The network scheduling node 200 performs scheduling according to the cluster plan.
  • the network scheduling node 200 distributes the cluster planning to each node, and divides the nodes in the network according to the cluster planning.
  • the network scheduling node 200 performs route calculation according to service requirement parameters, service characteristic parameters and network operation parameters.
  • Routing paths between all nodes in each cluster can be calculated. Specifically, based on parameters such as the bandwidth of each node in the same cluster, the delay between nodes, and the packet loss rate and jitter between nodes, the optimal routing path can be obtained with an optimization method, or multiple routing paths can be obtained and prioritized.
  • Fig. 6(a) shows an edge cluster including central node C2 and edge nodes E2, E3 and E4. Next, with low latency as the optimization goal, an optimal routing calculation process is introduced.
  • The delay between edge nodes E2 and E4 is 80 ms, while the delay between E2 and E3 is 20 ms and the delay between E3 and E4 is 30 ms. That is, considering delay alone, data transmitted by edge node E2 over the routing path E2-E3-E4 incurs a delay of 50 ms, which is less than the delay over the direct routing path E2-E4.
  • Similarly, the optimal routing path between the edge node E3 and the central node C2 can be calculated. By comparing the three routing paths between E3 and C2 (E3-C2, E3-E2-C2, and E3-E4-C2), it can be determined that the path with the shortest delay is E3-E2-C2, followed by E3-E4-C2, and finally E3-C2. That is, the routing paths between nodes can be ranked. Based on this ranking, a backup can be provided in case the optimal routing path between two nodes fails. For example, when the edge node E2 fails, the edge node E3 cannot obtain data from the central node C2 via E2 and may instead obtain data from the central node C2 via the edge node E4.
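  • The ranking just described is a shortest-path computation; a minimal Dijkstra sketch follows. The intra-edge delays (E2-E3, E3-E4, E2-E4) come from the text, while the delays to C2 are assumed values chosen only to reproduce the stated ranking E3-E2-C2, then E3-E4-C2, then E3-C2.

```python
import heapq

# Link delays (ms) in the edge cluster of FIG. 6(a). The delays to C2
# (40, 50, 100) are assumptions, not figures from the text.
EDGES = {("E2", "E3"): 20, ("E3", "E4"): 30, ("E2", "E4"): 80,
         ("E2", "C2"): 40, ("E4", "C2"): 50, ("E3", "C2"): 100}

def neighbors(n):
    for (a, b), d in EDGES.items():
        if a == n:
            yield b, d
        elif b == n:
            yield a, d

def shortest_path(src, dst):
    """Dijkstra over link delays; returns (total delay, path)."""
    heap = [(0, src, [src])]
    seen = set()
    while heap:
        delay, node, path = heapq.heappop(heap)
        if node == dst:
            return delay, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, d in neighbors(node):
            if nxt not in seen:
                heapq.heappush(heap, (delay + d, nxt, path + [nxt]))
    return float("inf"), []

best = shortest_path("E3", "C2")  # E3-E2-C2 at 20 + 40 = 60 ms
```

Collecting every candidate path instead of only the best one would give the ranked list used for backup routing.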
  • Figure 6(b) shows the minimum delay routing paths between nodes in the edge cluster.
  • route calculation can also be performed according to multiple business requirement parameters.
  • An objective function can be constructed based on two or more of parameters such as low delay, high fluency, high clarity, and low back-to-source cost, and an optimal routing path can be obtained with an optimization method.
  • The back-to-source cost refers to the cost generated by the bandwidth consumed when edge nodes pull data streams from the central node.
  • When constructing the objective function from multiple parameters, different weights may be assigned to different parameters.
  • the size of the weight value corresponding to each parameter can be determined according to needs (such as service type).
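  • A weighted-sum objective of this kind can be sketched as follows. The weight values per service type are purely illustrative, since the text says only that each parameter's weight is determined by need (such as the service type).

```python
# Illustrative weights per service type (invented values).
WEIGHTS = {
    "live_video": {"delay": 0.3, "loss": 0.2, "origin_cost": 0.5},
    "rtc":        {"delay": 0.7, "loss": 0.2, "origin_cost": 0.1},
}

def path_score(metrics, service_type):
    """Weighted sum over path metrics; lower is better.
    `metrics` is assumed to be normalised to [0, 1]."""
    w = WEIGHTS[service_type]
    return sum(w[k] * metrics[k] for k in w)

# The same candidate path scores differently for different service types:
# a high-delay path is penalised far more heavily under the RTC weighting.
m = {"delay": 0.9, "loss": 0.1, "origin_cost": 0.2}
```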
  • a constraint function may also be constructed according to service characteristic parameters.
  • different objective functions can be formulated for different clusters according to the geographic distribution of services.
  • for example, in a live video service the streamer (anchor) has relatively high uplink-bandwidth requirements, so when routing is calculated for the edge cluster containing the edge node the streamer accesses, the objective function can be constructed mainly from parameters such as uplink bandwidth and/or bandwidth occupancy.
  • the objective function of the edge cluster accessed by users watching the live broadcast can be constructed based on parameters such as downlink bandwidth and delay.
  • a constraint function can also be constructed from the running-time distribution. For example, in a live video service, some users' access periods are relatively fixed and longer than those of other users, so the edge nodes serving such users also run longer. When determining which edge node such a user should access, a node whose idle duration meets the user's requirements can be selected.
  • a constraint function may also be constructed according to business requirement parameters.
  • for example, the total delay between nodes may be constrained not to exceed a fixed value (for example, 400 ms). That is, with the delay guaranteed, the objective function is built around the lowest back-to-source cost and/or the lowest bandwidth occupancy.
  • in this implementation, the delay of the routing path between some nodes may not be the lowest, but the objective function is maximized subject to the constraint function.
  • routing paths between nodes in different clusters may be calculated.
  • for example, in a network containing at least three central nodes, the routing path between two of the central nodes can be calculated; the specific calculation can follow the method described above for routing paths within a single cluster.
  • the above provides some routing calculation strategies between edge clusters.
  • similar routing calculations can be performed for central clusters containing three or more central nodes, so as to optimize their routing paths.
  • for example, in a conference video service, the network may contain multiple central nodes
  • and the nodes in the edge cluster of one of the central nodes may correspond to the main venue or the main speaker. Most data transmission then needs to pass through this central node, so a constraint function can be constructed to ensure that the optimal paths between the central nodes of the central cluster, as obtained by the routing calculation, all pass through this node.
  • the execution frequency of route calculation may be different from the frequency of cluster planning in S107.
  • the frequency of route calculation is typically higher than that of cluster planning; for example, it can be performed every 1-10 minutes.
  • there is no fixed execution order between S107 and S109; S107 may be executed before S109, or after S109.
  • the above shows how cluster planning and route calculation are performed for a specific service type or service requirement, so that a network dedicated to a specific service can transmit data along the optimal paths obtained.
  • for a network that provides multiple services, however, the above steps must first be repeated to calculate the optimal routes for each service type.
  • then, when a service runs, the optimal route for its service type is selected, so that one network can serve multiple types of services.
  • the network scheduling node 200 performs scheduling according to the calculated routing path.
  • after the optimal route or multiple routing paths have been calculated, the network scheduling node 200 delivers the routing optimization strategy or the routing paths to each node, providing a routing basis for data transmission between nodes.
  • by collecting multiple network operating parameters, the network scheduling method 100 achieves real-time optimal scheduling of the network topology and routing paths based on the service type, effectively avoiding the degradation of user experience caused by node failures or fluctuations in node parameters. Furthermore, by setting different objective functions, specific business needs can be met in a targeted manner, for example guaranteeing low delay and low back-to-source cost. In addition, delivering the calculated routing optimization strategy to each node avoids manually adjusting routing paths by experience; it ensures the rationality of the routing paths between nodes while reducing operation and maintenance time and cost.
  • the present application also provides a network scheduling node 200 , as shown in FIG. 7 , including a communication module 202 , a storage module 204 and a processing module 206 .
  • the communication module 202 is configured to obtain, in S101, the service type input by the user through the configuration interface, and to obtain the service characteristic parameters in S103. In S105, the multiple network operating parameters of the network are also acquired by the communication module 202. The communication module 202 is further configured to deliver the cluster planning scheme to the nodes in S109; likewise, the delivery of the optimal routing paths to the nodes in S113 is performed by the communication module 202.
  • the communication module 202 is further configured to acquire the constraints input by the user through another configuration interface in S111.
  • the storage module 204 is used to store the service type obtained in S101, and is also used to store the service characteristic parameters obtained in S103.
  • the multiple network operating parameters and historical network operating parameters acquired in S105 will also be stored in the storage module 204 .
  • the storage module 204 is further configured to store the cluster planning solution obtained in S107 and the routing path obtained in S111.
  • the processing module 206 is configured to execute cluster planning in S107 according to service requirement parameters and network operation parameters, and to perform route calculation in S111 according to service requirement parameters, service characteristic parameters and network operation parameters. Specifically, in S111, the operations of determining a routing path between two nodes in at least one cluster and determining a routing path between a first node of a first cluster and a second node of a second cluster are performed by processing Module 206 is executed.
  • the processing module 206 also performs the cluster planning operation according to the optimization target, multiple network operating parameters and historical network operating parameters.
  • the present application also provides a network scheduling node 400 .
  • the network scheduling node includes: a bus 402 , a processor 404 , a memory 406 and a communication interface 408 .
  • the processor 404 , the memory 406 and the communication interface 408 communicate through the bus 402 .
  • the network scheduling node 400 may be a server or a terminal device. It should be understood that the present application does not limit the number of processors and memories in the network scheduling node 400 .
  • the bus 402 may be a peripheral component interconnect standard (peripheral component interconnect, PCI) bus or an extended industry standard architecture (extended industry standard architecture, EISA) bus or the like.
  • the bus can be divided into address bus, data bus, control bus and so on. For ease of representation, only one line is used in FIG. 8 , but it does not mean that there is only one bus or one type of bus.
  • the bus 402 may include pathways for communicating information between the components of the network scheduling node 400 (e.g., the memory 406, the processor 404, the communication interface 408).
  • the processor 404 may include any one or more of processors such as a central processing unit (CPU), a graphics processing unit (GPU), a microprocessor (MP), or a digital signal processor (DSP).
  • the memory 406 may include a volatile memory (volatile memory), such as a random access memory (random access memory, RAM).
  • the memory 406 may also include non-volatile memory, such as read-only memory (ROM), flash memory, a hard disk drive (HDD), or a solid state drive (SSD).
  • Executable program codes are stored in the memory 406 , and the processor 404 executes the executable program codes to implement the foregoing network scheduling method 100 .
  • the memory 406 stores instructions used by the network scheduling system for executing the network scheduling method 100 .
  • the communication interface 408 uses transceiver modules such as, but not limited to, network interface cards and transceivers to implement communication between the network scheduling node 400 and other devices or communication networks.
  • the embodiment of the present application also provides a network scheduling node cluster.
  • the network scheduling node cluster includes at least one network scheduling node 400.
  • the network scheduling nodes included in the network scheduling node cluster may all be terminal devices, or all be cloud servers, or partly be cloud servers and partly be terminal devices.
  • the memory 406 in one or more network scheduling nodes 400 in the network scheduling node cluster may store the same instructions used by the network scheduling node 200 to execute the network scheduling method 100.
  • one or more network scheduling nodes 400 in the network scheduling node cluster may also be used to execute some instructions of the network scheduling method 100 .
  • a combination of one or more network scheduling nodes 400 can jointly execute the instructions of the network scheduling node 200 for executing the network scheduling method 100 .
  • memories 406 in different network scheduling nodes 400 in the network scheduling node cluster may store different instructions for executing some functions of the network scheduling method 100 .
  • FIG. 10 shows a possible implementation.
  • two network scheduling nodes 400A and 400B are connected through a communication interface 408 .
  • the memory in the network scheduling node 400A stores instructions for performing the functions of the communication module 202 and the processing module 206 .
  • the memory in the network scheduling node 400B stores instructions for executing the functions of the storage unit 204 .
  • the memories 406 of the network scheduling nodes 400A and 400B jointly store the instructions used by the network scheduling node 200 to execute the network scheduling method 100.
  • the connection mode between the network scheduling nodes shown in FIG. 10 reflects the fact that the network scheduling method 100 provided by this application needs to store a large amount of network operating parameters; the storage function is therefore assigned to the network scheduling node 400B.
  • the function of the network scheduling node 400A shown in FIG. 10 may also be completed by multiple network scheduling nodes 400 .
  • the function of the network scheduling node 400B can also be completed by multiple network scheduling nodes 400 .
  • one or more network scheduling nodes among the network scheduling nodes may be connected through a network.
  • the network may be a wide area network or a local area network or the like.
  • Figure 11 shows another possible implementation. As shown in FIG. 11, two network scheduling nodes 400C and 400D are connected through a network; specifically, each network scheduling node connects to that network through its communication interface.
  • the memory 406 in the network scheduling node 400C stores instructions for executing the communication module 202 .
  • the memory 406 in the network scheduling node 400D stores instructions for executing the storage module 204 and the processing module 206 .
  • the connection mode between the network scheduling nodes shown in FIG. 11 reflects the fact that the network scheduling method 100 provided by this application requires large-scale storage of network operating parameters as well as cluster planning and routing-path calculation; the functions implemented by the processing module 206 and the storage module 204 are therefore assigned to the network scheduling node 400D.
  • the functions of the network scheduling node 400C shown in FIG. 11 may also be completed by multiple network scheduling nodes 400 .
  • the function of the network scheduling node 400D can also be completed by multiple network scheduling nodes 400 .
  • the present application also provides a network scheduling system 500, and the network scheduling system 500 includes the network scheduling node 200 and network nodes.
  • the functions of the network scheduling node 200 are as described above, so details are not repeated here.
  • the network nodes correspond to the edge nodes and central nodes in the aforementioned network.
  • the function of the network scheduling node 200 may be implemented by one computing device or a cluster composed of multiple computing devices.
  • the network scheduling system 500 may include some edge nodes and/or central nodes in the network.
  • the embodiment of the present application also provides a computer-readable storage medium.
  • the computer-readable storage medium may be any usable medium that a network scheduling node can store, or a data storage device such as a data center containing one or more usable media.
  • the available media may be magnetic media (eg, floppy disk, hard disk, magnetic tape), optical media (eg, DVD), or semiconductor media (eg, solid state hard disk), etc.
  • the computer-readable storage medium includes instructions that instruct the network scheduling node to execute the network scheduling method 100 described above for the network scheduling node 200.
  • the embodiment of the present application also provides a computer program product including instructions.
  • the computer program product may be software or a program product containing instructions, capable of running on a network scheduling node or being stored in any usable medium.
  • when the computer program product runs on at least one computer device, the at least one computer device is caused to execute the network scheduling method 100 described above.


Abstract

This application provides a network scheduling method. The method includes: obtaining service requirement parameters of a service; determining an optimization objective on the network according to the service requirement parameters; obtaining multiple network operating parameters of the network, the network operating parameters including communication parameters between two nodes in the network; and then, according to the optimization objective and the multiple network operating parameters, dividing at least one cluster from the network, each cluster including at least one node of the network. By automatically partitioning the network according to network operating parameters, the method avoids optimizing the network architecture manually by experience and improves the efficiency of network architecture optimization.

Description

Network scheduling method, system and device
Technical field
This application relates to the field of communications, and in particular to a network scheduling method, system and device.
Background
Cloud vendors typically use a tree network structure when providing cloud video services. That is, the network includes a certain number of central nodes and edge nodes, with edge nodes outnumbering central nodes. Users connect to edge nodes over the network to access the cloud video service. Because the edge nodes are numerous and all of them must frequently pull streams from the central nodes to obtain video data for users, the load on the central nodes of a tree network is heavy, which easily causes the servers hosting them to crash. In addition, adjustments to the network structure must be made manually, based on experience.
Therefore, how to optimize the network structure for cloud video services has become an urgent problem.
Summary
This application provides a network scheduling method that can optimize the network structure in real time.
A first aspect of this application provides a network scheduling method, including: obtaining service requirement parameters of a service; determining an optimization objective on the network according to the service requirement parameters; obtaining the architecture of the network and multiple network operating parameters of the network, the network operating parameters including communication parameters between two nodes in the network and operating parameters of the nodes; and, according to the optimization objective and the multiple network operating parameters, dividing at least one cluster from the network, each cluster including at least one node of the network.
By obtaining network operating parameters and optimizing the network architecture in real time according to the optimization objective, the network is automatically divided into at least one cluster, each containing multiple nodes. This avoids optimizing the network architecture manually by experience and effectively improves the efficiency of network architecture optimization.
In some possible designs, the method further includes: determining a routing path between two nodes within a cluster. After the network is divided into at least one cluster, the routing paths between nodes in the same cluster are optimized, further optimizing the network architecture.
In some possible designs, the method further includes: determining a routing path between a first node of a first cluster and a second node of a second cluster. After the network is divided into at least one cluster, the routing paths between nodes of different clusters are optimized, further optimizing the network architecture.
In some possible designs, the service requirement parameters include one or more of the following: delay, fluency, definition.
In some possible designs, the method further includes: providing a first configuration interface used to obtain the service requirement parameters input by a user. By providing a configuration interface and obtaining the user's service requirement parameters, clusters can be divided precisely according to the user's needs, ensuring that the network optimization is targeted.
In some possible designs, the communication parameters include one or more of the following: delay, packet loss rate, jitter.
In some possible designs, the method further includes: obtaining multiple historical network operating parameters of the network; the optimization objective, the multiple network operating parameters and the historical network operating parameters are used to divide the at least one cluster from the network. Obtaining the historical parameters and using them to divide the at least one cluster provides a large amount of data for solving similar problems and can further improve the accuracy of the optimization.
In some possible designs, the method further includes: providing a second configuration interface used to obtain constraints input by the user; the optimization objective, the constraints and the multiple network operating parameters are used to divide the at least one cluster from the network. Providing another configuration interface to obtain the user's constraints allows clusters to be divided more closely to the user's needs and constraints, ensuring that the network optimization is reasonable.
A second aspect of this application provides a network scheduling node, including a communication module and a processing module:
the communication module is configured to obtain service requirement parameters of a service;
the processing module is configured to determine an optimization objective on the network according to the service requirement parameters;
the communication module is further configured to obtain multiple network operating parameters of the network, the network operating parameters including communication parameters between two nodes in the network; according to the optimization objective and the multiple network operating parameters, at least one cluster is divided from the network, each cluster including at least one node of the network.
In some possible designs, the processing module is further configured to determine a routing path between two nodes within a cluster.
In some possible designs, the processing module is further configured to determine a routing path between a first node of a first cluster and a second node of a second cluster.
In some possible designs, the service requirement parameters include one or more of the following: delay, fluency, definition.
In some possible designs, the communication module is further configured to provide a first configuration interface used to obtain the service requirement parameters input by a user.
In some possible designs, the communication parameters include one or more of the following: delay, packet loss rate, jitter.
In some possible designs, the communication module is further configured to obtain multiple historical network operating parameters of the network; the optimization objective, the multiple network operating parameters and the historical network operating parameters are used to divide the at least one cluster from the network.
In some possible designs, the communication module is further configured to provide a second configuration interface used to obtain constraints input by the user; the optimization objective, the constraints and the multiple network operating parameters are used to divide the at least one cluster from the network.
A third aspect of this application provides a network scheduling system, including a network scheduling node and network nodes, the network scheduling node being configured to perform the method provided in the first aspect.
A fourth aspect of this application provides a network scheduling node, including a processor and a memory; the processor is configured to execute instructions in the memory so that the network scheduling node performs the method provided in the first aspect or any possible design of the first aspect.
A fifth aspect of this application provides a computer program product containing instructions that, when run by a cluster of computer devices, cause the cluster of computer devices to perform the method provided in the first aspect or any possible design of the first aspect.
A sixth aspect of this application provides a computer-readable storage medium, including computer program instructions for performing the method provided in the first aspect or any possible design of the first aspect.
Brief description of the drawings
To explain the technical methods of the embodiments of this application more clearly, the drawings required by the embodiments are briefly introduced below.
Fig. 1 is a schematic diagram of a traditional tree network structure involved in this application;
Fig. 2 is an architecture diagram of network scheduling involved in this application;
Fig. 3 is a flowchart of a network scheduling method involved in this application;
Fig. 4 is a schematic diagram of an interactive interface involved in this application;
Fig. 5(a) is a schematic diagram of cluster division involved in this application;
Fig. 5(b) is an architecture diagram of a network involved in this application;
Fig. 5(c) is an architecture diagram of another network involved in this application;
Fig. 6(a) is an architecture diagram of an edge cluster involved in this application;
Fig. 6(b) is an architecture diagram of another edge cluster involved in this application;
Fig. 7 is a schematic diagram of a network scheduling node involved in this application;
Fig. 8 is a schematic diagram of a network scheduling node involved in this application;
Fig. 9 is a schematic diagram of a network scheduling node cluster involved in this application;
Fig. 10 is a schematic diagram of another network scheduling node cluster involved in this application;
Fig. 11 is a schematic diagram of another network scheduling node cluster involved in this application.
Detailed description
The terms "first" and "second" in the embodiments of this application are used for descriptive purposes only and should not be understood as indicating or implying relative importance or implicitly specifying the number of the indicated technical features. A feature qualified by "first" or "second" may therefore explicitly or implicitly include one or more such features.
To facilitate understanding of the embodiments of this application, some of the terms involved are first explained.
Content distribution network (CDN): a system of interconnected data processing devices on the Internet that cooperate to deliver content (especially large volumes of media content) transparently to end users. A CDN is a new type of network content service system built on Internet Protocol (IP) networks, which distributes and serves content according to the efficiency, quality and ordering requirements of content access and applications.
Tree structure: a network structure in which n (n >= 1) finite nodes form a hierarchical set. A tree network usually includes a certain number of source nodes, central nodes and edge nodes, in increasing numbers. Source nodes are responsible for recording and transcoding data (for example, media streams); central nodes usually have large bandwidth and stable networks and handle cross-region scheduling and forwarding of data; edge nodes sit close to the user side and provide nearby access for users.
Data retrieval (back-to-source): when the data requested by a user is not hit locally in the network (the content was not pre-injected, has not yet been cached locally, or has expired), the network must first fetch the content from a source node before serving the user.
Pull stream: the process by which a user-side device or an edge node pulls data from another server.
Figure 1 shows a traditional tree network structure that includes at least two central nodes, C1 and C2, and four edge nodes, E1, E2, E3 and E4. User H is a streamer who accesses the network through E1. After H starts broadcasting, the uplink video stream is ingested by E1, the edge node closest to H, and E1 then pushes the stream to its corresponding central node C1.
When a viewer requests this streamer, the request first goes to the edge node closest to the viewer. If that edge node already has the streamer's stream, the viewer pulls and watches it directly; if not, the edge node must send a pull request to its corresponding central node.
User 1 and user 2 both access the network through E2, the edge node closest to them. When both access for the first time, E2 does not have streamer H's live stream, so it must pull from C1, the central node corresponding to E2, to obtain the stream.
User 3 accesses the network through the closest edge node E3, and user 4 through the closest edge node E4. When users 3 and 4 also access for the first time, E3 and E4 do not have H's live stream, so each must pull from C2, the central node corresponding to those two edge nodes.
If C2 does not have H's live stream either, C2 must pull from C1 to obtain it. C2 keeps pulling H's live stream from C1 until both user 3 and user 4 stop requesting it.
Note that while C2 keeps pulling H's stream from C1, any further user who accesses E3 or E4 and requests H's stream can obtain it without E3 or E4 performing any additional pull operation.
It can be seen that, under this traditional tree structure, if the edge node a user accesses has not cached the requested stream in advance, it must pull from the central node upon receiving the user's request. As the number of connected users keeps growing, the downlink bandwidth of the edge nodes must also keep growing, and the bandwidth cost rises sharply. In addition, the tree structure is relatively fixed: data streams can only be requested from the next node up, so the network structure cannot be optimized according to network operating parameters (such as bandwidth or delay).
Therefore, to address the above problems, this application proposes a network scheduling method 100, which collects the network's operating parameters and optimizes and adjusts the network topology and routing paths based on the service type.
Figure 2 provides an architecture diagram for network scheduling. The architecture includes at least a network scheduling node 200, two central nodes C1 and C2, four edge nodes E1, E2, E3 and E4, and several users.
A central node or an edge node is usually one or more nodes (such as a server or server cluster) that perform one or more of the following functions: forwarding data read/write requests, sending and receiving data, and encoding/decoding data.
The difference is that edge nodes also transfer data to devices in the user layer. The relationship between central nodes and edge nodes is usually one-to-many; that is, multiple edge nodes communicate with the same central node. Central nodes therefore have larger bandwidth, especially downlink bandwidth, than edge nodes. Because they must also respond to data requests from multiple edge sites, their computing and storage capabilities are stronger as well; for example, a central node contains more CPUs or more storage space.
Optionally, a central node or edge node may also be a virtual machine or a container.
The user layer contains the terminal devices of multiple users. A terminal device is a device with one or more of data transmission, data processing and display capabilities, such as a mobile phone, television, tablet, laptop or smart speaker.
The network scheduling node 200 collects multiple network operating parameters and adjusts and schedules the network topology and routing paths according to them. The network operating parameters have two parts. One part consists of the operating parameters of each node, such as bandwidth (uplink and downlink), central processing unit (CPU) utilization and memory occupancy. The other part consists of communication parameters between nodes, such as inter-node delay, packet loss rate and jitter.
Optionally, a node can obtain its network operating parameters with its own detection facilities and then send them to the network scheduling node 200 by data transmission.
Optionally, the modules of the network scheduling node 200 that collect the parameters can be distributed arbitrarily among the nodes; for example, a parameter collection module can be added to every node, or only to some nodes.
Optionally, the module of the network scheduling node 200 that adjusts and schedules the topology and routing paths according to the parameters can be deployed as a cloud service; for example, after the parameters of multiple nodes are obtained, a remote server or server cluster performs the adjustment and scheduling, and the servers of the cluster need not be deployed in the same machine room.
Figure 3 shows a flowchart of a network scheduling method 100.
S101: the network scheduling node 200 obtains the type of service running on the network.
Different types of services have different characteristics. For example, among online-education video services, basic academic courses usually require low delay and high fluency, while art classes additionally have relatively high picture-quality requirements. Role-playing online entertainment games (such as script-based deduction games) do not demand high picture quality but do have high requirements on audio fluency.
In summary, the network scheduling node 200 receives service-type indication information sent by a user, which indicates the type of service running on the network. The node 200 then determines the service requirement parameters for each service type. Service requirement parameters include low delay, high fluency, high definition and high stability, where high fluency requires a low packet loss rate and low jitter, and high definition requires large bandwidth and low bandwidth occupancy.
Optionally, the service-type indication information may also contain the service requirement parameters of each type of service.
Optionally, a user's service type may also be derived by the network scheduling node 200 from the user's history.
Note that one network may run the services of one or more users. That is, part of the network may be provided to a single user as a dedicated network, to guarantee requirements such as high stability of the service; and part of the network may be shared by multiple users when the bandwidth, network conditions and delay all meet the requirements. Whether one user or multiple users occupy a network, the corresponding service types may be one or several.
Figure 4 provides an interactive interface for setting the service type and service requirement parameters. The interface includes a first-level service type selection control 301, a second-level service type selection control 302 and a service-requirement-parameter setting control 303. First-level service types include live video, online-education video, conference video, and so on; second-level types are subcategories of the first level. For example, the second-level services of live video include game streaming and song streaming; those of conference video include government meetings and technical meetings; those of online-education video include large classes, small classes and art classes.
Optionally, the user can click the first-level control 301 to expand its second-level menu and then click the second-level control 302 to select a service type. The user may select one or more service types as needed.
Further, after selecting any second-level service type, the user can set the service requirement parameters of that type via the control 303. The service requirement parameters include one or more of delay, fluency and definition.
Optionally, each parameter offers three levels for the user to choose from: high, medium and low. Taking delay as an example, the three levels correspond to three delay ranges.
Optionally, the user may instead set a service requirement parameter by entering a specific value or range of values.
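As an illustrative sketch of how the interface of Figure 4 might map a selection to requirement parameters, the snippet below resolves a chosen second-level service type to high/medium/low levels and a concrete delay range. All service names and numeric ranges here are hypothetical, not values from this application:

```python
# Map requirement levels to concrete delay ranges, and second-level service
# types to requirement-level presets. Names and ranges are illustrative only.
LEVEL_TO_DELAY_MS = {"high": (0, 300), "medium": (300, 800), "low": (800, 2000)}

SERVICE_PRESETS = {
    "game_streaming":  {"delay": "high", "fluency": "high", "definition": "medium"},
    "art_class":       {"delay": "high", "fluency": "high", "definition": "high"},
    "tech_conference": {"delay": "medium", "fluency": "high", "definition": "medium"},
}

def requirement_parameters(service_type):
    """Resolve a selected service type to its requirement parameters."""
    preset = dict(SERVICE_PRESETS[service_type])
    preset["delay_range_ms"] = LEVEL_TO_DELAY_MS[preset["delay"]]
    return preset

print(requirement_parameters("art_class")["delay_range_ms"])  # (0, 300)
```

A user-entered numeric range, as mentioned above, could simply override the `delay_range_ms` entry of the returned preset.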
S103: the network scheduling node 200 obtains the service characteristic parameters of the services running on the network.
Based on the service type obtained in S101, the service characteristic parameters of each type can be determined. They include the geographic distribution of the service, the node type, the node distribution and the node running-time distribution.
Geographic distribution refers to the regions covered by the service and the distribution within them. For example, in live video streaming, current services cover most countries worldwide while streamers concentrate in a few popular cities; for video conferencing, the location of the main venue is relatively fixed while the locations of other online participants are uncertain. During cluster planning and route calculation, the geographic distribution can be added to the optimization as a constraint to improve the stability of the service's core functions; for example, cities with many streamers should be allocated larger uplink bandwidth, and a conference's main venue needs backup nodes or routing paths.
Node type refers to the type of node used to provide the service: servers, virtual machines, containers, and so on, and further, different specifications of each. Taking servers as an example, servers of different specifications contain different processor, storage and network resources. Different services require different node types.
Node distribution refers to the distribution of edge nodes. Since edge nodes provide user access, their distribution reflects the distribution of users, which in turn affects the distribution of the corresponding central nodes. For example, in a live-streaming service, the node distribution can be determined from the historical geographic distribution of viewers, and cluster planning can then follow the node distribution.
Running-time distribution refers to the distribution of durations during which users access edge nodes and send data read/write requests. After a user accesses an edge node, the node receives the user's requests and performs pull-stream or back-to-source operations accordingly. Different users access at different times, so some edge nodes may be idle. Planning nodes with similar running times into the same cluster improves the utilization of computing resources in the nodes.
S105: the network scheduling node 200 obtains multiple network operating parameters.
The node 200 can obtain them through collection modules deployed in the nodes. The network operating parameters have two parts: the operating parameters of each node, such as bandwidth (uplink and downlink), bandwidth utilization, CPU utilization and memory occupancy; and inter-node parameters, such as the delay, packet loss rate and jitter between nodes.
Delay is the time required for a message or packet to travel from one node to another. It comprises transmission delay, propagation delay, processing delay and queuing delay; that is, delay = transmission delay + propagation delay + processing delay + queuing delay. Transmission and propagation delay are generally the main considerations: for long messages, transmission delay is the dominant factor; for short messages, propagation delay dominates.
For ultra-low-latency live (LLL) services, the delay is usually required to be below 800 milliseconds (ms). Real-time communication (RTC) services are stricter, requiring a delay below 300 ms; for example, a live-streaming service with a real-time voice call function is an RTC service.
Bandwidth is the amount of data that can pass through a link per unit time, measured in bits per second (bps).
Note that the collection period of the network operating parameters can be set as needed.
Further, after the above network operating parameters and service characteristic parameters have been collected, both can be stored in the network scheduling node 200.
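The delay decomposition above can be evaluated directly. The component values in this sketch are hypothetical, chosen only to illustrate checking a measured delay against the LLL and RTC thresholds stated in the text:

```python
def total_delay_ms(transmission, propagation, processing, queuing):
    # delay = transmission delay + propagation delay + processing delay
    #         + queuing delay (all in milliseconds)
    return transmission + propagation + processing + queuing

LLL_LIMIT_MS = 800  # ultra-low-latency live services
RTC_LIMIT_MS = 300  # real-time communication services

# Hypothetical per-component measurements for one inter-node link.
d = total_delay_ms(transmission=40, propagation=120, processing=15, queuing=25)
print(d, d <= RTC_LIMIT_MS, d <= LLL_LIMIT_MS)  # 200 True True
```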
S107: the network scheduling node 200 performs cluster planning according to the service requirement parameters and the network operating parameters.
First, there are two kinds of clusters: central clusters and edge clusters. A central cluster includes multiple central nodes, while an edge cluster includes at least one central node and at least one edge node. All nodes within a cluster can communicate directly; that is, any two edge nodes in an edge cluster can communicate without going through a central node.
Second, cluster planning means planning the number of clusters in the network and the number of nodes in each cluster according to at least one of the service requirement parameters and the multiple network operating parameters.
Note that the central cluster and the edge clusters intersect: a central node of the central cluster is also part of some edge cluster.
Figure 5(a) provides a network containing two central nodes and four edge nodes. The central cluster includes the central nodes C1 and C2; edge cluster 1 includes the central node C1 and the edge nodes E1 and E2, and edge cluster 2 includes the central node C2 and the edge nodes E3 and E4.
Nodes within the same cluster can all communicate directly, but before they do, the communication channel between each pair of nodes in the cluster must be checked. For example, E1 and E2 are allowed to communicate only when parameters such as the delay, packet loss rate and jitter between them meet the communication requirements.
Optionally, nodes within the same cluster are connected by wired cables.
Based on the service requirement parameters determined from the service type, such as low delay, high fluency and high definition, the number and distribution of nodes in each cluster can be determined.
Figure 5(b) provides a network containing multiple nodes. Take the basic academic courses mentioned above as an example: because teachers and students interact in real time, such courses usually require a low delay (less than 300 ms). Suppose the teacher accesses edge node E1 and the students access edge nodes E2, E3 and E4, and suppose the uplink and downlink delays between nodes are equal. As shown in the figure, in edge cluster 1 the delays from E1 and E2 to the central node C1 are both 50 ms and the delay between E1 and E2 is 30 ms; in edge cluster 2 the delays from E3 and E4 to the central node C2 are both 50 ms and the delay between E3 and E4 is 30 ms; the delay between the two central nodes C1 and C2 is 80 ms. In the network of Figure 5(b), ignoring the delay of users accessing the edge nodes, the communication delay between any two edge nodes via the central nodes is below the required 300 ms; that is, the teacher and students can interact smoothly.
When the communication delay between E2 and C1 fluctuates sharply, as shown in Figure 5(c), the delay between C1 and E2 jumps from 50 ms to 500 ms, and the delay between E1 and E2 from 30 ms to 350 ms. In this situation the delay for E2 to obtain E1's data stream via C1, or directly, exceeds the required 300 ms. However, the delay between E2 and the central node C2 is only 50 ms; that is, the delay for E2 to obtain E1's stream via C2 and then C1 is 180 ms, below the required 300 ms. Therefore, E2 can be removed from edge cluster 1 and added to edge cluster 2, keeping the users connected through E2 working normally.
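The Figure 5(c) re-planning step can be sketched as a small rule: if an edge node's delay to its cluster's central node exceeds the delay budget, move it to the cluster whose central node it can reach fastest. This is a simplified illustration, not the patent's actual planning algorithm; the delay table follows the example values above, and the 300 ms budget is the stated course requirement:

```python
DELAY_BUDGET_MS = 300

clusters = {"C1": ["E1", "E2"], "C2": ["E3", "E4"]}
delay_to_center = {  # measured edge-to-central-node delays in ms
    ("E1", "C1"): 50, ("E2", "C1"): 500,   # E2's link to C1 has degraded
    ("E2", "C2"): 50, ("E3", "C2"): 50, ("E4", "C2"): 50,
}

def replan(clusters, delay_to_center, budget):
    """Move nodes whose delay to their central node exceeds the budget."""
    for center, members in list(clusters.items()):
        for node in list(members):
            if delay_to_center.get((node, center), 0) > budget:
                # pick the reachable central node with the lowest delay
                best = min((c for c in clusters if (node, c) in delay_to_center),
                           key=lambda c: delay_to_center[(node, c)])
                if best != center:
                    members.remove(node)
                    clusters[best].append(node)
    return clusters

print(replan(clusters, delay_to_center, DELAY_BUDGET_MS))
# {'C1': ['E1'], 'C2': ['E3', 'E4', 'E2']}
```

E2 is moved from C1's edge cluster to C2's, matching the adjustment described for Figure 5(c).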
Optionally, cluster planning may also be performed with neural-network algorithms such as deep learning, based on the cluster plans recorded in historical data. The historical data includes historical network operating parameters, delay history, site failure history, delay distribution data, and so on.
The above describes cluster planning constrained by low delay; further, constraints can be formed from one or more of the requirements on fluency, definition, stability, and so on.
Optionally, multiple central nodes can be provided in an edge cluster as needed. For example, in business scenarios with high stability requirements, such as large conferences, a backup central node can be placed in the edge cluster so that when one central node fails, the backup central node can provide the service.
The period of cluster planning can be decided as needed; for example, it can be performed once an hour, or whenever the network scheduling node 200 detects abnormal parameters (such as large delay fluctuations).
S109: the network scheduling node 200 performs scheduling according to the cluster plan.
After cluster planning, when it is determined that the number of clusters or the number of nodes in any cluster has changed, the node 200 delivers the cluster plan to each node and partitions the nodes of the network according to the plan.
S111: the network scheduling node 200 performs route calculation according to the service requirement parameters, the service characteristic parameters and the network operating parameters.
After the number of clusters and the number of nodes per cluster have been determined in S107, the routing paths between all nodes within each cluster can be calculated. Specifically, based on parameters such as the bandwidth of each node in the same cluster and the delay, packet loss rate and jitter between the nodes, an optimization method can yield the optimal routing path or a priority ordering of multiple routing paths.
Figure 6(a) shows an edge cluster containing the central node C2 and the edge nodes E2, E3 and E4. Taking low delay as the optimization objective, a calculation of the optimal route is described below.
As shown in Figure 6(a), the delay between E2 and E4 is 80 ms, while the delay between E2 and E3 is 20 ms and that between E3 and E4 is 30 ms. That is, when delay is considered, transmitting data from E2 over the routing path E2-E3-E4 takes 50 ms, less than transmitting over the direct path E2-E4.
Similarly, the optimal routing path between the edge node E3 and the central node C2 can be calculated. By comparing the three routing paths between E3 and C2 (E3-C2, E3-E2-C2, E3-E4-C2), it can be determined that the path with the shortest delay is E3-E2-C2, followed by E3-E4-C2 and finally E3-C2. That is, the routing paths between nodes can be ranked, and the ranking provides a backup in case the optimal path between two nodes fails. For example, when E2 fails and E3 can no longer obtain data from C2 via E2, it may choose to obtain the data from C2 via the edge node E4.
Based on the above method, the optimal routing path, or the priority ordering of routing paths, between the nodes of this edge cluster can therefore be determined. Figure 6(b) shows the minimum-delay routing paths between the nodes of the cluster.
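The path comparison and ranking just described can be sketched by enumerating all loop-free paths in the small Fig. 6(a) cluster and sorting them by total delay; the first entry is the optimal route and the rest form the backup priority list. The E2-E3, E3-E4 and E2-E4 delays come from the text; the link delays to C2 are hypothetical, chosen only so the ordering E3-E2-C2 < E3-E4-C2 < E3-C2 stated above holds:

```python
from itertools import permutations

DELAYS = {  # link delays in ms; the C2 links are assumed values
    ("E2", "E3"): 20, ("E3", "E4"): 30, ("E2", "E4"): 80,
    ("E2", "C2"): 30, ("E4", "C2"): 50, ("E3", "C2"): 110,
}

def link(a, b):
    return DELAYS.get((a, b)) or DELAYS.get((b, a))

def ranked_paths(nodes, src, dst):
    """All loop-free paths from src to dst, sorted by total delay."""
    results = []
    others = [n for n in nodes if n not in (src, dst)]
    for r in range(len(others) + 1):
        for mid in permutations(others, r):
            path = [src, *mid, dst]
            hops = list(zip(path, path[1:]))
            if all(link(a, b) is not None for a, b in hops):
                results.append((sum(link(a, b) for a, b in hops), path))
    return sorted(results)

for delay, path in ranked_paths(["E2", "E3", "E4", "C2"], "E3", "C2"):
    print(delay, "-".join(path))
```

With these values the three shortest entries come out in the order E3-E2-C2, E3-E4-C2, E3-C2, matching the ranking in the text; exhaustive enumeration is only feasible for small clusters, and a shortest-path algorithm would be used at scale.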
Further, route calculation can also be performed according to multiple service requirement parameters. Specifically, an objective function can be constructed from two or more parameters such as low delay, high fluency, high definition and low back-to-source cost, and the optimal routing path obtained with an optimization method. The back-to-source cost is the bandwidth cost incurred when an edge node pulls streams from a central node.
Optionally, when the objective function is built from multiple parameters, different weights can be assigned to the different parameters, and the weight of each parameter can be determined as needed (for example, according to the service type).
Optionally, during the above optimization, a constraint function can also be constructed from the service characteristic parameters. Specifically, different objective functions can be formulated for different clusters according to the geographic distribution of the service. For example, in a live video service the streamer has high uplink-bandwidth requirements, so when routing is calculated for the edge cluster containing the streamer's access node, the objective function can be built mainly from parameters such as uplink bandwidth and/or bandwidth occupancy.
Likewise, different objective functions can be formulated for different clusters according to the node distribution. For example, the objective function of an edge cluster accessed by viewers of the live stream can be built from parameters such as downlink bandwidth and delay.
Optionally, a constraint function can also be built from the running-time distribution. For example, in a live video service the access periods of some users are relatively fixed and longer than those of others, so the edge nodes serving such users also run longer; when determining which edge node such users should access, nodes whose idle duration meets the users' requirements can be selected.
Optionally, a constraint function can also be built from the service requirement parameters. For example, the total delay between nodes can be constrained not to exceed a fixed value (for example, 400 ms); that is, with the delay guaranteed, the objective function is built around the lowest back-to-source cost and/or the lowest bandwidth occupancy. In this implementation, the delay of some inter-node routing paths may not be the lowest, but the objective function is maximized subject to the constraint function.
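A minimal sketch of this weighted-objective-with-constraint idea: among candidate paths whose total delay stays within the 400 ms budget, pick the one minimizing a weighted sum of normalized metrics. The weights, the candidate paths and all metric values are hypothetical illustrations, not values from this application; lower score means better:

```python
# Per-parameter weights (hypothetical, would depend on the service type).
WEIGHTS = {"delay": 0.5, "packet_loss": 0.2, "backsource_cost": 0.3}
MAX_DELAY_MS = 400  # constraint function: total delay must not exceed this

candidates = {
    # delay_ms is the raw total delay; the other metrics are normalized [0, 1]
    "E3-E2-C2": {"delay_ms": 180, "delay": 0.2, "packet_loss": 0.5, "backsource_cost": 0.6},
    "E3-E4-C2": {"delay_ms": 320, "delay": 0.5, "packet_loss": 0.2, "backsource_cost": 0.2},
    "E3-C2":    {"delay_ms": 450, "delay": 0.9, "packet_loss": 0.1, "backsource_cost": 0.1},
}

def score(metrics):
    """Weighted sum of the normalized metrics; lower is better."""
    return sum(w * metrics[k] for k, w in WEIGHTS.items())

def best_route(candidates, max_delay_ms=MAX_DELAY_MS):
    # Apply the constraint function first, then optimize the objective.
    feasible = {p: m for p, m in candidates.items() if m["delay_ms"] <= max_delay_ms}
    if not feasible:
        return None  # no path satisfies the delay constraint
    return min(feasible, key=lambda p: score(feasible[p]))

print(best_route(candidates))  # the direct path is cheapest but violates the constraint
```

Note how the cheapest path (E3-C2) is excluded by the delay constraint, so the optimizer settles on a path that is not lowest-delay but minimizes the weighted objective among the feasible ones, exactly the trade-off described above.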
Optionally, after the number of clusters and the number of nodes per cluster have been determined in S107, routing paths between nodes of different clusters can also be calculated. For example, in a network containing at least three central nodes, the routing path between two of the central nodes can be calculated; the specific method can follow the intra-cluster route calculation described above.
The above provides some routing calculation strategies between edge clusters; correspondingly, similar route calculations can be performed for central clusters containing three or more central nodes, so as to optimize their routing paths.
For example, in a conference video service the network may contain multiple central nodes, and the nodes in the edge cluster of one of the central nodes may correspond to the main venue or the main speaker. Since most data transmission must pass through that central node, a constraint function can be constructed to ensure that the optimal paths between the central nodes of the central cluster, as obtained by route calculation, all pass through it.
Note that route calculation may run at a different frequency from the cluster planning in S107. Typically, route calculation runs more frequently than cluster planning; for example, it can be performed every 1-10 minutes.
In addition, there is no fixed execution order between S107 and S109; S107 may be executed before S109, or after S109.
The above shows how cluster planning and route calculation are performed for a specific service type or service requirement, so that a network dedicated to a specific service can transmit data along the optimal paths obtained. For a network providing multiple services, however, the above steps must first be repeated to calculate the optimal routes for each service type; then, when a service runs, the optimal route for its type is selected, so that one network can serve multiple types of services.
S113: the network scheduling node 200 performs scheduling according to the calculated routing paths.
After the optimal route or multiple routing paths have been calculated, the node 200 delivers the routing optimization strategy or the routing paths to each node, providing a routing basis for data transmission between the nodes.
Note that there is no fixed order between the execution of the above steps and service access. Specifically, for a network whose supported service types are fixed, some of the steps can be executed before services access the network; for a network supporting many and variable service types, some of the steps can be triggered and executed when a service accesses the network.
By collecting multiple network operating parameters, the network scheduling method 100 provided by this application achieves real-time optimal scheduling of the network topology and routing paths based on the service type, effectively avoiding the degradation of user experience caused by node failures or fluctuations in node parameters. Further, by setting different objective functions, specific business needs can be met in a targeted manner, for example guaranteeing low delay and low back-to-source cost. In addition, delivering the calculated routing optimization strategy to the nodes avoids manual, experience-based adjustment of routing paths, ensuring reasonable inter-node routing while reducing operation and maintenance time and cost.
This application also provides a network scheduling node 200 which, as shown in Figure 7, includes a communication module 202, a storage module 204 and a processing module 206.
The communication module 202 is configured to obtain, in S101, the service type input by the user through the configuration interface, and to obtain the service characteristic parameters in S103. In S105, the multiple network operating parameters of the network are also acquired by the communication module 202. The module is further configured to deliver the cluster plan to the nodes in S109, and the delivery of the optimal routing paths to the nodes in S113 is likewise performed by it.
Optionally, the communication module 202 is further configured to obtain, in S111, the constraints input by the user through another configuration interface.
The storage module 204 is configured to store the service type obtained in S101 and the service characteristic parameters obtained in S103. The multiple network operating parameters and the historical network operating parameters obtained in S105 are also stored in the storage module 204, which further stores the cluster plan obtained in S107 and the routing paths obtained in S111.
The processing module 206 is configured to perform, in S107, cluster planning according to the service requirement parameters and the network operating parameters, and, in S111, route calculation according to the service requirement parameters, the service characteristic parameters and the network operating parameters. Specifically, in S111, the operations of determining a routing path between two nodes within at least one cluster, and of determining a routing path between a first node of a first cluster and a second node of a second cluster, are both performed by the processing module 206.
Optionally, the operation of cluster planning according to the optimization objective, the multiple network operating parameters and the historical network operating parameters is also performed by the processing module 206.
This application also provides a network scheduling node 400. As shown in Figure 8, the node includes a bus 402, a processor 404, a memory 406 and a communication interface 408; the processor 404, memory 406 and communication interface 408 communicate through the bus 402. The node 400 may be a server or a terminal device. It should be understood that this application does not limit the number of processors or memories in the node 400.
The bus 402 may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like. Buses can be divided into address buses, data buses, control buses, and so on. For ease of representation, only one line is drawn in Figure 8, but this does not mean there is only one bus or one type of bus. The bus 402 may include pathways for transferring information between the components of the node 400 (for example, the memory 406, the processor 404 and the communication interface 408).
The processor 404 may include any one or more of processors such as a central processing unit (CPU), a graphics processing unit (GPU), a microprocessor (MP) or a digital signal processor (DSP).
The memory 406 may include volatile memory, such as random access memory (RAM), and may also include non-volatile memory, such as read-only memory (ROM), flash memory, a hard disk drive (HDD) or a solid state drive (SSD). The memory 406 stores executable program code, and the processor 404 executes this code to implement the aforementioned network scheduling method 100. Specifically, the memory 406 stores the instructions used by the network scheduling system to execute the method 100.
The communication interface 408 uses transceiver modules such as, but not limited to, network interface cards and transceivers to implement communication between the node 400 and other devices or communication networks.
An embodiment of this application also provides a network scheduling node cluster. As shown in Figure 9, the cluster includes at least one network scheduling node 400. The nodes in the cluster may all be terminal devices, all be cloud servers, or be partly cloud servers and partly terminal devices.
Under the three deployment modes above, the memory 406 of one or more nodes 400 in the cluster may store the same instructions used by the network scheduling node 200 to execute the network scheduling method 100.
In some possible implementations, one or more nodes 400 in the cluster may also execute only some of the instructions of the method 100. In other words, a combination of one or more nodes 400 can jointly execute the instructions used by the node 200 to perform the method 100.
Note that the memories 406 of different nodes 400 in the cluster may store different instructions, each executing part of the functions of the method 100.
Figure 10 shows one possible implementation. As shown in Figure 10, two nodes 400A and 400B are connected through the communication interface 408. The memory of 400A stores instructions for performing the functions of the communication module 202 and the processing module 206, while the memory of 400B stores instructions for performing the functions of the storage module 204. In other words, the memories 406 of 400A and 400B jointly store the instructions used by the node 200 to execute the method 100.
The connection mode in Figure 10 reflects the fact that the method 100 provided by this application needs to store a large amount of network operating parameters, so the storage function is assigned to the node 400B.
It should be understood that the functions of the node 400A in Figure 10 may also be completed by multiple nodes 400, and likewise for the node 400B.
In some possible implementations, one or more of the network scheduling nodes may be connected through a network, which may be a wide area network, a local area network, or the like. Figure 11 shows such an implementation: two nodes 400C and 400D are connected through a network, each connecting to it via its communication interface. In this implementation, the memory 406 of 400C stores instructions for executing the communication module 202, while the memory 406 of 400D stores instructions for executing the storage module 204 and the processing module 206.
The connection mode in Figure 11 reflects the fact that the method 100 provided by this application requires large-scale storage of network operating parameters together with cluster planning and routing-path calculation, so the functions implemented by the processing module 206 and the storage module 204 are assigned to the node 400D.
It should be understood that the functions of the node 400C in Figure 11 may also be completed by multiple nodes 400, and likewise for the node 400D.
This application also provides a network scheduling system 500, which includes the network scheduling node 200 and network nodes. The functions of the node 200 are as described above and are not repeated here; the network nodes correspond to the edge nodes and central nodes of the aforementioned network.
Note that the functions of the node 200 may be implemented by one computing device or by a cluster of multiple computing devices.
Optionally, the system 500 may include some of the edge nodes and/or central nodes of the network.
An embodiment of this application also provides a computer-readable storage medium. The medium may be any usable medium that a network scheduling node can store, or a data storage device such as a data center containing one or more usable media. The usable media may be magnetic media (for example, floppy disks, hard disks, magnetic tape), optical media (for example, DVDs) or semiconductor media (for example, solid state drives). The medium includes instructions that instruct a network scheduling node to execute the network scheduling method 100 described above for the node 200.
An embodiment of this application also provides a computer program product containing instructions. The product may be software or a program product containing instructions that can run on a network scheduling node or be stored in any usable medium. When the product runs on at least one computer device, the at least one computer device is caused to execute the method 100.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments, or replace some of the technical features with equivalents; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of protection of the technical solutions of the embodiments of the present invention.

Claims (20)

  1. A network scheduling method, characterized in that the method comprises:
    obtaining service requirement parameters of a service;
    determining an optimization objective on a network according to the service requirement parameters;
    obtaining multiple network operating parameters of the network, the network operating parameters comprising communication parameters between two nodes in the network;
    dividing at least one cluster from the network according to the optimization objective and the multiple network operating parameters, each cluster comprising at least one node of the network.
  2. The method according to claim 1, characterized in that the method further comprises:
    determining a routing path between two nodes within the cluster.
  3. The method according to claim 1 or 2, characterized in that the method further comprises:
    determining a routing path between a first node of a first cluster and a second node of a second cluster.
  4. The method according to any one of claims 1 to 3, characterized in that the service requirement parameters comprise one or more of the following:
    delay, fluency, definition.
  5. The method according to any one of claims 1 to 4, characterized in that obtaining the service requirement parameters of the service running on the network comprises:
    providing a first configuration interface, the configuration interface being used to obtain the service requirement parameters input by a user.
  6. The method according to any one of claims 1 to 5, characterized in that the communication parameters comprise one or more of the following:
    delay, packet loss rate, jitter.
  7. The method according to any one of claims 1 to 6, characterized in that before the dividing of at least one cluster from the network according to the optimization objective and the multiple network operating parameters, the method comprises:
    obtaining multiple historical network operating parameters of the network, the optimization objective, the multiple network operating parameters and the historical network operating parameters being used to divide the at least one cluster from the network.
  8. The method according to any one of claims 1 to 6, characterized in that before the dividing of at least one cluster from the network according to the optimization objective and the multiple network operating parameters, the method comprises:
    providing a second configuration interface, the second configuration interface being used to obtain constraints input by a user, the optimization objective, the constraints and the multiple network operating parameters being used to divide the at least one cluster from the network.
  9. A network scheduling node, characterized in that the node comprises:
    a communication module, configured to obtain service requirement parameters of a service;
    a processing module, configured to determine an optimization objective on a network according to the service requirement parameters;
    the communication module being further configured to obtain multiple network operating parameters of the network, the network operating parameters comprising communication parameters between two nodes in the network; at least one cluster being divided from the network according to the optimization objective and the multiple network operating parameters, each cluster comprising at least one node of the network.
  10. The node according to claim 9, characterized in that the processing module is further configured to determine a routing path between two nodes within the cluster.
  11. The node according to claim 9 or 10, characterized in that the processing module is further configured to determine a routing path between a first node of a first cluster and a second node of a second cluster.
  12. The node according to any one of claims 9 to 11, characterized in that the service requirement parameters comprise one or more of the following:
    delay, fluency, definition.
  13. The node according to any one of claims 9 to 12, characterized in that the communication module is further configured to provide a first configuration interface, the configuration interface being used to obtain the service requirement parameters input by a user.
  14. The node according to any one of claims 9 to 13, characterized in that the communication parameters comprise one or more of the following:
    delay, packet loss rate, jitter.
  15. The node according to any one of claims 9 to 14, characterized in that the communication module is further configured to obtain multiple historical network operating parameters of the network, the optimization objective, the multiple network operating parameters and the historical network operating parameters being used to divide the at least one cluster from the network.
  16. The node according to any one of claims 9 to 15, characterized in that the communication module is further configured to provide a second configuration interface, the second configuration interface being used to obtain constraints input by a user, the optimization objective, the constraints and the multiple network operating parameters being used to divide the at least one cluster from the network.
  17. A network scheduling system, characterized in that the system comprises a network scheduling node and network nodes, the network scheduling node being configured to perform the method according to any one of claims 1 to 8.
  18. A network scheduling node, characterized by comprising a processor and a memory;
    the processor being configured to execute instructions in the memory so that the network scheduling node performs the method according to any one of claims 1 to 8.
  19. A computer program product containing instructions, characterized in that when the instructions are run by a cluster of computer devices, the cluster of computer devices is caused to perform the method according to any one of claims 1 to 8.
  20. A computer-readable storage medium, characterized by comprising computer program instructions for performing the method according to any one of claims 1 to 8.
PCT/CN2022/118630 2021-10-20 2022-09-14 一种网络调度方法、系统及设备 WO2023065893A1 (zh)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202111219204 2021-10-20
CN202111219204.0 2021-10-20
CN202111644958.0A CN115996189A (zh) 2021-10-20 2021-12-30 一种网络调度方法、系统及设备
CN202111644958.0 2021-12-30

Publications (1)

Publication Number Publication Date
WO2023065893A1 (zh) 2023-04-27

Family

ID=85993055

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/118630 WO2023065893A1 (zh) 2021-10-20 2022-09-14 Network scheduling method, system and device

Country Status (2)

Country Link
CN (1) CN115996189A (zh)
WO (1) WO2023065893A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050120105A1 (en) * 2003-12-01 2005-06-02 Popescu George V. Method and apparatus to support application and network awareness of collaborative applications using multi-attribute clustering
US20140143407A1 (en) * 2012-11-21 2014-05-22 Telefonaktiebolaget L M Ericsson (Publ) Multi-objective server placement determination
CN106850460A (zh) * 2017-02-10 2017-06-13 北京邮电大学 一种业务流聚合方法及装置
CN109831792A (zh) * 2019-03-11 2019-05-31 中国科学院上海微系统与信息技术研究所 一种基于多目标优化的无线传感器网络拓扑控制方法
CN113032938A (zh) * 2021-03-26 2021-06-25 北京邮电大学 时间敏感流的路由调度方法、装置、电子设备及介质

Also Published As

Publication number Publication date
CN115996189A (zh) 2023-04-21

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 22882504; Country of ref document: EP; Kind code of ref document: A1)
WWE Wipo information: entry into national phase (Ref document number: 2022882504; Country of ref document: EP)
ENP Entry into the national phase (Ref document number: 2022882504; Country of ref document: EP; Effective date: 20240404)