WO2020248166A1 - Traffic scheduling method, device and storage medium - Google Patents

Traffic scheduling method, device and storage medium

Info

Publication number
WO2020248166A1
Authority
WO
WIPO (PCT)
Prior art keywords
bandwidth
module
scheduling
queue
flow queue
Prior art date
Application number
PCT/CN2019/090921
Other languages
English (en)
French (fr)
Inventor
沈国明
汤成
李东川
伊学文
谭幸均
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to CN201980096976.4A (CN113906720B)
Priority to PCT/CN2019/090921 (WO2020248166A1)
Publication of WO2020248166A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/50: Queue scheduling
    • H04L 47/52: Queue scheduling by attributing bandwidth to queues
    • H04L 47/56: Queue scheduling implementing delay-aware scheduling
    • H04L 47/562: Attaching a time tag to queues
    • H04L 47/60: Queue scheduling implementing hierarchical scheduling
    • H04L 47/62: Queue scheduling characterised by scheduling criteria
    • H04L 47/622: Queue service order
    • H04L 47/6225: Fixed service order, e.g. Round Robin
    • H04L 47/623: Weighted service order
    • H04L 47/625: Queue scheduling characterised by scheduling criteria for service slots or service orders
    • H04L 47/6265: Past bandwidth allocation

Definitions

  • the embodiments of the present application relate to communication technologies, and in particular, to a traffic scheduling method, device, and storage medium.
  • A traffic scheduling device is a core component of network equipment that supports quality of service (QoS). As the QoS requirements on network equipment grow, the performance and specifications demanded of traffic scheduling devices rise accordingly, and the services to be supported become more and more complex.
  • Existing traffic scheduling devices are implemented mainly in hardware, with a fixed set of supported scheduling algorithms; different service scenarios are supported by adjusting the configuration parameters of those algorithms.
  • Service scenarios that cannot be supported by adjusting the configuration parameters usually require a chip iteration to be supported, so new service requirements cannot be responded to in a timely manner.
  • the embodiments of the present application provide a traffic scheduling method, device, and storage medium to overcome the limitation that the service scenarios of existing traffic scheduling devices are restricted to the scheduling algorithms supported by the device, and to respond to new service requirements in a timely manner.
  • an embodiment of the present application provides a traffic scheduling device including: a bandwidth pre-allocation module, a scheduler module, and a queue management module.
  • the bandwidth pre-allocation module determines the bandwidth control parameters according to the input bandwidth of the flow queues of the queue management module, the expected output bandwidth of the service configuration, and the configuration parameters of the corresponding scheduling algorithm, and configures the bandwidth control parameters to the scheduler module; the scheduler module performs traffic scheduling on the flow queues therein according to the bandwidth control parameters.
  • because the bandwidth control parameters are determined according to the input bandwidth of the flow queues of the queue management module, the expected output bandwidth of the service configuration, and the configuration parameters of the corresponding scheduling algorithm, they are adjusted in real time according to actual conditions rather than being fixed.
  • the bandwidth pre-allocation module configures the bandwidth control parameters to the scheduler module, which enables flexible configuration of the service scheduling tree in the scheduler module and dynamic control of the output bandwidth of the nodes at each scheduling level in the service scheduling tree, making the maximum complexity of the traffic scheduling device programmable; this overcomes the limitation that the service scenarios are restricted to the scheduling algorithms supported by the traffic scheduling device and allows timely response to new service requirements.
  • the aforementioned bandwidth pre-allocation module includes: a bandwidth measurement sub-module and a bandwidth allocation sub-module.
  • the bandwidth measurement sub-module is used to monitor the input bandwidth and expected output bandwidth of the flow queue of the queue management module, and transmit the input bandwidth and expected output bandwidth of the flow queue to the bandwidth allocation sub-module;
  • the bandwidth allocation sub-module is used to determine the bandwidth control parameters according to the input bandwidth of the flow queue, the expected output bandwidth, and the configuration parameters of the corresponding scheduling algorithm, and to configure the bandwidth control parameters to the scheduler module.
  • the bandwidth measurement sub-module is realized by hardware, and the bandwidth allocation sub-module is realized by software.
  • in this software-hardware cooperative traffic scheduling device architecture, the bandwidth measurement sub-module has a simple structure, so the same logic resources can support higher performance and higher accuracy, while the service scheduling tree is implemented by the bandwidth allocation sub-module through software algorithms; this flexibly supports more complex service scheduling trees and scheduling algorithms, and supporting new service features requires only a software upgrade.
  • the bandwidth measurement sub-module and the bandwidth allocation sub-module communicate with each other through either of the following buses: a PCIE bus or a chip-internal bus.
  • the bandwidth allocation sub-module as described above may be specifically used to: determine the bandwidth control parameters according to the subordination relationships between nodes at each scheduling level in the service scheduling tree, the scheduling algorithms and configuration parameters, and the input bandwidth and expected output bandwidth of the flow queue, and configure the bandwidth control parameters to the scheduler module.
  • alternatively, the bandwidth allocation sub-module as described above may be specifically used to: determine the bandwidth control parameters according to the traffic priority, the subordination relationships between nodes at each scheduling level in the service scheduling tree, the scheduling algorithms and configuration parameters, and the input bandwidth and expected output bandwidth of the flow queue, and configure the bandwidth control parameters to the scheduler module.
  • the bandwidth measurement sub-module as described above may be specifically used to: record the system global time stamps of the enqueue data and dequeue data corresponding to each flow queue, and obtain the input bandwidth and expected output bandwidth of the flow queue according to the system global time stamps, where the actual output bandwidth value is used by the bandwidth allocation sub-module to correct the calculated bandwidth control parameters.
  • the bandwidth allocation sub-module as described above may also be used to: adjust the WRED parameters of the flow queue according to a preset algorithm based on the input bandwidth and expected output bandwidth of the flow queue, and configure the adjusted WRED parameters to the queue management module; accordingly, the queue management module manages enqueueing and dequeueing of data information in the flow queue according to the adjusted WRED parameters. Because the WRED parameters actually configured to the queue management module are adjusted by a software algorithm according to the real-time input bandwidth and expected output bandwidth of the flow queue, the actual buffer allocation can be ignored when configuring the WRED parameters; to a certain extent, the buffer configured for each flow queue can exceed the actually allocatable buffer size, that is, a certain degree of buffer over-allocation is supported.
  • the scheduler module stores the first bandwidth configuration parameter table entry and the second bandwidth configuration parameter table entry.
  • when the bandwidth pre-allocation module configures the bandwidth control parameters to the scheduler module, specifically: the bandwidth pre-allocation module is used to update the bandwidth control parameters into the first bandwidth configuration parameter table entry; the scheduler module is used to update, in a specific time slot of the pipeline, the content of the first bandwidth configuration parameter table entry into the second bandwidth configuration parameter table entry, where the second bandwidth configuration parameter table entry is the entry used when the scheduler module performs operations.
  • in this embodiment, a first bandwidth configuration parameter table entry that can be modified directly by software at any time is added, for example a shadow table entry.
  • the scheduler module updates the content of the first bandwidth configuration parameter table entry into the second bandwidth configuration parameter table entry used by the logic, in a specific time slot of the pipeline executed by the logic, which avoids conflicts between the table entry update operation and the logic execution operation.
  • the scheduler module supports at least one of the following scheduling algorithms: SP scheduling algorithm, RR scheduling algorithm, and so on.
  • the traffic scheduling device further includes: a data packet processing module.
  • the queue management module is also used to perform a dequeue operation based on the scheduling result after the scheduler module sends the scheduling result to the queue management module, and to send the dequeued data information to the data packet processing module; the data packet processing module is used to fetch the data corresponding to the dequeued data information from the buffer and output it externally.
  • an embodiment of the present application provides a traffic scheduling method, which is applied to a traffic scheduling device.
  • the traffic scheduling device includes: a bandwidth pre-allocation module, a scheduler module, and a queue management module.
  • the method includes: the bandwidth pre-allocation module determines the bandwidth control parameters of each flow queue according to the input bandwidth of each flow queue of the queue management module, the expected output bandwidth of the service configuration, and the configuration parameters of the corresponding scheduling algorithm, and configures the bandwidth control parameters of the flow queue to the scheduler module; the scheduler module performs traffic scheduling on the flow queues in the queue management module according to the bandwidth control parameters.
  • the implementation of the traffic scheduling method can refer to the implementation of the device, and repeated details are not described again.
  • an embodiment of the present application provides a traffic scheduling device, including a memory and a processor.
  • the memory stores a computer program that can be executed by the processor; when the processor reads and executes the computer program, the processor is caused to execute the method according to any implementation of the foregoing second aspect.
  • an embodiment of the present application provides a computer-readable storage medium storing a computer program; the computer program includes at least one piece of code that can be executed by a computer to control the computer to execute any one of the foregoing methods.
  • the embodiments of the present application provide a program which, when executed by a computer, is used to execute any one of the methods described above.
  • the foregoing program may be stored in whole or in part on a storage medium packaged together with the processor, or may be stored in part or in whole in a memory not packaged together with the processor.
  • an embodiment of the present application provides a chip with a computer program stored on the chip, and when the computer program is executed by a processor, the method described in the embodiment of the present application in the second aspect is executed.
  • Figure 1 is an implementation scheme of an existing traffic scheduling device
  • Figure 2 is an example diagram of a service scheduling tree of a hardware scheduler
  • FIG. 3 is a schematic structural diagram of a traffic scheduling device provided by an embodiment of this application.
  • FIG. 4 is a schematic structural diagram of a traffic scheduling device provided by another embodiment of this application.
  • FIG. 5 is a flowchart of a traffic scheduling method provided by an embodiment of the application.
  • FIG. 6 is a schematic structural diagram of a traffic scheduling device provided by another embodiment of this application.
  • Round robin (RR) scheduling algorithm: the flow queues are scheduled cyclically; each round of scheduling proceeds from flow queue 1 to flow queue N, where N is the total number of flow queues, and then the cycle restarts.
  • Deficit round robin (DRR) scheduling algorithm: each flow queue is allocated a constant (a time slice proportional to its weight) and a variable (the deficit).
  • the constant reflects the long-term average number of bytes that the flow queue is allowed to send.
  • the initial value of the variable is zero, and it is reset to zero when the flow queue is empty.
  • when DRR scheduling serves a new flow queue, the scheduler resets the counter that indicates the number of bytes already sent from that flow queue in the current cycle.
  • Strict priority (SP) scheduling algorithm: data of the higher-priority flow queues is sent strictly in priority order from high to low; only when the higher-priority flow queues are empty is the data of the lower-priority flow queues sent.
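  • To make the DRR bookkeeping above concrete, the following is a minimal Python sketch of deficit round robin over a set of flow queues. The class layout and the quantum-per-weight constant are illustrative assumptions rather than anything taken from the patent; only the quantum/deficit behaviour mirrors the description.

```python
from collections import deque

class DrrScheduler:
    """Minimal deficit round robin: each queue gets a quantum proportional
    to its weight; the deficit carries unused quantum within a round."""

    def __init__(self, weights, quantum_per_weight=1500):
        self.queues = {q: deque() for q in weights}
        self.quantum = {q: w * quantum_per_weight for q, w in weights.items()}
        self.deficit = {q: 0 for q in weights}

    def enqueue(self, queue_id, packet_len):
        self.queues[queue_id].append(packet_len)

    def schedule_round(self):
        """One pass over all queues; returns (queue_id, packet_len) sends."""
        sent = []
        for q, pkts in self.queues.items():
            if not pkts:
                self.deficit[q] = 0          # deficit is reset when the queue is empty
                continue
            self.deficit[q] += self.quantum[q]
            while pkts and pkts[0] <= self.deficit[q]:
                length = pkts.popleft()
                self.deficit[q] -= length
                sent.append((q, length))
        return sent

# Example: queue 1 and queue 2 with a 1:2 weight ratio
sched = DrrScheduler({1: 1, 2: 2})
for _ in range(4):
    sched.enqueue(1, 1500)
    sched.enqueue(2, 1500)
print(sched.schedule_round())   # queue 2 drains roughly twice as many bytes per round
```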
  • FIG. 1 is an implementation scheme of an existing traffic scheduling device.
  • the traffic scheduling device 10 includes a hardware scheduler 11, a queue manager 12, and a data packet processor 13; all supported service scenarios are defined in the hardware scheduler 11, and the scheduling hierarchy in the hardware scheduler 11 is fixed.
  • as shown in FIG. 2, a service scheduling tree contains multiple nodes. Depending on the scheduling level at which they sit, these nodes are called the root node, port nodes, user group nodes, user nodes, and flow queue nodes; there is usually only one root node, while the number of nodes of each other type can be one or more.
  • the service scheduling tree includes five scheduling levels, from top to bottom: the scheduling level of the root node, the scheduling level of the port nodes, the scheduling level of the user group nodes, the scheduling level of the user nodes, and the scheduling level of the flow queue nodes. The fixed scheduling hierarchy in the hardware scheduler 11 therefore means that the hierarchical structure of the service scheduling tree of the hardware scheduler 11 is fixed.
  • nodes between different scheduling levels have subordination relationships, that is, the mapping relationship between parent nodes and child nodes.
  • in the prior art, from top to bottom, the subordination relationship between lower-layer nodes and upper-layer nodes can be configured flexibly within limits; each node corresponds to configuration parameters of the scheduling algorithm, for example the ratio value of the DRR scheduling algorithm or the priority of the SP scheduling algorithm, and the maximum scheduling bandwidth available to the node can also be configured flexibly.
  • the queue manager 12 performs enqueue and dequeue management and buffer allocation for the flow queues (FlowQueue), where the buffer allocation can be configured per service flow based on a fixed algorithm such as weighted random early detection (WRED).
  • the key workflow of the traffic scheduling device 10 is as follows:
  • after the data packet processor 13 receives data and stores the data in the buffer, it notifies the queue manager 12 of the enqueued data information; the queue manager 12 enqueues the data information according to the service flow and generates flow queue status update messages.
  • different services correspond to different service flows, and a service flow is the data involved in that service.
  • the enqueued data information includes the identifier of the data stored in the buffer.
  • enqueuing the data information according to the service flow means that the queue manager 12 distinguishes the service flow of the enqueued data information and stores it into different flow queues.
  • the queue manager 12 sends the status update message to the hardware scheduler 11.
  • the hardware scheduler 11 updates the schedulable status of the corresponding node in each scheduling level on the service scheduling tree as shown in FIG. 2 according to the status update message.
  • the hardware scheduler 11 selects an appropriate node level by level according to the schedulable status of each node on the updated service scheduling tree, the configured scheduling algorithm and related configuration parameters, and finally outputs a flow queue ID as the scheduling result.
  • the scheduling process of the hardware scheduler 11 is cyclically continuous, and each round of scheduling outputs a scheduling result.
  • the hardware scheduler 11 transmits the scheduling result to the queue manager 12, and the queue manager 12 performs a dequeue operation according to the scheduling result.
  • the queue manager 12 sends the dequeued data information to the data packet processor 13, and the data packet processor 13 fetches the data corresponding to the dequeued data information from the cache and outputs it to the outside.
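  • As a rough illustration of step 4 of the workflow above, the sketch below descends a service scheduling tree like the one in FIG. 2 level by level, choosing one schedulable child at each node until a flow queue ID is produced. The tree layout, node names, and per-node round robin pick are assumptions made for illustration; the hardware scheduler's actual selection logic is not disclosed at this level of detail.

```python
# Hypothetical tree: root -> port -> user group -> user -> flow queue,
# mirroring the five scheduling levels of FIG. 2.
tree = {
    "root":   ["port0"],
    "port0":  ["group0", "group1"],
    "group0": ["userA"],
    "group1": ["userB"],
    "userA":  ["fq1", "fq2"],
    "userB":  ["fq3"],
}
schedulable = {"fq1": True, "fq2": False, "fq3": True}   # fed back by the queue manager
rr_pointer = {node: 0 for node in tree}                   # per-node round robin position

def node_schedulable(node):
    """A non-leaf node is schedulable if any descendant flow queue is."""
    if node not in tree:
        return schedulable.get(node, False)
    return any(node_schedulable(child) for child in tree[node])

def select_flow_queue():
    """Select one flow queue ID by descending the tree level by level."""
    node = "root"
    while node in tree:
        children = tree[node]
        for i in range(len(children)):
            cand = children[(rr_pointer[node] + i) % len(children)]
            if node_schedulable(cand):
                rr_pointer[node] = (rr_pointer[node] + i + 1) % len(children)
                node = cand
                break
        else:
            return None    # nothing schedulable under this node
    return node            # a leaf, i.e. a flow queue ID

print(select_flow_queue())  # e.g. 'fq1'
print(select_flow_queue())  # the next round may pick a different branch
```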
  • in the above solution, the configuration parameters of the scheduling algorithm have to be configured and determined before any traffic arrives, and the scheduling levels and the subordination relationships of nodes between different scheduling levels have to be implemented according to the union of all service scenarios; the configuration relationships are complex, the resource cost is high, and the application scenarios are limited by the maximum complexity of the hardware implementation.
  • the embodiments of the present application provide a traffic scheduling method, device, and storage medium to overcome the limitation that the service scenarios of existing traffic scheduling devices are restricted to the scheduling algorithms supported by the device, and to respond to new service requirements in a timely manner.
  • FIG. 3 is a schematic structural diagram of a traffic scheduling device provided by an embodiment of this application.
  • the traffic scheduling device 20 includes: a bandwidth pre-allocation module 21, a scheduler module 22, and a queue management module 23, where:
  • the bandwidth pre-allocation module 21 is used to determine the bandwidth control parameters of each flow queue according to the input bandwidth of each flow queue of the queue management module 23, the expected output bandwidth of the service configuration, and the configuration parameters of the corresponding scheduling algorithm, and to configure the bandwidth control parameters of the flow queues to the scheduler module 22.
  • for example, service use case 1: in the service scheduling tree, the input bandwidth of flow queue 1 is 300 megabits per second (Mbit/s, Mbps), and the input bandwidth of flow queue 2 is 200 Mbps.
  • the two flow queues belong to the same user node X, that is, the parent node of flow queue 1 and flow queue 2 is user node X; the expected output bandwidth configured for user node X is 150 Mbps, and flow queue 1 and flow queue 2 are configured for 1:2 deficit round robin (DRR) scheduling within user node X. The bandwidth control parameters of the flow queues are then: flow queue 1 is allocated a scheduling bandwidth of 50 Mbps, and flow queue 2 is allocated a scheduling bandwidth of 100 Mbps.
  • the bandwidth control parameters of flow queue 1 and flow queue 2 are configured in the scheduler module 22.
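  • The numbers of service use case 1 can be reproduced by a small calculation: the parent node's expected output bandwidth is split among its child flow queues in proportion to their DRR weights, with no queue receiving more than its measured input bandwidth. The helper below is a simplified sketch of that pre-allocation step under these assumptions, not the module's actual algorithm.

```python
def preallocate(parent_bw, inputs, weights):
    """Split parent_bw across flow queues by weight, never giving a queue
    more than its measured input bandwidth; leftover bandwidth is re-split
    among the queues that can still use it."""
    alloc = {q: 0.0 for q in inputs}
    remaining = parent_bw
    active = set(inputs)
    while remaining > 1e-9 and active:
        total_w = sum(weights[q] for q in active)
        spare = 0.0
        for q in list(active):
            share = remaining * weights[q] / total_w
            need = inputs[q] - alloc[q]
            give = min(share, need)
            alloc[q] += give
            spare += share - give
            if alloc[q] >= inputs[q] - 1e-9:
                active.discard(q)          # queue is already satisfied by its input rate
        remaining = spare
    return alloc

# Service use case 1: 150 Mbps at user node X, weights 1:2, inputs 300 / 200 Mbps
print(preallocate(150, {"fq1": 300, "fq2": 200}, {"fq1": 1, "fq2": 2}))
# -> {'fq1': 50.0, 'fq2': 100.0}
```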
  • the scheduler module 22 is configured to perform traffic scheduling on the flow queues in the queue management module 23 according to bandwidth control parameters.
  • specifically, the scheduler module 22 maintains the schedulable status of the nodes at the relevant levels of the scheduling tree according to the schedulable status of the flow queue nodes fed back by the queue management module 23, and at the same time performs continuous, periodic traffic scheduling of the flow queue nodes according to the bandwidth control parameters; each round of scheduling outputs one flow queue ID as the scheduling result, which is transmitted to the queue management module 23.
  • the queue management module 23 is configured to perform a dequeue operation according to the scheduling result, and send the dequeued data information to the data packet processing module 24.
  • the dequeued data information includes information such as the length of the header data and its storage address.
  • the data packet processing module 24 is used to fetch the data corresponding to the dequeued data information from the cache and output it to the outside.
  • since the bandwidth control parameters of the flow queues are determined by the bandwidth pre-allocation module 21 according to the input bandwidth of each flow queue of the queue management module 23, the expected output bandwidth of the service configuration, and the configuration parameters of the corresponding scheduling algorithm, those skilled in the art can understand that the bandwidth control parameters of the flow queues are adjusted in real time according to the actually received traffic and are not fixed.
  • the bandwidth pre-allocation module 21 configures the bandwidth control parameters of the flow queue to the scheduler module 22.
  • the scheduler module 22 supports that the bandwidth control parameters of each node at each scheduling level in the service scheduling tree can be adjusted in real time by software.
  • the scheduler module 22 does not need to support a service scheduling tree and scheduling algorithms whose complexity matches that of the service as in the software algorithm; it only needs to support a simple round robin (RR) scheduling algorithm or the like for each node. Even the service scheduling levels and the number of nodes in the software algorithm can be expanded flexibly.
  • flow queue 1 and flow queue 2 need to share the expected output bandwidth of 150 Mbps obtained by user node X in a weight ratio of 1:2.
  • the user node X needs to support the DRR scheduling algorithm and corresponding weight configuration.
  • in the embodiment of the present application, however, the bandwidth pre-allocation module 21 uses a software algorithm to calculate in real time, according to the expected output bandwidth of 150 Mbps obtained by user node X and the input bandwidths and weights of flow queue 1 and flow queue 2, that the scheduling bandwidth allocatable to flow queue 1 is 50 Mbps and the scheduling bandwidth allocatable to flow queue 2 is 100 Mbps.
  • the scheduler module 22 only needs to perform simple RR scheduling and integer bandwidth control of the flow queue to ensure that the flow queue 1 and the flow queue 2 obtain the actual output bandwidth. Therefore, the embodiment of the present application can simplify the design of the scheduler module.
  • the scheduler module 22 supports but is not limited to at least one of the following scheduling algorithms: SP scheduling algorithm, RR scheduling algorithm, and so on.
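  • On the scheduler side, the description states that plain RR plus integer bandwidth control per flow queue suffices once the pre-allocated bandwidths are configured. Below is a minimal sketch of such a shaped round robin; the byte-budget-per-interval accounting and all names are illustrative assumptions.

```python
from collections import deque

class ShapedRoundRobin:
    """RR over flow queues where each queue may send at most its byte budget
    per scheduling interval, derived from its configured bandwidth."""

    def __init__(self, rate_mbps, interval_ms=1):
        # bandwidth control parameter -> byte budget per interval
        self.budget = {q: int(r * 1e6 / 8 * interval_ms / 1000) for q, r in rate_mbps.items()}
        self.credit = dict(self.budget)
        self.queues = {q: deque() for q in rate_mbps}

    def enqueue(self, q, pkt_len):
        self.queues[q].append(pkt_len)

    def new_interval(self):
        self.credit = dict(self.budget)   # refill credits at each interval boundary

    def schedule_once(self):
        """One RR pass; returns the flow queue IDs served in this pass."""
        served = []
        for q, pkts in self.queues.items():
            if pkts and self.credit[q] >= pkts[0]:
                self.credit[q] -= pkts.popleft()
                served.append(q)
        return served

# Flow queue 1 capped at 50 Mbps, flow queue 2 at 100 Mbps (use case 1 result)
rr = ShapedRoundRobin({"fq1": 50, "fq2": 100})
for _ in range(20):
    rr.enqueue("fq1", 1500)
    rr.enqueue("fq2", 1500)
sent = {"fq1": 0, "fq2": 0}
for _ in range(20):               # many RR passes within a single interval
    for q in rr.schedule_once():
        sent[q] += 1500
print(sent)   # roughly {'fq1': 6000, 'fq2': 12000}: a 1:2 byte ratio, as pre-allocated
```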
  • it can be understood that the bandwidth pre-allocation module 21 can flexibly configure the service scheduling tree in the scheduler module 22 and dynamically control the output bandwidth of the nodes at each scheduling level in the service scheduling tree, so that the scheduling algorithm of the traffic scheduling device is programmable, thereby avoiding the limitation of the service scenarios of the traffic scheduling device.
  • the service scheduling tree is, for example, as shown in FIG. 2, but the embodiment of the present application is not limited thereto.
  • in the embodiment of the present application, the bandwidth pre-allocation module determines the bandwidth control parameters according to the input bandwidth of the flow queues of the queue management module, the expected output bandwidth of the service configuration, and the configuration parameters of the corresponding scheduling algorithm, and configures the bandwidth control parameters to the scheduler module; the scheduler module performs traffic scheduling on the flow queues therein according to the bandwidth control parameters.
  • on the one hand, because the bandwidth control parameters are determined according to the input bandwidth of the flow queues of the queue management module, the expected output bandwidth of the service configuration, and the configuration parameters of the corresponding scheduling algorithm, they are adjusted in real time according to actual conditions rather than being fixed.
  • on the other hand, the bandwidth pre-allocation module configures the bandwidth control parameters to the scheduler module, which enables flexible configuration of the service scheduling tree in the scheduler module and dynamic control of the output bandwidth of the nodes at each scheduling level in the service scheduling tree, making the maximum complexity of the traffic scheduling device programmable; this overcomes the limitation that the service scenarios are restricted to the scheduling algorithms supported by the traffic scheduling device and allows timely response to new service requirements.
  • the bandwidth pre-allocation module 21 may include: a bandwidth measurement sub-module 211 and a bandwidth allocation sub-module 212.
  • the bandwidth measurement sub-module 211 is used to monitor the input bandwidth of the flow queues of the queue management module 23 and the expected output bandwidth of the service configuration, and to transmit the input bandwidth and the expected output bandwidth to the bandwidth allocation sub-module 212; the bandwidth allocation sub-module 212 is configured to determine the bandwidth control parameters according to the input bandwidth, the expected output bandwidth, and the configuration parameters of the corresponding scheduling algorithm, and to configure the bandwidth control parameters to the scheduler module 22.
  • the bandwidth measurement sub-module 211 is implemented by hardware, and the bandwidth allocation sub-module 212 is implemented by software.
  • for example, the bandwidth measurement sub-module 211 is a set of per-flow-queue counters that record information such as the real-time queue length of each flow queue and the packet lengths of enqueued and dequeued packets; the input bandwidth of a flow queue can be calculated from this information.
  • the bandwidth allocation sub-module 212 is a software module running on the CPU embedded in the chip or on the single board. The bandwidth allocation sub-module 212 performs analysis and calculation according to the input bandwidth, the expected output bandwidth and the configuration parameters of the corresponding scheduling algorithm, determines the bandwidth control parameters, and configures them to the scheduler module 22.
  • in this software-hardware cooperative traffic scheduling device architecture, the bandwidth measurement sub-module 211 has a simple structure, so the same logic resources can support higher performance and higher accuracy, while the service scheduling tree is implemented by the bandwidth allocation sub-module 212 through software algorithms; this flexibly supports more complex service scheduling trees and scheduling algorithms, and supporting new service features requires only a software upgrade.
  • the bandwidth measurement sub-module 211 and the bandwidth allocation sub-module 212 can communicate with each other through any of the following buses: a high-speed serial computer expansion bus standard (peripheral component interconnect express, PCIE) bus, a chip internal bus, and so on.
  • the bandwidth allocation sub-module 212 may be implemented in a manner that a central processing unit core (CPU CORE) is embedded in the chip. In this case, the analysis and calculation of the bandwidth allocation result does not require a single-board CPU.
  • the information intercommunication between the bandwidth measurement sub-module 211 and the bandwidth allocation sub-module 212 can be implemented through the internal bus of the chip.
  • for the manner in which the bandwidth allocation sub-module 212 determines the bandwidth control parameters, multiple solutions are possible; examples are described here.
  • in a first solution, the bandwidth allocation sub-module 212 is used to determine the bandwidth control parameters according to the subordination relationships between nodes at each scheduling level in the service scheduling tree, the scheduling algorithms and configuration parameters, and the input bandwidth of the flow queue and the expected output bandwidth of the service configuration, and to configure the bandwidth control parameters to the scheduler module 22.
  • in a second solution, the bandwidth allocation sub-module 212 is used to determine the bandwidth control parameters according to the traffic priority, the subordination relationships of the nodes at each scheduling level in the service scheduling tree, the scheduling algorithms and configuration parameters, and the input bandwidth of the flow queue and the expected output bandwidth of the service configuration, and to configure the bandwidth control parameters to the scheduler module 22.
  • for example, service use case 2: in the service scheduling tree, the input bandwidth of flow queue 3 is 100 Mbps and the input bandwidth of flow queue 4 is 200 Mbps. The two flow queues belong to the same user node Y, and the expected output bandwidth configured for user node Y is 150 Mbps.
  • flow queue 3 and flow queue 4 are configured for SP scheduling within user node Y, with flow queue 3 having the higher priority. The bandwidth control parameters of the flow queues are then: flow queue 3 is allocated a bandwidth of 100 Mbps, and flow queue 4 is allocated a bandwidth of 50 Mbps.
  • the bandwidth control parameters of flow queue 3 and flow queue 4 are configured in the scheduler module 22.
  • because determining the bandwidth control parameters takes a certain amount of time, newly received high-priority traffic may not yet have been observed by the bandwidth measurement sub-module 211 while the software of the bandwidth allocation sub-module 212 is running on the CPU embedded in the chip or on the board; if such high-priority traffic requires low-latency guarantees, a certain amount of bandwidth needs to be reserved for high-priority traffic during bandwidth allocation to support the preferential passage of newly received high-priority traffic.
  • the difference between the two solutions is that the second solution additionally takes traffic priority into account.
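  • A simplified sketch of the second solution is shown below: flow queues are served in strict priority order up to their input bandwidth, and a configurable fraction of the parent bandwidth can be held back for newly arriving high-priority traffic that has not yet been measured, as discussed above. The reservation fraction, how the reserve is handed out, and the function name are assumptions for illustration.

```python
def preallocate_sp(parent_bw, inputs, priority, hp_reserve=0.1):
    """Strict-priority pre-allocation: serve higher priority first, capped by
    each queue's input bandwidth. A fraction of the parent bandwidth is held
    back for newly arriving high-priority traffic that is not yet measured."""
    reserve = parent_bw * hp_reserve
    remaining = parent_bw - reserve
    alloc = {}
    for q in sorted(inputs, key=lambda q: priority[q], reverse=True):
        alloc[q] = min(inputs[q], remaining)
        remaining -= alloc[q]
    # hand the reserve to the highest-priority queue's control parameter
    top = max(inputs, key=lambda q: priority[q])
    alloc[top] += reserve
    return alloc

# Service use case 2 (no reservation, for comparison with the text):
print(preallocate_sp(150, {"fq3": 100, "fq4": 200}, {"fq3": 1, "fq4": 0}, hp_reserve=0.0))
# -> {'fq3': 100, 'fq4': 50}
```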
  • in some embodiments, the bandwidth measurement sub-module 211 can be used to record the system global time stamps of the enqueue data and dequeue data corresponding to each flow queue, and to obtain the input bandwidth and actual output bandwidth of the flow queue according to the system global time stamps, where the actual output bandwidth value is used by the bandwidth allocation sub-module 212 to correct the calculated bandwidth control parameters. That is, while the bandwidth measurement sub-module 211 records in real time the packet lengths of the enqueue data and dequeue data corresponding to each flow queue in the queue management module 23, it also attaches the system global time stamp to the recorded raw data.
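  • The per-round calculation spelled out in the description (input bandwidth of a flow queue = total enqueued packet length in the current round / duration of the current statistics period, where the duration is the system global time stamp of the last enqueued packet minus that of the first) can be sketched as follows; the unit conversion to Mbps and the record format are assumptions.

```python
def input_bandwidth_mbps(enqueue_records):
    """enqueue_records: list of (global_timestamp_seconds, packet_len_bytes)
    for one flow queue during one measurement round. Per the description,
    input bandwidth = total enqueued bytes / (last timestamp - first timestamp)."""
    if len(enqueue_records) < 2:
        return 0.0
    total_bytes = sum(length for _, length in enqueue_records)
    duration = enqueue_records[-1][0] - enqueue_records[0][0]
    return total_bytes * 8 / duration / 1e6

# 1500-byte packets arriving every 40 microseconds -> about 300 Mbps
records = [(i * 40e-6, 1500) for i in range(1000)]
print(round(input_bandwidth_mbps(records)))
```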
  • for the roles of the actual output bandwidth and the expected output bandwidth, reference may be made to related technologies; they are not repeated in this embodiment of the application.
  • because the bandwidth measurement sub-module 211 is a traffic monitoring component that supports the system global time stamp, it can accurately count the input bandwidth and expected output bandwidth of the flow queues in real time and ensure the correctness and completeness of their calculation. Further, based on the input bandwidth and expected output bandwidth of the flow queues, the bandwidth allocation sub-module 212 can determine accurate bandwidth control parameters after analysis by the software algorithm and can also predict the traffic behaviour; the predicted result can be applied to the calculation of the scheduling algorithm in real time.
  • on the basis of the foregoing, the bandwidth allocation sub-module 212 may also be used to adjust the WRED parameters of the flow queue according to a preset algorithm, based on the input bandwidth of the flow queue and the expected output bandwidth of the service configuration, and to configure the adjusted WRED parameters to the queue management module 23.
  • the WRED parameters may include at least one of the following: a minimum threshold, a maximum threshold, a mark probability denominator, and so on.
  • the queue management module 23 performs queue entry and exit management of data information in the flow queue according to the adjusted WRED parameters.
  • because the WRED parameters actually configured to the queue management module are adjusted by a software algorithm according to the real-time input bandwidth and expected output bandwidth of the flow queue, the actual buffer allocation can be ignored when configuring the WRED parameters.
  • to a certain extent, the buffer configured for each flow queue can exceed the actually allocatable buffer size, that is, a certain degree of buffer over-allocation is supported.
  • in actual usage scenarios, since the received traffic changes in real time, the real-time WRED parameters of different flow queues can be adjusted flexibly based on the current traffic and the analysis of the software algorithm; overall, this ensures that the actually received traffic has a relatively large available buffer, while inactive flow queues only need to reserve a small amount of buffer, which effectively improves the buffer utilization of the system.
  • at the same time, configuring the adjusted WRED parameters to the queue management module effectively supports dynamic queue length management.
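  • The description leaves the "preset algorithm" for adjusting the WRED parameters open. The sketch below shows one hedged possibility only: thresholds are sized from the expected output bandwidth and tightened when the measured input bandwidth exceeds it, which also illustrates why buffer over-allocation is tolerable. All constants and names are assumptions, not the patented algorithm.

```python
def adjust_wred(input_mbps, expected_out_mbps, target_delay_ms=5):
    """Illustrative 'preset algorithm': size the WRED thresholds from the
    expected output bandwidth (bandwidth * target delay), and tighten them
    when the input rate exceeds the expected output rate."""
    buffer_bytes = expected_out_mbps * 1e6 / 8 * target_delay_ms / 1000
    congestion = min(input_mbps / max(expected_out_mbps, 1e-6), 4.0)
    max_threshold = int(buffer_bytes / max(congestion, 1.0))
    return {
        "min_threshold": max_threshold // 4,
        "max_threshold": max_threshold,
        "mark_probability_denominator": 10,
    }

# Flow queue with 300 Mbps arriving but only 50 Mbps of allocated output
print(adjust_wred(input_mbps=300, expected_out_mbps=50))
```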
  • the scheduler module 22 stores a first bandwidth configuration parameter table entry and a second bandwidth configuration parameter table entry.
  • the bandwidth pre-allocation module 21 configures the bandwidth control parameters to the scheduler module 22, which may include: the bandwidth pre-allocation module 21 updates the bandwidth control parameters to the first bandwidth configuration parameter table item; in a specific time slot of the pipeline, the scheduler module 22 updates the content of the first bandwidth configuration parameter table entry to the second bandwidth configuration parameter table entry, and the second bandwidth configuration parameter table entry is the table entry used when the scheduler module 22 performs operations.
  • in this embodiment, a first bandwidth configuration parameter table entry that can be modified directly by software at any time is added, for example a shadow table entry.
  • the scheduler module updates the content of the first bandwidth configuration parameter table entry into the second bandwidth configuration parameter table entry used by the logic, in a specific time slot of the pipeline executed by the logic, which avoids conflicts between the table entry update operation and the logic execution operation.
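  • A minimal sketch of the shadow-table idea described above: software writes freely into the first (shadow) table, and the scheduling logic copies it into the second (active) table only in a dedicated pipeline time slot, so configuration updates never race with the slots in which the active table is read. The time-slot numbering and class layout are assumptions for illustration.

```python
class BandwidthConfigTables:
    """First (shadow) table is writable by software at any time; the second
    (active) table is only refreshed in a dedicated pipeline time slot."""

    UPDATE_SLOT = 0   # pipeline time slot reserved for the table refresh (assumed)

    def __init__(self, num_queues):
        self.shadow = [0] * num_queues    # first bandwidth configuration parameter table
        self.active = [0] * num_queues    # second table, read by the scheduling logic

    def software_update(self, queue_id, bandwidth):
        self.shadow[queue_id] = bandwidth          # may happen at any time

    def pipeline_tick(self, slot):
        if slot == self.UPDATE_SLOT:
            self.active = list(self.shadow)        # refresh only in the update slot
        else:
            self.do_scheduling_work(slot)

    def do_scheduling_work(self, slot):
        # placeholder for per-slot scheduling operations that read self.active
        _ = (slot, self.active)

tables = BandwidthConfigTables(num_queues=4)
tables.software_update(2, 100)        # software writes the shadow entry mid-pipeline
tables.pipeline_tick(slot=3)          # ordinary slot: active table unchanged
tables.pipeline_tick(slot=0)          # update slot: shadow copied into active
print(tables.active)                  # -> [0, 0, 100, 0]
```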
  • Fig. 5 is a flowchart of a traffic scheduling method provided by an embodiment of the application.
  • the embodiment of the present application provides a traffic scheduling method, which is applied to a traffic scheduling device.
  • the traffic scheduling device includes a bandwidth pre-allocation module, a scheduler module, and a queue management module. As shown in Figure 5, the method includes:
  • the bandwidth pre-allocation module determines the bandwidth control parameters of each flow queue according to the input bandwidth of each flow queue of the queue management module, the expected output bandwidth of the service configuration, and the configuration parameters of the corresponding scheduling algorithm, and configures the bandwidth control parameters of the flow queue to the scheduler module.
  • the scheduler module performs traffic scheduling on the flow queue in the queue management module according to the bandwidth control parameter.
  • the traffic scheduling method described in the embodiment of the present application may be executed by the traffic scheduling device in any of the above apparatus embodiments, and its implementation principles and technical effects are similar, and will not be repeated here.
  • the bandwidth pre-allocation module includes a bandwidth measurement sub-module and a bandwidth allocation sub-module.
  • correspondingly, that the bandwidth pre-allocation module determines the bandwidth control parameters of the flow queue according to the input bandwidth of each flow queue of the queue management module, the expected output bandwidth of the service configuration, and the configuration parameters of the corresponding scheduling algorithm, and configures the bandwidth control parameters of the flow queue to the scheduler module may include: the bandwidth measurement sub-module monitors the input bandwidth and expected output bandwidth of the flow queues of the queue management module and transmits them to the bandwidth allocation sub-module; the bandwidth allocation sub-module determines the bandwidth control parameters according to the input bandwidth of the flow queue, the expected output bandwidth, and the configuration parameters of the corresponding scheduling algorithm, and configures the bandwidth control parameters to the scheduler module.
  • optionally, the bandwidth measurement sub-module and the bandwidth allocation sub-module are interconnected through either of the following buses: a PCIE bus, a chip-internal bus, etc.
  • in one implementation, that the bandwidth allocation sub-module determines the bandwidth control parameters according to the input bandwidth of the flow queue, the expected output bandwidth, and the configuration parameters of the corresponding scheduling algorithm, and configures the bandwidth control parameters to the scheduler module may include: the bandwidth allocation sub-module determines the bandwidth control parameters according to the subordination relationships between nodes at each scheduling level in the service scheduling tree, the scheduling algorithms and configuration parameters, and the input bandwidth and expected output bandwidth of the flow queue, and configures the bandwidth control parameters to the scheduler module.
  • in another implementation, that the bandwidth allocation sub-module determines the bandwidth control parameters according to the input bandwidth of the flow queue, the expected output bandwidth, and the configuration parameters of the corresponding scheduling algorithm, and configures the bandwidth control parameters to the scheduler module may include: the bandwidth allocation sub-module determines the bandwidth control parameters according to the traffic priority, the subordination relationships between nodes at each scheduling level in the service scheduling tree, the scheduling algorithms and configuration parameters, and the input bandwidth and expected output bandwidth of the flow queue, and configures the bandwidth control parameters to the scheduler module.
  • further, that the bandwidth measurement sub-module monitors the input bandwidth and expected output bandwidth of the flow queues of the queue management module and transmits them to the bandwidth allocation sub-module may include: the bandwidth measurement sub-module records the system global time stamps of the enqueue data and dequeue data corresponding to each flow queue, and obtains the input bandwidth and expected output bandwidth of the flow queue according to the system global time stamps.
  • the traffic scheduling method may further include: the bandwidth allocation sub-module adjusts the WRED parameters of the flow queue according to a preset algorithm based on the input bandwidth and expected output bandwidth of the flow queue, and configures the adjusted WRED parameters to the queue management module.
  • the queue management module performs queue entry and exit management of data information in the flow queue according to the adjusted WRED parameters.
  • the scheduler module stores a first bandwidth configuration parameter table entry and a second bandwidth configuration parameter table entry.
  • that the bandwidth pre-allocation module configures the bandwidth control parameters to the scheduler module may include: the bandwidth pre-allocation module updates the bandwidth control parameters into the first bandwidth configuration parameter table entry; in a specific time slot of the pipeline, the scheduler module updates the content of the first bandwidth configuration parameter table entry into the second bandwidth configuration parameter table entry, where the second bandwidth configuration parameter table entry is the entry used when the scheduler module performs operations.
  • the scheduler module supports at least one of the following scheduling algorithms: the SP scheduling algorithm and the RR scheduling algorithm.
  • the traffic scheduling device may further include: a data packet processing module.
  • the method may further include: after the scheduler module sends the scheduling result to the queue management module, the queue management module performs a dequeue operation according to the scheduling result, and sends the dequeued data information to the data packet processing module; The data packet processing module fetches the data corresponding to the dequeued data information from the cache and outputs it to the outside.
  • the aforementioned bandwidth pre-allocation module, scheduler module, and queue management module may be embedded in the processor.
  • the aforementioned bandwidth pre-allocation module may be a processor, and the aforementioned scheduler module and queue management module may be implemented in hardware.
  • the traffic scheduling device 60 of this embodiment may include a memory 61 and a processor 62.
  • the memory 61 is used to store a computer program that can be executed by the processor 62.
  • when the processor 62 reads and executes the computer program, the processor 62 is caused to execute the method described above; or, when the processor 62 reads and executes the computer program, the processor 62 is caused to execute the steps performed by the bandwidth pre-allocation module in the method described above.
  • the embodiment of the present application also provides a computer-readable storage medium, and the computer-readable storage medium stores a computer program.
  • the computer program includes at least one piece of code, which can be executed by a processor to implement the method described in any of the above method embodiments, or to implement the steps performed by the bandwidth pre-allocation module in the method described in any of the above method embodiments.
  • the computer program can be implemented in the form of a software functional unit and can be sold or used as an independent product, and the memory can be any form of computer readable storage medium.
  • based on this understanding, all or part of the technical solution of the present application can be embodied in the form of a software product, which includes several instructions to enable a computer device, specifically a processor, to execute all or part of the steps of the first terminal device in each embodiment of the present application.
  • the aforementioned computer-readable storage media include: USB flash drives, removable hard disks, read-only memory (ROM), random access memory (RAM), magnetic disks, optical discs, and other media that can store program code.
  • the division of modules in the embodiments of the present application is illustrative and is only a logical function division; there may be other division methods in actual implementation.
  • the functional modules in the embodiments of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module.
  • the above-mentioned integrated modules can be implemented in the form of hardware or software functional modules.
  • if the integrated module is implemented in the form of a software functional module and is sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • based on this understanding, the technical solution of this application essentially, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes a number of instructions to make a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor execute all or part of the steps of the method described in each embodiment of the present application.
  • the aforementioned storage media include: USB flash drives, removable hard disks, ROM, RAM, magnetic disks, optical discs, and other media that can store program code.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from a website, computer, server, or data center.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or a data center integrated with one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)).

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

Embodiments of the present application provide a traffic scheduling method, device, and storage medium. The traffic scheduling device includes: a bandwidth pre-allocation module, a scheduler module, and a queue management module. The bandwidth pre-allocation module is used to determine bandwidth control parameters according to the input bandwidth of each flow queue of the queue management module, the expected output bandwidth of the service configuration, and the configuration parameters of the corresponding scheduling algorithm, and to configure the bandwidth control parameters to the scheduler module; the scheduler module is used to perform traffic scheduling on the flow queues in the queue management module according to the bandwidth control parameters. The embodiments of the present application can overcome the limitation that the service scenarios of existing traffic scheduling devices are restricted to the scheduling algorithms supported by the device, and can respond to new service requirements in a timely manner.

Description

Traffic scheduling method, device and storage medium
Technical Field
Embodiments of the present application relate to communication technologies, and in particular to a traffic scheduling method, device, and storage medium.
Background
A traffic scheduling device is a core component of network equipment that supports quality of service (QoS). As the QoS requirements on network equipment grow, the performance and specifications of traffic scheduling devices rise accordingly, and the services to be supported become more and more complex.
Existing traffic scheduling devices are implemented mainly in hardware; the supported scheduling algorithms are fixed, and different service scenarios are supported by adjusting the configuration parameters of the scheduling algorithms. Service scenarios that cannot be supported by adjusting the configuration parameters of the scheduling algorithms usually can only be supported through a chip iteration, so new service requirements cannot be responded to in a timely manner.
Summary
Embodiments of the present application provide a traffic scheduling method, device, and storage medium to overcome the limitation that the service scenarios of existing traffic scheduling devices are restricted to the scheduling algorithms supported by the device, and to respond to new service requirements in a timely manner.
In a first aspect, an embodiment of the present application provides a traffic scheduling device, including: a bandwidth pre-allocation module, a scheduler module, and a queue management module. The bandwidth pre-allocation module determines bandwidth control parameters according to the input bandwidth of the flow queues of the queue management module, the expected output bandwidth of the service configuration, and the configuration parameters of the corresponding scheduling algorithm, and configures the bandwidth control parameters to the scheduler module; the scheduler module performs traffic scheduling on the flow queues therein according to the bandwidth control parameters. On the one hand, because the bandwidth control parameters are determined according to the input bandwidth of the flow queues of the queue management module, the expected output bandwidth of the service configuration, and the configuration parameters of the corresponding scheduling algorithm, they are adjusted in real time according to actual conditions rather than being fixed. On the other hand, the bandwidth pre-allocation module configures the bandwidth control parameters to the scheduler module, which enables flexible configuration of the service scheduling tree in the scheduler module and dynamic control of the output bandwidth of the nodes at each scheduling level in the service scheduling tree, making the maximum complexity of the traffic scheduling device programmable; this overcomes the limitation that the service scenarios are restricted to the scheduling algorithms supported by the traffic scheduling device and allows timely response to new service requirements.
In a possible implementation, the bandwidth pre-allocation module described above includes: a bandwidth measurement sub-module and a bandwidth allocation sub-module. The bandwidth measurement sub-module is used to monitor the input bandwidth and expected output bandwidth of the flow queues of the queue management module and to transmit them to the bandwidth allocation sub-module; the bandwidth allocation sub-module is used to determine the bandwidth control parameters according to the input bandwidth of the flow queue, the expected output bandwidth, and the configuration parameters of the corresponding scheduling algorithm, and to configure the bandwidth control parameters to the scheduler module. The bandwidth measurement sub-module is implemented in hardware, and the bandwidth allocation sub-module is implemented in software. In this embodiment, in the software-hardware cooperative traffic scheduling device architecture, the bandwidth measurement sub-module has a simple structure, so the same logic resources can support higher performance and higher accuracy, while the service scheduling tree is implemented by the bandwidth allocation sub-module through software algorithms; this flexibly supports more complex service scheduling trees and scheduling algorithms, and supporting new service features requires only a software upgrade.
Optionally, the bandwidth measurement sub-module and the bandwidth allocation sub-module communicate with each other through either of the following buses: a PCIE bus or a chip-internal bus.
In a possible implementation, the bandwidth allocation sub-module described above may be specifically used to: determine the bandwidth control parameters according to the subordination relationships between nodes at each scheduling level in the service scheduling tree, the scheduling algorithms and configuration parameters, and the input bandwidth and expected output bandwidth of the flow queue, and configure the bandwidth control parameters to the scheduler module.
Alternatively, the bandwidth allocation sub-module described above may be specifically used to: determine the bandwidth control parameters according to the traffic priority, the subordination relationships between nodes at each scheduling level in the service scheduling tree, the scheduling algorithms and configuration parameters, and the input bandwidth and expected output bandwidth of the flow queue, and configure the bandwidth control parameters to the scheduler module.
In a possible implementation, the bandwidth measurement sub-module described above may be specifically used to: record the system global time stamps of the enqueue data and dequeue data corresponding to each flow queue, and obtain the input bandwidth and expected output bandwidth of the flow queue according to the system global time stamps, where the actual output bandwidth value is used by the bandwidth allocation sub-module to correct the calculated bandwidth control parameters.
In a possible implementation, the bandwidth allocation sub-module described above may also be used to: adjust the WRED parameters of the flow queue according to a preset algorithm based on the input bandwidth and expected output bandwidth of the flow queue, and configure the adjusted WRED parameters to the queue management module; accordingly, the queue management module manages enqueueing and dequeueing of data information in the flow queues according to the adjusted WRED parameters. Because the WRED parameters actually configured to the queue management module are adjusted by a software algorithm according to the real-time input bandwidth and expected output bandwidth of the flow queue, the actual buffer allocation can be ignored when configuring the WRED parameters; to a certain extent, the buffer configured for each flow queue can exceed the actually allocatable buffer size, that is, a certain degree of buffer over-allocation is supported.
In a possible implementation, the scheduler module described above stores a first bandwidth configuration parameter table entry and a second bandwidth configuration parameter table entry. When the bandwidth pre-allocation module configures the bandwidth control parameters to the scheduler module, specifically: the bandwidth pre-allocation module is used to update the bandwidth control parameters into the first bandwidth configuration parameter table entry; the scheduler module is used to update, in a specific time slot of the pipeline, the content of the first bandwidth configuration parameter table entry into the second bandwidth configuration parameter table entry, where the second bandwidth configuration parameter table entry is the entry used when the scheduler module performs operations. In this embodiment, a first bandwidth configuration parameter table entry that can be modified directly by software at any time is added, for example a shadow table entry. The scheduler module updates the content of the first bandwidth configuration parameter table entry into the second bandwidth configuration parameter table entry used by the logic, in a specific time slot of the pipeline executed by the logic, which avoids conflicts between the table entry update operation and the logic execution operation.
Optionally, the scheduler module described above supports at least one of the following scheduling algorithms: the SP scheduling algorithm, the RR scheduling algorithm, and so on.
Further, the traffic scheduling device also includes a data packet processing module. The queue management module is further used to perform a dequeue operation according to the scheduling result after the scheduler module sends the scheduling result to the queue management module, and to send the dequeued data information to the data packet processing module; the data packet processing module is used to fetch the data corresponding to the dequeued data information from the buffer and output it externally.
In a second aspect, an embodiment of the present application provides a traffic scheduling method applied to a traffic scheduling device. The traffic scheduling device includes: a bandwidth pre-allocation module, a scheduler module, and a queue management module. The method includes: the bandwidth pre-allocation module determines the bandwidth control parameters of each flow queue according to the input bandwidth of each flow queue of the queue management module, the expected output bandwidth of the service configuration, and the configuration parameters of the corresponding scheduling algorithm, and configures the bandwidth control parameters of the flow queue to the scheduler module; the scheduler module performs traffic scheduling on the flow queues in the queue management module according to the bandwidth control parameters.
Based on the same inventive concept, since the principle by which the traffic scheduling method solves the problem corresponds to the solution in the device design of the first aspect, the implementation of the traffic scheduling method can refer to the implementation of the device, and repeated details are not described again.
In a third aspect, an embodiment of the present application provides a traffic scheduling device, including: a memory and a processor. The memory stores a computer program executable by the processor; when the processor reads and executes the computer program, the processor is caused to execute the method according to any implementation of the second aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program; the computer program includes at least one piece of code that can be executed by a computer to control the computer to execute any one of the methods described above.
In a fifth aspect, an embodiment of the present application provides a program which, when executed by a computer, is used to execute any one of the methods described above.
The foregoing program may be stored in whole or in part on a storage medium packaged together with the processor, or may be stored in part or in whole in a memory not packaged together with the processor.
In a sixth aspect, an embodiment of the present application provides a chip storing a computer program; when the computer program is executed by a processor, the method according to the second aspect of the embodiments of the present application is executed.
These and other aspects of the present application will be clearer and easier to understand in the description of the embodiment(s) below.
Brief Description of the Drawings
FIG. 1 is an implementation scheme of an existing traffic scheduling device;
FIG. 2 is an example diagram of the service scheduling tree of a hardware scheduler;
FIG. 3 is a schematic structural diagram of a traffic scheduling device provided by an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a traffic scheduling device provided by another embodiment of the present application;
FIG. 5 is a flowchart of a traffic scheduling method provided by an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a traffic scheduling device provided by yet another embodiment of the present application.
Detailed Description
First, some of the technical terms involved in the embodiments of the present application are explained.
Round robin (RR) scheduling algorithm: the flow queues are scheduled cyclically; each round of scheduling proceeds from flow queue 1 to flow queue N, where N is the total number of flow queues, and then the cycle restarts.
Deficit round robin (DRR) scheduling algorithm: each flow queue is allocated a constant (a time slice proportional to its weight) and a variable (the deficit). The constant reflects the long-term average number of bytes that the flow queue is allowed to send. The initial value of the variable is zero, and it is reset to zero when the flow queue is empty. When DRR scheduling serves a new flow queue, the scheduler resets the counter that indicates the number of bytes already sent from that flow queue in the current cycle.
Strict priority (SP) scheduling algorithm: data of the higher-priority flow queues is sent strictly in priority order from high to low; only when the higher-priority flow queues are empty is the data of the lower-priority flow queues sent.
FIG. 1 is an implementation scheme of an existing traffic scheduling device. Referring to FIG. 1, the traffic scheduling device 10 includes a hardware scheduler 11, a queue manager 12, and a data packet processor 13. All supported service scenarios are defined in the hardware scheduler 11, and the scheduling hierarchy in the hardware scheduler 11 is fixed.
The fixed scheduling hierarchy of the hardware scheduler 11 can be understood from the structure of the service scheduling tree illustrated in FIG. 2. As shown in FIG. 2, a service scheduling tree contains multiple nodes which, depending on the scheduling level at which they sit, are called the root node, port nodes, user group nodes, user nodes, and flow queue nodes; there is usually only one root node, while the number of nodes of each other type can be one or more. It can be understood that the service scheduling tree includes five scheduling levels, from top to bottom: the scheduling level of the root node, the scheduling level of the port nodes, the scheduling level of the user group nodes, the scheduling level of the user nodes, and the scheduling level of the flow queue nodes. The fixed scheduling hierarchy in the hardware scheduler 11 therefore means that the hierarchical structure of the service scheduling tree of the hardware scheduler 11 is fixed.
In the service scheduling tree, subordination relationships exist between nodes of different scheduling levels, that is, mappings between parent nodes and child nodes. In the prior art, from top to bottom, the subordination relationship between lower-layer nodes and upper-layer nodes can be configured flexibly within limits; each node corresponds to configuration parameters of the scheduling algorithm, for example the ratio value of the DRR scheduling algorithm or the priority of the SP scheduling algorithm, and the maximum scheduling bandwidth the node can obtain can also be configured flexibly. The queue manager 12 performs enqueue and dequeue management and buffer allocation for the flow queues (FlowQueue), where the buffer allocation can be configured per service flow based on a fixed algorithm such as weighted random early detection (WRED).
Specifically, the key workflow of the traffic scheduling device 10 is as follows:
1. After the data packet processor 13 receives data and stores the data in the buffer, it notifies the queue manager 12 of the enqueued data information; the queue manager 12 enqueues the data information according to the service flow and generates flow queue status update messages. Different services correspond to different service flows, and a service flow is the data involved in that service. The enqueued data information includes the identifier of the data stored in the buffer. Enqueuing the data information according to the service flow means that the queue manager 12 distinguishes the service flow of the enqueued data information and stores it into different flow queues.
2. The queue manager 12 sends the status update message to the hardware scheduler 11.
3. The hardware scheduler 11 updates, according to the status update message, the schedulable status of the corresponding nodes at each scheduling level of the service scheduling tree shown in FIG. 2.
4. The hardware scheduler 11 selects a suitable node level by level according to the schedulable status of each node on the updated service scheduling tree, the configured scheduling algorithm, and the related configuration parameters, and finally outputs one flow queue ID as the scheduling result. The scheduling process of the hardware scheduler 11 is cyclic and continuous, and each round of scheduling outputs one scheduling result.
5. The hardware scheduler 11 transmits the scheduling result to the queue manager 12, and the queue manager 12 performs one dequeue operation according to the scheduling result.
6. The queue manager 12 sends the dequeued data information to the data packet processor 13, and the data packet processor 13 fetches the data corresponding to the dequeued data information from the buffer and outputs it externally.
In the above solution, the configuration parameters of the scheduling algorithm have to be configured and determined before any traffic arrives, and the scheduling levels and the subordination relationships of nodes between different scheduling levels have to be implemented according to the union of all service scenarios; the configuration relationships are complex, the resource cost is high, and the application scenarios are limited by the maximum complexity of the hardware implementation.
Therefore, based on the above technical problems, the embodiments of the present application provide a traffic scheduling method, device, and storage medium to overcome the limitation that the service scenarios of existing traffic scheduling devices are restricted to the scheduling algorithms supported by the device, and to respond to new service requirements in a timely manner.
The solutions provided by the embodiments of the present application can be applied to equipment that requires high-performance, complex service scheduling, such as high-end routers and high-end DC switches, as well as to multi-core CPU architectures and other scenarios in which hardware support for high-performance resource scheduling and load balancing is needed.
FIG. 3 is a schematic structural diagram of a traffic scheduling device provided by an embodiment of the present application. As shown in FIG. 3, the traffic scheduling device 20 includes: a bandwidth pre-allocation module 21, a scheduler module 22, and a queue management module 23, where:
The bandwidth pre-allocation module 21 is used to determine the bandwidth control parameters of each flow queue according to the input bandwidth of each flow queue of the queue management module 23, the expected output bandwidth of the service configuration, and the configuration parameters of the corresponding scheduling algorithm, and to configure the bandwidth control parameters of the flow queues to the scheduler module 22.
For example, service use case 1: in the service scheduling tree, the input bandwidth of flow queue 1 is 300 megabits per second (Mbit/s, Mbps), and the input bandwidth of flow queue 2 is 200 Mbps. The two flow queues belong to the same user node X, that is, the parent node of flow queue 1 and flow queue 2 is user node X; the expected output bandwidth configured by the service for user node X is 150 Mbps, and flow queue 1 and flow queue 2 are configured for 1:2 deficit round robin (DRR) scheduling within user node X. The bandwidth control parameters of the flow queues are then: flow queue 1 is allocated a scheduling bandwidth of 50 Mbps, and flow queue 2 is allocated a scheduling bandwidth of 100 Mbps. The bandwidth control parameters of flow queue 1 and flow queue 2 are configured into the scheduler module 22.
The scheduler module 22 is used to perform traffic scheduling on the flow queues in the queue management module 23 according to the bandwidth control parameters.
Specifically, the scheduler module 22 maintains the schedulable status of the nodes at the relevant levels of the scheduling tree according to the schedulable status of the flow queue nodes fed back by the queue management module 23, and at the same time performs continuous, periodic traffic scheduling of the flow queue nodes according to the bandwidth control parameters; each round of scheduling outputs one flow queue ID as the scheduling result, which is transmitted to the queue management module 23.
The queue management module 23 is used to perform one dequeue operation according to the scheduling result and to send the dequeued data information to the data packet processing module 24. The dequeued data information includes information such as the length of the head-of-queue data and its storage address.
The data packet processing module 24 is used to fetch the data corresponding to the dequeued data information from the buffer and output it externally.
Comparing the structures shown in FIG. 1 and FIG. 3, the differences at least include: in the structure shown in FIG. 3, the bandwidth pre-allocation module 21 is newly added.
Since the bandwidth control parameters of the flow queues are determined by the bandwidth pre-allocation module 21 according to the input bandwidth of each flow queue of the queue management module 23, the expected output bandwidth of the service configuration, and the configuration parameters of the corresponding scheduling algorithm, those skilled in the art can understand that the bandwidth control parameters of the flow queues are adjusted in real time according to the actually received traffic and are not fixed.
In addition, the bandwidth pre-allocation module 21 configures the bandwidth control parameters of the flow queues to the scheduler module 22. The scheduler module 22 allows the bandwidth control parameters of each node at each scheduling level of the service scheduling tree to be adjusted in real time by software. The scheduler module 22 does not need to support a service scheduling tree and scheduling algorithms whose complexity matches that of the service as in the software algorithm; it only needs to support a simple round robin (RR) scheduling algorithm or the like for each node. Even the service scheduling levels and the number of nodes in the software algorithm can be expanded flexibly.
As in service use case 1 above, flow queue 1 and flow queue 2 need to share the expected output bandwidth of 150 Mbps obtained by user node X in a weight ratio of 1:2. In the existing solution, user node X needs to support the DRR scheduling algorithm and the corresponding weight configuration, and the hardware scheduler 11 needs to schedule according to the DRR scheduling algorithm and record the scheduling opportunities allocated to each queue in order to guarantee that flow queue 1 and flow queue 2 obtain the corresponding scheduling bandwidth in proportion to their weights. In the embodiment of the present application, however, the bandwidth pre-allocation module 21 uses a software algorithm to calculate in real time, according to the expected output bandwidth of 150 Mbps obtained by user node X and information such as the input bandwidths and weights of flow queue 1 and flow queue 2, that the scheduling bandwidth allocatable to flow queue 1 is 50 Mbps and the scheduling bandwidth allocatable to flow queue 2 is 100 Mbps. In this way, the scheduler module 22 only needs to perform simple RR scheduling and integer bandwidth control of the flow queues to guarantee that flow queue 1 and flow queue 2 obtain their actual output bandwidths. Therefore, the embodiment of the present application can simplify the design of the scheduler module.
Optionally, the scheduler module 22 supports, but is not limited to, at least one of the following scheduling algorithms: the SP scheduling algorithm, the RR scheduling algorithm, and so on.
It can be understood that the bandwidth pre-allocation module 21 can flexibly configure the service scheduling tree in the scheduler module 22 and dynamically control the output bandwidth of the nodes at each scheduling level in the service scheduling tree, so that the scheduling algorithm of the traffic scheduling device is programmable, thereby avoiding the limitation of the service scenarios of the traffic scheduling device. The service scheduling tree is, for example, as shown in FIG. 2, but the embodiment of the present application is not limited thereto.
In the embodiment of the present application, the bandwidth pre-allocation module determines the bandwidth control parameters according to the input bandwidth of the flow queues of the queue management module, the expected output bandwidth of the service configuration, and the configuration parameters of the corresponding scheduling algorithm, and configures the bandwidth control parameters to the scheduler module; the scheduler module performs traffic scheduling on the flow queues therein according to the bandwidth control parameters. On the one hand, because the bandwidth control parameters are determined according to the input bandwidth of the flow queues of the queue management module, the expected output bandwidth of the service configuration, and the configuration parameters of the corresponding scheduling algorithm, they are adjusted in real time according to actual conditions rather than being fixed. On the other hand, the bandwidth pre-allocation module configures the bandwidth control parameters to the scheduler module, which enables flexible configuration of the service scheduling tree in the scheduler module and dynamic control of the output bandwidth of the nodes at each scheduling level in the service scheduling tree, making the maximum complexity of the traffic scheduling device programmable; this overcomes the limitation that the service scenarios are restricted to the scheduling algorithms supported by the traffic scheduling device and allows timely response to new service requirements.
在上述基础上,一种具体实现方式中,如图4所示,在流量调度设备30中,带宽预分配模块21可以包括:带宽计量子模块211和带宽分配子模块212。其中,带宽计量子模块211,用于监控队列管理模块23的流队列的输入带宽及业务配置的期望输出带宽,并将该输入带宽及期望输出带宽传输至带宽分配子模块212;带宽分配子模块212,用于根据输入带宽、期望输出带宽以及相应的调度算法的配置参数,确定带宽控制参数,并将带宽控制参数配置给调度器模块22。
可选地,带宽计量子模块211通过硬件实现,带宽分配子模块212通过软件实现。示例性地,带宽计量子模块211为按流队列设置的一组计数器,记录每个流队列的实时队列长度,入队和出队的包长等信息,并可以根据这些信息计算得到流队列的输入带宽。带宽分配子模块212则属于运行在芯片内嵌或单板上CPU的一个软件模块。带宽分配子模块212根据输入带宽、期望输出带宽及相应的调度算法的配置参数进行分析计算,确定带宽控制参数,并将其配置给调度器模块22。该实施例中,软硬件协同的流量调度设备架构,其中带宽计量子模块211结构简单,相同的逻辑资源可以支持更高性能和更高精度,而业务调度树由带宽分配子模块212通过软件算法实现,可以灵活支持更复杂的业务调度树和调度算法,且支持新的业务特性仅需要软件升级即可。
进一步地,带宽计量子模块211与带宽分配子模块212之间可以通过以下任一总线互通:高速串行计算机扩展总线标准(peripheral component interconnect express,PCIE)总线,芯片内部总线,等等。其中,带宽分配子模块212可以以芯片内嵌中央处理器核(central processing unit core,CPU CORE)的方式实现,此时带宽分配结果的分析计算不需要使用单板CPU。带宽计量子模块211与带宽分配子模块212间的信息互通可以通过芯片内部总线实现。
对于带宽分配子模块212确定带宽控制参数的实现方式,可包括多种方案,在此进行示例说明。
第一种方案中,带宽分配子模块212,用于:根据业务调度树中各调度层次的节点间从属关系,调度算法和配置参数,以及流队列的输入带宽和业务配置的期望输出带宽,确定带宽控制参数,并将带宽控制参数配置给调度器模块22。具体例子可参考前述业务用例1。
第二种方案中,带宽分配子模块212,用于:根据流量优先级、业务调度树中各调度层次的节点的从属关系,调度算法和配置参数,以及流队列的输入带宽和业务配置的期望输出带宽,确定带宽控制参数,并将该带宽控制参数配置给调度器模块22。例如,业务用例2:业务调度树中,流队列3的输入带宽为100Mbps,流队列4的输入带宽为200Mbps。此两个流队列属于同一个用户节点Y,且业务为该用户节点Y配置的期望输出带宽为150Mbps,同时配置流队列3与流队列4在用户节点Y中为SP调度,且流队列3为高优先级,则流队列的带宽控制参数为:流队列3分配带宽100Mbps,流队列4分配带宽50Mbps。流队列3和流队列4的带宽控制参数被配置到调度器模块22中。
Because determining the bandwidth control parameters takes a certain amount of time, and considering that newly received high-priority traffic may not yet have been observed by the bandwidth metering sub-module 211 while the software of the bandwidth allocation sub-module 212 is running on the embedded or board CPU, if such high-priority traffic requires a low-latency guarantee, a certain amount of bandwidth needs to be reserved for high-priority traffic in the bandwidth allocation so that newly received high-priority traffic can pass through preferentially.
The difference between the two schemes above is that the second scheme additionally takes traffic priority into account.
In some embodiments, the bandwidth metering sub-module 211 may be configured to record a system-global timestamp for the enqueued and dequeued data of each flow queue, and to derive the input bandwidth and the actual output bandwidth of the flow queue from the system-global timestamps, where the actual output bandwidth is used by the bandwidth allocation sub-module 212 to correct the computed bandwidth control parameters. In other words, while recording in real time the packet lengths of the enqueued and dequeued data of each flow queue in the queue management module 23, the bandwidth metering sub-module 211 attaches the system-global timestamp to the recorded raw data. For the roles of the actual output bandwidth and the expected output bandwidth, refer to the related art; they are not repeated in the embodiments of the present application.
For example, the input bandwidth of a flow queue = (the sum of the packet lengths of the data enqueued to the flow queue in the current round) / (the duration of the current statistics period), where the duration of the current statistics period = (the system-global timestamp of the last enqueued data of the flow queue in the current round) − (the system-global timestamp of the first enqueued data of the flow queue in the current round).
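As an illustrative sketch only, assuming per-round statistics and illustrative names (the real counters are hardware), the formula above can be implemented as follows.

```python
class FlowQueueMeter:
    """Per-flow-queue counters with a system-global timestamp attached to every
    enqueue record; input bandwidth is computed per round as total enqueued bits
    divided by the span between the round's first and last timestamps."""

    def __init__(self):
        self.first_ts = None
        self.last_ts = None
        self.enqueued_bits = 0

    def on_enqueue(self, pkt_bits, global_ts):
        if self.first_ts is None:
            self.first_ts = global_ts
        self.last_ts = global_ts
        self.enqueued_bits += pkt_bits

    def close_round(self):
        """Return the round's input bandwidth in bits per second and reset."""
        if self.first_ts is None or self.last_ts == self.first_ts:
            bw = 0.0
        else:
            bw = self.enqueued_bits / (self.last_ts - self.first_ts)
        self.first_ts = self.last_ts = None
        self.enqueued_bits = 0
        return bw
```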
Because the bandwidth metering sub-module 211 is a traffic monitoring component that supports the system-global timestamp, it can accurately count the input bandwidth and the expected output bandwidth of the flow queues in real time, ensuring that the computation of the input bandwidth and the expected output bandwidth is correct and complete. Further, based on the input bandwidth and the expected output bandwidth of the flow queues, the bandwidth allocation sub-module 212 can, after analysis by its software algorithm, determine accurate bandwidth control parameters and also predict traffic behavior, and the prediction results can be applied in real time to the computation of the scheduling algorithm.
On the basis of the above embodiments, the bandwidth allocation sub-module 212 may further be configured to adjust the WRED parameters of the flow queues according to a preset algorithm based on the input bandwidth of the flow queues and the expected output bandwidth configured by the service, and to configure the adjusted WRED parameters to the queue management module 23. The WRED parameters may include at least one of the following: the minimum threshold, the maximum threshold, the mark probability denominator, and so on.
Correspondingly, the queue management module 23 manages the enqueueing and dequeueing of data information in the flow queues according to the adjusted WRED parameters.
Because the WRED parameters actually configured to the queue management module are parameters adjusted by the software algorithm according to the real-time input bandwidth and expected output bandwidth of the flow queues, the actual buffer allocation does not need to be considered when configuring the WRED parameters; to a certain extent, the buffer configured for the flow queues may exceed the buffer size that can actually be allocated, that is, a certain degree of buffer over-subscription is supported. In actual usage scenarios, since the received traffic changes in real time, the real-time WRED parameters of the different flow queues can be flexibly adjusted, after analysis by the software algorithm, according to the current traffic. Overall, this ensures that actually received traffic has a large amount of available buffer while inactive flow queues only need a small reserved buffer, effectively improving the buffer utilization of the system. At the same time, configuring the adjusted WRED parameters to the queue management module effectively supports dynamic queue length management.
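The application does not specify the preset adjustment algorithm; the following is purely an assumed example of one possible policy, scaling each queue's WRED thresholds by its share of measured input bandwidth so that idle queues keep only a small reserve, with all names and constants illustrative.

```python
def adjust_wred(input_bw, total_buffer_cells, reserve_cells=64):
    """Scale per-queue WRED thresholds by each queue's share of measured input
    bandwidth; inactive queues keep only a small reserved buffer, so the
    configured thresholds may over-subscribe the physical buffer."""
    total_in = sum(input_bw.values())
    params = {}
    for q, bw in input_bw.items():
        if total_in > 0 and bw > 0:
            share = bw / total_in
            max_th = max(int(total_buffer_cells * share), reserve_cells)
        else:
            max_th = reserve_cells            # inactive queue: minimal reserve
        params[q] = {
            "min_threshold": max_th // 2,
            "max_threshold": max_th,
            "mark_prob_denominator": 10,      # illustrative default
        }
    return params
```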
In some embodiments, the scheduler module 22 stores a first bandwidth configuration parameter entry and a second bandwidth configuration parameter entry. In this case, the bandwidth pre-allocation module 21 configuring the bandwidth control parameters to the scheduler module 22 may include: the bandwidth pre-allocation module 21 updates the bandwidth control parameters into the first bandwidth configuration parameter entry; at a specific pipeline time slot, the scheduler module 22 updates the contents of the first bandwidth configuration parameter entry into the second bandwidth configuration parameter entry, the second bandwidth configuration parameter entry being the entry used by the scheduler module 22 when it performs its operations.
This embodiment adds a first bandwidth configuration parameter entry, for example a shadow entry, that software can modify directly at any time. The scheduler module updates the contents of the first bandwidth configuration parameter entry into the second bandwidth configuration parameter entry used by the logic at a specific time slot of the logic execution pipeline, which avoids conflicts between configuration-entry update operations and logic execution operations.
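A minimal sketch of the shadow/active entry mechanism described above is given below; names are illustrative, and the actual entries are hardware tables rather than Python objects.

```python
class ShadowedConfigTable:
    """Double-buffered bandwidth configuration: software writes the shadow
    entry at any time; the scheduler copies shadow -> active only in its
    designated pipeline slot, so the logic never reads a half-written entry."""

    def __init__(self, num_nodes):
        self.shadow = [0] * num_nodes   # written by the allocation software
        self.active = [0] * num_nodes   # read by the scheduling logic
        self.dirty = set()

    def software_write(self, node, rate):
        self.shadow[node] = rate
        self.dirty.add(node)

    def pipeline_commit_slot(self):
        """Called by the scheduler in the pipeline slot reserved for updates."""
        for node in self.dirty:
            self.active[node] = self.shadow[node]
        self.dirty.clear()

    def scheduler_read(self, node):
        return self.active[node]
```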
FIG. 5 is a flowchart of a traffic scheduling method according to an embodiment of the present application. The embodiments of the present application provide a traffic scheduling method applied to a traffic scheduling device, the traffic scheduling device including a bandwidth pre-allocation module, a scheduler module, and a queue management module. As shown in FIG. 5, the method includes:
S501: The bandwidth pre-allocation module determines bandwidth control parameters for each flow queue of the queue management module according to the input bandwidth of the flow queue, the expected output bandwidth configured by the service, and the configuration parameters of the corresponding scheduling algorithm, and configures the bandwidth control parameters of the flow queue to the scheduler module.
S502: The scheduler module performs traffic scheduling on the flow queues in the queue management module according to the bandwidth control parameters.
The traffic scheduling method described in the embodiments of the present application can be executed by the traffic scheduling device in any of the foregoing device embodiments; its implementation principles and technical effects are similar and are not repeated here.
On the basis of the above embodiments, in some embodiments the bandwidth pre-allocation module includes a bandwidth metering sub-module and a bandwidth allocation sub-module. Correspondingly, S501, in which the bandwidth pre-allocation module determines the bandwidth control parameters of each flow queue according to the input bandwidth of the flow queue of the queue management module, the expected output bandwidth configured by the service, and the configuration parameters of the corresponding scheduling algorithm, and configures the bandwidth control parameters of the flow queue to the scheduler module, may include: the bandwidth metering sub-module monitors the input bandwidth and the expected output bandwidth of the flow queues of the queue management module, and transmits the input bandwidth and the expected output bandwidth of the flow queues to the bandwidth allocation sub-module; the bandwidth allocation sub-module determines the bandwidth control parameters according to the input bandwidth of the flow queues, the expected output bandwidth, and the configuration parameters of the corresponding scheduling algorithm, and configures the bandwidth control parameters to the scheduler module.
Optionally, the bandwidth metering sub-module and the bandwidth allocation sub-module communicate over either of the following buses: a PCIE bus or an on-chip bus.
In one implementation, the bandwidth allocation sub-module determining the bandwidth control parameters according to the input bandwidth of the flow queues, the expected output bandwidth, and the configuration parameters of the corresponding scheduling algorithm, and configuring the bandwidth control parameters to the scheduler module, may include: the bandwidth allocation sub-module determines the bandwidth control parameters according to the subordination relationships between nodes at each scheduling level of the service scheduling tree, the scheduling algorithms and configuration parameters, the input bandwidth of the flow queues, and the expected output bandwidth, and configures the bandwidth control parameters to the scheduler module.
In another implementation, the bandwidth allocation sub-module determining the bandwidth control parameters according to the input bandwidth of the flow queues, the expected output bandwidth, and the configuration parameters of the corresponding scheduling algorithm, and configuring the bandwidth control parameters to the scheduler module, may include: the bandwidth allocation sub-module determines the bandwidth control parameters according to the traffic priority, the subordination relationships between nodes at each scheduling level of the service scheduling tree, the scheduling algorithms and configuration parameters, the input bandwidth of the flow queues, and the expected output bandwidth, and configures the bandwidth control parameters to the scheduler module.
Further, the bandwidth metering sub-module monitoring the input bandwidth and the expected output bandwidth of the flow queues of the queue management module and transmitting them to the bandwidth allocation sub-module may include: the bandwidth metering sub-module records a system-global timestamp for the enqueued and dequeued data of each flow queue; the bandwidth metering sub-module derives the input bandwidth and the expected output bandwidth of the flow queues from the system-global timestamps.
Still further, the traffic scheduling method may also include: the bandwidth allocation sub-module adjusts the WRED parameters of the flow queues according to a preset algorithm based on the input bandwidth of the flow queues and the expected output bandwidth, and configures the adjusted WRED parameters to the queue management module. Correspondingly, the queue management module manages the enqueueing and dequeueing of data information in the flow queues according to the adjusted WRED parameters.
In some embodiments, the scheduler module stores a first bandwidth configuration parameter entry and a second bandwidth configuration parameter entry. In this case, the bandwidth pre-allocation module configuring the bandwidth control parameters to the scheduler module may include: the bandwidth pre-allocation module updates the bandwidth control parameters into the first bandwidth configuration parameter entry; at a specific pipeline time slot, the scheduler module updates the contents of the first bandwidth configuration parameter entry into the second bandwidth configuration parameter entry, the second bandwidth configuration parameter entry being the entry used by the scheduler module when it performs its operations.
Optionally, the scheduler module supports at least one of the following scheduling algorithms: the SP scheduling algorithm and the round robin (RR) scheduling algorithm.
In some embodiments, the traffic scheduling device may further include a packet processing module. Correspondingly, the method may further include: after the scheduler module sends the scheduling result to the queue management module, the queue management module performs one dequeue operation according to the scheduling result and sends the dequeued data information to the packet processing module; the packet processing module retrieves the data corresponding to the dequeued data information from the buffer and outputs it.
In some embodiments, in terms of hardware implementation, the above bandwidth pre-allocation module, scheduler module, and queue management module may be embedded in a processor. Alternatively, the above bandwidth pre-allocation module may be a processor, while the above scheduler module and queue management module are implemented in hardware.
Correspondingly, as shown in FIG. 6, the traffic scheduling device 60 of this embodiment may include a memory 61 and a processor 62, where the memory 61 is configured to store a computer program executable by the processor 62. When the processor 62 reads and executes the computer program, the processor 62 is caused to execute the method described above, or the processor 62 is caused to execute the steps performed by the bandwidth pre-allocation module in the method described above.
The embodiments of the present application further provide a computer-readable storage medium storing a computer program. The computer program includes at least one piece of code which can be executed by a processor to implement the method described in any of the foregoing method embodiments, or to implement the steps performed by the bandwidth pre-allocation module in the method described in any of the foregoing method embodiments.
The computer program may be implemented in the form of a software functional unit and sold or used as an independent product, and the memory may be any form of computer-readable storage medium. Based on this understanding, all or part of the technical solutions of the present application may be embodied in the form of a software product, including several instructions for causing a computer device, which may specifically be a processor, to execute all or part of the steps of the methods in the embodiments of the present application. The aforementioned computer-readable storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
It should be noted that the division into modules in the embodiments of the present application is illustrative and is merely a division by logical function; other divisions are possible in actual implementation. The functional modules in the embodiments of the present application may be integrated into one processing module, each module may exist physically on its own, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module.
If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disc.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented by software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired means (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless means (such as infrared, radio, or microwave). The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device such as a server or data center integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)).

Claims (22)

  1. A traffic scheduling device, comprising: a bandwidth pre-allocation module, a scheduler module, and a queue management module, wherein:
    the bandwidth pre-allocation module is configured to determine bandwidth control parameters of each flow queue of the queue management module according to an input bandwidth of the flow queue, an expected output bandwidth configured by a service, and configuration parameters of a corresponding scheduling algorithm, and to configure the bandwidth control parameters of the flow queue to the scheduler module; and
    the scheduler module is configured to perform traffic scheduling on the flow queues in the queue management module according to the bandwidth control parameters.
  2. The device according to claim 1, wherein the bandwidth pre-allocation module comprises a bandwidth metering sub-module and a bandwidth allocation sub-module, wherein:
    the bandwidth metering sub-module is configured to monitor the input bandwidth and the expected output bandwidth of the flow queues of the queue management module, and to transmit the input bandwidth and the expected output bandwidth of the flow queues to the bandwidth allocation sub-module; and
    the bandwidth allocation sub-module is configured to determine the bandwidth control parameters according to the input bandwidth of the flow queues, the expected output bandwidth, and the configuration parameters of the corresponding scheduling algorithm, and to configure the bandwidth control parameters to the scheduler module.
  3. The device according to claim 2, wherein the bandwidth metering sub-module and the bandwidth allocation sub-module communicate over either of the following buses:
    a peripheral component interconnect express (PCIE) bus, or an on-chip bus.
  4. The device according to claim 2 or 3, wherein the bandwidth allocation sub-module is specifically configured to:
    determine the bandwidth control parameters according to subordination relationships between nodes at each scheduling level of a service scheduling tree, scheduling algorithms and configuration parameters, the input bandwidth of the flow queues, and the expected output bandwidth, and configure the bandwidth control parameters to the scheduler module.
  5. The device according to claim 2 or 3, wherein the bandwidth allocation sub-module is specifically configured to:
    determine the bandwidth control parameters according to a traffic priority, subordination relationships between nodes at each scheduling level of a service scheduling tree, scheduling algorithms and configuration parameters, the input bandwidth of the flow queues, and the expected output bandwidth, and configure the bandwidth control parameters to the scheduler module.
  6. The device according to any one of claims 2-5, wherein the bandwidth metering sub-module is specifically configured to:
    record a system-global timestamp for enqueued data and dequeued data of each flow queue; and
    derive the input bandwidth and the expected output bandwidth of the flow queue from the system-global timestamps.
  7. The device according to any one of claims 2-6, wherein the bandwidth allocation sub-module is further configured to:
    adjust weighted random early detection (WRED) parameters of the flow queues according to a preset algorithm based on the input bandwidth of the flow queues and the expected output bandwidth, and configure the adjusted WRED parameters to the queue management module;
    correspondingly, the queue management module manages enqueueing and dequeueing of data information in the flow queues according to the adjusted WRED parameters.
  8. The device according to any one of claims 1-7, wherein the scheduler module stores a first bandwidth configuration parameter entry and a second bandwidth configuration parameter entry;
    wherein, when the bandwidth pre-allocation module configures the bandwidth control parameters to the scheduler module:
    the bandwidth pre-allocation module is configured to update the bandwidth control parameters into the first bandwidth configuration parameter entry; and
    the scheduler module is configured to update, at a specific pipeline time slot, contents of the first bandwidth configuration parameter entry into the second bandwidth configuration parameter entry, the second bandwidth configuration parameter entry being the entry used by the scheduler module when it performs its operations.
  9. The device according to any one of claims 1-8, wherein the scheduler module supports at least one of the following scheduling algorithms:
    a strict priority (SP) scheduling algorithm, or a round robin (RR) scheduling algorithm.
  10. The device according to any one of claims 1-9, further comprising: a packet processing module;
    the queue management module is further configured to, after the scheduler module sends a scheduling result to the queue management module, perform one dequeue operation according to the scheduling result and send the dequeued data information to the packet processing module; and
    the packet processing module is configured to retrieve data corresponding to the dequeued data information from a buffer and output it.
  11. A traffic scheduling method, applied to a traffic scheduling device, the traffic scheduling device comprising a bandwidth pre-allocation module, a scheduler module, and a queue management module, the method comprising:
    determining, by the bandwidth pre-allocation module, bandwidth control parameters of each flow queue of the queue management module according to an input bandwidth of the flow queue, an expected output bandwidth configured by a service, and configuration parameters of a corresponding scheduling algorithm, and configuring the bandwidth control parameters of the flow queue to the scheduler module; and
    performing, by the scheduler module, traffic scheduling on the flow queues in the queue management module according to the bandwidth control parameters.
  12. The method according to claim 11, wherein the bandwidth pre-allocation module comprises a bandwidth metering sub-module and a bandwidth allocation sub-module, and the determining, by the bandwidth pre-allocation module, the bandwidth control parameters of each flow queue of the queue management module according to the input bandwidth of the flow queue, the expected output bandwidth configured by the service, and the configuration parameters of the corresponding scheduling algorithm, and configuring the bandwidth control parameters of the flow queue to the scheduler module comprises:
    monitoring, by the bandwidth metering sub-module, the input bandwidth and the expected output bandwidth of the flow queues of the queue management module, and transmitting the input bandwidth and the expected output bandwidth of the flow queues to the bandwidth allocation sub-module; and
    determining, by the bandwidth allocation sub-module, the bandwidth control parameters according to the input bandwidth of the flow queues, the expected output bandwidth, and the configuration parameters of the corresponding scheduling algorithm, and configuring the bandwidth control parameters to the scheduler module.
  13. The method according to claim 12, wherein the bandwidth metering sub-module and the bandwidth allocation sub-module communicate over either of the following buses:
    a peripheral component interconnect express (PCIE) bus, or an on-chip bus.
  14. The method according to claim 12 or 13, wherein the determining, by the bandwidth allocation sub-module, the bandwidth control parameters according to the input bandwidth of the flow queues, the expected output bandwidth, and the configuration parameters of the corresponding scheduling algorithm, and configuring the bandwidth control parameters to the scheduler module comprises:
    determining, by the bandwidth allocation sub-module, the bandwidth control parameters according to subordination relationships between nodes at each scheduling level of a service scheduling tree, scheduling algorithms and configuration parameters, the input bandwidth of the flow queues, and the expected output bandwidth, and configuring the bandwidth control parameters to the scheduler module.
  15. The method according to claim 12 or 13, wherein the determining, by the bandwidth allocation sub-module, the bandwidth control parameters according to the input bandwidth of the flow queues, the expected output bandwidth, and the configuration parameters of the corresponding scheduling algorithm, and configuring the bandwidth control parameters to the scheduler module comprises:
    determining, by the bandwidth allocation sub-module, the bandwidth control parameters according to a traffic priority, subordination relationships between nodes at each scheduling level of a service scheduling tree, scheduling algorithms and configuration parameters, the input bandwidth of the flow queues, and the expected output bandwidth, and configuring the bandwidth control parameters to the scheduler module.
  16. The method according to any one of claims 12-15, wherein the monitoring, by the bandwidth metering sub-module, the input bandwidth and the expected output bandwidth of the flow queues of the queue management module, and transmitting the input bandwidth and the expected output bandwidth of the flow queues to the bandwidth allocation sub-module comprises:
    recording, by the bandwidth metering sub-module, a system-global timestamp for enqueued data and dequeued data of each flow queue; and
    deriving, by the bandwidth metering sub-module, the input bandwidth and the expected output bandwidth of the flow queues from the system-global timestamps.
  17. The method according to any one of claims 12-16, further comprising:
    adjusting, by the bandwidth allocation sub-module, weighted random early detection (WRED) parameters of the flow queues according to a preset algorithm based on the input bandwidth of the flow queues and the expected output bandwidth, and configuring the adjusted WRED parameters to the queue management module; and
    correspondingly, managing, by the queue management module, enqueueing and dequeueing of data information in the flow queues according to the adjusted WRED parameters.
  18. The method according to any one of claims 11-17, wherein the scheduler module stores a first bandwidth configuration parameter entry and a second bandwidth configuration parameter entry;
    wherein the configuring, by the bandwidth pre-allocation module, the bandwidth control parameters to the scheduler module comprises:
    updating, by the bandwidth pre-allocation module, the bandwidth control parameters into the first bandwidth configuration parameter entry; and
    updating, by the scheduler module at a specific pipeline time slot, contents of the first bandwidth configuration parameter entry into the second bandwidth configuration parameter entry, the second bandwidth configuration parameter entry being the entry used by the scheduler module when it performs its operations.
  19. The method according to any one of claims 11-18, wherein the scheduler module supports at least one of the following scheduling algorithms:
    a strict priority (SP) scheduling algorithm, or a round robin (RR) scheduling algorithm.
  20. The method according to any one of claims 11-19, wherein the traffic scheduling device further comprises a packet processing module, and the method further comprises:
    performing, by the queue management module after the scheduler module sends a scheduling result to the queue management module, one dequeue operation according to the scheduling result, and sending the dequeued data information to the packet processing module; and
    retrieving, by the packet processing module, data corresponding to the dequeued data information from a buffer and outputting it.
  21. A traffic scheduling device, comprising: a memory and a processor;
    wherein the memory stores a computer program executable by the processor; and
    when the processor reads and executes the computer program, the processor is caused to execute the method according to any one of claims 11 to 20.
  22. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program, the computer program comprising at least one piece of code executable by a processor to implement the method according to any one of claims 11 to 20.
PCT/CN2019/090921 2019-06-12 2019-06-12 Traffic scheduling method, device and storage medium WO2020248166A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980096976.4A CN113906720B (zh) 2019-06-12 2019-06-12 Traffic scheduling method, device and storage medium
PCT/CN2019/090921 WO2020248166A1 (zh) 2019-06-12 2019-06-12 Traffic scheduling method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/090921 WO2020248166A1 (zh) 2019-06-12 2019-06-12 Traffic scheduling method, device and storage medium

Publications (1)

Publication Number Publication Date
WO2020248166A1 true WO2020248166A1 (zh) 2020-12-17

Family

ID=73780828

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/090921 WO2020248166A1 (zh) 2019-06-12 2019-06-12 Traffic scheduling method, device and storage medium

Country Status (2)

Country Link
CN (1) CN113906720B (zh)
WO (1) WO2020248166A1 (zh)

Also Published As

Publication number Publication date
CN113906720A (zh) 2022-01-07
CN113906720B (zh) 2024-05-10

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19932394; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 19932394; Country of ref document: EP; Kind code of ref document: A1)