CN113906720B - Traffic scheduling method, traffic scheduling device and storage medium - Google Patents


Info

Publication number
CN113906720B
Authority
CN
China
Prior art keywords
bandwidth
module
scheduling
queue
flow
Prior art date
Legal status
Active
Application number
CN201980096976.4A
Other languages
Chinese (zh)
Other versions
CN113906720A (en)
Inventor
沈国明
汤成
李东川
伊学文
谭幸均
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of CN113906720A
Application granted
Publication of CN113906720B
Legal status: Active


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/50 Queue scheduling
    • H04L47/52 Queue scheduling by attributing bandwidth to queues
    • H04L47/56 Queue scheduling implementing delay-aware scheduling
    • H04L47/562 Attaching a time tag to queues
    • H04L47/60 Queue scheduling implementing hierarchical scheduling
    • H04L47/62 Queue scheduling characterised by scheduling criteria
    • H04L47/622 Queue service order
    • H04L47/6225 Fixed service order, e.g. Round Robin
    • H04L47/623 Weighted service order
    • H04L47/625 Queue scheduling characterised by scheduling criteria for service slots or service orders
    • H04L47/6265 Past bandwidth allocation


Abstract

Embodiments of the present application provide a traffic scheduling method, device, and storage medium. The traffic scheduling device includes a bandwidth pre-allocation module, a scheduler module, and a queue management module. The bandwidth pre-allocation module is configured to determine bandwidth control parameters according to the input bandwidth of each flow queue of the queue management module, the expected output bandwidth of the service configuration, and the configuration parameters of the corresponding scheduling algorithm, and to configure the bandwidth control parameters to the scheduler module; the scheduler module is configured to perform traffic scheduling on the flow queues in the queue management module according to the bandwidth control parameters. The embodiments of the present application overcome the limitation that the service scenarios of an existing traffic scheduling device are restricted by the scheduling algorithms it supports, so that new service requirements can be met in time.

Description

Traffic scheduling method, traffic scheduling device and storage medium
Technical Field
Embodiments of the present application relate to communications technologies, and in particular, to a traffic scheduling method, device, and storage medium.
Background
A traffic scheduling device is a core component of a network device that supports quality of service (QoS). As the QoS requirements on network devices grow, traffic scheduling devices must deliver higher performance and larger specifications and support increasingly complex services.
Existing traffic scheduling devices are implemented mainly in hardware: the supported scheduling algorithms are fixed, and different service scenarios are supported by adjusting the configuration parameters of those scheduling algorithms. A service scenario that cannot be supported by adjusting the configuration parameters generally requires a chip iteration, so new service requirements cannot be met in time.
Disclosure of Invention
Embodiments of the present application provide a traffic scheduling method, device, and storage medium to overcome the limitation that the service scenarios of an existing traffic scheduling device are restricted by the scheduling algorithms it supports, and to respond to new service requirements in time.
In a first aspect, an embodiment of the present application provides a traffic scheduling device, including: a bandwidth pre-allocation module, a scheduler module, and a queue management module. The bandwidth pre-allocation module determines bandwidth control parameters according to the input bandwidth of the flow queues of the queue management module, the expected output bandwidth of the service configuration, and the configuration parameters of the corresponding scheduling algorithm, and configures the bandwidth control parameters to the scheduler module; the scheduler module performs traffic scheduling on the flow queues according to the bandwidth control parameters. On the one hand, because the bandwidth control parameters are determined from the input bandwidth of the flow queues, the expected output bandwidth of the service configuration, and the configuration parameters of the corresponding scheduling algorithm, they are adjusted in real time according to actual conditions rather than being fixed. On the other hand, because the bandwidth pre-allocation module configures the bandwidth control parameters to the scheduler module, the service scheduling tree in the scheduler module can be configured flexibly and the output bandwidth of the nodes at each scheduling level can be controlled dynamically, so that the maximum complexity the traffic scheduling device can support becomes programmable. This overcomes the limitation that the service scenarios are restricted by the scheduling algorithms supported by the device, so that new service requirements can be met in time.
In a possible implementation, the bandwidth pre-allocation module includes a bandwidth metering sub-module and a bandwidth allocation sub-module. The bandwidth metering sub-module is configured to monitor the input bandwidth and the expected output bandwidth of the flow queues of the queue management module and to transmit them to the bandwidth allocation sub-module; the bandwidth allocation sub-module is configured to determine the bandwidth control parameters according to the input bandwidth of the flow queues, the expected output bandwidth, and the configuration parameters of the corresponding scheduling algorithm, and to configure the bandwidth control parameters to the scheduler module. The bandwidth metering sub-module is implemented in hardware, and the bandwidth allocation sub-module is implemented in software. In this software/hardware cooperative architecture, the bandwidth metering sub-module has a simple structure, so the same logic resources can support higher performance and higher precision, while the service scheduling tree is realized by the bandwidth allocation sub-module through a software algorithm; service scheduling trees and scheduling algorithms can therefore be supported flexibly, and new service features can be supported by a software upgrade alone.
Optionally, the bandwidth metering sub-module and the bandwidth allocation sub-module communicate with each other through either of the following buses: a PCIE bus or a chip-internal bus.
In a possible implementation, the bandwidth allocation sub-module may be specifically configured to: determine the bandwidth control parameters according to the subordination relationships of the nodes at each scheduling level in the service scheduling tree, the scheduling algorithm and its configuration parameters, and the input bandwidth and expected output bandwidth of the flow queues, and configure the bandwidth control parameters to the scheduler module.
Alternatively, the bandwidth allocation sub-module may be specifically configured to: determine the bandwidth control parameters according to the traffic priority, the subordination relationships of the nodes at each scheduling level in the service scheduling tree, the scheduling algorithm and its configuration parameters, and the input bandwidth and expected output bandwidth of the flow queues, and configure the bandwidth control parameters to the scheduler module.
In a possible implementation, the bandwidth metering sub-module may be specifically configured to: record the system global timestamps of the enqueue data and dequeue data corresponding to each flow queue, and obtain the input bandwidth and the expected output bandwidth of the flow queue from these timestamps. The actual output bandwidth value is used to correct the bandwidth control parameters calculated by the bandwidth allocation sub-module.
In a possible implementation, the bandwidth allocation sub-module may further be configured to: adjust the WRED parameters of the flow queues with a preset algorithm according to the input bandwidth and the expected output bandwidth of the flow queues, and configure the adjusted WRED parameters to the queue management module; accordingly, the queue management module performs enqueue and dequeue management of the data information in the flow queues according to the adjusted WRED parameters. Because the WRED parameters actually configured to the queue management module are adjusted by the software algorithm from the real-time input bandwidth and expected output bandwidth of the flow queues, the actual buffer allocation need not be considered when they are configured, and the buffer configured for each flow queue may exceed the actually allocatable buffer size to a certain extent; that is, a certain degree of buffer oversubscription is supported.
In a possible implementation, the scheduler module stores a first bandwidth configuration parameter table entry and a second bandwidth configuration parameter table entry. When the bandwidth pre-allocation module configures the bandwidth control parameters to the scheduler module, the bandwidth pre-allocation module updates the bandwidth control parameters into the first bandwidth configuration parameter table entry, and the scheduler module updates the content of the first bandwidth configuration parameter table entry into the second bandwidth configuration parameter table entry in a specific pipeline time slot, where the second bandwidth configuration parameter table entry is the entry used when the scheduler module executes its operations. This implementation adds a first bandwidth configuration parameter table entry, such as a shadow entry, that software can modify directly at any time. Because the scheduler module copies the content of the first entry into the second entry used by its logic in a specific time slot of its pipeline, conflicts between the entry-update operation and the logic-execution operation are avoided.
Optionally, the scheduler module supports at least one of the following scheduling algorithms as described above: SP scheduling algorithm, RR scheduling algorithm, etc.
Further, the traffic scheduling device further includes a packet processing module. After the scheduler module sends a scheduling result to the queue management module, the queue management module performs one dequeue operation according to the scheduling result and sends the dequeued data information to the packet processing module; the packet processing module fetches the data corresponding to the dequeued data information from the buffer and outputs it.
In a second aspect, an embodiment of the present application provides a traffic scheduling method applied to a traffic scheduling device. The traffic scheduling device includes a bandwidth pre-allocation module, a scheduler module, and a queue management module. The method includes: the bandwidth pre-allocation module determines the bandwidth control parameters of the flow queues according to the input bandwidth of each flow queue of the queue management module, the expected output bandwidth of the service configuration, and the configuration parameters of the corresponding scheduling algorithm, and configures the bandwidth control parameters of the flow queues to the scheduler module; and the scheduler module performs traffic scheduling on the flow queues in the queue management module according to the bandwidth control parameters.
Based on the same inventive concept, since the principle by which the traffic scheduling method solves the problem corresponds to the device design of the first aspect, the implementation of the traffic scheduling method may refer to the implementation of the device; details are not repeated here.
In a third aspect, an embodiment of the present application provides a traffic scheduling device, including a memory and a processor. The memory stores a computer program executable by the processor; when the computer program is read and executed by the processor, the processor performs the method according to any implementation of the second aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program comprising at least one piece of code executable by a computer to control the computer to perform any of the methods described above.
In a fifth aspect, embodiments of the present application provide a program for performing any of the methods described above when the program is executed by a computer.
The program may be stored in whole or in part on a storage medium packaged with the processor, or in part or in whole on a memory not packaged with the processor.
In a sixth aspect, embodiments of the present application provide a chip storing a computer program which, when executed by a processor, performs the method according to any embodiment of the second aspect.
These and other aspects of the application will be apparent from and elucidated with reference to the embodiment(s) described hereinafter.
Drawings
Fig. 1 is a schematic diagram of an implementation of a traffic scheduling device in the prior art;
Fig. 2 is an exemplary diagram of a service scheduling tree of a hardware scheduler;
Fig. 3 is a schematic structural diagram of a traffic scheduling device according to an embodiment of the present application;
Fig. 4 is a schematic structural diagram of a traffic scheduling device according to another embodiment of the present application;
Fig. 5 is a flowchart of a traffic scheduling method according to an embodiment of the present application;
Fig. 6 is a schematic structural diagram of a traffic scheduling device according to yet another embodiment of the present application.
Detailed Description
First, some technical terms related to the embodiments of the present application will be explained.
A round robin (RR) scheduling algorithm schedules each flow queue in turn: each scheduling round starts from flow queue 1 and proceeds to flow queue N, where N is the total number of flow queues, and then the cycle restarts.
A deficit round robin (DRR) scheduling algorithm assigns a constant (a time slice proportional to the queue's weight) and a variable (the deficit) to each flow queue. The constant reflects the long-term average number of bytes the flow queue may send; the variable starts at zero and is reset to zero whenever the flow queue becomes empty. When DRR scheduling begins to serve a flow queue, the scheduler resets the counter that indicates how many bytes have been sent from that flow queue in the current cycle.
A strict priority (SP) scheduling algorithm transmits data from flow queues in priority order from high to low; data of a lower-priority flow queue is transmitted only when the higher-priority flow queues are empty.
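For readers less familiar with these disciplines, the following Python sketch illustrates RR, DRR, and SP over toy in-memory queues. The queue contents, weights, and priorities are hypothetical illustrations and are not part of the patent.

```python
from collections import deque

# Hypothetical flow queues: each holds packet lengths in bytes.
queues = {1: deque([300, 500]), 2: deque([1500]), 3: deque([200, 200, 200])}

def rr_schedule(queues, order):
    """Round robin: visit queues 1..N in turn, dequeue one packet per visit."""
    out = []
    for qid in order:
        if queues[qid]:
            out.append((qid, queues[qid].popleft()))
    return out

def drr_schedule(queues, quantum, deficit):
    """Deficit round robin: each visit adds a weight-proportional quantum to the
    queue's deficit; packets are sent while the deficit covers their length."""
    out = []
    for qid, q in queues.items():
        if not q:
            deficit[qid] = 0          # deficit is reset when the queue is empty
            continue
        deficit[qid] += quantum[qid]  # quantum is proportional to the weight
        while q and q[0] <= deficit[qid]:
            deficit[qid] -= q[0]
            out.append((qid, q.popleft()))
    return out

def sp_schedule(queues, priority):
    """Strict priority: always serve the highest-priority non-empty queue."""
    for qid in sorted(queues, key=priority.get):  # lower value = higher priority
        if queues[qid]:
            return (qid, queues[qid].popleft())
    return None

# One RR round over queues 1..3:
print(rr_schedule(queues, order=[1, 2, 3]))   # -> [(1, 300), (2, 1500), (3, 200)]
```

Here `order`, `quantum`, `deficit`, and `priority` are caller-supplied per-queue structures; they stand in for the configuration parameters that a real scheduler would hold.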
Fig. 1 is a schematic diagram of an implementation of a conventional traffic scheduling device. Referring to fig. 1, the traffic scheduling device 10 includes a hardware scheduler 11, a queue manager 12, and a packet processor 13. The hardware scheduler 11 defines all supportable service scenarios, and the scheduling hierarchy in the hardware scheduler 11 is fixed.
The fixed scheduling hierarchy in the hardware scheduler 11 can be understood from the structure of the service scheduling tree illustrated in fig. 2. As shown in fig. 2, a service scheduling tree includes a plurality of nodes which, according to their scheduling levels, are referred to as the root node, port nodes, user group nodes, user nodes, and flow queue nodes; there is usually only one root node, while the number of nodes of each other type may be one or more. In other words, the service scheduling tree contains five scheduling levels, from top to bottom: the level of the root node, the level of the port nodes, the level of the user group nodes, the level of the user nodes, and the level of the flow queue nodes. A fixed scheduling hierarchy in the hardware scheduler 11 therefore means that the hierarchy of its service scheduling tree is fixed.
In the service scheduling tree there is a subordination relationship between nodes of different scheduling levels, that is, a mapping between parent nodes and child nodes. In the prior art, the subordination of lower-level nodes to upper-level nodes can be configured flexibly within limits; each node carries the configuration parameters of its scheduling algorithm, such as the weight value of a DRR scheduling algorithm or the priority of an SP scheduling algorithm, and the maximum schedulable bandwidth of a node can also be configured flexibly. The queue manager 12 performs enqueue and dequeue management of the flow queues (FlowQueue) and buffer allocation, which may be configured per traffic flow by a fixed algorithm such as weighted random early detection (WRED).
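Purely as an illustration of the five-level structure described above, the service scheduling tree of fig. 2 could be modelled as a simple parent/child data structure; the field names and the RR/DRR/SP labels below are assumptions for clarity, not the patent's actual data layout.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SchedNode:
    """One node of the service scheduling tree (root, port, user group, user, or flow queue)."""
    name: str
    level: str                       # "root" | "port" | "user_group" | "user" | "flow_queue"
    algorithm: str = "RR"            # algorithm used among the children, e.g. RR/DRR/SP
    params: dict = field(default_factory=dict)   # e.g. DRR weights or SP priorities
    max_bandwidth_mbps: Optional[float] = None   # maximum schedulable bandwidth of this node
    children: List["SchedNode"] = field(default_factory=list)

# A tiny tree: root -> port -> user group -> user -> two flow queues.
fq1 = SchedNode("flow_queue_1", "flow_queue")
fq2 = SchedNode("flow_queue_2", "flow_queue")
user = SchedNode("user_X", "user", algorithm="DRR",
                 params={"weights": {"flow_queue_1": 1, "flow_queue_2": 2}},
                 max_bandwidth_mbps=150, children=[fq1, fq2])
group = SchedNode("group_A", "user_group", children=[user])
port = SchedNode("port_0", "port", children=[group])
root = SchedNode("root", "root", children=[port])
```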
Specifically, the key workflow of the traffic scheduling device 10 is as follows:
1. After the packet processor 13 receives data and stores it in the buffer, it notifies the queue manager 12 of the enqueue data information; the queue manager 12 enqueues the information according to the service flow and generates a status update message for the flow queue. Different services correspond to different service flows, i.e., the data involved in each service. The enqueue data information includes an identification of the data stored in the buffer. The queue manager 12 organizes the enqueue data information into queues per service flow; that is, it distinguishes the service flows and stores the enqueue data information into different flow queues.
2. The queue manager 12 sends the status update message to the hardware scheduler 11.
3. The hardware scheduler 11 updates the schedulable state of the corresponding node in each scheduling hierarchy on the service scheduling tree as shown in fig. 2 according to the state update message.
4. The hardware scheduler 11 selects an appropriate node level by level according to the schedulable state of each node on the updated service scheduling tree, the configured scheduling algorithms, and the related configuration parameters, and finally outputs a flow queue ID as the scheduling result. The scheduling process of the hardware scheduler 11 runs continuously in cycles, and each scheduling round outputs one scheduling result.
5. The hardware scheduler 11 transmits the scheduling result to the queue manager 12, and the queue manager 12 performs a dequeue operation according to the scheduling result.
6. The queue manager 12 sends the dequeued data information to the packet processor 13, and the packet processor 13 fetches the data corresponding to the dequeued data information from the buffer and outputs the data to the outside.
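The six steps above can be summarized as one scheduling pass in the following sketch; the object and method names are invented for illustration and do not correspond to any real API.

```python
def traffic_scheduling_round(packet_processor, queue_manager, hw_scheduler, data):
    """One pass through the prior-art workflow of fig. 1 (illustrative only)."""
    # 1. The packet processor buffers the data and notifies the queue manager.
    enqueue_info = packet_processor.store_in_buffer(data)
    # 1-2. The queue manager enqueues per service flow and reports the queue state.
    status_update = queue_manager.enqueue(enqueue_info)
    hw_scheduler.update_schedulable_state(status_update)
    # 3-4. The hardware scheduler walks the scheduling tree level by level and
    #      emits one flow queue ID per scheduling round.
    flow_queue_id = hw_scheduler.schedule_one_round()
    # 5. The queue manager dequeues according to the scheduling result.
    dequeue_info = queue_manager.dequeue(flow_queue_id)
    # 6. The packet processor fetches the data from the buffer and outputs it.
    packet_processor.output(dequeue_info)
```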
In the above scheme, the configuration parameters of the scheduling algorithms must be determined before any traffic arrives, and the scheduling levels and the subordination relationships of nodes between different scheduling levels must be implemented to cover the union of all service scenarios. As a result, the configuration relationships are complex, the resource overhead is high, and the application scenarios are limited by the maximum complexity the hardware implementation can support.
Therefore, in view of the above technical problems, the embodiments of the present application provide a traffic scheduling method, device, and storage medium to overcome the limitation that the service scenarios of an existing traffic scheduling device are restricted by the scheduling algorithms it supports, so that new service requirements can be met in time.
The solution provided by the embodiments of the present application can be applied to devices that require high-performance, complex service scheduling, such as high-end routers and high-end DC switches, to multi-core CPU architectures, and to other scenarios in which hardware must support high-performance resource scheduling and load balancing.
Fig. 3 is a schematic structural diagram of a traffic scheduling device according to an embodiment of the present application. As shown in fig. 3, the traffic scheduling device 20 includes: a bandwidth pre-allocation module 21, a scheduler module 22, and a queue management module 23. Wherein:
The bandwidth pre-allocation module 21 is configured to determine bandwidth control parameters of the flow queues according to the input bandwidth of each flow queue of the queue management module 23, the expected output bandwidth of the service configuration, and the configuration parameters of the corresponding scheduling algorithm, and configure the bandwidth control parameters of the flow queues to the scheduler module 22.
For example, service case 1: in the service scheduling tree, the input bandwidth of flow queue 1 is 300 megabits per second (Mbps) and the input bandwidth of flow queue 2 is 200 Mbps. The two flow queues belong to the same user node X, i.e., the parent node of flow queue 1 and flow queue 2 is user node X, and the expected output bandwidth configured for user node X by the service is 150 Mbps. Deficit round robin (DRR) scheduling between flow queue 1 and flow queue 2 within user node X is also configured, with a weight ratio of 1:2. The resulting bandwidth control parameters of the flow queues are: flow queue 1 is allocated a scheduling bandwidth of 50 Mbps and flow queue 2 is allocated a scheduling bandwidth of 100 Mbps. The bandwidth control parameters of flow queues 1 and 2 are configured into the scheduler module 22.
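The numbers in service case 1 can be reproduced with a small weighted-sharing calculation such as the sketch below; this is only one plausible way the bandwidth pre-allocation might be computed under the stated assumptions (150 Mbps parent bandwidth, DRR weights 1:2), not the patent's actual algorithm.

```python
def weighted_share(parent_bw, inputs, weights):
    """Split parent_bw among queues in proportion to their weights, never giving a
    queue more than its measured input bandwidth; leftover bandwidth is
    redistributed among queues that still have unmet demand."""
    alloc = {q: 0.0 for q in inputs}
    active = set(inputs)
    remaining = parent_bw
    while active and remaining > 1e-9:
        total_w = sum(weights[q] for q in active)
        next_active = set()
        spent = 0.0
        for q in active:
            share = remaining * weights[q] / total_w
            give = min(share, inputs[q] - alloc[q])
            alloc[q] += give
            spent += give
            if inputs[q] - alloc[q] > 1e-9:
                next_active.add(q)
        remaining -= spent
        if not next_active or spent < 1e-9:
            break
        active = next_active
    return alloc

# Service case 1: flow queue 1 (300 Mbps in, weight 1), flow queue 2 (200 Mbps in, weight 2),
# user node X limited to 150 Mbps  ->  {1: 50.0, 2: 100.0}
print(weighted_share(150, {1: 300, 2: 200}, {1: 1, 2: 2}))
```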
The scheduler module 22 is configured to schedule traffic for the flow queues in the queue management module 23 according to the bandwidth control parameter.
Specifically, the scheduler module 22 maintains the schedulable state of the nodes at each relevant level of the scheduling tree according to the schedulable state of the flow queue nodes fed back by the queue management module 23, and continuously performs cyclic scheduling of the flow queue nodes according to the bandwidth control parameters; each scheduling round outputs a flow queue ID as the scheduling result, which is transmitted to the queue management module 23.
The queue management module 23 is configured to perform a dequeue operation according to the scheduling result and to send the dequeued data information to the packet processing module 24. The dequeued data information includes information such as the length of the head-of-queue data and its storage address.
The packet processing module 24 is configured to fetch the data corresponding to the dequeued data information from the buffer and output it.
Comparing the structures shown in fig. 1 and 3, it can be seen that the differences at least include: in the structure shown in fig. 3, a bandwidth pre-allocation module 21 is added.
Since the bandwidth control parameters of the flow queues are determined by the bandwidth pre-allocation module 21 according to the input bandwidth of each flow queue of the queue management module 23, the expected output bandwidth of the service configuration and the configuration parameters of the corresponding scheduling algorithm, it will be understood by those skilled in the art that the bandwidth control parameters of the flow queues are adjusted in real time according to the actual received traffic conditions and are not fixed.
In addition, the bandwidth pre-allocation module 21 configures the bandwidth control parameters of the flow queues to the scheduler module 22, and the scheduler module 22 allows the bandwidth control parameters of every node at every scheduling level of the service scheduling tree to be adjusted by software in real time. The scheduler module 22 therefore does not need to implement in hardware a service scheduling tree and scheduling algorithm whose complexity matches the services, since that complexity is handled by the software algorithm; it only needs to support simple per-node scheduling such as round robin (RR). Even the number of scheduling levels and the number of nodes in the software algorithm can be expanded flexibly.
In the foregoing service case 1, the 150 Mbps expected output bandwidth obtained by user node X needs to be shared between flow queue 1 and flow queue 2 at a weight ratio of 1:2. In the existing scheme, user node X must support a DRR scheduling algorithm and the corresponding weight configuration, and the hardware scheduler 11 must schedule according to the DRR algorithm and record the scheduling opportunities allocated to each queue to ensure that flow queue 1 and flow queue 2 obtain scheduling bandwidth in proportion to their weights. In the embodiment of the present application, by contrast, the bandwidth pre-allocation module 21 calculates in real time, through a software algorithm, that the allocatable scheduling bandwidth of flow queue 1 is 50 Mbps and that of flow queue 2 is 100 Mbps, based on information such as the 150 Mbps expected output bandwidth obtained by user node X and the input bandwidths and weights of flow queue 1 and flow queue 2. In this way, the scheduler module 22 only needs to perform simple RR scheduling and bandwidth control of the flow queues to ensure that flow queue 1 and flow queue 2 obtain their actual output bandwidth. The embodiment of the present application therefore simplifies the design of the scheduler module.
Optionally, the scheduler module 22 supports, but is not limited to, at least one of the following scheduling algorithms: SP scheduling algorithms, RR scheduling algorithms, etc.
It can be understood that the bandwidth pre-allocation module 21 can flexibly configure the service scheduling tree in the scheduler module 22 and dynamically control the output bandwidth of the nodes at each scheduling level, making the scheduling algorithm of the traffic scheduling device programmable and thus avoiding restrictions on its service scenarios. The service scheduling tree is, for example, the one shown in fig. 2, although the embodiment of the present application is not limited thereto.
In the embodiments of the present application, the bandwidth pre-allocation module determines the bandwidth control parameters according to the input bandwidth of the flow queues of the queue management module, the expected output bandwidth of the service configuration, and the configuration parameters of the corresponding scheduling algorithm, and configures the bandwidth control parameters to the scheduler module; the scheduler module performs traffic scheduling on the flow queues according to the bandwidth control parameters. On the one hand, because the bandwidth control parameters are determined from the input bandwidth of the flow queues, the expected output bandwidth of the service configuration, and the configuration parameters of the corresponding scheduling algorithm, they are adjusted in real time according to actual conditions rather than being fixed. On the other hand, because the bandwidth pre-allocation module configures the bandwidth control parameters to the scheduler module, the service scheduling tree in the scheduler module can be configured flexibly and the output bandwidth of the nodes at each scheduling level can be controlled dynamically, so that the maximum complexity the traffic scheduling device can support becomes programmable. This overcomes the limitation that the service scenarios are restricted by the scheduling algorithms supported by the device, so that new service requirements can be met in time.
On this basis, in one specific implementation, as shown in fig. 4, in the traffic scheduling device 30 the bandwidth pre-allocation module 21 may include a bandwidth metering sub-module 211 and a bandwidth allocation sub-module 212. The bandwidth metering sub-module 211 is configured to monitor the input bandwidth of the flow queues of the queue management module 23 and the expected output bandwidth of the service configuration, and to transmit them to the bandwidth allocation sub-module 212; the bandwidth allocation sub-module 212 is configured to determine the bandwidth control parameters according to the input bandwidth, the expected output bandwidth, and the configuration parameters of the corresponding scheduling algorithm, and to configure the bandwidth control parameters to the scheduler module 22.
Optionally, the bandwidth metering sub-module 211 is implemented in hardware and the bandwidth allocation sub-module 212 is implemented in software. Illustratively, the bandwidth metering sub-module 211 records information such as the real-time queue length and the enqueue and dequeue packet lengths of each flow queue in a set of counters maintained per flow queue, from which the input bandwidth of the flow queue can be calculated. The bandwidth allocation sub-module 212 is a software module running on an on-chip or board-level CPU; it analyzes the input bandwidth, the expected output bandwidth, and the configuration parameters of the corresponding scheduling algorithm, determines the bandwidth control parameters, and configures them to the scheduler module 22. In this software/hardware cooperative architecture, the bandwidth metering sub-module 211 has a simple structure, so the same logic resources can support higher performance and higher precision, while the service scheduling tree is realized by the bandwidth allocation sub-module 212 through a software algorithm; more complex service scheduling trees and scheduling algorithms can therefore be supported flexibly, and new service features can be supported by a software upgrade alone.
Further, the bandwidth metering sub-module 211 and the bandwidth allocation sub-module 212 may communicate through any of the following buses: a peripheral component interconnect express (PCIE) bus, a chip-internal bus, or the like. The bandwidth allocation sub-module 212 may also be implemented as a central processing unit core (CPU core) embedded in the chip, in which case no board-level CPU is needed to analyze and calculate the bandwidth allocation result, and the information exchange between the bandwidth metering sub-module 211 and the bandwidth allocation sub-module 212 can take place over the chip-internal bus.
The bandwidth allocation sub-module 212 may determine the bandwidth control parameters in a variety of ways; two example schemes are illustrated here.
In the first scheme, the bandwidth allocation submodule 212 is configured to: according to the affiliation between nodes of each scheduling hierarchy in the traffic scheduling tree, the scheduling algorithm and configuration parameters, as well as the input bandwidth of the flow queues and the desired output bandwidth of the traffic configuration, bandwidth control parameters are determined and configured to the scheduler module 22. Specific examples can be referred to in business case 1 above.
In the second scheme, the bandwidth allocation sub-module 212 is configured to: determine the bandwidth control parameters according to the traffic priority, the subordination relationships of the nodes at each scheduling level in the service scheduling tree, the scheduling algorithm and its configuration parameters, and the input bandwidth of the flow queues and the expected output bandwidth of the service configuration, and configure them to the scheduler module 22. For example, service case 2: in the service scheduling tree, the input bandwidth of flow queue 3 is 100 Mbps and the input bandwidth of flow queue 4 is 200 Mbps. The two flow queues belong to the same user node Y, the expected output bandwidth configured for user node Y by the service is 150 Mbps, SP scheduling between flow queue 3 and flow queue 4 within user node Y is configured, and flow queue 3 has the higher priority. The resulting bandwidth control parameters of the flow queues are: flow queue 3 is allocated 100 Mbps and flow queue 4 is allocated 50 Mbps. The bandwidth control parameters of flow queues 3 and 4 are configured into the scheduler module 22.
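Service case 2 can likewise be reproduced with a small priority-ordered calculation; again, this is an illustrative sketch of what the bandwidth allocation sub-module might compute for SP-scheduled queues, not the patent's exact algorithm.

```python
def sp_preallocate(parent_bw, inputs, priority):
    """Give each queue, in strict priority order (lower value = higher priority),
    as much of the parent's bandwidth as its input demands."""
    alloc = {}
    remaining = parent_bw
    for q in sorted(inputs, key=priority.get):
        alloc[q] = min(inputs[q], remaining)
        remaining -= alloc[q]
    return alloc

# Service case 2: flow queue 3 (100 Mbps in, high priority), flow queue 4 (200 Mbps in),
# user node Y limited to 150 Mbps  ->  {3: 100, 4: 50}
print(sp_preallocate(150, {3: 100, 4: 200}, {3: 0, 4: 1}))
```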
Because determining the bandwidth control parameters takes a certain amount of time, newly received high-priority traffic may arrive that has not yet been reflected in the monitoring results of the bandwidth metering sub-module 211 while the software of the bandwidth allocation sub-module 212 is running on the on-chip or board-level CPU. If such high-priority traffic requires a low-latency guarantee, a certain amount of bandwidth needs to be reserved for high-priority traffic in the bandwidth allocation so that newly received high-priority traffic can pass preferentially.
The two schemes differ in that the second scheme additionally takes the traffic priority into account.
In some embodiments, the bandwidth metering sub-module 211 may be configured to: record the system global timestamps of the enqueue data and dequeue data corresponding to each flow queue, and obtain the input bandwidth and the actual output bandwidth of the flow queue from these timestamps, where the actual output bandwidth value is used to correct the bandwidth control parameters calculated by the bandwidth allocation sub-module 212. That is, the bandwidth metering sub-module 211 records in real time the packet lengths of the enqueue data and dequeue data corresponding to each flow queue in the queue management module 23, and attaches the system global timestamp to the recorded raw data. For the roles of the actual output bandwidth and the expected output bandwidth, reference may be made to the related art; details are not repeated here.
For example, the input bandwidth of a flow queue = (the sum of the packet lengths of the enqueue data of that flow queue in the current statistics round) / (the duration of the current statistics round), where the duration of the current statistics round = (the system global timestamp of the last enqueue data of that flow queue) - (the system global timestamp of the first enqueue data of that flow queue).
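A minimal sketch of this statistic follows, assuming the metering sub-module exposes per-queue (timestamp, packet length) records; the record format and the bits-per-second conversion are assumptions.

```python
def input_bandwidth_bps(enqueue_records):
    """enqueue_records: list of (global_timestamp_seconds, packet_length_bytes)
    pairs for one flow queue during the current statistics round."""
    if len(enqueue_records) < 2:
        return 0.0
    total_bits = sum(length for _, length in enqueue_records) * 8
    period = enqueue_records[-1][0] - enqueue_records[0][0]   # last ts minus first ts
    return total_bits / period if period > 0 else 0.0

# e.g. 3 packets of 1500 bytes spread over 0.001 s  ->  36 Mbit/s
print(input_bandwidth_bps([(10.000, 1500), (10.0005, 1500), (10.001, 1500)]) / 1e6)
```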
Because the bandwidth metering sub-module 211 is a traffic monitoring component that supports the system global timestamp, the input bandwidth and the expected output bandwidth of the flow queues can be counted accurately in real time, which guarantees the accuracy and completeness of their calculation. Furthermore, based on the input bandwidth and expected output bandwidth of the flow queues, the bandwidth allocation sub-module 212 can not only determine accurate bandwidth control parameters after analysis by its software algorithm but also predict traffic behaviour, and the prediction result can be applied to the scheduling algorithm's calculations in real time.
On the basis of the above embodiments, the bandwidth allocation sub-module 212 may also be configured to: adjust the WRED parameters of the flow queues with a preset algorithm according to the input bandwidth of the flow queues and the expected output bandwidth of the service configuration, and configure the adjusted WRED parameters to the queue management module 23. The WRED parameters may include at least one of the following: the minimum threshold, the mark probability denominator, and the like.
Accordingly, the queue management module 23 performs dequeue management of the data information in the flow queue according to the adjusted WRED parameter.
Because the WRED parameters actually configured to the queue management module are adjusted by the software algorithm from the real-time input bandwidth and expected output bandwidth of the flow queues, the actual buffer allocation need not be considered when they are configured, and the buffer configured for each flow queue may exceed the actually allocatable buffer size to a certain extent; that is, a certain degree of buffer oversubscription is supported. In a real usage scenario, since the received traffic changes in real time, the real-time WRED parameters of different flow queues can be adjusted flexibly through software analysis of the current traffic: queues that are actually receiving traffic are guaranteed a larger available buffer, while only a small buffer is reserved for inactive flow queues, which effectively improves the buffer utilization of the system. Meanwhile, configuring the adjusted WRED parameters to the queue management module effectively supports dynamic queue length management.
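One possible shape of such an adjustment is sketched below; the scaling rule, parameter names, and default values are assumptions chosen for illustration and are not the preset algorithm of the patent.

```python
def adjust_wred(input_bw_mbps, expected_out_bw_mbps, base_min_kb, base_max_kb):
    """Scale a flow queue's WRED thresholds with its measured load: queues whose
    input exceeds the expected output get deeper thresholds (more buffer),
    while inactive queues keep only a small reserve."""
    if input_bw_mbps <= 0:
        return {"min_threshold_kb": 4, "max_threshold_kb": 8, "mark_prob_denominator": 1}
    load = input_bw_mbps / max(expected_out_bw_mbps, 1e-6)
    scale = min(max(load, 0.5), 4.0)          # clamp the scaling factor
    return {
        "min_threshold_kb": int(base_min_kb * scale),
        "max_threshold_kb": int(base_max_kb * scale),
        "mark_prob_denominator": 10,
    }

print(adjust_wred(input_bw_mbps=300, expected_out_bw_mbps=150, base_min_kb=64, base_max_kb=128))
```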
In some embodiments, the scheduler module 22 stores a first bandwidth configuration parameter table entry and a second bandwidth configuration parameter table entry. In that case, the bandwidth pre-allocation module 21 configuring the bandwidth control parameters to the scheduler module 22 may include: the bandwidth pre-allocation module 21 updates the bandwidth control parameters into the first bandwidth configuration parameter table entry, and in a specific pipeline time slot the scheduler module 22 updates the content of the first bandwidth configuration parameter table entry into the second bandwidth configuration parameter table entry, which is the entry used by the scheduler module 22 when executing its operations.
This embodiment adds a first bandwidth configuration parameter table entry, such as a shadow entry, that software can modify directly at any time. Because the scheduler module copies the content of the first entry into the second entry used by its logic in a specific time slot of its execution pipeline, conflicts between the entry-update operation and the logic-execution operation are avoided.
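The shadow-table idea can be illustrated with a small sketch: software writes only the first (shadow) table, and the scheduler copies it into its working (second) table in the pipeline time slot reserved for table maintenance. All names below are illustrative.

```python
class BandwidthParamTables:
    """Shadow/working table pair used to avoid read/write conflicts."""
    def __init__(self, num_nodes):
        self.shadow = [0] * num_nodes    # first table: written by software at any time
        self.working = [0] * num_nodes   # second table: read by the scheduler logic

    def software_update(self, node_id, bandwidth_param):
        # The bandwidth pre-allocation software may write here whenever it likes.
        self.shadow[node_id] = bandwidth_param

    def pipeline_sync_slot(self):
        # Executed only in the pipeline time slot reserved for table maintenance,
        # so it never races with the scheduler's normal table lookups.
        self.working = list(self.shadow)

tables = BandwidthParamTables(num_nodes=4)
tables.software_update(1, 50)    # e.g. 50 Mbps for flow queue 1
tables.software_update(2, 100)   # e.g. 100 Mbps for flow queue 2
tables.pipeline_sync_slot()
print(tables.working)            # [0, 50, 100, 0]
```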
Fig. 5 is a flowchart of a flow scheduling method according to an embodiment of the present application. The embodiment of the application provides a flow scheduling method, which is applied to flow scheduling equipment, wherein the flow scheduling equipment comprises the following steps: a bandwidth pre-allocation module, a scheduler module and a queue management module. As shown in fig. 5, the method includes:
S501, the bandwidth pre-allocation module determines bandwidth control parameters of the flow queues according to the input bandwidth of each flow queue of the queue management module, the expected output bandwidth of service configuration and the configuration parameters of a corresponding scheduling algorithm, and configures the bandwidth control parameters of the flow queues to the scheduler module.
S502, the scheduler module performs flow scheduling on the flow queues in the queue management module according to the bandwidth control parameters.
The traffic scheduling method in the embodiment of the present application may be executed by the traffic scheduling device in any of the foregoing device embodiments; its implementation principle and technical effects are similar and are not described again here.
Based on the above embodiments, in some embodiments the bandwidth pre-allocation module includes a bandwidth metering sub-module and a bandwidth allocation sub-module. Accordingly, S501, in which the bandwidth pre-allocation module determines the bandwidth control parameters of the flow queues according to the input bandwidth of each flow queue of the queue management module, the expected output bandwidth of the service configuration, and the configuration parameters of the corresponding scheduling algorithm and configures them to the scheduler module, may include: the bandwidth metering sub-module monitors the input bandwidth and the expected output bandwidth of the flow queues of the queue management module and transmits them to the bandwidth allocation sub-module; and the bandwidth allocation sub-module determines the bandwidth control parameters according to the input bandwidth of the flow queues, the expected output bandwidth, and the configuration parameters of the corresponding scheduling algorithm, and configures the bandwidth control parameters to the scheduler module.
Optionally, the bandwidth metering submodule and the bandwidth allocation submodule are communicated through any one of the following buses: PCIE bus, chip internal bus, etc.
In one implementation, that the bandwidth allocation sub-module determines the bandwidth control parameters according to the input bandwidth of the flow queues, the expected output bandwidth, and the configuration parameters of the corresponding scheduling algorithm and configures them to the scheduler module may include: the bandwidth allocation sub-module determines the bandwidth control parameters according to the subordination relationships of the nodes at each scheduling level in the service scheduling tree, the scheduling algorithm and its configuration parameters, and the input bandwidth and expected output bandwidth of the flow queues, and configures the bandwidth control parameters to the scheduler module.
In another implementation, that the bandwidth allocation sub-module determines the bandwidth control parameters according to the input bandwidth of the flow queues, the expected output bandwidth, and the configuration parameters of the corresponding scheduling algorithm and configures them to the scheduler module may include: the bandwidth allocation sub-module determines the bandwidth control parameters according to the traffic priority, the subordination relationships of the nodes at each scheduling level in the service scheduling tree, the scheduling algorithm and its configuration parameters, and the input bandwidth and expected output bandwidth of the flow queues, and configures the bandwidth control parameters to the scheduler module.
Further, that the bandwidth metering sub-module monitors the input bandwidth and the expected output bandwidth of the flow queues of the queue management module and transmits them to the bandwidth allocation sub-module may include: the bandwidth metering sub-module records the system global timestamps of the enqueue data and dequeue data corresponding to each flow queue, and obtains the input bandwidth and the expected output bandwidth of the flow queues from these timestamps.
Still further, the traffic scheduling method may further include: the bandwidth allocation sub-module adjusts the WRED parameters of the flow queues according to a preset algorithm according to the input bandwidth and the expected output bandwidth of the flow queues, and configures the adjusted WRED parameters to the queue management module. Accordingly, the queue management module performs enqueue and dequeue management of the data information in the stream queue according to the adjusted WRED parameter.
In some embodiments, the scheduler module stores a first bandwidth configuration parameter table entry and a second bandwidth configuration parameter table entry. In that case, the bandwidth pre-allocation module configuring the bandwidth control parameters to the scheduler module may include: the bandwidth pre-allocation module updates the bandwidth control parameters into the first bandwidth configuration parameter table entry; and, in a specific pipeline time slot, the scheduler module updates the content of the first bandwidth configuration parameter table entry into the second bandwidth configuration parameter table entry, which is the entry used by the scheduler module when executing its operations.
Optionally, the scheduler module supports at least one of the following scheduling algorithms: SP scheduling algorithm and round robin RR scheduling algorithm.
In some embodiments, the traffic scheduling device may further include a packet processing module. Accordingly, the method may further include: after the scheduler module sends a scheduling result to the queue management module, the queue management module performs one dequeue operation according to the scheduling result and sends the dequeued data information to the packet processing module; and the packet processing module fetches the data corresponding to the dequeued data information from the buffer and outputs it.
In some embodiments, in a hardware implementation the bandwidth pre-allocation module, the scheduler module, and the queue management module described above may be embedded in a processor. Alternatively, the bandwidth pre-allocation module may be a processor, while the scheduler module and the queue management module are implemented in hardware.
Accordingly, as shown in fig. 6, the traffic scheduling device 60 of this embodiment may include a memory 61 and a processor 62. The memory 61 stores a computer program to be executed by the processor 62; when the processor 62 reads and executes the computer program, the processor 62 performs the method described above, or performs the steps performed by the bandwidth pre-allocation module in that method.
Embodiments of the present application also provide a computer-readable storage medium storing a computer program, where the computer program includes at least one piece of code executable by a processor to implement the method according to any of the above method embodiments, or to implement the steps performed by the bandwidth pre-allocation module in the method according to any of the above method embodiments.
The computer program may be embodied in the form of software functional units and may be sold or used as a stand-alone product, and the memory may be any form of computer readable storage medium. With such understanding, all or part of the technical solution of the present application may be embodied in the form of a software product, which includes several instructions for causing a computer device, which may be a processor in particular, to perform all or part of the steps of the first terminal device in the various embodiments of the present application. And the aforementioned computer-readable storage medium includes: a U-disk, a removable hard disk, a read-only memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
It should be noted that, in the embodiment of the present application, the division of the modules is schematic, which is merely a logic function division, and other division manners may be implemented in actual implementation. The functional modules in the embodiments of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated modules may be implemented in hardware or in software functional modules.
The integrated modules, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be embodied in essence or a part contributing to the prior art or all or part of the technical solution in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a usb disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk, etc.
In the above embodiments, all or part may be implemented by software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, hard disk, or tape), an optical medium (e.g., a DVD), a semiconductor medium (e.g., a solid state disk (SSD)), or the like.

Claims (16)

1. A traffic scheduling device, comprising: the system comprises a bandwidth pre-allocation module, a scheduler module and a queue management module, wherein the bandwidth pre-allocation module comprises: a bandwidth metering sub-module and a bandwidth allocation sub-module; wherein:
The bandwidth metering sub-module is used for recording system global time marks of enqueue data and dequeue data corresponding to each flow queue of the queue management module, obtaining input bandwidth of each flow queue and expected output bandwidth of service configuration according to the system global time marks, and transmitting the input bandwidth of the flow queue and the expected output bandwidth to the bandwidth allocation sub-module;
The bandwidth allocation submodule is used for determining bandwidth control parameters according to the input bandwidth of the flow queue, the expected output bandwidth and configuration parameters of a corresponding scheduling algorithm, and configuring the bandwidth control parameters to the scheduler module; the scheduler module is used for carrying out flow scheduling on the flow queues in the queue management module according to the bandwidth control parameters;
The bandwidth allocation submodule is further configured to:
According to the input bandwidth and the expected output bandwidth of the flow queue, adjusting weighted random early detection WRED parameters of the flow queue according to a preset algorithm, and configuring the adjusted WRED parameters to the queue management module;
Correspondingly, the queue management module carries out dequeue management of data information in the flow queue according to the adjusted WRED parameter.
2. The apparatus of claim 1, wherein the bandwidth metering submodule and the bandwidth allocation submodule are in communication with each other via any one of the following buses:
a high-speed serial computer expansion bus standard (PCIE) bus and a chip-internal bus.
3. The device according to claim 2, characterized in that said bandwidth allocation submodule is in particular configured to:
And determining the bandwidth control parameter according to the subordinate relation among the nodes of each scheduling level in the service scheduling tree, the scheduling algorithm and the configuration parameter, and the input bandwidth and the expected output bandwidth of the flow queue, and configuring the bandwidth control parameter to the scheduler module.
4. The device according to claim 2, characterized in that said bandwidth allocation submodule is in particular configured to:
and determining the bandwidth control parameter according to the flow priority, the node subordination relation of each scheduling level in the service scheduling tree, the scheduling algorithm and the configuration parameter, and the input bandwidth and the expected output bandwidth of the flow queue, and configuring the bandwidth control parameter to the scheduler module.
5. The device according to any one of claims 1-4, wherein the scheduler module stores a first bandwidth configuration parameter table entry and a second bandwidth configuration parameter table entry;
wherein configuring the bandwidth control parameters to the scheduler module by the bandwidth pre-allocation module specifically comprises:
the bandwidth pre-allocation module is configured to update the bandwidth control parameters to the first bandwidth configuration parameter table entry; and
the scheduler module is configured to update, in a specific pipeline time slot, the content of the first bandwidth configuration parameter table entry to the second bandwidth configuration parameter table entry, wherein the second bandwidth configuration parameter table entry is the entry used by the scheduler module when performing scheduling operations.
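The two table entries in claim 5 behave like a shadow/active (double-buffered) configuration: the pre-allocation module may write the first entry at any time, and the scheduler copies it into the second entry only in its dedicated pipeline time slot, so a scheduling pass never observes a half-written configuration. The following is a minimal software sketch of that idea; all names are illustrative.

```python
import copy
import threading


class BandwidthConfigTable:
    """Shadow/active pair of bandwidth configuration parameter table entries."""

    def __init__(self, initial: dict):
        self._shadow = dict(initial)  # first entry: written by the pre-allocation module
        self._active = dict(initial)  # second entry: read by the scheduler
        self._lock = threading.Lock()

    def update_shadow(self, params: dict) -> None:
        """Bandwidth pre-allocation module: may write at any time."""
        with self._lock:
            self._shadow.update(params)

    def commit(self) -> None:
        """Scheduler: copies shadow to active in its dedicated pipeline time slot."""
        with self._lock:
            self._active = copy.deepcopy(self._shadow)

    def read_active(self, key: str):
        """Scheduler: every scheduling decision reads the active entry only."""
        return self._active[key]


table = BandwidthConfigTable({"flowq7_rate_bps": 1e9})
table.update_shadow({"flowq7_rate_bps": 5e8})  # takes effect only after commit()
table.commit()
```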
6. The device according to any one of claims 1-4, wherein the scheduler module supports at least one of the following scheduling algorithms:
a strict priority (SP) scheduling algorithm and a round robin (RR) scheduling algorithm.
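As a rough illustration of the two disciplines named in claim 6, the sketch below applies strict priority across priority classes and round robin among the queues inside a class. The queue layout and class count are simplifying assumptions.

```python
from collections import deque


class SpRrScheduler:
    """Strict priority across classes, round robin among queues within a class."""

    def __init__(self, num_priorities: int):
        # classes[0] is the highest priority; each class holds a ring of flow queues
        self.classes = [deque() for _ in range(num_priorities)]

    def add_queue(self, priority: int, queue: deque) -> None:
        self.classes[priority].append(queue)

    def schedule(self):
        """Return the next packet to dequeue, or None if all queues are empty."""
        for ring in self.classes:          # SP: always drain higher classes first
            for _ in range(len(ring)):     # RR: rotate through the queues of this class
                queue = ring[0]
                ring.rotate(-1)            # advance the round-robin pointer
                if queue:
                    return queue.popleft()
        return None


sched = SpRrScheduler(num_priorities=2)
sched.add_queue(0, deque(["voice-pkt"]))
sched.add_queue(1, deque(["best-effort-pkt"]))
print(sched.schedule())  # "voice-pkt" wins: its class has strict priority
```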
7. The device according to any one of claims 1-4, further comprising: a data packet processing module;
wherein the queue management module is further configured to perform a dequeue operation according to a scheduling result after the scheduler module sends the scheduling result to the queue management module, and to send the dequeued data information to the data packet processing module; and
the data packet processing module is configured to take the data corresponding to the dequeued data information out of the cache and output the data externally.
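The dequeue path in claim 7 can be pictured as follows: the queue management module pops a descriptor from the flow queue selected by the scheduler and hands it to the data packet processing module, which fetches the payload from the packet buffer and outputs it. The descriptor layout and buffer interface below are assumptions made for illustration only.

```python
from collections import deque
from typing import Dict, NamedTuple, Optional


class Descriptor(NamedTuple):
    buffer_addr: int  # where the payload sits in the packet buffer (cache)
    length: int       # payload length in bytes


class QueueManager:
    """Holds flow queues of descriptors and dequeues per the scheduling result."""

    def __init__(self) -> None:
        self.flow_queues: Dict[int, deque] = {}

    def dequeue(self, scheduled_queue_id: int) -> Optional[Descriptor]:
        q = self.flow_queues.get(scheduled_queue_id)
        return q.popleft() if q else None


class PacketProcessor:
    """Fetches the payload referenced by a descriptor from the cache and emits it."""

    def __init__(self, packet_buffer: Dict[int, bytes]) -> None:
        self.packet_buffer = packet_buffer

    def emit(self, desc: Descriptor) -> bytes:
        return self.packet_buffer.pop(desc.buffer_addr)[: desc.length]
```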
8. A traffic scheduling method, applied to a traffic scheduling device, the traffic scheduling device comprising: a bandwidth pre-allocation module, a scheduler module, and a queue management module, wherein the bandwidth pre-allocation module comprises a bandwidth metering sub-module and a bandwidth allocation sub-module, and the method comprises the following steps:
the bandwidth metering sub-module records a system global time mark for the enqueue data and dequeue data corresponding to each flow queue of the queue management module, obtains, according to the system global time mark, an input bandwidth of each flow queue and an expected output bandwidth of the service configuration, and transmits the input bandwidth and the expected output bandwidth of the flow queue to the bandwidth allocation sub-module;
the bandwidth allocation sub-module determines bandwidth control parameters according to the input bandwidth of the flow queue, the expected output bandwidth, and configuration parameters of a corresponding scheduling algorithm, and configures the bandwidth control parameters to the scheduler module;
the scheduler module performs traffic scheduling on the flow queues in the queue management module according to the bandwidth control parameters;
the method further comprises:
the bandwidth allocation sub-module adjusts weighted random early detection (WRED) parameters of the flow queue according to a preset algorithm based on the input bandwidth and the expected output bandwidth of the flow queue, and configures the adjusted WRED parameters to the queue management module;
correspondingly, the queue management module performs dequeue management of the data information in the flow queue according to the adjusted WRED parameters.
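One way to picture the metering step of claim 8 is a per-queue byte counter sampled against the system global time mark: the input bandwidth is the number of bits enqueued between two marks divided by the elapsed time, while the expected output bandwidth comes from the service configuration rather than from measurement. The sketch below is an assumption-laden illustration, not the claimed implementation; the nanosecond time base and method names are invented for the example.

```python
from typing import Optional


class FlowMeter:
    """Per-flow-queue input bandwidth derived from system global time marks."""

    def __init__(self) -> None:
        self.last_mark_ns: Optional[int] = None
        self.bytes_since_mark = 0

    def on_enqueue(self, packet_len_bytes: int, global_time_ns: int) -> None:
        """Record an enqueue event stamped with the system global time mark."""
        if self.last_mark_ns is None:
            self.last_mark_ns = global_time_ns
        self.bytes_since_mark += packet_len_bytes

    def input_bandwidth_bps(self, global_time_ns: int) -> float:
        """Bits per second seen since the previous time mark; resets the window."""
        if self.last_mark_ns is None or global_time_ns <= self.last_mark_ns:
            return 0.0
        elapsed_s = (global_time_ns - self.last_mark_ns) / 1e9
        bw = self.bytes_since_mark * 8 / elapsed_s
        self.last_mark_ns = global_time_ns
        self.bytes_since_mark = 0
        return bw


meter = FlowMeter()
meter.on_enqueue(1500, global_time_ns=0)
meter.on_enqueue(1500, global_time_ns=500_000)
print(meter.input_bandwidth_bps(global_time_ns=1_000_000))  # 24 Mbit/s over 1 ms
```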
9. The method according to claim 8, wherein the bandwidth metering sub-module and the bandwidth allocation sub-module communicate with each other via any one of the following buses:
a peripheral component interconnect express (PCIE) bus, or an internal bus of a chip.
10. The method according to claim 8, wherein the bandwidth allocation sub-module determining the bandwidth control parameters according to the input bandwidth of the flow queue, the expected output bandwidth, and the configuration parameters of the corresponding scheduling algorithm, and configuring the bandwidth control parameters to the scheduler module comprises:
the bandwidth allocation sub-module determines the bandwidth control parameters according to the subordination relationship among the nodes of each scheduling level in the service scheduling tree, the scheduling algorithm and its configuration parameters, and the input bandwidth and the expected output bandwidth of the flow queue, and configures the bandwidth control parameters to the scheduler module.
11. The method according to claim 8, wherein the bandwidth allocation sub-module determining the bandwidth control parameters according to the input bandwidth of the flow queue, the expected output bandwidth, and the configuration parameters of the corresponding scheduling algorithm, and configuring the bandwidth control parameters to the scheduler module comprises:
the bandwidth allocation sub-module determines the bandwidth control parameters according to the traffic priority, the subordination relationship among the nodes of each scheduling level in the service scheduling tree, the scheduling algorithm and its configuration parameters, and the input bandwidth and the expected output bandwidth of the flow queue, and configures the bandwidth control parameters to the scheduler module.
12. The method according to any one of claims 8-11, wherein the scheduler module stores a first bandwidth configuration parameter table entry and a second bandwidth configuration parameter table entry;
wherein configuring the bandwidth control parameters to the scheduler module comprises:
the bandwidth pre-allocation module updates the bandwidth control parameters to the first bandwidth configuration parameter table entry; and
in a specific pipeline time slot, the scheduler module updates the content of the first bandwidth configuration parameter table entry to the second bandwidth configuration parameter table entry, wherein the second bandwidth configuration parameter table entry is the entry used by the scheduler module when performing scheduling operations.
13. The method according to any one of claims 8-11, wherein the scheduler module supports at least one of the following scheduling algorithms:
a strict priority (SP) scheduling algorithm and a round robin (RR) scheduling algorithm.
14. The method according to any one of claims 8-11, wherein the traffic scheduling device further comprises a data packet processing module, and the method further comprises:
after the scheduler module sends a scheduling result to the queue management module, the queue management module performs a dequeue operation according to the scheduling result and sends the dequeued data information to the data packet processing module; and
the data packet processing module takes the data corresponding to the dequeued data information out of the cache and outputs the data externally.
15. A traffic scheduling device, comprising: a memory and a processor;
wherein the memory has stored thereon a computer program executable by the processor;
The computer program, when read and executed by the processor, causes the processor to perform the method of any one of claims 8 to 14.
16. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program comprising at least one piece of code, the at least one piece of code being executable by a processor to implement the method according to any one of claims 8 to 14.
CN201980096976.4A 2019-06-12 2019-06-12 Traffic scheduling method, traffic scheduling device and storage medium Active CN113906720B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/090921 WO2020248166A1 (en) 2019-06-12 2019-06-12 Traffic scheduling method, device, and storage medium

Publications (2)

Publication Number Publication Date
CN113906720A CN113906720A (en) 2022-01-07
CN113906720B true CN113906720B (en) 2024-05-10

Family

ID=73780828

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980096976.4A Active CN113906720B (en) 2019-06-12 2019-06-12 Traffic scheduling method, traffic scheduling device and storage medium

Country Status (2)

Country Link
CN (1) CN113906720B (en)
WO (1) WO2020248166A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114285753B (en) * 2021-12-27 2024-04-26 上海七牛信息技术有限公司 CDN scheduling method and system
CN114640630B (en) * 2022-03-31 2023-08-18 苏州浪潮智能科技有限公司 Flow control method, device, equipment and readable storage medium

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6628609B2 (en) * 1998-04-30 2003-09-30 Nortel Networks Limited Method and apparatus for simple IP-layer bandwidth allocation using ingress control of egress bandwidth
US7675926B2 (en) * 2004-05-05 2010-03-09 Cisco Technology, Inc. Hierarchical QoS behavioral model
US7697436B2 (en) * 2006-02-15 2010-04-13 Fujitsu Limited Bandwidth allocation
WO2009130218A1 (en) * 2008-04-24 2009-10-29 Xelerated Ab A traffic manager and a method for a traffic manager
CN102075407B (en) * 2009-11-24 2012-12-19 中兴通讯股份有限公司 Method and device for processing mixed business flow
US9363173B2 (en) * 2010-10-28 2016-06-07 Compass Electro Optical Systems Ltd. Router and switch architecture
AU2012207471B2 (en) * 2011-01-18 2016-07-28 Nomadix, Inc. Systems and methods for group bandwidth management in a communication systems network
US9690261B2 (en) * 2013-06-25 2017-06-27 Linestream Technologies Method for automatically setting responsiveness parameters for motion control systems
US9450881B2 (en) * 2013-07-09 2016-09-20 Intel Corporation Method and system for traffic metering to limit a received packet rate
CN104618265B (en) * 2014-12-30 2018-03-13 华为技术有限公司 A kind of message forwarding method and device

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1592267A (en) * 2003-09-03 2005-03-09 华为技术有限公司 Priority message flow ensuring method when network disaster
CN101009655A (en) * 2007-02-05 2007-08-01 华为技术有限公司 Traffic scheduling method and device
WO2008095397A1 (en) * 2007-02-05 2008-08-14 Huawei Technologies Co., Ltd. Traffic scheduling method and apparatus thereof
ITTO20100429A1 (en) * 2010-05-24 2011-11-25 Selex Communications Spa PROCEDURE AND BAND CONTROL SYSTEM FOR COMPLIANCE WITH A PREDETERMINED QUALITY OF SERVICE AT A POINT OF ACCESS TO A COMMUNICATION NETWORK OPERATING AN AGGREGATION OF HETEROGENEOUS TRAFFIC FLOWS
CN102611605A (en) * 2011-01-20 2012-07-25 华为技术有限公司 Scheduling method, device and system of data exchange network
CN102594830A (en) * 2012-03-02 2012-07-18 黄东 Method for improving utilization factor of network bandwidth under multi-service condition
CN103685069A (en) * 2013-12-30 2014-03-26 华为技术有限公司 Cross-board flow control method, system and scheduler, circuit board and router
CN107872403A (en) * 2017-11-10 2018-04-03 西安电子科技大学 A five-level queue scheduling device and method for implementing hierarchical QoS
CN108063734A (en) * 2017-12-05 2018-05-22 郑州云海信息技术有限公司 A network resource scheduling method and device
CN108881045A (en) * 2018-06-04 2018-11-23 河南科技大学 A congestion control method based on QoS guarantee in heterogeneous networks
CN109039953A (en) * 2018-07-24 2018-12-18 新华三技术有限公司 A bandwidth scheduling method and device

Also Published As

Publication number Publication date
WO2020248166A1 (en) 2020-12-17
CN113906720A (en) 2022-01-07

Similar Documents

Publication Publication Date Title
US11805065B2 (en) Scalable traffic management using one or more processor cores for multiple levels of quality of service
CN111512602B (en) Method, equipment and system for sending message
US9882832B2 (en) Fine-grained quality of service in datacenters through end-host control of traffic flow
US9112809B2 (en) Method and apparatus for controlling utilization in a horizontally scaled software application
US8149846B2 (en) Data processing system and method
CN112789832B (en) Dynamic slice priority handling
EP2720422A1 (en) Queue monitoring to filter the trend for enhanced buffer management and dynamic queue threshold in 4G IP network/equipment for better traffic performance
US9548872B2 (en) Reducing internal fabric congestion in leaf-spine switch fabric
WO2020034819A1 (en) Service quality assurance method in distributed storage system, control node and system
CN113906720B (en) Traffic scheduling method, traffic scheduling device and storage medium
US20200396152A1 (en) Shaping outgoing traffic of network packets in a network management system
CN111371690A (en) Flow regulation and control method and device, network equipment and computer readable storage medium
Kumar et al. A delay-optimal packet scheduler for M2M uplink
CN115622952A (en) Resource scheduling method, device, equipment and computer readable storage medium
KR101448413B1 (en) Method and apparatus for scheduling communication traffic in atca-based equipment
US10044632B2 (en) Systems and methods for adaptive credit-based flow
CN115766582A (en) Flow control method, device and system, medium and computer equipment
US8467401B1 (en) Scheduling variable length packets
US20190108060A1 (en) Mobile resource scheduler
CN102594670B (en) Multiport multi-flow scheduling method, device and equipment
CN116137613A (en) Data scheduling method, system, device and computer readable storage medium
JP7205530B2 (en) Transmission device, method and program
US11743134B2 (en) Programmable traffic management engine
US10742710B2 (en) Hierarchal maximum information rate enforcement
Jeon et al. Dynamic bandwidth allocation for QoS in IEEE 802.16 broadband wireless networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant