CN118316883A - Hierarchical traffic shaping device and method based on HQoS - Google Patents

Hierarchical traffic shaping device and method based on HQoS

Info

Publication number
CN118316883A
Authority
CN
China
Prior art keywords
token
traffic
module
shaper
queue
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410463881.4A
Other languages
Chinese (zh)
Inventor
马佩军
季冠捷
王文勃
潘伟涛
史江义
李康
郝跃
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Filing date
Publication date
Application filed by Xidian University
Publication of CN118316883A
Legal status: Pending


Abstract

The invention provides an HQoS-based hierarchical traffic shaping device and method, relating to the technical field of network control. The device comprises a configuration information module, a configuration information distribution module, a scheduling module and a traffic shaping module, wherein the traffic shaping module comprises a plurality of traffic shaper groups, each traffic shaper group comprises a plurality of traffic shapers, and the traffic shapers in each group are connected through an interconnection bus. The traffic shaping module is used to generate tokens according to a committed information rate and place them into the token bucket corresponding to each dequeue queue; when the number of remaining tokens in the token bucket corresponding to a dequeue queue is equal to 0 and a target queue needs to be dequeued, a token borrowing request is initiated through the interconnection bus, and when the request is responded to, the target queue is dequeued using tokens from the token bucket that responded to the request. In this way, the flexibility of traffic shaping and the utilization of bandwidth can be improved.

Description

Hierarchical traffic shaping device and method based on HQoS
Technical Field
The invention relates to the technical field of network control, and in particular to an HQoS-based hierarchical traffic shaping device and method.
Background
With the rapid development of the Internet, service types and service traffic have increased sharply, which makes traffic shaping of the service traffic necessary. Traffic shaping is a technique that matches the packet rate to that of downstream devices; without it, data transmitted from a high-speed link to a low-speed link, or bursty traffic that turns the egress of the low-speed link into a bandwidth bottleneck, would cause severe data loss. Traffic is typically shaped using conventional Quality of Service (QoS) techniques and Hierarchical Quality of Service (HQoS) techniques. HQoS is a technology that guarantees bandwidth for multiple users and multiple services under the Differentiated Services (DiffServ) model through a multi-level queue scheduling mechanism.
Currently, traffic shaping based on QoS is realized by deploying a plurality of traffic shapers, i.e., a plurality of token buckets, and configuring for each traffic shaper an initial bucket depth, a bucket-filling schedule, a bucket-draining schedule and a drain length according to the protocol of each flow. Each token bucket is filled with tokens at a fixed rate, and the corresponding drain length is subtracted whenever a queue dequeues; when the tokens in a traffic shaper reach zero, scheduling of that queue is prohibited. However, QoS can only police and shape traffic by traffic type, such as voice, data and video traffic, and cannot distinguish users, so the flexibility of traffic shaping is low; moreover, when the tokens in a traffic shaper are exhausted, the queue is forbidden to be scheduled, so the bandwidth utilization is low.
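For illustration, the conventional single-token-bucket behavior described above can be sketched in a few lines of Python. This is a minimal sketch, not the patent's implementation; the class and parameter names are assumptions chosen for readability.

```python
# Minimal sketch of a conventional single-rate token bucket (illustrative only).
class SimpleTokenBucket:
    def __init__(self, cir_tokens_per_tick, bucket_depth):
        self.cir = cir_tokens_per_tick   # tokens added per tick (committed information rate)
        self.depth = bucket_depth        # maximum number of tokens the bucket may hold
        self.tokens = bucket_depth       # start with a full bucket

    def tick(self):
        # Refill at the CIR; tokens beyond the bucket depth are simply discarded.
        self.tokens = min(self.depth, self.tokens + self.cir)

    def try_dequeue(self, packet_len):
        # A packet may leave only if enough tokens remain; otherwise the queue
        # is blocked, even when other flows have spare bandwidth (the limitation
        # that the invention addresses with token borrowing).
        if self.tokens >= packet_len:
            self.tokens -= packet_len
            return True
        return False
```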
Disclosure of Invention
The embodiments of the invention aim to provide an HQoS-based hierarchical traffic shaping device and method, which solve the problems of low traffic-shaping flexibility and low bandwidth utilization.
To solve the above technical problems, the embodiments of the invention provide the following technical solutions:
A first aspect of the present invention provides an HQoS-based hierarchical traffic shaping device. The device comprises a configuration information module, a configuration information distribution module, a scheduling module and a traffic shaping module; the traffic shaping module comprises a plurality of traffic shaper groups, each traffic shaper group comprises a plurality of traffic shapers, and the traffic shapers in each group are connected by an interconnection bus, wherein:
The configuration information module is used for receiving the address of each traffic shaper and the configuration information corresponding to each address;
The configuration information distribution module is used for distributing the configuration information to the corresponding traffic shapers according to each address, and storing the configuration information into the interconnection bus, wherein the configuration information comprises a committed information rate;
The scheduling module is used for selecting a dequeue queue from a plurality of enqueued queues according to a scheduling algorithm and sending the dequeue queue to the traffic shaping module;
The traffic shaping module is used for generating tokens according to the committed information rate and placing them into the token bucket corresponding to the dequeue queue; when the number of remaining tokens in the token bucket corresponding to the dequeue queue is equal to 0 and a target queue needs to be dequeued, a token borrowing request is initiated through the interconnection bus, and when the request is responded to, the target queue is dequeued using tokens from the token bucket that responded to the request.
A second aspect of the present invention provides an HQoS-based hierarchical traffic shaping method, the method comprising:
receiving the address of each traffic shaper and the configuration information corresponding to each address;
distributing the configuration information according to each address, and storing the configuration information, wherein the configuration information comprises a committed information rate;
Selecting a dequeue queue from a plurality of enqueued queues according to a scheduling algorithm, and transmitting the dequeue queue;
And generating tokens according to the committed information rate and placing them into the token bucket corresponding to the dequeue queue; when the number of remaining tokens in the token bucket corresponding to the dequeue queue is equal to 0 and a target queue needs to be dequeued, initiating a token borrowing request through an interconnection bus, and when the request is responded to, dequeuing the target queue using tokens from the token bucket that responded to the request.
Compared with the prior art, the device provided by the invention comprises a configuration information module, a configuration information distribution module, a scheduling module and a traffic shaping module, wherein the traffic shaping module comprises a plurality of traffic shaper groups, each traffic shaper group comprises a plurality of traffic shapers, and the traffic shapers in each group are connected through an interconnection bus. The scheduling module selects a dequeue queue from a plurality of enqueued queues according to a scheduling algorithm and sends it to the traffic shaping module. The traffic shaping module generates tokens according to the committed information rate and places them into the token bucket corresponding to the dequeue queue; when the remaining tokens in that bucket are exhausted and a target queue needs to be dequeued, it initiates a token borrowing request through the interconnection bus, and when the request is responded to, it dequeues the target queue using tokens from the responding token bucket. In this way, each traffic shaper in the traffic shaping module can shape and rate-limit traffic, which improves the flexibility of traffic shaping; and when the tokens in the bucket corresponding to the current queue are exhausted, if the user corresponding to another queue has not fully used its allocated bandwidth, its tokens may be borrowed for other services, which improves bandwidth utilization.
Drawings
The above, as well as additional purposes, features, and advantages of exemplary embodiments of the present invention will become readily apparent from the following detailed description when read in conjunction with the accompanying drawings. In the drawings, wherein like or corresponding reference numerals indicate like or corresponding parts, there are shown by way of illustration, and not limitation, several embodiments of the invention, in which:
Fig. 1 schematically shows a block diagram of an HQoS-based hierarchical traffic shaping device;
Fig. 2 schematically shows a block diagram of a traffic shaper group;
Fig. 3 schematically shows a block diagram of a traffic shaper in a traffic shaper group;
Fig. 4 schematically shows a block diagram of an interconnection bus;
Fig. 5 schematically shows an example block diagram of an application of the HQoS-based hierarchical traffic shaping device in a network processor;
Fig. 6 schematically shows a flow chart of an HQoS-based hierarchical traffic shaping method.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present invention are shown in the drawings, it should be understood that the present invention may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
It should be noted that: unless otherwise defined, technical or scientific terms used herein should be given the ordinary meaning as understood by one of ordinary skill in the art to which this invention belongs.
The following describes the device in the embodiment of the present invention in detail.
The HQoS-based hierarchical traffic shaping device in the embodiment of the invention is applied to the design of high-speed network switching chips and addresses the complexity of modern network services and the demand for differentiating service traffic at a finer granularity.
Fig. 1 schematically illustrates a block diagram of an HQoS-based hierarchical traffic shaping device in an embodiment of the present invention. Referring to Fig. 1, the device may include a configuration information module, a configuration information distribution module, a scheduling module and a traffic shaping module, where the traffic shaping module includes a plurality of traffic shaper groups, each traffic shaper group includes a plurality of traffic shapers, and the traffic shapers in each group are connected through an interconnection bus;
The configuration information module 101 is configured to receive the address of each traffic shaper and the configuration information corresponding to each address.
Specifically, the configuration information module needs to receive the configuration information sent by an external module. The configuration information includes, for each traffic shaper, its sharing information, committed information rate (CIR) and number of tokens allowed to be borrowed, as well as the mapping information between each dequeue queue and the corresponding traffic shaper.
Before the configuration information module receives the address of each traffic shaper and the configuration information corresponding to each address, a software compiler compiles the user-set sharing information, CIR, number of tokens allowed to be borrowed, and mapping information between each dequeue queue and the corresponding traffic shaper for each address into binary form. The configuration information module then receives the addresses of the traffic shapers and, for each address, the sharing information, CIR, number of tokens allowed to be borrowed, and mapping information between the dequeue queues and the corresponding traffic shapers in binary form.
The configuration information distribution module 102 is configured to distribute configuration information to each corresponding traffic shaper according to each address, and store the configuration information into the interconnection bus, where the configuration information includes a committed information rate.
Specifically, the configuration information distribution module may distribute the binary configuration information to each corresponding traffic shaper according to each address and store it in the interconnection bus. When the same service flow queue appears in several groups, or user queues are shared across different groups, the configuration information distribution module distributes the configuration information accordingly. For example, suppose there are service flows A, B, C and D mapped, when not shared, to traffic shapers 0, 1, 2 and 3, respectively; if flows A and B are configured as one shared traffic group and flows B and C as another, then flows A and B are both mapped to traffic shaper 0, and flows B and C are both mapped to traffic shaper 1.
The scheduling module 103 is configured to select a dequeue queue from the plurality of enqueued queues according to a scheduling algorithm and send the dequeue queue to the traffic shaping module.
The scheduling module is further configured to prohibit the target queue from being continuously scheduled when it receives the feedback information sent by the traffic shaping module when a token borrowing request is not responded to, and to allow the target queue to be scheduled again once tokens appear in the token bucket.
Specifically, the scheduling module sorts the dequeue queues selected from the multiple enqueued queues according to a specific scheduling algorithm. For example, under strict priority scheduling, the current highest-priority queue is dequeued each time, while under Weighted Fair Queuing (WFQ) the queues are dequeued according to their given weights. When the scheduling module receives information from the traffic shaping module that a queue is not allowed to be scheduled, that queue is placed at the end of the ordering and forbidden to dequeue.
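A minimal Python sketch of this selection step is given below. It is illustrative only, not the hardware scheduler of the patent: strict priority is applied across priority levels, a simplified weight-accumulation rule stands in for WFQ within a level, and queues back-pressured by the traffic shaper are skipped. The field names and data layout are assumptions.

```python
# Illustrative scheduler step: strict priority across levels, a simplified
# weight-based choice within a level, and shaper-blocked queues skipped.
def select_dequeue(queues, blocked):
    """queues: list of dicts with 'id', 'priority', 'weight', 'deficit' for
    queues that currently hold data. blocked: ids forbidden by the shaper."""
    eligible = [q for q in queues if q["id"] not in blocked]
    if not eligible:
        return None
    top = max(q["priority"] for q in eligible)            # strict priority first
    candidates = [q for q in eligible if q["priority"] == top]
    for q in candidates:                                   # accumulate weighted credit
        q["deficit"] += q["weight"]
    chosen = max(candidates, key=lambda q: q["deficit"])   # largest accumulated share wins
    chosen["deficit"] = 0
    return chosen["id"]
```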
The scheduling module needs to exchange information with the traffic shaping module. The traffic shaping module comprises a service traffic shaping sub-module, a user traffic shaping sub-module, a user group traffic shaping sub-module and a port traffic shaping sub-module.
The network data flow may be divided into four layers: flow queues, user queues, user group queues and port queues. The dequeue queue selected by the scheduling module is a flow queue, a user queue, a user group queue or a port queue. Each layer is provided with groups of traffic shapers; the traffic shaper groups of the same layer are connected through transit interconnection registers, and each group contains a plurality of traffic shapers (more than 2) and an interconnection bus. The invention is exemplified with 8 traffic shapers and one interconnection bus per group. Each traffic shaper also has a configuration interface through which external logic configures the token bucket in the traffic shaper. Fig. 2 schematically shows a block diagram of a traffic shaper group; specifically, Fig. 2 shows a group implemented by combinational logic, where the crossing points of the network are register files holding the information of the respective traffic shapers in the group. The figure shows, by way of example, 8 traffic shapers in the group, namely traffic shaper 0 through traffic shaper 7, connected by an interconnection bus.
The scheduling module comprises a service flow scheduling sub-module, a user scheduling sub-module, a user group scheduling sub-module and a port scheduling sub-module. The service flow scheduling sub-module is used to select a dequeued flow queue from a plurality of enqueued flow queues and send it to the user scheduling sub-module and the user traffic shaping sub-module. The user scheduling sub-module is used to select a dequeued user queue from the dequeued flow queues and send it to the user group scheduling sub-module and the user group traffic shaping sub-module. The user group scheduling sub-module is used to select a dequeued user group queue from the dequeued user queues and send it to the port scheduling sub-module and the port traffic shaping sub-module. The port scheduling sub-module is used to select a dequeued port queue from the dequeued user group queues.
The service flow scheduling sub-module and the service traffic shaping sub-module form the first layer of HQoS, the user scheduling sub-module and the user traffic shaping sub-module the second layer, the user group scheduling sub-module and the user group traffic shaping sub-module the third layer, and the port scheduling sub-module and the port traffic shaping sub-module the fourth layer.
The traffic shaping module 104 is configured to generate tokens according to the committed information rate and place them into the token bucket corresponding to the dequeue queue; when the number of remaining tokens in the token bucket corresponding to the dequeue queue is equal to 0 and a target queue needs to be dequeued, it initiates a token borrowing request through the interconnection bus, and when the request is responded to, it dequeues the target queue using tokens from the token bucket that responded to the request.
The traffic shaping module is further configured to send feedback information to the scheduling module when the token borrowing request is not responded to. The scheduling module is further configured to prohibit the target queue from being continuously scheduled when the feedback information is received, and to allow the target queue to be scheduled again once tokens appear in the token bucket.
For example, suppose a user queue can obtain a bandwidth of 100 Mbps and carries three service flows, denoted A, B and C. The arrival rates of A and B are 50 Mbps and 30 Mbps respectively, and the arrival rate of C is 40 Mbps. Queues A and B always have data to send and are insensitive to the peak rate, that is, no maximum borrowing amount is configured for service flow C; queue C sends a segment of data at intervals; and the three queues are scheduled by WFQ with a weight ratio of 5:3:2.
Under the condition that bandwidth borrowing is not allowed, when the C queue has no data, the bandwidths obtained by A and B are 50 Mbps and 30 Mbps, respectively. When the C queue has data, A, B and C obtain 50 Mbps, 30 Mbps and 20 Mbps, respectively.
Under the condition that bandwidth borrowing is allowed, the externally configured borrowing share of the A queue is 3/5 of the C queue's bandwidth and that of the B queue is 2/5. When the C queue has no data, the bandwidths obtained by A and B are (50 + 40 × 3/5) = 74 Mbps and (30 + 40 × 2/5) = 46 Mbps, respectively. When the C queue has data, A, B and C obtain 50 Mbps, 30 Mbps and 20 Mbps, respectively.
In the ideal case, when the C queue is idle, queues A and B divide the idle bandwidth of the C queue according to their configured borrowing proportions, thereby improving the bandwidth utilization.
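The bandwidth figures in this example can be reproduced with the short calculation below (values in Mbps, taken directly from the example above; the variable names are only for illustration).

```python
# Reproducing the borrowing example above (all values in Mbps).
rate_a, rate_b = 50, 30            # bandwidth of A and B while C is active
idle_c = 40                        # bandwidth figure used for the idle C queue
share_a, share_b = 3 / 5, 2 / 5    # configured borrowing shares of C's bandwidth
print(rate_a + idle_c * share_a)   # 74.0 -> bandwidth of queue A when C is idle
print(rate_b + idle_c * share_b)   # 46.0 -> bandwidth of queue B when C is idle
```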
Specifically, the traffic shaping module is the main module that completes traffic shaping and is divided into four layers: a service traffic shaping sub-module, a user traffic shaping sub-module, a user group traffic shaping sub-module and a port traffic shaping sub-module. Each layer of traffic shaping sub-module contains a group of traffic shapers, the shapers of the layers are connected in a tree structure, and the traffic of every layer must be shaped; the four layers of traffic shaping sub-modules, i.e., the four layers of traffic shapers, jointly control the traffic and prevent it from exceeding the configured rate. Tokens may be shared among the token buckets of different traffic shapers in the same layer through the interconnection bus between the token buckets. The traffic shaping module has three interfaces: a configuration information interface, an interface that interacts with the scheduling module, and an interface for outputting queue information. The configuration information interface is mainly used to configure the sharing information of each traffic shaper, the CIR, the number T of tokens allowed to be borrowed, and the mapping information between each dequeue queue and the corresponding traffic shaper held in the configuration information module. The interface that interacts with the scheduling module is mainly used to receive scheduling results and to send information about queues that are not allowed to be scheduled.
The traffic shaping module is further configured to, before determining the token bucket in the traffic shaper corresponding to the queue according to the mapping information, generate tokens according to the committed information rate; send a token to the token bucket when the bucket corresponding to that token owes no tokens and is not full; issue a token borrowing permission signal when the bucket owes no tokens, is full, and a token borrowing request exists; and determine a target traffic shaper according to the number of handshakes between the first traffic shapers requesting to borrow tokens and the second traffic shapers corresponding to the token borrowing permission signal, where the target traffic shaper is either the traffic shaper that borrows a token or the traffic shaper that lends a token.
Fig. 3 schematically shows a block diagram of the traffic shapers in a traffic shaper group, where n traffic shapers belong to the group. The traffic shaper groups are connected through transit interconnection registers to ensure that timing is not violated; because of this limitation, token borrowing between different traffic shaper groups is less real-time than token borrowing within a group. The n traffic shapers within a group are connected through a combinational-logic interconnection bus, over which they communicate to complete operations such as token borrowing, credit-value recording and traffic sharing. Specifically, each traffic shaper in the traffic shaping module 104 includes a token distributor, a token bucket and an overflow token arbitration sub-module, and the interconnection bus stores the number of tokens owed by each token bucket;
Each token distributor is used for generating tokens according to the committed information rate and judging, according to the number of tokens its token bucket owes as recorded on the interconnection bus, whether the bucket is under-token. If the bucket is under-token, the distributor repays tokens according to the number owed, increasing the count in the corresponding token bucket until the tokens borrowed when the traffic peaked have been returned. If the bucket is not under-token, the distributor judges whether the bucket is full; if it is not full, the distributor sends the token to the bucket, filling it with tokens; if it is full, the distributor sends bucket-full information to each overflow token arbitration sub-module.
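The per-refill behavior of a token distributor, as described above, can be sketched as follows. This is an illustrative Python sketch under the assumption that the owed-token count is read from and written back to the interconnection bus by the caller; the function and field names are not taken from the patent.

```python
# Illustrative per-refill step of a token distributor: newly generated tokens
# first repay any tokens the bucket owes, then refill the bucket, and only a
# full bucket raises the bucket-full signal toward the overflow arbiter.
def distributor_tick(cir, owed, tokens, depth):
    """Returns (owed, tokens, bucket_full) after adding one CIR worth of tokens."""
    new_tokens = cir
    if owed > 0:                        # bucket is "under token": repay the debt first
        repaid = min(owed, new_tokens)
        owed -= repaid
        new_tokens -= repaid
    tokens += new_tokens
    bucket_full = tokens >= depth       # full bucket: surplus may be lent or discarded
    tokens = min(tokens, depth)
    return owed, tokens, bucket_full
```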
Specifically, the token bucket used in the embodiment of the invention is a single-rate token bucket, which mainly involves three parameters: the CIR, the Committed Burst Size (CBS) and the Extended Burst Size (EBS, also called the peak burst size). The CIR is the information transfer rate in the normal state on a specific virtual circuit agreed with the network; in the token bucket algorithm it corresponds to the rate at which tokens are generated. The CBS, expressed in bytes, defines the maximum burst allowed before part of the traffic exceeds the CIR and is represented in the token bucket algorithm by the capacity (depth) of the token bucket; a larger CBS allows a larger burst. The EBS, also in bytes, defines the maximum traffic size allowed per burst and, in the token bucket algorithm designed in the embodiment of the invention, is represented by the token borrowing threshold. In theory the units of the committed burst size and the extended burst size are bytes, but considering the complexity of the circuit implementation and the requirement of timing convergence, the unit used in the invention is the data size corresponding to one descriptor of the preceding cache management module, namely 32 bytes. Although this introduces some bias into the bandwidth limit, it greatly reduces the random access memory (RAM) resources consumed for storing token bucket data and sharing data on the bus in a hardware implementation.
Each token bucket is used to determine, according to the mapping information, the token bucket in the traffic shaper corresponding to a queue, to receive tokens and place them into the bucket, and to dequeue the dequeue queue when it is received, determining the remaining number of tokens from the total number of tokens in the bucket and the number of tokens the dequeue queue consumes. When the remaining number of tokens is equal to 0 and a target queue needs to be dequeued, a token borrowing request is initiated through the interconnection bus, and when the request is responded to, the target queue is dequeued using tokens from the token bucket that responded to the request.
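The dequeue side of a token bucket can be sketched as below. This is an assumption-laden illustration, not the patent's hardware: the CBS is treated as the bucket depth, the EBS as the maximum number of tokens the bucket may have outstanding as borrowed, quantities are counted in the 32-byte descriptor units described above, and the interconnect.request_borrow() call is a hypothetical stand-in for the borrow handshake on the interconnection bus.

```python
# Illustrative dequeue step of one token bucket with borrowing over the bus.
def try_dequeue(bucket, queue_len_units, interconnect):
    """bucket: dict with 'id', 'tokens', 'borrowed', 'ebs' (borrow threshold).
    interconnect: object whose request_borrow(shaper_id, amount) returns True
    when another shaper's full bucket grants the loan (hypothetical interface)."""
    if bucket["tokens"] >= queue_len_units:
        bucket["tokens"] -= queue_len_units           # normal dequeue within the CIR/CBS
        return True
    if bucket["tokens"] == 0 and bucket["borrowed"] < bucket["ebs"]:
        # Remaining tokens exhausted: raise a borrow request on the bus.
        if interconnect.request_borrow(bucket["id"], queue_len_units):
            bucket["borrowed"] += queue_len_units     # debt later repaid by the distributor
            return True
    return False                                      # not granted: back-pressure the scheduler
```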
Each overflow token arbitration sub-module is used to query, when its token bucket is full, whether a token borrowing request exists on the interconnection bus; if such a request exists, it issues a token borrowing permission signal and determines the target traffic shaper according to the number of handshakes between the first traffic shaper requesting to borrow a token and the second traffic shaper corresponding to the token borrowing permission signal; if no token borrowing request exists, the token is discarded.
Fig. 4 schematically shows a block diagram of the interconnection bus, which connects the individual traffic shapers; the figure illustrates a group of 8 traffic shapers, each with one queue entering it, the queues being queue 1 through queue 8. The interconnection bus comprises a feedback device, red lines and black lines. The red lines form the data exchange channel between the output of the scheduling module (the scheduler) and the traffic shapers; the red cross nodes are switch nodes, and whether a cross node is enabled is determined by the sharing information of each traffic shaper in the external configuration: a node is enabled when sharing is configured and disabled otherwise. Bandwidth sharing between queues and cascading of traffic shapers can be realized through the cross nodes. For example, if queue 1 and queue 2 enable joint traffic shaping, queue 2 and queue 3 enable joint traffic shaping, and queue 3 and queue 4 enable joint traffic shaping, then queue 2 must pass through traffic shapers 1 and 2, and queue 3 must pass through traffic shapers 2 and 3 at the same time. The feedback device judges, according to the number of remaining tokens in the token bucket of a traffic shaper, whether back-pressure to the scheduler (the scheduling module) is needed: if the token bucket is empty, the scheduler is notified that the queue or queue group is not allowed to continue scheduling; if the token bucket is not empty, the feedback device does nothing and no back-pressure is applied. The black lines form the data exchange channel between the traffic shapers and realize the token borrowing function.
Each overflow token arbitration sub-module is specifically configured to query, when it receives the bucket-full information, whether a token borrowing request exists on the interconnection bus. If a request exists, it issues the token borrowing permission signal. When the number of handshakes is equal to 1 (that is, exactly one first traffic shaper has issued a token borrowing request on the interconnection bus and exactly one second traffic shaper has issued a borrowing permission signal), the sub-module looks up, at the corresponding cross node of the interconnection bus, the number of tokens the first traffic shaper has borrowed and judges whether this number exceeds the configured threshold. If so, the token in the second traffic shaper is allowed to be lent to the first traffic shaper, the second traffic shaper is determined as the target traffic shaper, and the permitted borrowing is reflected in the bus interconnection structure by opening the corresponding red cross node; if not, the token in the second traffic shaper cannot be lent to the first traffic shaper and the cross node is closed.
When the number of handshakes is greater than 1, two cases arise. If several first traffic shapers have issued token borrowing requests and one second traffic shaper has issued a borrowing permission signal, a preset first traffic shaper is selected from the first traffic shapers according to the first credit value and first priority corresponding to each of them, and that preset first traffic shaper is determined as the target traffic shaper. If several second traffic shapers have issued borrowing permission signals and one first traffic shaper has issued a token borrowing request, a preset second traffic shaper is selected from the second traffic shapers according to the second credit value and second priority corresponding to each of them, and that preset second traffic shaper is determined as the target traffic shaper. If no token borrowing request exists, the token is discarded.
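The selection rules used by the overflow token arbiter, described here and detailed in the following paragraphs, can be sketched as follows. This is an illustrative Python sketch: the credit rule (lowest credit once a shaper's borrowed count exceeds its configured threshold) follows the later description, and the dictionary fields are assumptions.

```python
# Illustrative arbitration rules for matching borrowers and lenders.
def credit(shaper, threshold):
    # Lowest credit once a shaper has borrowed more tokens than its threshold.
    return 0 if shaper["borrowed"] > threshold else 1

def pick_borrower(requesters, threshold):
    # Several requesters, one lender: prefer high credit, then high priority,
    # so spare bandwidth goes to the more important flow or user.
    return max(requesters, key=lambda s: (credit(s, threshold), s["priority"]))

def pick_lender(lenders, threshold):
    # Several lenders, one requester: prefer high credit, then low priority,
    # so the idle bandwidth of low-priority traffic is lent out first.
    return max(lenders, key=lambda s: (credit(s, threshold), -s["priority"]))
```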
The token borrowing permission signal may also be regarded as a flag indicating that the corresponding traffic flow is idle and that other traffic shapers are allowed to borrow its tokens.
Each overflow token arbitration sub-module is specifically configured as follows. When there are several first traffic shapers and one second traffic shaper, it sorts the first credit values and first priorities of the first traffic shapers and determines the preset first traffic shaper with the highest credit value and highest priority as the target traffic shaper, ensuring that bandwidth is lent to the service flow or user with the higher priority. When there are several second traffic shapers and one first traffic shaper, it sorts the second credit values and second priorities of the second traffic shapers and determines the preset second traffic shaper with the highest credit value and lowest priority as the target traffic shaper, ensuring that the spare bandwidth of low-priority traffic and users is lent out first. In general, because of the scheduling algorithm, queues with lower priority or lower weight are more likely to have idle bandwidth, and the token borrowing and repayment mechanism guarantees fair bandwidth allocation among the queues over a longer period of time.
The credit value may be determined based on whether the number of borrowed tokens exceeds the configured threshold: the credit value is lowest when the number of borrowed tokens exceeds the threshold and highest when it does not.
Only when the token bucket in a traffic shaper is full can the token borrowing permission signal be issued; this indicates that the corresponding queue is idle and residual bandwidth exists, so that when the rate of another queue exceeds its CIR it can occupy this residual bandwidth, achieving full utilization of the bandwidth.
Fig. 5 schematically shows an example block diagram of the HQoS-based hierarchical traffic shaping device applied in a network processor. During network transmission, different service flows, different users and different user groups have different bandwidth requirements, and the network rates supported by downstream network devices also differ, such as 100 Mbps routers in home equipment and gigabit or faster routers in enterprises; without per-user traffic shaping, network congestion may result. In the figure, network data, i.e., a packet, passes through a parser to generate a Packet Header Vector (PHV) and is matched into different service flows by flow classification according to software-defined rules. The matching result is 13 bits of data comprising the port number, user group, user name and priority corresponding to the service flow. Accordingly, 512 traffic shaper groups are needed in the first layer of the HQoS, with no data interaction between the groups. In the 128 traffic shaper groups of the second layer, because the number of users contained in each user group is configured externally, data must be transferred between the groups through transit interconnection registers. In the user group queues of the third layer, 16 traffic shaper groups are needed, and data must likewise be transferred through transit interconnection registers. In the port queues of the fourth layer, because the ports are physically isolated, only 4 traffic shapers are needed for port traffic shaping.
The invention realizes the basic traffic shaping function, allows borrowing of bandwidth within the same layer, and maximizes bandwidth utilization among services, users and user groups. The mapping of different queues to shapers and the cascading relations among shapers can be changed through external configuration: when the network protocol or application scenario is updated, only the configuration information needs to be rewritten, the hardware structure does not need to change, and a complex design and verification process is avoided.
Based on the implementation of the HQoS-based hierarchical traffic shaping device of Fig. 1, the device in the embodiment of the invention includes a configuration information module, a configuration information distribution module, a scheduling module and a traffic shaping module; the traffic shaping module includes a plurality of traffic shaper groups, each traffic shaper group includes a plurality of traffic shapers, and the traffic shapers in each group are connected through an interconnection bus. The configuration information module receives the address of each traffic shaper and the configuration information corresponding to each address. The configuration information distribution module distributes the configuration information to the corresponding traffic shapers according to each address and stores it into the interconnection bus, the configuration information including the committed information rate. The scheduling module selects a dequeue queue from a plurality of enqueued queues according to a scheduling algorithm and sends it to the traffic shaping module. The traffic shaping module generates tokens according to the committed information rate and places them into the token bucket corresponding to the dequeue queue; when the remaining tokens in that bucket are equal to 0 and a target queue needs to be dequeued, it initiates a token borrowing request through the interconnection bus, and when the request is responded to, it dequeues the target queue using tokens from the responding token bucket. In this way, each traffic shaper in the traffic shaping module can shape and rate-limit traffic, which improves the flexibility of traffic shaping; and when the tokens in the bucket corresponding to the current queue are exhausted, if the user corresponding to another queue has not fully used its allocated bandwidth, its tokens may be borrowed for other services, which improves bandwidth utilization.
Based on the same inventive concept, and as an implementation of the HQoS-based hierarchical traffic shaping device, an embodiment of the invention also provides an HQoS-based hierarchical traffic shaping method. Fig. 6 is a flowchart of the HQoS-based hierarchical traffic shaping method in an embodiment of the present invention. Referring to Fig. 6, the method may include:
S601, receiving the address of each traffic shaper and the configuration information corresponding to each address.
S602, distributing the configuration information according to each address, and storing the configuration information.
Wherein the configuration information includes a committed information rate.
S603, selecting a dequeue queue from a plurality of enqueued queues according to a scheduling algorithm, and sending the dequeue queue.
The selected dequeue queue is a flow queue, a user queue, a user group queue or a port queue.
When the plurality of enqueued queues are flow queues, dequeued flow queues are selected from the enqueued flow queues and sent. When the plurality of enqueued queues are dequeued flow queues, dequeued user queues are selected from the dequeued flow queues and sent; when the plurality of enqueued queues are dequeued user queues, dequeued user group queues are selected from the dequeued user queues and sent; and when the plurality of enqueued queues are dequeued user group queues, dequeued port queues are selected from the dequeued user group queues.
S604, generating tokens according to the committed information rate and placing them into the token bucket corresponding to the dequeue queue; when the number of remaining tokens in the token bucket corresponding to the dequeue queue is equal to 0 and a target queue needs to be dequeued, initiating a token borrowing request through the interconnection bus; and when the request is responded to, dequeuing the target queue using tokens from the token bucket that responded to the request.
Specifically, generating tokens according to the committed information rate and placing them into the token bucket corresponding to the dequeue queue, initiating a token borrowing request through the interconnection bus when the number of remaining tokens in that bucket is equal to 0 and a target queue needs to be dequeued, and dequeuing the target queue using tokens from the responding token bucket when the request is responded to, includes:
Determining, according to the mapping information, the token bucket in the traffic shaper corresponding to the queue, receiving tokens and placing them into the bucket, dequeuing the dequeue queue when it is received, determining the remaining number of tokens from the total number of tokens in the bucket and the number of tokens the dequeue queue consumes, initiating a token borrowing request through the interconnection bus when the remaining number of tokens is equal to 0 and a target queue needs to be dequeued, and dequeuing the target queue using tokens from the token bucket that responded to the request when the request is responded to.
After the target queue is dequeued using tokens from the token bucket that responded to the token borrowing request, the method further comprises:
When the token borrowing request is not responded to, sending feedback information; and when the feedback information is received, prohibiting the target queue from being scheduled until tokens appear in the token bucket, whereupon the target queue is allowed to be scheduled again.
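The back-pressure loop in this step can be sketched as below; it is a minimal illustration under the assumption that the scheduler keeps a set of blocked queue identifiers, with all names being hypothetical.

```python
# Illustrative back-pressure loop: an ungranted borrow request blocks the
# target queue at the scheduler until its token bucket holds tokens again.
blocked = set()

def on_borrow_result(queue_id, granted):
    if not granted:
        blocked.add(queue_id)        # feedback received: forbid further scheduling

def on_token_refill(queue_id, token_count):
    if token_count > 0:
        blocked.discard(queue_id)    # tokens appeared: the queue may be scheduled again
```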
Before determining the token bucket in the traffic shaper corresponding to the queue according to the mapping information, the method further comprises:
Generating tokens according to the committed information rate; sending a token to the token bucket when the bucket corresponding to that token owes no tokens and is not full; issuing a token borrowing permission signal when the bucket owes no tokens, is full, and a token borrowing request exists; and determining a target traffic shaper according to the number of handshakes between the first traffic shaper requesting to borrow a token and the second traffic shaper permitting the borrowing, where the target traffic shaper is either the traffic shaper that borrows a token or the traffic shaper that lends a token.
The configuration information also includes the mapping information between each dequeue queue and the corresponding traffic shaper.
Specifically, generating a token according to the committed information rate, sending the token to the token bucket when the bucket corresponding to the token owes no tokens and is not full, issuing a token borrowing permission signal when the bucket owes no tokens, is full, and a token borrowing request exists, and determining the target traffic shaper according to the number of handshakes between the first traffic shaper requesting to borrow a token and the second traffic shaper corresponding to the token borrowing permission signal, includes the following steps:
Generating tokens according to the committed information rate, judging whether the token bucket is under-token according to the number of owed tokens recorded for the bucket on the interconnection bus, repaying tokens according to the number owed if the bucket is under-token, judging whether the bucket is full if it is not under-token, sending the token to the bucket and filling it if it is not full, and sending bucket-full information if it is full;
Querying, when the token bucket is full, whether a token borrowing request exists on the interconnection bus; if such a request exists, issuing a token borrowing permission signal and determining the target traffic shaper according to the number of handshakes between the first traffic shaper requesting to borrow a token and the second traffic shaper corresponding to the token borrowing permission signal; and if no token borrowing request exists, discarding the token.
Specifically, if a token borrowing request exists, issuing a token borrowing permission signal and determining the target traffic shaper according to the number of handshakes between the first traffic shaper requesting to borrow a token and the second traffic shaper corresponding to the token borrowing permission signal includes:
If a token borrowing request exists, a token borrowing permission signal is issued; when the number of handshakes is equal to 1, the number of tokens the first traffic shaper has borrowed is looked up on the interconnection bus and it is judged whether this number exceeds the configured threshold; if so, the token in the second traffic shaper is allowed to be lent to the first traffic shaper and the second traffic shaper is determined as the target traffic shaper; if not, the token in the second traffic shaper cannot be lent to the first traffic shaper;
When the number of handshakes is greater than 1: if there are several first traffic shapers and one second traffic shaper, a preset first traffic shaper is selected from the first traffic shapers according to the first credit value and first priority corresponding to each of them and is determined as the target traffic shaper; and if there are several second traffic shapers and one first traffic shaper, a preset second traffic shaper is selected from the second traffic shapers according to the second credit value and second priority corresponding to each of them and is determined as the target traffic shaper.
Specifically, when the number of handshakes is greater than 1 and there are several first traffic shapers and one second traffic shaper, selecting a preset first traffic shaper from the first traffic shapers according to the first credit value and first priority corresponding to each of them and determining it as the target traffic shaper includes: sorting the first credit values and first priorities of the first traffic shapers and determining the preset first traffic shaper with the highest credit value and highest priority as the target traffic shaper.
Specifically, when the number of handshakes is greater than 1 and there are several second traffic shapers and one first traffic shaper, selecting a preset second traffic shaper from the second traffic shapers according to the second credit value and second priority corresponding to each of them and determining it as the target traffic shaper includes: sorting the second credit values and second priorities of the second traffic shapers and determining the preset second traffic shaper with the highest credit value and lowest priority as the target traffic shaper.
It should be noted that the above description of the HQoS-based hierarchical traffic shaping method embodiment is similar to that of the device embodiment and has similar advantageous effects. For technical details not disclosed in the method embodiment of the present invention, please refer to the description of the device embodiment of the present invention.
The foregoing describes merely illustrative embodiments of the present invention, and the scope of the invention is not limited thereto; any variation or substitution that a person skilled in the art can readily conceive within the technical scope of the invention shall be covered by it. Therefore, the protection scope of the invention is defined by the claims.

Claims (10)

1. An HQoS-based hierarchical traffic shaping device, characterized in that the device comprises a configuration information module, a configuration information distribution module, a scheduling module and a traffic shaping module, the traffic shaping module comprises a plurality of traffic shaper groups, each traffic shaper group comprises a plurality of traffic shapers, and the traffic shapers in each traffic shaper group are connected through an interconnection bus,
The configuration information module is used for receiving the addresses of the traffic shapers and the configuration information corresponding to the addresses;
The configuration information distribution module is used for distributing the configuration information to corresponding traffic shapers according to the addresses, and storing the configuration information into the interconnection bus, wherein the configuration information comprises a committed information rate;
the scheduling module is used for selecting a dequeue queue from a plurality of enqueued queues according to a scheduling algorithm and sending the dequeue queue to the traffic shaping module;
The traffic shaping module is used for generating tokens according to the committed information rate and placing the tokens into the token bucket corresponding to the dequeue queue, initiating a token borrowing request through the interconnection bus when the number of the remaining tokens of the token bucket corresponding to the dequeue queue is equal to 0 and a target queue needs to be dequeued, and dequeuing the target queue by using the tokens in the token bucket corresponding to the token borrowing request when the token borrowing request is responded to.
2. The apparatus of claim 1, wherein the traffic shaping module is further configured to send feedback information to the scheduling module when the token borrowing request is not responded to; and the scheduling module is further configured to prohibit the target queue from being continuously scheduled when the feedback information is received, and to allow the target queue to be continuously scheduled once a token appears in the token bucket.
3. The apparatus of claim 1, wherein the traffic shaping module is further configured to, prior to determining the token bucket in the traffic shaper corresponding to the dequeue queue based on the mapping information, generate the token based on the committed information rate, send the token to the token bucket when the token bucket corresponding to the token is not under-token and the token bucket is not full, issue a token borrowing permission signal when the token bucket corresponding to the token is not under-token, the token bucket is full, and a token borrowing request exists, and determine a target traffic shaper based on the number of handshakes between a first traffic shaper requesting to borrow a token and a second traffic shaper corresponding to the token borrowing permission signal, the target traffic shaper being either a traffic shaper borrowing a token or a traffic shaper lending a token.
4. The apparatus of claim 3 wherein each traffic shaper comprises a token distributor, a token bucket, and an overflow token arbitration sub-module, wherein the interconnect bus stores the number of tokens owed by each token bucket, wherein the configuration information further comprises mapping information for each dequeue queue and corresponding traffic shaper,
Each token distributor is configured to generate the token according to the committed information rate, judge whether the token bucket is under-token according to the number of tokens the corresponding token bucket owes as recorded on the interconnection bus, if the token bucket is under-token, repay the tokens according to the number of tokens owed, if the token bucket is not under-token, judge whether the token bucket is full, if the token bucket is not full, send the token to the token bucket, filling the token into the token bucket, and if the token bucket is full, send the token bucket full information to each overflow token arbitration sub-module;
Each token bucket is configured to determine a token bucket in a traffic shaper corresponding to the dequeue according to the mapping information, receive the token and place the token in the token bucket, dequeue the dequeue when the dequeue is received, determine a remaining number of tokens in the token bucket according to a total number of tokens in the token bucket and a number of tokens corresponding to the dequeue, and initiate a borrow token request through the interconnect bus when the remaining number of tokens is equal to 0 and a target queue needs to be dequeued, and dequeue the target queue using a token in the token bucket corresponding to the borrow token request when the borrow token request is responded;
Each overflow token arbitration sub-module is used for inquiring whether a token borrowing request exists in the interconnection bus when the token bucket is full, if so, issuing the token borrowing permission signal, and determining the target flow shaper according to the handshake quantity of a first flow shaper requesting to borrow a token and a second flow shaper corresponding to the token borrowing permission signal; and discarding the token if the token borrowing request does not exist.
5. The apparatus of claim 4, wherein each overflow token arbitration sub-module is specifically configured to issue the permit token signal if the token borrowing request exists, query the interconnect bus for a corresponding token borrowing number of the first traffic shaper when the handshake number is equal to 1, determine whether the token borrowing number exceeds a configuration threshold, if so, permit the token in the second traffic shaper to be borrowed to the first traffic shaper, determine the second traffic shaper as the target traffic shaper, and if not, fail to lend the token in the second traffic shaper to the first traffic shaper.
6. The apparatus of claim 4, wherein each overflow token arbitration sub-module is specifically configured to issue the token borrowing permission signal if the token borrowing request exists; when the number of handshakes is greater than 1 and the number of first traffic shapers is more than one while the number of second traffic shapers is one, screen out a preset first traffic shaper according to a first credit value and a first priority corresponding to each first traffic shaper and determine the preset first traffic shaper as the target traffic shaper; and when the number of second traffic shapers is more than one while the number of first traffic shapers is one, screen out a preset second traffic shaper according to a second credit value and a second priority corresponding to each second traffic shaper and determine the preset second traffic shaper as the target traffic shaper.
7. The apparatus of claim 6, wherein the overflow token arbitration sub-modules are specifically configured to, when the number of handshakes is greater than 1, rank a first credit value and a first priority of the plurality of first traffic shapers when the number of first traffic shapers is a plurality and the number of second traffic shapers is one, determine the preset first traffic shaper corresponding to a highest credit value and a highest priority as the target traffic shaper, and rank a second credit value and a second priority of the plurality of second traffic shapers when the number of second traffic shapers is a plurality and the number of first traffic shapers is one, and determine the preset second traffic shaper corresponding to a highest credit value and a lowest priority as the target traffic shaper.
8. The apparatus of claim 7, wherein the traffic shaping module comprises a service traffic shaping sub-module, a user traffic shaping sub-module, a user group traffic shaping sub-module and a port traffic shaping sub-module; the dequeue queue selected by the scheduling module is a flow queue, a user queue, a user group queue or a port queue; the scheduling module comprises a service flow scheduling sub-module, a user scheduling sub-module, a user group scheduling sub-module and a port scheduling sub-module; the service flow scheduling sub-module is used for selecting a dequeued flow queue from a plurality of enqueued flow queues and sending the dequeued flow queue to the user scheduling sub-module and the user traffic shaping sub-module; the user scheduling sub-module is used for selecting a dequeued user queue from the dequeued flow queues and sending the dequeued user queue to the user group scheduling sub-module and the user group traffic shaping sub-module; the user group scheduling sub-module is used for selecting a dequeued user group queue from the dequeued user queues and sending the dequeued user group queue to the port scheduling sub-module and the port traffic shaping sub-module; and the port scheduling sub-module is used for selecting a dequeued port queue from the dequeued user group queues.
9. An HQoS-based hierarchical traffic shaping method, applied to the HQoS-based hierarchical traffic shaping device according to any one of claims 1 to 8, the method comprising:
receiving the address of each traffic shaper and configuration information corresponding to each address;
Distributing the configuration information according to the addresses, and storing the configuration information, wherein the configuration information comprises a committed information rate;
Selecting a dequeue queue from a plurality of enqueued queues according to a scheduling algorithm, and transmitting the dequeue queue;
Generating tokens according to the committed information rate, placing the tokens into the token bucket corresponding to the dequeue queue, initiating a token borrowing request through the interconnection bus when the number of the remaining tokens of the token bucket corresponding to the dequeue queue is equal to 0 and a target queue needs to be dequeued, and dequeuing the target queue by using the tokens in the token bucket corresponding to the token borrowing request when the token borrowing request is responded to.
10. The method of claim 9, wherein after dequeuing the target queue using tokens in the token bucket corresponding to the token borrowing request when the token borrowing request is responded to, the method further comprises:
When the token borrowing request is not responded to, sending feedback information;
And when the feedback information is received, prohibiting the target queue from being continuously scheduled, and allowing the target queue to be continuously scheduled once the token appears in the token bucket.
CN202410463881.4A 2024-04-17 Hierarchical flow shaping device and method based on HQoS Pending CN118316883A (en)

Publications (1)

Publication Number Publication Date
CN118316883A 2024-07-09


Similar Documents

Publication Publication Date Title
US7457297B2 (en) Methods and apparatus for differentiated services over a packet-based network
US7474668B2 (en) Flexible multilevel output traffic control
TWI477109B (en) A traffic manager and a method for a traffic manager
JP3715098B2 (en) Packet distribution apparatus and method in communication network
US7596086B2 (en) Method of and apparatus for variable length data packet transmission with configurable adaptive output scheduling enabling transmission on the same transmission link(s) of differentiated services for various traffic types
Feliciian et al. An asynchronous on-chip network router with quality-of-service (QoS) support
KR100323258B1 (en) Rate guarantees through buffer management
US8638664B2 (en) Shared weighted fair queuing (WFQ) shaper
Guérin et al. Scalable QoS provision through buffer management
Semeria Supporting differentiated service classes: queue scheduling disciplines
US7123622B2 (en) Method and system for network processor scheduling based on service levels
US6810031B1 (en) Method and device for distributing bandwidth
US7289514B2 (en) System and method for scheduling data traffic flows for a communication device
US20070070895A1 (en) Scaleable channel scheduler system and method
KR100463697B1 (en) Method and system for network processor scheduling outputs using disconnect/reconnect flow queues
EP3029898B1 (en) Virtual output queue authorization management method and device, and computer storage medium
Homg et al. An adaptive approach to weighted fair queue with QoS enhanced on IP network
AU2002339349B2 (en) Distributed transmission of traffic flows in communication networks
Moorman et al. Multiclass priority fair queuing for hybrid wired/wireless quality of service support
JP4087279B2 (en) BAND CONTROL METHOD AND BAND CONTROL DEVICE THEREOF
CN118316883A (en) Hierarchical flow shaping device and method based on HQoS
US6904056B2 (en) Method and apparatus for improved scheduling technique
JP3601449B2 (en) Cell transmission control device
Saha et al. Multi-rate traffic shaping and end-to-end performance guarantees in ATM networks
Katevenis et al. Multi-queue management and scheduling for improved QoS in communication networks

Legal Events

Date Code Title Description
PB01 Publication