WO2022012204A1 - Queue scheduling method, apparatus, and system - Google Patents

Queue scheduling method, apparatus, and system

Info

Publication number
WO2022012204A1
Authority
WO
WIPO (PCT)
Prior art keywords
scheduling
queue
multiple queues
queues
processing device
Prior art date
Application number
PCT/CN2021/098028
Other languages
English (en)
French (fr)
Inventor
张雄为 (Zhang Xiongwei)
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to EP21843080.9A (published as EP4175253A4)
Publication of WO2022012204A1
Priority to US18/155,565 (published as US20230155954A1)


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/10: Flow control; Congestion control
    • H04L 47/215: Flow control; Congestion control using token-bucket
    • H04L 47/50: Queue scheduling
    • H04L 47/60: Queue scheduling implementing hierarchical scheduling
    • H04L 47/62: Queue scheduling characterised by scheduling criteria
    • H04L 47/6215: Individual queue per QOS, rate or priority
    • H04L 47/622: Queue service order
    • H04L 47/623: Weighted service order
    • H04L 47/625: Queue scheduling characterised by scheduling criteria for service slots or service orders
    • H04L 47/6275: Queue scheduling characterised by scheduling criteria for service slots or service orders based on priority
    • H04L 47/6295: Queue scheduling characterised by scheduling criteria using multiple queues, one for each individual QoS, connection, flow or priority

Definitions

  • the present application relates to the field of communications, and in particular, to a queue scheduling method, device and system.
  • QoS quality of service
  • Ethernet equipment is required not only to further refine and differentiate service traffic, but also to perform unified management and hierarchical scheduling of data streams of multiple users and multiple services.
  • Hierarchical quality of service (HQoS) technology has emerged to meet these requirements.
  • the HQoS technology assembles the scheduling policy into a hierarchical tree structure (hereinafter referred to as "HQoS scheduling tree"), and the HQoS scheduling tree implements scheduling of queues transmitting different data streams.
  • the HQoS scheduling tree is implemented by the traffic management (TM) hardware entity.
  • TM traffic management
  • the embodiments of the present application provide a queue scheduling method, device, and system, which realizes flexible management of queues, meets actual transmission requirements, and saves resources.
  • a queue scheduling method is provided.
  • the method can be executed by a processing device, and the processing device can be a central processing unit (CPU), a network processor (NP), or the like.
  • the method includes the following steps: the processing device generates an HQoS scheduling tree, where the HQoS scheduling tree is a tree structure used to describe nodes participating in scheduling in the communication network; the HQoS scheduling tree includes a plurality of leaf nodes, and each of the plurality of leaf nodes is used to identify a queue on a TM hardware entity.
  • the TM hardware entity includes multiple queues, and multiple leaf nodes correspond to multiple queues one-to-one.
  • the processing device may acquire the traffic characteristics of the multiple queues according to the multiple leaf nodes, and determine the scheduling parameters of at least one queue in the multiple queues according to the traffic characteristics of the multiple queues, where the traffic characteristics of a queue are those of the data flow it transmits. The processing device then sends a scheduling message to a scheduling device corresponding to the at least one queue in the TM hardware entity, where the scheduling message includes the scheduling parameters of the at least one queue, and the scheduling parameters are used to schedule the at least one queue.
  • the HQoS scheduling tree is implemented by software, which can realize the purpose of flexible management of queues without changing the TM hardware entity, meet the actual transmission requirements, and save scheduling resources.
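The method of the first aspect can be sketched as a small control loop (an illustrative sketch only; the names `read_traffic`, `compute_params`, and `send_to_tm` are assumptions, not terms from the publication):

```python
def schedule_queues(leaf_to_queue, read_traffic, compute_params, send_to_tm):
    """Sketch of the processing device's loop: acquire traffic
    characteristics via the leaf nodes, derive scheduling parameters,
    and send a scheduling message per affected queue."""
    # 1. Each leaf node maps one-to-one to a queue on the TM hardware entity.
    traffic = {qid: read_traffic(qid) for qid in leaf_to_queue.values()}
    # 2. Scheduling parameters are derived from the traffic characteristics.
    params = compute_params(traffic)
    # 3. A scheduling message carries each queue's parameters to its
    #    scheduling device on the TM hardware entity.
    for qid, p in params.items():
        send_to_tm(qid, p)
    return params
```

The callbacks stand in for the hardware-facing interfaces, which the publication does not specify.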
  • the HQoS scheduling tree further includes a root node of the multiple leaf nodes.
  • the processing device may determine the scheduling parameter of at least one of the multiple queues according to the traffic characteristics of the multiple queues and the scheduling parameter of the root node. Since the HQoS scheduling tree is implemented by software, the mapping relationship between the root node and the leaf nodes can be changed without changing the TM hardware entity, realizing flexible management of the queues.
  • the HQoS scheduling tree further includes a root node and at least one branch node corresponding to the root node, each branch node in the at least one branch node corresponds to one or more leaf nodes, and different branch nodes correspond to different leaf nodes.
  • the processing device determining the scheduling parameters of at least one of the multiple queues according to the traffic characteristics of the multiple queues may be: the processing device determines the traffic characteristics of the at least one branch node according to the traffic characteristics of the multiple queues, determines the scheduling parameters of the at least one branch node according to the traffic characteristics of the at least one branch node and the scheduling parameters of the root node, and determines the scheduling parameters of at least one of the plurality of queues according to the traffic characteristics of the plurality of queues and the scheduling parameters of the at least one branch node. Since the HQoS scheduling tree is implemented by software, the mapping relationships among the root node, branch nodes, and leaf nodes can be changed without changing the TM hardware entity, realizing flexible management of the queues.
  • the traffic characteristics include input rate and queue ID.
  • determining the scheduling parameters of at least one of the multiple queues by the processing device according to the traffic characteristics of the multiple queues may be: the processing device determines the characteristic parameters of the multiple queues according to their queue identifiers, and determines the scheduling parameters of at least one of the plurality of queues according to the input rates and characteristic parameters of the multiple queues, wherein the characteristic parameters include at least one of priority and weight.
  • the scheduling device is a token bucket
  • the scheduling parameter is a rate at which tokens are output from the token bucket.
  • the TM hardware entity is an application specific integrated circuit ASIC chip or a programmable logic controller PLC.
  • the processing apparatus and the TM hardware entity belong to the same network device, and the network device may be a router, a switch, or a base station, or the like.
  • a queue scheduling method is provided.
  • the method is applied to a scheduling device of a TM hardware entity.
  • the TM hardware entity further includes a plurality of queues.
  • the method includes the following steps: the scheduling device receives a scheduling message from a processing device, where the scheduling message includes the scheduling parameters of at least one of the multiple queues, and the TM hardware entity does not include the processing device.
  • the scheduling device schedules the at least one queue according to the scheduling parameters of the at least one queue.
  • because the scheduling device schedules the queue according to a scheduling message from a processing device that does not belong to the TM hardware entity, the TM hardware entity does not need to be replaced when the scheduling parameters of the queue in the scheduling message change, which realizes flexible management of queues, meets actual transmission needs, and saves resources.
  • the processing device and the TM hardware entity belong to the same network device.
  • a processing device includes: a generating unit for generating a hierarchical quality of service HQoS scheduling tree, where the HQoS scheduling tree is a tree structure used to describe nodes participating in scheduling in a communication network, the HQoS scheduling tree includes multiple leaf nodes, each of the multiple leaf nodes is used to identify a queue on the traffic management TM hardware entity, the TM hardware entity includes multiple queues, and the multiple leaf nodes correspond to the multiple queues one-to-one; an obtaining unit for obtaining the traffic characteristics of the multiple queues according to the multiple leaf nodes; a determining unit for determining the scheduling parameters of at least one queue in the multiple queues according to the traffic characteristics of the multiple queues, where the traffic characteristics of the multiple queues are those of the data flows they transmit; and a sending unit for sending a scheduling message to a scheduling device corresponding to the at least one queue in the TM hardware entity, where the scheduling message includes the scheduling parameters of the at least one queue, and the scheduling parameters are used to schedule the at least one queue.
  • the HQoS scheduling tree further includes a root node of the multiple leaf nodes
  • a determining unit configured to determine the scheduling parameter of at least one queue in the multiple queues according to the traffic characteristics of the multiple queues and the scheduling parameter of the root node.
  • the HQoS scheduling tree further includes a root node and at least one branch node corresponding to the root node, each branch node in the at least one branch node corresponds to one or more leaf nodes, and different branch nodes correspond to different leaf nodes;
  • the determining unit is configured to determine the traffic characteristics of the at least one branch node according to the traffic characteristics of the multiple queues, determine the scheduling parameters of the at least one branch node according to the traffic characteristics of the at least one branch node and the scheduling parameters of the root node, and determine the scheduling parameters of at least one of the plurality of queues according to the traffic characteristics of the plurality of queues and the scheduling parameters of the at least one branch node.
  • the traffic characteristics include input rate and queue ID
  • the determining unit is configured to determine the characteristic parameters of the multiple queues according to the queue identifiers of the multiple queues, and determine the scheduling parameters of at least one of the multiple queues according to the input rates and characteristic parameters of the multiple queues, where the characteristic parameters include at least one of priority and weight.
  • the scheduling device is a token bucket
  • the scheduling parameter is a rate at which tokens are output from the token bucket.
  • the TM hardware entity is an application specific integrated circuit ASIC chip or a programmable logic controller PLC.
  • the processing device and the TM hardware entity belong to the same network device.
  • a scheduling device belongs to a traffic management TM hardware entity, the TM hardware entity further includes a plurality of queues, and the scheduling device includes: a receiving unit for receiving a scheduling message from a processing device, where the scheduling message includes the scheduling parameter of at least one of the plurality of queues and the TM hardware entity does not include the processing device; and a scheduling unit configured to schedule the at least one queue according to the scheduling parameter of the at least one queue.
  • the processing device and the TM hardware entity belong to the same network device.
  • a queue scheduling system including the processing device of the third aspect and the scheduling device of the fourth aspect.
  • a computer-readable storage medium comprising instructions that, when run on a computer, cause the computer to perform the queue scheduling method of the first aspect or the queue scheduling method of the second aspect.
  • FIG. 1 is a schematic diagram of an HQoS scheduling tree provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a queue scheduling system provided by an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of a queue scheduling method provided by an embodiment of the present application.
  • FIG. 4 is another schematic diagram of an HQoS scheduling tree provided by an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of a processing apparatus 500 provided by an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a scheduling apparatus 600 provided by an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a device 700 according to an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of a device 800 according to an embodiment of the present application.
  • the HQoS scheduling tree is a tree structure used to describe nodes participating in scheduling in a communication network; the tree structure at least includes a plurality of leaf nodes and a root node to which the plurality of leaf nodes belong, and optionally may also include branch nodes.
  • a leaf node is the lowest node in the HQoS scheduling tree, and a leaf node is used to identify a queue.
  • the root node is the topmost node in the HQoS scheduling tree.
  • the branch node is the node in the middle layer in the HQoS scheduling tree, between the root node and the leaf node.
  • An HQoS scheduling tree may include one or more layers of branch nodes, and each branch node corresponds to one or more leaf nodes.
  • One queue is used to transmit one data stream, and the queues corresponding to the same branch node can transmit data streams with the same attributes, such as belonging to the same user group, the same user, or the same service.
  • the HQoS scheduling tree includes a four-layer tree structure
  • the topmost layer of the tree structure is the root node 10
  • the second layer includes branch nodes 20 and branch nodes 21
  • the third layer includes branch nodes 30, branch nodes 31 and branch node 32
  • the fourth layer includes leaf node 40, leaf node 41, leaf node 42, leaf node 43, leaf node 44, leaf node 45, leaf node 46 and leaf node 47.
  • the above-mentioned eight leaf nodes respectively identify a queue, that is, a total of 8 queues are identified, which are used for transmitting 8 data streams.
  • the branch node 20 is the upper-level branch node of the branch node 30 and the branch node 31.
  • the branch node 20 is used to identify the data stream of the user group 1
  • the branch node 30 is used to identify the data stream of the user 1 belonging to the user group 1
  • the branch node 31 is used to identify the data flow of the user 2 belonging to the user group 1.
  • the branch node 30 is the upper-level branch node of the leaf node 40, the leaf node 41 and the leaf node 42, wherein the leaf node 40 is used to identify the data flow of the service 1 of the user 1, the leaf node 41 is used to identify the data flow of the service 2 of the user 1, and the leaf node 42 is used to identify the data flow of the service 3 of the user 1.
  • the branch node 31 is an upper-level branch node of the leaf node 43 and the leaf node 44 , wherein the leaf node 43 is used to identify the data flow of the service 1 of the user 2 , and the leaf node 44 is used to identify the data flow of the service 4 of the user 2 .
  • the branch node 21 is an upper-level branch node of the branch node 32 , the branch node 21 is used to identify the data flow of the user group 2 , and the branch node 32 is used to identify the data flow of the user 3 belonging to the user group 2 .
  • the branch node 32 is the upper-level branch node of the leaf node 45, the leaf node 46 and the leaf node 47, wherein the leaf node 45 is used to identify the data flow of the service 1 of the user 3, the leaf node 46 is used to identify the data flow of the service 2 of the user 3, and the leaf node 47 is used to identify the data flow of the service 5 of the user 3.
  • the HQoS scheduling tree is implemented by TM hardware entities.
  • the TM hardware entities include queues and schedulers. Each branch node and the root node has a corresponding scheduler: the scheduler corresponding to the root node is used to schedule the schedulers of the branch nodes, and the scheduler of a branch node is used to schedule the queues corresponding to its leaf nodes.
  • the mapping relationships between schedulers at the various levels, and between schedulers and queues, are fixed. Because these mapping relationships are implemented through hardware connections, flexible management of queues cannot be achieved; changing the mapping relationship between layers requires replacing the TM hardware entity. In addition, if the demand of a branch node or the root node for queues decreases, the unused queues cause a waste of resources.
  • the scheduler corresponding to the root node 10 is used to schedule the scheduler of the branch node 20 and the scheduler of the branch node 21
  • the scheduler of the branch node 20 is used to schedule the scheduler of the branch node 30 and the scheduler of the branch node 31, and the scheduler of the branch node 21 is used to schedule the scheduler of the branch node 32
  • the scheduler of the branch node 30 is used to schedule the queues corresponding to the leaf node 40, the leaf node 41 and the leaf node 42 respectively
  • the scheduler of the branch node 31 is used to schedule the queues corresponding to the leaf node 43 and the leaf node 44 respectively
  • the scheduler of the branch node 32 is used to schedule the queues corresponding to the leaf node 45, the leaf node 46 and the leaf node 47 respectively.
  • the scheduling objects of the above-mentioned schedulers are fixed and cannot be changed flexibly; any change requires replacing the TM hardware entity. Assuming that the queue corresponding to the leaf node 40 has no data flow while the queue corresponding to the leaf node 43 carries many data flows, under the traditional technology the queue corresponding to the leaf node 40 cannot be released from the scheduler corresponding to the branch node 30 and scheduled by the scheduler corresponding to the branch node 31 instead, so the actual transmission demand cannot be met, which causes a waste of resources to a certain extent.
  • an embodiment of the present application provides a queue scheduling method.
  • the main idea of the queue scheduling method is to use software to implement the HQoS scheduling tree.
  • the TM hardware entity only includes multiple queues and a scheduling device corresponding to each queue.
  • the scheduling device is, for example, a token bucket. Since the embodiments of the present application set the mapping relationship between the various levels at the software level, the mapping relationship between the various levels can be changed, thereby realizing flexible management of queues, meeting actual transmission requirements, and reducing waste of resources.
  • FIG. 2 is a schematic diagram of a hardware scenario where the queue scheduling method may be applied, that is, the queue scheduling method may be applied to a queue scheduling system, and the system includes a network device 10 and a device 11 .
  • the network device 10 may be a router, a switch, a base station, or the like. When the network device 10 is a router or a switch, the network device 10 may be any network device of an access network, an aggregation network or a core network. When the network device 10 is an access network device, the network device 10 is, for example, a broadband remote access server (BRAS) or a digital subscriber line access multiplexer (DSLAM).
  • BRAS broadband remote access server
  • DSLAM digital subscriber line access multiplexer
  • the network device 10 includes a processing device 101 and a TM hardware entity 102 .
  • the processing device 101 may be a central processing unit (CPU), a network processor (NP), or the like.
  • CPU central processing unit
  • NP network processor
  • the TM hardware entity 102 may be an application specific integrated circuit (ASIC) chip or a programmable logic controller (PLC), etc., which is not specifically limited in the embodiment of the present application.
  • ASIC application specific integrated circuit
  • PLC programmable logic controller
  • the TM hardware entity 102 includes a plurality of queues and a scheduling device corresponding to each queue, and the scheduling device is, for example, a token bucket.
  • the TM hardware entity 102 may determine the rate at which the corresponding queue outputs the data flow according to the rate at which the token bucket outputs tokens.
  • the processing device 101 and the TM hardware entity 102 are integrated in the same network device (i.e., the network device 10). In some other application scenarios, the TM hardware entity 102 may exist in a network device independent of the processing device 101.
  • the device 11 may be a server or a terminal device, wherein the terminal device may be, for example, a mobile phone, a tablet computer, a personal computer (PC), a multimedia playback device, or the like.
  • the network device 10 and the device 11 communicate through a network, and the network may be an operator network or a local area network.
  • the network device 10 receives the data stream from the device 11 through the communication interface, and transmits the data stream to the queue.
  • the processing device 101 of the network device 10 determines the scheduling parameter of the queue according to the traffic characteristics of the queue, and sends a scheduling message to the TM hardware entity 102, where the scheduling message includes the scheduling parameter; the scheduling parameter is used by the TM hardware entity 102 to schedule the queue, so that the data flow of the corresponding queue is dequeued at a rate corresponding to the scheduling parameter and is sent to the next-hop network device through the communication interface.
  • the embodiments of the present application provide a queue scheduling method and apparatus, which realizes the purpose of flexible management of mapping relationships between various levels of an HQoS scheduling tree and saves scheduling resources.
  • FIG. 3 is a schematic flowchart of a queue scheduling method provided by an embodiment of the present application.
  • the processing apparatus generates an HQoS scheduling tree.
  • the processing device may be the processing device 101 in the system shown in FIG. 2 .
  • the processing device generates an HQoS scheduling tree in advance; the difference from the conventional technology is that the HQoS scheduling tree is implemented by software. That is to say, the root node and leaf nodes included in the HQoS scheduling tree are implemented by software code, wherein a leaf node can identify its queue using the identifier of the corresponding queue on the TM hardware entity (for example, the TM hardware entity 102 shown in FIG. 2).
  • the root node can be represented by the identity of the root node.
  • the HQoS scheduling tree may further include one or more layers of branch nodes, and each branch node may be represented by an identifier of a corresponding branch node.
  • the HQoS scheduling tree can be represented as a mapping relationship between the identifiers of each node.
  • the HQoS scheduling tree can be represented as a set of the following mapping relationships:
  • mapping relationship between the identifier of the branch node 32 and the identifier of the queue 45 , the identifier of the queue 46 , and the identifier of the queue 47 respectively.
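For illustration, the set of mapping relationships for the FIG. 1 tree could be held as a parent-to-children dictionary (a sketch only; the string identifiers and the dictionary layout are assumptions, since the publication does not prescribe a concrete data structure):

```python
# Software representation of the FIG. 1 HQoS scheduling tree as a set of
# parent-identifier -> child-identifier mappings (illustrative identifiers).
HQOS_TREE = {
    "root10":   ["branch20", "branch21"],
    "branch20": ["branch30", "branch31"],
    "branch21": ["branch32"],
    "branch30": ["queue40", "queue41", "queue42"],
    "branch31": ["queue43", "queue44"],
    "branch32": ["queue45", "queue46", "queue47"],
}

def leaves_under(node, tree=HQOS_TREE):
    """Return all queue identifiers reachable from a node; a node absent
    from the mapping is a leaf node and identifies a queue directly."""
    children = tree.get(node)
    if children is None:
        return [node]
    out = []
    for child in children:
        out.extend(leaves_under(child, tree))
    return out
```

Because this mapping lives in software, moving a queue to a different branch node is a dictionary edit rather than a hardware change, which is the flexibility the publication emphasizes.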
  • the HQoS scheduling tree can be expressed as a mapping relationship between nodes at various layers, the HQoS scheduling tree can be obtained through configuration, or obtained by communicating with the processing device through a network management system (NMS).
  • NMS network management system
  • the network management device may be a controller or a terminal device or the like.
  • the processing apparatus may continue to perform the following S102-S104. It can be understood that the step of S101 does not need to be performed every time before the execution of S102-S104.
  • the processing device may repeatedly execute S102-S104 after executing S101 once.
  • S102 The processing apparatus acquires the traffic characteristics of the multiple queues according to the multiple leaf nodes.
  • a leaf node has a mapping relationship with a queue, and optionally, a leaf node may be represented as an identifier of a corresponding queue.
  • the traffic characteristic of the queue is the traffic characteristic of the data flow transmitted by the queue, for example, including the input rate and the queue identifier of the data flow corresponding to the queue.
  • S103 The processing apparatus determines a scheduling parameter of at least one of the multiple queues according to the traffic characteristics of the multiple queues.
  • the scheduling parameter of the queue is a parameter used to schedule the queue.
  • the scheduling parameter of the queue is the rate at which tokens are output from the token bucket.
  • the rate at which the token bucket outputs tokens is a key factor for the output rate of the data stream transmitted by the queue. Therefore, by determining the rate at which tokens are output from the token bucket, the rate at which the queue outputs data streams can be determined, thereby realizing the scheduling of the queue.
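As a rough illustration of how the token-output rate bounds a queue's dequeue rate, a generic token bucket can be sketched as follows (this is the textbook mechanism, not necessarily the TM hardware entity's exact implementation):

```python
class TokenBucket:
    """Generic token bucket: tokens accumulate at `rate` (units/s) up to
    `burst`; data may leave the queue only while tokens are available, so
    the token-output rate caps the queue's sustained output rate."""

    def __init__(self, rate, burst):
        self.rate = rate
        self.burst = burst
        self.tokens = burst   # start full
        self.last = 0.0       # timestamp of the last refill

    def allow(self, nbytes, now):
        # Refill according to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False
```

Changing the scheduling parameter in the patent's terms would amount to updating `rate` from the processing device's scheduling message.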
  • the processing device determines the scheduling parameters of at least one queue in the multiple queues according to the traffic characteristics of the multiple queues. Specifically, the processing device determines the characteristic parameters of the multiple queues according to their queue identifiers, and determines the scheduling parameters of at least one of the plurality of queues according to the input rates and characteristic parameters of the multiple queues.
  • the characteristic parameter of the queue may be at least one of a priority and a weight of the queue.
  • priority: the higher the priority of a queue, the earlier the queue outputs its data stream, so the queue with the higher priority outputs its data stream first, and vice versa.
  • the queue identifier itself may represent the priority of the queue. For example, the smaller the queue identifier, the higher the priority of the queue. Taking FIG. 1 as an example, among queue 40, queue 41 and queue 42, queue 40 has the highest priority, queue 41 the second-highest, and queue 42 the lowest.
  • the queue identifier itself does not represent the priority of the queue.
  • the processing device may pre-store the corresponding relationship between the identifier of the queue and the priority of the queue, and according to the identifier of the queue and the corresponding relationship, the The priority of the queue can be obtained.
  • Table 1 shows the correspondence between the identifiers of the queues and the priorities of the queues. It can be seen from Table 1 that the priority of the queue 43 is higher than the priority of the queue 44, which means that bandwidth resources are preferentially allocated to the queue 43, and the remaining bandwidth resources are allocated to the queue 44.
  • the bandwidth resource may be the bandwidth resource corresponding to the branch node 31, the upper-level branch node of the leaf node 43 and the leaf node 44.
  • the weight represents the proportion of the total bandwidth occupied by the rate of the output data flow of the queue. The higher the weight, the higher the proportion of the total bandwidth occupied; the lower the weight, the lower the proportion of the total bandwidth occupied.
  • the processing device may pre-store the correspondence between the queue identifiers and the weights, so that the weights corresponding to the queues may be obtained according to the queue identifiers and the corresponding relation.
  • the rate of the output data flow of the queue 45 can occupy 40% of the total bandwidth, the rate of the output data flow of the queue 46 can occupy 35% of the total bandwidth, and the rate of the output data flow of the queue 47 can occupy 25% of the total bandwidth.
  • the total bandwidth here refers to the bandwidth corresponding to the branch node 32 .
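The weight-based split for queues 45–47 amounts to a proportional division of the branch node's bandwidth, which can be sketched as (`split_by_weight` is an illustrative name, not from the publication):

```python
def split_by_weight(total_bw, weights):
    """Divide a branch node's total bandwidth among its queues in
    proportion to their weights (the rule the weight example describes)."""
    total_weight = sum(weights.values())
    return {q: total_bw * w / total_weight for q, w in weights.items()}
```

With the weights implied above (0.40 / 0.35 / 0.25) and 100 units of bandwidth at branch node 32, the split reproduces the 40% / 35% / 25% shares.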
  • Tables 1 and 2 show the situation in which the characteristic parameter of the queue only includes the priority of the queue or the weight of the queue. As mentioned above, the characteristic parameter of the queue may also include both the priority and the weight of the queue.
  • Table 3 shows the correspondence between the identifiers of the queues, the priorities of the queues, and the weights of the queues. It can be seen from Table 3 that the priority of queue 40 is higher than that of queue 41 and queue 42, so the bandwidth resources can be allocated to queue 40 first; 40% of the remaining bandwidth resources are then allocated to queue 41, and 60% of the remaining bandwidth resources to queue 42.
  • the bandwidth resource therein may be the bandwidth resource corresponding to the branch node 30 .
  • the processing device may determine the scheduling parameter of at least one of the multiple queues according to the traffic characteristics of the multiple queues and the scheduling parameters of the root node.
  • the processing device may obtain the identifier of the root node according to the queue identifiers of the multiple queues and the mapping relationship, where the mapping relationship is between the queue identifiers of the multiple queues and the identifier of the root node. Then, the processing device obtains the scheduling parameter of the root node according to the identifier of the root node, and determines the scheduling parameter of at least one queue in the multiple queues according to the traffic characteristics of the multiple queues and the scheduling parameter of the root node. In addition, the processing apparatus may pre-establish a mapping relationship between the identifier of the root node and the scheduling parameters of the root node.
  • the scheduling parameters of the root node may be obtained through configuration, and may be determined during configuration according to the total bandwidth of the interface corresponding to the root node. It should be noted that the scheduling parameter of the root node can be regarded as the output rate of a "virtual token bucket" corresponding to the root node; the root node does not have an actual token bucket, because the TM hardware entity only includes token buckets corresponding to leaf nodes, that is, one leaf node corresponds to one token bucket. For ease of understanding and calculation, the scheduling parameter of the root node in this embodiment of the present application may be regarded as the output rate of the "virtual token bucket" of the root node.
  • the output rate of the "virtual token bucket" corresponding to the root node may be 100Gbps.
  • the root node corresponds to two leaf nodes, corresponding to queue 1 and queue 2 respectively.
  • the input rate of queue 1 is 70 Gbps
  • the input rate of queue 2 is 50 Gbps
  • the priority of queue 1 is higher than that of queue 2.
  • the processing device can determine that the output rate of queue 1 is 70 Gbps, and the output rate of queue 2 is 30 Gbps.
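As a minimal sketch of the priority-based allocation in this example (the greedy scheme, function name, and queue labels are our own illustrative assumptions, not the embodiment's implementation), the processing device's calculation could look like:

```python
def allocate_by_priority(total_rate, queues):
    """Allocate output rates in descending priority order.

    `queues` is a list of (name, input_rate, priority) tuples; a larger
    priority value is served first. Each queue receives the smaller of
    its input rate and the bandwidth still remaining.
    """
    remaining = total_rate
    rates = {}
    for name, input_rate, _prio in sorted(queues, key=lambda q: -q[2]):
        rates[name] = min(input_rate, remaining)
        remaining -= rates[name]
    return rates

# Root "virtual token bucket" outputs 100 Gbps; queue 1 (70 Gbps in,
# higher priority) is satisfied first, queue 2 gets the remaining 30 Gbps.
print(allocate_by_priority(100, [("queue1", 70, 1), ("queue2", 50, 0)]))
```

Here the output rate of queue 2 is capped by what is left after the higher-priority queue is served, matching the 70/30 Gbps split in the text.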
  • the output rate of the "virtual token bucket" corresponding to the root node may be 100Gbps.
  • the root node corresponds to two leaf nodes, corresponding to queue 1 and queue 2 respectively.
  • the input rate of queue 1 is 70 Gbps
  • the input rate of queue 2 is 30 Gbps
  • the weight of queue 1 is 0.6
  • the weight of queue 2 is 0.4.
  • the processing device can determine that the theoretical output rate of queue 1 is 60 Gbps and the theoretical output rate of queue 2 is 40 Gbps. But since the input rate of queue 2 is less than its theoretical output rate, queue 2 takes its input rate as the actual output rate, that is, 30 Gbps. In order not to waste bandwidth, the actual output rate of queue 1 may be greater than its theoretical output rate, e.g., 70 Gbps (100 - 30).
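A sketch of this weighted allocation with redistribution of unused bandwidth (an illustrative reading of the example, assuming a queue whose input rate falls below its weighted share keeps only its input rate and the surplus is re-split among the remaining queues by weight):

```python
def allocate_by_weight(total_rate, queues):
    """Weighted allocation where bandwidth a queue cannot use is
    redistributed to the other queues.

    `queues`: list of (name, input_rate, weight) tuples. Queues whose
    input rate is below their weighted share keep only the input rate;
    the surplus is re-split among still-unsatisfied queues by weight.
    """
    rates = {}
    active = list(queues)
    remaining = total_rate
    while active:
        total_w = sum(w for _, _, w in active)
        # Theoretical share for each still-active queue.
        shares = {n: remaining * w / total_w for n, _, w in active}
        capped = [(n, r, w) for n, r, w in active if r <= shares[n]]
        if not capped:
            # Every remaining queue can use its full share.
            rates.update({n: shares[n] for n, _, _ in active})
            break
        for n, r, _ in capped:
            rates[n] = r          # input rate below share: use input rate
            remaining -= r
        active = [q for q in active if q[0] not in rates]
    return rates

# Queue 1 ends up near 70 Gbps; queue 2 stays at its 30 Gbps input rate.
print(allocate_by_weight(100, [("queue1", 70, 0.6), ("queue2", 30, 0.4)]))
```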
  • the processing device may determine the traffic characteristics of the at least one branch node according to the traffic characteristics of the multiple queues, then determine the scheduling parameters of the at least one branch node according to the traffic characteristics of the at least one branch node and the scheduling parameters of the root node, and finally determine the scheduling parameters of at least one queue among the multiple queues according to the traffic characteristics of the multiple queues and the scheduling parameters of the at least one branch node.
  • the scheduling parameter of the branch node may be the output rate of the "virtual token bucket" corresponding to the branch node.
  • the root node corresponds to two branch nodes, namely branch node 1 and branch node 2, where branch node 1 corresponds to three leaf nodes, corresponding to queue 1, queue 2, and queue 3, and branch node 2 corresponds to one leaf node, which corresponds to queue 4.
  • the input rate of queue 1 is 20Gbps
  • the input rate of queue 2 is 40Gbps
  • the input rate of queue 3 is 30Gbps
  • the input rate of queue 4 is 20Gbps.
  • the priority of queue 1 is high priority
  • the priority of queue 2 and queue 3 is low priority.
  • queue 2 and queue 3 have weights of 70% and 30%, respectively.
  • the processing device obtains the input rate of branch node 1, 90 Gbps, as the sum of the input rates of queue 1, queue 2, and queue 3, and determines the input rate of branch node 2 to be 20 Gbps according to the input rate of queue 4.
  • the weight of branch node 1 is 70%, and the weight of branch node 2 is 30%.
  • the processing device determines that the bandwidth allocated to branch node 1 is 70G and the bandwidth allocated to branch node 2 is 30G according to the weight of branch node 1, the weight of branch node 2, and the total interface bandwidth corresponding to the root node. Since the priority of queue 1 is a high priority, the input rate of queue 1 can be determined as the output rate of queue 1, which is 20 Gbps.
  • the total bandwidth occupied by queue 2 and queue 3 is 50G.
  • the output rate of queue 2 is 30 Gbps, and the output rate of queue 3 is 15 Gbps.
  • the output rate of queue 4 is equal to the input rate, which is 20 Gbps.
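The first steps of this example, summing queue ingress rates into branch-node ingress rates through the software mapping relationship and then splitting the root's bandwidth among branch nodes by weight, can be sketched as follows (the dictionary-based mapping and all identifiers are illustrative assumptions):

```python
def branch_ingress_rates(queue_rates, queue_to_branch):
    """Aggregate per-queue ingress rates up to their branch nodes via the
    queue-identifier-to-branch-identifier mapping relationship."""
    totals = {}
    for queue, rate in queue_rates.items():
        branch = queue_to_branch[queue]
        totals[branch] = totals.get(branch, 0) + rate
    return totals

def split_by_weight(total_bandwidth, weights):
    """Divide the root's total interface bandwidth among branch nodes
    according to their weights."""
    return {node: total_bandwidth * w for node, w in weights.items()}

# Queues 1-3 map to branch node 1, queue 4 to branch node 2, as in the text.
mapping = {"q1": "branch1", "q2": "branch1", "q3": "branch1", "q4": "branch2"}
ingress = branch_ingress_rates({"q1": 20, "q2": 40, "q3": 30, "q4": 20}, mapping)
# ingress rates: branch node 1 = 90 Gbps, branch node 2 = 20 Gbps
bandwidth = split_by_weight(100, {"branch1": 0.7, "branch2": 0.3})
```

With the 70%/30% weights, branch node 1 is allocated 70 G and branch node 2 is allocated 30 G of the 100 G total, as in the example above.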
  • the processing device may determine the corresponding branch node identifiers according to the queue identifiers of the multiple queues and the mapping relationship, and then determine the traffic characteristics of the corresponding branch node according to the traffic characteristics of each queue, so as to obtain the traffic characteristics corresponding to the identifier of the branch node.
  • determining the traffic characteristic of the corresponding branch node according to the traffic characteristics of the multiple queues may be to obtain the ingress rate of the branch node according to the sum of the ingress rates of the multiple queues.
  • after acquiring the respective ingress rates of queue 40, queue 41, and queue 42, the processing device knows, according to the mapping relationship between the identifiers of the three queues and the identifier of branch node 30, that the three queues belong to the same branch node, that is, branch node 30; it then obtains the ingress rate corresponding to the identifier of branch node 30, that is, the ingress rate of branch node 30, as the sum of the ingress rates of queue 40, queue 41, and queue 42.
  • when the processing device determines the scheduling parameter of the at least one branch node according to the traffic characteristic of the at least one branch node and the scheduling parameter of the root node, it may obtain the identifier of the root node according to the mapping relationship between the identifier of the at least one branch node and the identifier of the root node, obtain the scheduling parameter of the root node according to the identifier of the root node, and then determine the scheduling parameter of the at least one branch node according to the traffic characteristics corresponding to the identifier of the at least one branch node and the scheduling parameter of the root node.
  • the processing device determines, according to the identifiers of branch node 30 and branch node 31 and their mapping relationship with branch node 20, the ingress rate corresponding to the identifier of branch node 20, that is, the ingress rate of branch node 20. After acquiring the ingress rate of branch node 32, the processing device determines the ingress rate corresponding to the identifier of branch node 21, that is, the ingress rate of branch node 21, according to the mapping relationship between the identifier of branch node 32 and the identifier of branch node 21.
  • the ingress rate of branch node 21 is the same as the ingress rate of branch node 32.
  • after obtaining the ingress rate corresponding to the identifier of branch node 20 and the ingress rate corresponding to the identifier of branch node 21, the processing device obtains the identifier of root node 10 according to the mapping relationship between the identifiers of branch node 20 and branch node 21 and the identifier of root node 10, and uses the identifier of root node 10 to obtain the scheduling parameters of the root node.
  • the processing device obtains the scheduling parameter of branch node 20 and the scheduling parameter of branch node 21 according to the ingress rate and characteristic parameter corresponding to the identifier of branch node 20, the ingress rate and characteristic parameter corresponding to the identifier of branch node 21, and the scheduling parameter of the root node.
  • the processing device obtains the scheduling parameters of branch node 30 and branch node 31 according to the ingress rate and characteristic parameter corresponding to the identifier of branch node 30, the ingress rate and characteristic parameter corresponding to the identifier of branch node 31, and the scheduling parameter of branch node 20.
  • the processing device obtains the scheduling parameter of branch node 32 according to the ingress rate and characteristic parameter corresponding to the identifier of branch node 32 and the scheduling parameter of branch node 21.
  • the processing device obtains the scheduling parameters of queue 40, queue 41, and queue 42, respectively.
  • the process for the rest of the queues is the same, and will not be repeated here.
  • the processing device sends a scheduling message to the scheduling device corresponding to the at least one queue in the TM hardware entity, where the scheduling message includes the scheduling parameter, and the scheduling parameter is used to schedule the queue corresponding to the leaf node.
  • the processing device may deliver a scheduling message including the scheduling parameters to the scheduling device in the TM hardware entity, so that the scheduling device schedules the corresponding queues according to the scheduling parameters.
  • the processing device may determine the scheduling parameters of each of the multiple queues according to S103, may determine the scheduling parameters of one or more queues that are transmitting data streams, or may determine the scheduling parameters of one or more queues whose scheduling parameters have changed. Compared with the first method, the latter two methods reduce the number of scheduling messages issued by the processing device, saving its processing resources.
  • the scheduling apparatus receives the scheduling message, and schedules the queue corresponding to the leaf node according to the scheduling parameter.
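Because the scheduling device may be a token bucket whose scheduling parameter is the rate at which it outputs tokens, the effect of receiving a scheduling message can be sketched with a minimal token-bucket model (a generic illustration, not the TM hardware entity's actual implementation; names and the burst parameter are assumptions):

```python
class TokenBucket:
    """Minimal token-bucket scheduler for one queue.

    The scheduling parameter delivered by the processing device is the
    token output rate; `update_rate` models applying a scheduling message.
    """

    def __init__(self, rate, burst):
        self.rate = rate      # tokens (e.g. bytes) added per unit time
        self.burst = burst    # bucket depth
        self.tokens = burst

    def update_rate(self, rate):
        # Apply the scheduling parameter carried in a scheduling message.
        self.rate = rate

    def refill(self, elapsed):
        self.tokens = min(self.burst, self.tokens + self.rate * elapsed)

    def try_send(self, size):
        # A packet may leave the queue only if enough tokens are available.
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False
```

When a new scheduling message arrives, only `update_rate` is called; the bucket itself stays in place, which mirrors why the TM hardware entity need not be replaced.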
  • the HQoS scheduling tree is implemented by software: the mapping relationships between the identifiers of the nodes are generated, the traffic characteristics of the multiple queues are obtained, and the scheduling parameters of the queues are obtained according to the mapping relationships and the traffic characteristics of the multiple queues.
  • when the mapping relationship needs to be changed, there is no need to replace the TM hardware entity; the processing device only needs to obtain a new mapping relationship, for example through configuration or delivery by the controller, to achieve flexible management of the queues, meet actual transmission requirements, and save scheduling resources.
  • HQoS scheduling tree can be modified to the HQoS scheduling tree shown in FIG.
  • the processing device can obtain the traffic characteristics of the data flow of service 6 belonging to user 1 according to the identifier of queue 40, obtain the scheduling parameters for the scheduling device corresponding to queue 40 according to the traffic characteristics of queue 40, and deliver the scheduling parameters to the scheduling device to schedule queue 40, so as to realize scheduling of the queues corresponding to the new HQoS scheduling tree. The whole process does not require replacing the TM hardware entity to meet new traffic transmission requirements, which improves the flexibility of queue management and saves scheduling resources.
  • an embodiment of the present application further provides a processing apparatus 500 , and the processing apparatus 500 can implement the functions of the processing apparatus in the embodiment shown in FIG. 3 .
  • the apparatus 500 includes: a generating unit 501 , an obtaining unit 502 , a determining unit 503 and a sending unit 504 .
  • the generating unit 501 is configured to execute S101 in the embodiment shown in FIG. 3
  • the acquiring unit 502 is configured to execute S102 in the embodiment shown in FIG. 3
  • the determining unit 503 is configured to execute S103 in the embodiment shown in FIG. 3
  • the sending unit 504 is configured to execute S104 in the embodiment shown in FIG. 3 .
  • the generating unit 501 is configured to generate a hierarchical quality of service HQoS scheduling tree, where the HQoS scheduling tree is a tree structure used to describe nodes participating in scheduling in a communication network, and the HQoS scheduling tree includes a plurality of leaf nodes, Each leaf node in the multiple leaf nodes is used to identify a queue on the traffic management TM hardware entity, the TM hardware entity includes multiple queues, and the multiple leaf nodes are in one-to-one correspondence with the multiple queues;
  • an obtaining unit 502 configured to obtain the traffic characteristics of the multiple queues according to the multiple leaf nodes
  • a determining unit 503, configured to determine scheduling parameters of at least one of the multiple queues according to the traffic characteristics of the multiple queues, where the traffic characteristics of the multiple queues are the traffic characteristics of the data flows transmitted by the multiple queues ;
  • the sending unit 504 is configured to send a scheduling message to the scheduling device corresponding to the at least one queue in the TM hardware entity, where the scheduling message includes the scheduling parameter, and the scheduling parameter is used to schedule the corresponding leaf node. queue.
  • an embodiment of the present application further provides a scheduling apparatus 600 , and the scheduling apparatus 600 can implement the functions of the scheduling apparatus in the embodiment shown in FIG. 3 .
  • the scheduling apparatus 600 includes a receiving unit 601 and a scheduling unit 602 .
  • the receiving unit 601 and the scheduling unit 602 are configured to perform S105 in the embodiment described in FIG. 3 .
  • the receiving unit 601 is configured to receive a scheduling message from a processing device, where the scheduling message includes scheduling parameters of at least one of the multiple queues, and the TM hardware entity does not include the processing device;
  • a scheduling unit 602 configured to schedule the at least one queue according to scheduling parameters of the at least one queue.
  • FIG. 7 is a schematic structural diagram of a device 700 provided by an embodiment of the present application.
  • the processing apparatus 500 in FIG. 5 and the scheduling apparatus 600 in FIG. 6 may be implemented by the device shown in FIG. 7 .
  • the device 700 includes at least one processor 701 , a communication bus 702 and at least one network interface 704 , and optionally, the device 700 may further include a memory 703 .
  • the processor 701 may be a general-purpose central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits (integrated circuit, IC) for controlling the execution of the programs of the present application.
  • the processor may be used to implement the queue scheduling method provided in the embodiments of this application.
  • the device can be used to generate an HQoS scheduling tree and, after generating the HQoS scheduling tree, obtain the traffic characteristics of the multiple queues according to the multiple leaf nodes, determine the scheduling parameters of at least one of the multiple queues according to the traffic characteristics of the multiple queues, and send a scheduling message to the scheduling device corresponding to the at least one queue in the TM hardware entity. For the specific function implementation, reference may be made to the processing part of the corresponding processing device in the method embodiment.
  • the scheduling apparatus in FIG. 3 is implemented by the apparatus shown in FIG.
  • the device may be configured to receive a scheduling message from the processing device, where the scheduling message includes the scheduling parameters of at least one of the multiple queues, and to schedule the at least one queue according to the scheduling parameters of the at least one queue.
  • Communication bus 702 is used to transfer information between processor 701 , network interface 704 and memory 703 .
  • the memory 703 may be a read-only memory (ROM) or another type of static storage device that can store static information and instructions, or a random access memory (RAM) or another type of dynamic storage device that can store information and instructions; it may also be a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, without limitation.
  • the memory 703 may exist independently and be connected to the processor 701 through the communication bus 702 .
  • the memory 703 may also be integrated with the processor 701 .
  • the memory 703 is used for storing program codes or instructions for executing the solutions of the present application, and the execution is controlled by the processor 701 .
  • the processor 701 is used to execute program codes or instructions stored in the memory 703 .
  • One or more software modules may be included in the program code.
  • the processor 701 may also store program codes or instructions for executing the solutions of the present application, in which case the processor 701 does not need to read the program codes or instructions from the memory 703 .
  • the network interface 704 may be a device such as a transceiver for communicating with other devices or a communication network, the communication network may be Ethernet, a radio access network (RAN), or a wireless local area network (wireless local area network, WLAN) or the like. In this embodiment of the present application, the network interface 704 may be configured to receive packets sent by other nodes in the segment routing network, and may also send packets to other nodes in the segment routing network.
  • the network interface 704 may be an Ethernet interface, a fast Ethernet (FE) interface, a gigabit Ethernet (GE) interface, or the like.
  • the device 700 may include multiple processors, such as the processor 701 and the processor 405 shown in FIG. 7 .
  • processors can be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor.
  • a processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (eg, computer program instructions).
  • FIG. 8 is a schematic structural diagram of a device 800 provided by an embodiment of the present application.
  • the processing device and the scheduling device in FIG. 3 can be implemented by the device shown in FIG. 8 .
  • the device 800 includes a main control board and one or more interface boards.
  • the main control board communicates with the interface board.
  • the main control board is also called the main processing unit (MPU) or the route processor card (route processor card).
  • the main control board includes a CPU and a memory.
  • the main control board is responsible for the control and management of each component in the device 800, including route calculation, device management, and maintenance functions.
  • Interface boards, also known as line processing units (LPUs) or line cards, are used to receive and send packets.
  • the communication between the main control board and the interface board or between the interface board and the interface board is through a bus.
  • the interface boards communicate through a switch fabric board.
  • the device 800 also includes a switch fabric board.
  • the switch fabric board is communicatively connected to the main control board and the interface board.
  • the switch fabric board is used for forwarding data between the interface boards; the switch fabric board may also be called a switch fabric unit (SFU).
  • the interface board includes a CPU, a memory, a forwarding engine, and an interface card (IC), wherein the interface card may include one or more network interfaces.
  • the network interface can be an Ethernet interface, an FE interface, or a GE interface.
  • the CPU is connected in communication with the memory, the forwarding engine and the interface card, respectively.
  • the memory is used to store the forwarding table.
  • the forwarding engine is used to forward received packets based on the forwarding table stored in the memory. If the destination address of a received packet is the IP address of the device 800, the packet is sent to the CPU of the main control board or the interface board for processing; if the destination address of the received packet is not the IP address of the device 800, the forwarding table is looked up according to the destination address, and if the next hop and outbound interface corresponding to the destination address are found in the forwarding table, the packet is forwarded to the outbound interface corresponding to the destination address.
  • the forwarding engine may be a network processor (NP).
  • the interface card, also known as the daughter card, can be installed on the interface board and is responsible for converting optical/electrical signals into data frames, checking the validity of the data frames, and forwarding them to the forwarding engine or the interface board CPU for processing.
  • the CPU can also perform the function of a forwarding engine, such as implementing soft forwarding based on a general-purpose CPU, so that a forwarding engine is not required in the interface board.
  • the forwarding engine may be implemented by an ASIC or a field programmable gate array (FPGA).
  • the memory that stores the forwarding table may also be integrated into the forwarding engine as part of the forwarding engine.
  • An embodiment of the present application further provides a chip system, including: a processor, where the processor is coupled with a memory, the memory is used to store a program or an instruction, and when the program or instruction is executed by the processor, the The chip system implements the method of the processing apparatus or the scheduling apparatus in the above-mentioned embodiment shown in FIG. 3 .
  • the number of processors in the chip system may be one or more.
  • the processor can be implemented by hardware or by software.
  • the processor may be a logic circuit, an integrated circuit, or the like.
  • the processor may be a general-purpose processor implemented by reading software codes stored in memory.
  • there may also be one or more memories in the chip system.
  • the memory may be integrated with the processor, or may be provided separately from the processor, which is not limited in this application.
  • the memory may be a non-transitory memory, such as a read-only memory (ROM), which can be integrated with the processor on the same chip or provided on different chips.
  • the setting method of the processor is not particularly limited.
  • the chip system may be an FPGA, an ASIC, a system on chip (system on chip, SoC), a CPU, an NP, or a digital signal processing circuit (digital signal processor, DSP), can also be a microcontroller (micro controller unit, MCU), can also be a programmable logic device (programmable logic device, PLD) or other integrated chips.
  • each step in the above method embodiments may be implemented by a hardware integrated logic circuit in a processor or an instruction in the form of software.
  • the method steps disclosed in conjunction with the embodiments of the present application may be directly embodied as being executed by a hardware processor, or executed by a combination of hardware and software modules in the processor.
  • an embodiment of the present application further provides a queue scheduling system, including the processing apparatus 500 in the embodiment shown in FIG. 5 and the scheduling apparatus 600 in the embodiment shown in FIG. 6 .
  • Embodiments of the present application also provide a computer-readable storage medium, including instructions, which, when executed on a computer, cause the computer to execute the methods in the foregoing embodiments.
  • "At least one item (piece)" refers to one or more, and "multiple" refers to two or more.
  • "At least one of the following items (pieces)" or similar expressions refer to any combination of these items, including any combination of a single item (piece) or plural items (pieces).
  • for example, at least one item (piece) of a, b, or c can represent: a, b, c, a and b, a and c, b and c, or a, b and c, where a, b, and c can be singular or plural.
  • "A and/or B" is considered to include A alone, B alone, and both A and B.
  • the disclosed system, apparatus and method may be implemented in other manners.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical module division.
  • in actual implementation, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units; that is, they may be located in one place, or may be distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each module unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit may be implemented in the form of hardware, or may be implemented in the form of software module units.
  • the integrated unit if implemented in the form of a software module unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium.
  • the technical solutions of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or another medium that can store program code.
  • the functions described in the present invention may be implemented in hardware, software, firmware, or any combination thereof.
  • the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium.
  • Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another.
  • a storage medium can be any available medium that can be accessed by a general purpose or special purpose computer.


Abstract

This application discloses a queue scheduling method, apparatus, and system that realize flexible management of queues, meet actual transmission requirements, and save resources. The queue scheduling method includes the following steps: a processing device generates an HQoS scheduling tree, where the HQoS scheduling tree includes multiple leaf nodes, each of the multiple leaf nodes is used to identify a queue on a traffic management (TM) hardware entity, the TM hardware entity includes multiple queues, and the multiple leaf nodes are in one-to-one correspondence with the multiple queues; the processing device obtains the traffic characteristics of the multiple queues according to the multiple leaf nodes; the processing device determines scheduling parameters of at least one of the multiple queues according to the traffic characteristics of the multiple queues, where the traffic characteristics of the multiple queues are the traffic characteristics of the data flows transmitted by the multiple queues; and the processing device sends a scheduling message to a scheduling device in the TM hardware entity corresponding to the at least one queue, where the scheduling message includes the scheduling parameters of the at least one queue, and the scheduling parameters are used to schedule the at least one queue.

Description

Queue scheduling method, apparatus, and system
This application claims priority to Chinese Patent Application No. 202010685543.7, filed with the China National Intellectual Property Administration on July 16, 2020 and entitled "Queue Scheduling Method, Apparatus, and System", which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the communications field, and in particular, to a queue scheduling method, apparatus, and system.
Background
With the rapid development of computer networks, voice, images, and important data that are sensitive to bandwidth, delay, and jitter are increasingly transmitted over networks. To provide different commitments and guarantees for data transmission performance, quality of service (QoS) technologies are widely used to guarantee network transmission quality. As the number of users grows and the variety of services increases, Ethernet devices are required not only to distinguish service traffic at a finer granularity, but also to perform unified management and hierarchical scheduling of data flows at levels such as multiple users and multiple services; hence, hierarchical quality of service (HQoS) technology emerged. HQoS technology assembles scheduling policies into a hierarchical tree structure (hereinafter referred to as an "HQoS scheduling tree"), through which the queues transmitting different data flows are scheduled.
In conventional technology, the HQoS scheduling tree is implemented by a traffic management (TM) hardware entity. In this approach, the mapping relationships between the levels are fixed, so flexible management cannot be achieved, actual transmission requirements cannot be met, and resources are wasted.
Summary
Embodiments of this application provide a queue scheduling method, apparatus, and system to realize flexible management of queues, meet actual transmission requirements, and save resources.
According to a first aspect, a queue scheduling method is provided. The method may be executed by a processing device, and the processing device may be a central processing unit (CPU), a network processor (NP), or the like. The method includes the following steps: the processing device generates an HQoS scheduling tree, where the HQoS scheduling tree is a tree structure used to describe nodes participating in scheduling in a communication network, the HQoS scheduling tree includes multiple leaf nodes, and each of the multiple leaf nodes is used to identify a queue on a TM hardware entity. The TM hardware entity includes multiple queues, and the multiple leaf nodes are in one-to-one correspondence with the multiple queues. After generating the HQoS scheduling tree, the processing device may obtain the traffic characteristics of the multiple queues according to the multiple leaf nodes, and determine scheduling parameters of at least one of the multiple queues according to the traffic characteristics of the multiple queues, where the traffic characteristics of the multiple queues are the traffic characteristics of the data flows transmitted by the multiple queues. Furthermore, the processing device sends a scheduling message to a scheduling device in the TM hardware entity corresponding to the at least one queue, where the scheduling message includes the scheduling parameters of the at least one queue, and the scheduling parameters are used to schedule the at least one queue. Unlike conventional technology, this HQoS scheduling tree is implemented by software, so flexible management of queues can be achieved without replacing the TM hardware entity, meeting actual transmission requirements and saving scheduling resources.
Optionally, the HQoS scheduling tree further includes a root node of the multiple leaf nodes. Correspondingly, the processing device determining the scheduling parameters of at least one of the multiple queues according to the traffic characteristics of the multiple queues may be: the processing device determines the scheduling parameters of at least one of the multiple queues according to the traffic characteristics of the multiple queues and the scheduling parameters of the root node. Because the HQoS scheduling tree is implemented by software, the mapping relationship between the root node and the leaf nodes can be changed without replacing the TM hardware entity, realizing flexible management of the queues.
Optionally, the HQoS scheduling tree further includes one root node and at least one branch node corresponding to the root node, where each of the at least one branch node corresponds to one or more leaf nodes, and different branch nodes correspond to different leaf nodes. Correspondingly, the processing device determining the scheduling parameters of at least one of the multiple queues according to the traffic characteristics of the multiple queues may be: the processing device determines the traffic characteristics of the at least one branch node according to the traffic characteristics of the multiple queues, determines the scheduling parameters of the at least one branch node according to the traffic characteristics of the at least one branch node and the scheduling parameters of the root node, and determines the scheduling parameters of at least one of the multiple queues according to the traffic characteristics of the multiple queues and the scheduling parameters of the at least one branch node. Because the HQoS scheduling tree is implemented by software, the mapping relationships among the root node, the branch nodes, and the leaf nodes can be changed without replacing the TM hardware entity, realizing flexible management of the queues.
Optionally, the traffic characteristics include an input rate and a queue identifier. Correspondingly, the processing device determining the scheduling parameters of at least one of the multiple queues according to the traffic characteristics of the multiple queues may be: the processing device determines characteristic parameters of the multiple queues according to the queue identifiers of the multiple queues, and determines the scheduling parameters of at least one of the multiple queues according to the input rates and the characteristic parameters of the multiple queues, where the characteristic parameters include at least one of a priority and a weight.
Optionally, the scheduling device is a token bucket, and the scheduling parameter is the rate at which the token bucket outputs tokens.
Optionally, the TM hardware entity is an application-specific integrated circuit (ASIC) chip or a programmable logic controller (PLC).
Optionally, the processing device and the TM hardware entity belong to the same network device, and the network device may be a router, a switch, a base station, or the like.
According to a second aspect, a queue scheduling method is provided. The method is applied to a scheduling device of a TM hardware entity, where the TM hardware entity further includes multiple queues. The method includes the following steps: the scheduling device receives a scheduling message from a processing device, where the scheduling message includes scheduling parameters of at least one of the multiple queues, and the TM hardware entity does not include the processing device; the scheduling device schedules the at least one queue according to the scheduling parameters of the at least one queue. Because the scheduling device schedules the queues according to a scheduling message from a processing device that does not belong to the TM hardware entity, when the scheduling parameters of the queues in the scheduling message change, the TM hardware entity does not need to be replaced, thereby achieving flexible management of the queues, meeting actual transmission requirements, and saving resources.
Optionally, the processing device and the TM hardware entity belong to the same network device.
According to a third aspect, a processing device is provided. The device includes: a generating unit, configured to generate a hierarchical quality of service (HQoS) scheduling tree, where the HQoS scheduling tree is a tree structure used to describe nodes participating in scheduling in a communication network, the HQoS scheduling tree includes multiple leaf nodes, each of the multiple leaf nodes is used to identify a queue on a traffic management (TM) hardware entity, the TM hardware entity includes multiple queues, and the multiple leaf nodes are in one-to-one correspondence with the multiple queues; an obtaining unit, configured to obtain the traffic characteristics of the multiple queues according to the multiple leaf nodes; a determining unit, configured to determine scheduling parameters of at least one of the multiple queues according to the traffic characteristics of the multiple queues, where the traffic characteristics of the multiple queues are the traffic characteristics of the data flows transmitted by the multiple queues; and a sending unit, configured to send a scheduling message to a scheduling device in the TM hardware entity corresponding to the at least one queue, where the scheduling message includes the scheduling parameters of the at least one queue, and the scheduling parameters are used to schedule the at least one queue.
Optionally, the HQoS scheduling tree further includes a root node of the multiple leaf nodes;
the determining unit is configured to determine the scheduling parameters of at least one of the multiple queues according to the traffic characteristics of the multiple queues and the scheduling parameters of the root node.
Optionally, the HQoS scheduling tree further includes one root node and at least one branch node corresponding to the root node, where each of the at least one branch node corresponds to one or more leaf nodes, and different branch nodes correspond to different leaf nodes;
the determining unit is configured to determine the traffic characteristics of the at least one branch node according to the traffic characteristics of the multiple queues, determine the scheduling parameters of the at least one branch node according to the traffic characteristics of the at least one branch node and the scheduling parameters of the root node, and determine the scheduling parameters of at least one of the multiple queues according to the traffic characteristics of the multiple queues and the scheduling parameters of the at least one branch node.
Optionally, the traffic characteristics include an input rate and a queue identifier;
the determining unit is configured to determine characteristic parameters of the multiple queues according to the queue identifiers of the multiple queues, and determine the scheduling parameters of at least one of the multiple queues according to the input rates and the characteristic parameters of the multiple queues, where the characteristic parameters include at least one of a priority and a weight.
Optionally, the scheduling device is a token bucket, and the scheduling parameter is the rate at which the token bucket outputs tokens.
Optionally, the TM hardware entity is an application-specific integrated circuit (ASIC) chip or a programmable logic controller (PLC).
Optionally, the processing device and the TM hardware entity belong to the same network device.
According to a fourth aspect, a scheduling device is provided. The device belongs to a traffic management (TM) hardware entity, and the TM hardware entity further includes multiple queues. The scheduling device includes: a receiving unit, configured to receive a scheduling message from a processing device, where the scheduling message includes scheduling parameters of at least one of the multiple queues, and the TM hardware entity does not include the processing device; and a scheduling unit, configured to schedule the at least one queue according to the scheduling parameters of the at least one queue.
Optionally, the processing device and the TM hardware entity belong to the same network device.
According to a fifth aspect, a queue scheduling system is provided, including the processing device of the third aspect and the scheduling device of the fourth aspect.
According to a sixth aspect, a computer-readable storage medium is provided, including instructions that, when run on a computer, cause the computer to execute the queue scheduling method of the first aspect or the queue scheduling method of the second aspect.
附图说明
图1为本申请实施例提供的HQoS调度树的示意图;
图2为本申请实施例提供的队列调度系统的示意图;
图3为本申请实施例提供的队列调度方法的流程示意图;
图4为本申请实施例提供的HQoS调度树的另一个示意图;
图5为本申请实施例提供的处理装置500的结构示意图;
图6为本申请实施例提供的调度装置600的结构示意图;
图7为本申请实施例提供的一种设备700的结构示意图;
图8为本申请实施例提供的一种设备800的结构示意图。
具体实施方式
在本申请实施例中,HQoS调度树是用于描述通信网络中参与调度的节点的树形结构,该树形结构至少包括多个叶子(leaf)节点和该多个叶子节点所属的根(root)节点,可选的,还可以包括分支节点。
叶子节点是HQoS调度树中最底层的节点,一个叶子节点用于标识一个队列(queue)。根节点是HQoS调度树中最顶层的节点。分支节点是HQoS调度树中中间层的节点,介于根节点和叶子节点之间。一个HQoS调度树可以包括一层或多层分支节点,每个分支节点对应一个或多个叶子节点。
其中,一个队列用于传输一条数据流,同一个分支节点对应的队列可以传输具有相同属性的数据流,例如属于同一个用户组、同一个用户或同一个业务等。
例如,参见图1,HQoS调度树包括四层树形结构,该树形结构的最顶层为根节点10,第二层包括分支节点20和分支节点21,第三层包括分支节点30、分支节点31和分支节点32,第四层包括叶子节点40、叶子节点41、叶子节点42、叶子节点43、叶子节点44、叶子节点45、叶子节点46和叶子节点47。其中,上述八个叶子节点分别标识一个队列,即一共标识8个队列,用于传输8条数据流。
分支节点20为分支节点30和分支节点31的上层分支节点,分支节点20用于标识用户组1的数据流,分支节点30用于标识属于用户组1的用户1的数据流,分支节点31用于标识属于用户组1的用户2的数据流。
分支节点30为叶子节点40、叶子节点41和叶子节点42的上层分支节点,其中,叶子节点40用于标识用户1的业务1的数据流,叶子节点41用于标识用户1的业务2的数据流,叶子节点42用于标识用户1的业务3的数据流。
分支节点31为叶子节点43和叶子节点44的上层分支节点,其中,叶子节点43用于标识用户2的业务1的数据流,叶子节点44用于标识用户2的业务4的数据流。
分支节点21为分支节点32的上层分支节点,分支节点21用于标识用户组2的数据流,分支节点32用于标识属于用户组2的用户3的数据流。分支节点32为叶子节点45、叶子节点46和叶子节点47的上层分支节点,其中,叶子节点45用于标识用户3的业务1的数据流,叶子节点46用于标识用户3的业务2的数据流,叶子节点47用于标识用户3的业务5的数据流。
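为便于理解,图1所示的四层树形结构可以用如下Python草图表示。其中的节点命名(如"root10"、"leaf40")均为示意性假设,并非本申请限定的标识格式:

```python
# 以嵌套字典示意图1中的四层HQoS调度树:
# 根节点10 -> 分支节点20/21 -> 分支节点30/31/32 -> 叶子节点40~47
hqos_tree = {
    "root10": {
        "branch20": {  # 用户组1
            "branch30": ["leaf40", "leaf41", "leaf42"],  # 用户1的业务1/2/3
            "branch31": ["leaf43", "leaf44"],            # 用户2的业务1/4
        },
        "branch21": {  # 用户组2
            "branch32": ["leaf45", "leaf46", "leaf47"],  # 用户3的业务1/2/5
        },
    }
}

def count_leaves(node):
    """递归统计叶子节点(即队列)的数目。"""
    if isinstance(node, list):
        return len(node)
    return sum(count_leaves(child) for child in node.values())

print(count_leaves(hqos_tree))  # 输出8, 对应图1中用于传输8条数据流的8个队列
```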
传统的HQoS技术,HQoS调度树采用TM硬件实体实现,TM硬件实体包括队列和调度器,每个分支节点和根节点分别具有对应的调度器,根节点对应的调度器用于调度分支节点的调度器,分支节点的调度器用于调度叶子节点对应的队列。各层级的调度器之间的映射关系以及调度器和队列之间的映射关系是固定的,因为该映射关系是通过硬件连接实现的,所以无法实现对队列的灵活管理。如果需要改变各层级之间的映射关系,则需要更换TM硬件实体才可以实现。此外,如果某个分支节点或根节点对队列的数目的需求减少,那么没有被利用的队列就会造成资源的浪费。
例如,根节点10对应的调度器用于调度分支节点20的调度器和分支节点21的调度器,分支节点20的调度器用于调度分支节点30的调度器和分支节点31的调度器,分支节点21的调度器用于调度分支节点32的调度器,分支节点30的调度器用于调度叶子节点40、叶子节点41和叶子节点42分别对应的队列,分支节点31的调度器用于调度叶子节点43和叶子节点44分别对应的队列,分支节点32的调度器用于调度叶子节点45、叶子节点46和叶子节点47分别对应的队列。也就是说,上述调度器的调度对象是固定的,不可以灵活改变,如果改变,则需要更换TM硬件实体。假设叶子节点40对应的队列没有数据流,而叶子节点43对应的队列数据流较多,依照传统技术也无法将叶子节点40对应的队列从分支节点30对应的调度器释放出来,而被分支节点31对应的调度器调度使用,所以不能满足实际的传输需求,在一定程度上造成资源的浪费。
为了解决该技术问题,本申请实施例提供了一种队列调度方法,该队列调度方法的主要构思是采用软件实现HQoS调度树,TM硬件实体仅包括多个队列和每个队列对应的一个调度装置,该调度装置例如为令牌桶。由于本申请实施例在软件层面设置各层级之间的映射关系,各层级之间的映射关系可以进行改变,从而实现对队列的灵活管理,满足实际的传输需求,减少资源的浪费。
图2为队列调度方法可能应用的一个硬件场景示意图,即该队列调度方法可以应用于队列调度系统,该系统包括网络设备10和设备11。
其中,该网络设备10可以为路由器、交换机或基站等。当网络设备10为路由器或交换机时,网络设备10可以是接入(access)网、汇聚(aggregation)网或者核心(core)网的任意一个网络设备。当网络设备10为接入网的设备时,网络设备10例如为宽带接入服务器(broadband remote access server,BRAS)或数字用户线路接入复用器(digital subscriber line access multiplexer,DSLAM)等。
在图2中,网络设备10包括处理装置101和TM硬件实体102。
其中,处理装置101可以是中央处理器(central processing unit,CPU)、网络处理器(network processor,NP)等。
TM硬件实体102可以是专用集成电路(application specific integrated circuit,ASIC)芯片或可编程逻辑控制器(programmable logic controller,PLC)等,本申请实施例不做具体限定。
TM硬件实体102包括多个队列和每个队列对应的调度装置,该调度装置例如为令牌桶(token bucket)。TM硬件实体102可以根据令牌桶输出令牌的速率来确定对应队列输出数据流的速率。
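作为参考,下面用一段Python草图说明令牌桶的基本工作方式:桶以固定速率补充令牌,队列出队的数据量需消耗等量令牌,因此令牌的补充速率决定了队列输出数据流的速率。其中的类名、方法名与数值均为便于理解的假设,并非TM硬件实体102的真实接口:

```python
class TokenBucket:
    """示意性令牌桶: 以固定速率补充令牌, 报文出队需消耗等量令牌。"""

    def __init__(self, rate, burst):
        self.rate = rate    # 令牌补充速率(字节/秒), 对应本文的调度参数
        self.burst = burst  # 桶深(字节), 限制突发流量
        self.tokens = burst

    def refill(self, elapsed):
        """经过elapsed秒后补充令牌, 令牌数不超过桶深。"""
        self.tokens = min(self.burst, self.tokens + self.rate * elapsed)

    def try_send(self, size):
        """若令牌足够则发送size字节并扣减令牌, 否则报文继续排队。"""
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False

bucket = TokenBucket(rate=1000, burst=1500)
print(bucket.try_send(1500))  # True, 初始令牌足够
print(bucket.try_send(100))   # False, 令牌已耗尽, 报文继续排队
bucket.refill(1.0)            # 1秒后补充1000字节令牌
print(bucket.try_send(100))   # True
```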
需要说明的是,在图2中,处理装置101和TM硬件实体102集成在同一个网络设备(即网络设备10)中,在一些其他的应用场景中,TM硬件实体102可以独立于包括处理装置101的网络设备而存在。
设备11可以是服务器或终端设备,其中,终端设备例如可以为手机、平板电脑、个人计算机(personal computer,PC)、多媒体播放设备等。网络设备10与设备11之间通过网络通信,该网络可以是运营商网络,也可以是局域网。
网络设备10通过通信接口接收来自设备11的数据流,将该数据流传输到队列中。网络设备10的处理装置101根据队列的流量特性确定队列的调度参数,并向TM硬件实体102发送调度消息,该调度消息包括该调度参数,该调度参数用于TM硬件实体102对队列进行调度,以使对应队列的数据流按照与调度参数对应的速率出队,并通过通信接口向下一跳网络设备发送该数据流。
本申请实施例提供了一种队列调度方法及装置,实现HQoS调度树各层级之间的映射关系灵活管理的目的,节约调度资源。
参见图3,该图为本申请实施例提供的队列调度方法的流程示意图。
本申请实施例提供的队列调度方法包括如下步骤:
S101:处理装置生成HQoS调度树。
在本申请实施例中,处理装置可以是图2所示系统中的处理装置101。在本申请实施例中,处理装置预先生成HQoS调度树,与传统技术的不同之处在于,该HQoS调度树由软件实现。也就是说,HQoS调度树包括的根节点和叶子节点都是用软件代码来实现,其中,叶子节点可以用TM硬件实体(例如为图2所示的TM硬件实体102)上对应队列的标识进行表示,根节点可以用根节点的标识来表示。可选的,HQoS调度树还可以包括一层或多层分支节点,每个分支节点可以用对应的分支节点的标识来表示。
也就是说,HQoS调度树可以表示为各节点的标识之间的映射关系。以图1为例,HQoS调度树可以表示为如下映射关系的集合:
根节点10的标识分别与分支节点20的标识和分支节点21的标识之间的映射关系;
分支节点20的标识分别与分支节点30的标识和分支节点31的标识之间的映射关系;
分支节点21的标识与分支节点32的标识之间的映射关系;
分支节点30的标识分别与队列40的标识、队列41的标识以及队列42的标识之间的映射关系;
分支节点31的标识分别与队列43的标识和队列44的标识之间的映射关系;
分支节点32的标识分别与队列45的标识、队列46的标识和队列47的标识之间的映射关系。
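上述映射关系的集合在软件层面可以简单地表示为"子节点标识到父节点标识"的查找表,示意如下(Python草图,标识的具体取值为假设):

```python
# 以"子节点 -> 父节点"的映射表表示图1的HQoS调度树,
# 变更映射关系只需修改表项, 无需更换TM硬件实体
parent_of = {
    "branch20": "root10", "branch21": "root10",
    "branch30": "branch20", "branch31": "branch20",
    "branch32": "branch21",
    "queue40": "branch30", "queue41": "branch30", "queue42": "branch30",
    "queue43": "branch31", "queue44": "branch31",
    "queue45": "branch32", "queue46": "branch32", "queue47": "branch32",
}

def path_to_root(node):
    """由队列标识逐级查找父节点, 得到其到根节点的调度路径。"""
    path = [node]
    while node in parent_of:
        node = parent_of[node]
        path.append(node)
    return path

print(path_to_root("queue43"))  # ['queue43', 'branch31', 'branch20', 'root10']
```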
由于HQoS调度树可以被表达为各层节点之间的映射关系,该HQoS调度树可以通过配置得到,或者由与处理装置通信的网络管理设备下发得到,例如网络管理系统(network management system,NMS)。其中,网络管理设备可以是控制器或终端设备等。
在生成HQoS调度树之后,处理装置可以继续执行下面的S102-S104。可以理解的是,S101这个步骤不需要每次在执行S102-S104之前都执行一次。处理装置可以在执行一次S101之后反复执行S102-S104。
S102:处理装置根据所述多个叶子节点获取所述多个队列的流量特性。
在本申请实施例中,叶子节点与队列具有映射关系,可选的,叶子节点可以表示为对应队列的标识。
在本申请实施例中,队列的流量特性为队列传输的数据流的流量特性,例如包括与队列对应的数据流的输入速率和队列标识。
S103:处理装置根据所述多个队列的流量特性确定所述多个队列中至少一个队列的调度参数。
在本申请实施例中,队列的调度参数是用于对队列进行调度的参数。例如,当调度装置为令牌桶时,队列的调度参数为令牌桶输出令牌的速率。令牌桶输出令牌的速率是队列传输的数据流的输出速率的关键因素,所以,通过确定令牌桶输出令牌的速率就可以确定队列输出数据流的速率,实现对队列的调度。
具体的,处理装置根据多个队列的流量特性确定多个队列中至少一个队列的调度参数可以是处理装置根据多个队列的队列标识确定出多个队列的特性参数,并根据多个队列的输入速率和特性参数确定出多个队列中至少一个队列的调度参数。
其中,队列的特性参数可以是队列的优先级和权重中的至少一个。在多个队列均有对应的数据流的情况下,一个队列的优先级越高,代表该队列输出数据流的时机越早,优先级高的队列优先输出数据流。反之亦然。
在一些实施例中,队列标识本身可以代表队列的优先级。例如队列标识越小,表示队列优先级越高。以图1为例,对于队列40、队列41和队列42而言,队列40的优先级最高,队列41的优先级其次,队列42的优先级最低。
在另外一些实施例中,队列标识本身不代表队列的优先级,在这种情况下,处理装置可以预先存储队列的标识和队列的优先级之间的对应关系,根据队列的标识和该对应关系就可以得到队列的优先级。
参见表1,该表示出了队列的标识和队列的优先级之间的对应关系。从表1可以看出,队列43的优先级高于队列44的优先级,表示优先将带宽资源分配给队列43,剩余带宽资源分配给队列44。该带宽资源可以为分支节点30对应的带宽资源。
表1
队列的标识 队列的优先级
队列43 高
队列44 低
而权重代表队列输出数据流的速率占用总带宽的比例。权重越高,表示占用的总带宽的比例越高;权重越低,表示占用的总带宽的比例越低。
在本申请实施例中,处理装置可以预先存储队列的标识和权重之间的对应关系,这样就可以根据队列的标识和该对应关系得到队列对应的权重。
参见表2,该表示出了队列的标识和队列的权重之间的对应关系。根据表2可以得知,队列45输出数据流的速率可以占用总带宽的40%,队列46输出数据流的速率可以占用总带宽的35%,队列47输出数据流的速率可以占用总带宽的25%。其中的总带宽是指分支节点32对应的带宽。
表2
队列的标识 队列的权重
队列45 40%
队列46 35%
队列47 25%
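按表2的权重划分总带宽的计算可以示意如下(Python草图,总带宽的数值为假设,以Mbps为单位):

```python
def split_by_weight(total_bw, weights):
    """按权重(百分比)划分总带宽, 返回各队列的理论输出速率(Mbps)。"""
    return {q: total_bw * w // 100 for q, w in weights.items()}

# 表2中的权重, 假设分支节点32对应的总带宽为10000Mbps
rates = split_by_weight(10000, {"queue45": 40, "queue46": 35, "queue47": 25})
print(rates)  # {'queue45': 4000, 'queue46': 3500, 'queue47': 2500}
```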
表1和表2示出了队列的特性参数仅包括队列的优先级或队列的权重的情况,如前文所提,队列的特性参数还可以既包括队列的优先级,也包括权重。
参见表3,该表示出了队列的标识和队列的优先级以及队列的权重的对应关系。从表3可以得知,队列40的优先级高于队列41和队列42的优先级,所以可以优先将带宽资源分配给队列40,剩余的带宽资源中的40%分配给队列41,60%分配给队列42。其中的带宽资源可以为分支节点30对应的带宽资源。
表3
队列的标识 队列的优先级 队列的权重
队列40 高 —
队列41 低 40%
队列42 低 60%
在本申请实施例中,如果HQoS调度树仅包括根节点和叶子节点,那么处理装置可以根据多个队列的流量特性和根节点的调度参数确定多个队列中至少一个队列的调度参数。
具体的,处理装置在得到多个队列的流量特性之后,可以根据多个队列的队列标识和映射关系得到根节点的标识,该映射关系为多个队列的队列标识和根节点的标识之间的映射关系。然后,处理装置根据该根节点的标识获取根节点的调度参数,并根据多个队列的流量特性以及根节点的调度参数确定多个队列中至少一个队列的调度参数。此外,处理装置可以预先建立根节点的标识和根节点的调度参数之间的映射关系。
其中,根节点的调度参数可以通过配置得到,在配置时可以根据根节点对应的接口的总带宽进行确定。需要说明的是,根节点的调度参数可以为根节点对应的“虚拟令牌桶”的输出速率,之所以说“虚拟令牌桶”是因为在TM硬件实体上不包括与根节点对应的令牌桶,TM硬件实体仅包括与叶子节点对应的令牌桶,即一个叶子节点对应一个令牌桶。为了理解方便以及计算,本申请实施例中根节点的调度参数可以视为根节点的“虚拟令牌桶”的输出速率。
例如,假设根节点对应的接口的总带宽为100G,那么根节点对应的“虚拟令牌桶”的输出速率可以为100Gbps。假设该根节点对应两个叶子节点,分别对应队列1和队列2。其中,队列1的输入速率为70Gbps,队列2的输入速率为50Gbps,且队列1的优先级高于队列2的优先级。那么处理装置可以确定队列1的输出速率为70Gbps,队列2的输出速率为30Gbps。
再例如,假设根节点对应的接口的总带宽为100G,那么根节点对应的“虚拟令牌桶”的输出速率可以为100Gbps。假设该根节点对应两个叶子节点,分别对应队列1和队列2。其中,队列1的输入速率为70Gbps,队列2的输入速率为30Gbps,且队列1的权重为0.6,队列2的权重为0.4。那么处理装置可以确定队列1在理论上的输出速率为60Gbps,队列2在理论上的输出速率为40Gbps。但由于队列2的输入速率小于理论输出速率,所以队列2将输入速率作为实际输出速率,即30Gbps。为了不浪费带宽,队列1的实际输出速率可以大于其理论输出速率,例如为70Gbps(即100Gbps-30Gbps)。
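上例中"理论份额用不完时让出带宽"的计算过程,可以用如下Python草图示意。函数名与实现细节均为便于理解的假设,并非本申请限定的算法:

```python
def allocate_by_weight(total, demands, weights):
    """按权重分配带宽(Gbps); 若某队列的需求低于理论份额,
    则按需满足该队列, 并把剩余带宽让给其他队列。"""
    alloc = {}
    remaining_total = total
    remaining = dict(demands)
    while remaining:
        w_sum = sum(weights[q] for q in remaining)
        # 找出需求不超过理论份额的队列, 按需满足
        satisfied = {q: d for q, d in remaining.items()
                     if d <= remaining_total * weights[q] / w_sum}
        if not satisfied:
            # 所有剩余队列的需求都超过份额, 按权重比例分配剩余带宽
            for q in remaining:
                alloc[q] = remaining_total * weights[q] / w_sum
            break
        for q, d in satisfied.items():
            alloc[q] = d
            remaining_total -= d
            del remaining[q]
    return alloc

# 正文示例: 总带宽100Gbps, 队列1输入70Gbps权重0.6, 队列2输入30Gbps权重0.4
alloc = allocate_by_weight(100, {"q1": 70, "q2": 30}, {"q1": 0.6, "q2": 0.4})
print(alloc)  # q2按需得到30Gbps, 其余约70Gbps分配给q1
```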
如果HQoS调度树除了根节点和叶子节点以外,还包括分支节点,那么处理装置可以根据所述多个队列的流量特性确定所述至少一个分支节点的流量特性,然后根据至少一个分支节点的流量特性和所述根节点的调度参数确定所述至少一个分支节点的调度参数,最后,根据所述多个队列的流量特性和所述至少一个分支节点的调度参数确定所述多个队列中至少一个队列的调度参数。其中,分支节点的调度参数可以为分支节点对应的“虚拟令牌桶”的输出速率。
例如,假设根节点对应的接口总带宽为100G,该根节点对应两个分支节点,分别为分支节点1和分支节点2,其中分支节点1对应三个叶子节点,分别对应队列1、队列2和队列3,分支节点2对应一个叶子节点,该叶子节点对应队列4。且队列1的输入速率为20Gbps,队列2的输入速率为40Gbps,队列3的输入速率为30Gbps,队列4的输入速率为20Gbps。队列1的优先级为高优先级,队列2和队列3的优先级为低优先级。队列2和队列3的权重分别为70%和30%。那么处理装置根据队列1、队列2和队列3的输入速率之和得到分支节点1的输入速率90Gbps,根据队列4的输入速率确定分支节点2的输入速率为20Gbps。分支节点1的权重为70%,分支节点2的权重为30%。那么,处理装置根据分支节点1的权重和分支节点2的权重,以及根节点对应的接口总带宽确定出给分支节点1分配的带宽为70G,给分支节点2分配的带宽为30G。由于队列1的优先级为高优先级,那么可以将队列1的输入速率确定为队列1的输出速率,即为20Gbps。因而,队列2和队列3一共占用的带宽为50G,基于队列2和队列3的权重,队列2的输出速率为35Gbps,队列3的输出速率为15Gbps。对于分支节点2而言,由于只有一个队列,即队列4,且该队列4的输入速率小于给分支节点2分配的带宽,所以队列4的输出速率等于输入速率,即为20Gbps。
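上例的分层计算过程(先汇聚叶子节点的入口速率,再逐层分配带宽)可以概括为如下Python草图。数值取自上例,单位为Gbps,队列与分支节点的名称均为示意性假设:

```python
# 正文示例的分层计算草图: 权重以百分比整数表示, 便于整数运算
queues = {
    "q1": {"in": 20, "branch": "b1", "prio": "high"},
    "q2": {"in": 40, "branch": "b1", "prio": "low", "weight": 70},
    "q3": {"in": 30, "branch": "b1", "prio": "low", "weight": 30},
    "q4": {"in": 20, "branch": "b2", "prio": "high"},
}
branch_weight = {"b1": 70, "b2": 30}
root_bw = 100

# 第一步: 由叶子节点的输入速率汇聚得到分支节点的输入速率
branch_in = {}
for q in queues.values():
    branch_in[q["branch"]] = branch_in.get(q["branch"], 0) + q["in"]

# 第二步: 按分支权重划分根节点带宽
branch_bw = {b: root_bw * w // 100 for b, w in branch_weight.items()}

# 第三步: 分支内先满足高优先级队列, 再按权重划分剩余带宽
out = {}
for b, bw in branch_bw.items():
    members = {k: v for k, v in queues.items() if v["branch"] == b}
    for k, v in members.items():  # 高优先级队列按需出队
        if v["prio"] == "high":
            out[k] = min(v["in"], bw)
            bw -= out[k]
    for k, v in members.items():  # 低优先级队列按权重分享剩余带宽
        if v["prio"] == "low":
            out[k] = bw * v["weight"] // 100
print(out)  # {'q1': 20, 'q2': 35, 'q3': 15, 'q4': 20}
```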
具体的,处理装置在根据所述多个队列的流量特性确定所述至少一个分支节点的流量特性时,可以根据多个队列的队列标识和映射关系确定对应的分支节点的标识,然后根据该多个队列的流量特性确定该对应的分支节点的流量特性,从而得到分支节点的标识对应的分支节点的流量特性。当流量特性包括入口速率时,根据多个队列的流量特性确定对应分支节点的流量特性可以是根据多个队列的入口速率之和得到分支节点的入口速率。
例如,处理装置在获取到队列40、队列41和队列42各自的入口速率之后,根据这三个队列各自的队列标识和与分支节点30的标识之间的映射关系知晓这三个队列属于同一个分支节点,即分支节点30,然后根据队列40、队列41和队列42的入口速率之和得到分支节点30的标识对应的入口速率,即分支节点30的入口速率。
处理装置根据至少一个分支节点的流量特性和所述根节点的调度参数确定所述至少一个分支节点的调度参数时,可以根据至少一个分支节点的标识和与根节点的标识之间的映射关系确定根节点的标识,并根据根节点的标识得到根节点的调度参数,然后根据与至少一个分支节点的标识对应的流量特性和根节点的调度参数确定至少一个分支节点的调度参数。
例如,处理装置在获取到分支节点30、分支节点31的入口速率之后,根据分支节点30的标识、分支节点31的标识与分支节点20的标识之间的映射关系,确定与分支节点20的标识对应的分支节点20的入口速率。处理装置在获取到分支节点32的入口速率之后,根据分支节点32的标识与分支节点21的标识之间的映射关系,确定与分支节点21的标识对应的入口速率,即分支节点21的入口速率。分支节点21的入口速率与分支节点32的入口速率相同。在得到分支节点20的标识对应的入口速率和分支节点21的标识对应的入口速率之后,处理装置根据分支节点20的标识、分支节点21的标识与根节点10的标识的映射关系得到根节点10的标识,根据根节点10的标识获得根节点的调度参数。
然后,处理装置根据分支节点20的标识对应的入口速率和特性参数、分支节点21的标识对应的入口速率和特性参数以及根节点的调度参数得到分支节点20的调度参数和分支节点21的调度参数。接着,处理装置根据分支节点30的标识对应的入口速率和特性参数、分支节点31的标识对应的入口速率和特性参数以及分支节点20的调度参数得到分支节点30的调度参数和分支节点31的调度参数。同理,处理装置根据分支节点32的标识对应的入口速率和特性参数以及分支节点21的调度参数得到分支节点32的调度参数。
最后,处理装置根据队列40的标识对应的入口速率和特性参数、队列41的标识对应的入口速率和特性参数、队列42的标识对应的入口速率和特性参数以及分支节点30的调度参数,分别得到队列40、队列41以及队列42的调度参数。其余队列同理,此处不再赘述。
S104:处理装置向所述TM硬件实体中与所述至少一个队列对应的调度装置发送调度消息,所述调度消息包括所述至少一个队列的调度参数,所述调度参数用于调度所述至少一个队列。
在确定了队列的调度参数之后,处理装置可以向TM硬件实体中的调度装置下发包括调度参数的调度消息,以便调度装置根据调度参数对对应的队列进行调度。
此外,需要说明的是,在本申请实施例中,处理装置可以根据S103确定出多个队列中每个队列的调度参数,也可以从多个队列中确定出有数据流传输的一个或多个队列的调度参数,还可以从多个队列中确定出调度参数发生变化的一个或多个队列的调度参数。相对于第一种方式,后两种方式可以减少处理装置下发的调度消息的数目,节约处理装置的处理资源。
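上述"仅下发发生变化的调度参数"的做法可以示意如下(Python草图,队列名与数值均为假设):

```python
def changed_params(prev, curr):
    """仅挑选调度参数发生变化或新增的队列, 减少下发的调度消息数目。"""
    return {q: r for q, r in curr.items() if prev.get(q) != r}

# 上一次与本次计算得到的调度参数(示意值, 单位Gbps)
prev = {"q1": 20, "q2": 35, "q3": 15}
curr = {"q1": 20, "q2": 30, "q3": 15, "q4": 10}
print(changed_params(prev, curr))  # {'q2': 30, 'q4': 10}
```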
S105:调度装置接收调度消息,并根据调度参数调度叶子节点对应的队列。
本申请实施例通过用软件实现HQoS调度树,即生成各节点的标识之间的映射关系,并获取多个队列的流量特性,根据该映射关系和多个队列的流量特性得到队列的调度参数,当需要对该映射关系进行变更时,无需更换TM硬件实体,处理装置只需要通过配置或控制器下发等方式获取到新的映射关系就可以实现对队列进行灵活管理的目的,满足实际的传输需求,节约调度资源。
例如,假设图1中分支节点30对应的用户1不再需要叶子节点40对应的业务1,而分支节点31对应的用户2需要增加一个新的业务,即业务6,那么图1所示的HQoS调度树可以修改为图4所示的HQoS调度树,即将分支节点30的标识和队列40的标识之间的映射关系修改为分支节点31的标识和队列40的标识之间的映射关系。在修改完成之后,处理装置可以根据队列40的标识获取属于用户2的业务6的数据流的流量特性,以及根据该队列40的流量特性得到该队列40对应的调度装置的调度参数,并将该调度参数下发到该调度装置中,以对队列40进行调度,从而实现基于新的HQoS调度树对应的队列的调度。整个过程不需要更换TM硬件实体就可以满足新的流量传输需求,提高队列管理的灵活性以及节约调度资源。
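上述变更映射关系的操作在软件层面仅相当于修改一条表项,示意如下(Python草图,标识取值为假设):

```python
# 软件层面的"叶子节点 -> 分支节点"映射表(节选, 标识为示意)
parent_of = {"queue40": "branch30", "queue43": "branch31"}

def remap(mapping, queue, new_parent):
    """变更叶子节点与分支节点的映射, 无需更换TM硬件实体。
    返回新的映射表, 原表保持不变, 便于回滚。"""
    updated = dict(mapping)
    updated[queue] = new_parent
    return updated

# 将队列40从分支节点30迁移到分支节点31
new_map = remap(parent_of, "queue40", "branch31")
print(new_map["queue40"])  # branch31
```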
参见图5,本申请实施例还提供了一种处理装置500,该处理装置500可以实现图3所示实施例中处理装置的功能。其中,所述装置500包括:生成单元501、获取单元502、确定单元503和发送单元504。其中,生成单元501用于执行图3所示实施例中的S101,获取单元502用于执行图3所示实施例中的S102,确定单元503用于执行图3所示实施例中的S103,发送单元504用于执行图3所示实施例中的S104。
具体的,生成单元501,用于生成分层服务质量HQoS调度树,所述HQoS调度树是用于描述通信网络中参与调度的节点的树形结构,所述HQoS调度树包括多个叶子节点,所述多个叶子节点中的每个叶子节点用于标识流量管理TM硬件实体上的队列,所述TM硬件实体包括多个队列,所述多个叶子节点与所述多个队列一一对应;
获取单元502,用于根据所述多个叶子节点获取所述多个队列的流量特性;
确定单元503,用于根据所述多个队列的流量特性确定所述多个队列中至少一个队列的调度参数,所述多个队列的流量特性为所述多个队列传输的数据流的流量特性;
发送单元504,用于向所述TM硬件实体中与所述至少一个队列对应的调度装置发送调度消息,所述调度消息包括所述至少一个队列的调度参数,所述调度参数用于调度所述至少一个队列。
该处理装置500中各个单元的具体执行步骤请参考前述方法实施例,此处不再赘述。
参见图6,本申请实施例还提供了一种调度装置600,该调度装置600可以实现图3所示实施例中调度装置的功能。其中,调度装置600包括接收单元601和调度单元602。接收单元601和调度单元602用于执行图3所述实施例中的S105。
具体的,接收单元601,用于接收来自处理装置的调度消息,所述调度消息包括所述多个队列中至少一个队列的调度参数,所述TM硬件实体不包括所述处理装置;
调度单元602,用于根据所述至少一个队列的调度参数对所述至少一个队列进行调度。
该调度装置600中各个单元的具体执行步骤请参考前述方法实施例,此处不再赘述。
图7是本申请实施例提供的一种设备700的结构示意图。图5中的处理装置500和图6中的调度装置600可以通过图7所示的设备来实现。参见图7,该设备700包括至少一个处理器701,通信总线702以及至少一个网络接口704,可选地,该设备700还可以包括存储器703。
处理器701可以是一个通用中央处理器(central processing unit,CPU)、特定应用集成电路(application-specific integrated circuit,ASIC)或一个或多个用于控制本申请方案程序执行的集成电路(integrated circuit,IC)。处理器可以用于实现本申请实施例中提供的队列调度的方法。
比如,当图3中的处理装置通过图7所示的设备来实现时,该设备可以用于生成HQoS调度树,在生成HQoS调度树之后,根据所述多个叶子节点获取所述多个队列的流量特性,根据所述多个队列的流量特性确定所述多个队列中至少一个队列的调度参数,并向所述TM硬件实体中与所述至少一个队列对应的调度装置发送调度消息,具体功能实现可参考方法实施例中对应处理装置的处理部分。又比如,当图3中的调度装置通过图7所示的设备来实现时,该设备可以用于接收来自处理装置的调度消息,所述调度消息包括所述多个队列中至少一个队列的调度参数,并根据所述至少一个队列的调度参数对所述至少一个队列进行调度,具体功能实现可参考方法实施例中调度装置的处理部分。
通信总线702用于在处理器701、网络接口704和存储器703之间传送信息。
存储器703可以是只读存储器(read-only memory,ROM)或可存储静态信息和指令的其它类型的静态存储设备,存储器703还可以是随机存取存储器(random access memory,RAM)或者可存储信息和指令的其它类型的动态存储设备,也可以是只读光盘(compact disc read-only Memory,CD-ROM)或其它光盘存储、光碟存储(包括压缩光碟、激光碟、光碟、数字通用光碟、蓝光光碟等)、磁盘存储介质或者其它磁存储设备、或者能够用于携带或存储具有指令或数据结构形式的期望的程序代码并能够由计算机存取的任何其它介质,但不限于此。存储器703可以是独立存在,通过通信总线702与处理器701相连接。存储器703也可以和处理器701集成在一起。
可选地,存储器703用于存储执行本申请方案的程序代码或指令,并由处理器701来控制执行。处理器701用于执行存储器703中存储的程序代码或指令。程序代码中可以包括一个或多个软件模块。可选地,处理器701也可以存储执行本申请方案的程序代码或指令,在这种情况下处理器701不需要到存储器703中读取程序代码或指令。
网络接口704可以为收发器一类的装置,用于与其它设备或通信网络通信,通信网络可以为以太网、无线接入网(RAN)或无线局域网(wireless local area networks,WLAN)等。在本申请实施例中,网络接口704可以用于接收网络中的其他节点发送的报文,也可以向网络中的其他节点发送报文。网络接口704可以为以太网(ethernet)接口、快速以太网(fast ethernet,FE)接口或千兆以太网(gigabit ethernet,GE)接口等。
在具体实现中,作为一种实施例,设备700可以包括多个处理器,例如图7中所示的处理器701和处理器705。这些处理器中的每一个可以是一个单核(single-CPU)处理器,也可以是一个多核(multi-CPU)处理器。这里的处理器可以指一个或多个设备、电路、和/或用于处理数据(例如计算机程序指令)的处理核。
图8是本申请实施例提供的一种设备800的结构示意图。图3中的处理装置和调度装置可以通过图8所示的设备来实现。参见图8所示的设备结构示意图,设备800包括主控板和一个或多个接口板。主控板与接口板通信连接。主控板也称为主处理单元(main processing unit,MPU)或路由处理卡(route processor card),主控板包括CPU和存储器,主控板负责对设备800中各个组件的控制和管理,包括路由计算、设备管理和维护功能。接口板也称为线处理单元(line processing unit,LPU)或线卡(line card),用于接收和发送报文。在一些实施例中,主控板与接口板之间或接口板与接口板之间通过总线通信。在一些实施例中,接口板之间通过交换网板通信,在这种情况下设备800也包括交换网板,交换网板与主控板、接口板通信连接,交换网板用于转发接口板之间的数据,交换网板也可以称为交换网板单元(switch fabric unit,SFU)。接口板包括CPU、存储器、转发引擎和接口卡(interface card,IC),其中接口卡可以包括一个或多个网络接口。网络接口可以为Ethernet接口、FE接口或GE接口等。CPU与存储器、转发引擎和接口卡分别通信连接。存储器用于存储转发表。转发引擎用于基于存储器中保存的转发表转发接收到的报文,如果接收到的报文的目的地址为设备800的IP地址,则将该报文发送给主控板或接口板的CPU进行处理;如果接收到的报文的目的地址不是设备800的IP地址,则根据该目的地址查找转发表,如果从转发表中查找到该目的地址对应的下一跳和出接口,将该报文转发到该目的地址对应的出接口。转发引擎可以是网络处理器(network processor,NP)。接口卡也称为子卡,可安装在接口板上,负责将光电信号转换为数据帧,并对数据帧进行合法性检查后转发给转发引擎处理或接口板CPU。在一些实施例中,CPU也可执行转发引擎的功能,比如基于通用CPU实现软转发,从而接口板中不需要转发引擎。在一些实施例中,转发引擎可以通过ASIC或现场可编程门阵列(field programmable gate array,FPGA)实现。在一些实施例中,存储转发表的存储器也可以集成到转发引擎中,作为转发引擎的一部分。
本申请实施例还提供一种芯片系统,包括:处理器,所述处理器与存储器耦合,所述存储器用于存储程序或指令,当所述程序或指令被所述处理器执行时,使得该芯片系统实现上述图3所示实施例中处理装置或调度装置的方法。
可选地,该芯片系统中的处理器可以为一个或多个。该处理器可以通过硬件实现也可以通过软件实现。当通过硬件实现时,该处理器可以是逻辑电路、集成电路等。当通过软件实现时,该处理器可以是一个通用处理器,通过读取存储器中存储的软件代码来实现。可选地,该芯片系统中的存储器也可以为一个或多个。该存储器可以与处理器集成在一起,也可以和处理器分离设置,本申请并不限定。示例性的,存储器可以是非瞬时性处理器,例如只读存储器ROM,其可以与处理器集成在同一块芯片上,也可以分别设置在不同的芯片上,本申请对存储器的类型,以及存储器与处理器的设置方式不作具体限定。
示例性的,该芯片系统可以是FPGA,可以是ASIC,还可以是系统芯片(system on chip,SoC),还可以是CPU,还可以是NP,还可以是数字信号处理电路(digital signal processor,DSP),还可以是微控制器(micro controller unit,MCU),还可以是可编程逻辑器件(programmable logic device,PLD)或其他集成芯片。
应理解,上述方法实施例中的各步骤可以通过处理器中的硬件的集成逻辑电路或者软件形式的指令完成。结合本申请实施例所公开的方法步骤可以直接体现为硬件处理器执行完成,或者用处理器中的硬件及软件模块组合执行完成。
此外,本申请实施例还提供了一种队列调度系统,包括图5所示实施例的处理装置500和图6所示实施例的调度装置600。
本申请实施例还提供了一种计算机可读存储介质,包括指令,当其在计算机上运行时,使得计算机执行前述实施例中的方法。
本申请的说明书和权利要求书及上述附图中的术语“第一”、“第二”、“第三”、“第四”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的实施例能够以除了在这里图示或描述的内容以外的顺序实施。此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包括,例如,包括了一系列步骤或单元的过程、方法、系统、产品或设备不必限于清楚地列出的那些步骤或单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它步骤或单元。
本申请中“至少一项(个)”是指一个或者多个,“多个”是指两个或两个以上。“以下至少一项(个)”或其类似表达,是指的这些项中的任意组合,包括单项(个)或复数项(个)的任意组合。例如,a,b,或c中的至少一项(个),可以表示:a,b,c,a-b,a-c,b-c,或a-b-c,其中a,b,c可以是单个,也可以是多个。本申请中认为“A和/或B”包括单独A,单独B,和A+B。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统,装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统,装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑模块划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要获取其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各模块单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件模块单元的形式实现。
所述集成的单元如果以软件模块单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。
本领域技术人员应该可以意识到,在上述一个或多个示例中,本发明所描述的功能可以用硬件、软件、固件或它们的任意组合来实现。当使用软件实现时,可以将这些功能存储在计算机可读介质中或者作为计算机可读介质上的一个或多个指令或代码进行传输。计算机可读介质包括计算机存储介质和通信介质,其中通信介质包括便于从一个地方向另一个地方传送计算机程序的任何介质。存储介质可以是通用或专用计算机能够存取的任何可用介质。
以上所述的具体实施方式,对本发明的目的、技术方案和有益效果进行了进一步详细说明,所应理解的是,以上所述仅为本发明的具体实施方式而已。
以上所述,以上实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的范围。

Claims (20)

  1. 一种队列调度方法,其特征在于,所述方法包括:
    处理装置生成分层服务质量HQoS调度树,所述HQoS调度树是用于描述通信网络中参与调度的节点的树形结构,所述HQoS调度树包括多个叶子节点,所述多个叶子节点中的每个叶子节点用于标识流量管理TM硬件实体上的队列,所述TM硬件实体包括多个队列,所述多个叶子节点与所述多个队列一一对应;
    所述处理装置根据所述多个叶子节点获取所述多个队列的流量特性;
    所述处理装置根据所述多个队列的流量特性确定所述多个队列中至少一个队列的调度参数,所述多个队列的流量特性为所述多个队列传输的数据流的流量特性;
    所述处理装置向所述TM硬件实体中与所述至少一个队列对应的调度装置发送调度消息,所述调度消息包括所述至少一个队列的调度参数,所述调度参数用于调度所述至少一个队列。
  2. 根据权利要求1所述的方法,其特征在于,所述HQoS调度树还包括所述多个叶子节点的根节点;
    所述处理装置根据所述多个队列的流量特性确定所述多个队列中至少一个队列的调度参数包括:
    所述处理装置根据所述多个队列的流量特性和所述根节点的调度参数确定所述多个队列中至少一个队列的调度参数。
  3. 根据权利要求1所述的方法,其特征在于,所述HQoS调度树还包括一个根节点和所述根节点对应的至少一个分支节点,所述至少一个分支节点中的每个分支节点对应一个或多个叶子节点,不同的分支节点对应的叶子节点不同;
    所述处理装置根据所述多个队列的流量特性确定所述多个队列中至少一个队列的调度参数包括:
    所述处理装置根据所述多个队列的流量特性确定所述至少一个分支节点的流量特性;
    所述处理装置根据所述至少一个分支节点的流量特性和所述根节点的调度参数确定所述至少一个分支节点的调度参数;
    所述处理装置根据所述多个队列的流量特性和所述至少一个分支节点的调度参数确定所述多个队列中至少一个队列的调度参数。
  4. 根据权利要求1-3任一项所述的方法,其特征在于,所述流量特性包括输入速率和队列标识;
    所述处理装置根据所述多个队列的流量特性确定所述多个队列中至少一个队列的调度参数包括:
    所述处理装置根据所述多个队列的队列标识确定所述多个队列的特性参数,所述特性参数包括优先级和权重中的至少一个;
    所述处理装置根据所述多个队列的输入速率和特性参数确定所述多个队列中至少一个队列的调度参数。
  5. 根据权利要求1-4任一项所述的方法,其特征在于,所述调度装置为令牌桶,所述调度参数为所述令牌桶输出令牌的速率。
  6. 根据权利要求1-5任一项所述的方法,其特征在于,所述TM硬件实体为专用集成电路ASIC芯片或可编程逻辑控制器PLC。
  7. 根据权利要求1-6任一项所述的方法,其特征在于,所述处理装置和所述TM硬件实体属于相同的网络设备。
  8. 一种队列调度方法,其特征在于,所述方法应用于流量管理TM硬件实体的调度装置,所述TM硬件实体还包括多个队列,所述方法包括:
    所述调度装置接收来自处理装置的调度消息,所述调度消息包括所述多个队列中至少一个队列的调度参数,所述TM硬件实体不包括所述处理装置;
    所述调度装置根据所述至少一个队列的调度参数对所述至少一个队列进行调度。
  9. 根据权利要求8所述的方法,其特征在于,所述处理装置和所述TM硬件实体属于同一个网络设备。
  10. 一种处理装置,其特征在于,所述装置包括:
    生成单元,用于生成分层服务质量HQoS调度树,所述HQoS调度树是用于描述通信网络中参与调度的节点的树形结构,所述HQoS调度树包括多个叶子节点,所述多个叶子节点中的每个叶子节点用于标识流量管理TM硬件实体上的队列,所述TM硬件实体包括多个队列,所述多个叶子节点与所述多个队列一一对应;
    获取单元,用于根据所述多个叶子节点获取所述多个队列的流量特性;
    确定单元,用于根据所述多个队列的流量特性确定所述多个队列中至少一个队列的调度参数,所述多个队列的流量特性为所述多个队列传输的数据流的流量特性;
    发送单元,用于向所述TM硬件实体中与所述至少一个队列对应的调度装置发送调度消息,所述调度消息包括所述至少一个队列的调度参数,所述调度参数用于调度所述至少一个队列。
  11. 根据权利要求10所述的装置,其特征在于,所述HQoS调度树还包括所述多个叶子节点的根节点;
    所述确定单元,用于根据所述多个队列的流量特性和所述根节点的调度参数确定所述多个队列中至少一个队列的调度参数。
  12. 根据权利要求10所述的装置,其特征在于,所述HQoS调度树还包括一个根节点和所述根节点对应的至少一个分支节点,所述至少一个分支节点中的每个分支节点对应一个或多个叶子节点,不同的分支节点对应的叶子节点不同;
    所述确定单元,用于根据所述多个队列的流量特性确定所述至少一个分支节点的流量特性,根据所述至少一个分支节点的流量特性和所述根节点的调度参数确定所述至少一个分支节点的调度参数,根据所述多个队列的流量特性和所述至少一个分支节点的调度参数确定所述多个队列中至少一个队列的调度参数。
  13. 根据权利要求10-12任一项所述的装置,其特征在于,所述流量特性包括输入速率和队列标识;
    所述确定单元,用于根据所述多个队列的队列标识确定所述多个队列的特性参数,并根据所述多个队列的输入速率和特性参数确定所述多个队列中至少一个队列的调度参数,所述特性参数包括优先级和权重中的至少一个。
  14. 根据权利要求10-13任一项所述的装置,其特征在于,所述调度装置为令牌桶,所述调度参数为所述令牌桶输出令牌的速率。
  15. 根据权利要求10-14任一项所述的装置,其特征在于,所述TM硬件实体为专用集成电路ASIC芯片或可编程逻辑控制器PLC。
  16. 根据权利要求10-15任一项所述的装置,其特征在于,所述处理装置和所述TM硬件实体属于相同的网络设备。
  17. 一种调度装置,其特征在于,所述装置属于流量管理TM硬件实体,所述流量管理TM硬件实体还包括多个队列,所述调度装置包括:
    接收单元,用于接收来自处理装置的调度消息,所述调度消息包括所述多个队列中至少一个队列的调度参数,所述TM硬件实体不包括所述处理装置;
    调度单元,用于根据所述至少一个队列的调度参数对所述至少一个队列进行调度。
  18. 根据权利要求17所述的装置,其特征在于,所述处理装置和所述TM硬件实体属于同一个网络设备。
  19. 一种队列调度系统,其特征在于,包括如权利要求10-16任一项所述的处理装置和如权利要求17-18任一项所述的调度装置。
  20. 一种计算机可读存储介质,其特征在于,包括指令,当其在计算机上运行时,使得计算机执行权利要求1-9任一项所述的队列调度方法。
PCT/CN2021/098028 2020-07-16 2021-06-03 一种队列调度方法、装置及系统 WO2022012204A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21843080.9A EP4175253A4 (en) 2020-07-16 2021-06-03 QUEUE SCHEDULING METHOD, DEVICE AND SYSTEM
US18/155,565 US20230155954A1 (en) 2020-07-16 2023-01-17 Queue scheduling method, apparatus, and system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010685543.7A CN113949675A (zh) 2020-07-16 2020-07-16 一种队列调度方法、装置及系统
CN202010685543.7 2020-07-16

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/155,565 Continuation US20230155954A1 (en) 2020-07-16 2023-01-17 Queue scheduling method, apparatus, and system

Publications (1)

Publication Number Publication Date
WO2022012204A1 true WO2022012204A1 (zh) 2022-01-20

Family

ID=79326604

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/098028 WO2022012204A1 (zh) 2020-07-16 2021-06-03 一种队列调度方法、装置及系统

Country Status (4)

Country Link
US (1) US20230155954A1 (zh)
EP (1) EP4175253A4 (zh)
CN (1) CN113949675A (zh)
WO (1) WO2022012204A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101478475A (zh) * 2008-11-21 2009-07-08 中兴通讯股份有限公司 一种HQoS技术在T-MPLS网络中的实现方法
CN101610218A (zh) * 2009-07-15 2009-12-23 中兴通讯股份有限公司 一种HQoS策略树的管理方法及系统
CN101667974A (zh) * 2009-10-12 2010-03-10 中兴通讯股份有限公司 一种实现分层服务质量的方法及网络处理器
CN101958836A (zh) * 2010-10-12 2011-01-26 中兴通讯股份有限公司 层次化服务质量中队列资源管理方法及装置
US20120020210A1 (en) * 2010-05-18 2012-01-26 Lsi Corporation Byte-accurate scheduling in a network processor
CN106330710A (zh) * 2015-07-01 2017-01-11 中兴通讯股份有限公司 数据流调度方法及装置

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104378309B (zh) * 2013-08-16 2019-05-21 中兴通讯股份有限公司 OpenFlow网络中实现QoS的方法、系统和相关设备

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101478475A (zh) * 2008-11-21 2009-07-08 中兴通讯股份有限公司 一种HQoS技术在T-MPLS网络中的实现方法
CN101610218A (zh) * 2009-07-15 2009-12-23 中兴通讯股份有限公司 一种HQoS策略树的管理方法及系统
CN101667974A (zh) * 2009-10-12 2010-03-10 中兴通讯股份有限公司 一种实现分层服务质量的方法及网络处理器
US20120020210A1 (en) * 2010-05-18 2012-01-26 Lsi Corporation Byte-accurate scheduling in a network processor
CN101958836A (zh) * 2010-10-12 2011-01-26 中兴通讯股份有限公司 层次化服务质量中队列资源管理方法及装置
CN106330710A (zh) * 2015-07-01 2017-01-11 中兴通讯股份有限公司 数据流调度方法及装置

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4175253A4

Also Published As

Publication number Publication date
CN113949675A (zh) 2022-01-18
EP4175253A1 (en) 2023-05-03
US20230155954A1 (en) 2023-05-18
EP4175253A4 (en) 2023-11-29

Similar Documents

Publication Publication Date Title
WO2020182150A1 (zh) 报文处理方法、装置、设备及系统
WO2020103834A1 (zh) 时延敏感网络通信方法及其装置
US7701849B1 (en) Flow-based queuing of network traffic
US7283472B2 (en) Priority-based efficient fair queuing for quality of service classification for packet processing
KR100961712B1 (ko) 선호하는 경로 소스 라우팅, 다중의 보증 QoS와 자원유보, 관리 및 배포를 구비하는 고 성능의 통신 버스를제공하는 장치, 방법 및 컴퓨터 프로그램 생성물
Wang et al. Implementation of multipath network virtualization with SDN and NFV
WO2020073903A1 (zh) 时延敏感网络通信方法及其装置
WO2020024961A1 (zh) 数据处理方法、设备及系统
US11770327B2 (en) Data distribution method, data aggregation method, and related apparatuses
US20220263765A1 (en) Service Traffic Adjustment Method and Apparatus
WO2020177255A1 (zh) 无线接入网的资源分配方法及装置
WO2023142937A1 (zh) 一种网络拥塞控制方法及相关装置
CN115622952A (zh) 资源调度方法、装置、设备及计算机可读存储介质
JP2014158283A (ja) 予約ストリームの最大待ち時間の低減
WO2022160665A1 (zh) 一种报文转发的方法、报文处理方法及设备
US10382582B1 (en) Hierarchical network traffic scheduling using dynamic node weighting
CN112005528B (zh) 一种数据交换方法、数据交换节点及数据中心网络
US6795441B1 (en) Hierarchy tree-based quality of service classification for packet processing
CN109922003A (zh) 一种数据发送方法、系统及相关组件
CN111970149B (zh) 一种基于硬件防火墙qos的共享带宽实现方法
WO2022012204A1 (zh) 一种队列调度方法、装置及系统
US7339953B2 (en) Surplus redistribution for quality of service classification for packet processing
WO2019165855A1 (zh) 一种报文传输的方法及装置
WO2019200568A1 (zh) 一种数据通信方法及装置
JP2011091711A (ja) ノード及び送信フレーム振り分け方法並びにプログラム

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2021843080

Country of ref document: EP

Effective date: 20230124

NENP Non-entry into the national phase

Ref country code: DE