WO2006069528A1 - A packet scheduling method in the packet service - Google Patents


Info

Publication number
WO2006069528A1
Authority
WO
WIPO (PCT)
Prior art keywords
packet
priority
queue
traffic
data packet
Prior art date
Application number
PCT/CN2005/002312
Other languages
French (fr)
Chinese (zh)
Inventor
Wumao Chen
Xueyi Zhao
Original Assignee
Huawei Technologies Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2006069528A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/11 Identifying congestion
    • H04L47/24 Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L47/2425 Traffic characterised by specific attributes, e.g. priority or QoS for supporting services specification, e.g. SLA
    • H04L47/2441 Traffic characterised by specific attributes, e.g. priority or QoS relying on flow classification, e.g. using integrated services [IntServ]
    • H04L47/30 Flow control; Congestion control in combination with information about buffer occupancy at either end or at transit nodes
    • H04L47/32 Flow control; Congestion control by discarding or delaying data units, e.g. packets or frames
    • H04L47/50 Queue scheduling
    • H04L47/52 Queue scheduling by attributing bandwidth to queues
    • H04L47/525 Queue scheduling by attributing bandwidth to queues by redistribution of residual bandwidth
    • H04L47/527 Quantum based scheduling, e.g. credit or deficit based scheduling or token bank
    • H04L47/62 Queue scheduling characterised by scheduling criteria
    • H04L47/621 Individual queue per connection or flow, e.g. per VC

Definitions

  • The present invention relates to a scheduling method, and more particularly to a scheduling method applicable to the UBR+ (enhanced UBR) service in a packet service system.
  • UBR+: enhanced UBR
  • UBR: Unspecified Bit Rate
  • QoS: Quality of Service
  • Different services require the network to provide different QoS parameters, and different QoS parameters need to be guaranteed by corresponding scheduling methods.
  • So-called scheduling, also known as flow control, controls the order in which connections are served, with the following objectives: 1) ensure that each connection enjoys its reserved bandwidth; 2) when the system has remaining bandwidth, allocate it according to the QoS requirements of each connection; 3) guarantee the delay requirements of each connection.
  • The service goals required by the QoS of the existing UBR+ service are: 1) guarantee the bandwidth of connections that have a minimum-bandwidth requirement, and serve the remaining connections on a best-effort basis; 2) share the remaining system bandwidth among all connections, including those with minimum-bandwidth requirements; 3) handle delay on a best-effort basis.
  • Round-robin and fair scheduling are the most commonly used scheduling algorithms, mainly including Round Robin (RR), Weighted Round Robin (WRR), and Deficit Round Robin (DRR). Although these algorithms can guarantee fairness of bandwidth allocation among users, or ensure that users share the egress bandwidth proportionally, they have the following deficiency: it is difficult to guarantee the minimum bandwidth of one or some high-priority users while the remaining bandwidth is shared by all users. To let all users share the remaining bandwidth while guaranteeing the high-priority users' bandwidth, the scheduling weight of each user must be modified dynamically according to the number of user connections. For example, suppose the egress bandwidth is N.
  • When one high-priority user must be guaranteed a minimum bandwidth M, and that user together with L ordinary users (each assumed to have weight 1) must share the remaining bandwidth (N-M), then that user's weight needs to be set to M*(L+1)/(N-M).
  • In practice, however, high-priority users and ordinary users may join and leave at random, so calculating and setting each user's weight is cumbersome. More importantly, since the number of user connections can change at any time, accurately setting the weight of each user becomes difficult, or even impossible.
  • Another common scheduling method implements the bandwidth guarantee and sharing of the UBR+ service by combining traffic policing with PQ scheduling.
  • Traffic policing (Committed Access Rate, CAR) sets an allowed rate: traffic within this rate can pass, and traffic exceeding it is discarded. Moreover, to ensure that each high-priority user obtains its required minimum bandwidth, the sum of the minimum bandwidths required by all high-priority users must not exceed the total egress bandwidth when traffic policing is used.
  • PQ scheduling is used in this method to ensure that the high-priority user traffic that passes traffic policing always takes precedence over ordinary user traffic. The working principle is shown in Figure 1.
  • Packets 1 of a high-priority user pass through traffic policing 2, which admits traffic within the rate range into high-priority traffic queue 3; packets 4 of ordinary users enter normal traffic queue 5; PQ scheduling 6 then ensures that high-priority user traffic takes precedence over ordinary user traffic.
  • Although the minimum bandwidth of high-priority user traffic is thus guaranteed, only ordinary users share the remaining bandwidth: the portion of high-priority user traffic that exceeds the minimum bandwidth is discarded outright rather than sharing the remaining bandwidth with ordinary user traffic. Therefore, this solution does not meet the bandwidth requirements of the UBR+ service.
  • The technical problem to be solved by the present invention is to provide a simple scheduling method that guarantees the minimum bandwidth of one or some high-priority users while also allowing the remaining bandwidth to be shared by all users, including the high-priority users, thereby fully meeting the bandwidth requirements of the UBR+ service.
  • A packet scheduling method in a packet service is provided, comprising the following steps:
  • S1: User classification, dividing all users into high-priority users and ordinary users;
  • S2: Determine the traffic metering rate of each high-priority user;
  • S3: Meter the traffic of the high-priority users' packets; treat packets that do not exceed the metering rate as high-priority packets, and treat packets that exceed the metering rate, together with the ordinary users' packets, as ordinary packets;
  • S4: Place the high-priority packets into the egress transmit queue and send them; for ordinary packets, check whether the egress is congested; if it is, discard the excess ordinary packets, otherwise place the ordinary packets into the egress transmit queue and send them.
  • The transmit queue may comprise a high-priority traffic queue and a normal traffic queue.
  • The high-priority packets are sent through the high-priority traffic queue, and all ordinary packets are sent through the normal traffic queue.
  • In step S4, absolute-priority (PQ) scheduling is also used, so that the high-priority traffic queue always takes precedence over the normal traffic queue.
  • Alternatively, the transmit queue is implemented as a FIFO queue with a high threshold and a low threshold: high-priority packets are admitted according to the high threshold and are discarded when they cannot enter the queue; ordinary packets are admitted according to the low threshold and are discarded when they cannot enter the queue.
  • The difference between the high threshold and the low threshold is not less than the sum of the burst sizes of all high-priority users.
  • Alternatively, the high-priority traffic queue and the normal traffic queue each have their own threshold: high-priority packets that exceed the high-priority queue threshold are re-marked and processed as low-priority packets, while low-priority packets that exceed the low-priority queue threshold are discarded.
  • The packet scheduling method in a packet service of the present invention meters the traffic of high-priority users' packets, classifying packets that do not exceed the metering rate as high-priority packets and packets that exceed it as ordinary packets, and classifies the ordinary users' packets as ordinary packets.
  • High-priority packets are enqueued and sent, while the enqueuing of ordinary packets is controlled when the egress is congested. This guarantees the minimum bandwidth of high-priority user traffic while all users share the remaining bandwidth, fully meeting the bandwidth requirements of the UBR+ service.
  • FIG. 1 is a schematic diagram of a working principle of scheduling using traffic policing and PQ.
  • FIG. 2 is a flow chart of a packet scheduling method in the packet service of the present invention.
  • FIG. 3 is a schematic diagram showing the operation of a packet scheduling method in the packet service of the present invention.
  • FIG. 4 is a schematic diagram of a first embodiment of a packet scheduling method in a packet service of the present invention.
  • FIG. 5 is a schematic diagram of a second embodiment of a packet scheduling method in a packet service according to the present invention.
  • FIG. 6 is a schematic diagram of a third embodiment of a packet scheduling method in a packet service of the present invention.
  • The packet scheduling method in a packet service of the present invention first performs user classification in step S1, dividing all users into high-priority users and ordinary users. Then, in step S2, the minimum bandwidth of each high-priority user is determined, and the traffic metering rate is set accordingly.
  • In step S3, the packets sent by the users are classified: the packets 10 of high-priority users are metered against the rate set in step S2, and a leaky-bucket or token-bucket algorithm may be used to check whether a packet 10 of a high-priority user constitutes excess traffic. Packets that do not exceed the metering rate are classified as high-priority packets 101, packets that exceed the metering rate are classified as ordinary packets 102, and the ordinary users' packets 20 are classified as ordinary packets 202.
  • Finally, in step S4, the packets are processed at the egress: high-priority packets 101 are placed into the egress transmit queue and sent. For ordinary packets 102, 202, the congestion status of the egress is checked; if the egress is congested, the excess ordinary packets 102, 202 are discarded, and if the egress is clear, the ordinary packets 102, 202 are placed into the egress transmit queue and sent. This guarantees the minimum bandwidth of high-priority user traffic while enabling all users to fully share the remaining bandwidth.
  • In the first embodiment of the packet scheduling method in a packet service of the present invention, all users are first divided into high-priority users and ordinary users; the minimum bandwidth of each high-priority user is then determined and the traffic metering rate is set; traffic metering and packet classification are then performed.
  • The traffic metering only measures the rate and does not discard: the packets 10 of a high-priority user are classified, and packets within the rate range are classified as high-priority packets 101 and enter the high-priority traffic queue.
  • Packets exceeding the rate are classified as ordinary packets 102 and are subject to the same enqueue control as the ordinary packets 20 of other ordinary users before entering the normal traffic queue.
  • A PQ scheduling module performs scheduling at the egress, ensuring that all high-priority packets always take precedence over ordinary packets.
  • In the second embodiment, all users are first divided into high-priority users and ordinary users; the minimum bandwidth of each high-priority user is then determined and the traffic metering rate is set; traffic metering and packet classification then classify the packets 10 of high-priority users within the rate range as high-priority packets 101.
  • Packets 10 exceeding the rate range are classified, together with the packets 20 of ordinary users, as ordinary packets 102, 202.
  • A FIFO (First In, First Out) queue is then used at the egress for packet processing.
  • The FIFO queue has a high threshold and a low threshold: high-priority packets 101 are admitted according to the high threshold and are discarded when they cannot enter the queue; ordinary packets are admitted according to the low threshold and are discarded when they cannot enter the queue.
  • While the embodiment of Figure 4 uses two queues, one of higher and one of lower priority, the embodiment of Figure 5 uses only one FIFO queue and simulates two logical queues (physically a single queue) by setting two thresholds in that FIFO queue.
  • Scheduling of high- and low-priority packets is achieved through these two logical queues. For example, suppose a FIFO queue holds up to 1000 packets, the low-priority threshold is 500, and the high-priority threshold is 1000; ordinary packets can enter only while the number of packets stored in this FIFO queue is less than 500.
  • In this way high-priority packets take precedence over ordinary packets, and the minimum bandwidth of high-priority users is guaranteed;
  • the packets exceeding the rate range, together with the packets of ordinary users, form the ordinary packets, so that high-priority users and ordinary users can share the remaining bandwidth.
  • The difference between the high threshold and the low threshold is not less than the sum of the burst sizes of all high-priority users, meaning that burst data from all high-priority users can enter the queue, thus guaranteeing the bandwidth requirements of high-priority users.
  • The reason this difference is not less than the sum of the burst sizes of all high-priority users is to ensure that high-priority packets bursting from multiple users are not discarded for lack of buffering because the capacity reserved between the thresholds is too small.
  • In the third embodiment, all users are first divided into high-priority users and ordinary users; the minimum bandwidth of each high-priority user is then determined and the traffic metering rate is set; packets within the rate range are classified as high-priority packets 101, and packets 10 exceeding the rate range are classified as ordinary packets 102.
  • The high-priority traffic queue and the normal traffic queue each have their own threshold. High-priority packets that exceed the high-priority queue threshold are not discarded but are re-marked as low-priority packets; from that point on they are processed exactly like low-priority packets, with no distinction made. Low-priority packets that exceed the low-priority queue threshold are discarded.
  • The low-priority packets at this point may therefore comprise two parts: original low-priority packets, and former high-priority packets that could not enter the high-priority queue and were re-marked as low priority.
  • A PQ scheduling module performs scheduling at the egress, ensuring that all high-priority packets always take precedence over ordinary packets. This guarantees the minimum bandwidth of high-priority user traffic while all users share the remaining bandwidth, fully meeting the bandwidth requirements of the UBR+ service.
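Steps S1 to S4 above can be sketched end-to-end as a small model. This is an illustrative sketch only: the function names are invented, and a fixed-capacity egress queue stands in for the congestion test, which the patent does not specify.

```python
from collections import deque

def classify(user_is_high: bool, within_rate: bool) -> str:
    """S3: a high-priority user's packet within the metering rate is high
    priority; its excess, and all ordinary users' packets, are ordinary."""
    return "high" if (user_is_high and within_rate) else "ordinary"

def enqueue(queue: deque, capacity: int, priority: str) -> bool:
    """S4: high-priority packets always enter the egress transmit queue;
    ordinary packets enter only while the egress is not congested
    (modelled here as the queue not being full)."""
    if priority == "high" or len(queue) < capacity:
        queue.append(priority)
        return True
    return False  # congested egress: excess ordinary packet discarded

q = deque()
sent = [enqueue(q, 3, classify(True, True)),    # high-priority user, in rate
        enqueue(q, 3, classify(True, False)),   # high user's excess -> ordinary
        enqueue(q, 3, classify(False, True)),   # ordinary user, queue has room
        enqueue(q, 3, classify(False, True))]   # queue full: discarded
print(sent)  # [True, True, True, False]
```

Note how the high-priority user's excess traffic is demoted rather than dropped, so it still competes for the remaining bandwidth, which is the key difference from the prior-art CAR scheme.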

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A packet scheduling method in a packet service includes: dividing users into high-priority users and ordinary users; determining the minimum bandwidth of each high-priority user and setting its traffic metering rate accordingly; metering the traffic of the high-priority users' packets, so that packets within the metering rate are high-priority packets and packets exceeding it are ordinary packets; treating the ordinary users' packets as ordinary packets; placing the high-priority packets into the transmit queue for transmission; and, for ordinary packets, checking the congestion condition of the egress: discarding the excess ordinary packets if the egress is congested, and transmitting the ordinary packets if it is not. The invention guarantees the minimum bandwidth of high-priority user traffic while all users share the remaining bandwidth, fully providing the bandwidth required by the UBR+ service.

Description

Packet Scheduling Method in a Packet Service
TECHNICAL FIELD

The present invention relates to scheduling methods, and in particular to a scheduling method applicable to the UBR+ (enhanced UBR, where UBR stands for Unspecified Bit Rate) service in a packet service system.
BACKGROUND ART
In a packet service system, quality of service (QoS) parameters include packet delay, delay jitter, loss rate, and throughput. Different services require the network to provide different QoS parameters, and different QoS parameters must be guaranteed by corresponding scheduling methods. So-called scheduling, also known as flow control, controls the order in which connections are served, with the following objectives: 1) ensure that each connection enjoys its reserved bandwidth; 2) when the system has remaining bandwidth, allocate it according to the QoS requirements of each connection; 3) guarantee the delay requirements of each connection.
The service goals required by the QoS of the existing UBR+ service are: 1) guarantee the bandwidth of connections that have a minimum-bandwidth requirement, and serve the remaining connections on a best-effort basis; 2) share the remaining system bandwidth among all connections, including those with minimum-bandwidth requirements; 3) handle delay on a best-effort basis.
At present, two scheduling algorithms are commonly used for this service: the first is round-robin and fair scheduling, and the second is traffic policing combined with strict-priority queueing (Strict-Priority Queue, PQ). Before describing the two methods in detail, the following definitions are made: a connection with a minimum bandwidth guarantee is called a high-priority connection, and a connection without a bandwidth requirement is called an ordinary connection; the corresponding users are called high-priority users and ordinary users.
Round-robin and fair scheduling are the most commonly used scheduling algorithms, mainly including Round Robin (RR), Weighted Round Robin (WRR), and Deficit Round Robin (DRR). Although these algorithms can guarantee fairness of bandwidth allocation among users, or ensure that users share the egress bandwidth proportionally, they have the following deficiency: it is difficult to guarantee the minimum bandwidth of one or some high-priority users while the remaining bandwidth is shared by all users. To let all users share the remaining bandwidth while guaranteeing the high-priority users' bandwidth, the scheduling weight of each user must be modified dynamically according to the number of user connections. For example, suppose the egress bandwidth is N. When there is one high-priority user, meeting the UBR+ bandwidth requirement, that is, guaranteeing that user's required minimum bandwidth M while letting that user and L other ordinary users (each with an assumed weight of 1) share the remaining bandwidth (N-M), requires that user's weight to be set to M*(L+1)/(N-M). In practice, however, high-priority users and ordinary users may join and leave at random, so calculating and setting each user's weight is cumbersome. More importantly, since the number of user connections can change at any time, accurately setting the weight of each user becomes quite difficult, or even impossible.
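As a quick illustration, the weight formula above can be evaluated directly. The function name and the bandwidth figures below are hypothetical, not taken from the patent:

```python
def required_weight(n: float, m: float, l: int) -> float:
    """Scheduling weight the formula M*(L+1)/(N-M) assigns to a
    high-priority user with guaranteed bandwidth M, given egress
    bandwidth N and L ordinary users each of weight 1."""
    if m >= n:
        raise ValueError("guaranteed bandwidth must be below egress bandwidth")
    return m * (l + 1) / (n - m)

# Hypothetical figures: 100 Mbit/s egress, 20 Mbit/s guaranteed, 4 ordinary users.
print(required_weight(100.0, 20.0, 4))  # 1.25
```

Because the result depends on L, every join or departure of an ordinary user changes the required weight, which is exactly the maintenance burden the paragraph above describes.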
Another common scheduling method implements the bandwidth guarantee and sharing of the UBR+ service by combining traffic policing with PQ scheduling. Traffic policing (Committed Access Rate, CAR) sets an allowed rate: traffic within this rate can pass, and traffic exceeding it is discarded. Moreover, to ensure that each high-priority user obtains its required minimum bandwidth, the sum of the minimum bandwidths required by all high-priority users must not exceed the total egress bandwidth when traffic policing is used. PQ scheduling is used in this method to ensure that the high-priority user traffic that passes traffic policing always takes precedence over ordinary user traffic. The working principle is shown in Figure 1: packets 1 of a high-priority user pass through traffic policing 2, which admits traffic within the rate range into high-priority traffic queue 3; packets 4 of ordinary users enter normal traffic queue 5; PQ scheduling 6 then ensures that high-priority user traffic takes precedence over ordinary user traffic. In Figure 1, although the minimum bandwidth of high-priority user traffic is guaranteed, only ordinary users share the remaining bandwidth: the portion of high-priority user traffic that exceeds the minimum bandwidth is discarded outright rather than sharing the remaining bandwidth with ordinary user traffic. Therefore, this solution also fails to meet the bandwidth requirements of the UBR+ service.
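The policing step described above can be sketched as a token bucket, a common way CAR-style policing is implemented. The class name and the rate figures are illustrative assumptions, not from the patent:

```python
class TokenBucketPolicer:
    """Drops traffic above the committed rate, as the prior-art CAR step
    does; conforming packets would go on to the high-priority queue."""

    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8.0   # token refill rate in bytes per second
        self.burst = burst_bytes     # bucket depth, i.e. allowed burst size
        self.tokens = burst_bytes
        self.last = 0.0

    def conforms(self, size_bytes: int, now: float) -> bool:
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size_bytes:
            self.tokens -= size_bytes
            return True              # within the committed rate: pass
        return False                 # excess traffic: discarded by CAR

policer = TokenBucketPolicer(rate_bps=8000.0, burst_bytes=1500.0)
print(policer.conforms(1500, now=0.0))  # True: the burst allows the first packet
print(policer.conforms(1500, now=0.1))  # False: only 100 bytes refilled
```

In this prior-art scheme the `False` branch means the packet is lost, which is precisely why the excess never gets a chance at the remaining bandwidth.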
In summary, neither the round-robin and fair scheduling algorithms nor the traffic-policing-plus-absolute-priority scheduling algorithm can do more than guarantee the minimum bandwidth of high-priority users; it is difficult or impossible for them to let high-priority users share the remaining bandwidth with ordinary users, so they cannot meet the bandwidth requirements of the UBR+ service.
SUMMARY OF THE INVENTION

The technical problem to be solved by the present invention is to provide a simple scheduling method that guarantees the minimum bandwidth of one or some high-priority users while also allowing the remaining bandwidth to be shared by all users, including the high-priority users, thereby fully meeting the bandwidth requirements of the UBR+ service.
The technical solution adopted by the present invention to solve this problem is a packet scheduling method in a packet service, comprising the following steps:
S1: User classification, dividing all users into high-priority users and ordinary users;

S2: Determine the traffic metering rate of each high-priority user;

S3: Meter the traffic of the high-priority users' packets; treat packets that do not exceed the metering rate as high-priority packets, and treat packets that exceed the metering rate, together with the ordinary users' packets, as ordinary packets;

S4: Place the high-priority packets into the egress transmit queue and send them; for ordinary packets, check whether the egress is congested; if it is, discard the excess ordinary packets, otherwise place the ordinary packets into the egress transmit queue and send them.
The transmit queue may comprise a high-priority traffic queue and a normal traffic queue: the high-priority packets are sent through the high-priority traffic queue, and all ordinary packets are sent through the normal traffic queue.
In step S4, absolute-priority (PQ) scheduling is also used, so that the high-priority traffic queue always takes precedence over the normal traffic queue.
Alternatively, the transmit queue is implemented as a FIFO queue with a high threshold and a low threshold: the high-priority packets are admitted according to the high threshold and are discarded when they cannot enter the queue; the ordinary packets are admitted according to the low threshold and are discarded when they cannot enter the queue. The difference between the high threshold and the low threshold is not less than the sum of the burst sizes of all high-priority users.
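The single-FIFO variant with two admission thresholds might be sketched as follows. The class name and the small threshold values are illustrative (the patent's own example uses a depth of 1000 with a low threshold of 500):

```python
from collections import deque

class DualThresholdFifo:
    """One physical FIFO, two logical queues: high-priority packets are
    admitted up to the high threshold, ordinary packets only up to the low
    threshold, so the band between the two thresholds stays reserved for
    high-priority bursts."""

    def __init__(self, low: int, high: int):
        assert low < high
        self.low, self.high = low, high
        self.q = deque()

    def enqueue(self, pkt, high_priority: bool) -> bool:
        limit = self.high if high_priority else self.low
        if len(self.q) < limit:
            self.q.append(pkt)
            return True
        return False  # over the applicable threshold: discard

    def dequeue(self):
        return self.q.popleft() if self.q else None

fifo = DualThresholdFifo(low=2, high=4)
print([fifo.enqueue(i, high_priority=False) for i in range(3)])  # [True, True, False]
print(fifo.enqueue("burst", high_priority=True))                 # True
```

Sizing `high - low` to at least the total high-priority burst, as the paragraph above requires, guarantees that a simultaneous burst from every high-priority user still fits in the reserved band.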
Alternatively, the high-priority traffic queue and the normal traffic queue each have their own threshold: high-priority packets that exceed the high-priority queue threshold are re-marked and processed as low-priority packets, while low-priority packets that exceed the low-priority queue threshold are discarded.
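The two-queue variant with per-queue thresholds and mark-down can be sketched similarly; the function name and the queue limits are illustrative assumptions:

```python
from collections import deque

def enqueue_with_markdown(high_q: deque, low_q: deque,
                          high_limit: int, low_limit: int,
                          pkt, high_priority: bool) -> str:
    """A high-priority packet over the high-queue threshold is not dropped:
    it is re-marked low priority and competes like any ordinary packet.
    A low-priority packet over the low-queue threshold is discarded."""
    if high_priority and len(high_q) < high_limit:
        high_q.append(pkt)
        return "high"
    # Either an ordinary packet, or a demoted high-priority packet.
    if len(low_q) < low_limit:
        low_q.append(pkt)
        return "low"
    return "dropped"

hq, lq = deque(), deque()
results = [enqueue_with_markdown(hq, lq, 1, 2, i, high_priority=True)
           for i in range(3)]
print(results)  # ['high', 'low', 'low']
```

The low queue thus carries both kinds of low-priority traffic described in the text: packets that were ordinary from the start, and demoted high-priority packets.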
By metering the traffic of high-priority users' packets, the packet scheduling method in a packet service of the present invention classifies packets that do not exceed the metering rate as high-priority packets and packets that exceed it as ordinary packets, and classifies the ordinary users' packets as ordinary packets; high-priority packets are enqueued and sent, while the enqueuing of ordinary packets is controlled when the egress is congested. This guarantees the minimum bandwidth of high-priority user traffic while all users share the remaining bandwidth, fully meeting the bandwidth requirements of the UBR+ service.
BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1 is a schematic of the prior-art scheme that schedules using traffic policing and PQ.

Figure 2 is a flowchart of the packet scheduling method in a packet service of the present invention.

Figure 3 is a schematic of the working principle of the packet scheduling method of the present invention.

Figure 4 is a schematic of the first embodiment of the packet scheduling method of the present invention. Figure 5 is a schematic of the second embodiment. Figure 6 is a schematic of the third embodiment.

DETAILED DESCRIPTION
The packet scheduling method in a packet service of the present invention is described in further detail below with reference to the accompanying drawings.
As shown in Figures 1 and 3, the packet scheduling method of the present invention first performs user classification in step S1, dividing all users into two categories, high-priority users and ordinary users. Next, step S2 determines the minimum bandwidth of each high-priority user, which sets the traffic metering rate. Then, in step S3, the packets sent by the user side are classified: the packets 10 of high-priority users are metered against the rate set in step S2 (a leaky bucket or token bucket algorithm may be used to check whether a packet 10 of a high-priority user constitutes excess traffic), packets that do not exceed the metering rate are classified as high-priority packets 101, and packets that exceed it are classified as normal packets 102; the packets 20 of ordinary users are classified as normal packets 202. Finally, in step S4 the packets are processed at the egress: high-priority packets 101 are placed in the egress send queue and transmitted; for normal packets 102, 202 the congestion state of the egress is checked, and if the egress is congested the excess normal packets 102, 202 are discarded, while if the egress is clear the normal packets 102, 202 are placed in the egress send queue and transmitted. This guarantees the minimum bandwidth of high-priority user traffic while allowing all users to fully share the remaining bandwidth.
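The metering and classification of step S3 can be sketched with a token bucket. The class and parameter names below are illustrative, not taken from the patent; the bucket's refill rate stands for the metering rate set in step S2.

```python
class TokenBucket:
    """Token bucket sized for one high-priority user's metering rate.

    A packet that finds enough tokens is within the metered rate and
    is classified high priority; the rest are classified as normal
    (excess) traffic, as in step S3.
    """

    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s   # refill rate = traffic metering rate
        self.burst = burst_bytes       # bucket depth = tolerated burst
        self.tokens = burst_bytes      # bucket starts full
        self.last = 0.0                # time of the previous refill

    def classify(self, size_bytes, now):
        """Return 'high' if the packet fits the metered rate, else 'normal'."""
        # Refill for the time elapsed since the last packet, capped at depth.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size_bytes:
            self.tokens -= size_bytes
            return 'high'      # within the minimum-bandwidth guarantee
        return 'normal'        # excess traffic shares leftover bandwidth
```

With a 1000 B/s rate and 500 B depth, a 500 B packet at t=0 is classified 'high', a second packet at the same instant 'normal', and after one second of refill the bucket again admits a 500 B packet as 'high'.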
Referring to Figure 4, in the first embodiment of the packet scheduling method of the present invention, all users are first divided into high-priority users and ordinary users; the minimum bandwidth of each high-priority user is then determined, which sets the traffic metering rate; traffic metering and packet classification follow. In this embodiment the metering only measures the rate and does not discard: the packets 10 of a high-priority user are classified, packets within the required rate are classified as high-priority packets 101 and enter the high-priority traffic queue, while packets exceeding the required rate are classified as normal packets 102 and, subject to the same enqueue control as the normal packets 20 of ordinary users, enter the normal traffic queue. A PQ scheduling module performs the scheduling at the egress, ensuring that all high-priority packets always take precedence over normal packets.
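The PQ scheduling at the egress of this embodiment amounts to strict-priority selection between the two queues; a minimal sketch (function name is illustrative, not from the patent):

```python
from collections import deque

def pq_dequeue(high_q, normal_q):
    """Strict-priority (PQ) selection: the normal traffic queue is
    served only when the high-priority traffic queue is empty."""
    if high_q:
        return high_q.popleft()
    if normal_q:
        return normal_q.popleft()
    return None  # nothing to send
```

Because the high-priority queue holds only in-rate traffic, this strictness cannot starve ordinary users beyond the guaranteed minimum; the excess of high-priority users competes in the normal queue on equal terms.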
Referring to Figure 5, in the second embodiment of the packet scheduling method of the present invention, all users are first divided into high-priority users and ordinary users; the minimum bandwidth of each high-priority user is then determined, which sets the traffic metering rate. Traffic metering and packet classification follow: packets 10 of high-priority users that are within the rate are classified as high-priority packets 101, while packets 10 exceeding the rate, together with the packets 20 of ordinary users, are classified as normal packets 102, 202. Packet handling at the egress is then implemented with a FIFO (first in, first out) queue. The FIFO queue is provided with a high threshold and a low threshold: high-priority packets 101 are admitted according to the high threshold and discarded when they cannot be enqueued; normal packets 102, 202 are admitted according to the low threshold and discarded when they cannot be enqueued.
The FIFO queue with high and low thresholds described here is in fact a second implementation approach. The embodiment of Figure 4 uses two queues, one high priority and one low priority, whereas the embodiment of Figure 5 uses a single FIFO queue in which two thresholds emulate two logical queues (physically there is only one queue). The scheduling of high- and low-priority packets is achieved through these two logical queues. For example, suppose a FIFO queue can hold at most 1000 packets, the low-priority threshold is set to 500, and the high-priority threshold to 1000. While the queue holds fewer than 500 packets, both high- and low-priority packets may be enqueued; while it holds between 500 and 1000 packets, only high-priority packets may be enqueued and low-priority packets are discarded; and once the queue holds 1000 packets (full), high-priority packets can no longer be enqueued either and are discarded. In this way a single physical queue schedules two priorities of packets.
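The dual-threshold admission just described can be sketched as follows; the class name and defaults (taken from the 500/1000 example above) are illustrative, not from the patent:

```python
from collections import deque

class DualThresholdFifo:
    """One physical FIFO emulating two logical queues via thresholds.

    Below the low threshold both priorities are admitted; between the
    low and high thresholds only high-priority packets are admitted;
    at the high threshold (queue full) everything is dropped.
    """

    def __init__(self, low_threshold=500, high_threshold=1000):
        self.low = low_threshold
        self.high = high_threshold
        self.q = deque()

    def enqueue(self, pkt, priority):
        # Each priority is checked against its own threshold.
        limit = self.high if priority == 'high' else self.low
        if len(self.q) < limit:
            self.q.append(pkt)
            return True     # admitted
        return False        # threshold exceeded: packet is dropped
```

With small thresholds for illustration (low 2, high 4), two normal packets fill the low region, a third normal packet is refused, two more high-priority packets still fit, and a fifth packet of any priority is refused.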
In Figure 5, because the high threshold is greater than the low threshold, high-priority packets are enqueued in preference to normal packets, guaranteeing the minimum bandwidth of high-priority users; and because packets exceeding the rate join the packets of ordinary users as normal packets, high-priority users and ordinary users can share the remaining bandwidth. When congestion occurs at the egress, to guarantee that the bandwidth requirements of all high-priority users can be met, the difference between the high and low thresholds is made not less than the sum of the burst sizes of all high-priority users, so that the burst data of all high-priority users can enter the queue. The reason for this sizing rule is to ensure that high-priority packets bursting from multiple users simultaneously are never dropped for lack of queue capacity. For example, suppose there are 100 high-priority users, each with a maximum rate of 10 packets per second, and the FIFO egress can process 800 packets per second. In the limiting case, low-priority packets already occupy every slot below the low threshold of the FIFO queue, so as long as there are 200 (1000 - 800) slots between the high and low thresholds, packets will not be lost even when all users send simultaneously.
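The arithmetic of this example can be written out directly. Note that the worked example nets the egress drain against the aggregate burst, which is a weaker requirement than the stated rule of "not less than the sum of the burst sizes"; the function below reproduces the example's calculation and is an illustration, not part of the patent:

```python
def threshold_gap_needed(num_users, burst_pkts_per_user, egress_rate_pkts):
    """Slots needed between the high and low thresholds so that a
    simultaneous one-second burst from every high-priority user is
    absorbed while the egress drains at its service rate."""
    total_burst = num_users * burst_pkts_per_user  # worst-case arrival
    return max(0, total_burst - egress_rate_pkts)  # headroom after draining
```

For the figures above, 100 users bursting 10 packets each against an egress rate of 800 packets per second leaves a required gap of 200 slots.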
Referring to Figure 6, in the third embodiment of the packet scheduling method of the present invention, all users are first divided into high-priority users and ordinary users; the minimum bandwidth of each high-priority user is then determined, which sets the traffic metering rate. Traffic metering and packet classification follow: packets 10 of high-priority users that are within the rate are classified as high-priority packets 101, and packets 10 exceeding the rate as normal packets 102. The egress is provided with a high-priority traffic queue and a normal traffic queue: high-priority packets 101 are placed in the high-priority traffic queue, and ordinary users' packets in the normal traffic queue. The two queues each have their own threshold. High-priority packets exceeding the high-priority queue threshold are not discarded; instead they are re-marked as low-priority packets and from then on handled exactly like low-priority packets, with no further distinction. Low-priority packets exceeding the low-priority queue threshold are discarded. The low-priority packets at this point may thus comprise two parts: packets that were low priority to begin with, and packets demoted from high priority because they could not enter the high-priority queue. A PQ scheduling module performs the scheduling at the egress, ensuring that all high-priority packets always take precedence over normal packets. This guarantees the minimum bandwidth of high-priority user traffic while all users share the remaining bandwidth, fully satisfying the bandwidth requirements of UBR+ services.
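The mark-then-enqueue behaviour of this third embodiment can be sketched as follows; function and parameter names are illustrative, not from the patent:

```python
from collections import deque

def admit(pkt, priority, high_q, normal_q, high_limit, low_limit):
    """Third-embodiment admission: a high-priority packet that finds
    its queue at threshold is not dropped but re-marked low priority;
    low-priority packets beyond the normal queue's threshold are
    discarded."""
    if priority == 'high' and len(high_q) < high_limit:
        high_q.append(pkt)
        return 'high'
    # Demoted high-priority packets and native normal packets are
    # treated identically from here on.
    if len(normal_q) < low_limit:
        normal_q.append(pkt)
        return 'normal'
    return 'dropped'
```

With both thresholds set to 1 for illustration, the first high-priority packet is admitted as 'high', a second overflows into the normal queue as 'normal', and a subsequent normal packet finds that queue full and is dropped.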

Claims

1. A packet scheduling method in a packet service, comprising the following steps:

S1: user classification, dividing all users into high-priority users and ordinary users;

S2: determining the traffic metering rate of each high-priority user;

characterized by further comprising the following steps:

S3: metering the traffic of the packets of high-priority users, taking packets that do not exceed the traffic metering rate as high-priority packets, and taking packets that exceed the traffic metering rate, together with the packets of ordinary users, as normal packets;

S4: placing the high-priority packets in the egress send queue for transmission; and for the normal packets, checking whether the egress is congested, discarding the excess normal packets if it is, and otherwise placing the normal packets in the egress send queue for transmission.
2. The packet scheduling method in a packet service according to claim 1, characterized in that the send queue comprises a high-priority traffic queue and a normal traffic queue, the high-priority packets being sent through the high-priority traffic queue and the normal packets through the normal traffic queue.
3. The packet scheduling method in a packet service according to claim 1, characterized in that the priority of the high-priority traffic queue takes precedence over the priority of the normal traffic queue.
4. The packet scheduling method in a packet service according to claim 1, characterized in that the send queue is a FIFO queue provided with a high threshold and a low threshold; the high-priority packets are admitted according to the high threshold and discarded when they cannot be enqueued; and the normal packets are admitted according to the low threshold and discarded when they cannot be enqueued.
5. The packet scheduling method in a packet service according to claim 4, characterized in that the difference between the high threshold and the low threshold is not less than the sum of the burst sizes of all the high-priority users.
6. The packet scheduling method in a packet service according to claim 2 or 3, characterized in that the high-priority traffic queue and the normal traffic queue each have their own threshold, high-priority packets exceeding the high-priority queue threshold being marked and handled as low-priority packets, and low-priority packets exceeding the low-priority queue threshold being discarded.
PCT/CN2005/002312 2004-12-29 2005-12-26 A packet scheduling method in the packet service WO2006069528A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CNB2004100919192A CN100370787C (en) 2004-12-29 2004-12-29 Method for scheduling data packets in packet service
CN200410091919.2 2004-12-29

Publications (1)

Publication Number Publication Date
WO2006069528A1 true WO2006069528A1 (en) 2006-07-06

Family

ID=36614495

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2005/002312 WO2006069528A1 (en) 2004-12-29 2005-12-26 A packet scheduling method in the packet service

Country Status (2)

Country Link
CN (1) CN100370787C (en)
WO (1) WO2006069528A1 (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101202701B (en) * 2006-12-12 2012-09-05 中兴通讯股份有限公司 Method for distributing band width of assemblage useable bit rate transaction in grouping network
CN101035008B (en) * 2007-04-17 2010-04-14 华为技术有限公司 Service scheduling method and its network convergence device
US7911956B2 (en) * 2007-07-27 2011-03-22 Silicon Image, Inc. Packet level prioritization in interconnection networks
CN101159903B (en) * 2007-10-23 2011-01-05 华为技术有限公司 Method and device of preventing and processing transmission carrying congestion
CN101296185B (en) * 2008-06-05 2011-12-14 杭州华三通信技术有限公司 Flow control method and device of equalization group
CN101360052B (en) * 2008-09-28 2011-02-09 成都市华为赛门铁克科技有限公司 Method and device for flow scheduling
CN101616096B (en) * 2009-07-31 2013-01-16 中兴通讯股份有限公司 Method and device for scheduling queue
CN101692648B (en) * 2009-08-14 2012-05-23 中兴通讯股份有限公司 Method and system for queue scheduling
CN101827033B (en) * 2010-04-30 2013-06-19 北京搜狗科技发展有限公司 Method and device for controlling network traffic and local area network system
CN108369531B (en) * 2016-07-12 2023-06-02 华为云计算技术有限公司 Method, device and system for controlling IO bandwidth and processing IO access request

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002045362A2 (en) * 2000-11-30 2002-06-06 Qualcomm Incorporated Method and apparatus for scheduling packet data transmissions in a wireless communication system
WO2002085054A2 (en) * 2001-04-12 2002-10-24 Qualcomm Incorporated Method and apparatus for scheduling packet data transmissions in a wireless communication system
US6801501B1 (en) * 1999-09-14 2004-10-05 Nokia Corporation Method and apparatus for performing measurement-based admission control using peak rate envelopes
EP1478140A1 (en) * 2003-04-24 2004-11-17 France Telecom Method and Apparatus for scheduling packets on a network link using priorities based on the incoming packet rates of the flow

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6647424B1 (en) * 1998-05-20 2003-11-11 Nortel Networks Limited Method and apparatus for discarding data packets
JP4484317B2 (en) * 2000-05-17 2010-06-16 株式会社日立製作所 Shaping device
KR20040000336A (en) * 2002-06-24 2004-01-03 마츠시타 덴끼 산교 가부시키가이샤 Packet transmitting apparatus, packet transmitting method, traffic conditioner, priority controlling mechanism, and packet shaper
FI112421B (en) * 2002-10-29 2003-11-28 Tellabs Oy Method and device for time allocation of transmission connection capacity between packet switched data communication flows

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6801501B1 (en) * 1999-09-14 2004-10-05 Nokia Corporation Method and apparatus for performing measurement-based admission control using peak rate envelopes
WO2002045362A2 (en) * 2000-11-30 2002-06-06 Qualcomm Incorporated Method and apparatus for scheduling packet data transmissions in a wireless communication system
WO2002085054A2 (en) * 2001-04-12 2002-10-24 Qualcomm Incorporated Method and apparatus for scheduling packet data transmissions in a wireless communication system
EP1478140A1 (en) * 2003-04-24 2004-11-17 France Telecom Method and Apparatus for scheduling packets on a network link using priorities based on the incoming packet rates of the flow

Also Published As

Publication number Publication date
CN100370787C (en) 2008-02-20
CN1798090A (en) 2006-07-05

Similar Documents

Publication Publication Date Title
WO2006069528A1 (en) A packet scheduling method in the packet service
US8169906B2 (en) Controlling ATM traffic using bandwidth allocation technology
US6687254B1 (en) Flexible threshold based buffering system for use in digital communication devices
US8130648B2 (en) Hierarchical queue shaping
US6256315B1 (en) Network to network priority frame dequeuing
JP4287157B2 (en) Data traffic transfer management method and network switch
Parris et al. Lightweight active router-queue management for multimedia networking
EP1086555A1 (en) Admission control method and switching node for integrated services packet-switched networks
US8248932B2 (en) Method and apparatus for fairly sharing excess bandwidth and packet dropping amongst subscribers of a data network
US6967923B1 (en) Bandwidth allocation for ATM available bit rate service
US20080304503A1 (en) Traffic manager and method for performing active queue management of discard-eligible traffic
US9197570B2 (en) Congestion control in packet switches
JP2004266389A (en) Method and circuit for controlling packet transfer
JP2001519973A (en) Prioritized access to shared buffers
US7522624B2 (en) Scalable and QoS aware flow control
US20060251091A1 (en) Communication control unit and communication control method
WO2022135202A1 (en) Method, apparatus and system for scheduling service flow
Yang et al. Scheduling with dynamic bandwidth allocation for DiffServ classes
Cisco Policing and Shaping Overview
Cisco Configuring IP QoS
Cisco Configuring Quality of Service
Cisco Configuring IP QOS
Cisco Configuring IP QOS
Cisco Traffic Shaping
JP3583711B2 (en) Bandwidth control device and method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 05822750

Country of ref document: EP

Kind code of ref document: A1