WO2017054566A1 - Method and apparatus for preventing CPU packet congestion - Google Patents

一种防止cpu报文拥塞的方法及装置 Download PDF

Info

Publication number
WO2017054566A1
Authority
WO
WIPO (PCT)
Prior art keywords
packet
cpu
message
priority
scheduling
Prior art date
Application number
PCT/CN2016/091879
Other languages
English (en)
French (fr)
Inventor
黎柏成
Original Assignee
中兴通讯股份有限公司 (ZTE Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 (ZTE Corporation)
Publication of WO2017054566A1 publication Critical patent/WO2017054566A1/zh

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/50 Queue scheduling
    • H04L 47/62 Queue scheduling characterised by scheduling criteria
    • H04L 47/6295 Queue scheduling characterised by scheduling criteria using multiple queues, one for each individual QoS, connection, flow or priority
    • H04L 47/625 Queue scheduling characterised by scheduling criteria for service slots or service orders
    • H04L 47/6275 Queue scheduling characterised by scheduling criteria for service slots or service orders based on priority

Definitions

  • This document relates to, but is not limited to, the field of communication technologies, and relates to a method and apparatus for preventing CPU (Central Processing Unit) message congestion.
  • CPU Central Processing Unit
  • For communication system equipment deployed in a network, the CPU on the device processes protocol packets and other control packets, for example broadcast packets, SNMP (Simple Network Management Protocol) packets, and OAM (Operation Administration and Maintenance) packets.
  • The usual device connection model is that the CPU attaches a dedicated chip, such as a switch chip, a network processor chip, or an FPGA (Field-Programmable Gate Array), over a system bus device such as PCI (Peripheral Component Interconnect); the dedicated chip's communication ports, mostly Ethernet ports, connect to the CPU and deliver packets to it.
  • PCI Peripheral Component Interconnect
  • FPGA Field-Programmable Gate Array
  • A common example from practice: assume packets of three priority types, with priorities 0 to 2, are delivered to the CPU at the same time. When the CPU fetches packets for processing, it always takes them from one common buffer. This buffer does not distinguish packet types; only when the CPU fetches a packet from it can the CPU tell what type of packet it is. The buffer is like a black box: when any one type is delivered in large volume and the other packets arrive at a lower rate (for example, an order of magnitude lower), the CPU's hit rate on the other packets becomes very low, some protocols wait until they time out, and they stop working properly.
  • A typical countermeasure is to rate-limit the queue: set a reporting rate for packets in a queue, say 1000 per second, and discard anything beyond that rate with a hardware or software mechanism. This, however, pins the delivery rate: when the CPU is idle and could process more than that rate, the rate cannot rise, so CPU resources are invisibly wasted.
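The fixed per-queue limit described above can be pictured as a one-second counting window; the following is a minimal sketch, not the patent's mechanism (the class name and the use of a per-second budget are illustrative assumptions):

```python
import time

class FixedRateLimiter:
    """Hypothetical sketch of a fixed per-queue limit: at most
    max_per_second packets are delivered in each one-second window,
    the rest are dropped even if the CPU happens to be idle."""

    def __init__(self, max_per_second=1000):
        self.max_per_second = max_per_second
        self.window_start = time.monotonic()
        self.count = 0

    def allow(self):
        now = time.monotonic()
        if now - self.window_start >= 1.0:  # a new one-second window begins
            self.window_start = now
            self.count = 0
        if self.count < self.max_per_second:
            self.count += 1
            return True
        return False  # budget exhausted: the packet is discarded
```

The weakness the embodiments address is visible here: the budget never rises, so any idle CPU capacity beyond `max_per_second` goes unused.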
  • Embodiments of the invention provide a method and apparatus for preventing CPU packet congestion, solving the problem in the related art that, in a state of CPU packet congestion, not all types of packets delivered to the CPU can be processed by the CPU.
  • the embodiment of the invention provides a method for preventing CPU packet congestion, including:
  • the packet scheduler establishes a queue of packets of different priorities according to the priority of each type of received packet.
  • the packet scheduler determines the scheduling time weight for scheduling the message queue according to the priority and bandwidth requirement of each packet queue.
  • the packet scheduler determines, according to the priority of each packet type, a CPU scheduling priority with which the packet queue is scheduled by the CPU;
  • the packet scheduler cyclically schedules the packet queues of different priorities to the CPU according to the scheduling time weights, the CPU scheduling priorities, and the buffer state of the FIFO of the packet queue about to be scheduled.
  • Cyclically scheduling the packet queue of each priority to the CPU includes: the packet scheduler performing flow control on the scheduled packet queues while it is scheduling the queues of different priorities to the CPU.
  • the performing flow control on the scheduled packet queue includes:
  • the first threshold is smaller than the second threshold, and the second threshold is smaller than the third threshold.
  • The packet scheduler determining, according to the priority and bandwidth requirement of each packet queue, the scheduling time weight used for scheduling the queue includes: allocating each queue a time slice for scheduling the queue according to its priority and bandwidth requirement, and determining the queue's scheduling time weight from the ratio of the time slices allocated among the different queues.
  • The packet scheduler cyclically scheduling the queues of different priorities to the CPU according to the scheduling time weights, the CPU scheduling priorities, and the buffer state of the FIFO of the queue about to be scheduled includes: checking the FIFO buffer state of the queue about to be scheduled to judge whether it holds packets needing scheduling; if not, deferring the scheduling opportunity to the packet queue of the next CPU scheduling priority; if so, scheduling the packet queue to the CPU according to the scheduling time weight.
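The scheduling loop described by these steps can be sketched as follows; the function and queue names are hypothetical, and the sketch assumes one packet is handed to the CPU per time slice:

```python
from collections import deque

def round_robin_schedule(queues, order, weights):
    """Weighted time-slice rotation over priority queues (sketch).

    queues : name -> deque acting as that priority's FIFO
    order  : queue names, highest CPU scheduling priority first
    weights: name -> number of consecutive time slices (the weight)

    Each round walks the queues in priority order; a non-empty queue
    may use up to its weight in slices, while an empty FIFO forfeits
    its scheduling opportunity to the next CPU scheduling priority.
    """
    while any(queues[name] for name in order):
        for name in order:
            for _ in range(weights[name]):
                if not queues[name]:  # empty FIFO: defer the opportunity
                    break
                yield name, queues[name].popleft()

# Queues A/C/B with weights 2:3:1 and priority A > C > B, C empty:
run = list(round_robin_schedule(
    {"A": deque([1, 2, 3]), "C": deque(), "B": deque([9])},
    order=["A", "C", "B"],
    weights={"A": 2, "C": 3, "B": 1},
))
# Queue A uses its two slices, empty queue C forfeits its turn to B,
# and A drains on the next round: [('A', 1), ('A', 2), ('B', 9), ('A', 3)]
```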
  • the present invention also provides an apparatus for preventing CPU message congestion, including:
  • a queue establishment module, configured so that the packet scheduler establishes packet queues of different priorities according to the priority of each type of received packet;
  • a weight-and-priority determination module, configured so that the packet scheduler determines the scheduling time weight for scheduling each packet queue according to the queue's priority and bandwidth requirement, and determines, according to the priority of each packet type, the CPU scheduling priority with which the queue is scheduled by the CPU;
  • a queue scheduling module, configured so that the packet scheduler cyclically schedules the packet queues of different priorities to the CPU according to the scheduling time weights, the CPU scheduling priorities, and the buffer state of the FIFO of the queue about to be scheduled.
  • The queue scheduling module includes a flow control unit, configured to perform flow control on the scheduled packet queues while the packet scheduler is scheduling the queues of different priorities to the CPU.
  • The flow control subunit is configured to shape the packets about to enter the FIFO when the number of buffered packets in the FIFO is detected to reach the second cache threshold; to discard all packets entering the FIFO packet queue when the number is detected to reach the third cache threshold; and to resume accepting all packets entering the FIFO packet queue when the number is detected to fall below the first cache threshold.
  • the first threshold is smaller than the second threshold, and the second threshold is smaller than the third threshold.
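The three thresholds can be pictured as a small hysteresis state machine per FIFO; the following is a minimal sketch under stated assumptions (class and method names are hypothetical, and "one shaping tick" stands in for whatever shaping interval an implementation uses):

```python
class FifoFlowControl:
    """Per-queue flow control with thresholds x < y < z (sketch).

    Reaching the second threshold y turns shaping on (at most
    shaped_rate admissions per tick); reaching the third threshold z
    drops everything; falling below the first threshold x turns
    shaping off again (hysteresis)."""

    def __init__(self, x, y, z, shaped_rate):
        assert x < y < z
        self.x, self.y, self.z = x, y, z
        self.shaped_rate = shaped_rate
        self.fifo = []
        self.shaping = False
        self.admitted_this_tick = 0

    def tick(self):
        """Start of a new shaping interval."""
        self.admitted_this_tick = 0

    def enqueue(self, pkt):
        depth = len(self.fifo)
        if depth >= self.z:        # third threshold: discard all
            return False
        if depth >= self.y:        # second threshold: shaping on
            self.shaping = True
        elif depth < self.x:       # first threshold: back to normal
            self.shaping = False
        if self.shaping and self.admitted_this_tick >= self.shaped_rate:
            return False           # over the shaped rate S: drop
        self.fifo.append(pkt)
        self.admitted_this_tick += 1
        return True
```

The hysteresis (shaping stays on until the depth falls below x, not y) is what keeps the queue from flapping between states around a single threshold.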
  • the embodiment of the invention further provides a computer readable storage medium, wherein the computer readable storage medium stores computer executable instructions, and when the computer executable instructions are executed, a method for preventing CPU message congestion is implemented.
  • Embodiments of the present invention combine time-slice round-robin scheduling with weighted priority queues, and use QoS (Quality of Service) scheduling to discard packets, thereby preventing CPU load spikes while making full use of CPU resources, and preventing certain types of packets from being unable to reach the CPU for processing.
  • Qos Quality of Service
  • FIG. 1 is a flowchart of a method for preventing congestion of a CPU message according to an embodiment of the present invention
  • FIG. 2 is a schematic diagram of an apparatus for preventing congestion of a CPU message according to an embodiment of the present invention
  • FIG. 3 is a schematic diagram of a scheduler of three queues according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of scheduling behavior when a C bucket is empty according to an embodiment of the present invention.
  • FIG. 5 is a first schematic diagram of performing flow control on a scheduled message queue according to an embodiment of the present invention
  • FIG. 6 is a second schematic diagram of performing flow control on a scheduled message queue according to an embodiment of the present invention.
  • FIG. 7 is a third schematic diagram of performing flow control on a scheduled message queue according to an embodiment of the present invention.
  • FIG. 1 is a flowchart of a method for preventing congestion of a CPU packet according to an embodiment of the present invention. As shown in FIG. 1 , the method includes the following steps:
  • Step S101 The packet scheduler establishes a queue of packets of different priorities according to the priority of each type of received packet.
  • Step S102 The packet scheduler determines a scheduling time weight for scheduling the message queue according to the priority and bandwidth requirement of each packet queue.
  • Step S103 The packet scheduler determines, according to the priority of each packet type, the CPU scheduling priority with which the packet queue is scheduled by the CPU;
  • Step S104 The packet scheduler cyclically schedules the packet queues of different priorities to the CPU according to the scheduling time weights, the CPU scheduling priorities, and the buffer state of the FIFO (First In First Out buffer) of the packet queue about to be scheduled.
  • While the packet scheduler is scheduling the packet queues of different priorities to the CPU, it performs flow control on the scheduled queues.
  • Performing flow control on the scheduled packet queues includes: detecting the number of buffered packets in the FIFO of the packet queue of each priority; when the number is detected to reach the second cache threshold, shaping the packets about to enter the FIFO, thereby throttling the packets entering the FIFO packet queue; when the number is detected to reach the third cache threshold, discarding all packets entering the FIFO packet queue; and when the number is detected to fall below the first cache threshold, resuming acceptance of all packets entering the FIFO packet queue.
  • the first threshold is smaller than the second threshold, and the second threshold is smaller than the third threshold.
  • The packet scheduler determining, according to the priority and bandwidth requirement of each packet queue, the scheduling time weight used for scheduling the queue includes: allocating each packet queue a time slice for scheduling the queue according to its priority and bandwidth requirement, and determining the queue's scheduling time weight from the ratio of the time slices allocated among the different queues.
  • The packet scheduler cyclically scheduling the packet queues of different priorities to the CPU according to the scheduling time weights, the CPU scheduling priorities, and the FIFO buffer state of the queue about to be scheduled includes: detecting the FIFO buffer state of the queue about to be scheduled and judging whether there are packets needing scheduling; if not, deferring this scheduling opportunity to the packet queue of the next CPU scheduling priority; if so, scheduling the packet queue to the CPU according to the scheduling time weight.
  • FIG. 2 is a schematic diagram of an apparatus for preventing CPU packet congestion according to an embodiment of the present invention. As shown in FIG. 2, the apparatus includes: a queue establishment module 201, a weight-and-priority determination module 202, and a queue scheduling module 203.
  • The queue establishment module 201 is configured so that the packet scheduler establishes packet queues of different priorities according to the priority of each type of received packet.
  • The weight-and-priority determination module 202 is configured so that the packet scheduler determines the scheduling time weight for scheduling each packet queue according to the queue's priority and bandwidth requirement, and determines, according to the priority of each packet type, the CPU scheduling priority with which the queue is scheduled by the CPU.
  • The queue scheduling module 203 is configured so that the packet scheduler cyclically schedules the packet queues of different priorities to the CPU according to the scheduling time weights, the CPU scheduling priorities, and the buffer state of the FIFO of the queue about to be scheduled.
  • The queue scheduling module 203 includes a flow control unit, configured to perform flow control on the scheduled packet queues while the packet scheduler is scheduling the queues of different priorities to the CPU.
  • The flow control unit includes: a detection subunit configured to detect the number of buffered packets in the FIFO of the packet queue of each priority; and a flow control subunit configured to shape the packets about to enter the FIFO when the number of buffered packets is detected to reach the second cache threshold, to discard all packets entering the FIFO packet queue when the number is detected to reach the third cache threshold, and to resume accepting all packets entering the FIFO packet queue when the number is detected to fall below the first cache threshold.
  • the first threshold is smaller than the second threshold, and the second threshold is smaller than the third threshold.
  • In an example, the packet scheduler has three buckets, a, b, and c. Each bucket handles the packet queue of one priority, that is, there are three priority classes of queues A, B, and C. The CPU priority for handling queue A is 0, occupying a time slice of 2T; queue B has CPU priority 2, occupying 1T; queue C has CPU priority 1, occupying 3T. The priority order is therefore A > C > B and, in the ideal case where the length of packets sent to the CPU is not a concern, the scheduling time weights are 2:1:3 (the ratio of time slices allocated among the different queues is the queues' scheduling time weight).
  • When the packet scheduler obtains a time slice, it first checks whether the FIFO of queue A is empty. If it is not empty, the slice is given to queue A first; if queue A has not yet used its full 2T, the next slice still goes to queue A.
  • After queue A has used up its time slices and the next slice is obtained, if queue C is empty, a packet is taken from queue B instead. That is, after queue A finishes its slices, the next slice should go to queue C; but if queue C's FIFO is empty with no packets pending, that slice is given to queue B. This is the time-slice rotation, which guarantees every packet type a chance to be processed by the CPU.
  • When the number of packets in a bucket reaches the second threshold y, the rate at which packets enter that bucket is limited to S. In other words, each bucket is flow-controlled independently: if the packet count in bucket c reaches y, the CPU is considered busy and the ingress rate into bucket c is limited to S; if bucket a's packets are congested, only bucket a's flow needs controlling, while buckets b and c need no control. When the count in a bucket reaches the third threshold z, all packets entering that bucket are discarded; when the count falls below the first threshold x, all packets entering the queue are accepted again.
  • This example can be implemented in the form of a token bucket plus a leaky bucket. The purpose of per-queue flow control is to keep the rate at which each packet type is delivered to the CPU as smooth as possible; setting lower thresholds for high-priority packets ensures that low-priority packets are always eventually processed.
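The token-bucket half of that implementation choice can be sketched as below; the class name and the one-token cost per packet are illustrative assumptions, and timestamps are passed in explicitly to keep the sketch deterministic:

```python
class TokenBucket:
    """Admit packets at a sustained rate S with a bounded burst (sketch)."""

    def __init__(self, rate_s, burst):
        self.rate_s = rate_s   # tokens added per second, i.e. the rate S
        self.burst = burst     # bucket capacity, bounds the burst size
        self.tokens = float(burst)
        self.last = 0.0

    def allow(self, now):
        # refill in proportion to elapsed time, capped at the burst size
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate_s)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0  # spend one token per admitted packet
            return True
        return False            # no token: the packet is shaped out
```

Pairing this with a leaky bucket would additionally smooth the departure rate; a token bucket alone only caps the average rate and the burst size.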
  • The packet scheduler is responsible for establishing the packet queues, receiving packets from the device, and sending packets to the CPU.
  • The packet scheduler allocates a time slice for each packet type by periodically requesting time slices from the CPU and distributing them in rotation to the packet queues.
  • Provided a time slice has been obtained, slices are allocated in priority order from high to low, queue A first; after queue A has used its scheduling time 2T, the next slice goes to queue C. If a slice arrives and queue C has no pending packets, the slice is used for the packets in queue B.
  • In this example, differences in packet length may be disregarded. Suppose the packet processing rate is N packets per second, the length of a time slice is T, M time slices are granted per second, the weight of this packet queue is w, and the total weight is Q. The rate at which this packet type is reported to the CPU per second is then N*T*M*(w/Q).
  • The controlled rate S at which packets enter the packet queue should be less than this rate N*T*M*(w/Q).
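As a worked instance of that bound, with illustrative numbers (the values of N, T, and M below are assumptions, not figures from the patent) and the example weights 2:1:3 so that Q = 6:

```python
def cpu_delivery_rate(n, t, m, w, q):
    """Packets per second one queue delivers to the CPU: N*T*M*(w/Q)."""
    return n * t * m * (w / q)

# Assumed: the CPU processes N=10000 pkt/s, a slice lasts T=1 ms,
# M=500 slices are granted per second; queue A has weight w=2 of Q=6.
bound_a = cpu_delivery_rate(n=10000, t=0.001, m=500, w=2, q=6)
# The shaped ingress rate S for queue A must stay below bound_a.
```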
  • In practice, far more than three packet types may be reported to the CPU. Buffer queues can be created in the packet scheduler by filtering on characteristic values of the packets. By configuring the priorities of the packet queues, the scheduling time weights, and the thresholds, and computing the rate S from the resource state of the current system, the scheduling method for packets delivered to the CPU provided by the embodiments of the present invention can be implemented.
  • A round-robin scheduling algorithm is applied to every packet queue; through the queues' feedback on CPU resource usage, queue priorities are honored while congestion at the CPU is avoided, winning every queue an opportunity to use CPU resources.
  • The rate at which each queue's FIFO drains is controlled, and different flow-control thresholds and random drop rates are set, thereby controlling the rate at which each queue's FIFO accepts packets and ensuring that low-priority queues can obtain CPU resources.
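The "random drop rates" mentioned above are commonly realized as a RED-style drop probability that ramps linearly between two occupancy levels; the sketch below is one such realization, not the patent's own formula (the thresholds and the maximum probability are illustrative assumptions):

```python
import random

def drop_probability(depth, min_th, max_th, max_p):
    """RED-style drop probability as a function of queue depth (sketch)."""
    if depth < min_th:
        return 0.0          # light load: never drop
    if depth >= max_th:
        return 1.0          # overload: always drop
    # linear ramp between the two thresholds
    return max_p * (depth - min_th) / (max_th - min_th)

def admit(depth, min_th, max_th, max_p, rng=random.random):
    """Randomly admit or drop an arriving packet based on queue depth."""
    return rng() >= drop_probability(depth, min_th, max_th, max_p)
```

Dropping probabilistically before the queue is full avoids the synchronized bursts of loss that a hard tail-drop threshold produces.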
  • An embodiment of the present invention further provides a computer-readable storage medium storing computer-executable instructions which, when executed, implement the method for preventing CPU packet congestion.
  • Each module/unit in the above embodiments may be implemented in hardware, for example by an integrated circuit realizing its function, or as a software function module, for example a processor executing programs/instructions stored in a memory. This application is not limited to any particular combination of hardware and software.

Abstract

A method for preventing CPU packet congestion, including: a packet scheduler establishes packet queues of different priorities according to the priority of each type of received packet; the packet scheduler determines, according to the priority and bandwidth requirement corresponding to each packet queue, a scheduling time weight used for scheduling the packet queue; the packet scheduler determines, according to the priority of each packet type, a CPU scheduling priority with which the packet queue is scheduled by the CPU; the packet scheduler cyclically schedules the packet queues of different priorities to the CPU according to the scheduling time weights, the CPU scheduling priorities, and the buffer state of the FIFO of the packet queue about to be scheduled. Embodiments of the present invention can ensure that high-priority packets are processed in time and that low-priority packets are not congested, can control the rate of packets entering the queues, and make maximal use of idle CPU resources.

Description

Method and apparatus for preventing CPU packet congestion
Technical field
This document relates to, but is not limited to, the field of communication technologies, and relates to a method and apparatus for preventing CPU (Central Processing Unit) packet congestion.
Background
For communication system equipment deployed in a network, the CPU on the device processes protocol packets and other control packets, for example broadcast packets, SNMP (Simple Network Management Protocol) packets, and OAM (Operation Administration and Maintenance) packets. The usual device connection model is that the CPU attaches a dedicated chip, such as a switch chip, a network processor chip, or an FPGA (Field-Programmable Gate Array), over a system bus device such as PCI (Peripheral Component Interconnect); the dedicated chip's communication ports, mostly Ethernet ports, connect to the CPU and deliver packets to it. Sometimes, however, the rate of CPU software processing cannot keep up with the rate of packet delivery, and a flood of packets of any one type reported to the CPU leaves the other packets unprocessed. The congestion of CPU packets must then be relieved so that every type of packet delivered to the CPU has a chance to be processed by the CPU.
A common example from practice: assume packets of three priority types are delivered to the CPU simultaneously, with priorities 0 to 2 assigned to three target packet types. When the three types arrive at the same time and the CPU fetches packets for processing, it always takes them from one common buffer. This buffer does not distinguish packet types; only when the CPU fetches a packet from it can the CPU tell what type of packet it is. It is like a black box: when any one type is delivered in large volume and the other packets arrive at a lower rate (for example, an order of magnitude lower), the CPU's hit rate on the other packets becomes very low, some protocols wait until they time out, and they stop working properly.
To relieve this situation, delivery of that packet type must be limited, that is, rate-limited. The usual practice is to rate-limit the queue by setting a reporting rate for the queue, say 1000 packets per second; packets beyond this rate are discarded by a hardware or software mechanism.
Such a method, however, wastes CPU resources: it pins the packet delivery rate, so that when the CPU is idle and capable of processing packets faster than that rate, the delivery rate can no longer change, invisibly wasting CPU resources.
Summary
The following is an overview of the subject matter described in detail herein. This overview is not intended to limit the scope of the claims.
Embodiments of the present invention provide a method and apparatus for preventing CPU packet congestion, solving the problem in the related art that, in a state of CPU packet congestion, not all types of packets delivered to the CPU can be processed by the CPU.
An embodiment of the present invention provides a method for preventing CPU packet congestion, including:
a packet scheduler establishing packet queues of different priorities according to the priority of each type of received packet;
the packet scheduler determining, according to the priority and bandwidth requirement corresponding to each packet queue, a scheduling time weight used for scheduling the packet queue;
the packet scheduler determining, according to the priority of each packet type, a CPU scheduling priority with which the packet queue is scheduled by the CPU;
the packet scheduler cyclically scheduling the packet queues of different priorities to the CPU according to the scheduling time weights, the CPU scheduling priorities, and the buffer state of the first-in first-out FIFO of the packet queue about to be scheduled.
Optionally, cyclically scheduling the packet queue of each priority to the CPU includes:
performing flow control on the scheduled packet queues while the packet scheduler is scheduling the packet queues of different priorities to the CPU.
Optionally, performing flow control on the scheduled packet queues includes:
detecting the number of buffered packets in the FIFO of the packet queue of each priority;
when the number of buffered packets in the FIFO is detected to reach a second cache threshold, shaping the packets about to enter the FIFO;
when the number of buffered packets in the FIFO is detected to reach a third cache threshold, discarding all packets entering the FIFO packet queue;
when the number of buffered packets in the FIFO is detected to fall below a first cache threshold, resuming acceptance of all packets entering the FIFO packet queue.
Optionally, the first threshold is smaller than the second threshold, and the second threshold is smaller than the third threshold.
Optionally, the packet scheduler determining, according to the priority and bandwidth requirement corresponding to each packet queue, the scheduling time weight used for scheduling the packet queue includes:
allocating each packet queue a time slice for scheduling the packet queue, according to the priority and bandwidth requirement corresponding to the queue;
determining the scheduling time weight corresponding to the packet queue according to the ratio of the time slices allocated among the different packet queues.
Optionally, the packet scheduler cyclically scheduling the packet queues of different priorities to the CPU according to the scheduling time weights, the CPU scheduling priorities, and the buffer state of the FIFO of the packet queue about to be scheduled includes:
detecting the buffer state of the FIFO of the packet queue about to be scheduled, and judging whether there are packets that need scheduling;
if there are none, deferring this scheduling opportunity to the packet queue of the next CPU scheduling priority;
if there are, scheduling the packet queue to the CPU according to the scheduling time weight.
The present invention further provides an apparatus for preventing CPU packet congestion, including:
a queue establishment module, configured so that a packet scheduler establishes packet queues of different priorities according to the priority of each type of received packet;
a weight-and-priority determination module, configured so that the packet scheduler determines, according to the priority and bandwidth requirement corresponding to each packet queue, the scheduling time weight used for scheduling the packet queue, and determines, according to the priority of each packet type, the CPU scheduling priority with which the packet queue is scheduled by the CPU;
a queue scheduling module, configured so that the packet scheduler cyclically schedules the packet queues of different priorities to the CPU according to the scheduling time weights, the CPU scheduling priorities, and the buffer state of the first-in first-out FIFO of the packet queue about to be scheduled.
Optionally, the queue scheduling module includes a flow control unit, configured to perform flow control on the scheduled packet queues while the packet scheduler is scheduling the packet queues of different priorities to the CPU.
Optionally, the flow control unit includes:
a detection subunit, configured to detect the number of buffered packets in the FIFO of the packet queue of each priority;
a flow control subunit, configured to shape the packets about to enter the FIFO when the number of buffered packets in the FIFO is detected to reach the second cache threshold; to discard all packets entering the FIFO packet queue when the number is detected to reach the third cache threshold; and to resume accepting all packets entering the FIFO packet queue when the number is detected to fall below the first cache threshold.
Optionally, the first threshold is smaller than the second threshold, and the second threshold is smaller than the third threshold.
An embodiment of the present invention further provides a computer-readable storage medium storing computer-executable instructions which, when executed, implement the method for preventing CPU packet congestion.
Compared with the related art, the beneficial effects of the examples of the present invention are as follows:
Embodiments of the present invention combine time-slice round-robin scheduling with weighted priority queues, and use QoS (Quality of Service) scheduling to discard packets, thereby preventing CPU load spikes while making full use of CPU resources, and preventing certain types of packets from being unable to reach the CPU for processing. Other aspects will become apparent after the drawings and the detailed description are read and understood.
Brief description of the drawings
FIG. 1 is a flowchart of a method for preventing CPU packet congestion according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an apparatus for preventing CPU packet congestion according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a scheduler with three queues according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of scheduling behavior when bucket C is empty according to an embodiment of the present invention;
FIG. 5 is a first schematic diagram of flow control on the scheduled packet queues according to an embodiment of the present invention;
FIG. 6 is a second schematic diagram of flow control on the scheduled packet queues according to an embodiment of the present invention;
FIG. 7 is a third schematic diagram of flow control on the scheduled packet queues according to an embodiment of the present invention.
Detailed description
Embodiments of the present invention are described in detail below with reference to the drawings. It should be understood that the embodiments described below serve only to illustrate and explain the present application, not to limit it.
FIG. 1 is a flowchart of a method for preventing CPU packet congestion according to an embodiment of the present invention. As shown in FIG. 1, the method includes the following steps:
Step S101: the packet scheduler establishes packet queues of different priorities according to the priority of each type of received packet;
Step S102: the packet scheduler determines, according to the priority and bandwidth requirement corresponding to each packet queue, the scheduling time weight used for scheduling the packet queue;
Step S103: the packet scheduler determines, according to the priority of each packet type, the CPU scheduling priority with which the packet queue is scheduled by the CPU;
Step S104: the packet scheduler cyclically schedules the packet queues of different priorities to the CPU according to the scheduling time weights, the CPU scheduling priorities, and the buffer state of the FIFO (First In First Out buffer) of the packet queue about to be scheduled.
While the packet scheduler is scheduling the packet queues of different priorities to the CPU, flow control is performed on the scheduled queues. In this embodiment, performing flow control on the scheduled packet queues includes: detecting the number of buffered packets in the FIFO of the packet queue of each priority; when the number is detected to reach the second cache threshold, shaping the packets about to enter the FIFO, thereby throttling the packets entering the FIFO packet queue; when the number is detected to reach the third cache threshold, discarding all packets entering the FIFO packet queue; and when the number is detected to fall below the first cache threshold, resuming acceptance of all packets entering the FIFO packet queue. The first threshold is smaller than the second threshold, and the second threshold is smaller than the third threshold.
Optionally, in this embodiment of the present invention, the packet scheduler determining, according to the priority and bandwidth requirement corresponding to each packet queue, the scheduling time weight used for scheduling the packet queue includes: allocating each packet queue a time slice for scheduling the queue according to its corresponding priority and bandwidth requirement, and determining the queue's scheduling time weight from the ratio of the time slices allocated among the different queues.
Optionally, in this embodiment of the present invention, the packet scheduler cyclically scheduling the packet queues of different priorities to the CPU according to the scheduling time weights, the CPU scheduling priorities, and the buffer state of the FIFO of the packet queue about to be scheduled includes: detecting the buffer state of the FIFO of the queue about to be scheduled and judging whether there are packets that need scheduling; if not, deferring this scheduling opportunity to the packet queue of the next CPU scheduling priority; if so, scheduling the packet queue to the CPU according to the scheduling time weight.
FIG. 2 is a schematic diagram of an apparatus for preventing CPU packet congestion according to an embodiment of the present invention. As shown in FIG. 2, the apparatus includes: a queue establishment module 201, a weight-and-priority determination module 202, and a queue scheduling module 203.
The queue establishment module 201 is configured so that the packet scheduler establishes packet queues of different priorities according to the priority of each type of received packet.
The weight-and-priority determination module 202 is configured so that the packet scheduler determines, according to the priority and bandwidth requirement corresponding to each packet queue, the scheduling time weight used for scheduling the packet queue, and determines, according to the priority of each packet type, the CPU scheduling priority with which the packet queue is scheduled by the CPU.
The queue scheduling module 203 is configured so that the packet scheduler cyclically schedules the packet queues of different priorities to the CPU according to the scheduling time weights, the CPU scheduling priorities, and the buffer state of the FIFO of the packet queue about to be scheduled.
Optionally, the queue scheduling module 203 includes a flow control unit, configured to perform flow control on the scheduled packet queues while the packet scheduler is scheduling the packet queues of different priorities to the CPU.
Optionally, the flow control unit includes: a detection subunit configured to detect the number of buffered packets in the FIFO of the packet queue of each priority; and a flow control subunit configured to shape the packets about to enter the FIFO when the number of buffered packets is detected to reach the second cache threshold, to discard all packets entering the FIFO packet queue when the number is detected to reach the third cache threshold, and to resume accepting all packets entering the FIFO packet queue when the number is detected to fall below the first cache threshold. The first threshold is smaller than the second threshold, and the second threshold is smaller than the third threshold.
The implementation process of the method for preventing CPU packet congestion according to embodiments of the present invention is introduced below with an example.
As shown in FIG. 3, the packet scheduler has three buckets, a, b, and c, each handling the packet queue of one priority, that is, three priority classes of queues A, B, and C. The CPU priority for handling queue A is 0, occupying a time slice of 2T; queue B has CPU priority 2, occupying 1T; queue C has CPU priority 1, occupying 3T. The priority order is therefore A > C > B and, in the ideal case where the length of packets sent to the CPU is not a concern, the scheduling time weights are 2:1:3 (the ratio of time slices allocated among the different queues is the queues' scheduling time weight). When the packet scheduler obtains a time slice, it first checks whether the FIFO of queue A is empty. If it is not empty, the slice is given to queue A first; if queue A has not yet used its full 2T, the next slice still goes to queue A.
As shown in FIG. 4, after queue A has used up its time slices and the next slice is obtained, if queue C is empty, a packet is taken from queue B. That is, after queue A finishes its slices, the next slice should go to queue C; but if queue C's FIFO is empty with no packets pending, that slice is given to queue B. This is the time-slice rotation, which guarantees every packet type a chance to be processed by the CPU.
As shown in FIG. 5, when the number of packets in a bucket reaches the second threshold y, the rate at which packets enter that bucket is limited to S. In other words, each bucket is flow-controlled independently. If the packet count in bucket c reaches the second threshold y, the CPU is considered busy and the ingress rate into bucket c is limited to S; if bucket a's packets are congested, only bucket a's flow needs controlling, while buckets b and c need no control.
As shown in FIG. 6, when the number of packets in a bucket reaches the third threshold z, all packets entering that bucket are discarded. When the packet count in bucket c's queue reaches z, the packets in bucket c are considered congested, and any packet about to enter bucket c is dropped.
As shown in FIG. 7, when the number of packets in a bucket reaches the first threshold x, all packets entering that bucket are accepted. When the packet count in bucket c's queue falls to x, acceptance of all packets entering the queue resumes, with no restriction on packets entering bucket c.
This example can be implemented in the form of a token bucket plus a leaky bucket. The purpose of per-queue flow control is to keep the rate at which each packet type is delivered to the CPU as smooth as possible; setting lower thresholds for high-priority packets ensures that low-priority packets are always eventually processed.
In summary, the technical solution of the above example includes:
1. Establish a packet scheduler, which can be implemented in software or in hardware (such as an FPGA). The packet scheduler is responsible for establishing the packet queues, receiving packets from the device, and sending packets to the CPU. It allocates time slices for each packet type by periodically requesting time slices from the CPU and distributing them in rotation to the packet queues.
2. Assuming three types of packet queues A, B, and C must be delivered to the CPU, set the processing priority to A > C > B, and then determine the scheduling time weights as 2:1:3.
3. Once a time slice is obtained, if all three queues hold pending packets, slices are allocated in priority order from high to low, queue A first; after queue A has used its scheduling time 2T, the next slice goes to queue C. If a slice arrives and queue C has no pending packets, the slice is used for the packets in queue B.
4. Set a first threshold x, a second threshold y, and a third threshold z, with x < y < z. When a queue's packet count reaches y, limit the rate at which packets enter the queue to S; when it reaches z, start discarding all packets about to enter the queue; when it falls to x, stop limiting the rate of packets entering the queue. To guarantee low-priority packets a full chance of being processed, high-priority packets can be given comparatively small thresholds.
5. In this example, differences in packet length can be ignored. Suppose the packet processing rate is N packets per second, the time-slice length is T, M time slices are granted per second, this packet queue's weight is w, and the total weight is Q; then this packet type is reported to the CPU at N*T*M*(w/Q) packets per second. When the queue's packet count reaches y, the controlled ingress rate S should be less than this rate N*T*M*(w/Q).
In practice, not only three types of packets may be reported to the CPU; there may be many. Buffer queues can be created in the packet scheduler by filtering on characteristic values of the packets. By configuring the priorities of the packet queues, the scheduling time weights, and the thresholds, and computing the rate S from the resource state of the current system, the scheduling method for packets delivered to the CPU provided by embodiments of the present invention can be implemented.
In summary, embodiments of the present invention have the following technical effects:
Embodiments of the present invention apply a round-robin scheduling algorithm to every packet queue. Through the queues' feedback on CPU resource usage, queue priorities are honored and congestion at the CPU is avoided, winning every queue an opportunity to use CPU resources. In addition, by means of weighted queues, the rate at which each queue's FIFO drains is controlled, and by setting different flow-control thresholds and random drop rates, the rate at which each queue's FIFO accepts packets is controlled, ensuring that low-priority queues can obtain CPU resources.
An embodiment of the present invention further provides a computer-readable storage medium storing computer-executable instructions which, when executed, implement the method for preventing CPU packet congestion.
A person of ordinary skill in the art will understand that all or some of the steps of the above method may be completed by a program instructing related hardware (for example a processor), the program being stored in a computer-readable storage medium such as a read-only memory, magnetic disk, or optical disc. Optionally, all or some of the steps of the above embodiments may also be implemented with one or more integrated circuits. Accordingly, each module/unit in the above embodiments may be implemented in hardware, for example by an integrated circuit realizing its function, or as a software function module, for example a processor executing programs/instructions stored in a memory. This application is not limited to any particular combination of hardware and software. A person of ordinary skill in the art should understand that modifications or equivalent substitutions may be made to the technical solution of this application without departing from its spirit and scope, all of which shall be covered by the claims of this application.

Claims (10)

  1. A method for preventing CPU packet congestion, comprising:
    a packet scheduler establishing packet queues of different priorities according to the priority of each type of received packet;
    the packet scheduler determining, according to the priority and bandwidth requirement corresponding to each packet queue, a scheduling time weight used for scheduling the packet queue;
    the packet scheduler determining, according to the priority of each packet type, a CPU scheduling priority with which the packet queue is scheduled by the CPU;
    the packet scheduler cyclically scheduling the packet queues of different priorities to the CPU according to the scheduling time weights, the CPU scheduling priorities, and the buffer state of the first-in first-out FIFO of the packet queue about to be scheduled.
  2. The method according to claim 1, wherein cyclically scheduling the packet queue of each priority to the CPU comprises:
    performing flow control on the scheduled packet queues while the packet scheduler is scheduling the packet queues of different priorities to the CPU.
  3. The method according to claim 2, wherein performing flow control on the scheduled packet queues comprises:
    detecting the number of buffered packets in the FIFO of the packet queue of each priority;
    when the number of buffered packets in the FIFO is detected to reach a second cache threshold, shaping the packets about to enter the FIFO;
    when the number of buffered packets in the FIFO is detected to reach a third cache threshold, discarding all packets entering the FIFO packet queue;
    when the number of buffered packets in the FIFO is detected to fall below a first cache threshold, resuming acceptance of all packets entering the FIFO packet queue.
  4. The method according to claim 3, wherein the first threshold is smaller than the second threshold, and the second threshold is smaller than the third threshold.
  5. The method according to claim 1, wherein the packet scheduler determining, according to the priority and bandwidth requirement corresponding to each packet queue, the scheduling time weight used for scheduling the packet queue comprises:
    allocating each packet queue a time slice for scheduling the packet queue, according to the priority and bandwidth requirement corresponding to the queue;
    determining the scheduling time weight corresponding to the packet queue according to the ratio of the time slices allocated among the different packet queues.
  6. The method according to claim 5, wherein the packet scheduler cyclically scheduling the packet queues of different priorities to the CPU according to the scheduling time weights, the CPU scheduling priorities, and the buffer state of the FIFO of the packet queue about to be scheduled comprises:
    detecting the buffer state of the FIFO of the packet queue about to be scheduled, and judging whether there are packets that need scheduling;
    if there are none, deferring this scheduling opportunity to the packet queue of the next CPU scheduling priority;
    if there are, scheduling the packet queue to the CPU according to the scheduling time weight.
  7. An apparatus for preventing CPU packet congestion, comprising:
    a queue establishment module, configured so that a packet scheduler establishes packet queues of different priorities according to the priority of each type of received packet;
    a weight-and-priority determination module, configured so that the packet scheduler determines, according to the priority and bandwidth requirement corresponding to each packet queue, the scheduling time weight used for scheduling the packet queue, and determines, according to the priority of each packet type, the CPU scheduling priority with which the packet queue is scheduled by the CPU;
    a queue scheduling module, configured so that the packet scheduler cyclically schedules the packet queues of different priorities to the CPU according to the scheduling time weights, the CPU scheduling priorities, and the buffer state of the first-in first-out FIFO of the packet queue about to be scheduled.
  8. The apparatus according to claim 7, wherein the queue scheduling module comprises a flow control unit, configured to perform flow control on the scheduled packet queues while the packet scheduler is scheduling the packet queues of different priorities to the CPU.
  9. The apparatus according to claim 7, wherein the flow control unit comprises:
    a detection subunit, configured to detect the number of buffered packets in the FIFO of the packet queue of each priority;
    a flow control subunit, configured to shape the packets about to enter the FIFO when the number of buffered packets in the FIFO is detected to reach the second cache threshold; to discard all packets entering the FIFO packet queue when the number is detected to reach the third cache threshold; and to resume accepting all packets entering the FIFO packet queue when the number is detected to fall below the first cache threshold.
  10. The apparatus according to claim 9, wherein the first threshold is smaller than the second threshold, and the second threshold is smaller than the third threshold.
PCT/CN2016/091879 2015-09-28 2016-07-27 Method and apparatus for preventing CPU packet congestion WO2017054566A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510629688.4A CN106559354A (zh) 2015-09-28 2015-09-28 Method and apparatus for preventing CPU packet congestion
CN201510629688.4 2015-09-28

Publications (1)

Publication Number Publication Date
WO2017054566A1 true WO2017054566A1 (zh) 2017-04-06

Family

ID=58416733

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/091879 WO2017054566A1 (zh) 2015-09-28 2016-07-27 Method and apparatus for preventing CPU packet congestion

Country Status (2)

Country Link
CN (1) CN106559354A (zh)
WO (1) WO2017054566A1 (zh)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113268326A (zh) * 2021-05-25 2021-08-17 西安交通大学 Fair flowlet scheduling method and system based on time-slice round robin
CN113824651A (zh) * 2021-11-25 2021-12-21 上海金仕达软件科技有限公司 Market data cache processing method and apparatus, storage medium, and electronic device
CN116016362A (zh) * 2023-03-28 2023-04-25 北京六方云信息技术有限公司 Flow control method and apparatus, terminal device, and storage medium
WO2023207628A1 (zh) * 2022-04-29 2023-11-02 华为技术有限公司 Packet transmission method and packet forwarding device
CN117857475A (zh) * 2024-03-08 2024-04-09 中车南京浦镇车辆有限公司 Data transmission scheduling method and system for an Ethernet train control network
CN117857475B (zh) * 2024-03-08 2024-05-14 中车南京浦镇车辆有限公司 Data transmission scheduling method and system for an Ethernet train control network

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107547537B (zh) * 2017-08-29 2020-12-18 新华三技术有限公司 Request packet processing method, apparatus, device, and storage medium
CN108173784B (zh) * 2017-12-29 2021-12-28 湖南恒茂高科股份有限公司 Aging method and apparatus for a switch's packet buffer
CN108965005B (zh) * 2018-07-18 2021-05-14 烽火通信科技股份有限公司 Adaptive rate limiting method and system for network devices
CN109412973B (zh) * 2018-09-19 2022-09-13 咪咕数字传媒有限公司 Audio processing method, apparatus, and storage medium
CN109586780A (zh) * 2018-11-30 2019-04-05 四川安迪科技实业有限公司 Method for preventing packet blocking in a satellite network
CN113366805A (zh) 2019-02-03 2021-09-07 华为技术有限公司 Packet scheduling method, scheduler, network device, and network system
CN110941483B (zh) * 2019-10-23 2023-02-03 创耀(苏州)通信科技股份有限公司 Queue processing method, apparatus, and device
CN116636183A (zh) * 2021-05-31 2023-08-22 华为技术有限公司 Computer system and bus flow control method
CN113590030B (zh) * 2021-06-30 2023-12-26 济南浪潮数据技术有限公司 Queue scheduling method, system, device, and medium
CN113923171B (zh) * 2021-08-26 2024-02-06 江苏智臻能源科技有限公司 Communication management method based on a load identification detection platform
CN116264567A (zh) * 2021-12-14 2023-06-16 中兴通讯股份有限公司 Packet scheduling method, network device, and computer-readable storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002039680A2 (en) * 2000-11-08 2002-05-16 Motorola, Inc., A Corporation Of The State Of Delaware Method for class of service weight adaptation depending on the queue residence time
CN101459699A (zh) * 2008-12-25 2009-06-17 华为技术有限公司 Network address translation method and apparatus

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101651615A (zh) * 2009-09-14 2010-02-17 中兴通讯股份有限公司 Packet scheduling method and apparatus
CN102231697A (zh) * 2011-06-17 2011-11-02 瑞斯康达科技发展股份有限公司 Bandwidth scheduling method for packet queues, packet reporting method, and apparatus therefor
CN102420776B (zh) * 2012-01-12 2014-07-09 盛科网络(苏州)有限公司 Method and system for dynamically adjusting ingress resource allocation thresholds
CN104079501B (zh) * 2014-06-05 2017-06-13 邦彦技术股份有限公司 Queue scheduling method based on multiple priorities

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002039680A2 (en) * 2000-11-08 2002-05-16 Motorola, Inc., A Corporation Of The State Of Delaware Method for class of service weight adaptation depending on the queue residence time
CN101459699A (zh) * 2008-12-25 2009-06-17 华为技术有限公司 Network address translation method and apparatus

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SANLANG, Zhaxi, et al.: "A Study of Net's Congest Administrative Technique", Journal of Southwest University for Nationalities (Natural Science Edition), vol. 29, no. 5, 31 October 2003, pages 606-610 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113268326A (zh) * 2021-05-25 2021-08-17 西安交通大学 Fair flowlet scheduling method and system based on time-slice round robin
CN113824651A (zh) * 2021-11-25 2021-12-21 上海金仕达软件科技有限公司 Market data cache processing method and apparatus, storage medium, and electronic device
CN113824651B (zh) * 2021-11-25 2022-02-22 上海金仕达软件科技有限公司 Market data cache processing method and apparatus, storage medium, and electronic device
WO2023207628A1 (zh) * 2022-04-29 2023-11-02 华为技术有限公司 Packet transmission method and packet forwarding device
CN116016362A (zh) * 2023-03-28 2023-04-25 北京六方云信息技术有限公司 Flow control method and apparatus, terminal device, and storage medium
CN116016362B (zh) * 2023-03-28 2023-07-07 北京六方云信息技术有限公司 Flow control method and apparatus, terminal device, and storage medium
CN117857475A (zh) * 2024-03-08 2024-04-09 中车南京浦镇车辆有限公司 Data transmission scheduling method and system for an Ethernet train control network
CN117857475B (zh) * 2024-03-08 2024-05-14 中车南京浦镇车辆有限公司 Data transmission scheduling method and system for an Ethernet train control network

Also Published As

Publication number Publication date
CN106559354A (zh) 2017-04-05

Similar Documents

Publication Publication Date Title
WO2017054566A1 (zh) 2015-09-28 2017-04-06 Method and apparatus for preventing CPU packet congestion
US8331387B2 (en) Data switching flow control with virtual output queuing
CN109479032B (zh) 网络设备中的拥塞避免
US10708200B2 (en) Traffic management in a network switching system with remote physical ports
US8553538B2 (en) Packet relay device and congestion control method
US7414973B2 (en) Communication traffic management systems and methods
US8274974B1 (en) Method and apparatus for providing quality of service across a switched backplane for multicast packets
US8144588B1 (en) Scalable resource management in distributed environment
TWI543568B (zh) 用於分封交換網路的系統及其方法、在分封交換網路中用於接收資料包的交換機中的方法
US8151067B2 (en) Memory sharing mechanism based on priority elevation
US9025456B2 (en) Speculative reservation for routing networks
US20150103667A1 (en) Detection of root and victim network congestion
JPH0657014B2 (ja) 適応選択形のパケット交換システムにおけるフロ−制御
CA2355473A1 (en) Buffer management for support of quality-of-service guarantees and data flow control in data switching
US9055009B2 (en) Hybrid arrival-occupancy based congestion management
US20080225705A1 (en) Monitoring, Controlling, And Preventing Traffic Congestion Between Processors
WO2008149207A2 (en) Traffic manager, method and fabric switching system for performing active queue management of discard-eligible traffic
US7843825B2 (en) Method and system for packet rate shaping
US9350659B1 (en) Congestion avoidance for network traffic
CN113315720A (zh) 一种数据流控制方法、系统及设备
JP2020072336A (ja) パケット転送装置、方法、及びプログラム
US7408876B1 (en) Method and apparatus for providing quality of service across a switched backplane between egress queue managers
EP3661139B1 (en) Network device
US9258236B2 (en) Per-class scheduling with rate limiting
US20150131446A1 (en) Enabling virtual queues with qos and pfc support and strict priority scheduling

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16850190

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16850190

Country of ref document: EP

Kind code of ref document: A1