CN109379304B - Fair scheduling method for reducing low-priority packet delay - Google Patents


Info

Publication number
CN109379304B
CN109379304B
Authority
CN
China
Prior art keywords
queue
virtual queue
virtual
priority
data packet
Prior art date
Legal status
Active
Application number
CN201811276225.4A
Other languages
Chinese (zh)
Other versions
CN109379304A (en)
Inventor
张卜方
刘淑涛
魏璇
尚云骅
刘扬
Current Assignee
CETC 54 Research Institute
Original Assignee
CETC 54 Research Institute
Priority date
Filing date
Publication date
Application filed by CETC 54 Research Institute
Priority to CN201811276225.4A
Publication of CN109379304A
Application granted
Publication of CN109379304B
Legal status: Active

Links

Images

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/50: Queue scheduling
    • H04L 47/62: Queue scheduling characterised by scheduling criteria
    • H04L 47/6285: Provisions for avoiding starvation of low priority queues
    • H04L 47/625: Queue scheduling characterised by scheduling criteria for service slots or service orders
    • H04L 47/6275: Queue scheduling characterised by scheduling criteria for service slots or service orders based on priority

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a fair scheduling method, suitable for a CICQ switch fabric, that reduces the delay of low-priority packets while offering low average delay, strong scalability, and simple hardware implementation. It relates to the field of input-queued scheduling for high-speed serial switching. The invention addresses two problems with priority-based scheduling algorithms: when different allocation strategies are designed per priority level, fairness cannot be guaranteed for low-priority services; and meeting the differing QoS requirements of many users leads to poor scalability and high complexity. With the disclosed method, the maximum transmission delay of low-priority packets can be bounded by parameter configuration, and existing scheduling algorithms can be flexibly extended according to design requirements. The method has low complexity and strong generality, and is applicable to all high-speed switch fabrics that support priority-based services.

Description

Fair scheduling method for reducing low-priority packet delay
Technical Field
The invention relates to a high-speed serial switching scheduling algorithm with improved fairness. It is particularly suited to CICQ switching circuits that support priority scheduling, and provides QoS support for multi-priority services with good fairness and flexibility.
Background
A switching system is mainly responsible for transferring data from one port to another and comprises three parts: input/output ports, the switch fabric, and the control module. The control module is the core of the entire switching system, and the switch fabric and control module may be combined into a switch core. Because the multi-port nature of the switch fabric means that each input or output port can carry only one cell per time slot, buffers must be placed at the nodes if the fabric is to achieve good non-blocking behavior. This gives rise to input queuing and output queuing.
Consider first an input queue in which several cells wait for output. The queue's input is limited only by its buffer space, but on output only the cell at the head of the queue participates in scheduling; the other cells in the buffer are blocked behind the head cell and never participate, producing the head-of-line (HOL) blocking problem. Mathematical analysis shows that HOL blocking limits the maximum achievable throughput of a switch port to 58.6%. Several solutions exist, including windowing, speedup, and virtual output queuing, of which virtual output queuing is the main technique.
Virtual output queuing, however, introduces input contention: multiple cells at the same input, destined for different outputs, must be transmitted in the same time slot. An efficient and fair scheduling algorithm is therefore needed to resolve input/output contention, establish multiple input/output connections, and control the operation of the switch fabric to ensure collision-free cell transmission.
Disclosure of Invention
The technical problem to be solved by the invention is to provide an efficient and fair scheduling algorithm that can switch between strict-priority polling and starvation-avoiding priority polling. A configurable threshold parameter sets the counting threshold of the scheduler, which effectively reduces the average delay of low-priority data packets and prevents any service from being starved.
The technical scheme adopted by the invention is as follows:
a fair scheduling method for reducing low priority packet delay, comprising the steps of:
(1) configure the threshold parameter; buffer each incoming data packet at the input, generate a tag for the packet, sort the tags into different virtual queues according to priority and destination port, and set a counter for each virtual queue;
(2) each virtual queue sends a request to the scheduler according to its occupancy (empty/full) state;
(3) the scheduler selects the highest-priority virtual queue from the set of request signals for grant arbitration, with virtual queues of the same priority but different destination ports granted in round-robin fashion; meanwhile, each virtual queue's counter is updated by the counting rule: when a virtual queue is non-empty but another virtual queue receives the grant, its counter is incremented by 1; when the count reaches the configured threshold parameter, the queue is marked over, otherwise under;
(4) the scheduler checks whether any virtual queue is marked over; if so, it locks onto the over queues and grants them preferentially until each is emptied and its over state is released; otherwise it returns to step (3), until all virtual queues are empty;
(5) the tag is read from the virtual queue according to the arbitration result, and according to the tag the data packet is selected and sent to the corresponding destination port.
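As a rough illustration, the five steps above can be sketched as a single-arbiter loop. This is a hypothetical Python model, not the hardware design: the class and function names are invented, the per-priority round-robin pointer of step (3) is approximated by list order among equal-priority queues, and resetting the counter on release is an assumption.

```python
from collections import deque

class VirtualQueue:
    """One virtual queue per (priority, destination port) pair."""
    def __init__(self, priority, port, threshold):
        self.priority = priority      # smaller value = higher priority
        self.port = port              # destination port
        self.threshold = threshold    # configurable threshold parameter
        self.tags = deque()           # FIFO of packet tags
        self.counter = 0
        self.over = False             # marked 'over' when counter hits threshold

def schedule_one(queues):
    """Grant one virtual queue for this slot; return it, or None if all empty."""
    backlogged = [q for q in queues if q.tags]
    if not backlogged:
        return None
    # Step (4): queues marked 'over' are locked and served first.
    over = [q for q in backlogged if q.over]
    pool = over if over else backlogged
    # Step (3): strict priority; ties broken by list order here as a
    # stand-in for the real design's per-priority round-robin pointer.
    best = min(pool, key=lambda q: q.priority)
    # Counting rule: every other non-empty queue that lost arbitration counts up.
    for q in backlogged:
        if q is not best:
            q.counter += 1
            if q.counter >= q.threshold:
                q.over = True
    best.tags.popleft()
    if not best.tags:                 # 'empty over' state: release the lock
        best.over = False
        best.counter = 0              # reset on release (assumption)
    return best
```

With a threshold of 2, a low-priority queue loses two slots to the high-priority queue, is marked over, and then monopolizes grants until it drains, which is exactly the bounded-delay behavior the threshold parameter is meant to provide.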
The tag in step (1) contains the storage location, output port number, and priority information of the incoming data packet.
In step (4), locking onto the virtual queues marked over and granting them preferentially until they are released specifically means: the scheduler selects the highest-priority queue among those marked over for grant arbitration, over queues of the same priority are granted by a round-robin policy, and a queue is released once it becomes empty and its over state is cleared.
Step (5) specifically comprises: reading the tag from the virtual queue according to the arbitration result, extracting the storage-location sequence number and output port number from the tag, reading the corresponding data packet from the input buffer by the storage-location sequence number, and sending the packet to the crosspoint corresponding to the destination output port according to the output port number in the tag.
Compared with the prior art, the invention has the following advantages:
(1) By distinguishing services by priority and destination port through virtual queues, so that each queue carries its own priority and routing information, the invention effectively reduces traffic between modules and lowers the design difficulty and complexity of the scheduling algorithm.
(2) By giving each virtual queue a counter, the invention tracks how often each queue is passed over for service and compares the count with a preset threshold parameter to bound each queue, thereby limiting the maximum delay of low-priority service packets and preventing starvation of any queue or service.
(3) By adding the over/under queue classification, the invention further extends the attributes of priority and better guarantees the real-time behavior of services.
(4) The invention is flexible and widely applicable: it can be combined with other algorithms in various queued switch fabrics, and the threshold parameter can be configured per service type to achieve better switching performance.
Drawings
FIG. 1 is a flow chart of a scheduling algorithm implemented by the present invention.
FIG. 2 is a diagram of the tag generation and enqueue process and virtual queue structure for an incoming packet according to the present invention.
FIG. 3 is a block diagram of an implementation of the counter set for each queue of the present invention.
FIG. 4 is a schematic block diagram of the CICQ switch fabric of the present invention.
Detailed Description
The invention is further explained below with reference to the drawings.
Referring to fig. 1, a fair scheduling method for reducing delay of low priority packets according to the present invention includes the following steps:
(1) Configure the threshold parameter; buffer each incoming data packet at the input, generate a tag for the packet, sort the tags into different virtual queues according to priority and destination port, and set a counter for each virtual queue. The tag contains the storage location, output port number, and priority information of the incoming packet.
FIG. 2 shows the tag generation and enqueue process and the virtual-queue structure for an incoming packet. A tag is generated once packet parsing completes and is stored in a virtual queue in the tag queuing area. The queuing principle for tags is: within the same priority, tags for the same destination port are buffered in the same virtual queue, and each virtual queue operates in first-in first-out order.
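The enqueue process above can be sketched as follows. This is an illustrative Python model; the tag field names (`addr`, `out_port`, `priority`) are invented stand-ins for the storage location, output port number, and priority fields described in the text.

```python
from collections import namedtuple, deque

# Hypothetical tag layout: storage address in the shared-buffer RAM,
# destination output port, and priority.
Tag = namedtuple("Tag", ["addr", "out_port", "priority"])

class TagQueuingArea:
    """Tags with the same (priority, destination port) share one FIFO virtual queue."""
    def __init__(self):
        self.vqs = {}  # (priority, out_port) -> deque of tags, FIFO order

    def enqueue(self, tag):
        # Queuing principle: same priority + same destination port -> same queue.
        self.vqs.setdefault((tag.priority, tag.out_port), deque()).append(tag)

    def head(self, priority, out_port):
        # Only the head tag of a virtual queue participates in scheduling.
        q = self.vqs.get((priority, out_port))
        return q[0] if q else None
```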
(2) Each virtual queue sends a request to the scheduler according to its occupancy (empty/full) state.
(3) The scheduler selects the highest-priority virtual queue from the set of request signals for grant arbitration, with virtual queues of the same priority but different destination ports granted in round-robin fashion; meanwhile, each virtual queue's counter is updated by the counting rule: when a virtual queue is non-empty but another virtual queue receives the grant, its counter is incremented by 1; when the count reaches the configured threshold parameter, the queue is marked over, otherwise under.
(4) The scheduler checks whether any virtual queue is marked over; if so, it selects the highest-priority queue among those marked over for grant arbitration, over virtual queues of the same priority being granted by a round-robin policy, until each queue becomes empty and its over state is released; otherwise it returns to step (3), until all virtual queues are empty.
(5) The tag is read from the virtual queue according to the arbitration result, and the data packet is selected and sent to the corresponding destination port according to the tag. Specifically: the storage-location sequence number and output port number are extracted from the tag, the corresponding data packet is read from the input buffer by the storage-location sequence number, and the packet is sent to the crosspoint corresponding to the destination output port according to the output port number in the tag.
As shown in FIG. 1, packet parsing in the invention mainly completes the packet enqueue function. It comprises the following main processes: address request, tag generation, enqueue control, virtual-queue organization and scheduling requests based on output port and priority, and result processing.
The address request obtains an actual address from the shared-buffer RAM for each packet to be enqueued, so that the packet can be stored in the RAM. Dequeuing is completed by receiving the packet storage sequence number carried in the tag obtained from the scheduling result.
When a packet is to be enqueued, the packet-processing function writes the free address allocated to the packet, together with the information extracted from the packet, into the tag: the storage location, the output port number, the priority, and so on. Enqueue control then writes the new tag into the corresponding virtual queue according to the priority and destination port information in the tag, completing both the storage of the packet and its linkage into the virtual-queue organization.
After scheduling completes, the corresponding queue's tag information is read according to the scheduling result. Using the packet storage sequence number and related fields in the tag, and in cooperation with the output bus controller, the data packet is moved to the crosspoint or next stage corresponding to its output port.
The implementation of step (4) is as follows: when a high-priority over virtual queue exists, it is scheduled first; among over virtual queues of the same priority, one is selected by a round-robin policy; if no over virtual queue exists, selection falls back to a round-robin policy over the under queues.
In step (5), the head-of-queue tag of the corresponding virtual queue is read according to the scheduler's arbitration result, and the storage sequence number, output port number, and other fields in the tag are obtained. Using these fields, and in cooperation with the output bus controller, the data packet is moved to the crosspoint or next stage corresponding to the output port.
FIG. 3 is a schematic block diagram of the counter design in step (3). The judgment logic is based on the arbitration result and the current buffer state of virtual queue KN: if the queue holds a packet but the arbitration-result pointer points at another queue, the counter is incremented. When the counter reaches the pre-configured value, the queue's over signal is pulled high; otherwise its under signal is pulled high.
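The per-queue judgment logic of FIG. 3 can be written as one slot-update function. This is an illustrative sketch with invented names; the real design is a hardware counter with over/under output signals.

```python
def update_counter(counter, threshold, queue_nonempty, granted_elsewhere):
    """One time slot of the per-queue counter logic in FIG. 3.

    The counter increments only when this queue holds a packet but the
    arbitration pointer granted a different queue; the 'over' signal goes
    high once the pre-configured threshold is reached, otherwise 'under'
    is high. Returns (new_counter, over, under).
    """
    if queue_nonempty and granted_elsewhere:
        counter += 1
    over = counter >= threshold
    return counter, over, not over
```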
FIG. 4 is a schematic block diagram of the CICQ switch fabric, where N is the number of fabric ports. When a data packet arrives at a port, it is first virtually queued at the input port, a request is made to the input scheduler, and an arbitration result is received. According to the arbitration result and the output port information in the tag, the packet is sent for buffering to the crosspoint corresponding to its output port; for example, a packet from port 1 destined for port 3 goes to the crosspoint at row 1, column 3 of FIG. 4. Output scheduling then selects, according to the states of the different crosspoints in the same column, the packet to send to that output port. The fair scheduling method for reducing low-priority packet delay disclosed by the invention constitutes this input-scheduling process.
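The crosspoint addressing of FIG. 4 can be sketched as a small model. This is a hypothetical illustration: class and method names are invented, and the output scheduler here simply drains the first non-empty crosspoint in a column as a stand-in for whatever output-arbitration policy the fabric uses.

```python
from collections import deque

class CICQFabric:
    """Minimal sketch of the N x N crosspoint buffer matrix of FIG. 4.

    A packet arbitrated from input port i toward output port j is buffered
    at crosspoint (row i, column j); the output scheduler drains a column.
    """
    def __init__(self, n):
        self.n = n
        self.xp = [[deque() for _ in range(n)] for _ in range(n)]

    def deliver_to_crosspoint(self, in_port, out_port, pkt):
        # Ports are 1-indexed as in the figure: input 1 -> output 3
        # lands at row 1, column 3.
        self.xp[in_port - 1][out_port - 1].append(pkt)

    def output_schedule(self, out_port):
        """Pick one packet for this output from the non-empty crosspoints
        in its column (first non-empty row, as a simplification)."""
        for row in range(self.n):
            buf = self.xp[row][out_port - 1]
            if buf:
                return buf.popleft()
        return None
```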

Claims (4)

1. A fair scheduling method for reducing low priority packet delay comprising the steps of:
(1) configuring the threshold parameter, buffering an incoming data packet at the input, generating a tag for the packet, sorting the tags into different virtual queues according to priority and destination port, and setting a counter for each virtual queue; wherein the queuing principle for tags is: within the same priority, tags for the same destination port are buffered in the same virtual queue in first-in first-out order, and each virtual queue is served in first-in first-out order;
(2) each virtual queue sending a request to the scheduler according to its occupancy (empty/full) state;
(3) the scheduler selecting the highest-priority virtual queue from the set of request signals for grant arbitration, virtual queues of the same priority but different destination ports being granted in round-robin fashion; and meanwhile updating each virtual queue's counter by the counting rule: when a virtual queue is non-empty but another virtual queue receives the grant, its counter is incremented by 1; when the count reaches the configured threshold parameter, the queue is marked over, otherwise under;
(4) the scheduler judging whether any virtual queue is marked over; if so, locking onto the over virtual queues and granting them preferentially until each is emptied and its over state is released; otherwise returning to step (3), until all virtual queues are empty;
(5) reading the tag from the virtual queue according to the arbitration result, and according to the tag selecting the data packet and sending it to the corresponding destination port.
2. A fair scheduling method for reducing low priority packet delay as claimed in claim 1, wherein the tag in step (1) contains the storage location, output port number, and priority information of the incoming data packet.
3. A fair scheduling method for reducing low priority packet delay as claimed in claim 1, wherein in step (4), the scheduler locking onto the virtual queues marked over and granting them preferentially until they are emptied and released specifically comprises: the scheduler selecting the highest-priority virtual queue among those marked over for grant arbitration, over virtual queues of the same priority being granted by a round-robin policy, until each virtual queue becomes empty and its over state is released.
4. A fair scheduling method for reducing low priority packet delay as claimed in claim 1, wherein step (5) specifically comprises:
reading the tag from the virtual queue according to the arbitration result, extracting the storage-location sequence number and output port number from the tag, reading the corresponding data packet from the input buffer by the storage-location sequence number, and sending the packet to the crosspoint corresponding to the destination output port according to the output port number in the tag.
CN201811276225.4A 2018-10-30 2018-10-30 Fair scheduling method for reducing low-priority packet delay Active CN109379304B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811276225.4A CN109379304B (en) 2018-10-30 2018-10-30 Fair scheduling method for reducing low-priority packet delay


Publications (2)

Publication Number Publication Date
CN109379304A CN109379304A (en) 2019-02-22
CN109379304B true CN109379304B (en) 2022-05-06

Family

ID=65390626

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811276225.4A Active CN109379304B (en) 2018-10-30 2018-10-30 Fair scheduling method for reducing low-priority packet delay


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114531399B (en) * 2020-11-05 2023-09-19 中移(苏州)软件技术有限公司 Memory blocking balancing method and device, electronic equipment and storage medium
CN112491748B (en) * 2020-11-20 2022-05-27 中国电子科技集团公司第五十四研究所 Credit-based proportional fair scheduling method supporting data packet exchange

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1993001553A1 (en) * 1991-07-08 1993-01-21 Seiko Epson Corporation Microprocessor architecture capable of supporting multiple heterogeneous processors
CN1875584A (en) * 2003-10-31 2006-12-06 皇家飞利浦电子股份有限公司 Integrated circuit and method for avoiding starvation of data
TW200819990A (en) * 2006-10-27 2008-05-01 Nvidia Corp Priority adjustable system resources arbitration method
CN101478483A (en) * 2009-01-08 2009-07-08 中国人民解放军信息工程大学 Method for implementing packet scheduling in switch equipment and switch equipment
CN102394829A (en) * 2011-11-14 2012-03-28 上海交通大学 Reliability demand-based arbitration method in network on chip
CN104994132A (en) * 2015-05-20 2015-10-21 北京麓柏科技有限公司 Storage system, centralized control equipment and service quality control method and device




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant