WO2016065779A1 - Queue scheduling method and apparatus - Google Patents

Queue scheduling method and apparatus (队列的调度方法及装置)

Info

Publication number
WO2016065779A1
Authority
WO
WIPO (PCT)
Prior art keywords
scheduling
queue
state information
clock cycle
policy
Prior art date
Application number
PCT/CN2015/073274
Other languages
English (en)
French (fr)
Inventor
何波
宋军辉
刘明强
Original Assignee
中兴通讯股份有限公司
Priority date
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 (ZTE Corporation)
Publication of WO2016065779A1


Definitions

  • the present invention relates to the field of communications, and in particular to a scheduling method and apparatus for a queue.
  • in the field of data communications, queue scheduling arises wherever data crosses a change in network bandwidth or where multiple channels are aggregated. As the bandwidth of communication devices keeps increasing, the efficiency requirements on multi-queue scheduling become correspondingly higher.
  • high-efficiency multi-queue scheduling requires an ever shorter scheduling and dequeuing period, and the packets scheduled for dequeuing must leave with no gap between them, that is, back-to-back scheduling.
  • queue scheduling must be performed according to the storage situation of the packets enqueued in the different queues.
  • the unit of queue scheduling is usually a data packet or a data fragment; the packet or fragment counts need to be stored in the on-chip cache, and the queue information and status table are generally also held in the on-chip cache.
  • in the traditional design of back-to-back high-efficiency queue scheduling, dequeuing a queue requires reading information such as the packet count of that queue from the cache and then updating the cached content of that queue and of the queue information immediately, in the very clock cycle in which the dequeue is scheduled.
  • cache resources are placed at fixed positions on the chip, and operating on the cache introduces cache-unit and place-and-route delays.
  • the main clock frequency at which a communication chip operates is subject to requirements, and the delay introduced by traditional multi-queue scheduling makes the chip clock design more difficult.
  • the back-to-back scheduling method of the related art has to complete, within one clock cycle, the acquisition of the queue information from the queue status module, then perform queue scheduling according to that information, and finally update the queue status module immediately after the queue has been scheduled.
  • the queue information is usually stored in the on-chip cache, and reading and updating this queue information between caches results in a large cache-unit and place-and-route delay.
  • in a chip implementation, the system clock frequency has a minimum design requirement in order to meet the system processing bandwidth, and the large delay caused by operating on the queue-information module makes it very difficult to implement the queue scheduling design within one clock cycle.
  • the main purpose of the present invention is to provide a queue scheduling method and apparatus, which solve the related-art problem that it is difficult, within one clock cycle, to complete the acquisition of the queue information, perform queue scheduling according to that information, and update the queue information immediately after the queue has been scheduled.
  • a queue scheduling method includes: acquiring, in a first clock cycle, the state information required for scheduling one or more queues, and storing it; and performing, in a second clock cycle, the scheduling operation according to the scheduling policy corresponding to the acquired state information.
  • the state information includes: the number of packets contained in the queue.
  • performing the scheduling operation according to the scheduling policy corresponding to the acquired state information includes: determining whether the number of packets is greater than a predetermined threshold; when the determination result is yes, performing the scheduling operation on the one or more queues according to a first queue scheduling policy; and when the determination result is no, performing the scheduling operation according to a second queue scheduling policy, where the scheduling speed corresponding to the first queue scheduling policy is greater than the scheduling speed corresponding to the second queue scheduling policy.
  • the first queue scheduling operation is a back-to-back dequeue scheduling operation performed on packets of a predetermined length within a unit period; and/or the second queue scheduling operation is a dequeue scheduling operation performed over no fewer than a predetermined number of unit periods.
  • the predetermined threshold is 3 packets.
  • the unit period is one clock cycle.
  • a queue scheduling apparatus includes: an acquisition module, configured to acquire, in a first clock cycle, the state information required for scheduling one or more queues, and to store it;
  • a scheduling module, configured to perform, in a second clock cycle, the scheduling operation according to the scheduling policy corresponding to the acquired state information.
  • the state information includes: the number of packets contained in the queue.
  • the scheduling module includes: a determining unit, configured to determine whether the number of packets is greater than a predetermined threshold; a first scheduling unit, configured to perform, when the determination result is yes, the scheduling operation on the one or more queues according to a first queue scheduling policy; and a second scheduling unit, configured to perform, when the determination result is no, the scheduling operation according to a second queue scheduling policy, where the scheduling speed corresponding to the first queue scheduling policy is greater than the scheduling speed corresponding to the second queue scheduling policy.
  • the first queue scheduling operation is a back-to-back dequeue scheduling operation performed on packets of a predetermined length within a unit period; and/or the second queue scheduling operation is a dequeue scheduling operation performed over no fewer than a predetermined number of unit periods.
  • by storing the state information of one or more queues and performing the scheduling operation according to the corresponding scheduling policy across multiple clock cycles, the invention solves the related-art problem that it is difficult, within one clock cycle, to acquire the queue information, schedule the queues according to it, and update the queue information immediately after scheduling; this reduces the influence of the chip cache unit storing the state information, and of the place-and-route delay, on the timing design, and thereby helps satisfy the clock-frequency requirements of the chip design.
  • FIG. 1 is a flowchart of a queue scheduling method according to an embodiment of the present invention;
  • FIG. 2 is a schematic structural diagram of a queue scheduling apparatus according to an embodiment of the present invention;
  • FIG. 3 is a schematic diagram of a preferred structure of a queue scheduling apparatus according to an embodiment of the present invention;
  • FIG. 4 is a schematic structural diagram of a high-efficiency multi-queue scheduling apparatus according to an alternative embodiment of the present invention;
  • FIG. 5 is a flowchart of a high-efficiency multi-queue scheduling method according to an alternative embodiment of the present invention.
  • FIG. 1 is a flowchart of a queue scheduling method according to an embodiment of the present invention. As shown in FIG. 1, the flow includes the following steps:
  • Step S102: in a first clock cycle, acquire the state information required for scheduling one or more queues, and store it;
  • Step S104: in a second clock cycle, perform the scheduling operation according to the scheduling policy corresponding to the acquired state information.
  • in this embodiment, storing the state information of one or more queues and performing the scheduling operation according to the corresponding scheduling policy are spread across multiple clock cycles. This solves the related-art problem that it is difficult, within one clock cycle, to acquire the queue information, schedule the queues according to it, and update the queue information immediately after scheduling, reduces the influence of the chip cache unit storing the state information and of the place-and-route delay on the timing design, and further satisfies the clock-frequency requirements of the chip design.
  • in an optional implementation of this embodiment, the state information may include the number of packets contained in the queue; it may further include information such as the length of the packets in the queue.
  • performing the scheduling operation according to the scheduling policy corresponding to the acquired state information may be implemented by the following steps:
  • Step S11: determine whether the number of packets is greater than a predetermined threshold;
  • Step S12: when the determination result is yes, perform the scheduling operation on the one or more queues according to a first queue scheduling policy;
  • Step S13: when the determination result is no, perform the scheduling operation according to a second queue scheduling policy, where the scheduling speed corresponding to the first queue scheduling policy is greater than the scheduling speed corresponding to the second queue scheduling policy.
  • optionally, the first queue scheduling operation is a back-to-back dequeue scheduling operation performed on packets of a predetermined length within a unit period; and/or the second queue scheduling operation is a dequeue scheduling operation performed over no fewer than a predetermined number of unit periods.
  • in this embodiment, the predetermined number of unit periods for the scheduling operation is preferably 3 unit periods; it should be noted that the three unit periods are merely an example, and those skilled in the art may adjust the unit period of the scheduling operation according to actual conditions.
  • in one implementation of this embodiment, the predetermined threshold in step S11 may be chosen as 3 packets.
  • this choice of the predetermined threshold is likewise only an optional implementation and does not limit the present invention; those skilled in the art may adjust the predetermined threshold according to actual conditions.
  • optionally, the unit period is set to one clock cycle.
  • in this embodiment a queue scheduling apparatus is also provided, which is used to implement the above embodiments and preferred implementations; what has already been described is not repeated here.
  • the term "module" may be a combination of software and/or hardware that implements a predetermined function.
  • although the apparatus described in the following embodiments is preferably implemented in software, an implementation in hardware, or in a combination of software and hardware, is also contemplated.
  • FIG. 2 is a schematic structural diagram of a scheduling apparatus of a queue according to an embodiment of the present invention.
  • the apparatus includes: an acquisition module 22, configured to acquire, in a first clock cycle, the state information required for scheduling one or more queues, and to store it; and a scheduling module 24, coupled to the acquisition module 22, configured to perform, in a second clock cycle, the scheduling operation according to the scheduling policy corresponding to the acquired state information.
  • the status information includes: the number of packets included in the queue.
  • FIG. 3 is a schematic diagram of a preferred configuration of a queue scheduling apparatus according to an embodiment of the present invention.
  • the scheduling module 24 of the apparatus further includes: a determining unit 32, configured to determine whether the number of packets is greater than a predetermined threshold;
  • the first scheduling unit 34 is coupled to the determining unit 32, and is configured to perform a scheduling operation on one or more queues according to the first queue scheduling policy when the determination result is yes;
  • the second scheduling unit 36 is coupled to the determining unit 32, and is configured to perform a scheduling operation according to the second queue scheduling policy when the determination result is negative, wherein the scheduling speed corresponding to the first queue scheduling policy is greater than the second queue scheduling policy. The corresponding scheduling speed.
  • optionally, the first queue scheduling operation is a back-to-back dequeue scheduling operation performed on packets of a predetermined length within a unit period; and/or the second queue scheduling operation is a dequeue scheduling operation performed over no fewer than a predetermined number of unit periods.
  • FIG. 4 is a schematic diagram of a high-efficiency multi-queue scheduling apparatus structure according to an alternative embodiment of the present invention. As shown in FIG. 4, the apparatus includes:
  • the queue storage module 402 is configured to store the packets of the enqueued queues in the chip's internal block cache; if the number of queues is large, the packets may instead be stored in an external cache.
  • the queue status module 404 is coupled to the queue storage module 402 and is configured to record the number of packets buffered in each queue; the module is updated after every packet enqueue and after every packet dequeue.
  • the fast scheduling module 406 is coupled to the queue status module 404 and is configured to rapidly schedule queues for dequeuing according to the queue information provided by the queue status module, with a dequeue-scheduling period of one clock cycle.
  • the fast scheduling module 406 operates when the number of packets stored in a queue is relatively large, and it updates the queue status module 404 in the clock cycle following the completion of a back-to-back scheduling decision. Because a queue entering the fast scheduling module 406 holds a relatively large number of packets, updating its entry in the queue status module 404 a few clock cycles late does not cause any scheduling misoperation.
  • the slow scheduling module 408 is configured to schedule queues for dequeuing, at a slower pace, according to the queue information provided by the queue status module 404, with a dequeue-scheduling period of 3 clock cycles; it operates when the number of packets stored in a queue is relatively small.
  • in that situation the required scheduling efficiency is relatively low and continuous back-to-back scheduling is not needed; the queue status module 404 is updated in the clock cycle following the completion of each scheduling decision.
  • a new round of slow scheduling is started only after the dequeue information has been written back to the queue status module 404, which guarantees that slow scheduling always works from the latest queue information.
  • the schedule table module 410 is configured to pre-store the information of the queues that the fast/slow scheduling modules have already scheduled for dequeuing; it pre-stores information such as the queue numbers and packet lengths of 10 queues about to be dequeued, and it uses a back-pressure mechanism to control whether the scheduling modules run. The dequeue unit performs the dequeue operation according to the queue information stored in the schedule table.
  • the fast and slow scheduling modules work together to guarantee high-efficiency scheduling of multiple queues.
  • when high-bandwidth, high-volume traffic is being transported, the scheduling apparatus first enters the slow scheduling mode; once the number of packets in the buffer reaches the fast-scheduling threshold, it switches to the fast scheduling mode, and when the traffic in the communication network decreases or stops, the scheduling mode reverts to slow scheduling. Neither scheduling module needs to update the queue status module holding the queue information within a single cycle, which reduces the impact of buffered queue updates and place-and-route delay on the chip design clock frequency.
  • FIG. 5 is a flowchart of a high-efficiency multi-queue scheduling method according to an alternative embodiment of the present invention. As shown in FIG. 5, the flow includes the following steps:
  • Step S502: determine whether an enqueue-operation enable has been received; if so, execute S504, otherwise keep waiting;
  • Step S504: update the queue status table according to the enqueue operation or the scheduled dequeue operation; the queue status table records the packet count and packet-length information of each queue, and the packet count recorded in the queue status table triggers the operation of S506;
  • Step S506: determine whether the packet count recorded for a queue in the queue status table is greater than 3; if so, that queue undergoes fast scheduling in S510, otherwise slow scheduling in S508;
  • Step S508: slow queue scheduling operation. Queues buffering more than 0 but no more than 3 packets are scheduled slowly; after a queue is scheduled out, the dequeue information updates the queue status table in the next cycle. The slow scheduling period is set to 3 cycles or longer according to the system design requirements, and operations S504 and S512 are performed after the scheduling is completed;
  • Step S510: fast queue scheduling operation. Queues buffering more than 3 packets are scheduled quickly, with a fast scheduling period of 1 cycle, which ensures that packets one beat long can leave the queue back to back with high efficiency; after a queue is scheduled out, the dequeue information updates the queue status table in the next cycle. Because the update of a queue's status-table entry with the dequeue information lags behind the scheduling cycle, the fast queue scheduling operation is stopped once the number of packets buffered in the queue falls to 3 or fewer, and operations S504 and S512 are performed after the scheduling is completed;
  • Step S512: update the schedule table; the scheduling results of steps S508 and S510 are stored in the schedule table, and operation S514 is performed according to the schedule table;
  • Step S514: dequeue operation; dequeuing is performed according to the queue information.
  • this alternative embodiment unifies multi-queue scheduling efficiency with an optimized chip design timing: while still achieving back-to-back high-efficiency queue scheduling, the scheduling work of a single cycle is spread across multiple cycles, which greatly reduces the impact of the chip cache unit and the place-and-route delay on the timing design and helps meet the clock-frequency requirements of the chip design.
  • in summary, storing the state information of one or more queues and performing the scheduling operation according to the corresponding scheduling policy are completed across multiple clock cycles, which resolves the related-art difficulty described above; a software sketch of this cycle-split scheduling follows this list.
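To make the cycle split concrete, the following is a minimal, cycle-accurate software sketch of the idea rather than the patented hardware: the queue state is latched in one "clock cycle" and the scheduling decision (fast back-to-back versus slow scheduling, using the example threshold of 3 packets) is taken in the next cycle from the latched copy, so the live status may be updated late without disturbing the decision. All names (`sched_state_t`, `capture_state`, the 4-queue example data) are illustrative assumptions.

```c
/* Minimal sketch (not the patent's RTL): state is latched in cycle N and the
 * scheduling decision is taken in cycle N+1 from the latched copy, so the
 * live status table may be updated a cycle late without upsetting the
 * decision. Names, structure, and example data are assumptions.            */
#include <stdio.h>

#define NUM_QUEUES      4
#define FAST_THRESHOLD  3   /* "the predetermined threshold is 3 packets"   */

typedef struct {
    int pkt_count[NUM_QUEUES];   /* live queue status table                 */
    int latched[NUM_QUEUES];     /* copy captured in the previous cycle     */
} sched_state_t;

/* "First clock cycle": acquire and store the required state information.   */
static void capture_state(sched_state_t *s)
{
    for (int q = 0; q < NUM_QUEUES; q++)
        s->latched[q] = s->pkt_count[q];
}

/* "Second clock cycle": pick a policy from the latched state.
 * Returns the queue scheduled for dequeue this cycle, or -1 for none.      */
static int schedule_one_cycle(sched_state_t *s, int cycle)
{
    for (int q = 0; q < NUM_QUEUES; q++) {
        if (s->latched[q] > FAST_THRESHOLD) {
            /* fast policy: back-to-back, one dequeue every cycle           */
            s->pkt_count[q]--;     /* write-back lands in a later cycle     */
            return q;
        }
        if (s->latched[q] > 0 && cycle % 3 == 0) {
            /* slow policy: crude stand-in for "one dequeue per >= 3 cycles" */
            s->pkt_count[q]--;
            return q;
        }
    }
    return -1;
}

int main(void)
{
    sched_state_t s = { .pkt_count = { 6, 2, 0, 1 } };

    for (int cycle = 0; cycle < 12; cycle++) {
        /* decision uses the state latched in the previous cycle            */
        int q = schedule_one_cycle(&s, cycle);
        if (q >= 0)
            printf("cycle %2d: dequeue from queue %d\n", cycle, q);
        /* latch the (possibly just-updated) live state for the next cycle  */
        capture_state(&s);
    }
    return 0;
}
```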

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present invention provides a queue scheduling method and apparatus. The method includes: in a first clock cycle, acquiring the state information required for scheduling one or more queues, and storing it; and in a second clock cycle, performing a scheduling operation according to the scheduling policy corresponding to the acquired state information. The method solves the related-art problem that it is difficult, within one clock cycle, to acquire the queue information, schedule the queues according to it, and update the queue information immediately after the queues have been scheduled; it reduces the influence of the chip cache unit that stores the state information, and of the place-and-route delay, on the timing design, and further satisfies the clock-frequency requirements of chip design.

Description

Queue scheduling method and apparatus

Technical Field

The present invention relates to the field of communications, and in particular to a queue scheduling method and apparatus.

Background

In the field of data communications, queue scheduling arises wherever data crosses a change in network bandwidth or where multiple channels are aggregated. As the bandwidth of communication devices keeps increasing, the efficiency requirements on multi-queue scheduling become higher: high-efficiency multi-queue scheduling requires an ever shorter dequeue-scheduling period, and the packets scheduled for dequeuing must leave with no gap between them, that is, back-to-back scheduling.

Queue scheduling must be performed according to the storage situation of the packets enqueued in the different queues. The unit of queue scheduling is usually a data packet or a data fragment; the packet or fragment counts need to be stored in the on-chip cache, and the queue information and status table are generally also held in the on-chip cache. In the traditional design of back-to-back high-efficiency queue scheduling, dequeuing a queue requires reading information such as the packet count of that queue from the cache, and the cached content of that queue and of the queue information must be updated immediately, in the very clock cycle in which the dequeue is scheduled. Cache resources are placed at fixed positions on the chip, and operating on the cache introduces cache-unit and place-and-route delays. When a communication chip is designed, its main operating clock frequency is subject to requirements, and the delay introduced by traditional multi-queue scheduling makes the chip clock design more difficult.

The back-to-back scheduling method of the related art has to complete, within one clock cycle, the acquisition of the queue information from the queue status module, then perform queue scheduling according to that information, and finally update the queue status module immediately after the queue has been scheduled. The queue information is usually stored in the on-chip cache, and reading and updating this queue information between caches results in a relatively large cache-unit and place-and-route delay. In a chip implementation, the system clock frequency has a minimum design requirement in order to meet the system processing bandwidth, and the large delay caused by operating on the queue-information module makes it very difficult to implement the queue scheduling design within one clock cycle.

No effective solution has yet been proposed for the related-art problem that it is difficult, within one clock cycle, to complete the acquisition of the queue information, perform queue scheduling according to that information, and update the queue information immediately after the queue has been scheduled.
Summary of the Invention

The main purpose of the present invention is to provide a queue scheduling method and apparatus, which solve the related-art problem that it is difficult, within one clock cycle, to complete the acquisition of the queue information, perform queue scheduling according to that information, and update the queue information immediately after the queue has been scheduled.

In order to achieve the above purpose, according to an embodiment of the present invention, a queue scheduling method is provided, including: in a first clock cycle, acquiring the state information required for scheduling one or more queues, and storing it; and in a second clock cycle, performing a scheduling operation according to the scheduling policy corresponding to the acquired state information.

Preferably, the state information includes: the number of packets contained in the queue.

Preferably, performing the scheduling operation according to the scheduling policy corresponding to the acquired state information includes: determining whether the number of packets is greater than a predetermined threshold; when the determination result is yes, performing the scheduling operation on the one or more queues according to a first queue scheduling policy; and when the determination result is no, performing the scheduling operation according to a second queue scheduling policy, where the scheduling speed corresponding to the first queue scheduling policy is greater than the scheduling speed corresponding to the second queue scheduling policy.

Preferably, the first queue scheduling operation is a back-to-back dequeue scheduling operation performed on packets of a predetermined length within a unit period; and/or the second queue scheduling operation is a dequeue scheduling operation performed over no fewer than a predetermined number of unit periods.

Preferably, the predetermined threshold is 3 packets.

Preferably, the unit period is one clock cycle.

According to another embodiment of the present invention, a queue scheduling apparatus is provided, including: an acquisition module, configured to acquire, in a first clock cycle, the state information required for scheduling one or more queues, and to store it; and a scheduling module, configured to perform, in a second clock cycle, a scheduling operation according to the scheduling policy corresponding to the acquired state information.

Preferably, the state information includes: the number of packets contained in the queue.

Preferably, the scheduling module includes: a determining unit, configured to determine whether the number of packets is greater than a predetermined threshold; a first scheduling unit, configured to perform, when the determination result is yes, the scheduling operation on the one or more queues according to a first queue scheduling policy; and a second scheduling unit, configured to perform, when the determination result is no, the scheduling operation according to a second queue scheduling policy, where the scheduling speed corresponding to the first queue scheduling policy is greater than the scheduling speed corresponding to the second queue scheduling policy.

Preferably, the first queue scheduling operation is a back-to-back dequeue scheduling operation performed on packets of a predetermined length within a unit period; and/or the second queue scheduling operation is a dequeue scheduling operation performed over no fewer than a predetermined number of unit periods.

Through the present invention, storing the state information of one or more queues and performing the scheduling operation according to the corresponding scheduling policy are completed across multiple clock cycles. This solves the related-art problem that it is difficult, within one clock cycle, to acquire the queue information, schedule the queues according to it, and update the queue information immediately after the queues have been scheduled; it reduces the influence of the chip cache unit that stores the state information, and of the place-and-route delay, on the timing design, and further satisfies the clock-frequency requirements of chip design.
Brief Description of the Drawings

The drawings described here are provided for a further understanding of the present invention and form a part of this application. The exemplary embodiments of the present invention and their description are used to explain the present invention and do not unduly limit it. In the drawings:

FIG. 1 is a flowchart of a queue scheduling method according to an embodiment of the present invention;

FIG. 2 is a schematic structural diagram of a queue scheduling apparatus according to an embodiment of the present invention;

FIG. 3 is a schematic diagram of a preferred structure of a queue scheduling apparatus according to an embodiment of the present invention;

FIG. 4 is a schematic structural diagram of a high-efficiency multi-queue scheduling apparatus according to an alternative embodiment of the present invention;

FIG. 5 is a flowchart of a high-efficiency multi-queue scheduling method according to an alternative embodiment of the present invention.
Detailed Description

The present invention is described in detail below with reference to the drawings and in conjunction with the embodiments. It should be noted that, as long as they do not conflict, the embodiments of this application and the features in the embodiments may be combined with one another.

This embodiment provides a queue scheduling method. FIG. 1 is a flowchart of a queue scheduling method according to an embodiment of the present invention. As shown in FIG. 1, the flow includes the following steps:

Step S102: in a first clock cycle, acquire the state information required for scheduling one or more queues, and store it;

Step S104: in a second clock cycle, perform the scheduling operation according to the scheduling policy corresponding to the acquired state information.

Through this embodiment, storing the state information of one or more queues and performing the scheduling operation according to the corresponding scheduling policy are completed across multiple clock cycles. This solves the related-art problem that it is difficult, within one clock cycle, to acquire the queue information, schedule the queues according to it, and update the queue information immediately after the queues have been scheduled; it reduces the influence of the chip cache unit that stores the state information, and of the place-and-route delay, on the timing design, and further satisfies the clock-frequency requirements of chip design.

In an optional implementation of this embodiment, the state information may include the number of packets contained in the queue; in addition, the state information may also include information such as the length of the packets in the queue.

To carry out the policy-driven scheduling, in an optional implementation of this embodiment, performing the scheduling operation according to the scheduling policy corresponding to the acquired state information may be implemented by the following steps:

Step S11: determine whether the number of packets is greater than a predetermined threshold;

Step S12: when the determination result is yes, perform the scheduling operation on the one or more queues according to a first queue scheduling policy;

Step S13: when the determination result is no, perform the scheduling operation according to a second queue scheduling policy, where the scheduling speed corresponding to the first queue scheduling policy is greater than the scheduling speed corresponding to the second queue scheduling policy.

Optionally, the first queue scheduling operation is a back-to-back dequeue scheduling operation performed on packets of a predetermined length within a unit period; and/or the second queue scheduling operation is a dequeue scheduling operation performed over no fewer than a predetermined number of unit periods. It should be noted that in this embodiment the predetermined number of unit periods for the scheduling operation is preferably 3 unit periods; the three unit periods are merely an example, and those skilled in the art may adjust the unit period of the scheduling operation according to actual conditions.

In one implementation of this embodiment, the predetermined threshold in step S11 may be chosen as 3 packets. Likewise, this definition of the predetermined threshold is only an optional implementation and does not limit the present invention; those skilled in the art may adjust the predetermined threshold according to actual conditions.

Optionally, in order to reconcile the chip cache unit with the place-and-route delay, the unit period is set to one clock cycle.
This embodiment also provides a queue scheduling apparatus, which is used to implement the above embodiments and preferred implementations; what has already been described is not repeated here. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the apparatus described in the following embodiments is preferably implemented in software, an implementation in hardware, or in a combination of software and hardware, is also contemplated.

FIG. 2 is a schematic structural diagram of a queue scheduling apparatus according to an embodiment of the present invention. As shown in FIG. 2, the apparatus includes: an acquisition module 22, configured to acquire, in a first clock cycle, the state information required for scheduling one or more queues, and to store it; and a scheduling module 24, coupled to the acquisition module 22, configured to perform, in a second clock cycle, the scheduling operation according to the scheduling policy corresponding to the acquired state information.

Optionally, the state information includes: the number of packets contained in the queue.

FIG. 3 is a schematic diagram of a preferred structure of a queue scheduling apparatus according to an embodiment of the present invention. As shown in FIG. 3, the scheduling module 24 of the apparatus further includes: a determining unit 32, configured to determine whether the number of packets is greater than a predetermined threshold;

a first scheduling unit 34, coupled to the determining unit 32 and configured to perform, when the determination result is yes, the scheduling operation on the one or more queues according to a first queue scheduling policy; and

a second scheduling unit 36, coupled to the determining unit 32 and configured to perform, when the determination result is no, the scheduling operation according to a second queue scheduling policy, where the scheduling speed corresponding to the first queue scheduling policy is greater than the scheduling speed corresponding to the second queue scheduling policy.

Optionally, the first queue scheduling operation is a back-to-back dequeue scheduling operation performed on packets of a predetermined length within a unit period; and/or the second queue scheduling operation is a dequeue scheduling operation performed over no fewer than a predetermined number of unit periods.
To better explain the present invention, an optional embodiment of the present invention is described below with reference to the drawings.

The present invention provides a method and apparatus for implementing high-efficiency multi-queue scheduling. FIG. 4 is a schematic structural diagram of a high-efficiency multi-queue scheduling apparatus according to an optional embodiment of the present invention. As shown in FIG. 4, the apparatus includes:

a queue storage module 402, configured to store the packets of the enqueued queues in the chip's internal block cache; if the number of queues is large, the packets may instead be stored in an external cache;

a queue status module 404, coupled to the queue storage module 402 and configured to record the number of packets buffered in each queue; the module is updated after every packet enqueue and after every packet dequeue;

a fast scheduling module 406, coupled to the queue status module 404 and configured to rapidly schedule queues for dequeuing according to the queue information provided by the queue status module, with a dequeue-scheduling period of one clock cycle.

The fast scheduling module 406 operates when the number of packets stored in a queue is relatively large, and it updates the queue status module 404 in the clock cycle following the completion of a back-to-back scheduling decision. Because a queue entering the fast scheduling module 406 holds a relatively large number of packets, updating its entry in the queue status module 404 a few clock cycles late does not cause any scheduling misoperation.

A slow scheduling module 408 is configured to schedule queues for dequeuing, at a slower pace, according to the queue information provided by the queue status module 404, with a dequeue-scheduling period of 3 clock cycles. The slow scheduling module 408 operates when the number of packets stored in a queue is relatively small; in that situation the required scheduling efficiency is relatively low, continuous back-to-back scheduling is not needed, and the queue status module 404 is updated in the clock cycle following the completion of each scheduling decision. A new round of scheduling is started only after the dequeue information has been written back to the queue status module 404, which guarantees that slow scheduling always works from the latest queue information.

A schedule table module 410 is configured to pre-store the information of the queues that the fast/slow scheduling modules have already scheduled for dequeuing; it pre-stores information such as the queue numbers and packet lengths of 10 queues about to be dequeued, and it uses a back-pressure mechanism to control whether the scheduling modules run. The dequeue unit performs the dequeue operation according to the queue information stored in the schedule table.

The fast and slow scheduling modules work together to guarantee high-efficiency scheduling of multiple queues. When high-bandwidth, high-volume traffic is being transported, the scheduling apparatus first enters the slow scheduling mode; once the number of packets in the buffer reaches the fast-scheduling threshold, it switches to the fast scheduling mode, and when the traffic in the communication network decreases or stops, the scheduling mode reverts to slow scheduling. Neither of the two scheduling modules needs to update the queue status module holding the queue information within a single cycle, which reduces the impact of buffered queue updates and place-and-route delay on the chip design clock frequency.
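As a rough illustration of how these modules could interact, the sketch below models the schedule table as a small FIFO whose fullness back-pressures both schedulers. Only the 10-entry depth, the threshold of 3 packets, and the fast/slow split come from the description above; every type and function name, and the per-cycle software model itself, are assumptions made for illustration rather than the patented hardware.

```c
/* Illustrative model of modules 402-410: fast/slow schedulers feeding a      */
/* bounded schedule table whose fullness back-pressures them. Names are       */
/* assumptions; depth (10) and threshold (3) follow the description.          */
#include <stdbool.h>
#include <stdio.h>

#define NUM_QUEUES        8
#define SCHED_TABLE_DEPTH 10   /* "pre-stores information of 10 queues"       */
#define FAST_THRESHOLD    3

typedef struct { int queue_id; int pkt_len; } sched_entry_t;

typedef struct {
    sched_entry_t entries[SCHED_TABLE_DEPTH];
    int           count;                       /* entries waiting to dequeue  */
} sched_table_t;

/* Back pressure: schedulers may only run while the table has room.           */
static bool sched_table_ready(const sched_table_t *t)
{
    return t->count < SCHED_TABLE_DEPTH;
}

static void sched_table_push(sched_table_t *t, int queue_id, int pkt_len)
{
    t->entries[t->count].queue_id = queue_id;
    t->entries[t->count].pkt_len  = pkt_len;
    t->count++;
}

/* One scheduling cycle: the fast path serves a deeply filled queue every     */
/* cycle; the slow path serves a shallow queue only on every third cycle.     */
static void run_schedulers(sched_table_t *t, int pkt_count[], int pkt_len[],
                           int cycle)
{
    if (!sched_table_ready(t))          /* back pressure from the table       */
        return;

    for (int q = 0; q < NUM_QUEUES; q++) {
        if (pkt_count[q] > FAST_THRESHOLD) {            /* fast scheduling    */
            pkt_count[q]--;
            sched_table_push(t, q, pkt_len[q]);
            return;                     /* at most one decision per cycle     */
        }
    }
    if (cycle % 3 != 0)                 /* slow scheduling: every 3rd cycle   */
        return;
    for (int q = 0; q < NUM_QUEUES; q++) {
        if (pkt_count[q] > 0) {                         /* slow scheduling    */
            pkt_count[q]--;
            sched_table_push(t, q, pkt_len[q]);
            return;
        }
    }
}

/* The dequeue unit drains the table independently of the schedulers.         */
static void run_dequeue_unit(sched_table_t *t)
{
    if (t->count > 0) {
        sched_entry_t e = t->entries[0];
        for (int i = 1; i < t->count; i++)      /* naive shift; real hardware */
            t->entries[i - 1] = t->entries[i];  /* would use a ring buffer    */
        t->count--;
        printf("dequeue: queue %d, %d bytes\n", e.queue_id, e.pkt_len);
    }
}

int main(void)
{
    sched_table_t table = { .count = 0 };
    int pkt_count[NUM_QUEUES] = { 5, 1, 0, 4, 2, 0, 0, 7 };
    int pkt_len[NUM_QUEUES]   = { 64, 128, 0, 256, 64, 0, 0, 512 };

    for (int cycle = 0; cycle < 20; cycle++) {
        run_schedulers(&table, pkt_count, pkt_len, cycle);
        run_dequeue_unit(&table);
    }
    return 0;
}
```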
To further explain how the modules of the above apparatus work, this optional embodiment of the present invention also provides a method for implementing high-efficiency multi-queue scheduling, which includes the following steps. FIG. 5 is a flowchart of the high-efficiency multi-queue scheduling method according to this optional embodiment of the present invention. As shown in FIG. 5, the flow includes the following steps:

Step S502: determine whether an enqueue-operation enable has been received; if so, execute S504, otherwise keep waiting;

Step S504: update the queue status table according to the enqueue operation or the scheduled dequeue operation; the queue status table records the packet count and packet-length information of each queue, and the packet count recorded in the queue status table triggers the operation of S506;

Step S506: determine whether the packet count recorded for a queue in the queue status table is greater than 3; if so, that queue undergoes fast scheduling in S510, otherwise slow scheduling in S508;

Step S508: slow queue scheduling operation. Queues buffering more than 0 but no more than 3 packets are scheduled slowly; after a queue is scheduled out, the dequeue information updates the queue status table in the next cycle. The slow scheduling period is set to 3 cycles or longer according to the system design requirements, and operations S504 and S512 are performed after the scheduling is completed;

Step S510: fast queue scheduling operation. Queues buffering more than 3 packets are scheduled quickly, with a fast scheduling period of 1 cycle, which ensures that packets one beat long can leave the queue back to back with high efficiency; after a queue is scheduled out, the dequeue information updates the queue status table in the next cycle. Because the update of a queue's status-table entry with the dequeue information lags behind the scheduling cycle, the fast queue scheduling operation is stopped once the number of packets buffered in the queue falls to 3 or fewer, and operations S504 and S512 are performed after the scheduling is completed;

Step S512: update the schedule table; the scheduling results of steps S508 and S510 are stored in the schedule table, and operation S514 is performed according to the schedule table;

Step S514: dequeue operation; dequeuing is performed according to the queue information.
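The lag mentioned in step S510 — the status table only reflects a dequeue one cycle after it was scheduled — is the reason fast scheduling must stop once a queue appears to hold 3 or fewer packets. Below is a minimal sketch of that timing detail only; the single-queue model and all names are assumptions made for illustration, not the patented implementation.

```c
/* Sketch of the S504/S510 timing: a dequeue scheduled in cycle N is only     */
/* written back to the status table in cycle N+1, so the fast scheduler reads */
/* a count that can be one packet stale and therefore stops at <= 3 packets.  */
#include <stdio.h>

#define FAST_THRESHOLD 3

int main(void)
{
    int status_table_count = 6;  /* S504: packet count recorded for the queue */
    int pending_dequeues   = 0;  /* scheduled but not yet written back        */

    for (int cycle = 0; cycle < 10; cycle++) {
        /* S504: apply last cycle's dequeue information to the status table.  */
        status_table_count -= pending_dequeues;
        pending_dequeues = 0;

        /* S506/S510: fast scheduling only while the *recorded* count > 3.    */
        if (status_table_count > FAST_THRESHOLD) {
            pending_dequeues = 1;          /* back-to-back: one packet/cycle  */
            printf("cycle %d: fast dequeue, table shows %d packets\n",
                   cycle, status_table_count);
        } else if (status_table_count > 0 && cycle % 3 == 0) {
            pending_dequeues = 1;          /* S508: slow path, every 3 cycles */
            printf("cycle %d: slow dequeue, table shows %d packets\n",
                   cycle, status_table_count);
        }
        /* S512/S514 (schedule table and dequeue unit) are omitted here.      */
    }
    return 0;
}
```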
Through this optional embodiment of the present invention, multi-queue scheduling efficiency and an optimized chip design timing are well unified: while still achieving back-to-back high-efficiency queue scheduling, the scheduling work of a single cycle is spread across multiple cycles, which greatly reduces the impact of the chip cache unit and the place-and-route delay on the timing design and helps meet the clock-frequency requirements of chip design.

The above are only preferred embodiments of the present invention and are not intended to limit it; for those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.

Industrial Applicability

Based on the technical solution provided by the embodiments of the present invention, storing the state information of one or more queues and performing the scheduling operation according to the corresponding scheduling policy are completed across multiple clock cycles. This solves the related-art problem that it is difficult, within one clock cycle, to acquire the queue information, schedule the queues according to it, and update the queue information immediately after the queues have been scheduled; it reduces the influence of the chip cache unit that stores the state information, and of the place-and-route delay, on the timing design, and further satisfies the clock-frequency requirements of chip design.

Claims (10)

  1. A queue scheduling method, comprising:
    in a first clock cycle, acquiring state information required for scheduling one or more queues, and storing it;
    in a second clock cycle, performing a scheduling operation according to a scheduling policy corresponding to the acquired state information.
  2. The method according to claim 1, wherein the state information comprises: the number of packets contained in the queue.
  3. The method according to claim 2, wherein performing the scheduling operation according to the scheduling policy corresponding to the acquired state information comprises:
    determining whether the number of packets is greater than a predetermined threshold;
    when the determination result is yes, performing the scheduling operation on the one or more queues according to a first queue scheduling policy;
    when the determination result is no, performing the scheduling operation according to a second queue scheduling policy, wherein a scheduling speed corresponding to the first queue scheduling policy is greater than a scheduling speed corresponding to the second queue scheduling policy.
  4. The method according to claim 3, wherein
    the first queue scheduling operation is a back-to-back dequeue scheduling operation performed on packets of a predetermined length within a unit period; and/or the second queue scheduling operation is a dequeue scheduling operation performed within a predetermined number of unit periods.
  5. The method according to claim 3, wherein the predetermined threshold is 3 packets.
  6. The method according to claim 4, wherein the unit period is one clock cycle.
  7. A queue scheduling apparatus, comprising:
    an acquisition module, configured to acquire, in a first clock cycle, state information required for scheduling one or more queues, and to store it;
    a scheduling module, configured to perform, in a second clock cycle, a scheduling operation according to a scheduling policy corresponding to the acquired state information.
  8. The apparatus according to claim 7, wherein the state information comprises: the number of packets contained in the queue.
  9. The apparatus according to claim 8, wherein the scheduling module comprises:
    a determining unit, configured to determine whether the number of packets is greater than a predetermined threshold;
    a first scheduling unit, configured to perform, when the determination result is yes, the scheduling operation on the one or more queues according to a first queue scheduling policy;
    a second scheduling unit, configured to perform, when the determination result is no, the scheduling operation according to a second queue scheduling policy, wherein a scheduling speed corresponding to the first queue scheduling policy is greater than a scheduling speed corresponding to the second queue scheduling policy.
  10. The apparatus according to claim 9, wherein
    the first queue scheduling operation is a back-to-back dequeue scheduling operation performed on packets of a predetermined length within a unit period; and/or the second queue scheduling operation is a dequeue scheduling operation performed within no fewer than a predetermined number of unit periods.
PCT/CN2015/073274 2014-10-30 2015-02-25 Queue scheduling method and apparatus WO2016065779A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410606263.7A CN105634983A (zh) 2014-10-30 2014-10-30 Queue scheduling method and apparatus
CN201410606263.7 2014-10-30

Publications (1)

Publication Number Publication Date
WO2016065779A1 true WO2016065779A1 (zh) 2016-05-06

Family

ID=55856492

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/073274 WO2016065779A1 (zh) 2014-10-30 2015-02-25 队列的调度方法及装置

Country Status (2)

Country Link
CN (1) CN105634983A (zh)
WO (1) WO2016065779A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109656169B (zh) * 2018-12-20 2021-08-27 深圳南方德尔汽车电子有限公司 多重调度表切换方法、装置、计算机设备及存储介质
CN112804162B (zh) * 2019-11-13 2024-04-09 深圳市中兴微电子技术有限公司 一种调度方法、装置、终端设备和存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101242341A (zh) * 2007-02-07 2008-08-13 华为技术有限公司 一种报文调度方法及装置
CN103441954A (zh) * 2013-08-27 2013-12-11 福建星网锐捷网络有限公司 一种报文发送方法、装置及网络设备
CN103546392A (zh) * 2012-07-12 2014-01-29 中兴通讯股份有限公司 队列单周期调度方法和装置
CN104022965A (zh) * 2014-05-20 2014-09-03 华为技术有限公司 一种报文出队调度的方法和设备


Also Published As

Publication number Publication date
CN105634983A (zh) 2016-06-01

Similar Documents

Publication Publication Date Title
US8601181B2 (en) System and method for read data buffering wherein an arbitration policy determines whether internal or external buffers are given preference
JP5893762B2 (ja) クライアント装置上でパケット送信をスケジュールするためのシステム及び方法
CN104753818B (zh) 一种队列调度方法和装置
US8341437B2 (en) Managing power consumption and performance in a data storage system
WO2017206587A1 (zh) 一种优先级队列调度的方法及装置
JP2011505036A5 (zh)
US9684461B1 (en) Dynamically adjusting read data return sizes based on memory interface bus utilization
CN112084136A (zh) 队列缓存管理方法、系统、存储介质、计算机设备及应用
CN102662889B (zh) 中断处理方法、中断控制器及处理器
CN108462654B (zh) 增强型gjb289a总线通信管理和调度方法
US20140317220A1 (en) Device for efficient use of packet buffering and bandwidth resources at the network edge
CN106411778B (zh) 数据转发的方法及装置
US10554568B2 (en) Technologies for network round-trip time estimation
WO2016202158A1 (zh) 一种报文传输方法、装置及计算机可读存储介质
CN106293523A (zh) 一种对非易失性存储的io请求响应方法及装置
WO2016065779A1 (zh) 队列的调度方法及装置
WO2014075488A1 (zh) 队列管理方法及装置
CN108011845A (zh) 一种减少时延的方法和装置
WO2012119414A1 (zh) 交换网的流量控制方法和装置
EP2922257A1 (en) Traffic management scheduling method and apparatus
WO2012159362A1 (zh) 一种流量整形的方法及设备
JP2018505591A5 (zh)
US9590909B2 (en) Reducing TCP timeouts due to Incast collapse at a network switch
TWI609579B (zh) 資料封包訊務形塑技術
WO2018209781A1 (zh) 一种调度方法及终端

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15855552

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15855552

Country of ref document: EP

Kind code of ref document: A1