WO2016019554A1 - A method and apparatus for queue management - Google Patents

A method and apparatus for queue management

Info

Publication number
WO2016019554A1
WO2016019554A1 (application PCT/CN2014/083916)
Authority
WO
WIPO (PCT)
Prior art keywords
queue
sram
packet
message
threshold
Prior art date
Application number
PCT/CN2014/083916
Other languages
English (en)
French (fr)
Inventor
陆玉春
张健
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to PCT/CN2014/083916 priority Critical patent/WO2016019554A1/zh
Priority to CN201480080687.2A priority patent/CN106537858B/zh
Priority to EP14899360.3A priority patent/EP3166269B1/en
Publication of WO2016019554A1 publication Critical patent/WO2016019554A1/zh
Priority to US15/425,466 priority patent/US10248350B2/en

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601: Interfaces specially adapted for storage systems
    • G06F 3/0628: Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0629: Configuration or reconfiguration of storage systems
    • G06F 3/0631: Configuration or reconfiguration of storage systems by allocating resources to storage systems
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/50: Queue scheduling
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601: Interfaces specially adapted for storage systems
    • G06F 3/0602: Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/0604: Improving or facilitating administration, e.g. storage management
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601: Interfaces specially adapted for storage systems
    • G06F 3/0668: Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/0671: In-line storage system
    • G06F 3/0683: Plurality of storage devices
    • G06F 3/0685: Hybrid storage combining heterogeneous device types, e.g. hierarchical storage, hybrid arrays
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00: Packet switching elements
    • H04L 49/90: Buffering arrangements
    • H04L 49/901: Buffering arrangements using storage descriptor, e.g. read or write pointers

Definitions

  • the embodiments of the present invention relate to the field of communications, and in particular, to a method and an apparatus for queue management.
  • the rate of a single line card (English: line card, LC for short) evolves from 10 gigabits per second (English: gigabit per second, Gbps for short) through 40 Gbps, 100 Gbps, and 200 Gbps to 400 Gbps and higher.
  • correspondingly, the processing capacity of the router's single line card evolves from 15 million packets per second (English: mega packet per second, Mpps for short) through 60 Mpps, 150 Mpps, and 300 Mpps to 600 Mpps and higher, which poses a challenge to the speed of the memory on the router's line card.
  • the scheduling of a message queue can be performed by accessing the queue of packet descriptors (English: packet descriptor, PD for short) corresponding to the message queue, that is, by performing read and write operations on the PD queue.
  • the PD queue is stored in a dynamic random access memory (English: dynamic random access memory, DRAM for short). Therefore, when scheduling a message queue, it is necessary to perform read and write operations on the DRAM.
  • the row cycle time (English: Row Cycle Time, tRC for short) of DRAM is about 40 to 50 nanoseconds (English: nanosecond, abbreviation: ns).
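Assuming at most one random DRAM access per row cycle, the access rate quoted next follows directly from tRC; a minimal sketch (the function name is ours, not the patent's):

```c
#include <assert.h>

/* Worst-case random access rate implied by the row cycle time tRC:
 * at most one row activation per tRC, so rate = 1 / tRC.
 * Returned in Mpps: (1e9 ns/s) / (tRC ns) = (1000 / tRC) million per second. */
static double dram_access_rate_mpps(double trc_ns) {
    return 1000.0 / trc_ns;
}
```

With tRC = 40 ns this gives 25 Mpps, matching the figure in the text; 50 ns gives 20 Mpps.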
  • accordingly, the DRAM access rate is approximately 25 Mpps. Even if the access rate of the DRAM could be increased to 100 Mpps, it still could not match a packet queue rate of 300 Mpps. In the above technical solution, the rate of reading the PD queue is limited by the attributes of the DRAM, which affects the dequeue efficiency of the message queue.

Summary of the invention
  • the technical problem to be solved by the embodiments of the present invention is to provide a method and a device for managing control information of a queue, which can solve the problem that the capacity and bandwidth limitation of the DRAM memory in the prior art affect the efficiency of packet queue dequeue.
  • a method for queue management is provided, including: writing a PD queue into the DRAM, where the PD queue includes a plurality of PDs, and the plurality of PDs correspond to a plurality of messages included in the first message queue.
  • At least one PD in the PD queue is written into a static random access memory (English: static random access memory, SRAM for short), and the at least one PD includes a queue header of the PD queue.
  • the method further includes:
  • the at least one PD in the PD queue is written into the SRAM, and specifically includes:
  • At least one PD in the PD queue is written into the SRAM.
  • the determining that the credit of the first packet queue is greater than or equal to a preset first threshold and that the capacity of the storage space available in the SRAM is greater than or equal to a preset second threshold includes:
  • the set of to-be-activated message queues includes a message queue that satisfies a preset to-be-activated condition, and the to-be-activated condition is that, when the determination is performed, the credit of the message queue is greater than or equal to the first threshold and the capacity of the storage space available in the SRAM is less than the second threshold;
  • the writing the at least one PD in the PD queue to the SRAM specifically includes:
  • At least one PD in the PD queue is written into the SRAM if the capacity of the storage space available in the SRAM is greater than or equal to the second threshold and the set of the to-be-activated message queue is empty.
  • the set of to-be-activated message queues includes a message queue that satisfies a preset to-be-activated condition, and the to-be-activated condition is that, when the determination is performed, the credit of the message queue is greater than or equal to the first threshold and the capacity of the storage space available in the SRAM is less than the second threshold;
  • the writing the at least one PD in the PD queue to the SRAM specifically includes:
  • if the capacity of the storage space available in the SRAM is greater than or equal to the second threshold and the first packet queue is the highest-priority message queue in the set of to-be-activated message queues, at least one PD in the PD queue is written into the SRAM.
  • the writing the PD queue to the DRAM further includes:
  • Determining that the credit of the first packet queue is greater than or equal to a preset first threshold and the capacity of the storage space available in the SRAM is greater than or equal to a preset second threshold further includes:
  • the writing the at least one PD in the PD queue to the SRAM specifically includes:
  • At least one PD in the PD queue is written into the SRAM based on the indication of the second state.
  • the method further includes: if the credit of the first packet queue is greater than or equal to the first threshold and the capacity of the storage space available in the SRAM is less than the second threshold, modifying the state of the first packet queue to a third state;
  • the method further includes:
  • the PD of the to-be-entered message is written into the SRAM.
  • the DRAM includes a plurality of memory blocks (English: bank), and the DRAM stores multiple PD queues corresponding to multiple message queues; multiple queue headers of the multiple PD queues are respectively stored in a plurality of banks, the plurality of queue headers corresponding to the plurality of banks, and the plurality of PD queues corresponding to the plurality of queue headers.
  • the method further includes:
  • the method further includes:
  • the at least two PD queues respectively include at least two new queue headers; if the at least two new queue headers are stored in the same bank, receiving at least two dequeue requests, placing the at least two dequeue requests into a dequeue request queue corresponding to the same bank, and responding to the at least two dequeue requests using the dequeue request queue, the at least two dequeue requests corresponding to the at least two PD queues.
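The per-bank request handling above can be sketched in C. This is a hypothetical illustration: the bank count, the address-to-bank mapping, and all names are our assumptions, not the patent's.

```c
#include <assert.h>
#include <stddef.h>

/* Sketch: when the new queue heads of two PD queues land in the same DRAM
 * bank, their dequeue requests go into that bank's request FIFO and are
 * served one at a time, avoiding a bank conflict. */
#define NBANKS 4
#define QDEPTH 8

struct bank_req_queue {
    int req[QDEPTH];
    size_t head, tail;                 /* monotonically increasing FIFO indices */
};

static struct bank_req_queue banks[NBANKS];

/* Illustrative mapping from a queue head's DRAM address to its bank. */
static int bank_of(unsigned head_addr) { return (int)(head_addr % NBANKS); }

static void submit_dequeue_request(unsigned head_addr, int req_id) {
    struct bank_req_queue *b = &banks[bank_of(head_addr)];
    b->req[b->tail++ % QDEPTH] = req_id;
}

/* Serve one pending request of a bank; returns -1 if the FIFO is empty. */
static int serve_bank(int bank) {
    struct bank_req_queue *b = &banks[bank];
    if (b->head == b->tail) return -1;
    return b->req[b->head++ % QDEPTH];
}
```

Two requests whose queue heads map to the same bank are answered in submission order from the same FIFO.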
  • a device for queue management including:
  • a first write module configured to write a PD queue to the DRAM, where the PD queue includes multiple PDs, where the multiple PDs correspond to multiple packets included in the first packet queue.
  • a second write module configured to write at least one PD in the PD queue written by the first write module into the SRAM, where the at least one PD includes a queue header of the PD queue.
  • the method further includes:
  • a first determining module configured to, after the first writing module writes the PD queue into the DRAM and before the second writing module writes at least one PD in the PD queue into the SRAM, determine that the credit of the first packet queue is greater than or equal to a preset first threshold and that the capacity of the storage space available in the SRAM is greater than or equal to a preset second threshold;
  • the second write module is specifically configured to write at least one PD in the PD queue into the SRAM if a capacity of a storage space available in the SRAM is greater than or equal to the second threshold.
  • the first determining module is further configured to:
  • the set of to-be-activated message queues includes a message queue that satisfies a preset to-be-activated condition, and the to-be-activated condition is that, when the determination is performed, the credit of the message queue is greater than or equal to the first threshold and the capacity of the storage space available in the SRAM is less than the second threshold;
  • the second write module is specifically configured to: if the capacity of the storage space available in the SRAM is greater than or equal to the second threshold and the set of to-be-activated message queues is empty, write at least one PD in the PD queue into the SRAM.
  • the first determining module is further configured to:
  • the set of to-be-activated message queues includes a message queue that satisfies a preset to-be-activated condition, and the to-be-activated condition is that, when the determination is performed, the credit of the message queue is greater than or equal to the first threshold and the capacity of the storage space available in the SRAM is less than the second threshold;
  • the second write module is specifically configured to: if the capacity of the storage space available in the SRAM is greater than or equal to the second threshold and the first packet queue is the highest-priority message queue in the set of to-be-activated message queues, write at least one PD in the PD queue into the SRAM.
  • the first write module is further configured to:
  • the first determining module is further configured to:
  • the second write module is specifically configured to write at least one PD in the PD queue into the SRAM based on the indication of the second state.
  • the method further includes:
  • a modifying module configured to, if the credit of the first packet queue is greater than or equal to the first threshold and the capacity of the storage space available in the SRAM is less than the second threshold, modify the state of the first packet queue to the third state;
  • the joining module is configured to add the first packet queue in the third state to the set of the to-be-activated message queue.
  • the method further includes:
  • a second determining module configured to determine whether the to-be-enqueued packet of the second packet queue meets the preset fast packet identification condition;
  • a third write module configured to, if the second determining module determines that the to-be-enqueued packet does not meet the preset fast packet identification condition, write the PD of the to-be-enqueued packet into the DRAM;
  • a fourth write module configured to, if the second determining module determines that the to-be-enqueued packet meets the preset fast packet identification condition, write the PD of the to-be-enqueued packet into the SRAM.
  • the DRAM includes multiple banks, and the DRAM stores multiple PD queues corresponding to multiple message queues; multiple queue headers of the multiple PD queues are respectively stored in multiple banks, the multiple queue headers corresponding to the multiple banks, and the multiple PD queues corresponding to the multiple queue headers.
  • the method further includes:
  • a first dequeuing module configured to perform a dequeuing operation of at least two of the plurality of message queues according to the plurality of queue headers stored in the plurality of banks.
  • the method further includes:
  • a second dequeue module configured to perform a dequeuing operation on at least two of the plurality of PD queues, where the at least two PD queues respectively include at least two new queue headers;
  • a response module configured to, if the at least two new queue headers are stored in the same bank, receive at least two dequeue requests, place the at least two dequeue requests into a dequeue request queue corresponding to the same bank, and respond to the at least two dequeue requests using the dequeue request queue, the at least two dequeue requests corresponding to the at least two PD queues.
  • the PD queue corresponding to the first message queue is written into the DRAM, and at least one PD including the queue header of the PD queue is written into the SRAM. Therefore, at least one PD of the PD queue corresponding to the first message queue is stored in the SRAM, from which the PD queue can be read.
  • the first packet queue is a first in first out (FIFO) queue
  • a dequeue operation of the first packet queue may be performed by performing a read operation on the queue header of the PD queue.
  • the queue header of the PD queue is stored in the SRAM, so the read operation on the queue header of the PD queue is performed through a read operation on the SRAM.
  • the rate at which SRAM is read is not limited by the properties of the DRAM. Therefore, in the foregoing technical solution, the dequeue efficiency of the PD queue is high, which helps improve the dequeue efficiency of the first packet queue.
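The split described above (PD queue body in DRAM, head segment cached in SRAM, dequeues hitting only the SRAM part) can be sketched as follows. Sizes, the one-PD refill policy, and all names are illustrative assumptions, not the patent's layout.

```c
#include <assert.h>
#include <stddef.h>

struct pd { unsigned pkt_addr; };      /* packet descriptor: address of the packet */

#define SRAM_PDS 4
struct pd_queue {
    struct pd sram[SRAM_PDS];          /* head segment cached in SRAM */
    size_t sram_cnt;
    struct pd dram[64];                /* tail segment stored in DRAM */
    size_t dram_cnt;
};

/* Dequeue one PD from the SRAM-resident head, then refill one PD from DRAM,
 * so the fast SRAM cache of the queue head stays populated. */
static int dequeue_pd(struct pd_queue *q, struct pd *out) {
    if (q->sram_cnt == 0) return 0;    /* nothing cached in SRAM */
    *out = q->sram[0];
    for (size_t i = 1; i < q->sram_cnt; i++) q->sram[i - 1] = q->sram[i];
    q->sram_cnt--;
    if (q->dram_cnt > 0) {             /* pull the next PD in order from DRAM */
        q->sram[q->sram_cnt++] = q->dram[0];
        for (size_t i = 1; i < q->dram_cnt; i++) q->dram[i - 1] = q->dram[i];
        q->dram_cnt--;
    }
    return 1;
}
```

The dequeue path touches only the SRAM array; DRAM is read off the critical path when the cached head segment is refilled.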
  • FIG. 1 is a schematic flowchart of a method for queue management according to an embodiment of the present invention
  • FIG. 2 is a schematic diagram of a data structure of the SRAM in FIG. 1;
  • FIG. 3 is a schematic diagram of a data structure of the DRAM in FIG. 1;
  • FIG. 4 is a schematic diagram of a fast packet identification according to an embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of an apparatus for queue management according to an embodiment of the present invention
  • FIG. 6 is another schematic structural diagram of an apparatus for queue management according to an embodiment of the present invention.

Detailed description
  • the packet queues involved in the embodiments of the present invention may all be FIFO queues. The execution body of the queue management method provided by the embodiment of the present invention may be a memory controller (English: memory controller).
  • the memory controller can control DRAM and SRAM.
  • the execution body of the queue management method and the queue management device provided by the embodiment of the present invention may all be a traffic management chip.
  • the traffic management chip includes the memory controller.
  • the traffic management chip can be coupled to the DRAM by using the memory controller.
  • the traffic management chip can be coupled to the SRAM through the memory controller.
  • the execution body of the queue management method and the device for queue management provided by the embodiment of the present invention can both be an LC.
  • the LC includes the traffic management chip.
  • the execution body of the queue management method and the queue management device provided by the embodiment of the present invention may all be network devices.
  • the network device includes the line card.
  • the network device may be a router, a network switch, a firewall, a load balancer, a data center, a base station, a packet transport network (English: packet transport network, PTN for short) device, or a wavelength division multiplexing (English: wavelength division multiplexing, WDM for short) device.
  • the PD involved in the embodiment of the present invention may include a storage address of a corresponding message or a pointer for a corresponding message, if not stated to the contrary.
  • the PD may further include a time when the corresponding message is received.
  • the PD may further include a packet header of the corresponding packet.
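The fields listed above suggest a concrete PD layout. The struct below is an illustrative assumption (the text only fixes the contents, not widths); two 32-bit fields happen to match the 8-byte PD used in the later example.

```c
#include <stdint.h>
#include <assert.h>

/* A PD as described in the text: at minimum the storage address (or a
 * pointer) of the corresponding packet, optionally the time the packet was
 * received. Field widths are our assumption, not the patent's layout. */
struct packet_descriptor {
    uint32_t pkt_addr;     /* storage address of the corresponding packet */
    uint32_t recv_time;    /* optional: time the packet was received */
};
```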
  • the message queue to which the embodiment of the present invention relates may be stored in DRAM or SRAM.
  • the embodiment of the present invention does not limit the storage location of the message queue.
  • FIG. 1 is a schematic flowchart diagram of a queue management method according to an embodiment of the present invention. The method includes:
  • the PD queue is written into the DRAM, where the PD queue includes multiple PDs, and the multiple PDs correspond to multiple packets included in the first packet queue.
  • the first packet queue is associated with the PD queue.
  • the number of packets included in the first packet queue is equal to the number of PDs included in the PD queue.
  • the plurality of messages in the first message queue correspond to a plurality of PDs in the PD queue.
  • the traffic management chip writes the PD queue corresponding to the first queue to the DRAM.
  • the first message queue includes 8 messages. There are also 8 PDs in the PD queue associated with the first message queue. The eight packets in the first packet queue correspond to the eight PDs in the PD queue. For example, the traffic management chip writes the PD queue to the DRAM.
  • the traffic management chip writes at least one PD in the PD queue corresponding to the first packet queue into the SRAM.
  • the at least one PD written includes the queue header of the PD queue.
  • the PD at the queue header of the PD queue is used to describe the message at the queue header of the first packet queue.
  • the message at the queue header of the first message queue is the first received of the plurality of messages.
  • S102 may further include: deleting the at least one PD in the DRAM.
  • the PD queue is stored in the SRAM and in the DRAM.
  • the method further includes:
  • the writing the at least one PD in the PD queue to the SRAM specifically includes:
  • At least one PD in the PD queue is written into the SRAM.
  • the credit of the first packet queue may be used to indicate the number of dequeuable packets of the first packet queue or the number of bytes of the dequeuable packets.
  • the traffic management chip may obtain the credit of the first packet queue from a queue descriptor (English: queue descriptor, QD) of the first packet queue.
  • the QD is used to describe the first packet queue. For the QD, refer to Table 3.
  • the traffic management chip determines whether the credit of the first message queue is greater than or equal to a preset first threshold, and determines whether the capacity of the available storage space in the SRAM is greater than or equal to a preset second threshold. If it is determined that the credit of the first message queue is greater than or equal to the preset first threshold and the capacity of the storage space available in the SRAM is greater than or equal to the preset second threshold, S102 is performed.
  • the preset first threshold may be determined according to one or more of the priority, queue length, number of messages, and sleep time of the first message queue, and the capacity of the storage space available in the SRAM. For example, for the first packet queue, the higher the priority, the longer the queue length, the larger the number of packets, or the longer the sleep time, the smaller the preset first threshold can be. Conversely, for the first packet queue, the lower the priority, the shorter the queue length, the smaller the number of packets, or the shorter the sleep time, the larger the preset first threshold can be.
  • when the first threshold is used to indicate the number of dequeuable packets of the first packet queue, the first threshold may be greater than or equal to 1. When the first threshold is used to indicate the number of bytes of the dequeuable packets of the first packet queue, the first threshold may be greater than or equal to the number of bytes of one packet in the first packet queue.
  • the preset second threshold is greater than or equal to the size of one PD in the PD queue.
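The two gating checks just described can be stated as a pure function; a minimal sketch, with names of our choosing:

```c
#include <assert.h>

/* The PDs of the first packet queue may be written into SRAM only when the
 * queue's credit reaches the first threshold AND the SRAM has at least the
 * second threshold of free space (itself at least the size of one PD). */
static int may_write_to_sram(unsigned credit, unsigned first_threshold,
                             unsigned sram_free_bytes, unsigned second_threshold) {
    return credit >= first_threshold && sram_free_bytes >= second_threshold;
}
```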
  • at least one PD in the PD queue can be written to the SRAM.
  • the queue header of the PD queue is written into the SRAM first, and the queue tail of the PD queue is written into the SRAM last.
  • each PD in the PD queue has a size of 8 bytes, and the preset second threshold is 8 bytes.
  • the available storage space in the SRAM is 25 bytes, which is larger than the second threshold.
  • the available storage space in the SRAM can hold at most 3 PDs.
  • the PD queue includes 8 PDs before the write operation. If the above capacity conditions are met, the traffic management chip can write the first 3 PDs in the PD queue into the SRAM.
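The worked example above (25 free bytes, 8-byte PDs, 8 queued PDs, 3 writable) reduces to a floor division capped by the queue length; function and parameter names are ours:

```c
#include <assert.h>

/* Number of PDs that can be moved into SRAM: bounded by the free SRAM space
 * (floor(free / pd_size)) and by how many PDs the queue actually holds. */
static unsigned pds_writable(unsigned sram_free_bytes, unsigned pd_size_bytes,
                             unsigned pds_in_queue) {
    unsigned fit = sram_free_bytes / pd_size_bytes;
    return fit < pds_in_queue ? fit : pds_in_queue;
}
```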
  • determining that the credit of the first packet queue is greater than or equal to the first threshold and that the capacity of the storage space available in the SRAM is greater than or equal to the second threshold further includes:
  • the set of to-be-activated message queues includes a message queue that satisfies a preset to-be-activated condition, and the to-be-activated condition is that, when the determination is performed, the credit of the message queue is greater than or equal to the first threshold and the capacity of the storage space available in the SRAM is less than the second threshold;
  • the writing the at least one PD in the PD queue to the SRAM specifically includes:
  • At least one PD in the PD queue is written into the SRAM if the capacity of the storage space available in the SRAM is greater than or equal to the second threshold and the set of the to-be-activated message queue is empty.
  • the traffic management chip can maintain a set of to-be-activated message queues.
  • the set of to-be-activated message queues includes a message queue that satisfies a preset condition to be activated.
  • the to-be-activated condition is that the credit of the message queue is greater than or equal to the first threshold when the determination is performed and the storage space available in the SRAM is less than the second threshold when the determination is performed.
  • the set of to-be-activated message queues may include 0, 1 or more queues of messages to be activated. Each pending message queue corresponds to a priority.
  • the priority of a to-be-activated message queue is used to indicate the order in which at least one PD of its corresponding PD queue is written to the SRAM. At least one PD of the PD queue corresponding to a higher-priority to-be-activated message queue is written into the SRAM before at least one PD of the PD queue corresponding to a lower-priority to-be-activated message queue. Each such at least one PD includes the queue header of the corresponding PD queue.
  • the set of to-be-activated message queues is empty, indicating that the number of to-be-activated message queues in the set of to-be-activated message queues is 0.
  • the set of to-be-activated message queues is non-empty, indicating that the number of to-be-activated message queues in the set of to-be-activated message queues is greater than zero.
  • SRAM is managed in the form of Cache Line.
  • SRAM consists of Y Cache Lines.
  • Each Cache Line has an optional QD cache space, X PD cache spaces, and one CTRL entry.
  • Each Cache Line can only be occupied by one PD queue, and one PD queue can occupy multiple Cache Lines.
  • the PDs in each Cache Line are stored sequentially to form a linked-list fragment, and multiple Cache Lines are connected through the N_PTR pointers in their CTRL entries to form a singly linked list.
  • the singly linked list forms the head portion of the PD queue of the message queue.
  • the head portion of the queue in the SRAM and the tail portion of the PD queue stored in the DRAM are connected by the HH_PTR pointer in the CTRL entry.
  • the CTRL entry of the PD queue that occupies multiple Cache Lines is stored in the Cache Line of the first row.
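The Cache Line organization above can be sketched as a C structure. The field names N_PTR and HH_PTR follow the text; the sizes and the helper walking the chain are illustrative assumptions.

```c
#include <assert.h>

#define X 8          /* PD cache spaces per Cache Line (assumed) */
#define NLINES 4     /* Cache Lines in the SRAM (assumed) */

struct cache_line {
    unsigned pd[X];     /* PD cache spaces */
    unsigned pd_cnt;    /* PDs currently stored in this line */
    int n_ptr;          /* CTRL: next Cache Line of this PD queue, -1 = end */
    unsigned hh_ptr;    /* CTRL (first line): DRAM address of the queue's tail portion */
};

/* Count the PDs of the head portion by walking the N_PTR singly linked list. */
static unsigned head_portion_pds(const struct cache_line *lines, int first) {
    unsigned n = 0;
    for (int i = first; i != -1; i = lines[i].n_ptr) n += lines[i].pd_cnt;
    return n;
}
```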
  • Table 1 shows an example of a CTRL entry.
  • Table 2 shows an example of a BITMAP entry.
  • the traffic management chip can determine whether a Cache Line in the SRAM is available through the VALID_BITMAP in the BITMAP entry. For example, 1 means available and 0 means occupied. For example, suppose the Cache Line on line 1 of the SRAM is available, the second threshold is the size of 1 PD, and the available storage space of the Cache Line on line 1 is 8 PDs, which is greater than the second threshold.
  • the traffic management chip allocates the Cache Line of line 1 to the first message queue; the PD cache space and the QD cache space of that Cache Line can store 8 PDs and 1 QD respectively. At this time, 8 PDs of the first message queue can be written into the line-1 Cache Line of the SRAM.
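The VALID_BITMAP lookup just described amounts to a find-first-set-bit allocation; a minimal sketch using the text's convention (1 = available, 0 = occupied), with the function name assumed:

```c
#include <assert.h>

/* Allocate a Cache Line via the VALID_BITMAP: return the first available
 * line (lowest set bit) and mark it occupied, or -1 if none is free. */
static int alloc_cache_line(unsigned *valid_bitmap, int nlines) {
    for (int i = 0; i < nlines; i++) {
        if (*valid_bitmap & (1u << i)) {
            *valid_bitmap &= ~(1u << i);   /* clear the bit: now occupied */
            return i;
        }
    }
    return -1;
}
```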
  • the set of to-be-activated message queues includes a message queue that satisfies a preset to-be-activated condition, and the to-be-activated condition is that, when the determination is performed, the credit of the message queue is greater than or equal to the first threshold and the capacity of the storage space available in the SRAM is less than the second threshold;
  • the writing the at least one PD in the PD queue to the SRAM specifically includes:
  • At least one PD in the PD queue is written into the SRAM.
  • the traffic management chip maintains a set of to-be-activated message queues.
  • the set of the message queue to be activated includes a message queue that satisfies a preset condition to be activated.
  • the to-be-activated condition is that the credit of the message queue is greater than or equal to the first threshold when the determination is performed, and the capacity of the storage space available in the SRAM is smaller than the second threshold when the determination is performed.
  • the set of to-be-activated message queues may include 0, 1 or more to-be-activated message queues. Each pending message queue corresponds to a priority.
  • the priority of the to-be-activated message queue is used to indicate the order in which at least one PD in the PD queue corresponding to the to-be-activated message queue is written to the SRAM. At least one PD in the PD queue corresponding to the priority-to-be-activated message queue is written into the SRAM before the at least one PD in the PD queue corresponding to the lower-priority to-be-activated message queue. Each of the at least one PD respectively includes a queue header of the corresponding PD queue. Collection of message queues to be activated - - Empty indicates that the number of queues to be activated in the set of queues to be activated is 0. The set of to-be-activated message queues is non-empty, indicating that the number of to-be-activated message queues in the set of to-be-activated message queues is greater than zero.
  • if the first packet queue is the highest-priority packet queue in the set of to-be-activated message queues, that is, the first packet queue is located at the head of the to-be-activated set, at least one PD of the PD queue associated with the first packet queue is written into the SRAM, and the at least one PD includes the queue header of the PD queue.
  • the writing the PD queue to the DRAM further includes:
  • Determining that the credit of the first packet queue is greater than or equal to a preset first threshold and the capacity of the storage space available in the SRAM is greater than or equal to a preset second threshold further includes:
  • the writing the at least one PD in the PD queue to the SRAM specifically includes:
  • At least one PD in the PD queue is written into the SRAM based on the indication of the second state.
  • the state of the first packet queue includes a first state and a second state. The first state indicates that all PDs of the PD queue associated with the first message queue are stored in the DRAM; the second state indicates that a part of the PDs in the PD queue is stored in the SRAM and another part is stored in the DRAM, or that the PDs of the PD queue associated with the first message queue are all stored in the SRAM.
  • for example, the number of PDs in the PD queue associated with the first message queue is 32. If 28 PDs of the PD queue are stored in the DRAM and 4 PDs are stored in the SRAM, or all 32 PDs are stored in the SRAM, the first message queue is in the second state; if all 32 PDs of the PD queue are stored in the DRAM, the first message queue is in the first state.
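The first/second state rule from the example reduces to one predicate; a sketch with names of our choosing (the third state, a queue waiting for SRAM space, is assigned by a separate mechanism and is not derivable from PD placement alone):

```c
#include <assert.h>

/* State 1: all PDs of the queue reside in DRAM.
 * State 2: at least one PD resides in SRAM (possibly all of them). */
static int queue_state(unsigned pds_in_dram, unsigned pds_in_sram) {
    (void)pds_in_dram;                 /* only SRAM residency decides 1 vs 2 */
    return pds_in_sram > 0 ? 2 : 1;
}
```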
  • Table 3 shows an example of the QD of the queue, and Table 4 shows an example of the PD of a message in the queue.
  • the traffic management chip determines that the credit of the first packet queue is greater than or equal to the preset first threshold, that the capacity of the storage space available in the SRAM is greater than or equal to the preset second threshold, and that the set of to-be-activated queues is empty; the state of the first packet queue is then changed to the second state. The traffic management chip can obtain the credit and the state of the first message queue from the latest QD.
  • Table 3 shows the schematic diagram of the QD;
  • Table 4 shows the schematic diagram of the PD.
  • the method for managing a queue further includes: if the credit of the first packet queue is greater than or equal to the first threshold and the capacity of the storage space available in the SRAM is less than the second threshold, modifying the state of the first packet queue to a third state;
  • the state of the first packet queue further includes a third state. If the credit of the first packet queue is greater than or equal to the first threshold and the storage space available in the SRAM is less than the second threshold, the state of the first packet queue is changed to the third state and the first packet queue is added to the set of queues to be activated; the set of queues to be activated is then non-empty.
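The three-way decision described above can be sketched as follows. This is a minimal illustration assuming plain integer thresholds; the enum names and function are assumptions, not taken from the patent.

```c
#include <assert.h>

/* Illustrative three-state decision following the thresholds in the
 * text; names are hypothetical, not the patent's. */
typedef enum {
    STATE_FIRST = 1, /* entire PD queue stays in DRAM               */
    STATE_SECOND,    /* head PDs may be written into SRAM           */
    STATE_THIRD      /* queue waits in the to-be-activated set      */
} queue_state;

static queue_state decide_state(long credit, long first_threshold,
                                long sram_free, long second_threshold)
{
    if (credit >= first_threshold && sram_free >= second_threshold)
        return STATE_SECOND; /* write head PDs into the SRAM         */
    if (credit >= first_threshold && sram_free < second_threshold)
        return STATE_THIRD;  /* add queue to the to-be-activated set */
    return STATE_FIRST;      /* PD queue stays entirely in DRAM      */
}
```

A queue in the third state is re-evaluated once SRAM space frees up, at which point it can move to the second state.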
  • the method for managing the queue further includes: determining whether a packet to be enqueued in the second packet queue meets a preset fast-packet identification condition; if the packet to be enqueued does not satisfy the preset fast-packet identification condition, writing the PD of the packet to be enqueued into the DRAM; or, if the packet to be enqueued satisfies the preset fast-packet identification condition, writing the PD of the packet to be enqueued into the SRAM.
  • a fast packet refers to a packet that will be scheduled and dequeued shortly after being enqueued.
  • the identification method may be: when it is detected that the second packet queue has a packet to be enqueued, generating the PD of the packet to be enqueued, reading the QD of the second packet queue from the buffer space allocated in the SRAM, updating the read QD according to the PD of the packet to be enqueued, and determining from the updated QD whether the second packet queue satisfies the preset fast-packet identification condition. The fast-packet identification condition satisfies the following formulas: CREDIT - Q_LEN - P_LEN > threshold1 (A); CREDIT - PKT_NUM - 1 > threshold2 (B).
  • CREDIT is the credit of the second packet queue
  • Q_LEN is the queue length of the second packet queue
  • P_LEN is the length of the packet to be enqueued
  • PKT_NUM is the number of packets in the second packet queue
  • threshold1 is a threshold of the credit of the second packet queue
  • threshold2 is a threshold of the credit of the second packet queue. Formula A applies to an accounting-based scheduler, and Formula B applies to a per-packet scheduler.
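As a rough sketch, the two conditions can be evaluated like this; the struct and parameter names are illustrative assumptions, not taken from the patent.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative queue-descriptor fields; names follow the formulas
 * above, but the struct itself is an assumption. */
typedef struct {
    int64_t credit;  /* CREDIT                              */
    int64_t q_len;   /* Q_LEN: queue length                 */
    int64_t pkt_num; /* PKT_NUM: packets currently in queue */
} queue_desc;

/* Formula A: CREDIT - Q_LEN - P_LEN > threshold1
 * (accounting-based scheduler). */
static bool is_fast_packet_a(const queue_desc *q, int64_t p_len,
                             int64_t threshold1)
{
    return q->credit - q->q_len - p_len > threshold1;
}

/* Formula B: CREDIT - PKT_NUM - 1 > threshold2
 * (per-packet scheduler). */
static bool is_fast_packet_b(const queue_desc *q, int64_t threshold2)
{
    return q->credit - q->pkt_num - 1 > threshold2;
}
```

A packet passing the applicable test is treated as a fast packet and its PD bypasses the DRAM, as described below.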
  • the traffic management chip has the DRAM off-chip, and the SRAM and a Buffer on-chip. The state of the second packet queue is the second state: one part of the PDs of the PD queue corresponding to the second packet queue is stored in the DRAM and the other part in the SRAM, or the entire PD queue is in the SRAM. The Buffer is used to buffer PDs to be written into the DRAM, and may be organized as a FIFO structure.
  • the load balancing control is performed for the write request of the DRAM write PD to ensure that the written PD is evenly distributed to different banks in the DRAM.
  • the on-chip SRAM is used to store a part of the PD or all PDs in the PD queue of the second message queue.
  • the second message queue reads the PD into the SRAM from the DRAM by prefetching to improve the efficiency of the DRAM bus.
  • fast packet identification is performed on the packets to be enqueued in the second packet queue.
  • the PD of the packet to be enqueued bypasses the DRAM and is written directly into the SRAM to wait for dequeue scheduling. If it is not a fast packet, the PD of the packet to be enqueued is buffered in the Buffer and then written into the DRAM.
  • the embodiment of the present invention introduces a fast-packet identification mechanism: after a packet to be enqueued is identified as a fast packet, the corresponding PD bypasses the DRAM and directly enters the SRAM for scheduling, which effectively improves the dequeue efficiency of fast packets.
  • the DRAM includes multiple memory banks, and the DRAM stores multiple PD queues corresponding to multiple packet queues. The multiple queue heads of the multiple PD queues are stored in multiple banks respectively; the multiple queue heads correspond one-to-one to the multiple banks, and the multiple PD queues correspond one-to-one to the multiple queue heads.
  • the structure of the DRAM in this embodiment is as shown in FIG. 4. The DRAM is divided into a macro-cell (MCELL) array of Y rows * X columns; each macro cell stores one PD, and each column of MCELLs is a bank, so the number of DRAM banks is X. Every M rows of MCELLs constitute a memory block BLOCK, where M is an integer between 0 and Y, for example an integer power of 2.
  • each BLOCK has a bit in the bitmap BITMAP indicating its busy state. For example, when the memory block BLOCK is occupied, the corresponding bit is set to 1; when the memory block BLOCK is available, the corresponding bit is set to 0.
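A minimal sketch of such BITMAP-based block management, assuming a 64-block pool and the busy-bit convention just described (1 = occupied, 0 = available); all names are illustrative, not from the patent.

```c
#include <assert.h>
#include <stdint.h>

#define NUM_BLOCKS 64
static uint64_t block_bitmap; /* bit i = 1 when BLOCK i is occupied */

/* Find the first available block, mark it occupied, return its index. */
static int alloc_block(void)
{
    for (int i = 0; i < NUM_BLOCKS; i++) {
        if (!(block_bitmap & (1ULL << i))) { /* bit 0 -> block free */
            block_bitmap |= 1ULL << i;       /* mark occupied       */
            return i;
        }
    }
    return -1; /* no free block */
}

/* Reclaim a block: clear its busy bit so it can be reallocated. */
static void free_block(int i)
{
    block_bitmap &= ~(1ULL << i);
}
```

In the scheme described, reclamation would be triggered when a read reaches the last valid macro unit of the block.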
  • each memory block is associated with a NEXT_PTR pointer and a VOID_PTR pointer. The NEXT_PTR pointer points to the next memory block, the VOID_PTR pointer points to the first invalid macro unit MCELL in the memory block, and the data in all macro units after the invalid macro unit is invalid.
  • when a read operation reads the macro unit pointed to by VOID_PTR-1, it triggers reclamation of the memory block.
  • a memory block can only be allocated to one queue.
  • the PDs of each queue are stored in the memory block in sequence to form a singly linked list.
  • the linked-list segments in successive memory blocks are connected by the NEXT_PTR pointer and the VOID_PTR pointer.
  • with the number of columns (banks) of the memory blocks being X, the memory blocks numbered I*X+0/1/2/3/.../X-1 have their starting MCELLs located at bank 0/1/2/3/.../X-1 respectively, where I is an integer.
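A minimal sketch of this placement, assuming the rule that memory block number n has its starting MCELL in bank n mod X; the function name and the exact rule are assumptions reconstructed from the description.

```c
#include <assert.h>

/* Starting bank of memory block `block_no` when the DRAM has
 * `num_banks` (X) banks: successive blocks start in successive banks,
 * spreading each queue's PDs evenly across the banks. */
static int start_bank(int block_no, int num_banks)
{
    return block_no % num_banks;
}
```

This diagonal placement is what lets the load-balancing control distribute written PDs evenly over the DRAM banks.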
  • the method for queue management further includes: performing at least two of the plurality of message queues according to the plurality of queue headers stored in the plurality of banks Dequeue operation of a message queue.
  • the method for managing a queue further includes: performing a dequeuing operation on at least two of the plurality of PD queues, and performing the dequeuing operation At least two PD queues respectively contain at least two new queue headers;
  • the at least two new queue headers are stored in the same bank, receiving at least two dequeue requests, placing the at least two dequeue requests into a dequeue request queue corresponding to the same bank, and Responding to the at least two dequeue requests using the dequeue request queue, the at least two dequeue requests corresponding to the at least two PD queues.
  • the traffic management chip maintains at least two packet queues, each corresponding to one PD queue. Dequeuing operations need to be performed on at least two of the multiple PD queues, and after the dequeuing operations the at least two PD queues respectively contain at least two new queue heads.
  • the traffic management chip maintains 512K packet queues, and the PD queues corresponding to three packet queues need to be dequeued: packet queue 1 corresponds to PD queue 1, packet queue 2 corresponds to PD queue 2, and packet queue 3 corresponds to PD queue 3; each PD queue includes 8 PDs. If the first two PDs of PD queue 1 are dequeued, the 3rd PD of PD queue 1 becomes its new queue head; if the first PD of PD queue 2 is dequeued, the 2nd PD of PD queue 2 becomes its new queue head; if the first 3 PDs of PD queue 3 are dequeued, the 4th PD of PD queue 3 becomes its new queue head.
  • at least two new queue heads are stored in the same bank of the DRAM, and at least two dequeue requests are received. The at least two dequeue requests are placed into the dequeue request queue corresponding to that same bank, where each bank in the DRAM corresponds to one dequeue request queue, and the dequeue request queue is used to respond to the at least two dequeue requests, processing them in first-in-first-out order to avoid DRAM congestion.
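One way to picture the per-bank dequeue request queues is the following sketch; the sizes, names, and types are illustrative assumptions, not taken from the patent.

```c
#include <assert.h>

#define NUM_BANKS 8
#define QDEPTH    16

/* One FIFO of pending dequeue requests per DRAM bank, so that two new
 * queue heads landing in the same bank are served in order instead of
 * colliding on the bank. */
typedef struct {
    int req[QDEPTH];
    int head, tail, count;
} req_fifo;

static req_fifo bank_fifo[NUM_BANKS];

static int push_request(int bank, int req_id)
{
    req_fifo *f = &bank_fifo[bank];
    if (f->count == QDEPTH)
        return -1;                     /* bank request queue full */
    f->req[f->tail] = req_id;
    f->tail = (f->tail + 1) % QDEPTH;
    f->count++;
    return 0;
}

static int pop_request(int bank)       /* first-in first-out */
{
    req_fifo *f = &bank_fifo[bank];
    if (f->count == 0)
        return -1;                     /* no pending request */
    int r = f->req[f->head];
    f->head = (f->head + 1) % QDEPTH;
    f->count--;
    return r;
}
```

Requests to different banks live in different FIFOs, so they can be served in parallel while requests to one bank are serialized.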
  • FIG. 5 is a schematic structural diagram of a queue management apparatus according to an embodiment of the present invention.
  • the device 1 for queue management includes a first writing module 10 and a second writing module 11.
  • the queue management device 1 can be used to perform the method shown in FIG. 1.
  • the first write module 10 is configured to write a PD queue into the DRAM, where the PD queue includes multiple PDs, and the multiple PDs correspond to multiple messages included in the first packet queue.
  • the second write module 11 is configured to write at least one PD in the PD queue into the SRAM, where the at least one PD includes a queue header of the PD queue.
  • the first write module 10 may be a memory controller for controlling the DRAM.
  • the second write module 11 may be a memory controller for controlling the SRAM.
  • the device 1 for queue management further includes: a first determining module, configured to determine, after the first writing module writes the PD queue into the DRAM and before the second writing module writes the at least one PD in the PD queue into the SRAM, that the credit of the first packet queue is greater than or equal to a preset first threshold and that the capacity of the storage space available in the SRAM is greater than or equal to a preset second threshold;
  • the second write module is specifically configured to write at least one PD in the PD queue into the SRAM if a capacity of a storage space available in the SRAM is greater than or equal to the second threshold.
  • the first determining module is further configured to:
  • the set of the to-be-activated message queue includes a message queue that satisfies a preset condition to be activated, and the to-be-activated condition is that the credit of the message queue is greater than or Equal to the first threshold and the capacity of the storage space available in the SRAM when the determination is performed is less than the second threshold;
  • the second write module is specifically configured to: if the capacity of the storage space available in the SRAM is greater than or equal to the second threshold, and the set of the to-be-activated message queue is empty, at least the PD queue is A PD is written to the SRAM.
  • the first determining module is further configured to:
  • the set of to-be-activated message queues includes a message queue that satisfies a preset condition to be activated, and the to-be-activated condition is a message queue when the determination is performed.
  • the capacity of the storage space available in the SRAM is greater than the second threshold when the credit is greater than or equal to the first threshold and the determination is performed;
  • the second write module is specifically configured to: if a capacity of the storage space available in the SRAM is greater than or equal to the second threshold, and the first packet queue is a priority in the set of the to-be-activated message queues The highest-level message queue, at least one PD in the PD queue is written into the SRAM.
  • the first writing module is further configured to:
  • the first determining module is further configured to:
  • the second write module is specifically configured to write at least one PD in the PD queue into the SRAM based on the indication of the second state.
  • the management apparatus 1 of the queue further includes: a modifying module, configured to: if the credit of the first packet queue is greater than or equal to the first threshold, The capacity of the storage space available in the SRAM is smaller than the second threshold, and the state of the first packet queue is modified to a third state;
  • the joining module is configured to add the first packet queue in the third state to the set of the to-be-activated message queue.
  • the management device 1 of the queue further includes:
  • a second determining module configured to determine whether the to-be-entered packet of the second packet queue meets a preset fast packet identification condition
  • a third writing module, configured to write the PD of the packet to be enqueued into the DRAM if the second determining module determines that the packet to be enqueued does not meet the preset fast-packet identification condition; or a fourth writing module, configured to write the PD of the packet to be enqueued into the SRAM if the second determining module determines that the packet to be enqueued meets the preset fast-packet identification condition.
  • the DRAM includes a plurality of banks, wherein the DRAM stores a plurality of PD queues corresponding to a plurality of packet queues, and the plurality of queue headers of the plurality of PD queues are respectively stored in multiple banks.
  • the plurality of queue heads correspond one-to-one to the plurality of banks, and the plurality of PD queues correspond one-to-one to the plurality of queue heads.
  • the management device 1 of the queue further includes:
  • a first dequeuing module configured to perform a dequeuing operation of at least two of the plurality of message queues according to the plurality of queue headers stored in the plurality of banks.
  • the management device 1 of the queue further includes: a second dequeuing module, configured to perform a dequeuing operation on at least two of the plurality of PD queues, After performing the dequeuing operation, the at least two PD queues respectively include at least two new queue headers;
  • a response module, configured to: if the at least two new queue heads are stored in the same bank, receive at least two dequeue requests, place the at least two dequeue requests into the dequeue request queue corresponding to the same bank, and respond to the at least two dequeue requests using the dequeue request queue, the at least two dequeue requests corresponding one-to-one to the at least two PD queues.
  • FIG. 6 is a schematic structural diagram of a queue management apparatus according to an embodiment of the present invention.
  • the device 2 for queue management includes a processor 61, a memory 62, and a communication interface 63.
  • the communication interface 63 is for communicating with an external device.
  • the number of processors 61 in the queue management device may be one or more.
  • the number of processors 61 in Figure 6 is one.
  • processor 61, memory 62, and communication interface 63 may be connected by a bus or other means.
  • the processor 61, the memory 62, and the communication interface 63 are connected by a bus.
  • the queue management device 2 can be used to perform the method shown in FIG. 1. For the meanings and examples of the terms involved in this embodiment, reference may be made to the embodiment corresponding to FIG. 1; details are not repeated here.
  • the program code is stored in the memory 62.
  • the processor 61 is configured to call the program code stored in the memory 62 for performing the following operations:
  • the PD queue includes a plurality of PDs, and the plurality of PDs correspond to a plurality of packets included in the first packet queue.
  • At least one PD in the PD queue is written into the SRAM, and the at least one PD includes a queue header of the PD queue.
  • after the processor 61 performs the writing of the PD queue into the DRAM, and before writing the at least one PD in the PD queue into the SRAM, the processor is further configured to: determine that the credit of the first packet queue is greater than or equal to a preset first threshold and that the capacity of the storage space available in the SRAM is greater than or equal to a preset second threshold;
  • the processor 61 executing the writing of at least one PD in the PD queue into the SRAM includes:
  • At least one PD in the PD queue is written into the SRAM.
  • the processor 61 performing the determining that the credit of the first packet queue is greater than or equal to a preset first threshold and that the capacity of the storage space available in the SRAM is greater than or equal to the preset second threshold further includes:
  • the set of the to-be-activated message queue includes a message queue that satisfies a preset condition to be activated, and the to-be-activated condition is that the credit of the message queue is greater than or Equal to the first threshold and the capacity of the storage space available in the SRAM when the determination is performed is less than the second threshold;
  • the processor 61 executing the writing of at least one PD in the PD queue into the SRAM includes:
  • At least one PD in the PD queue is written into the SRAM if the capacity of the storage space available in the SRAM is greater than or equal to the second threshold and the set of the to-be-activated message queue is empty.
  • the processor 61 performs the determining that the credit of the first message queue is greater than or equal to a preset first threshold and the capacity of the storage space available in the SRAM is greater than or A second threshold equal to the preset includes:
  • the set of to-be-activated message queues includes a message queue that satisfies a preset condition to be activated, and the to-be-activated condition is a message queue when the determination is performed.
  • the capacity of the storage space available in the SRAM is greater than the second threshold when the credit is greater than or equal to the first threshold and the determination is performed;
  • the processor executing the writing of at least one PD in the PD queue to the SRAM specifically includes:
  • if the capacity of the storage space available in the SRAM is greater than or equal to the second threshold and the first packet queue is the highest-priority packet queue in the set of packet queues to be activated, at least one PD in the PD queue is written into the SRAM.
  • the processor 61 executing the writing of the PD queue into the DRAM further includes:
  • the processor 61 performs the determining that the credit of the first message queue is greater than or equal to a preset first threshold and the capacity of the storage space available in the SRAM is greater than or equal to a preset second threshold.
  • the processor 61 executing the writing of at least one PD in the PD queue into the SRAM includes:
  • At least one PD in the PD queue is written into the SRAM based on the indication of the second state.
  • the processor 61 is further configured to:
  • processor 61 is also operative to:
  • the PD of the to-be-entered message is written into the SRAM.
  • the DRAM includes a plurality of banks, and the DRAM stores a plurality of PD queues corresponding to a plurality of message queues, and the plurality of queue heads of the plurality of PD queues are respectively stored in multiple In the bank, the plurality of queue headers correspond to the plurality of banks, and the plurality of PD queues correspond to the plurality of queue headers.
  • the processor 61 is further configured to:
  • the processor 61 is further configured to:
  • the at least two PD queues respectively include at least two new queue headers
  • the at least two new queue headers are stored in the same bank, receiving at least two dequeue requests, placing the at least two dequeue requests into a dequeue request queue corresponding to the same bank, and Responding to the at least two dequeue requests using the dequeue request queue, the at least two dequeue requests corresponding to the at least two PD queues.
  • the PD queue corresponding to the first message queue is written into the DRAM, and at least one PD including the queue header in the PD queue is written into the SRAM. Therefore, at least one PD in the PD queue corresponding to the first message queue is stored in the SRAM.
  • the dequeuing operation of the first message queue may be implemented by performing a read operation on the queue header of the PD queue.
  • the queue header of the PD queue is stored in the SRAM, and performing a read operation on the queue header of the PD queue is performed by performing a read operation on the SRAM.
  • the rate at which SRAM is read is not limited by the properties of the DRAM. Therefore, in the above technical solution, the PD queue has a high dequeue efficiency, which helps to improve the dequeue efficiency of the first packet queue.
  • the storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), or a random access memory (RAM).


Abstract

The embodiments of the present invention disclose a method for queue management, including: writing a PD queue into a DRAM, the PD queue including multiple PDs that correspond one-to-one to multiple packets included in a first packet queue; and writing at least one PD in the PD queue into an SRAM, the at least one PD including the queue head of the PD queue. Correspondingly, the embodiments of the present invention also disclose an apparatus for queue management. The above solution helps to improve the dequeue efficiency of packet queues.

Description

Method and Apparatus for Queue Management

Technical Field

The embodiments of the present invention relate to the field of communications, and in particular to a method and apparatus for queue management.

The growth of network bandwidth demand places higher requirements on router capacity and integration. The rate of a single line card (LC) of a router has evolved from 10 gigabits per second (Gbps), 40 Gbps, 100 Gbps, and 200 Gbps to 400 Gbps and beyond. The processing capability of a single line card of a router has likewise evolved from 15 mega packets per second (Mpps), 60 Mpps, 150 Mpps, and 300 Mpps to 600 Mpps and beyond, which poses a challenge to the rate of the memory on the router's line card.

Scheduling a packet queue (including packet enqueuing and packet dequeuing) can be implemented by accessing (including performing read and write operations on) the packet descriptor (PD) queue corresponding to the packet queue. In the prior art, the PD queue is stored in a dynamic random access memory (DRAM). Therefore, scheduling a packet queue must be implemented through read and write operations on the DRAM, and when the DRAM is accessed, the access rate is limited by the properties of the DRAM. For example, suppose the rate of a packet queue is 300 Mpps. The row cycle time (tRC) of a DRAM is about 40-50 nanoseconds (ns), so the DRAM access rate is about 25 Mpps. Even if the DRAM access rate could be raised to 100 Mpps, it still could not meet the 300 Mpps rate requirement of the packet queue. In the above technical solution, the rate of read operations on the PD queue is limited by the properties of the DRAM, which affects the dequeue efficiency of the packet queue.

Summary of the Invention

The technical problem to be solved by the embodiments of the present invention is to provide a method and apparatus for managing the control information of a queue, which can solve the prior-art problem that the capacity and bandwidth limitations of the DRAM affect the dequeue efficiency of packet queues.
In a first aspect, a method for queue management is provided, including:

writing a PD queue into a DRAM, the PD queue including multiple PDs, the multiple PDs corresponding one-to-one to multiple packets included in a first packet queue; and

writing at least one PD in the PD queue into a static random access memory (SRAM), the at least one PD including the queue head of the PD queue.

With reference to the first aspect, in a first possible implementation, after the writing of the PD queue into the DRAM and before the writing of the at least one PD in the PD queue into the SRAM, the method further includes:

determining that the credit of the first packet queue is greater than or equal to a preset first threshold and that the capacity of the storage space available in the SRAM is greater than or equal to a preset second threshold;

the writing of at least one PD in the PD queue into the SRAM specifically includes:

if the capacity of the storage space available in the SRAM is greater than or equal to the second threshold, writing at least one PD in the PD queue into the SRAM.

With reference to the first possible implementation of the first aspect, in a second possible implementation, the determining that the credit of the first packet queue is greater than or equal to the preset first threshold and that the capacity of the storage space available in the SRAM is greater than or equal to the preset second threshold further includes:

determining that the set of packet queues to be activated is empty, the set of packet queues to be activated including packet queues that satisfy a preset to-be-activated condition, the to-be-activated condition being that, at the time the determination is performed, the credit of the packet queue is greater than or equal to the first threshold and the capacity of the storage space available in the SRAM is less than the second threshold;

the writing of at least one PD in the PD queue into the SRAM specifically includes:

if the capacity of the storage space available in the SRAM is greater than or equal to the second threshold and the set of packet queues to be activated is empty, writing at least one PD in the PD queue into the SRAM.

With reference to the first possible implementation, in a third possible implementation, the determining that the credit of the first packet queue is greater than or equal to the preset first threshold and that the capacity of the storage space available in the SRAM is greater than or equal to the preset second threshold further includes:

determining that the set of packet queues to be activated is non-empty, the set of packet queues to be activated including packet queues that satisfy a preset to-be-activated condition, the to-be-activated condition being that, at the time the determination is performed, the credit of the packet queue is greater than or equal to the first threshold and the capacity of the storage space available in the SRAM is less than the second threshold;

the writing of at least one PD in the PD queue into the SRAM specifically includes:

if the capacity of the storage space available in the SRAM is greater than or equal to the second threshold and the first packet queue is the highest-priority packet queue in the set of packet queues to be activated, writing at least one PD in the PD queue into the SRAM.
With reference to the second or third possible implementation, in a fourth possible implementation, the writing of the PD queue into the DRAM further includes:

marking the state of the first packet queue as a first state;

the determining that the credit of the first packet queue is greater than or equal to the preset first threshold and that the capacity of the storage space available in the SRAM is greater than or equal to the preset second threshold further includes:

modifying the state of the first packet queue to a second state;

the writing of at least one PD in the PD queue into the SRAM specifically includes:

writing at least one PD in the PD queue into the SRAM based on the indication of the second state.

With reference to the fourth possible implementation, in a fifth possible implementation, the method further includes:

if the credit of the first packet queue is greater than or equal to the first threshold and the capacity of the storage space available in the SRAM is less than the second threshold, modifying the state of the first packet queue to a third state; and

adding the first packet queue in the third state to the set of packet queues to be activated.

With reference to any one of the first aspect to the fifth possible implementation, in a sixth possible implementation, the method further includes:

determining whether a packet to be enqueued in a second packet queue satisfies a preset fast-packet identification condition;

if the packet to be enqueued does not satisfy the preset fast-packet identification condition, writing the PD of the packet to be enqueued into the DRAM; or

if the packet to be enqueued satisfies the preset fast-packet identification condition, writing the PD of the packet to be enqueued into the SRAM.
With reference to any one of the first aspect to the sixth possible implementation, in a seventh possible implementation, the DRAM includes multiple memory banks, and the DRAM stores multiple PD queues corresponding to multiple packet queues; the multiple queue heads of the multiple PD queues are stored in multiple banks respectively, the multiple queue heads corresponding one-to-one to the multiple banks, and the multiple PD queues corresponding one-to-one to the multiple queue heads.

With reference to the seventh possible implementation, in an eighth possible implementation, the method further includes:

performing dequeue operations of at least two packet queues among the multiple packet queues according to the multiple queue heads stored in the multiple banks.

With reference to the seventh possible implementation, in a ninth possible implementation, the method further includes:

performing dequeue operations on at least two PD queues among the multiple PD queues, the at least two PD queues respectively containing at least two new queue heads after the dequeue operations are performed; and

if the at least two new queue heads are stored in the same bank, receiving at least two dequeue requests, placing the at least two dequeue requests into the dequeue request queue corresponding to that same bank, and using the dequeue request queue to respond to the at least two dequeue requests, the at least two dequeue requests corresponding one-to-one to the at least two PD queues.
In a second aspect, an apparatus for queue management is provided, including:

a first writing module, configured to write a PD queue into a DRAM, the PD queue including multiple PDs, the multiple PDs corresponding one-to-one to multiple packets included in a first packet queue; and

a second writing module, configured to write at least one PD of the PD queue written by the first writing module into an SRAM, the at least one PD including the queue head of the PD queue.

With reference to the second aspect, in a first possible implementation, the apparatus further includes:

a first determining module, configured to determine, after the first writing module writes the PD queue into the DRAM and before the second writing module writes the at least one PD in the PD queue into the SRAM, that the credit of the first packet queue is greater than or equal to a preset first threshold and that the capacity of the storage space available in the SRAM is greater than or equal to a preset second threshold;

the second writing module being specifically configured to write at least one PD in the PD queue into the SRAM if the capacity of the storage space available in the SRAM is greater than or equal to the second threshold.

With reference to the first possible implementation of the second aspect, in a second possible implementation, the first determining module is further configured to:

determine that the set of packet queues to be activated is empty, the set of packet queues to be activated including packet queues that satisfy a preset to-be-activated condition, the to-be-activated condition being that, at the time the determination is performed, the credit of the packet queue is greater than or equal to the first threshold and the capacity of the storage space available in the SRAM is less than the second threshold;

the second writing module being specifically configured to write at least one PD in the PD queue into the SRAM if the capacity of the storage space available in the SRAM is greater than or equal to the second threshold and the set of packet queues to be activated is empty.

With reference to the first possible implementation of the second aspect, in a third possible implementation, the first determining module is further configured to:

determine that the set of packet queues to be activated is non-empty, the set of packet queues to be activated including packet queues that satisfy a preset to-be-activated condition, the to-be-activated condition being that, at the time the determination is performed, the credit of the packet queue is greater than or equal to the first threshold and the capacity of the storage space available in the SRAM is less than the second threshold;

the second writing module being specifically configured to write at least one PD in the PD queue into the SRAM if the capacity of the storage space available in the SRAM is greater than or equal to the second threshold and the first packet queue is the highest-priority packet queue in the set of packet queues to be activated.
With reference to the second or third possible implementation of the second aspect, in a fourth possible implementation, the first writing module is further configured to:

mark the state of the first packet queue as a first state;

the first determining module is further configured to:

modify the state of the first packet queue to a second state;

the second writing module being specifically configured to write at least one PD in the PD queue into the SRAM based on the indication of the second state.

With reference to the fourth possible implementation of the second aspect, in a fifth possible implementation, the apparatus further includes:

a modifying module, configured to modify the state of the first packet queue to a third state if the credit of the first packet queue is greater than or equal to the first threshold and the capacity of the storage space available in the SRAM is less than the second threshold; and

an adding module, configured to add the first packet queue in the third state to the set of packet queues to be activated.

With reference to any one of the second aspect to the fifth possible implementation, in a sixth possible implementation, the apparatus further includes:

a second determining module, configured to determine whether a packet to be enqueued in a second packet queue satisfies a preset fast-packet identification condition;

a third writing module, configured to write the PD of the packet to be enqueued into the DRAM if the second determining module determines that the packet to be enqueued does not satisfy the preset fast-packet identification condition; or

a fourth writing module, configured to write the PD of the packet to be enqueued into the SRAM if the second determining module determines that the packet to be enqueued satisfies the preset fast-packet identification condition.
With reference to any one of the second aspect to the sixth possible implementation, in a seventh possible implementation, the DRAM includes multiple banks, and the DRAM stores multiple PD queues corresponding to multiple packet queues; the multiple queue heads of the multiple PD queues are stored in multiple banks respectively, the multiple queue heads corresponding one-to-one to the multiple banks, and the multiple PD queues corresponding one-to-one to the multiple queue heads.

With reference to the seventh possible implementation of the second aspect, in an eighth possible implementation, the apparatus further includes:

a first dequeuing module, configured to perform dequeue operations of at least two packet queues among the multiple packet queues according to the multiple queue heads stored in the multiple banks.

With reference to the eighth possible implementation of the second aspect, in a ninth possible implementation, the apparatus further includes:

a second dequeuing module, configured to perform dequeue operations on at least two PD queues among the multiple PD queues, the at least two PD queues respectively containing at least two new queue heads after the dequeue operations are performed; and

a response module, configured to: if the at least two new queue heads are stored in the same bank, receive at least two dequeue requests, place the at least two dequeue requests into the dequeue request queue corresponding to that same bank, and respond to the at least two dequeue requests using the dequeue request queue, the at least two dequeue requests corresponding one-to-one to the at least two PD queues.
The above technical solutions have the following beneficial effects:

In the above technical solutions, the PD queue corresponding to the first packet queue is written into the DRAM, and at least one PD in the PD queue, including the queue head, is written into the SRAM. Therefore, at least one PD of the PD queue corresponding to the first packet queue is stored in the SRAM. A dequeue operation on the first packet queue can be implemented by performing a read operation on the PD queue. When the first packet queue is a first-in-first-out (FIFO) queue, the dequeue operation on the first packet queue can be implemented by performing a read operation on the queue head of the PD queue. The queue head of the PD queue is stored in the SRAM, so reading the queue head of the PD queue is implemented by reading the SRAM. The rate of read operations on the SRAM is not limited by the properties of the DRAM. Therefore, in the above technical solutions, the dequeue efficiency of the PD queue is high, which helps to improve the dequeue efficiency of the first packet queue.

Brief Description of the Drawings
To describe the technical solutions of the embodiments of the present invention more clearly, the drawings used in the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; a person of ordinary skill in the art can derive other drawings from them without creative effort.

FIG. 1 is a schematic flowchart of a queue management method according to an embodiment of the present invention;

FIG. 2 is a schematic diagram of the data structure of the SRAM in FIG. 1;

FIG. 3 is a schematic diagram of the data structure of the DRAM in FIG. 1;

FIG. 4 is a schematic diagram of fast-packet identification according to an embodiment of the present invention;

FIG. 5 is a schematic structural diagram of a queue management apparatus according to an embodiment of the present invention;

FIG. 6 is another schematic structural diagram of a queue management apparatus according to an embodiment of the present invention.

Detailed Description
The technical solutions in the embodiments of the present invention are described clearly below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.

Unless stated otherwise, the packet queues involved in the embodiments of the present invention may all be FIFO queues. The execution subject of the queue management method and the queue management apparatus provided in the embodiments of the present invention may each be a memory controller; for example, the memory controller can control the DRAM and the SRAM. Further, the execution subject of the queue management method and the queue management apparatus provided in the embodiments of the present invention may each be a traffic management chip containing the memory controller; for example, the traffic management chip may be coupled to the DRAM through the memory controller, and may be coupled to the SRAM through the memory controller. Further, the execution subject of the queue management method and the queue management apparatus provided in the embodiments of the present invention may each be an LC including the traffic management chip. Further, the execution subject of the queue management method and the queue management apparatus provided in the embodiments of the present invention may each be a network device including the line card. The network device may be a router, a network switch, a firewall, a load balancer, a data center, a base station, a packet transport network (PTN) device, or a wavelength division multiplexing (WDM) device.

Unless stated otherwise, a PD involved in the embodiments of the present invention may contain the storage address of the corresponding packet or a pointer to the corresponding packet. In addition, the PD may also include the time at which the corresponding packet was received, and may also include the packet header of the corresponding packet.

Unless stated otherwise, the packet queues involved in the embodiments of the present invention may be stored in the DRAM or in the SRAM; the embodiments of the present invention do not limit the storage location of a packet queue.
Referring to FIG. 1, which is a schematic flowchart of a queue management method according to an embodiment of the present invention, the method includes:

S101. Write a PD queue into a DRAM, the PD queue including multiple PDs, the multiple PDs corresponding one-to-one to multiple packets included in a first packet queue.

Specifically, the first packet queue is associated with the PD queue. The number of packets included in the first packet queue equals the number of PDs included in the PD queue, and the multiple packets in the first packet queue correspond one-to-one to the multiple PDs in the PD queue. For example, the traffic management chip writes the PD queue corresponding to the first packet queue into the DRAM.

For example, the first packet queue includes 8 packets, and the PD queue associated with the first packet queue also contains 8 PDs, the 8 packets corresponding one-to-one to the 8 PDs. The traffic management chip writes the PD queue into the DRAM.

S102. Write at least one PD in the PD queue into an SRAM, the at least one PD including the queue head of the PD queue.

Specifically, the traffic management chip writes at least one PD in the PD queue corresponding to the first packet queue into the SRAM. The at least one written PD includes the queue head of the PD queue. The PD corresponding to the queue head of the PD queue describes the packet corresponding to the queue head of the first packet queue, which is the earliest-received packet among the multiple packets.

For example, S102 may further include: deleting the at least one PD from the DRAM. The PD queue is stored in the SRAM and the DRAM.
Optionally, after the writing of the packet descriptor PD queue into the DRAM and before the writing of the at least one PD in the PD queue into the SRAM, the method further includes:

determining that the credit of the first packet queue is greater than or equal to a preset first threshold and that the capacity of the storage space available in the SRAM is greater than or equal to a preset second threshold;

the writing of at least one PD in the PD queue into the SRAM specifically includes:

if the capacity of the storage space available in the SRAM is greater than or equal to the second threshold, writing at least one PD in the PD queue into the SRAM.

Specifically, the credit of the first packet queue may represent the number of dequeueable packets of the first packet queue or the number of bytes of its dequeueable packets. The traffic management chip may obtain the credit of the first packet queue from the queue descriptor (QD) of the first packet queue; the QD describes the first packet queue (regarding the QD, refer to Table 3). The traffic management chip determines whether the credit of the first packet queue is greater than or equal to the preset first threshold and whether the capacity of the storage space available in the SRAM is greater than or equal to the preset second threshold. If both conditions hold, S102 is performed.

For example, the preset first threshold may be determined according to one or more of the priority, queue length, packet count, and sleep time of the first packet queue and the capacity of the storage space available in the SRAM. For the first packet queue, the higher its priority, the longer its queue, the more packets it holds, or the longer its sleep time, the smaller the preset first threshold may be; conversely, the lower its priority, the shorter its queue, the fewer packets it holds, or the shorter its sleep time, the larger the preset first threshold may be.

For example, when the first threshold denotes the number of dequeueable packets of the first packet queue, the first threshold may be greater than or equal to 1; when the first threshold denotes the number of bytes of dequeueable packets of the first packet queue, the first threshold may be greater than or equal to the number of bytes of one packet in the first packet queue.

For example, the preset second threshold is greater than or equal to the size of one PD in the PD queue, so that at least one PD of the PD queue can be written into the SRAM. PDs in the PD queue are written into the SRAM in first-in-first-out order: the queue head of the PD queue is written into the SRAM first, and the queue tail last.

For example, each PD in the PD queue is 8 bytes, and the preset second threshold is 8 bytes. The capacity of the storage space available in the SRAM is 25 bytes, which is greater than the second threshold, so the available SRAM space can hold at most 3 PDs. The PD queue includes 8 PDs before the write operation; once the above capacity condition is met, the traffic management chip can write the first 3 PDs of the PD queue into the SRAM.
Optionally, in some embodiments of the present invention, the determining that the credit of the first packet queue is greater than or equal to the preset first threshold and that the capacity of the storage space available in the SRAM is greater than or equal to the preset second threshold further includes:

determining that the set of packet queues to be activated is empty, the set of packet queues to be activated including packet queues that satisfy a preset to-be-activated condition, the to-be-activated condition being that, at the time the determination is performed, the credit of the packet queue is greater than or equal to the first threshold and the capacity of the storage space available in the SRAM is less than the second threshold;

the writing of at least one PD in the PD queue into the SRAM specifically includes:

if the capacity of the storage space available in the SRAM is greater than or equal to the second threshold and the set of packet queues to be activated is empty, writing at least one PD in the PD queue into the SRAM.

Specifically, the traffic management chip may maintain a set of packet queues to be activated, which includes packet queues satisfying the preset to-be-activated condition: at the time the determination is performed, the credit of the packet queue is greater than or equal to the first threshold and the storage space available in the SRAM is less than the second threshold. The set may include 0, 1, or more packet queues to be activated, each corresponding to a priority. The priority of a packet queue to be activated indicates the order in which at least one PD of the PD queue corresponding to that packet queue is written into the SRAM: at least one PD of the PD queue corresponding to a higher-priority queue is written into the SRAM before at least one PD of the PD queue corresponding to a lower-priority queue, each such at least one PD containing the queue head of the corresponding PD queue. The set being empty means the number of packet queues to be activated in the set is 0; the set being non-empty means the number is greater than 0.

For example, as shown in FIG. 2, the SRAM is managed in the form of cache lines and consists of Y cache lines. Each cache line internally has an optional QD buffer space, X PD buffer spaces, and a CTRL entry used to manage the cache line; each cache line is also associated with a BITMAP entry representing the attribute information of that cache line. Each cache line can be occupied by only one PD queue, while one PD queue may occupy multiple cache lines. The PDs in each cache line are stored in order, forming a linked-list segment; multiple cache lines are connected through the N_PTR pointer in the CTRL entry to form a singly linked list, which constitutes the head part of the PD queue of the packet queue. The head part is connected to the tail part of the PD queue stored in the DRAM through the HH_PTR pointer in the CTRL entry. For a PD queue occupying multiple cache lines, the CTRL entry is stored in the first cache line.
Table 1 shows an example of the CTRL entry, and Table 2 shows an example of the BITMAP entry. (Tables 1 and 2 are provided as images in the original document.)

The traffic management chip can determine whether a cache line in the SRAM is available through the VALID_BITMAP in the BITMAP entry; for example, 1 indicates available and 0 indicates occupied. For example, suppose the cache line in row 1 of the SRAM is available, the second threshold is the size of 1 PD, and the capacity of the storage space available in the row-1 cache line is 8 PDs, which is greater than the second threshold. The traffic management chip allocates the row-1 cache line to the first packet queue; the PD buffer space and QD buffer space of the row-1 cache line can hold 8 PDs and 1 QD respectively, so the 8 PDs of the first packet queue can be written into the row-1 cache line of the SRAM.

Optionally, in some embodiments of the present invention, the determining that the credit of the first packet queue is greater than or equal to the preset first threshold and that the capacity of the storage space available in the SRAM is greater than or equal to the preset second threshold further includes:
determining that the set of to-be-activated packet queues is non-empty, where the set of to-be-activated packet queues includes packet queues that satisfy a preset to-be-activated condition, the to-be-activated condition being that, when the determination is performed, the credit of the packet queue is greater than or equal to the first threshold and the capacity of the available storage space in the SRAM is less than the second threshold;

the writing of at least one PD of the PD queue into the SRAM specifically includes:

if the capacity of the available storage space in the SRAM is greater than or equal to the second threshold and the first packet queue is the highest-priority packet queue in the set of to-be-activated packet queues, writing at least one PD of the PD queue into the SRAM.

Specifically, the traffic management chip maintains a set of to-be-activated packet queues. The set includes the packet queues that satisfy a preset to-be-activated condition, namely that, when the determination is performed, the credit of the packet queue is greater than or equal to the first threshold and the available storage space in the SRAM is less than the second threshold. The set may include zero, one, or more to-be-activated packet queues. Each to-be-activated packet queue corresponds to a priority, which indicates the order in which the at least one PD of the PD queue corresponding to that to-be-activated packet queue is written into the SRAM: the at least one PD of the PD queue corresponding to a higher-priority to-be-activated packet queue is written into the SRAM before the at least one PD of the PD queue corresponding to a lower-priority to-be-activated packet queue. Each such "at least one PD" includes the queue head of the corresponding PD queue. The set being empty means that the number of to-be-activated packet queues in the set is 0; the set being non-empty means that the number is greater than 0.

If the first packet queue is the highest-priority packet queue in the set of to-be-activated packet queues, that is, the first packet queue is at the head of the to-be-activated packet queues, at least one PD of the PD queue associated with the first packet queue is written into the SRAM, the at least one PD including the queue head of the first packet queue.
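As an illustrative sketch only (not the patented implementation), a priority-ordered pending-activation set can be modeled with a heap; equal priorities are served in insertion order, which is one plausible tie-breaking rule and an assumption here.

```python
import heapq

class PendingActivationSet:
    """Set of to-be-activated packet queues, ordered by priority.

    Higher priority is written into the SRAM first; among equal priorities,
    earlier-added queues are served first (assumed FIFO tie-breaking)."""
    def __init__(self):
        self._heap = []   # entries: (-priority, insertion_seq, queue_id)
        self._seq = 0

    def is_empty(self):
        return not self._heap

    def add(self, queue_id, priority):
        heapq.heappush(self._heap, (-priority, self._seq, queue_id))
        self._seq += 1

    def highest(self):
        """Queue at the head of the set (next to be activated), or None."""
        return self._heap[0][2] if self._heap else None

    def pop_highest(self):
        return heapq.heappop(self._heap)[2]
```

When SRAM space becomes available, the chip would check `highest()` against the queue being considered: only the queue at the head of the set has its PDs written into the SRAM.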
Optionally, the writing of the PD queue into the DRAM further includes:

marking the state of the first packet queue as a first state;

the determining that the credit of the first packet queue is greater than or equal to the preset first threshold and that the capacity of the available storage space in the SRAM is greater than or equal to the preset second threshold further includes:

modifying the state of the first packet queue to a second state;

the writing of at least one PD of the PD queue into the SRAM specifically includes:

writing, based on the indication of the second state, at least one PD of the PD queue into the SRAM.

Specifically, the state of the first packet queue includes a first state and a second state. The first state indicates that the PD queue associated with the first packet queue is stored entirely in the DRAM. The second state indicates that part of the PDs of the PD queue associated with the first packet queue are stored in the SRAM and the other part in the DRAM, or that the PD queue associated with the first packet queue is stored entirely in the SRAM.

For example, suppose the PD queue associated with the first packet queue contains 32 PDs. If 28 PDs of the PD queue are stored in the DRAM and 4 PDs in the SRAM, or if all 32 PDs are stored in the SRAM, the first packet queue is in the second state; if all 32 PDs of the PD queue are stored in the DRAM, the first packet queue is in the first state. Table 3 shows an example of the QD of a queue, and Table 4 an example of the PD of a packet in the queue.
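Purely as illustration, the state logic described above can be expressed as a small classifier. The enum member names are descriptive labels chosen here for the first, second, and third states in the text (the third state, for queues waiting on SRAM space, is introduced further below); they are not identifiers from the patent.

```python
from enum import Enum

class QueueState(Enum):
    ALL_IN_DRAM = 1         # first state: entire PD queue resides in DRAM
    HEAD_IN_SRAM = 2        # second state: head (part or all) resides in SRAM
    PENDING_ACTIVATION = 3  # third state: eligible, but SRAM space is short

def next_state(credit, threshold1, sram_free, threshold2):
    """Classify a queue from its credit and the free SRAM capacity."""
    if credit >= threshold1 and sram_free >= threshold2:
        return QueueState.HEAD_IN_SRAM
    if credit >= threshold1:
        return QueueState.PENDING_ACTIVATION
    return QueueState.ALL_IN_DRAM
```

For example, a queue with sufficient credit moves to the second state when SRAM space is available, and to the third (pending-activation) state when it is not.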
The traffic management chip determines that the credit of the first packet queue is greater than or equal to the preset first threshold and that the capacity of the available storage space in the SRAM is greater than or equal to the preset second threshold, determines that the set of to-be-activated queues is empty, and modifies the state of the first packet queue to the second state. The traffic management chip may obtain the credit of the first packet queue and the state of the set of to-be-activated queues from the latest QD, and obtain the availability of the storage space of the SRAM from the CTRL entry. Table 3 is a schematic diagram of the QD, and Table 4 is a schematic diagram of the PD.

[Table 3: example QD — image not reproduced in this text]

[Table 4: example PD — image not reproduced in this text]
Optionally, in some embodiments of the present invention, the queue management method further includes: if the credit of the first packet queue is greater than or equal to the first threshold and the capacity of the available storage space of the SRAM is less than the second threshold, modifying the state of the first packet queue to a third state;

adding the first packet queue, in the third state, to the set of to-be-activated packet queues.

Specifically, the state of the first packet queue further includes a third state. If the credit of the first packet queue is greater than or equal to the first threshold and the available storage space in the SRAM is less than the second threshold, the state of the first packet queue is modified to the third state, and the first packet queue is added to the set of to-be-activated packet queues, in which case the set of to-be-activated packet queues is non-empty.

Optionally, in some embodiments of the present invention, the queue management method further includes: determining whether a packet to be enqueued into a second packet queue satisfies a preset fast-packet identification condition; if the packet to be enqueued does not satisfy the preset fast-packet identification condition, writing the PD of the packet to be enqueued into the DRAM; or if the packet to be enqueued satisfies the preset fast-packet identification condition, writing the PD of the packet to be enqueued into the SRAM.
Specifically, a fast packet is a packet that will be scheduled out of the queue shortly after being enqueued. In some embodiments of the present invention, the identification method may be as follows: when a packet to be enqueued into the second packet queue is detected, the PD of the packet to be enqueued is generated, the QD of the second packet queue is read from the buffer space allocated in the SRAM, the read QD is updated according to the PD of the packet to be enqueued, and whether the second packet queue satisfies the preset fast-packet identification condition is judged according to the updated QD. The fast-packet identification condition satisfies the following formulas:

CREDIT − Q_LEN − P_LEN > threshold1 (A)

CREDIT − PKT_NUM − 1 > threshold2 (B)

where CREDIT is the credit of the second packet queue, Q_LEN is the queue length of the second packet queue, P_LEN is the length of the packet to be enqueued, PKT_NUM is the number of packets in the second packet queue, threshold1 is a credit threshold of the second packet queue, and threshold2 is a credit threshold of the second packet queue. Formula A applies to credit-accounting (byte-based) schedulers, and formula B applies to per-packet schedulers.
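The two formulas above can be sketched as a single check. This is an illustrative reading of the condition, not the claimed implementation; the function name and parameters are chosen here for clarity.

```python
def is_fast_packet(credit, q_len, p_len, pkt_num,
                   threshold1, threshold2, per_packet_scheduler):
    """Return True when the packet about to enqueue is a 'fast packet',
    i.e. it will likely be scheduled out shortly, so its PD may bypass
    the DRAM and be written directly into the SRAM.

    Formula A (credit-accounting schedulers):
        CREDIT - Q_LEN - P_LEN > threshold1
    Formula B (per-packet schedulers):
        CREDIT - PKT_NUM - 1 > threshold2
    """
    if per_packet_scheduler:
        return credit - pkt_num - 1 > threshold2
    return credit - q_len - p_len > threshold1
```

For example, with a credit of 1000 bytes, a queue length of 200 bytes, and a 100-byte arriving packet, the remaining credit (700) exceeds a threshold1 of 500, so the packet is treated as fast and its PD bypasses the DRAM.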
For example, referring to FIG. 4, a DRAM is arranged off-chip and an SRAM and a Buffer are arranged on-chip of the traffic management chip. The second packet queue is in the second state: part of the PDs of the PD queue corresponding to the second packet queue are stored in the DRAM and the other part in the SRAM, or the entire PD queue is in the SRAM. The Buffer is used to buffer PDs to be written into the DRAM, and may be organized as a FIFO.

Load-balancing control is applied to the write requests for writing PDs to the DRAM, ensuring that the written PDs are distributed evenly across the different banks of the DRAM. The on-chip SRAM is used to store part or all of the PDs of the PD queue of the second packet queue; the second packet queue reads PDs from the DRAM into the SRAM by prefetching, which improves DRAM bus efficiency. Meanwhile, fast-packet identification is performed on packets to be enqueued into the second packet queue: when a packet to be enqueued is identified as a fast packet, its PD bypasses the DRAM and is written directly into the SRAM to await dequeue scheduling; if it is not a fast packet, its PD is buffered in the Buffer and then written into the DRAM.

As can be seen from the above, the embodiments of the present invention introduce a fast-packet identification mechanism: once a packet to be enqueued is identified as a fast packet, the PD corresponding to that packet bypasses the DRAM and enters the SRAM directly to await scheduling, which effectively improves the dequeue efficiency of fast packets.
Optionally, in some embodiments of the present invention, the DRAM includes multiple memory banks, and multiple PD queues corresponding to multiple packet queues are stored in the DRAM. The multiple queue heads of the multiple PD queues are stored in multiple banks respectively; the multiple queue heads are in one-to-one correspondence with the multiple banks, and the multiple PD queues are in one-to-one correspondence with the multiple queue heads.

Specifically, the DRAM includes multiple memory banks, and multiple PD queues corresponding to multiple packet queues are stored in the DRAM. The multiple queue heads of the multiple PD queues are stored in multiple banks respectively, the multiple queue heads being in one-to-one correspondence with the multiple banks, and the multiple PD queues being in one-to-one correspondence with the multiple queue heads.

For example, the structure of the DRAM in this embodiment is shown in FIG. 4. The DRAM is divided into an array of Y rows × X columns of macro cells (MCELLs). Each macro cell stores one PD, each column of MCELLs forms one bank (so the number of banks in the DRAM is X), and M rows of MCELLs form one memory block (BLOCK), where M is an integer between 0 and Y and an integer power of 2. Each BLOCK has a bit in a bitmap (BITMAP) indicating its busy/free state; for example, the bit is set to 1 when the BLOCK is occupied and to 0 when the BLOCK is available. In addition, each memory block is associated with a NEXT_PTR pointer and a VOID_PTR pointer: the NEXT_PTR pointer points to the next memory block, and the VOID_PTR pointer points to the first invalid macro cell in the block. The data of all macro cells after the first invalid macro cell is invalid, and once a read operation reaches the macro cell pointed to by VOID_PTR−1, reclamation of the memory block is triggered. A memory block can be allocated to only one queue; the PDs of each queue are stored in order in memory blocks to form a singly linked list, and the linked-list segments in different memory blocks are connected through the NEXT_PTR and VOID_PTR pointers.

The size of the management memory needed to manage the DRAM is calculated as: number of memory blocks × (1 + NEXT_PTR bit width + VOID_PTR bit width). For example, for a packet queue at a rate of 100 Gbps, if each BLOCK contains 8 rows of MCELLs (that is, M = 8), the memory capacity required to manage the DRAM is about 2 × 256K × (1 + 26) = 14.15 Mbit; if each BLOCK contains 16 rows of MCELLs (that is, M = 16), the size of the management memory is 2 × 128K × (1 + 28) = 7.424 Mbit. The fewer rows of MCELLs each BLOCK contains, the finer the management of the DRAM and the larger the management memory required; the value of M may be set according to actual needs during implementation.
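The calculation above is straightforward to reproduce. The helper below simply evaluates blocks × (1 + combined pointer width); note that the worked figures quoted in the text (14.15 Mbit and 7.424 Mbit) depend on the exact pointer widths and on whether "K" is counted as 1000 or 1024, so only the formula itself is shown here.

```python
def mgmt_memory_bits(num_blocks, ptr_bits_total):
    """Management memory in bits for the DRAM block structure:
    one busy/free bitmap bit per block, plus the NEXT_PTR and
    VOID_PTR pointers (ptr_bits_total = their combined width)."""
    return num_blocks * (1 + ptr_bits_total)
```

For example, 2 × 256Ki blocks with 26 combined pointer bits give 524288 × 27 = 14,155,776 bits; 2 × 128Ki blocks with 28 combined pointer bits give 262144 × 29 = 7,602,176 bits. Halving M doubles the block count and so roughly doubles the management memory, matching the trade-off stated in the text.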
Each memory block is assigned a type T, where T is the memory block number modulo the number of banks X. For example, the memory blocks numbered I×X + 0/1/2/3/.../X−1 have types 0/1/2/3/.../X−1 respectively, and their starting MCELL macro cells are located in banks 0/1/2/3/.../X−1 respectively, where I is an integer.
Optionally, in some embodiments of the present invention, the queue management method further includes: performing, according to the multiple queue heads stored in the multiple banks, dequeue operations on at least two packet queues of the multiple packet queues.

Optionally, in some embodiments of the present invention, the queue management method further includes: performing dequeue operations on at least two PD queues of the multiple PD queues, where after the dequeue operations are performed, the at least two PD queues respectively contain at least two new queue heads;

if the at least two new queue heads are stored in the same bank and at least two dequeue requests are received, placing the at least two dequeue requests into the dequeue request queue corresponding to that bank, and responding to the at least two dequeue requests by using the dequeue request queue, where the at least two dequeue requests are in one-to-one correspondence with the at least two PD queues.

Specifically, the traffic management chip maintains multiple packet queues, each corresponding to one PD queue. Dequeue operations need to be performed on at least two PD queues of the multiple PD queues, and after the dequeue operations are performed, the at least two PD queues respectively contain at least two new queue heads.

For example, the traffic management chip maintains 512K packet queues, and dequeue operations need to be performed on the PD queues corresponding to 3 of them: packet queue 1 corresponds to PD queue 1, packet queue 2 to PD queue 2, and packet queue 3 to PD queue 3, each PD queue containing 8 PDs. Suppose a dequeue operation is performed on the first 2 PDs of PD queue 1; then the 3rd PD of PD queue 1 becomes the new queue head. A dequeue operation on the queue head of PD queue 2 makes the 2nd PD of PD queue 2 the new queue head, and a dequeue operation on the first 3 PDs of PD queue 3 makes the 4th PD of PD queue 3 the new queue head.

Specifically, if the at least two new queue heads are stored in the same bank of the DRAM, when at least two dequeue requests are received, the at least two dequeue requests are placed into the dequeue request queue corresponding to that bank, where each bank of the DRAM corresponds to one dequeue request queue. The dequeue request queue is used to respond to the at least two dequeue requests, processing them one after another in first-in-first-out order, thereby avoiding congestion in the DRAM.
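The per-bank dequeue request queues can be sketched as follows. This is an illustrative model only; the class name and method names are chosen here, and the text leaves the submission order among simultaneous requests open (random or by PD-queue number).

```python
from collections import defaultdict, deque

class BankScheduler:
    """One FIFO of dequeue requests per DRAM bank.

    Requests whose PD-queue heads sit in the same bank are serialized
    through that bank's FIFO instead of colliding on the bank, avoiding
    DRAM congestion."""
    def __init__(self):
        self.per_bank = defaultdict(deque)

    def submit(self, bank, queue_id):
        """Place a dequeue request into the request queue of its bank."""
        self.per_bank[bank].append(queue_id)

    def service(self, bank):
        """Serve the oldest pending request for this bank (FIFO order),
        or return None when no request is pending."""
        fifo = self.per_bank[bank]
        return fifo.popleft() if fifo else None
```

In the three-queue example above, all three requests land in the FIFO of bank1 and are served strictly in arrival order, while banks without pending requests simply return nothing.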
For example, the new queue heads of PD queue 1, PD queue 2, and PD queue 3 are all located in bank1 of the DRAM, and the 3 PD queues receive dequeue requests simultaneously. The 3 dequeue requests are placed in the dequeue request queue, in random order or in order of PD-queue number. The traffic management chip reads and processes the dequeue requests from the dequeue request queue one after another in first-in-first-out order, avoiding congestion in bank1 of the DRAM.

Referring to FIG. 5, which is a schematic structural diagram of a queue management apparatus according to an embodiment of the present invention. In this embodiment of the present invention, the queue management apparatus 1 includes a first writing module 10 and a second writing module 11. The queue management apparatus 1 may be used to perform the method shown in FIG. 1. For the meanings of the terms involved in this embodiment and examples thereof, reference may be made to the embodiment corresponding to FIG. 1, which is not repeated here.

The first writing module 10 is configured to write a PD queue into a DRAM, the PD queue including multiple PDs, the multiple PDs being in one-to-one correspondence with multiple packets included in a first packet queue.

The second writing module 11 is configured to write at least one PD of the PD queue into an SRAM, the at least one PD including the queue head of the PD queue.

For example, the first writing module 10 may be a memory controller for controlling the DRAM, and the second writing module 11 may be a memory controller for controlling the SRAM.
Optionally, in some embodiments of the present invention, the queue management apparatus 1 further includes: a first determining module, configured to determine, after the first writing module writes the PD queue into the DRAM and before the second writing module writes at least one PD of the PD queue into the SRAM, that the credit of the first packet queue is greater than or equal to a preset first threshold and that the capacity of the available storage space in the SRAM is greater than or equal to a preset second threshold;

the second writing module is specifically configured to write at least one PD of the PD queue into the SRAM if the capacity of the available storage space in the SRAM is greater than or equal to the second threshold.

Optionally, the first determining module is further configured to:

determine that a set of to-be-activated packet queues is empty, where the set of to-be-activated packet queues includes packet queues that satisfy a preset to-be-activated condition, the to-be-activated condition being that, when the determination is performed, the credit of the packet queue is greater than or equal to the first threshold and the capacity of the available storage space in the SRAM is less than the second threshold;

the second writing module is specifically configured to write at least one PD of the PD queue into the SRAM if the capacity of the available storage space in the SRAM is greater than or equal to the second threshold and the set of to-be-activated packet queues is empty.

Optionally, the first determining module is further configured to:

determine that the set of to-be-activated packet queues is non-empty, where the set of to-be-activated packet queues includes packet queues that satisfy a preset to-be-activated condition, the to-be-activated condition being that, when the determination is performed, the credit of the packet queue is greater than or equal to the first threshold and the capacity of the available storage space in the SRAM is less than the second threshold;

the second writing module is specifically configured to write at least one PD of the PD queue into the SRAM if the capacity of the available storage space in the SRAM is greater than or equal to the second threshold and the first packet queue is the highest-priority packet queue in the set of to-be-activated packet queues.

Optionally, the first writing module is further configured to:

mark the state of the first packet queue as a first state;

the first determining module is further configured to:

modify the state of the first packet queue to a second state;

the second writing module is specifically configured to write, based on the indication of the second state, at least one PD of the PD queue into the SRAM.

Optionally, in some embodiments of the present invention, the queue management apparatus 1 further includes: a modifying module, configured to modify the state of the first packet queue to a third state if the credit of the first packet queue is greater than or equal to the first threshold and the capacity of the available storage space of the SRAM is less than the second threshold;

an adding module, configured to add the first packet queue, in the third state, to the set of to-be-activated packet queues.
Optionally, the queue management apparatus 1 further includes:

a second determining module, configured to determine whether a packet to be enqueued into a second packet queue satisfies a preset fast-packet identification condition;

a third writing module, configured to write the PD of the packet to be enqueued into the DRAM if the second determining module determines that the packet to be enqueued does not satisfy the preset fast-packet identification condition; or a fourth writing module, configured to write the PD of the packet to be enqueued into the SRAM if the second determining module determines that the packet to be enqueued satisfies the preset fast-packet identification condition.

Optionally, the DRAM includes multiple banks, and multiple PD queues corresponding to multiple packet queues are stored in the DRAM. The multiple queue heads of the multiple PD queues are stored in multiple banks respectively; the multiple queue heads are in one-to-one correspondence with the multiple banks, and the multiple PD queues are in one-to-one correspondence with the multiple queue heads.

Optionally, the queue management apparatus 1 further includes:

a first dequeue module, configured to perform, according to the multiple queue heads stored in the multiple banks, dequeue operations on at least two packet queues of the multiple packet queues.

Optionally, in some embodiments of the present invention, the queue management apparatus 1 further includes: a second dequeue module, configured to perform dequeue operations on at least two PD queues of the multiple PD queues, where after the dequeue operations are performed, the at least two PD queues respectively contain at least two new queue heads;

a responding module, configured to: if the at least two new queue heads are stored in the same bank and at least two dequeue requests are received, place the at least two dequeue requests into the dequeue request queue corresponding to the same bank, and respond to the at least two dequeue requests by using the dequeue request queue, where the at least two dequeue requests are in one-to-one correspondence with the at least two PD queues.

This embodiment of the present invention and the above method embodiments belong to the same conception and bring the same technical effects; for details, refer to the description of the above method embodiments, which is not repeated here.
Referring to FIG. 6, which is a schematic structural diagram of a queue management apparatus according to an embodiment of the present invention. In this embodiment of the present invention, the queue management apparatus 2 includes a processor 61, a memory 62, and a communication interface 63. The communication interface 63 is configured to communicate with an external device. The number of processors 61 in the queue management apparatus may be one or more; in FIG. 6, the number of processors 61 is one. In some embodiments of the present invention, the processor 61, the memory 62, and the communication interface 63 may be connected by a bus or in other ways; in FIG. 6, the processor 61, the memory 62, and the communication interface 63 are connected by a bus. The queue management apparatus 2 may be used to perform the method shown in FIG. 1. For the meanings of the terms involved in this embodiment and examples thereof, reference may be made to the embodiment corresponding to FIG. 1, which is not repeated here.

The memory 62 stores program code. The processor 61 is configured to call the program code stored in the memory 62 to perform the following operations:

writing a PD queue into a DRAM, the PD queue including multiple PDs, the multiple PDs being in one-to-one correspondence with multiple packets included in a first packet queue;

writing at least one PD of the PD queue into an SRAM, the at least one PD including the queue head of the PD queue.
In some embodiments of the present invention, after performing the writing of the PD queue into the DRAM and before the writing of at least one PD of the PD queue into the SRAM, the processor 61 is further configured to perform: determining that the credit of the first packet queue is greater than or equal to a preset first threshold and that the capacity of the available storage space in the SRAM is greater than or equal to a preset second threshold;

the processor 61 performing the writing of at least one PD of the PD queue into the SRAM specifically includes:

if the capacity of the available storage space in the SRAM is greater than or equal to the second threshold, writing at least one PD of the PD queue into the SRAM.

In some embodiments of the present invention, the processor 61 performing the determining that the credit of the first packet queue is greater than or equal to the preset first threshold and that the capacity of the available storage space in the SRAM is greater than or equal to the preset second threshold further includes:

determining that a set of to-be-activated packet queues is empty, where the set of to-be-activated packet queues includes packet queues that satisfy a preset to-be-activated condition, the to-be-activated condition being that, when the determination is performed, the credit of the packet queue is greater than or equal to the first threshold and the capacity of the available storage space in the SRAM is less than the second threshold;

the processor 61 performing the writing of at least one PD of the PD queue into the SRAM specifically includes:

if the capacity of the available storage space in the SRAM is greater than or equal to the second threshold and the set of to-be-activated packet queues is empty, writing at least one PD of the PD queue into the SRAM.

In some embodiments of the present invention, the processor 61 performing the determining that the credit of the first packet queue is greater than or equal to the preset first threshold and that the capacity of the available storage space in the SRAM is greater than or equal to the preset second threshold further includes:

determining that the set of to-be-activated packet queues is non-empty, where the set of to-be-activated packet queues includes packet queues that satisfy a preset to-be-activated condition, the to-be-activated condition being that, when the determination is performed, the credit of the packet queue is greater than or equal to the first threshold and the capacity of the available storage space in the SRAM is less than the second threshold;

the processor performing the writing of at least one PD of the PD queue into the SRAM specifically includes:

if the capacity of the available storage space in the SRAM is greater than or equal to the second threshold and the first packet queue is the highest-priority packet queue in the set of to-be-activated packet queues, writing at least one PD of the PD queue into the SRAM.
In some embodiments of the present invention, the processor 61 performing the writing of the PD queue into the DRAM further includes:

marking the state of the first packet queue as a first state;

the processor 61 performing the determining that the credit of the first packet queue is greater than or equal to the preset first threshold and that the capacity of the available storage space in the SRAM is greater than or equal to the preset second threshold further includes:

modifying the state of the first packet queue to a second state;

the processor 61 performing the writing of at least one PD of the PD queue into the SRAM specifically includes:

writing, based on the indication of the second state, at least one PD of the PD queue into the SRAM.

In some embodiments of the present invention, the processor 61 is further configured to perform:

if the credit of the first packet queue is greater than or equal to the first threshold and the capacity of the available storage space of the SRAM is less than the second threshold, modifying the state of the first packet queue to a third state;

adding the first packet queue, in the third state, to the set of to-be-activated packet queues.

In some embodiments of the present invention, the processor 61 is further configured to perform:

determining whether a packet to be enqueued into a second packet queue satisfies a preset fast-packet identification condition; if the packet to be enqueued does not satisfy the preset fast-packet identification condition, writing the PD of the packet to be enqueued into the DRAM; or

if the packet to be enqueued satisfies the preset fast-packet identification condition, writing the PD of the packet to be enqueued into the SRAM.
In some embodiments of the present invention, the DRAM includes multiple banks, and multiple PD queues corresponding to multiple packet queues are stored in the DRAM. The multiple queue heads of the multiple PD queues are stored in multiple banks respectively; the multiple queue heads are in one-to-one correspondence with the multiple banks, and the multiple PD queues are in one-to-one correspondence with the multiple queue heads.

In some embodiments of the present invention, the processor 61 is further configured to perform:

performing, according to the multiple queue heads stored in the multiple banks, dequeue operations on at least two packet queues of the multiple packet queues.

In some embodiments of the present invention, the processor 61 is further configured to perform:

performing dequeue operations on at least two PD queues of the multiple PD queues, where after the dequeue operations are performed, the at least two PD queues respectively contain at least two new queue heads;

if the at least two new queue heads are stored in the same bank and at least two dequeue requests are received, placing the at least two dequeue requests into the dequeue request queue corresponding to the same bank, and responding to the at least two dequeue requests by using the dequeue request queue, where the at least two dequeue requests are in one-to-one correspondence with the at least two PD queues.

In the above technical solutions, the PD queue corresponding to the first packet queue is written into the DRAM, and at least one PD of the PD queue, including the queue head, is written into the SRAM. Therefore, at least one PD of the PD queue corresponding to the first packet queue is stored in the SRAM. A dequeue operation on the first packet queue can be implemented by a read operation on the PD queue. When the first packet queue is a FIFO queue, the dequeue operation on the first packet queue can be implemented by performing a read operation on the queue head of the PD queue. The queue head of the PD queue is stored in the SRAM, so the read operation on the queue head of the PD queue is implemented by reading the SRAM, and the rate of read operations on the SRAM is not limited by the properties of the DRAM. Therefore, in the above technical solutions, the dequeue efficiency of the PD queue is relatively high, which helps improve the dequeue efficiency of the first packet queue.

Those of ordinary skill in the art can understand that all or part of the processes of the methods of the above embodiments may be implemented by a computer program instructing relevant hardware. The program may be stored in a computer-readable storage medium, and when executed, the program may include the processes of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.

What is disclosed above is merely a preferred embodiment of the present invention, which certainly cannot be used to limit the scope of the claims of the present invention. Those of ordinary skill in the art can understand all or part of the processes for implementing the above embodiments, and equivalent changes made according to the claims of the present invention still fall within the scope covered by the invention.

Claims

Claims
1. A queue management method, comprising:

writing a packet descriptor (PD) queue into a dynamic random access memory (DRAM), wherein the PD queue comprises multiple PDs, and the multiple PDs are in one-to-one correspondence with multiple packets comprised in a first packet queue; and

writing at least one PD of the PD queue into a static random access memory (SRAM), wherein the at least one PD comprises the queue head of the PD queue.

2. The method according to claim 1, wherein after the writing of the packet descriptor (PD) queue into the DRAM and before the writing of at least one PD of the PD queue into the SRAM, the method further comprises:

determining that the credit of the first packet queue is greater than or equal to a preset first threshold and that the capacity of the available storage space in the SRAM is greater than or equal to a preset second threshold; and

the writing of at least one PD of the PD queue into the SRAM specifically comprises:

if the capacity of the available storage space in the SRAM is greater than or equal to the second threshold, writing at least one PD of the PD queue into the SRAM.

3. The method according to claim 2, wherein the determining that the credit of the first packet queue is greater than or equal to the preset first threshold and that the capacity of the available storage space in the SRAM is greater than or equal to the preset second threshold further comprises:

determining that a set of to-be-activated packet queues is empty, wherein the set of to-be-activated packet queues comprises packet queues that satisfy a preset to-be-activated condition, the to-be-activated condition being that, when the determination is performed, the credit of the packet queue is greater than or equal to the first threshold and the capacity of the available storage space in the SRAM is less than the second threshold; and

the writing of at least one PD of the PD queue into the SRAM specifically comprises:

if the capacity of the available storage space in the SRAM is greater than or equal to the second threshold and the set of to-be-activated packet queues is empty, writing at least one PD of the PD queue into the SRAM.

4. The method according to claim 2, wherein the determining that the credit of the first packet queue is greater than or equal to the preset first threshold and that the capacity of the available storage space in the SRAM is greater than or equal to the preset second threshold further comprises:

determining that the set of to-be-activated packet queues is non-empty, wherein the set of to-be-activated packet queues comprises packet queues that satisfy a preset to-be-activated condition, the to-be-activated condition being that, when the determination is performed, the credit of the packet queue is greater than or equal to the first threshold and the capacity of the available storage space in the SRAM is less than the second threshold; and

the writing of at least one PD of the PD queue into the SRAM specifically comprises:

if the capacity of the available storage space in the SRAM is greater than or equal to the second threshold and the first packet queue is the highest-priority packet queue in the set of to-be-activated packet queues, writing at least one PD of the PD queue into the SRAM.
5. The method according to claim 3 or 4, wherein the writing of the PD queue into the DRAM further comprises:

marking the state of the first packet queue as a first state;

the determining that the credit of the first packet queue is greater than or equal to the preset first threshold and that the capacity of the available storage space in the SRAM is greater than or equal to the preset second threshold further comprises:

modifying the state of the first packet queue to a second state; and

the writing of at least one PD of the PD queue into the SRAM specifically comprises:

writing, based on the indication of the second state, at least one PD of the PD queue into the SRAM.

6. The method according to claim 5, further comprising:

if the credit of the first packet queue is greater than or equal to the first threshold and the capacity of the available storage space of the SRAM is less than the second threshold, modifying the state of the first packet queue to a third state; and

adding the first packet queue, in the third state, to the set of to-be-activated packet queues.

7. The method according to any one of claims 1 to 6, further comprising: determining whether a packet to be enqueued into a second packet queue satisfies a preset fast-packet identification condition; and if the packet to be enqueued does not satisfy the preset fast-packet identification condition, writing the PD of the packet to be enqueued into the DRAM; or

if the packet to be enqueued satisfies the preset fast-packet identification condition, writing the PD of the packet to be enqueued into the SRAM.
8. The method according to any one of claims 1 to 7, wherein the DRAM comprises multiple memory banks, multiple PD queues corresponding to multiple packet queues are stored in the DRAM, the multiple queue heads of the multiple PD queues are stored in multiple banks respectively, the multiple queue heads are in one-to-one correspondence with the multiple banks, and the multiple PD queues are in one-to-one correspondence with the multiple queue heads.

9. The method according to claim 8, further comprising:

performing, according to the multiple queue heads stored in the multiple banks, dequeue operations on at least two packet queues of the multiple packet queues.

10. The method according to claim 8, further comprising:

performing dequeue operations on at least two PD queues of the multiple PD queues, wherein after the dequeue operations are performed, the at least two PD queues respectively contain at least two new queue heads; and

if the at least two new queue heads are stored in the same bank and at least two dequeue requests are received, placing the at least two dequeue requests into the dequeue request queue corresponding to the same bank, and responding to the at least two dequeue requests by using the dequeue request queue, wherein the at least two dequeue requests are in one-to-one correspondence with the at least two PD queues.

11. A queue management apparatus, comprising:

a first writing module, configured to write a packet descriptor (PD) queue into a dynamic random access memory (DRAM), wherein the PD queue comprises multiple PDs, and the multiple PDs are in one-to-one correspondence with multiple packets comprised in a first packet queue; and

a second writing module, configured to write at least one PD of the PD queue written by the first writing module into a static random access memory (SRAM), wherein the at least one PD comprises the queue head of the PD queue.
12. The apparatus according to claim 11, further comprising:

a first determining module, configured to determine, after the first writing module writes the PD queue into the DRAM and before the second writing module writes at least one PD of the PD queue into the SRAM, that the credit of the first packet queue is greater than or equal to a preset first threshold and that the capacity of the available storage space in the SRAM is greater than or equal to a preset second threshold;

wherein the second writing module is specifically configured to write at least one PD of the PD queue into the SRAM if the capacity of the available storage space in the SRAM is greater than or equal to the second threshold.

13. The apparatus according to claim 12, wherein the first determining module is further configured to: determine that a set of to-be-activated packet queues is empty, wherein the set of to-be-activated packet queues comprises packet queues that satisfy a preset to-be-activated condition, the to-be-activated condition being that, when the determination is performed, the credit of the packet queue is greater than or equal to the first threshold and the capacity of the available storage space in the SRAM is less than the second threshold; and

the second writing module is specifically configured to write at least one PD of the PD queue into the SRAM if the capacity of the available storage space in the SRAM is greater than or equal to the second threshold and the set of to-be-activated packet queues is empty.

14. The apparatus according to claim 12, wherein the first determining module is further configured to: determine that the set of to-be-activated packet queues is non-empty, wherein the set of to-be-activated packet queues comprises packet queues that satisfy a preset to-be-activated condition, the to-be-activated condition being that, when the determination is performed, the credit of the packet queue is greater than or equal to the first threshold and the capacity of the available storage space in the SRAM is less than the second threshold; and

the second writing module is specifically configured to write at least one PD of the PD queue into the SRAM if the capacity of the available storage space in the SRAM is greater than or equal to the second threshold and the first packet queue is the highest-priority packet queue in the set of to-be-activated packet queues.

15. The apparatus according to claim 13 or 14, wherein the first writing module is further configured to:

mark the state of the first packet queue as a first state; the first determining module is further configured to:

modify the state of the first packet queue to a second state; and

the second writing module is specifically configured to write, based on the indication of the second state, at least one PD of the PD queue into the SRAM.
16. The apparatus according to claim 15, further comprising:

a modifying module, configured to modify the state of the first packet queue to a third state if the credit of the first packet queue is greater than or equal to the first threshold and the capacity of the available storage space of the SRAM is less than the second threshold; and

an adding module, configured to add the first packet queue, in the third state, to the set of to-be-activated packet queues.

17. The apparatus according to any one of claims 11 to 16, further comprising: a second determining module, configured to determine whether a packet to be enqueued into a second packet queue satisfies a preset fast-packet identification condition;

a third writing module, configured to write the PD of the packet to be enqueued into the DRAM if the second determining module determines that the packet to be enqueued does not satisfy the preset fast-packet identification condition; or a fourth writing module, configured to write the PD of the packet to be enqueued into the SRAM if the second determining module determines that the packet to be enqueued satisfies the preset fast-packet identification condition.

18. The apparatus according to any one of claims 11 to 17, wherein the DRAM comprises multiple memory banks, multiple PD queues corresponding to multiple packet queues are stored in the DRAM, the multiple queue heads of the multiple PD queues are stored in multiple banks respectively, the multiple queue heads are in one-to-one correspondence with the multiple banks, and the multiple PD queues are in one-to-one correspondence with the multiple queue heads.

19. The apparatus according to claim 18, further comprising:

a first dequeue module, configured to perform, according to the multiple queue heads stored in the multiple banks, dequeue operations on at least two packet queues of the multiple packet queues.

20. The apparatus according to claim 18, further comprising: a second dequeue module, configured to perform dequeue operations on at least two PD queues of the multiple PD queues, wherein after the dequeue operations are performed, the at least two PD queues respectively contain at least two new queue heads; and

a responding module, configured to: if the at least two new queue heads are stored in the same bank and at least two dequeue requests are received, place the at least two dequeue requests into the dequeue request queue corresponding to the same bank, and respond to the at least two dequeue requests by using the dequeue request queue, wherein the at least two dequeue requests are in one-to-one correspondence with the at least two PD queues.
PCT/CN2014/083916 2014-08-07 2014-08-07 一种队列管理的方法和装置 WO2016019554A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
PCT/CN2014/083916 WO2016019554A1 (zh) 2014-08-07 2014-08-07 Queue management method and apparatus
CN201480080687.2A CN106537858B (zh) 2014-08-07 2014-08-07 Queue management method and apparatus
EP14899360.3A EP3166269B1 (en) 2014-08-07 2014-08-07 Queue management method and apparatus
US15/425,466 US10248350B2 (en) 2014-08-07 2017-02-06 Queue management method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2014/083916 WO2016019554A1 (zh) 2014-08-07 2014-08-07 Queue management method and apparatus

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/425,466 Continuation US10248350B2 (en) 2014-08-07 2017-02-06 Queue management method and apparatus

Publications (1)

Publication Number Publication Date
WO2016019554A1 true WO2016019554A1 (zh) 2016-02-11

Family

ID=55263032

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/083916 WO2016019554A1 (zh) 2014-08-07 2014-08-07 一种队列管理的方法和装置

Country Status (4)

Country Link
US (1) US10248350B2 (zh)
EP (1) EP3166269B1 (zh)
CN (1) CN106537858B (zh)
WO (1) WO2016019554A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3461085A4 (en) * 2016-06-28 2019-05-08 Huawei Technologies Co., Ltd. METHOD AND DEVICE FOR MANAGING QUEUE

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105162724B (zh) * 2015-07-30 2018-06-26 华为技术有限公司 Data enqueueing and dequeueing method, and queue management unit
CN108462646B (zh) * 2017-02-17 2020-08-25 华为技术有限公司 Packet processing method and apparatus
CN108462652B (zh) * 2017-07-31 2019-11-12 新华三技术有限公司 Packet processing method and apparatus, and network device
CN109802897B (zh) 2017-11-17 2020-12-01 华为技术有限公司 Data transmission method and communication device
US10466906B2 (en) * 2017-12-19 2019-11-05 Western Digital Technologies, Inc. Accessing non-volatile memory express controller memory manager
CN112311696B (zh) * 2019-07-26 2022-06-10 瑞昱半导体股份有限公司 Network packet receiving apparatus and method
US20230120184A1 (en) * 2021-10-19 2023-04-20 Samsung Electronics Co., Ltd. Systems, methods, and devices for ordered access of data in block modified memory
CN116225665B (zh) * 2023-05-04 2023-08-08 井芯微电子技术(天津)有限公司 Queue scheduling method and apparatus

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1595910A (zh) * 2004-06-25 2005-03-16 中国科学院计算技术研究所 Data packet receiving interface component of a network processor and storage management method thereof
CN101594299A (zh) * 2009-05-20 2009-12-02 清华大学 Linked-list-based queue buffer management method in a switching network
CN103647726A (zh) * 2013-12-11 2014-03-19 华为技术有限公司 Packet scheduling method and apparatus
CN103731368A (zh) * 2012-10-12 2014-04-16 中兴通讯股份有限公司 Method and apparatus for processing packets

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5787255A (en) * 1996-04-12 1998-07-28 Cisco Systems, Inc. Internetworking device with enhanced protocol translation circuit
US7236489B1 (en) * 2000-04-27 2007-06-26 Mosaid Technologies, Inc. Port packet queuing
US6675265B2 (en) * 2000-06-10 2004-01-06 Hewlett-Packard Development Company, L.P. Multiprocessor cache coherence system and method in which processor nodes and input/output nodes are equal participants
US7287092B2 (en) * 2003-08-11 2007-10-23 Sharp Colin C Generating a hash for a TCP/IP offload device
US7657706B2 (en) * 2003-12-18 2010-02-02 Cisco Technology, Inc. High speed memory and input/output processor subsystem for efficiently allocating and using high-speed memory and slower-speed memory
US20170153852A1 (en) * 2015-11-30 2017-06-01 Mediatek Inc. Multi-port memory controller capable of serving multiple access requests by accessing different memory banks of multi-bank packet buffer and associated packet storage design

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1595910A (zh) * 2004-06-25 2005-03-16 中国科学院计算技术研究所 Data packet receiving interface component of a network processor and storage management method thereof
CN101594299A (zh) * 2009-05-20 2009-12-02 清华大学 Linked-list-based queue buffer management method in a switching network
CN103731368A (zh) * 2012-10-12 2014-04-16 中兴通讯股份有限公司 Method and apparatus for processing packets
CN103647726A (zh) * 2013-12-11 2014-03-19 华为技术有限公司 Packet scheduling method and apparatus

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3166269A4 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3461085A4 (en) * 2016-06-28 2019-05-08 Huawei Technologies Co., Ltd. METHOD AND DEVICE FOR MANAGING QUEUE
US10951551B2 (en) 2016-06-28 2021-03-16 Huawei Technologies Co., Ltd. Queue management method and apparatus

Also Published As

Publication number Publication date
US20170147251A1 (en) 2017-05-25
EP3166269A1 (en) 2017-05-10
EP3166269A4 (en) 2017-08-30
EP3166269B1 (en) 2019-07-10
CN106537858B (zh) 2019-07-19
US10248350B2 (en) 2019-04-02
CN106537858A (zh) 2017-03-22

Similar Documents

Publication Publication Date Title
WO2016019554A1 (zh) Queue management method and apparatus
US11916781B2 (en) System and method for facilitating efficient utilization of an output buffer in a network interface controller (NIC)
US11082366B2 (en) Method and apparatus for using multiple linked memory lists
US8656071B1 (en) System and method for routing a data message through a message network
US8225026B2 (en) Data packet access control apparatus and method thereof
US8499137B2 (en) Memory manager for a network communications processor architecture
US8761204B2 (en) Packet assembly module for multi-core, multi-thread network processors
US8910168B2 (en) Task backpressure and deletion in a multi-flow network processor architecture
US8321385B2 (en) Hash processing in a network communications processor architecture
US9280297B1 (en) Transactional memory that supports a put with low priority ring command
US11700209B2 (en) Multi-path packet descriptor delivery scheme
WO2009111971A1 (zh) Cache data writing system and method, and cache data reading system and method
US8943507B2 (en) Packet assembly module for multi-core, multi-thread network processors
US11425057B2 (en) Packet processing
US10397144B2 (en) Receive buffer architecture method and apparatus
WO2009097788A1 (zh) Cache data processing method, apparatus, and system
US10951549B2 (en) Reusing switch ports for external buffer network
US20160004445A1 (en) Devices and methods for interconnecting server nodes
CN110519180B Network interface card virtualization queue scheduling method and system
US10228852B1 (en) Multi-stage counters
WO2024001414A1 (zh) Packet buffering method and apparatus, electronic device, and storage medium
CN115529275B Packet processing system and method
US7293130B2 (en) Method and system for a multi-level memory
CN118427137A Data reordering method based on direct memory access (DMA)

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14899360

Country of ref document: EP

Kind code of ref document: A1

REEP Request for entry into the european phase

Ref document number: 2014899360

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2014899360

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE