WO2018107681A1 - Processing method, device, and computer storage medium for queue operation - Google Patents

Processing method, device, and computer storage medium for queue operation Download PDF

Info

Publication number
WO2018107681A1
Authority
WO
WIPO (PCT)
Prior art keywords
queue
descriptor
information
message
address
Prior art date
Application number
PCT/CN2017/088613
Other languages
French (fr)
Chinese (zh)
Inventor
Zhao Peipei (赵培培)
Wang Chuang (王闯)
Yan Zhenlin (闫振林)
Meng Xiongfei (孟雄飞)
Original Assignee
Shenzhen ZTE Microelectronics Technology Co., Ltd. (深圳市中兴微电子技术有限公司)
Priority date
Filing date
Publication date
Application filed by Shenzhen ZTE Microelectronics Technology Co., Ltd.
Publication of WO2018107681A1 publication Critical patent/WO2018107681A1/en

Links

Images

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00 - Packet switching elements
    • H04L 49/90 - Buffering arrangements
    • H04L 47/00 - Traffic control in data switching networks
    • H04L 47/50 - Queue scheduling
    • H04L 47/62 - Queue scheduling characterised by scheduling criteria
    • H04L 47/625 - Queue scheduling characterised by scheduling criteria for service slots or service orders
    • H04L 47/627 - Queue scheduling characterised by scheduling criteria for service slots or service orders policing

Definitions

  • the present invention relates to the field of Internet technologies, and in particular, to a processing method and apparatus in a queue operation and a computer storage medium.
  • In a Network Processor (NP) or a Fabric Access Processor (FAP), a queue management function is integrated and packets are processed by queues.
  • As the number of high-density queues grows, queue management becomes more difficult, and the storage of the queues and of their Queue Descriptors (QD) requires a large amount of storage capacity. Reducing chip cost while ensuring performance is therefore a problem that must be considered.
  • Prior patent documents describe the use of low-cost, large-capacity, high-density off-chip Dynamic Random Access Memory (DRAM) to compensate for the insufficient storage capacity of on-chip Static Random Access Memory (SRAM) buffers when storing high-density queues and QDs.
  • The patent document US008180966B2, "System and method for operating a packet buffer in an intermediate node", provides a message data buffering method in which DRAM is combined with a small-capacity, high-speed on-chip cache (Cache) to store queues and QDs, solving the problem of insufficient Cache storage capacity. Each queue has independent storage space in the Cache.
  • The Cache stores the head fragments of the packets of several queues, and the QDs are also stored in the Cache; the remaining fragments and tail fragments of each queue are stored in the DRAM.
  • When a new packet is enqueued, its fragments are written to the tail of the corresponding queue stored in the DRAM; when a packet is dequeued, the head packet of the corresponding queue stored in the Cache is read out. If all packets of a queue stored in the Cache have been dequeued, newly enqueued packets are stored in the Cache, or packets stored in the DRAM are moved into the Cache.
  • The QD is stored either in the Cache or in the DRAM. An enqueue request triggers the Content Addressable Memory (CAM) to query whether the QD is stored in the Cache. If it is, the CAM returns the storage address of the QD in the Cache and the QD is read from that address; if not, the QD is read from the DRAM, the CAM evicts a QD tag that is unrelated to the triggering operation, the vacated QD tag entry stores the newly moved-in QD tag, and the evicted QD is moved from the Cache to the DRAM.
  • the read/write bandwidth of the CAM cannot be arbitrarily expanded.
  • the queue number maintained by the CAM frequently changes, which inevitably affects the efficiency of CAM retrieval.
  • A CAM of the same capacity occupies a larger area than a Cache and consumes more power.
  • The CAM cannot hold enough QD tags at the same time, so the number of QDs that can be placed in the Cache is limited by the CAM capacity.
  • High-speed, high-performance traffic management requires on-chip QD storage with both large capacity and large operating bandwidth, but a Cache cannot support large capacity and large read/write bandwidth at the same time. In a queue operation, this method therefore requires frequent accesses to the Cache and the DRAM to acquire the QD, resulting in low QD access efficiency.
  • The embodiments of the present invention provide a processing method, an apparatus, and a computer storage medium for queue operations, which can improve the access efficiency of the QD in a queue operation and realize fast access to the QD.
  • An embodiment of the present invention provides a processing method in a queue operation, where the method includes:
  • obtaining a queue number of a queue to which to-be-processed message information belongs; querying, in a mapping table, storage location information and address information of a queue descriptor corresponding to the queue number;
  • acquiring the queue descriptor according to the storage location information and the address information, moving the queue descriptor into a register, and updating the storage location information and the address information of the queue descriptor corresponding to the queue number in the mapping table; and
  • performing a queue operation on the to-be-processed message information according to the queue descriptor and, after the queue operation, updating the queue descriptor according to the queue descriptor and the updated storage location information and address information.
  • Performing the queue operation on the to-be-processed message information according to the queue descriptor includes: determining, according to a preset congestion avoidance policy, the queue descriptor, and the queue number, whether the to-be-processed message information is to be enqueued, and performing the enqueue operation on the to-be-processed message information according to the queue descriptor; or performing the dequeue operation on the to-be-processed message information according to the queue descriptor.
  • When the queue operation is an enqueue operation, performing the queue operation according to the queue descriptor and updating the queue descriptor according to the queue descriptor and the updated storage location information and address information includes: applying for a message buffer pointer from the dynamic random access memory; storing the to-be-processed message information and the message buffer pointer into the dynamic random access memory according to the message buffer pointer; and
  • updating the queue descriptor according to the message buffer pointer and the message information of the enqueue operation, and storing the updated queue descriptor at a target address, where the target address is the address determined by the updated storage location information and address information of the queue descriptor.
  • When the queue operation is a dequeue operation, performing the queue operation according to the queue descriptor and updating the queue descriptor according to the queue descriptor and the updated storage location information and address information includes: reading, according to the queue descriptor, the to-be-processed message information and the next queued message buffer pointer from the dynamic random access memory, and dequeuing the to-be-processed message information;
  • releasing the message buffer pointer of the dequeued message information; and updating the queue descriptor according to the next queued message buffer pointer and the message information of the dequeue operation, and storing the updated queue descriptor at the target address.
  • the method further includes:
  • detecting the queue number of the queue corresponding to the queue descriptor before the update; when the queue number of the queue corresponding to the updated queue descriptor is different from the queue numbers of the queues corresponding to all queue descriptors being updated, moving the updated queue descriptor stored in the register to the cache memory; and
  • updating the storage location information and the address information of the updated queue descriptor corresponding to the queue number in the mapping table.
  • the method further includes:
  • detecting the space usage of the cache memory in real time to obtain a first detection result; when the first detection result is greater than a preset first threshold, moving the queue descriptors of queues stored in the cache whose activity is less than a preset second threshold to the dynamic random access memory; and
  • releasing the pointers of the queue descriptors moved out of the cache.
  • the method further includes:
  • detecting the space usage of the register in real time to obtain a second detection result; when the second detection result is greater than a preset third threshold, stopping output of the message descriptor in the to-be-processed message information and stopping enqueuing of the to-be-processed message information, until the second detection result falls below a preset fourth threshold,
  • at which point output of the message descriptor in the to-be-processed message information and enqueuing of the to-be-processed message information are resumed, the preset fourth threshold being smaller than the preset third threshold.
  • the embodiment of the present invention further provides a processing device in a queue operation, where the device includes: an obtaining module, a query module, a first moving module, and a first processing module;
  • the acquiring module is configured to obtain a queue number of a queue to which the packet information to be processed belongs;
  • the query module is configured to query, in the mapping table, storage location information and address information of the queue descriptor corresponding to the queue number;
  • the first moving module is configured to acquire a queue descriptor according to the storage location information and the address information of the queue descriptor, move the queue descriptor into a register, and update the storage location information and the address information of the queue descriptor corresponding to the queue number in the mapping table;
  • the first processing module is configured to perform a queue operation on the to-be-processed message information according to the queue descriptor and, after the queue operation, to update the queue descriptor according to the queue descriptor and the updated storage location information and address information of the queue descriptor.
  • The first processing module is configured to determine, according to a preset congestion avoidance policy, the queue descriptor, and the queue number, whether the to-be-processed packet information is to be enqueued, and to perform the enqueue operation on the to-be-processed packet information according to the queue descriptor; or to perform the dequeue operation on the to-be-processed packet information according to the queue descriptor.
  • When the queue operation is an enqueue operation, the first processing module includes: an application unit, a storage unit, and a first update unit; wherein
  • the application unit is configured to apply for a message buffer pointer from the dynamic random access memory;
  • the storage unit is configured to store the to-be-processed message information and the message buffer pointer into the dynamic random access memory according to the message buffer pointer;
  • the first update unit is configured to update the queue descriptor according to the message buffer pointer and the message information of the enqueue operation, and to store the updated queue descriptor at a target address, where the target address is the address determined by the updated storage location information and address information of the queue descriptor.
  • When the queue operation is a dequeue operation, the first processing module includes: a reading unit, a releasing unit, and a second update unit; wherein
  • the reading unit is configured to read, according to the queue descriptor, the to-be-processed message information and the next queued message buffer pointer from the dynamic random access memory, and to dequeue the to-be-processed message information;
  • the releasing unit is configured to release the message buffer pointer of the dequeued message information;
  • the second update unit is configured to update the queue descriptor according to the next queued message buffer pointer and the message information of the dequeue operation, and to store the updated queue descriptor at a target address, where the target address is the address determined by the updated storage location information and address information of the queue descriptor.
  • the device further includes: a first detecting module, a second moving module, and an update mapping table module; wherein
  • the first detecting module is configured to detect a queue number of a queue corresponding to the queue descriptor before the update
  • the second moving module is configured to move the updated queue descriptor stored in the register to the cache when the queue number of the queue corresponding to the updated queue descriptor is different from the queue numbers of the queues corresponding to all queue descriptors being updated;
  • the update mapping table module is configured to update storage location information and address information of the updated queue descriptor corresponding to the queue number in the mapping table.
  • the device further includes: a second detecting module, a third moving module, and a releasing module; wherein
  • the second detecting module is configured to detect a space usage of the cache memory in real time, and obtain a first detection result
  • the third moving module is configured to move, when the first detection result is greater than a preset first threshold, the queue descriptors of queues stored in the cache whose activity is less than a preset second threshold to the dynamic random access memory;
  • the releasing module is configured to release the pointers of the queue descriptors moved out of the cache.
  • the device further includes: a third detecting module and a second processing module; wherein
  • the third detecting module is configured to detect a space usage of the register in real time, and obtain a second detection result
  • the second processing module is configured to, when the second detection result is greater than a preset third threshold, stop outputting the message descriptor in the to-be-processed message information and stop enqueuing the to-be-processed message information, until the second detection result is less than a preset fourth threshold, at which point outputting the message descriptor in the to-be-processed message information and enqueuing the to-be-processed message information are resumed, the preset fourth threshold being smaller than the preset third threshold.
  • With the processing method, apparatus, and computer storage medium for queue operations provided by the embodiments of the present invention, the queue number of the queue to which the to-be-processed message information belongs is first obtained; the storage location information and the address information of the QD corresponding to the queue number are then queried in the mapping (Map) table; the QD is acquired according to its storage location information and address information and moved into a register (Reg), and the storage location information and the address information of the QD corresponding to the queue number in the Map table are updated; the queue operation is performed on the to-be-processed message information according to the QD; and, after the queue operation, the QD is updated according to the QD and its updated storage location information and address information.
  • the embodiment of the invention further provides a computer storage medium, wherein the computer storage medium stores computer executable instructions, and the computer executable instructions are used to execute the processing method in the queue operation according to the embodiment of the invention.
  • In this way, the storage location and the storage address of the QD corresponding to the queue number of the queue to which the to-be-processed message information belongs are found through the Map table; the QD is acquired according to that storage location and storage address and moved into the Reg, and the storage location of the QD corresponding to the queue number in the Map table is updated to the Reg with the corresponding address in the Reg; the queue operation is then performed on the to-be-processed message information according to the QD, and the QD is updated according to the QD and its updated storage location and storage address. This guarantees effective dynamic access to the QD during queue operations, improves the QD access efficiency in queue operations, realizes fast access to the QD, and ensures the system performance of systems that integrate a queue queuing management function.
  • FIG. 1 is a first schematic flowchart of the implementation of Embodiment 1 of a processing method in a queue operation according to the present invention;
  • FIG. 2 is a schematic diagram of mapping relationships between Reg, Cache, and DRAM in a Map table
  • FIG. 3 is a second schematic diagram of the implementation process of the first embodiment of the processing method in the queue operation according to the present invention.
  • FIG. 4 is a third schematic flowchart of the implementation process of the first embodiment of the processing method in the queue operation of the present invention.
  • FIG. 5 is a schematic flowchart of an implementation process of a second embodiment of a processing method in a queue operation according to the present invention
  • FIG. 6 is a schematic diagram of a real-time detection process of a Cache usage space
  • FIG. 7 is a schematic diagram showing an implementation flow of moving a QD from a Cache to a DRAM
  • FIG. 8a is a schematic diagram of a mapping relationship between a queue number and a QD
  • FIG. 8b is a schematic structural diagram of a Cache active linked list
  • FIG. 8c is a schematic structural diagram of a Cache idle linked list;
  • FIG. 9 is a schematic diagram of a real-time detection process of the usage space of the Reg;
  • FIG. 10 is a schematic diagram of an application scenario of Embodiment 5 of a processing method in a queue operation according to the present invention;
  • FIG. 11 is a schematic diagram of a moving process when reading a QD
  • FIG. 12 is a second schematic diagram of an application scenario of Embodiment 5 of a processing method in a queue operation according to the present invention.
  • FIG. 13 is a schematic structural diagram of a first embodiment of a processing apparatus in a queue operation according to the present invention.
  • FIG. 14 is a schematic diagram showing a detailed composition structure of a first processing module in the processing apparatus shown in FIG. 13;
  • FIG. 15 is a second schematic diagram of the detailed composition of the first processing module in the processing apparatus shown in FIG. 13;
  • FIG. 16 is a schematic structural diagram of a second embodiment of a processing apparatus in a queue operation according to the present invention;
  • FIG. 17 is a schematic diagram of a real-time detection function module for the usage space of the Cache;
  • FIG. 18 is a schematic diagram of a real-time detection function module for the usage space of the Reg.
  • The processing method in a queue operation provided by the embodiments of the present invention is mainly applied to systems that integrate a queue queuing management function; the storage location and the storage address of the QD corresponding to the queue number of the queue to which the to-be-processed message information belongs are found through the Map table.
  • FIG. 1 is a schematic diagram of an implementation flow of a first embodiment of a processing method in a queue operation according to the present invention.
  • the processing method in the queue operation in this embodiment includes the following steps:
  • Step 101 Obtain a queue number of a queue to which the packet information to be processed belongs.
  • the to-be-processed message information includes message data and a message descriptor, and the message descriptor includes a queue number and a message length.
  • The way in which the queue number is obtained differs with the queue operation to be performed on the message information; queue operations include the enqueue operation and the dequeue operation. In this embodiment, for the enqueue operation, the to-be-processed message information is received from the network and the queue number is determined from the message descriptor in the message information; for the dequeue operation, the port information waiting for scheduling and the queue information waiting to be scheduled are acquired, and the queue number is calculated from the port information and the queue information by a scheduling algorithm such as a Round-Robin (RR) scheduling algorithm or a Strict Priority (SP) scheduling algorithm (a sketch of such a selection is given below).
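  • The following is a minimal C sketch of such a round-robin selection, not taken from the patent: the per-port structure, the fixed array size, and the function name rr_next_queue are illustrative assumptions.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical per-port scheduling state: the queue numbers currently
 * waiting to be scheduled on this port, plus a round-robin cursor. */
typedef struct {
    uint32_t queues[64];   /* active queue numbers on this port (size is illustrative) */
    size_t   count;        /* how many entries are currently valid */
    size_t   cursor;       /* index of the queue served last time  */
} port_sched_t;

/* Pick the next queue number in round-robin order; returns 0 on success,
 * -1 if no queue on this port is waiting for scheduling. */
static int rr_next_queue(port_sched_t *port, uint32_t *qnum)
{
    if (port->count == 0)
        return -1;
    port->cursor = (port->cursor + 1) % port->count;
    *qnum = port->queues[port->cursor];
    return 0;
}
```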
  • Step 102: Query, in a mapping table, the storage location information and the address information of the queue descriptor corresponding to the queue number;
  • FIG. 2 is a schematic diagram of the mapping relationship between the Reg, the Cache, and the DRAM in the Map table. As shown in FIG. 2, each row of the Map table holds the storage location information and the address information of the QD corresponding to one queue number.
  • In the first row, the number "1" is the queue number;
  • "in Reg" is the storage location information of the QD corresponding to queue number 1, indicating that the QD corresponding to queue number 1 is stored in the Reg;
  • "Reg pointer 5" is the address information of the QD corresponding to queue number 1, indicating that the QD corresponding to queue number 1 is stored at the address indicated by pointer 5 in the Reg.
  • The storage location information and the address information of the QD corresponding to the queue number are queried in the Map table according to the queue number of the queue to which the to-be-processed message information belongs. The storage location information indicates where the QD corresponding to the queue number is stored; if it shows that the queue is empty, there is no QD corresponding to the queue number in the Reg, the Cache, or the DRAM. The address information indicates the address of the QD within the corresponding storage location.
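  • As an illustration only, one row of such a Map table can be modeled in C as below; the type and field names (qd_location_t, map_entry_t), the table size, and the lookup helper are assumptions for the sketch, not part of the patent.

```c
#include <stdint.h>

/* Where the QD of a given queue currently lives; QD_QUEUE_EMPTY models the
 * case where the QD shows the queue is empty, i.e. no QD exists yet in the
 * Reg, the Cache, or the DRAM. */
typedef enum { QD_IN_REG, QD_IN_CACHE, QD_IN_DRAM, QD_QUEUE_EMPTY } qd_location_t;

/* One row of the Map table: storage location information plus address
 * information. For queue number 1 the entry would read {QD_IN_REG, 5},
 * i.e. "in Reg, Reg pointer 5". */
typedef struct {
    qd_location_t location;   /* storage location information            */
    uint32_t      addr;       /* address information inside that storage */
} map_entry_t;

#define NUM_QUEUES 65536       /* illustrative number of supported queues */
static map_entry_t map_table[NUM_QUEUES];

/* Step 102: query the Map table by queue number. */
static map_entry_t map_lookup(uint32_t qnum)
{
    return map_table[qnum];
}
```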
  • Step 103: Acquire the queue descriptor according to the storage location information and the address information of the queue descriptor, move the queue descriptor into a register, and update the storage location information and the address information of the queue descriptor corresponding to the queue number in the mapping table;
  • A specific address is determined based on the storage location information and the address information of the QD, and the QD is read from that address; a QD that is not already stored in the Reg is moved into the Reg according to its storage location information.
  • The storage location information of the QD corresponding to the queue number in the Map table is updated to "in Reg";
  • the address information of the QD corresponding to the queue number in the Map table is updated to the address within the Reg at which the QD is now stored.
  • When the QD is stored in the Cache, the QD is read from the Cache address indicated by its address information, a free address is applied for in the Reg, the QD stored in the Cache is moved to that free address of the Reg, and the storage location information and the address information of the QD corresponding to the queue number in the Map table are updated to "in Reg" and the address within the Reg. When the QD is stored in the DRAM, the address information of the QD has no meaning; the queue number is used to read the QD from the address corresponding to the queue number in the DRAM, a free address is applied for in the Reg, the QD stored in the DRAM is moved to that free address of the Reg, and the Map table entry is updated in the same way. When the QD indicates that the queue is empty, a free address is applied for in the Reg, a new QD is created at that free address, and the storage location information and the address information of the QD corresponding to the queue number in the Map table are updated to "in Reg" and the address within the Reg. A sketch of this move-to-Reg step is given below.
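  • A minimal C sketch of this step 103 logic follows. It reuses the map_entry_t/qd_location_t model from the previous sketch; the QD layout (head, tail, depth), the external memories, and the allocator helpers (reg_alloc_free, cache_free) are assumed names for illustration only.

```c
#include <stdint.h>

typedef enum { QD_IN_REG, QD_IN_CACHE, QD_IN_DRAM, QD_QUEUE_EMPTY } qd_location_t;
typedef struct { qd_location_t location; uint32_t addr; } map_entry_t;
typedef struct { uint32_t head, tail, depth; } qd_t;   /* first pointer, tail pointer, queue depth */

extern map_entry_t map_table[];                    /* one entry per queue number              */
extern qd_t reg_mem[], cache_mem[], dram_mem[];    /* the three QD storage levels             */
extern uint32_t reg_alloc_free(void);              /* apply for a free address in the Reg     */
extern void     cache_free(uint32_t addr);         /* return a Cache address to the idle list */

/* Step 103: fetch the QD of `qnum`, move it into the Reg if it is not already
 * there, and rewrite the Map table entry to "in Reg, <new address>". */
qd_t *qd_fetch_to_reg(uint32_t qnum)
{
    map_entry_t *e = &map_table[qnum];
    uint32_t dst;

    switch (e->location) {
    case QD_IN_REG:                     /* already in the Reg: use it directly         */
        return &reg_mem[e->addr];
    case QD_IN_CACHE:                   /* read from the Cache, free the Cache slot    */
        dst = reg_alloc_free();
        reg_mem[dst] = cache_mem[e->addr];
        cache_free(e->addr);
        break;
    case QD_IN_DRAM:                    /* QDs in the DRAM are indexed by queue number */
        dst = reg_alloc_free();
        reg_mem[dst] = dram_mem[qnum];
        break;
    default:                            /* empty queue: create a new QD in the Reg     */
        dst = reg_alloc_free();
        reg_mem[dst] = (qd_t){0, 0, 0};
        break;
    }
    e->location = QD_IN_REG;            /* update the Map table entry                  */
    e->addr     = dst;
    return &reg_mem[dst];
}
```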
  • Step 104: Perform the queue operation on the to-be-processed message information according to the queue descriptor and, after the queue operation, update the queue descriptor according to the queue descriptor and the updated storage location information and address information of the queue descriptor.
  • For the enqueue operation, the to-be-queued message information is received from the network and the queue number is determined from the message descriptor in the message information; for the dequeue operation, the port information waiting for scheduling and the queue information waiting to be scheduled are acquired, and the queue number is calculated from them by a scheduling algorithm such as the RR or SP scheduling algorithm.
  • When the QD is stored in the Reg, the QD is read from the Reg at the address indicated by its address information. When the QD is stored in the Cache, the QD is read from the Cache at the address indicated by its address information, a free address is applied for in the Reg, the QD stored in the Cache is moved to that free address, and the storage location information and the address information of the QD corresponding to the queue number in the Map table are updated to "in Reg" and the address within the Reg. When the QD is stored in the DRAM, the queue number is used to read the QD from the address corresponding to the queue number in the DRAM, a free address is applied for in the Reg, the QD stored in the DRAM is moved to that free address, and the Map table entry is updated in the same way. When the QD indicates that the queue is empty, a free address is applied for in the Reg, a new QD is created at that free address, and the storage location information and the address information of the QD corresponding to the queue number in the Map table are updated to "in Reg" and the address within the Reg.
  • In this way, the storage location information and the address information of the QD corresponding to the queue number are queried in the Map table, the QD is acquired according to them and moved into the Reg, and the Map table entry for the queue number is updated; after the queue operation, the QD is updated according to the QD and its updated storage location information and address information.
  • Because the Map table stores, one by one, the storage location information and the address information of the QD corresponding to each queue number, the location of the QD for a given queue number can be obtained with a single Map table query. When the QD is stored in the Cache or the DRAM, only one access to the Cache or the DRAM is needed to obtain the QD, which improves QD acquisition efficiency and achieves fast QD acquisition. The QD stored in the Cache or the DRAM is moved into the Reg and the Map table entry for the queue number is updated, so that after the queue operation the QD is updated according to the QD and its updated storage location information and address information; only the Reg needs to be accessed when updating the QD, which improves QD storage efficiency and achieves fast QD storage.
  • FIG. 3 is a second schematic flowchart of the implementation of Embodiment 1 of the processing method in a queue operation according to the present invention. Based on the QD and the queue number, it is determined according to a preset congestion avoidance policy that the to-be-processed message information is to be enqueued, and the to-be-processed message information is enqueued according to the QD. Referring to FIG. 3, step 104 specifically includes the following steps:
  • Step 1041: Apply for a message buffer pointer from the dynamic random access memory;
  • Step 1042: Store the to-be-processed message information and the message buffer pointer into the dynamic random access memory according to the message buffer pointer;
  • Step 1043: Update the queue descriptor according to the message buffer pointer and the message information of the enqueue operation, and store the updated queue descriptor at the target address, where the target address is the address determined by the updated storage location information and address information of the queue descriptor.
  • the QD includes a first pointer of the queue, a tail pointer of the queue, and a queue depth
  • The preset congestion avoidance policy may be a weighted random early detection algorithm. In this embodiment, the QD of the queue to which the to-be-processed message information belongs and the queue number are used to determine, with the weighted random early detection algorithm, whether the to-be-processed message information may be enqueued; when it is determined that the message information cannot be enqueued, the to-be-processed message information is discarded; when it is determined that the message information can be enqueued, the to-be-processed message information is enqueued.
  • During the enqueue, the tail pointer of the queue in the QD is updated to the message buffer pointer, the queue depth in the QD is updated to the queue depth in the QD plus the message length in the message descriptor, and the updated QD is stored at the address determined by the updated storage location information and address information of the QD. A minimal sketch of this update follows.
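  • The C sketch below illustrates this enqueue update under the same assumptions as the earlier sketches; dram_alloc_msg_ptr, dram_store_msg, dram_link_tail, and qd_store are hypothetical helpers standing in for the DRAM buffer management and the write-back to the target address.

```c
#include <stdint.h>

typedef struct { uint32_t head, tail, depth; } qd_t;  /* first pointer, tail pointer, queue depth */

extern uint32_t dram_alloc_msg_ptr(void);                        /* apply for a message buffer pointer       */
extern void dram_store_msg(uint32_t ptr, const void *msg, uint32_t len); /* store message info and pointer   */
extern void dram_link_tail(uint32_t tail_ptr, uint32_t new_ptr); /* old tail's "next" now references new msg */
extern void qd_store(uint32_t qnum, const qd_t *qd);             /* write the QD back to its target address  */

/* Steps 1041-1043: enqueue one message of `len` bytes into queue `qnum`. */
void enqueue(uint32_t qnum, qd_t *qd, const void *msg, uint32_t len)
{
    uint32_t ptr = dram_alloc_msg_ptr();   /* step 1041: apply for a message buffer pointer            */
    dram_store_msg(ptr, msg, len);         /* step 1042: store message information and pointer in DRAM */

    if (qd->depth == 0)
        qd->head = ptr;                    /* queue was empty: the new message is also the head        */
    else
        dram_link_tail(qd->tail, ptr);     /* chain the new message after the current tail             */

    qd->tail   = ptr;                      /* step 1043: tail pointer <- message buffer pointer        */
    qd->depth += len;                      /*            queue depth  <- queue depth + message length  */
    qd_store(qnum, qd);                    /* store the updated QD at the target address               */
}
```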
  • FIG. 4 is a third schematic flowchart of the implementation of Embodiment 1 of the processing method in a queue operation according to the present invention. When the queue operation is a dequeue operation, referring to FIG. 4, step 104 specifically includes the following steps:
  • Step 1044 Read, according to the queue descriptor, the to-be-processed message information and the next queued message buffer pointer in the dynamic random access memory, and dequeue the to-be-processed message information;
  • Step 1045 Release a message buffer pointer of the dequeued message information.
  • Step 1046: Update the queue descriptor according to the next queued message buffer pointer and the message information of the dequeue operation, and store the updated queue descriptor at the target address, where the target address is the address determined by the updated storage location information and address information of the queue descriptor.
  • The first pointer of the queue in the QD is used to read the message data from the DRAM message data buffer area and the message data is dequeued; the first pointer of the queue in the QD is used to read the message descriptor from the DRAM message descriptor buffer area and determine the message length; the first pointer of the queue in the QD is used to read the message buffer pointer and the next queued message buffer pointer from the DRAM message buffer pointer buffer area; and the message buffer pointer of the dequeued message information is released. A minimal sketch of the corresponding QD update follows.
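  • Correspondingly, a minimal C sketch of the dequeue update (steps 1044-1046) is shown below; dram_read_msg, dram_free_msg_ptr, and qd_store are hypothetical helpers, and the QD layout matches the earlier sketches.

```c
#include <stdint.h>

typedef struct { uint32_t head, tail, depth; } qd_t;  /* first pointer, tail pointer, queue depth */

/* Reads the message at `ptr` and returns the next queued message buffer pointer. */
extern uint32_t dram_read_msg(uint32_t ptr, void *msg_out, uint32_t *len_out);
extern void     dram_free_msg_ptr(uint32_t ptr);            /* release a message buffer pointer */
extern void     qd_store(uint32_t qnum, const qd_t *qd);    /* write back to the target address */

/* Steps 1044-1046: dequeue one message from queue `qnum`; returns 0 on success. */
int dequeue(uint32_t qnum, qd_t *qd, void *msg_out)
{
    uint32_t len, next;

    if (qd->depth == 0)
        return -1;                                 /* nothing to dequeue                               */
    next = dram_read_msg(qd->head, msg_out, &len); /* step 1044: read message + next queued pointer    */
    dram_free_msg_ptr(qd->head);                   /* step 1045: release the dequeued message pointer  */

    qd->head   = next;                             /* step 1046: first pointer <- next queued pointer  */
    qd->depth -= len;                              /*            queue depth <- depth - message length */
    qd_store(qnum, qd);                            /* store the updated QD at the target address       */
    return 0;
}
```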
  • FIG. 5 is a schematic flowchart of the implementation of the second embodiment of the processing method in the queue operation according to the present invention.
  • The processing method in a queue operation in this embodiment builds on Embodiment 1 of the processing method in a queue operation of the present invention; on that basis, the method further includes:
  • Step 105: Detect the queue number of the queue corresponding to the queue descriptor before the update;
  • Step 106: When the queue number of the queue corresponding to the updated queue descriptor is different from the queue numbers of the queues corresponding to all queue descriptors being updated, move the updated queue descriptor stored in the register to the cache memory;
  • Step 107: Update the storage location information and the address information of the updated queue descriptor corresponding to the queue number in the mapping table. A minimal sketch of this write-back follows.
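  • The following C sketch illustrates steps 105 to 107 under the same assumptions as the earlier sketches; queue_still_in_flight, cache_alloc_free, and reg_free are hypothetical helpers that model the "queue number differs from all queues still being operated on" check and the Reg/Cache free-address management.

```c
#include <stdint.h>
#include <stdbool.h>

typedef enum { QD_IN_REG, QD_IN_CACHE, QD_IN_DRAM, QD_QUEUE_EMPTY } qd_location_t;
typedef struct { qd_location_t location; uint32_t addr; } map_entry_t;
typedef struct { uint32_t head, tail, depth; } qd_t;

extern map_entry_t map_table[];
extern qd_t reg_mem[], cache_mem[];
extern uint32_t cache_alloc_free(void);               /* apply for a free Cache address           */
extern void     reg_free(uint32_t addr);              /* return a Reg address                     */
extern bool     queue_still_in_flight(uint32_t qnum); /* still used by a pending queue operation? */

/* Steps 105-107: after a queue operation completes, if no pending operation
 * still uses this queue number, move its updated QD from the Reg to the
 * Cache and rewrite the Map table entry accordingly. */
static void qd_writeback_to_cache(uint32_t qnum)
{
    map_entry_t *e = &map_table[qnum];

    if (e->location != QD_IN_REG || queue_still_in_flight(qnum))
        return;                          /* keep the QD in the Reg while it is still needed */

    uint32_t dst = cache_alloc_free();   /* step 106: claim a free Cache address            */
    cache_mem[dst] = reg_mem[e->addr];   /* move the updated QD out of the register         */
    reg_free(e->addr);

    e->location = QD_IN_CACHE;           /* step 107: update the Map table entry            */
    e->addr     = dst;
}
```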
  • For example, suppose the Map table records the storage location information of the QD corresponding to queue number 2 as "in Cache" and its address information as "Cache pointer 1". The address determined from this storage location information and address information is the Cache address pointed to by Cache pointer 1; the QD is read from that address, a free address 6 is applied for in the Reg, the QD stored in the Cache is moved into the Reg, and the storage location information and the address information of the QD corresponding to queue number 2 in the Map table are updated to "in Reg, Reg pointer 6".
  • During the enqueue, the tail pointer of the queue in the QD corresponding to queue number 2 is updated to the message buffer pointer, the queue depth in the QD corresponding to queue number 2 is updated to the queue depth in that QD plus the message length 128, and the updated QD is stored at address 6 of the Reg.
  • Queue number 2 differs from queue numbers 5 and 10, the queues still being operated on. In this case, a free address 6 is applied for in the Cache, and the QD stored at Reg address 6 is moved to address 6 of the Cache.
  • As another example, suppose the calculated queue number is 65536, and the Map table records the storage location information of the QD corresponding to queue number 65536 as "in DRAM" and its address information as "Null". Queue number 65536 is used to read the QD from address 65536 of the DRAM, a free address 7 is applied for in the Reg, the QD stored in the DRAM is moved into the Reg, and the storage location information and the address information of the QD corresponding to queue number 65536 in the Map table are updated to "in Reg, Reg pointer 7".
  • The first pointer of the queue in the QD is used to read the message descriptor from the DRAM message descriptor buffer area and determine that the message length is 128, and to read the message buffer pointer and the next queued message buffer pointer from the DRAM message buffer pointer buffer area.
  • The first pointer of the queue in the QD corresponding to queue number 65536 is updated to the next queued message buffer pointer, the queue depth in the QD corresponding to queue number 65536 is updated to the queue depth in that QD minus the packet length 128, and the updated QD is stored at Reg address 7.
  • Queue number 65536 differs from queue numbers 4 and 9, the queues still being operated on. In this case, a free address 7 is applied for in the Cache, and the QD stored at Reg address 7 is moved to address 7 of the Cache.
  • To ensure that the Cache has sufficient space to store the QDs involved in queue operation processing, and thus that queue operations proceed normally, the space usage of the Cache can be detected in real time.
  • FIG. 6 is a schematic diagram of a real-time detection process of a Cache usage space. Referring to FIG. 6, the processing method in the queue operation of the present invention further includes:
  • Step 201 detecting a space usage of the cache memory in real time, and obtaining a first detection result
  • Step 202: When the first detection result is greater than a preset first threshold, move the queue descriptors of queues stored in the cache whose activity is less than a preset second threshold to the dynamic random access memory, so that the space of the cache memory satisfies a preset condition;
  • The preset first threshold may be set according to actual needs; its value may be a percentage between 90% and 98%. In this embodiment, a first threshold of 95% is taken as an example for detailed explanation.
  • The activity of a queue may be graded according to how long its QD has been stored in the Cache without a queue operation. For example, the queue whose QD has been stored in the Cache for the longest time without being operated on has the lowest activity, which may be set to 0; a queue that is currently undergoing queue operation processing is the most active and may be set to 10; the remaining queues may be set to integer values from 1 to 9.
  • The preset second threshold may be set according to actual needs; in this embodiment, a second threshold of 0.5 is taken as an example for detailed description.
  • Step 203: Release the pointers of the queue descriptors that are moved out of the cache.
  • the space usage of the Cache is detected in real time, and the first detection result is obtained;
  • a pointer to the QD that moves out of the Cache is released.
  • FIG. 7 is a schematic diagram of an implementation process of moving a QD from a Cache to a DRAM.
  • the implementation process of moving the QD from the Cache to the DRAM includes the following steps:
  • Step 301: Create a doubly linked list of the queue numbers corresponding to all QDs stored in the Cache; if a trigger event occurs in which the QD corresponding to queue number qN is released from the Reg into the Cache, go to step 302; if a trigger event occurs in which the QD corresponding to queue number qM is moved from the Cache into the Reg, go to step 303;
  • Step 302 the queue number qN is added to the end of the linked list, the depth of the linked list is increased by 1, and step 304 is performed;
  • Step 303: Remove queue number qM from the linked list and connect the previous and next nodes of queue number qM;
  • Step 304: Detect the depth of the linked list, that is, the number of queue numbers whose QDs currently exist in the Cache (i.e., the number of queues), and determine whether the depth of the linked list is greater than the preset first threshold; if it is, perform step 305, otherwise perform step 307;
  • Step 305 Move the QD corresponding to the first queue number in the Cache active linked list to the DRAM.
  • Step 306 releasing a pointer of the QD that moves out of the Cache
  • step 307 the QD in the Cache is not moved.
  • Figure 8a is a schematic diagram of the mapping relationship between the queue number and the QD.
  • The Cache-qnum table has a depth of 1024 and can store the queue numbers corresponding to 1024 QDs; the mapping between a queue number qnum and a QD in the Cache has a depth of 1024, and there is a one-to-one correspondence between queue numbers and QDs in the Cache, as shown in the Cache-qnum table of FIG. 8a.
  • FIG. 8b is a schematic structural diagram of a Cache active linked list.
  • The next-hop RAM of the Cache active linked list has a depth of 1024; each entry stores the previous node of the corresponding Cache pointer (i.e., the previous Cache pointer) and the next node (i.e., the next Cache pointer). The Cache active linked list also contains the first pointer and the tail pointer of the active linked list, indicating the Cache addresses corresponding to the first and last nodes of the active linked list. In FIG. 8b, the Cache active linked list has four nodes, which represent the connection relationship of Cache addresses 0, 1, 2, and 3.
  • FIG. 8c is a schematic structural diagram of a Cache idle linked list.
  • The Cache idle linked list includes the first pointer and the tail pointer of the idle list, indicating the Cache free addresses corresponding to the first and last nodes of the idle list, and a Cache idle list next-hop RAM that is used to manage the free addresses of the Cache. A sketch of these structures follows.
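  • Purely as an illustration, the structures of FIGS. 8a to 8c can be sketched in C as follows; the depth of 1024 comes from the text, while the variable names are assumptions.

```c
#include <stdint.h>

#define CACHE_DEPTH 1024   /* the Cache holds at most 1024 QDs (from the text) */

/* Cache-qnum table (FIG. 8a): for each Cache address, the queue number whose
 * QD is stored there (one-to-one correspondence). */
static uint32_t cache_qnum[CACHE_DEPTH];

/* Cache active linked list (FIG. 8b): a doubly linked list over Cache
 * addresses ordered from least recently active (head) to most recently
 * active (tail). Each next-hop RAM entry stores the previous and next
 * Cache pointers of the node. */
typedef struct {
    uint32_t prev;          /* previous cache pointer */
    uint32_t next;          /* next cache pointer     */
} active_node_t;

static active_node_t active_ram[CACHE_DEPTH];
static uint32_t active_head, active_tail;   /* first / tail pointer of the active list */
static uint32_t active_depth;               /* number of QDs currently in the Cache    */

/* Cache idle linked list (FIG. 8c): manages the free addresses of the Cache. */
static uint32_t idle_next[CACHE_DEPTH];
static uint32_t idle_head, idle_tail;       /* first / tail pointer of the idle list   */
```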
  • An event in which queue operation processing is performed on a queue is called an active event. The queue whose number is at the head of the Cache active linked list is, among all queues in the list, the one for which the longest time has elapsed since its last active event, i.e., the queue with the lowest activity; the queue whose number is at the tail of the Cache active linked list is the queue whose active event occurred most recently. Therefore, when the depth of the linked list is greater than the preset first threshold, the QD corresponding to the first queue number in the Cache active linked list is moved to the DRAM.
  • When the QD of queue number n is released from the Reg into the Cache, an address cp is requested from the Cache idle list, the QD is written into the Cache at address cp, and queue number n is written into the Cache-qnum table. Using the tail pointer of the Cache active linked list as the address, cp is written into the next-node field; using cp as the address, the tail pointer of the Cache active linked list is written into the previous-node field; cp is then made the new tail pointer of the active linked list, and the depth of the linked list is increased by 1.
  • When the QD corresponding to queue number m is read from address cp' in the Cache and moved into a free address taken from the Reg idle list, the next-hop RAM of the Cache active linked list is read at address cp' to obtain the previous hop x and the next hop y of queue number m. Using x as the address, y is written into the next-node field, and using y as the address, x is written into the previous-node field; that is, queue number m is deleted from the active linked list and x and y are connected, and the depth of the Cache active linked list is decreased by 1.
  • The preset first threshold, i.e., the number of active queues that the Cache can support, is denoted Cache_th. If the depth of the Cache active linked list is greater than Cache_th, the first pointer of the Cache active linked list is used to read the QD stored in the Cache and the queue number stored in the Cache-qnum table, and the QD corresponding to that queue number is moved to the DRAM. A sketch of this list maintenance and eviction follows.
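  • A minimal C sketch of this linked-list maintenance and of the eviction when the depth exceeds Cache_th is given below; it builds on the structures of the previous sketch, and idle_alloc, idle_release, and map_set_dram are hypothetical helpers.

```c
#include <stdint.h>

typedef struct { uint32_t prev, next; } active_node_t;
typedef struct { uint32_t head, tail, depth; } qd_t;

extern active_node_t active_ram[];            /* next-hop RAM of the Cache active list */
extern uint32_t active_head, active_tail, active_depth;
extern uint32_t cache_qnum[];                 /* Cache address -> queue number         */
extern qd_t     cache_mem[], dram_mem[];
extern uint32_t Cache_th;                     /* preset first threshold                */
extern uint32_t idle_alloc(void);             /* pop a free Cache address              */
extern void     idle_release(uint32_t cp);    /* push a Cache address back             */
extern void     map_set_dram(uint32_t qnum);  /* Map table: QD now lives in the DRAM   */

/* Step 302: queue qN's QD has just been released from the Reg into the Cache. */
static void active_list_append(uint32_t qnum, const qd_t *qd)
{
    uint32_t cp = idle_alloc();               /* request address cp from the idle list */
    cache_mem[cp]  = *qd;
    cache_qnum[cp] = qnum;
    if (active_depth == 0) {
        active_head = cp;                     /* list was empty: cp is also the head   */
    } else {
        active_ram[active_tail].next = cp;    /* link the new node after the old tail  */
        active_ram[cp].prev = active_tail;
    }
    active_tail = cp;
    active_depth++;
}

/* Step 303: queue qM's QD at Cache address cp has just been moved into the Reg. */
static void active_list_remove(uint32_t cp)
{
    uint32_t x = active_ram[cp].prev, y = active_ram[cp].next;
    if (cp == active_head) active_head = y; else active_ram[x].next = y;
    if (cp == active_tail) active_tail = x; else active_ram[y].prev = x;
    idle_release(cp);
    active_depth--;
}

/* Steps 304-307: if the list is deeper than Cache_th, evict the least active
 * QD (the head of the active list) to the DRAM and free its Cache pointer. */
static void cache_evict_if_needed(void)
{
    if (active_depth <= Cache_th)
        return;                               /* step 307: nothing is moved                  */
    uint32_t cp   = active_head;
    uint32_t qnum = cache_qnum[cp];           /* step 305: the least active queue            */
    dram_mem[qnum] = cache_mem[cp];           /* QDs in the DRAM are indexed by queue number */
    map_set_dram(qnum);
    active_list_remove(cp);                   /* step 306: release the evicted pointer       */
}
```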
  • Similarly, the space usage of the Reg can be detected in real time.
  • FIG. 9 is a schematic diagram of a real-time detection process of the usage space of Reg, as shown in FIG.
  • the processing method in the queue operation further includes:
  • Step 401: Detect the space usage of the register in real time to obtain a second detection result;
  • Step 402: When the second detection result is greater than the preset third threshold, stop outputting the message descriptor in the to-be-processed message information and stop enqueuing the to-be-processed message information, until the second detection result is less than the preset fourth threshold, at which point outputting the message descriptor in the to-be-processed message information and enqueuing the to-be-processed message information are resumed; the preset fourth threshold is smaller than the preset third threshold.
  • The preset third threshold may be set according to actual needs; its value may be a percentage between 90% and 98%. In this embodiment, a third threshold of 95% is taken as an example for detailed explanation.
  • The preset fourth threshold may be set according to actual needs; its value may be a percentage between 80% and 88%. In this embodiment, a fourth threshold of 85% is taken as an example.
  • the space usage of the Reg is detected in real time, and the second detection result is obtained;
  • When the second detection result is greater than 95%, output of the message descriptor in the message information is stopped and the message information enqueue process is stopped, until the second detection result is less than 85%, at which point output of the message descriptor in the message information and the message information enqueue process are resumed. A minimal sketch of this hysteresis follows.
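  • The following C sketch shows one way such a 95%/85% hysteresis could be expressed; REG_SLOTS and the function name reg_admit_enqueue are illustrative assumptions, not taken from the patent.

```c
#include <stdint.h>
#include <stdbool.h>

#define REG_SLOTS 64u          /* illustrative number of QD slots in the Reg */

/* Hysteresis thresholds taken from the text: stop accepting new enqueues
 * above 95% Reg usage and only resume once usage drops below 85%. */
static const double REG_HIGH_WATERMARK = 0.95;   /* preset third threshold  */
static const double REG_LOW_WATERMARK  = 0.85;   /* preset fourth threshold */

static bool enqueue_paused;    /* are we currently back-pressuring the enqueue path? */

/* Called whenever the Reg occupancy changes; returns true while message
 * descriptors may be output and enqueued, false while they must be held back. */
static bool reg_admit_enqueue(uint32_t used_slots)
{
    double usage = (double)used_slots / REG_SLOTS;

    if (!enqueue_paused && usage > REG_HIGH_WATERMARK)
        enqueue_paused = true;     /* step 402: stop outputting message descriptors      */
    else if (enqueue_paused && usage < REG_LOW_WATERMARK)
        enqueue_paused = false;    /* resume once usage falls below the fourth threshold */

    return !enqueue_paused;
}
```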
  • FIG. 10 is a schematic diagram of an application scenario of the fifth embodiment of the processing method in the queue operation according to the present invention.
  • The application scenario includes an inbound packet buffering module, an outbound packet buffering module, a congestion avoidance module, an enqueue processing module, a queue scheduling module, a dequeue processing module, a message cache management module, a Map table, a QD management module, and a QD cache module; the QD management module includes a migration management module, and the QD cache module includes the DRAM, the Reg, and the Cache.
  • FIG. 11 is a schematic diagram of the moving flow when the QD is read.
  • the moving flow when the QD is read specifically includes the following steps:
  • Step 501 The Map table receives query request information of the congestion avoidance module or the enqueue module or the dequeue module, where the query request information includes a queue number;
  • Step 502 Query the storage location information and the address information of the QD corresponding to the queue number in the Map table, and send the information to the QD management module.
  • Step 503 The moving management module in the QD management module processes the QD according to the storage location information and the address information of the QD queried by the Map table, and when the storage location information of the QD indicates that the QD is stored in the Reg, step 504 is performed; When the storage location information of the QD indicates that the QD is stored in the Cache, step 505 is performed; when the storage location information of the QD indicates that the QD is stored in the DRAM, step 508 is performed;
  • Step 504: Obtain the QD from the address determined by the storage location information and the address information of the QD, and perform step 5010;
  • Step 505: Apply for a free pointer from the Reg;
  • Step 506: Acquire the QD from the address determined by the storage location information and the address information of the QD, and move the QD stored in the Cache to the address pointed to by the free pointer of the Reg;
  • Step 507: Release the pointer of the QD that has been moved out of the Cache, and perform step 5010;
  • Step 508: Apply for a free pointer from the Reg;
  • Step 509: Acquire the QD from the address determined by the storage location information and the address information of the QD, and move the QD stored in the DRAM to the address pointed to by the free pointer of the Reg;
  • Step 5010: Return the QD to the congestion avoidance module or the enqueue processing module or the dequeue processing module.
  • The enqueue operation processing flow is as follows: the message information is received from the network and stored in the inbound packet buffering module, and a query for the QD corresponding to the queue number is initiated to the Map table;
  • the Map table receives the query request of the congestion avoidance module, and the query request triggers the Map table to query the storage location information and the address information of the QD corresponding to the queue number, and sends the information to the QD management module;
  • the QD management module obtains the QD from the address determined by the storage location information and the address information of the QD, and the migration management module in the QD management module moves the QD stored in the Cache or the DRAM into the Reg; the storage location information and the address information of the QD corresponding to the queue number in the Map table are updated, and the QD together with its updated storage location information and address information is sent to the congestion avoidance module;
  • After receiving the QD and the updated storage location information and address information of the QD, the congestion avoidance module reads the egress port of the queue number and uses the weighted random early detection algorithm, according to the QD and the egress port of the queue number, to determine whether the incoming message information can be enqueued; it then sends the judgment result, the egress port of the queue number, the QD, and the message descriptor in the message information to the enqueue processing module through the enqueue operation pipeline;
  • The congestion avoidance module determines, according to the queue number, whether the QD stored in the Reg needs to be released; when the congestion avoidance module decides to release the QD stored in the Reg, the migration management module moves the QD corresponding to the queue number that is stored in the Reg into the Cache;
  • The enqueue processing module receives the judgment result, the egress port of the queue number, the QD, and the message descriptor in the message information, and processes the message information according to the judgment result: when the judgment result indicates that the message information is not to be enqueued, the enqueue processing module reads the packet data from the inbound packet buffering module and discards it directly; when the judgment result indicates that the message information is to be enqueued, the enqueue processing module initiates a query request to the Map table to obtain the QD corresponding to the queue number and the updated storage location information and address information of the QD;
  • The enqueue processing module applies for a message buffer pointer from the message cache management module and, through the enqueue operation pipeline, writes the message information into the DRAM pointed to by the message buffer pointer; after the enqueue operation of the message information has been processed, the enqueue processing module updates the QD and stores the updated QD at the address determined by the updated storage location information and address information of the QD; once the processing by the enqueue processing module is completed, the enqueued message information is available for the queue scheduling module to schedule;
  • The congestion avoidance module determines, according to the queue number, whether the QD corresponding to the queue number stored in the Reg needs to be released; when the queue number is different from the queue numbers currently being processed by queue operations, the congestion avoidance module decides to release the QD stored in the Reg, and the migration management module moves the QD corresponding to the queue number stored in the Reg into the Cache.
  • The queue scheduling module obtains the port information and the queue information waiting for scheduling, calculates a queue number from them through a scheduling algorithm, and sends it to the dequeue processing module;
  • the dequeue processing module sends the queue number to the Map table to query the QD corresponding to the queue number;
  • the Map table receives the query request of the dequeue processing module, and the query request triggers the Map table to query the storage location information and the address information of the QD corresponding to the queue number, and sends the information to the QD management module;
  • the QD management module obtains the QD from the address determined by the storage location information and the address information of the QD, and the migration management module in the QD management module moves the QD stored in the Cache or the DRAM into the Reg; the storage location information and the address information of the QD corresponding to the queue number in the Map table are updated, and the QD together with its updated storage location information and address information is sent to the dequeue processing module;
  • The dequeue processing module uses the first pointer of the queue in the QD to read, from the DRAM, the message information, the message buffer pointer, and the next queued message buffer pointer; after the message information has been dequeued, the dequeue processing module updates the QD and stores the updated QD at the address determined by the updated storage location information and address information of the QD;
  • The congestion avoidance module determines, according to the queue number, whether the QD corresponding to the queue number stored in the Reg needs to be released; when the queue number is different from the queue numbers currently being processed by queue operations, the congestion avoidance module decides to release the QD stored in the Reg, and the migration management module moves the QD corresponding to the queue number stored in the Reg into the Cache.
  • In addition, the migration management module detects the space usage of the Reg and of the Cache in real time. When the migration management module detects that the usage space of the Cache is greater than the preset first threshold, the QD of the queue in the Cache with the lowest activity is moved to the DRAM, and the pointer of the QD that has been moved out of the Cache is released.
  • When the migration management module detects that the usage space of the Reg is greater than the preset third threshold, the inbound packet buffering module stops outputting message descriptors and the message information enqueuing process is stopped.
  • When the migration management module detects that the usage space of the Reg is less than the preset fourth threshold, outputting message descriptors and the message information enqueuing process are resumed.
  • FIG. 12 is a second schematic diagram of an application scenario of Embodiment 5 of the processing method in a queue operation according to the present invention.
  • A cell (Cell) is received from the network; the port number carried by the Cell (here the port number is equivalent to the queue number) is used to query, in the Map table, the storage location information and the address information of the corresponding QD, so that the QD is read from the address determined by that storage location information and address information, and the tail pointer and the queue depth of the queue are determined from the QD. The Cell is written to the tail of the queue: the tail pointer of the queue in the QD is updated to the storage address of the currently enqueued Cell, the queue depth in the QD is increased by the length of the currently enqueued Cell, the Cell data is written into the DRAM, and the Cell pointer is written into the Cell pointer buffer area in the DRAM; the enqueue operation is then complete.
  • When dequeuing, the queue number to be scheduled for output is selected according to an inter-queue RR scheduling rule; the storage location information and the address information of the QD corresponding to that queue number are queried in the Map table, the QD is read from the address determined by them, and the first pointer and the queue depth of the queue are determined from the QD. The Cell data is read out of the DRAM using the first pointer of the queue, the pointer of the next queued Cell is read from the DRAM using the first pointer of the queue, the first pointer of the queue in the QD is updated to that pointer, and the queue depth in the QD is updated to the queue depth in the QD minus the length of the currently dequeued Cell; the dequeue operation is then complete. A small usage sketch combining the earlier helpers follows.
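  • Tying the earlier sketches together, a hypothetical usage of the enqueue and dequeue helpers for this Cell example might look as follows; all helper names are assumptions carried over from the previous sketches.

```c
#include <stdint.h>
#include <stdio.h>

typedef struct { uint32_t head, tail, depth; } qd_t;

/* Hypothetical helpers from the earlier sketches. */
extern qd_t *qd_fetch_to_reg(uint32_t qnum);
extern void  enqueue(uint32_t qnum, qd_t *qd, const void *cell, uint32_t len);
extern int   dequeue(uint32_t qnum, qd_t *qd, void *cell_out);

int main(void)
{
    uint8_t cell[64] = {0};        /* a fixed-size cell received from the network */
    uint32_t port = 3;             /* the port number doubles as the queue number */

    /* Enqueue: look up and fetch the QD by port/queue number, append the cell. */
    qd_t *qd = qd_fetch_to_reg(port);
    enqueue(port, qd, cell, sizeof cell);

    /* Dequeue: the scheduler selected the same queue; read one cell back out. */
    uint8_t out[64];
    if (dequeue(port, qd, out) == 0)
        printf("dequeued a cell, remaining queue depth %u\n", (unsigned)qd->depth);
    return 0;
}
```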
  • The processing method in a queue operation of the present invention can thus also be applied to devices in which cells (Cells) are managed for output through virtual output queues (VOQ). Therefore, the application of the processing method in a queue operation of the present invention is not limited to network device queue management systems; it can be applied to all systems or devices that integrate a queue queuing management function.
  • the present invention also provides a processing device in a queue operation for implementing the specific details of the processing method in the queue operation of the present invention, achieving the same effect.
  • FIG. 13 is a schematic structural diagram of a first embodiment of a processing apparatus in a queue operation according to the present invention.
  • The processing apparatus in a queue operation in this embodiment includes: an obtaining module 61, a query module 62, a first moving module 63, and a first processing module 64; wherein
  • the obtaining module 61 is configured to obtain a queue number of a queue to which the packet information to be processed belongs.
  • the query module 62 is configured to query, in the mapping table, storage location information and address information of the queue descriptor corresponding to the queue number;
  • the first moving module 63 is configured to acquire a queue descriptor according to the storage location information and the address information of the queue descriptor, move the queue descriptor into a register, and update the mapping table with the queue number. Storage location information and address information of the corresponding queue descriptor;
  • the first processing module 64 is configured to perform the queue operation on the to-be-processed message information according to the queue descriptor and, after the queue operation, to update the queue descriptor according to the queue descriptor and the updated storage location information and address information of the queue descriptor.
  • the first processing module 64 is configured to determine, according to the preset congestion avoidance policy, that the to-be-processed packet information is enqueued according to the queue descriptor and the queue number, according to the queue.
  • the descriptor performs the enqueue operation on the to-be-processed message information; or performs the dequeuing operation according to the message information to be processed in the queue descriptor.
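As a hedged illustration of what the query module 62 and the mapping-table update work against, the following is a minimal sketch of a Map-table entry that records, per queue number, where the QD currently resides (register, cache, DRAM, or empty queue) and its address there. The enum values, table size, and function names are assumptions for illustration only.

```c
#include <stdint.h>

/* Possible storage locations of a queue descriptor (QD). */
typedef enum { QD_IN_REG, QD_IN_CACHE, QD_IN_DRAM, QD_EMPTY } QdLocation;

/* One Map-table entry: where the QD of a given queue is stored and at
 * which address inside that storage.                                        */
typedef struct {
    QdLocation location;
    uint32_t   address;      /* pointer/index inside Reg, Cache or DRAM      */
} MapEntry;

#define NUM_QUEUES 1024u     /* illustrative; the patent mentions up to 1M   */
static MapEntry map_table[NUM_QUEUES];

/* Query module: return the entry for a queue number (the step of querying
 * storage location information and address information).                    */
MapEntry map_lookup(uint32_t queue_number)
{
    return map_table[queue_number];
}

/* Update step: after the QD has been moved, record its new location.        */
void map_update(uint32_t queue_number, QdLocation loc, uint32_t addr)
{
    map_table[queue_number].location = loc;
    map_table[queue_number].address  = addr;
}
```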
FIG. 14 is a first schematic diagram of the refined structure of the first processing module in the processing apparatus shown in FIG. 13. When the queue operation is an enqueue operation, the first processing module 64 includes: an application unit 641, a storage unit 642, and a first update unit 643; wherein

the application unit 641 is configured to apply for a message buffer pointer pointing to the dynamic random access memory;

the storage unit 642 is configured to store the to-be-processed message information and the message buffer pointer into the dynamic random access memory according to the message buffer pointer;

the first update unit 643 is configured to update the queue descriptor according to the message buffer pointer and the message information of the enqueued operation, and store the updated queue descriptor at the target address, where the target address is the address determined by the storage location information and the address information of the queue descriptor after the update.
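The enqueue-side units can be pictured with the following sketch, which assumes (purely for illustration) that the DRAM is organized as three regions indexed by the same buffer pointer — one for message data, one for message descriptors, and one for per-message "next" pointers — with free pointers kept in a simple pool. The layout, sizes, and names are not taken from the patent.

```c
#include <stdint.h>
#include <string.h>

#define DRAM_CELLS 4096u                    /* illustrative buffer count            */

/* Assumed DRAM layout: separate regions for message data, message descriptors
 * and the per-message "next" pointers, all indexed by the same buffer pointer. */
static uint8_t  dram_data[DRAM_CELLS][256]; /* message data (<= 256 B per buffer)   */
static uint32_t dram_desc[DRAM_CELLS];      /* queue number and length, packed      */
static uint32_t dram_next[DRAM_CELLS];      /* pointer to the next queued message   */

static uint32_t free_list[DRAM_CELLS];      /* pool of free message buffer pointers */
static uint32_t free_top = 0;

void init_free_pool(void)
{
    for (uint32_t i = 0; i < DRAM_CELLS; i++) free_list[i] = i;
    free_top = DRAM_CELLS;
}

/* Application unit 641: request a free message buffer pointer.                 */
uint32_t apply_buffer_pointer(void)
{
    return free_list[--free_top];           /* no underflow handling in this sketch */
}

/* Storage unit 642: write the message data, its descriptor and the pointer
 * linkage into the DRAM regions addressed by the buffer pointer.               */
void store_message(uint32_t ptr, const uint8_t *data, uint32_t len, uint32_t qnum)
{
    memcpy(dram_data[ptr], data, len);      /* len assumed <= 256 in this sketch    */
    dram_desc[ptr] = (qnum << 16) | (len & 0xFFFFu);
    dram_next[ptr] = 0;                     /* no successor yet                     */
}

/* First update unit 643: link the new message behind the old tail, then update
 * the QD fields (tail pointer, queue depth) that are written back to the
 * target address afterwards. Assumes the queue was not empty.                  */
void enqueue_update_qd(uint32_t *tail, uint32_t *depth, uint32_t ptr, uint32_t len)
{
    dram_next[*tail] = ptr;                 /* old tail now points at new message   */
    *tail   = ptr;
    *depth += len;
}
```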
FIG. 15 is a second schematic diagram of the refined structure of the first processing module in the processing apparatus shown in FIG. 13. When the queue operation is a dequeue operation, the first processing module 64 includes: a reading unit 644, a release unit 645, and a second update unit 646; wherein

the reading unit 644 is configured to read, according to the queue descriptor, the to-be-processed message information and the buffer pointer of the next queued message from the dynamic random access memory, and dequeue the to-be-processed message information;

the release unit 645 is configured to release the message buffer pointer of the dequeued message information;

the second update unit 646 is configured to update the queue descriptor according to the buffer pointer of the next queued message and the message information of the dequeued operation, and store the updated queue descriptor at the target address, where the target address is the address determined by the storage location information and the address information of the queue descriptor after the update.
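For the dequeue side, a minimal sketch is given below, assuming the queued messages form a singly linked list through per-message "next" pointers and that released buffer pointers return to a free pool; the names and layout are illustrative assumptions, not the patent's implementation.

```c
#include <stdint.h>

#define DRAM_CELLS 4096u

/* Assumed per-message "next" pointer region and free-pointer pool.             */
static uint32_t dram_next[DRAM_CELLS];
static uint32_t free_list[DRAM_CELLS];
static uint32_t free_top = 0;

/* Release unit 645: return the buffer pointer of a dequeued message to the pool. */
void release_buffer_pointer(uint32_t ptr)
{
    free_list[free_top++] = ptr;
}

/* Reading unit 644 + second update unit 646, combined: the head pointer selects
 * the message to send out, the stored "next" pointer becomes the new head, and
 * the queue depth shrinks by the dequeued message length.                      */
uint32_t dequeue_update_qd(uint32_t *head, uint32_t *depth, uint32_t msg_len)
{
    uint32_t out = *head;          /* message data and descriptor are read from
                                      the DRAM regions addressed by this pointer */
    *head   = dram_next[out];      /* next queued message becomes the new head   */
    *depth -= msg_len;             /* depth shrinks by the dequeued length       */
    release_buffer_pointer(out);   /* the dequeued pointer goes back to the pool */
    return out;                    /* the caller sends the message out           */
}
```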
FIG. 16 is a schematic structural diagram of a second embodiment of a processing device in a queue operation according to the present invention. In addition to the obtaining module 61, the query module 62, the first moving module 63, and the first processing module 64, the processing device in the queue operation of this embodiment further includes: a first detecting module 65, a second moving module 66, and an update mapping table module 67; wherein

the first detecting module 65 is configured to detect the queue number of the queue corresponding to the queue descriptor before the update;

the second moving module 66 is configured to move the updated queue descriptor stored in the register into the cache memory when the queue number of the queue corresponding to the updated queue descriptor differs from the queue numbers of the queues corresponding to all the queue descriptors before update;

the update mapping table module 67 is configured to update, in the mapping table, the storage location information and the address information of the updated queue descriptor corresponding to the queue number.
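The cooperation of the first detecting module 65, the second moving module 66, and the update mapping table module 67 can be sketched as follows, assuming the queue numbers of QDs still awaiting processing are tracked in a small pending list; the storage hooks are placeholders rather than the patent's interfaces.

```c
#include <stdint.h>
#include <stdbool.h>

#define PENDING_MAX 16u            /* illustrative bound on in-flight operations  */

/* Queue numbers of QDs that still have a queue operation in progress
 * (i.e. "queue descriptors before the update").                                */
static uint32_t pending_qnum[PENDING_MAX];
static unsigned pending_cnt = 0;

void pending_add(uint32_t qnum)    { pending_qnum[pending_cnt++] = qnum; }

/* First detecting module 65: does any pending (pre-update) QD belong to the
 * same queue as the QD that has just been updated?                             */
static bool queue_still_pending(uint32_t qnum)
{
    for (unsigned i = 0; i < pending_cnt; i++)
        if (pending_qnum[i] == qnum)
            return true;
    return false;
}

/* Placeholder hooks for the actual data movement and Map-table update.         */
static uint32_t next_cache_slot = 0;
static uint32_t cache_store_qd(uint32_t reg_addr) { (void)reg_addr; return next_cache_slot++; }
static void     map_set_in_cache(uint32_t qnum, uint32_t addr) { (void)qnum; (void)addr; }

/* Second moving module 66 + update mapping table module 67: move the updated
 * QD from the register to the cache only when no pending operation targets the
 * same queue, then record the new location in the Map table.                   */
void writeback_qd(uint32_t qnum, uint32_t reg_addr)
{
    if (queue_still_pending(qnum))
        return;                    /* keep the QD in the register for reuse       */
    uint32_t cache_addr = cache_store_qd(reg_addr);
    map_set_in_cache(qnum, cache_addr);
}
```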
The QD that needs to undergo queue operation processing must be moved into the Reg, and the QD that has completed queue operation processing and is stored in the Reg must be moved into the Cache. In order to ensure that the Cache has enough space to store the QDs that have completed queue operation processing, and thus to ensure that queue operations run normally, the space usage of the Cache can be detected in real time.
FIG. 17 is a schematic diagram of the functional modules for real-time detection of the Cache usage space. The functional modules for real-time detection of the Cache usage space include: a second detecting module 71, a third moving module 72, and a release module 73; wherein

the second detecting module 71 is configured to detect the space usage of the cache memory in real time and obtain a first detection result;

the third moving module 72 is configured to, when the first detection result is greater than a preset first threshold, move the queue descriptors of queues that are stored in the cache memory and whose activity is less than a preset second threshold into the dynamic random access memory;

the release module 73 is configured to release the pointers of the queue descriptors moved out of the cache memory.
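A minimal sketch of this cache space check is given below, assuming each cache slot records the queue number and a per-queue activity counter, and that "occupancy above the first threshold" simply compares the number of used slots against a watermark; the data structures and hooks are illustrative only.

```c
#include <stdint.h>

#define CACHE_SLOTS 256u                    /* illustrative cache capacity          */

/* Per-slot bookkeeping: which queue the cached QD belongs to and how active
 * that queue has recently been (for example a decayed operation counter).      */
typedef struct {
    int      valid;
    uint32_t queue_number;
    uint32_t activity;
} CacheSlot;

static CacheSlot cache[CACHE_SLOTS];
static uint32_t  cache_used = 0;

/* Placeholder hooks; the Map-table update for the evicted queue is implied.    */
static void move_qd_to_dram(uint32_t slot)       { (void)slot; }
static void release_cache_pointer(uint32_t slot) { cache[slot].valid = 0; cache_used--; }

/* Second detecting module 71 + third moving module 72 + release module 73:
 * when occupancy (the first detection result) exceeds the first threshold,
 * QDs of low-activity queues (below the second threshold) are moved to DRAM
 * and their cache pointers are released.                                       */
void cache_space_check(uint32_t first_threshold, uint32_t second_threshold)
{
    if (cache_used <= first_threshold)
        return;                             /* enough free space, nothing to do     */
    for (uint32_t s = 0; s < CACHE_SLOTS && cache_used > first_threshold; s++) {
        if (cache[s].valid && cache[s].activity < second_threshold) {
            move_qd_to_dram(s);             /* the QD now lives in DRAM             */
            release_cache_pointer(s);       /* its cache pointer is freed           */
        }
    }
}
```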
Similarly, the space usage of the Reg can be detected in real time. FIG. 18 is a schematic diagram of the functional modules for real-time detection of the Reg usage space. The functional modules for real-time detection of the Reg usage space include: a third detecting module 81 and a second processing module 82; wherein

the third detecting module 81 is configured to detect the space usage of the register in real time and obtain a second detection result;

the second processing module 82 is configured to, when the second detection result is greater than a preset third threshold, stop outputting the message descriptor in the to-be-processed message information and stop enqueuing the to-be-processed message information, until the second detection result is less than a preset fourth threshold, at which point outputting the message descriptor in the to-be-processed message information and enqueuing the to-be-processed message information are resumed, where the preset fourth threshold is less than the preset third threshold.
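The third/fourth-threshold behaviour amounts to a hysteresis on register occupancy; a minimal sketch, with illustrative names, is shown below. Calling it periodically (or on every occupancy change) keeps the stop/resume decision from oscillating around a single watermark.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hysteresis on register occupancy: enqueuing (and descriptor output) stops
 * above the third threshold and only resumes below the smaller fourth
 * threshold, so the decision does not flap around a single watermark.          */
typedef struct {
    uint32_t third_threshold;     /* stop watermark                              */
    uint32_t fourth_threshold;    /* resume watermark, the smaller of the two    */
    bool     enqueue_stopped;
} RegWatermark;

/* Third detecting module 81 + second processing module 82, as one periodic check. */
void reg_space_check(RegWatermark *wm, uint32_t reg_used)
{
    if (!wm->enqueue_stopped && reg_used > wm->third_threshold)
        wm->enqueue_stopped = true;   /* stop outputting descriptors / enqueuing  */
    else if (wm->enqueue_stopped && reg_used < wm->fourth_threshold)
        wm->enqueue_stopped = false;  /* resume once usage has drained enough     */
}
```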
The above units (for example, the second update unit 646) can be implemented by a central processing unit (CPU), a microprocessor (MPU, Micro Processor Unit), a digital signal processor (DSP), or a field programmable gate array (FPGA, Field Programmable Gate Array) located in the mobile terminal, among other implementations.
An embodiment of the present invention further describes a computer storage medium, wherein the computer storage medium stores computer executable instructions, and the computer executable instructions are used to execute the processing method in the queue operation described in the foregoing embodiments. That is to say, after the computer executable instructions are executed by a processor, the processing method in the queue operation provided by any of the foregoing technical solutions can be implemented.
An embodiment of the present invention describes a computer storage medium, where the computer storage medium stores one or more programs, and the one or more programs may be executed by one or more processors to implement the following steps:

obtaining a queue number of the queue to which the to-be-processed message information belongs;

querying, in a mapping table, the storage location information and the address information of the queue descriptor corresponding to the queue number;

acquiring the queue descriptor according to the storage location information and the address information of the queue descriptor, moving the queue descriptor into a register, and updating, in the mapping table, the storage location information and the address information of the queue descriptor corresponding to the queue number;

performing a queue operation on the to-be-processed message information according to the queue descriptor, and, after the queue operation is performed, updating the queue descriptor according to the queue descriptor and its updated storage location information and address information.

The one or more programs may also be executed by the one or more processors to implement the following steps:

detecting the queue number of the queue corresponding to the queue descriptor before the update;

when the queue number of the queue corresponding to the updated queue descriptor differs from the queue numbers of the queues corresponding to all the queue descriptors before update, moving the updated queue descriptor stored in the register into the cache memory;

updating, in the mapping table, the storage location information and the address information of the updated queue descriptor corresponding to the queue number.
It should be understood that the disclosed apparatus and method may be implemented in other manners. The device embodiments described above are merely illustrative. For example, the division of the units is only a logical function division, and in actual implementation there may be other division manners; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.

The units described above as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiments.

In addition, each functional unit in each embodiment of the present invention may be integrated into one processing module, or each unit may serve as a separate unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of hardware plus software functional units.
The foregoing program may be stored in a computer readable storage medium; when executed, the program performs the steps including those of the foregoing method embodiments. The foregoing storage medium includes any medium that can store program code, such as a removable storage device, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, or an optical disk.
The technical solution of the embodiments of the present invention uses the Map table to query the storage location and the storage address of the QD corresponding to the queue number of the queue to which the to-be-processed packet information belongs, and obtains the QD according to that storage location and storage address; the QD is moved into the Reg, and the storage location of the QD corresponding to the queue number in the Map table is updated to the Reg, with the storage address being the corresponding address in the Reg; a queue operation is performed on the to-be-processed packet information according to the QD, and after the queue operation is performed, the QD is updated according to the QD and its updated storage location and storage address. This ensures the timeliness of dynamic QD access in queue operations, improves the access efficiency of the QD in queue operations, realizes fast access to the QD, and guarantees the system performance of a system integrating the queue management function.

Abstract

Embodiments of the present invention disclose a processing method for a queue operation, comprising: obtaining a queue number of the queue to which message information to be processed belongs; querying, in a mapping table, storage location information and address information of a queue descriptor corresponding to the queue number; obtaining, according to the storage location information and the address information of the queue descriptor, the queue descriptor, moving the queue descriptor to a register, and updating, in the mapping table, the storage location information and the address information of the queue descriptor corresponding to the queue number; and performing, according to the queue descriptor, a queue operation on the message information to be processed, and updating the queue descriptor, after the queue operation is performed, according to the queue descriptor and its updated storage location information and address information. The present invention further discloses a processing device and computer storage medium for a queue operation.

Description

一种队列操作中的处理方法、装置及计算机存储介质Processing method, device and computer storage medium in queue operation
相关申请的交叉引用Cross-reference to related applications
本申请基于申请号为201611158994.5、申请日为2016年12月13日的中国专利申请提出,并要求该中国专利申请的优先权,该中国专利申请的全部内容在此引入本申请作为参考。The present application is filed on the basis of the Chinese Patent Application No. 201611158994.5, the entire disclosure of which is hereby incorporated by reference.
技术领域Technical field
本发明涉及互联网技术领域,尤其涉及一种队列操作中的处理方法、装置及计算机存储介质。The present invention relates to the field of Internet technologies, and in particular, to a processing method and apparatus in a queue operation and a computer storage medium.
背景技术Background technique
在网络器件中,如网络处理器(Network Processor,NP)、交换接入(Fabric Access Processor,FAP)、交换机、网桥、或流量管理芯片均集成了队列管理功能,报文按队列进行处理。伴随着业务增长,网络器件需支持的队列数呈指数级增长,已达到512K甚至1M以上存储容量的队列,对于数量巨大的高密度队列进行排队管理的难度随之上升,队列和队列的标识符即队列描述符(Queue Descriptor,QD)的存储都需要占用大量存储容量,在保证性能的前提下,降低芯片成本是必须要考虑的问题。In a network device, such as a Network Processor (NP), a Fabric Access Processor (FAP), a switch, a bridge, or a traffic management chip, a queue management function is integrated, and packets are processed by a queue. With the growth of the business, the number of queues that the network device needs to support increases exponentially, and the queues that have reached the storage capacity of 512K or more are increased. The difficulty of queue management for a large number of high-density queues increases, and the identifiers of queues and queues increase. That is, the storage of the Queue Descriptor (QD) requires a large amount of storage capacity. Under the premise of ensuring performance, reducing the cost of the chip is a problem that must be considered.
现有技术中,已有专利文献描述采用低成本大容量的高密度片外缓存——动态随机存储器(DRAM,Dynamic Random Access Memory),用来弥补片内缓存——静态随机存储器(SRAM,Static Random Access Memory)由于存储高密度队列和QD而导致存储容量不足的问题。例如专利文献US008180966B2《System and method for operating a packet buffer in an intermediate node》提供了一种报文数据缓存方法,该方法采用DRAM结合 小容量高速片内高速缓冲存储器(Cache)来存储队列和QD,解决了Cache存储容量不足的问题。每个队列在Cache中都有独立存储空间,Cache中存储若干队列的首报文的分片,并且QD也存储在Cache中,队列其余分片和尾分片则存储在DRAM中。新报文入队时,报文分片写入到存储在DRAM中的对应队列的尾部;新报文出队时,读取存储在Cache中的对应队列的首报文进行出队,如果存储在Cache中的报文分片已经全部出队,则会将新入队的报文分片存储在Cache中,或者会将存储在DRAM中的报文分片搬移至Cache中。In the prior art, the prior patent documents describe the use of a low-cost, large-capacity, high-density off-chip cache, Dynamic Random Access Memory (DRAM), to compensate for on-chip buffers - static random access memory (SRAM, Static). Random Access Memory) The problem of insufficient storage capacity due to storage of high-density queues and QDs. For example, the patent document US008180966B2 "System and method for operating a packet buffer in an intermediate node" provides a message data buffering method, which uses DRAM bonding. A small-capacity, high-speed on-chip cache (Cache) is used to store queues and QDs, which solves the problem of insufficient Cache storage capacity. Each queue has independent storage space in the Cache. The Cache stores the fragmentation of the first packet of several queues, and the QD is also stored in the Cache. The remaining fragments and tail fragments of the queue are stored in the DRAM. When a new packet is enqueued, the packet fragment is written to the tail of the corresponding queue stored in the DRAM; when the new packet is dequeued, the first packet of the corresponding queue stored in the Cache is read for dequeuing, if stored. If the packets in the Cache are all dequeued, the newly enqueued packets will be stored in the Cache, or the packets stored in the DRAM will be moved to the Cache.
上述方法虽然从一定程度上弥补了SRAM存储队列和QD而导致存储容量不足的问题,然而,对于高密度队列,QD的存储也需要消耗大量的存储空间,例如有1M存储容量的队列数,每个队列的QD的存储容量需要80bit,则需要80Mbit存储容量的SRAM,如果所有的QD都存储在Cache中,则在现有工艺下实现成本巨大,因此,在队列操作中,对于QD的存储和处理方法需要改进。专利文献US7277990《Method and apparatus providing efficient queue descriptor memory access》中提出了QD的存储处理改进方法。该改进方法中,QD存储在Cache或DRAM中,通过入队出队请求触发内容可寻址存储器(Content Addressable Memory,CAM),查询QD是否存储在Cache中,如果查询到,CAM返回QD在Cache中的存储地址,从Cache的对应存储地址中读取QD;如果未查询到,则从DRAM中读取QD,同时CAM将释放1个与触发操作无关队列的QD标签,腾出的QD标签表项存储新搬入的QD标签,排出的QD从Cache搬移至DRAM中。然而,该方法存在三个问题,第一,CAM的读写带宽不能任意扩展,当对CAM频繁写入时,此时CAM维护的队列号频繁变化,势必会影响CAM检索的效率;第二,同样容量的CAM比Cache占用更大的面积,功耗更大,CAM不能同时支持足够多的QD标签,故放在Cache的QD数量 会受到CAM容量影响;第三,高速高性能流量管理需要大容量大操作带宽的片内QD存储,Cache不能同时支持大容量和大读写带宽。可见,队列操作中,该方法由于需要频繁访问Cache和DRAM来获取QD,从而导致QD的存取效率低。Although the above method compensates for the shortage of storage capacity due to the SRAM storage queue and QD to some extent, for high-density queues, QD storage also needs to consume a large amount of storage space, for example, the number of queues with 1M storage capacity, each The QD storage capacity of the queue needs 80bit, and the SRAM of 80Mbit storage capacity is required. If all the QDs are stored in the Cache, the implementation cost is huge under the existing process. Therefore, in the queue operation, the QD is stored and The processing method needs to be improved. An improved storage processing method for QD is proposed in the patent document US Pat. No. 7,277,990, "Method and apparatus providing efficient queue descriptor memory access". In the improved method, the QD is stored in the Cache or the DRAM, and the Content Addressable Memory (CAM) is triggered by the enqueue request to query whether the QD is stored in the Cache. If the query is obtained, the CAM returns the QD in the Cache. The storage address in the memory reads the QD from the corresponding storage address of the Cache; if not, the QD is read from the DRAM, and the CAM will release a QD tag that is unrelated to the trigger operation, and the vacated QD tag table The item stores the newly moved QD tag, and the discharged QD is moved from the Cache to the DRAM. However, there are three problems in this method. First, the read/write bandwidth of the CAM cannot be arbitrarily expanded. When the CAM is frequently written, the queue number maintained by the CAM frequently changes, which inevitably affects the efficiency of CAM retrieval. Second, The CAM of the same capacity occupies a larger area than the Cache, and the power consumption is larger. The CAM cannot support enough QD tags at the same time, so the number of QDs placed in the Cache It will be affected by CAM capacity. Third, high-speed and high-performance traffic management requires on-chip QD storage with large capacity and large operating bandwidth. Cache cannot support large capacity and large read/write bandwidth at the same time. It can be seen that in the queue operation, the method requires frequent access to the Cache and the DRAM to acquire the QD, thereby resulting in low access efficiency of the QD.
因此,为解决在队列操作中QD存取效率低的问题,亟需寻找一种在队列操作中的处理方法。Therefore, in order to solve the problem of low QD access efficiency in queue operations, it is urgent to find a processing method in queue operation.
发明内容Summary of the invention
为解决现有存在的问题,本发明实施例期望提供一种队列操作中的处理方法、装置及计算机存储介质,能够提高在队列操作中QD的存取效率,实现QD的快速存取。In order to solve the existing problems, the embodiment of the present invention is to provide a processing method, a device, and a computer storage medium in a queue operation, which can improve the access efficiency of the QD in the queue operation and realize the fast access of the QD.
为达到上述目的,本发明的技术方案是这样实现的:In order to achieve the above object, the technical solution of the present invention is achieved as follows:
本发明实施例提供了一种队列操作中的处理方法,所述方法包括:An embodiment of the present invention provides a processing method in a queue operation, where the method includes:
获取待处理的报文信息所属队列的队列号;Obtain the queue number of the queue to which the packet information to be processed belongs.
在映射表中查询与所述队列号对应的队列描述符的存储位置信息和地址信息;Querying, in the mapping table, storage location information and address information of the queue descriptor corresponding to the queue number;
根据所述队列描述符的存储位置信息和地址信息,获取队列描述符,将所述队列描述符搬移至寄存器中,并更新映射表中与所述队列号对应的队列描述符的存储位置信息和地址信息;Obtaining a queue descriptor according to the storage location information and the address information of the queue descriptor, moving the queue descriptor to a register, and updating storage location information of the queue descriptor corresponding to the queue number in the mapping table and Address information;
根据所述队列描述符对所述待处理的报文信息进行队列操作,并在进行队列操作之后根据所述队列描述符、所述队列描述符更新后的存储位置信息和地址信息,对所述队列描述符进行更新。Performing a queue operation on the to-be-processed message information according to the queue descriptor, and after performing the queue operation, according to the queue descriptor, the updated storage location information and the address information of the queue descriptor, The queue descriptor is updated.
在一实施例中,所述根据所述队列描述符对所述待处理的报文信息进行队列操作,包括:In an embodiment, the performing queue operation on the to-be-processed message information according to the queue descriptor includes:
基于所述队列描述符和所述队列号,按照预设的拥塞避免策略,确定待处理的报文信息入队时,根据所述队列描述符对所述待处理的报文信息 进行入队操作;或者,And determining, according to the preset congestion avoidance policy, the packet information to be processed according to the queue descriptor, and the packet information to be processed according to the queue descriptor. Carry in the operation; or,
根据所述队列描述符对待处理的报文信息进行出队操作。Dequeuing the packet information to be processed according to the queue descriptor.
在一实施例中,当所述队列操作为入队操作时,所述根据所述队列描述符对所述待处理的报文信息进行队列操作,并在进行队列操作之后根据所述队列描述符、所述队列描述符更新后的存储位置信息和地址信息,对所述队列描述符进行更新,包括:In an embodiment, when the queue operation is an enqueue operation, the queue operation is performed according to the queue descriptor, and the queue descriptor is used according to the queue descriptor. And the storage location information and the address information after the queue descriptor is updated, and updating the queue descriptor, including:
申请指向动态随机存储器的报文缓存指针;Applying a message cache pointer to the dynamic random access memory;
根据所述报文缓存指针,将所述待处理的报文信息以及报文缓存指针存储至所述动态随机存储器中;And storing, according to the message buffer pointer, the to-be-processed message information and the message cache pointer into the dynamic random access memory;
根据所述报文缓存指针和已入队操作的报文信息,更新所述队列描述符,并将更新后的队列描述符存储至目标地址中,所述目标地址为所述队列描述符更新后的存储位置信息和地址信息所确定的地址。Updating the queue descriptor according to the message cache pointer and the message information of the enqueue operation, and storing the updated queue descriptor in the target address, where the target address is updated after the queue descriptor is updated The storage location information and the address determined by the address information.
在一实施例中,当所述队列操作为出队操作时,所述根据所述队列描述符对所述待处理的报文信息进行队列操作,并在进行队列操作之后根据所述队列描述符、所述队列描述符更新后的存储位置信息和地址信息,对所述队列描述符进行更新,包括:In an embodiment, when the queue operation is a dequeuing operation, the queue operation is performed according to the queue descriptor, and the queue descriptor is used according to the queue descriptor. And the storage location information and the address information after the queue descriptor is updated, and updating the queue descriptor, including:
根据所述队列描述符,在所述动态随机存储器读取待处理的报文信息和下一个正在排队的报文缓存指针,并将所述待处理的报文信息进行出队;Decoding the to-be-processed message information and the next queued message buffer pointer in the dynamic random access memory according to the queue descriptor, and dequeuing the to-be-processed message information;
释放已出队的报文信息的报文缓存指针;A message buffer pointer for releasing the dequeued message information;
根据所述下一个正在排队的报文缓存指针和所述已出队操作的报文信息,更新所述队列描述符,并将更新后的队列描述符存储至目标地址中,所述目标地址为所述队列描述符更新后的存储位置信息和地址信息所确定的地址。Updating the queue descriptor according to the next message buffer pointer being queued and the message information of the dequeued operation, and storing the updated queue descriptor in a target address, where the target address is The stored location information and the address determined by the address information after the queue descriptor is updated.
在一实施例中,所述对所述队列描述符进行更新之后,所述方法还包括: In an embodiment, after the updating the queue descriptor, the method further includes:
检测更新前的队列描述符对应队列的队列号;Detecting the queue number of the queue corresponding to the queue descriptor before the update;
当更新后的队列描述符对应队列的队列号与所有更新前的队列描述符对应队列的队列号都不一致时,将存储在所述寄存器中更新后的所述队列描述符搬移至高速缓冲存储器中;When the queue number of the queue descriptor corresponding to the updated queue is inconsistent with the queue number of the queue corresponding to all the queue descriptors before the update, the updated queue descriptor stored in the register is moved to the cache memory. ;
更新映射表中与队列号对应的更新后的所述队列描述符的存储位置信息和地址信息。The storage location information and the address information of the updated queue descriptor corresponding to the queue number in the mapping table are updated.
在一实施例中,所述方法还包括:In an embodiment, the method further includes:
实时检测所述高速缓冲存储器的空间使用情况,获得第一检测结果;Detecting the space usage of the cache memory in real time, and obtaining a first detection result;
当所述第一检测结果大于预设的第一阈值时,将存储在所述高速缓冲存储器中、且活跃度小于预设的第二阈值的队列的队列描述符搬移至动态随机存储器内;When the first detection result is greater than the preset first threshold, the queue descriptor of the queue stored in the cache and having the activity less than the preset second threshold is moved to the dynamic random access memory;
释放搬移出所述高速缓冲存储器的队列描述符的指针。A pointer to the queue descriptor that is moved out of the cache is released.
在一实施例中,所述方法还包括:In an embodiment, the method further includes:
实时检测所述寄存器的空间使用情况,获得第二检测结果;Real-time detecting the space usage of the register to obtain a second detection result;
当第二检测结果大于预设的第三阈值时,停止输出所述待处理的报文信息中的报文描述符和对待处理的报文信息进行入队操作,直至第二检测结果小于预设的第四阈值时,恢复输出所述待处理的报文信息中的报文描述符和对待处理的报文信息进行入队操作,所述预设的第四阈值小于所述预设的第三阈值。When the second detection result is greater than the preset third threshold, the message descriptor in the to-be-processed message information and the message information to be processed are stopped from being queued until the second detection result is less than the preset. The fourth threshold is restored, and the message descriptor in the to-be-processed message information and the message information to be processed are resumed, and the preset fourth threshold is smaller than the preset third. Threshold.
本发明实施例还提供了一种队列操作中的处理装置,所述装置包括:获取模块、查询模块、第一搬移模块和第一处理模块;其中,The embodiment of the present invention further provides a processing device in a queue operation, where the device includes: an obtaining module, a query module, a first moving module, and a first processing module;
所述获取模块,配置为获取待处理的报文信息所属队列的队列号;The acquiring module is configured to obtain a queue number of a queue to which the packet information to be processed belongs;
所述查询模块,配置为在映射表中查询与所述队列号对应的队列描述符的存储位置信息和地址信息;The query module is configured to query, in the mapping table, storage location information and address information of the queue descriptor corresponding to the queue number;
所述第一搬移模块,配置为根据所述队列描述符的存储位置信息和地 址信息,获取队列描述符,将所述队列描述符搬移至寄存器中,并更新映射表中与所述队列号对应的队列描述符的存储位置信息和地址信息;The first moving module is configured to store location information and a ground according to the queue descriptor Address information, obtaining a queue descriptor, moving the queue descriptor into a register, and updating storage location information and address information of the queue descriptor corresponding to the queue number in the mapping table;
所述第一处理模块,配置为根据所述队列描述符对所述待处理的报文信息进行队列操作,并在进行队列操作之后根据所述队列描述符、所述队列描述符更新后的存储位置信息和地址信息,对所述队列描述符进行更新。The first processing module is configured to perform a queue operation on the to-be-processed message information according to the queue descriptor, and after the queue operation, update the storage according to the queue descriptor and the queue descriptor. Location information and address information are updated for the queue descriptor.
在一实施例中,所述第一处理模块,具体配置为基于所述队列描述符和所述队列号,按照预设的拥塞避免策略,确定待处理的报文信息入队时,根据所述队列描述符对所述待处理的报文信息进行入队操作;或者,根据所述队列描述符对待处理的报文信息进行出队操作。In an embodiment, the first processing module is configured to determine, according to the preset congestion avoidance policy, that the to-be-processed packet information is enqueued according to the queue descriptor and the queue number, according to the The queue descriptor performs the enqueue operation on the to-be-processed packet information; or performs the dequeuing operation according to the packet information to be processed according to the queue descriptor.
在一实施例中,当所述队列操作为入队操作时,所述第一处理模块包括:申请单元、存储单元和第一更新单元;其中,In an embodiment, when the queue is operated as a queue operation, the first processing module includes: an application unit, a storage unit, and a first update unit;
所述申请单元,配置为申请指向动态随机存储器的报文缓存指针;The application unit is configured to apply for a message cache pointer to the dynamic random access memory;
所述存储单元,配置为根据所述报文缓存指针,将所述待处理的报文信息以及报文缓存指针存储至所述动态随机存储器中;The storage unit is configured to store the to-be-processed message information and the message cache pointer in the dynamic random access memory according to the message buffer pointer;
所述第一更新单元,配置为根据所述报文缓存指针和已入队操作的报文信息,更新所述队列描述符,并将更新后的队列描述符存储至目标地址中,所述目标地址为所述队列描述符更新后的存储位置信息和地址信息所确定的地址。The first update unit is configured to update the queue descriptor according to the message cache pointer and the message information of the queued operation, and store the updated queue descriptor in the target address, the target The address is an address determined by the storage location information and the address information after the queue descriptor is updated.
在一实施例中,当所述队列操作为出队操作时,所述第一处理模块包括:读取单元、释放单元和第二更新单元;其中,In an embodiment, when the queue operation is a dequeuing operation, the first processing module includes: a reading unit, a releasing unit, and a second updating unit; wherein
所述读取单元,配置为根据所述队列描述符,在所述动态随机存储器读取待处理的报文信息和下一个正在排队的报文缓存指针,并将所述待处理的报文信息进行出队;The reading unit is configured to read the to-be-processed message information and the next queued message buffer pointer in the dynamic random access memory according to the queue descriptor, and the to-be-processed message information Carry out the team;
所述释放单元,配置为释放已出队的报文信息的报文缓存指针;The release unit is configured to release a message cache pointer of the dequeued message information;
所述第二更新单元,配置为根据所述下一个正在排队的报文缓存指针 和所述已出队操作的报文信息,更新所述队列描述符,并将更新后的队列描述符存储至目标地址中,所述目标地址为所述队列描述符更新后的存储位置信息和地址信息所确定的地址。The second update unit is configured to cache a pointer according to the next queued message And the message information of the dequeued operation, updating the queue descriptor, and storing the updated queue descriptor in a target address, where the target address is updated storage location information of the queue descriptor and The address determined by the address information.
在一实施例中,所述装置还包括:第一检测模块、第二搬移模块和更新映射表模块;其中,In an embodiment, the device further includes: a first detecting module, a second moving module, and an update mapping table module; wherein
所述第一检测模块,配置为检测更新前的队列描述符对应队列的队列号;The first detecting module is configured to detect a queue number of a queue corresponding to the queue descriptor before the update;
所述第二搬移模块,配置为当更新后的队列描述符对应队列的队列号与所有更新前的队列描述符对应队列的队列号都不一致时,将存储在所述寄存器中更新后的所述队列描述符搬移至高速缓冲存储器中;The second moving module is configured to: when the queue number of the updated queue descriptor corresponding queue is different from the queue number of the queue queue corresponding to all the updated queue descriptors, the updated content stored in the register The queue descriptor is moved to the cache;
所述更新映射表模块,配置为更新映射表中与队列号对应的更新后的所述队列描述符的存储位置信息和地址信息。The update mapping table module is configured to update storage location information and address information of the updated queue descriptor corresponding to the queue number in the mapping table.
在一实施例中,所述装置还包括:第二检测模块、第三搬移模块和释放模块;其中,In an embodiment, the device further includes: a second detecting module, a third moving module, and a releasing module; wherein
所述第二检测模块,配置为实时检测所述高速缓冲存储器的空间使用情况,获得第一检测结果;The second detecting module is configured to detect a space usage of the cache memory in real time, and obtain a first detection result;
所述第三搬移模块,配置为当所述第一检测结果大于预设的第一阈值时,将存储在所述高速缓冲存储器中、且活跃度小于预设的第二阈值的队列的队列描述符搬移至动态随机存储器内;The third moving module is configured to: when the first detection result is greater than a preset first threshold, a queue description of a queue that is stored in the cache and whose activity is less than a preset second threshold Move to dynamic random access memory;
所述释放模块,配置为释放搬移出所述高速缓冲存储器的队列描述符的指针。The release module is configured to release a pointer that moves out of the cache descriptor of the cache.
上述装置中,所述装置还包括:第三检测模块、第二处理模块;其中,In the above device, the device further includes: a third detecting module and a second processing module; wherein
所述第三检测模块,配置为实时检测所述寄存器的空间使用情况,获得第二检测结果;The third detecting module is configured to detect a space usage of the register in real time, and obtain a second detection result;
所述第二处理模块,配置为当第二检测结果大于预设的第三阈值时, 停止输出所述待处理的报文信息中的报文描述符和对待处理的报文信息进行入队操作,直至第二检测结果小于预设的第四阈值时,恢复输出所述待处理的报文信息中的报文描述符和对待处理的报文信息进行入队操作,所述预设的第四阈值小于所述预设的第三阈值。The second processing module is configured to: when the second detection result is greater than a preset third threshold, And stopping to output the message descriptor in the to-be-processed message information and the message information to be processed, and performing the enqueuing operation until the second detection result is less than the preset fourth threshold, and outputting the to-be-processed report. The message descriptor in the text information and the message information to be processed are enqueued, and the preset fourth threshold is smaller than the preset third threshold.
本发明实施例提供的队列操作中的处理方法、装置及计算机存储介质,首先,获取待处理的报文信息所属队列的队列号;然后,在映射(Map)表中查询与所述队列号对应的QD的存储位置信息和地址信息;根据所述QD的存储位置信息和地址信息,获取QD,将所述QD搬移至寄存器(Reg)中,并更新Map表中与所述队列号对应的QD的存储位置信息和地址信息;最后,根据所述QD对所述待处理的报文信息进行队列操作,并在进行队列操作之后根据所述QD、所述QD更新后的存储位置信息和地址信息,对所述QD进行更新。The processing method, the device, and the computer storage medium in the queue operation provided by the embodiment of the present invention firstly obtain the queue number of the queue to which the packet information to be processed belongs; and then query the mapping table to correspond to the queue number. The storage location information and the address information of the QD; acquiring the QD according to the storage location information and the address information of the QD, moving the QD to a register (Reg), and updating the QD corresponding to the queue number in the Map table And storing the location information and the address information according to the QD, and performing the queue operation on the to-be-processed message information according to the QD, and after the queue operation, according to the QD, the updated storage location information and the address information of the QD Update the QD.
本发明实施例还提供了一种计算机存储介质,所述计算机存储介质中存储有计算机可执行指令,所述计算机可执行指令用于执行本发明实施例所述的队列操作中的处理方法。The embodiment of the invention further provides a computer storage medium, wherein the computer storage medium stores computer executable instructions, and the computer executable instructions are used to execute the processing method in the queue operation according to the embodiment of the invention.
可见,本发明实施例通过Map表查询与待处理的报文信息所属队列的队列号对应的QD的存储位置和存储地址;根据所述QD的存储位置和存储地址,从而获取QD,将所述QD搬移至Reg中,并更新Map中与所述队列号对应的QD的存储位置为Reg和存储地址为在Reg中的对应地址;根据所述QD对所述待处理的报文信息进行队列操作,在进行队列操作之后根据所述QD、所述QD更新后的存储位置和存储地址,对所述QD进行更新,从而保证了队列操作中的QD动态存取的实效性,提高了在队列操作中QD的存取效率,实现了QD的快速存取,保证了集成队列排队管理功能的系统性能。 It can be seen that, in the embodiment of the present invention, the storage location and the storage address of the QD corresponding to the queue number of the queue to which the packet information to be processed belongs are searched through the Map table; and the QD is obtained according to the storage location and the storage address of the QD, and the QD is obtained. The QD is moved to the Reg, and the storage location of the QD corresponding to the queue number in the Map is Reg and the storage address is the corresponding address in the Reg; the queue is processed according to the QD. After the queue operation, the QD is updated according to the QD, the updated storage location and the storage address of the QD, thereby ensuring the effectiveness of the QD dynamic access in the queue operation, and improving the queue operation. The access efficiency of QD enables fast access of QD and ensures the system performance of integrated queue queuing management function.
附图说明DRAWINGS
图1为本发明队列操作中的处理方法实施例一的实现流程示意图之一;1 is a schematic diagram of an implementation flow of Embodiment 1 of a processing method in a queue operation according to the present invention;
图2为Map表中Reg、Cache以及DRAM间的映射关系示意图;2 is a schematic diagram of mapping relationships between Reg, Cache, and DRAM in a Map table;
图3为本发明队列操作中的处理方法实施例一的实现流程示意图之二;3 is a second schematic diagram of the implementation process of the first embodiment of the processing method in the queue operation according to the present invention;
图4为本发明队列操作中的处理方法实施例一的实现流程示意图之三;4 is a third schematic flowchart of the implementation process of the first embodiment of the processing method in the queue operation of the present invention;
图5为本发明队列操作中的处理方法实施例二的实现流程示意图;5 is a schematic flowchart of an implementation process of a second embodiment of a processing method in a queue operation according to the present invention;
图6为Cache的使用空间的实时检测流程示意图;6 is a schematic diagram of a real-time detection process of a Cache usage space;
图7为QD从Cache搬移至DRAM的实现流程示意图;7 is a schematic diagram showing an implementation flow of moving a QD from a Cache to a DRAM;
图8a为队列号与QD的映射关系示意图;FIG. 8a is a schematic diagram of a mapping relationship between a queue number and a QD;
图8b为Cache活跃链表的结构示意图;FIG. 8b is a schematic structural diagram of a Cache active linked list;
图8c为Cache空闲链表的结构示意图;FIG. 8c is a schematic structural diagram of a Cache idle list;
图9为Reg的使用空间的实时检测流程示意图;9 is a schematic diagram of a real-time detection flow of a usage space of Reg;
图10为本发明队列操作中的处理方法实施例五的应用场景示意图之一;10 is a schematic diagram of an application scenario of Embodiment 5 of a processing method in a queue operation according to the present invention;
图11为读取QD时的搬移流程示意图;11 is a schematic diagram of a moving process when reading a QD;
图12为本发明队列操作中的处理方法实施例五的应用场景示意图之二;FIG. 12 is a second schematic diagram of an application scenario of Embodiment 5 of a processing method in a queue operation according to the present invention;
图13为本发明队列操作中的处理装置实施例一的组成结构示意图;FIG. 13 is a schematic structural diagram of a first embodiment of a processing apparatus in a queue operation according to the present invention; FIG.
图14为图13所示处理装置中第一处理模块的细化组成结构示意图之一;14 is a schematic diagram showing a detailed composition structure of a first processing module in the processing apparatus shown in FIG. 13;
图15为图13所示处理装置中第一处理模块的细化组成结构示意图之二;Figure 15 is a second schematic diagram showing the detailed composition of the first processing module in the processing apparatus shown in Figure 13;
图16为本发明队列操作中的处理装置实施例二的组成结构示意图;16 is a schematic structural diagram of a second embodiment of a processing apparatus in a queue operation according to the present invention;
图17为Cache的使用空间的实时检测功能模块示意图;17 is a schematic diagram of a real-time detection function module of a Cache usage space;
图18为Reg的使用空间的实时检测功能模块示意图。 FIG. 18 is a schematic diagram of a real-time detection function module of the usage space of Reg.
具体实施方式detailed description
本发明实施例提供的队列操作中的处理方法,主要应用在集成队列排队管理功能的系统上,通过Map表查询与待处理的报文信息所属队列的队列号对应的QD的存储位置和存储地址;根据所述QD的存储位置和存储地址,从而获取QD,将所述QD搬移至Reg中,并更新Map中与所述队列号对应的QD的存储位置为Reg和存储地址为在Reg中的对应地址;根据所述QD对所述待处理的报文信息进行队列操作,在进行队列操作之后根据所述QD、所述QD更新后的存储位置和存储地址,对所述QD进行更新,能够提高队列操作中的QD的存取效率,实现QD的快速存取。The processing method in the queue operation provided by the embodiment of the present invention is mainly applied to the system for integrating the queuing management function of the queue, and the storage location and the storage address of the QD corresponding to the queue number of the queue to which the packet information to be processed belongs are searched through the Map table. Obtaining a QD according to the storage location and the storage address of the QD, moving the QD to the Reg, and updating the storage location of the QD corresponding to the queue number in the Map to be Reg and the storage address being in Reg Corresponding address; performing a queue operation on the to-be-processed message information according to the QD, and updating the QD according to the QD, the updated storage location and the storage address of the QD after performing the queue operation, Improve the access efficiency of QD in queue operations and achieve fast access to QD.
本发明目的的实现、功能特点及优点将结合实施例,参照附图做进一步说明。应当理解,此处所描述的具体实施例仅仅用以解释本发明,并不用于限定本发明。The implementation, functional features, and advantages of the present invention will be further described in conjunction with the embodiments. It is understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
图1为本发明队列操作中的处理方法实施例一的实现流程示意图之一,参照图1所示,本实施例中队列操作中的处理方法包括以下步骤:FIG. 1 is a schematic diagram of an implementation flow of a first embodiment of a processing method in a queue operation according to the present invention. Referring to FIG. 1 , the processing method in the queue operation in this embodiment includes the following steps:
步骤101,获取待处理的报文信息所属队列的队列号;Step 101: Obtain a queue number of a queue to which the packet information to be processed belongs.
这里,所述待处理的报文信息包括报文数据和报文描述符,所述报文描述符包括队列号和报文长度。Here, the to-be-processed message information includes message data and a message descriptor, and the message descriptor includes a queue number and a message length.
作为一种实施方式,根据对待处理的报文信息进行队列操作的不同,获取队列号的方法也不同,队列操作包括入队操作和出队操作;在本实施例中,在入队操作中,从网络接收待处理的报文信息,根据报文信息中的报文描述符来确定队列号;在出队操作中,获取等待调度的端口信息和等待调度的队列信息,根据所述端口信息和队列信息,通过调度算法如时间片轮转法(Round-Robin,RR)调度算法或严格优先级(Strict Priority,SP)调度算法,计算得到队列号。As an implementation manner, the method for obtaining the queue number is different according to the queue operation of the message information to be processed, and the queue operation includes the enqueue operation and the dequeue operation; in this embodiment, in the enqueue operation, Receiving the to-be-processed message information from the network, determining the queue number according to the message descriptor in the message information; in the dequeuing operation, acquiring the port information waiting for scheduling and the queue information waiting to be scheduled, according to the port information and The queue information is calculated by a scheduling algorithm such as a Round-Robin (RR) scheduling algorithm or a Strict Priority (SP) scheduling algorithm.
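For the dequeue case, the paragraph above mentions scheduling algorithms such as RR or SP for deriving the queue number from the port and queue information waiting to be scheduled. As a hedged illustration, a minimal strict-priority (SP) selection sketch follows; the convention that a lower queue number means higher priority, and the array-based interface, are assumptions for illustration only.

```c
#include <stdint.h>
#include <stdbool.h>

#define NUM_QUEUES 8u                 /* illustrative number of queues on a port  */

/* Strict priority (SP): always serve the lowest-numbered (assumed highest
 * priority) non-empty queue among those waiting to be scheduled on the port.   */
bool sp_select(const uint32_t depth[NUM_QUEUES], uint32_t *queue_number)
{
    for (uint32_t q = 0; q < NUM_QUEUES; q++) {
        if (depth[q] > 0) {           /* queue q has data waiting                 */
            *queue_number = q;
            return true;
        }
    }
    return false;                     /* nothing to schedule on this port         */
}
```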
步骤102,在映射表中查询与所述队列号对应的队列描述符的存储位置 信息和地址信息;Step 102: Query a storage location of a queue descriptor corresponding to the queue number in a mapping table. Information and address information;
这里,所述Map表用来存储与队列号一一对应的QD的存储位置信息和地址信息;图2为Map表中Reg、Cache以及DRAM间的映射关系示意图,参照图2所示,Map表中每行表示与队列号对应的QD的存储位置信息和地址信息,如第一行序号“1”为队列号,“在Reg中”为与队列号1对应的QD的存储位置信息,表示与队列号1对应的QD存储在Reg中,“Reg pointer 5”为与队列号1对应的QD的地址信息,表示与队列号1对应的QD存储在Reg中pointer 5所指示的地址;在本实施例中,根据待处理的报文信息所属队列的队列号,在Map表查询与所述队列号对应的QD的存储位置信息和地址信息,存储位置信息表示与所述队列号对应的QD是存储在Reg中、还是Cache中、还是DRAM中,或QD显示队列为空,所述QD显示队列为空表示在Reg中、Cache中和DRAM中都不存在与所述队列号对应的QD;地址信息表示与所述队列号对应的QD的存储位置中的地址。Here, the Map table is used to store the storage location information and the address information of the QD corresponding to the queue number one by one; FIG. 2 is a schematic diagram of the mapping relationship between the Reg, the Cache and the DRAM in the Map table, as shown in FIG. 2, the Map table Each row represents the storage location information and address information of the QD corresponding to the queue number. For example, the first row number "1" is the queue number, and "in Reg" is the storage location information of the QD corresponding to the queue number 1, indicating The QD corresponding to the queue number 1 is stored in Reg, and the "Reg pointer 5" is the address information of the QD corresponding to the queue number 1, indicating that the QD corresponding to the queue number 1 is stored in the address indicated by the pointer 5 in the Reg; In the example, the storage location information and the address information of the QD corresponding to the queue number are queried in the Map table according to the queue number of the queue to which the packet information to be processed belongs, and the storage location information indicates that the QD corresponding to the queue number is stored. In Reg, in Cache, or in DRAM, or QD display queue is empty, the QD display queue is empty, indicating that there is no QD corresponding to the queue number in Reg, Cache, and DRAM; address information Representation with the queue QD corresponding storage location address.
步骤103,根据所述队列描述符的存储位置信息和地址信息,获取队列描述符,将所述队列描述符搬移至寄存器中,并更新映射表中与所述队列号对应的队列描述符的存储位置信息和地址信息;Step 103: Acquire a queue descriptor according to the storage location information and the address information of the queue descriptor, move the queue descriptor to a register, and update a storage of a queue descriptor corresponding to the queue number in the mapping table. Location information and address information;
这里,基于所述QD的存储位置信息和地址信息确定一个具体地址,从确定的具体地址中读取QD;并根据所述QD的存储位置信息,将没有存储在Reg中的所述QD搬移至Reg中,更新Map表中与所述队列号对应的QD的存储位置信息为“在Reg中”,同时更新Map表中与所述队列号对应的QD的地址信息为“存储位置Reg中的地址”。Here, a specific address is determined based on the storage location information and the address information of the QD, and the QD is read from the determined specific address; and the QD not stored in the Reg is moved to the QD according to the storage location information of the QD. In the Reg, the storage location information of the QD corresponding to the queue number in the update map table is "in Reg", and the address information of the QD corresponding to the queue number in the Map table is updated as "the address in the storage location Reg". ".
作为一种实施方式,在本实施例中,根据所述QD的存储位置信息,判断所述QD是存储在Reg中、还是Cache中、还是DRAM中,或QD显示队列为空;当所述QD存储在Reg中时,则采用所述QD的地址信息中 所指示的地址从Reg中读取QD,此时,由于所述队列描述存储在Reg中,不需要对其进行搬移操作和更新Map表操作;当所述QD存储在Cache中时,则采用所述QD的地址信息中所指示的地址从Cache中读取QD,此时,向Reg申请一个空闲地址,将存储在Cache中的所述QD搬移至Reg的所述空闲地址中,并更新Map表中与所述队列号对应的QD的存储位置信息和地址信息为“在Reg中,存储位置Reg中的地址”;当所述QD存储在DRAM中时,此时,所述QD的地址信息无意义,采用所述队列号从DRAM中与所述队列号对应的地址读取QD,向Reg中申请一个空闲地址,将存储在DRAM中的所述QD搬移至Reg的所述空闲地址中,并更新Map表中与所述队列号对应的QD的存储位置信息和地址信息为“在Reg中,存储位置Reg中的地址”;当所述QD显示队列为空时,此时,向Reg申请一个空闲地址,在所述空闲地址新建QD,并更新Map表中与所述队列号对应的QD的存储位置信息和地址信息为“在Reg中,存储位置Reg中的地址”。In an embodiment, in the embodiment, determining, according to the storage location information of the QD, whether the QD is stored in the Reg, the Cache, or the DRAM, or the QD display queue is empty; when the QD is When stored in Reg, the address information of the QD is used. The indicated address reads the QD from the Reg. At this time, since the queue description is stored in the Reg, it is not required to perform the moving operation and update the Map table operation; when the QD is stored in the Cache, the The address indicated in the address information of the QD reads the QD from the Cache. At this time, the Reg applies for a free address, moves the QD stored in the Cache to the free address of the Reg, and updates the Map table. The storage location information and the address information of the QD corresponding to the queue number are "in Reg, the address in the storage location Reg"; when the QD is stored in the DRAM, at this time, the address information of the QD is not Meaning, using the queue number to read the QD from the address corresponding to the queue number in the DRAM, applying a free address to the Reg, moving the QD stored in the DRAM to the free address of the Reg, and Updating the storage location information and address information of the QD corresponding to the queue number in the Map table is "in Reg, storing the address in the location Reg"; when the QD display queue is empty, at this time, applying to the Reg Free address, create a new QD at the free address Map table and update the corresponding queue number QD storage location information and address information as "Reg, the storage location address in Reg."
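As a hedged illustration of this location-dependent handling, the following C sketch fetches a QD according to its Map-table entry, moves it into a free register slot when it is not already there (or creates a new QD when the queue is empty), and rewrites the Map entry to point at the register. The types, stand-in storage, and function names are assumptions for illustration, not the patent's implementation.

```c
#include <stdint.h>

typedef enum { QD_IN_REG, QD_IN_CACHE, QD_IN_DRAM, QD_EMPTY } QdLocation;

typedef struct { QdLocation location; uint32_t address; } MapEntry;
typedef struct { uint32_t head, tail, depth; } QueueDescriptor;

/* Tiny stand-in storage, just enough for the sketch to compile (illustrative). */
static QueueDescriptor reg_mem[16], cache_mem[64], dram_mem[1024];
static uint32_t reg_next_free = 0;

static QueueDescriptor reg_read(uint32_t a)           { return reg_mem[a]; }
static QueueDescriptor cache_read(uint32_t a)         { return cache_mem[a]; }
static QueueDescriptor dram_read(uint32_t qnum)       { return dram_mem[qnum]; }
static void reg_write(uint32_t a, QueueDescriptor qd) { reg_mem[a] = qd; }
static uint32_t reg_alloc_free_slot(void)             { return reg_next_free++; }

/* Fetch the QD of a queue and make sure it ends up in the register, updating
 * the Map entry so that later accesses for this queue hit the register.        */
QueueDescriptor fetch_qd_into_reg(uint32_t queue_number, MapEntry *e)
{
    QueueDescriptor qd;

    if (e->location == QD_IN_REG)
        return reg_read(e->address);          /* already in Reg, nothing to move   */

    if (e->location == QD_IN_CACHE)
        qd = cache_read(e->address);          /* a single Cache access             */
    else if (e->location == QD_IN_DRAM)
        qd = dram_read(queue_number);         /* DRAM is addressed by queue number */
    else
        qd = (QueueDescriptor){0, 0, 0};      /* empty queue: a new QD is created  */

    uint32_t slot = reg_alloc_free_slot();    /* apply for a free Reg address      */
    reg_write(slot, qd);                      /* move the QD into the register     */
    e->location = QD_IN_REG;                  /* Map now records "in Reg, slot"    */
    e->address  = slot;
    return qd;
}
```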
步骤104,根据所述队列描述符对所述待处理的报文信息进行队列操作,并在进行队列操作之后根据所述队列描述符、所述队列描述符更新后的存储位置信息和地址信息,对所述队列描述符进行更新。Step 104: Perform a queue operation on the to-be-processed message information according to the queue descriptor, and after the queue operation, according to the queue descriptor, the updated storage location information, and the address information of the queue descriptor, The queue descriptor is updated.
作为一种实施方式,在入队操作中,从网络上接收待入队的报文信息,根据报文信息中的报文描述符确定队列号;或在出队操作中,获取等待调度的端口信息和等待调度的队列信息,根据所述端口信息和队列信息,通过调度算法如RR调度算法或SP调度算法,计算得到队列号;As an implementation manner, in the enqueue operation, the packet information to be queued is received from the network, and the queue number is determined according to the message descriptor in the packet information; or in the dequeuing operation, the port waiting for scheduling is obtained. The information and the queue information waiting for scheduling are calculated according to the port information and the queue information by a scheduling algorithm such as an RR scheduling algorithm or an SP scheduling algorithm, and the queue number is calculated;
在Map表查询与所述队列号对应的QD的存储位置信息和地址信息;Querying, in the Map table, storage location information and address information of the QD corresponding to the queue number;
根据所述QD的存储位置信息,判断所述QD是存储在Reg中、还是Cache中、还是DRAM中,或QD显示队列为空;当所述QD存储在Reg中时,采用所述QD的地址信息中所指示的地址从Reg中读取QD;当所述QD存储在Cache中时,采用所述QD的地址信息中所指示的地址从Cache 中读取QD,同时在Reg中申请一个空闲地址,将存储Cache中的所述QD搬移至Reg中的所述空闲地址中,并更新Map表中与所述队列号对应的QD的存储位置信息和地址信息为“在Reg中,存储位置Reg中的地址”;当所述QD存储在DRAM中时,此时,采用所述队列号从DRAM中与所述队列号对应的地址读取QD,同时在Reg中申请一个空闲地址,将存储在DRAM中的所述QD搬移至Reg中的所述空闲地址中,并更新Map表中与所述队列号对应的QD的存储位置信息和地址信息为“在Reg中,存储位置Reg中的地址”;当所述QD显示队列为空时,向Reg申请一个空闲地址,在所述空闲地址新建QD,并更新Map表中与所述队列号对应的QD的存储位置信息和地址信息为“在Reg中,存储位置Reg中的地址”;Determining, according to the storage location information of the QD, whether the QD is stored in the Reg, the Cache, or the DRAM, or the QD display queue is empty; when the QD is stored in the Reg, the address of the QD is adopted. The address indicated in the information reads the QD from the Reg; when the QD is stored in the Cache, the address indicated in the address information of the QD is used from the Cache. Reading the QD, and applying a free address in the Reg, moving the QD in the storage Cache to the free address in the Reg, and updating the storage location information of the QD corresponding to the queue number in the Map table. And the address information is "in Reg, the address in the storage location Reg"; when the QD is stored in the DRAM, at this time, the queue number is used to read the QD from the address corresponding to the queue number in the DRAM, At the same time, applying a free address in the Reg, moving the QD stored in the DRAM to the free address in the Reg, and updating the storage location information and the address information of the QD corresponding to the queue number in the Map table as "In Reg, store the address in the location Reg"; when the QD display queue is empty, apply for a free address to the Reg, create a new QD at the free address, and update the map table corresponding to the queue number The storage location information and address information of the QD is "in Reg, the address in the storage location Reg";
根据所述QD对所述待处理的报文信息进行队列操作,并在进行队列操作之后根据所述QD、所述QD更新后的存储位置信息和地址信息,对所述QD进行更新。Performing a queue operation on the to-be-processed message information according to the QD, and updating the QD according to the QD, the QD updated storage location information, and address information after performing the queue operation.
可以理解的是,在队列操作中,根据Map表查询与队列号对应的QD的存储位置信息和地址信息,根据所述QD的存储位置信息和地址信息,获取QD;将所述QD搬移至Reg中,并更新Map表中与所述队列号对应的QD的存储位置信息和地址信息;根据所述QD、更新后的QD的存储位置信息和地址信息,对所述QD进行更新就能实现QD的快速存取是因为:Map表中存储着与队列号一一对应的QD的存储位置信息和地址信息,在队列操作中,通过查询Map表就能得到与所述队列号对应的QD的存储位置和存储地址,从而当QD存储在Cache或DRAM中时,只需要一次访问Cache或DRAM就能获取到QD,提高QD的获取效率,实现QD的快速获取;将存储在Cache或DRAM中的QD搬移至Reg中,并更新Map表中与所述队列号对应的QD的存储位置信息和地址信息,在队列操作之后,根据所述QD、所述QD更新后的存储位置信息和地址信息对所述QD进行更 新,从而在更新所述QD时只需要访问Reg,提高QD的存储效率,实现QD的快速存储。It can be understood that, in the queue operation, the storage location information and the address information of the QD corresponding to the queue number are queried according to the Map table, and the QD is obtained according to the storage location information and the address information of the QD; and the QD is moved to the Reg And updating the storage location information and the address information of the QD corresponding to the queue number in the Map table; and updating the QD according to the QD, the updated storage location information of the QD, and the address information to implement the QD The fast access is because: the Map table stores the storage location information and the address information of the QD corresponding to the queue number one by one. In the queue operation, the QD storage corresponding to the queue number can be obtained by querying the Map table. Location and storage address, so when QD is stored in Cache or DRAM, only need to access Cache or DRAM once to get QD, improve QD acquisition efficiency, achieve fast QD acquisition; QD will be stored in Cache or DRAM Moving to the Reg, and updating the storage location information and the address information of the QD corresponding to the queue number in the Map table, after the queue operation, according to the QD, the updated storage location information of the QD, and The address information more QD New, so only need to access Reg when updating the QD, improve the storage efficiency of QD, and achieve fast storage of QD.
进一步地,图3为本发明队列操作中的处理方法实施例一的实现流程示意图之二,基于所述QD和所述队列号,按照预设的拥塞避免策略,确定待处理的报文信息入队时,根据所述QD对所述待处理的报文信息进行入队操作,参照图3所示,步骤104具体包括以下步骤:Further, FIG. 3 is a schematic diagram of the implementation process of the first embodiment of the processing method in the queue operation according to the present invention. Based on the QD and the queue number, the packet information to be processed is determined according to a preset congestion avoidance policy. The team enters the packet information to be processed according to the QD. Referring to FIG. 3, step 104 specifically includes the following steps:
步骤1041,申请指向动态随机存储器的报文缓存指针;Step 1041: Apply a message buffer pointer to the dynamic random access memory;
步骤1042,根据所述报文缓存指针,将所述待处理的报文信息以及报文缓存指针存储至所述动态随机存储器中;Step 1042: Store the to-be-processed message information and the message cache pointer into the dynamic random access memory according to the message buffer pointer.
步骤1043,根据所述报文缓存指针和已入队操作的报文信息,更新所述队列描述符,并将更新后的队列描述符存储至目标地址中,所述目标地址为所述队列描述符更新后的存储位置信息和地址信息所确定的地址。Step 1043: Update the queue descriptor according to the packet buffer pointer and the packet information of the enqueue operation, and store the updated queue descriptor in the target address, where the target address is the queue description. The address determined by the updated storage location information and address information.
这里,所述QD包括队列的首指针、队列的尾指针和队列深度,所述预设的拥塞避免策略可以为加权随机早期检测算法;在本实施例中,根据所述待处理的报文信息所属队列的QD和所述队列号,利用加权随机早期检测算法,判决所述待处理的报文信息是否入队;当判决报文信息不能入队时,将所述待处理的报文信息进行丢弃;当判决报文信息入队时,对所述待处理的报文信息进行入队操作。Here, the QD includes a first pointer of the queue, a tail pointer of the queue, and a queue depth, and the preset congestion avoidance policy may be a weighted random early detection algorithm; in this embodiment, according to the to-be-processed message information. The QD of the queue and the queue number are used to determine whether the to-be-processed message information is queued by using a weighted random early detection algorithm; and when the judgment message information cannot be enqueued, the to-be-processed message information is performed. Discarding; when the message information is entered into the queue, the packet information to be processed is enqueued.
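The text above only names a weighted random early detection algorithm as one possible congestion avoidance policy; as a hedged illustration, the following is a generic WRED-style admission check on the queue depth carried in the QD. The thresholds, the linear drop ramp, and the use of rand() are illustrative assumptions rather than the patent's formula.

```c
#include <stdint.h>
#include <stdlib.h>
#include <stdbool.h>

/* Generic WRED-style admission check on the queue depth carried in the QD:
 * below min_th every packet is admitted, above max_th every packet is dropped,
 * and in between the drop probability rises linearly up to max_drop_percent.   */
bool wred_admit(uint32_t queue_depth, uint32_t min_th, uint32_t max_th,
                uint32_t max_drop_percent)
{
    if (queue_depth <= min_th)
        return true;                          /* no congestion: enqueue            */
    if (queue_depth >= max_th)
        return false;                         /* severe congestion: discard        */

    /* Linear ramp between the two thresholds.                                     */
    uint32_t drop_percent =
        (uint32_t)((uint64_t)max_drop_percent * (queue_depth - min_th)
                   / (max_th - min_th));
    return (uint32_t)(rand() % 100) >= drop_percent;   /* probabilistic decision   */
}
```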
作为一种实施方式,根据所述QD和所述队列号,利用加权随机早期检测算法,确定报文信息入队时,申请指向DRAM的报文缓存指针;As an implementation manner, according to the QD and the queue number, using a weighted random early detection algorithm to determine that a message cache pointer is directed to the DRAM when the message information is enqueued;
将待处理的报文信息中的报文数据写入到报文缓存指针指向的DRAM报文数据缓存区域,将待处理的报文信息中的报文描述符写入到报文缓存指针指向的DRAM报文描述符缓存区域,并将报文缓存指针写入到DRAM报文缓存指针缓存区域;Write the message data in the message information to be processed to the DRAM message data buffer area pointed to by the message buffer pointer, and write the message descriptor in the message information to be processed to the message buffer pointer. DRAM message descriptor buffer area, and write the message buffer pointer to the DRAM message buffer pointer buffer area;
对所述待处理的报文信息进行入队操作之后,将所述QD中的队列的 尾指针更新为报文缓存指针,将所述QD中的队列深度更新为所述QD中的队列深度加上报文描述符中的报文长度,将更新后的QD存储至基于所述QD更新后的存储位置信息和地址信息所确定的地址中。After performing the enqueue operation on the to-be-processed message information, the queue in the QD is The tail pointer is updated to a message buffer pointer, and the queue depth in the QD is updated to the queue depth in the QD plus the message length in the message descriptor, and the updated QD is stored to be updated based on the QD. The storage location information and the address information are determined by the address.
FIG. 4 is a third schematic flowchart of the implementation of Embodiment 1 of the processing method in a queue operation according to the present invention. When the queue operation is a dequeue operation, referring to FIG. 4, step 104 specifically includes the following steps:
Step 1044: According to the queue descriptor, read the to-be-processed packet information and the next queued packet buffer pointer from the dynamic random access memory, and dequeue the to-be-processed packet information;
Step 1045: Release the packet buffer pointer of the dequeued packet information;
Step 1046: Update the queue descriptor according to the next queued packet buffer pointer and the packet information of the performed dequeue operation, and store the updated queue descriptor at a target address, where the target address is the address determined by the updated storage location information and address information of the queue descriptor.
As an implementation, the head pointer of the queue in the QD is used to read the packet data from the DRAM packet data buffer area and dequeue the packet data; the head pointer of the queue in the QD is used to read the packet descriptor from the DRAM packet descriptor buffer area and determine the packet length; and the head pointer of the queue in the QD is used to read the packet buffer pointer from the DRAM packet buffer pointer buffer area and to read the next queued packet buffer pointer from the DRAM packet buffer pointer buffer area;
the packet buffer pointer of the dequeued packet information is released;
after the dequeue operation is performed on the to-be-processed packet information, the head pointer of the queue in the QD is updated to the next queued packet buffer pointer, the queue depth in the QD is updated to the queue depth in the QD minus the packet length in the packet descriptor, and the updated QD is stored at the address determined by the updated storage location information and address information of the QD.
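The dequeue side mirrors this bookkeeping. The fragment below continues the hypothetical queue_desc sketch above and captures only the head-pointer and depth update, not the DRAM reads.

```c
/* Dequeue-side update, continuing the queue_desc sketch above: the head
 * advances to the next queued packet's buffer pointer and the depth shrinks
 * by the length read from the dequeued packet's descriptor. */
static void dequeue_update(queue_desc *qd, uint32_t next_pkt_buf_ptr, uint32_t pkt_len)
{
    qd->head   = next_pkt_buf_ptr;
    qd->depth -= pkt_len;
}
```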
FIG. 5 is a schematic flowchart of the implementation of Embodiment 2 of the processing method in a queue operation according to the present invention. Referring to FIG. 5, the processing method in a queue operation of this embodiment further includes, after step 104 of Embodiment 1:
Step 105: Detect the queue numbers of the queues corresponding to the queue descriptors that have not yet been updated;
Step 106: When the queue number of the queue corresponding to the updated queue descriptor differs from the queue numbers of the queues corresponding to all the queue descriptors that have not yet been updated, move the updated queue descriptor stored in the register into the cache memory;
Step 107: Update, in the mapping table, the storage location information and address information of the updated queue descriptor corresponding to the queue number.
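A compact sketch of this write-back decision is given below. The mapping-table layout, array sizes and helper names are illustrative assumptions made for readability; the application specifies the behaviour, not an API, and a real design would use a proper free list rather than the trivial allocator shown here.

```c
#include <stdint.h>
#include <stddef.h>

#define NUM_QUEUES 65536u

typedef enum { LOC_REG, LOC_CACHE, LOC_DRAM } qd_location;
typedef struct { uint32_t head, tail, depth; } queue_desc;   /* re-declared so the fragment stands alone */
typedef struct { qd_location loc; uint32_t addr; } map_entry;

static map_entry  map_table[NUM_QUEUES];   /* queue number -> where the QD lives */
static queue_desc cache_mem[1024];         /* on-chip Cache holding idle QDs     */
static uint32_t   cache_next_free;         /* trivial stand-in for a free list   */

/* Steps 105-107: after a queue operation, move the updated QD from the
 * register to the Cache only if no other in-flight (not yet updated) QD
 * belongs to the same queue, then refresh the mapping table. */
static void writeback_qd(uint32_t qnum, const queue_desc *updated_qd,
                         const uint32_t *inflight_qnums, size_t n_inflight)
{
    for (size_t i = 0; i < n_inflight; i++)
        if (inflight_qnums[i] == qnum)
            return;                               /* keep it in the register */

    uint32_t cache_addr = cache_next_free++;      /* step 106: place it in the Cache */
    cache_mem[cache_addr] = *updated_qd;
    map_table[qnum].loc  = LOC_CACHE;             /* step 107: refresh the Map table */
    map_table[qnum].addr = cache_addr;
}
```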
Based on the processing method in a queue operation in Embodiment 1, this embodiment describes in detail a specific implementation example of the processing method in an enqueue operation.
Packet information to be processed is received from the network, and it is determined, according to the packet descriptor in the to-be-processed packet information, that the queue number is 2 and the packet length is 128;
referring to FIG. 2, the Map table is queried and the storage location information of the QD corresponding to queue number 2 is found to be "in the Cache" with address information "Cache pointer 1". According to the storage location information and address information of the QD, it is determined that the QD is stored at address 1 in the Cache, which is pointed to by pointer 1; the QD is read from that address, a free address 6 is applied for in the Reg, the QD stored in the Cache is moved into the Reg, and the storage location information and address information of the QD corresponding to queue number 2 in the Map table are updated to "in the Reg, Reg pointer 6";
according to the QD and the egress port of queue number 2, the weighted random early detection algorithm determines that the packet information is to be enqueued, and a packet buffer pointer pointing to the DRAM is applied for;
the packet data in the to-be-processed packet information is written to the DRAM packet data buffer area pointed to by the packet buffer pointer, the packet descriptor in the to-be-processed packet information is written to the DRAM packet descriptor buffer area pointed to by the packet buffer pointer, and the packet buffer pointer is written to the DRAM packet buffer pointer buffer area;
after the enqueue operation is performed on the to-be-processed packet information, the tail pointer of the queue in the QD corresponding to queue number 2 is updated to the packet buffer pointer, the queue depth in that QD is updated to the queue depth in that QD plus the packet length 128, and the updated QD is stored at address 6 of the Reg;
after the QD corresponding to queue number 2 is updated, it is detected that the queue numbers corresponding to the QDs that have not yet been updated are 5 and 10;
queue number 2 differs from both queue number 5 and queue number 10, so a free address 6 is applied for in the Cache, and the QD stored at address 6 of the Reg is moved to address 6 of the Cache;
the storage location information and address information of the QD corresponding to queue number 2 in the Map table are updated to "in the Cache, Cache pointer 6".
Further, based on the processing method in a queue operation in Embodiment 1, this embodiment describes in detail a specific implementation example of the processing method in a dequeue operation.
The port information waiting for scheduling and the queue information waiting for scheduling are obtained, and according to the port information and the queue information, the queue number 65536 is calculated through an RR scheduling algorithm;
referring to FIG. 2, the Map table is queried and the storage location information of the QD corresponding to queue number 65536 is found to be "in the DRAM" with address information "Null". In this case, queue number 65536 is used to read the QD from address 65536 of the DRAM, a free address 7 is applied for in the Reg, the QD stored in the DRAM is moved into the Reg, and the storage location information and address information of the QD corresponding to queue number 65536 in the Map table are updated to "in the Reg, Reg pointer 7";
the head pointer of the queue in the QD is used to read the packet data from the packet data buffer area of the DRAM and dequeue the packet data; the head pointer of the queue in the QD is used to read the packet descriptor from the packet descriptor buffer area of the DRAM and determine that the packet length is 128; and the head pointer of the queue in the QD is used to read the packet buffer pointer and the next queued packet buffer pointer from the packet buffer pointer buffer area of the DRAM;
the packet buffer pointer of the dequeued packet data is released;
after the dequeue operation is performed, the head pointer of the queue in the QD corresponding to queue number 65536 is updated to the next queued packet buffer pointer, the queue depth in that QD is updated to the queue depth in that QD minus the packet length 128, and the updated QD is stored at address 7 of the Reg;
after the QD corresponding to queue number 65536 is updated, it is detected that the queue numbers corresponding to the QDs that have not yet been updated are 4 and 9;
queue number 65536 differs from both queue number 4 and queue number 9, so a free address 7 is applied for in the Cache, and the QD stored at address 7 of the Reg is moved to address 7 of the Cache;
the storage location information and address information of the QD corresponding to queue number 65536 in the Map table are updated to "in the Cache, Cache pointer 7".
Further, since the QD undergoing queue operation processing needs to be moved into the Reg, and the QD that has completed queue operation processing and is stored in the Reg needs to be moved into the Cache, in order to ensure that the Cache has enough space to store the QDs under queue operation processing and that queue operation processing proceeds normally, in Embodiment 3 of the processing method in a queue operation of the present invention, the space usage of the Cache may also be detected in real time.
FIG. 6 is a schematic flowchart of real-time detection of the space usage of the Cache. Referring to FIG. 6, the processing method in a queue operation of the present invention further includes:
Step 201: Detect the space usage of the cache memory in real time to obtain a first detection result;
Step 202: When the first detection result is greater than a preset first threshold, move the queue descriptors of queues that are stored in the cache memory and whose activity is less than a preset second threshold into the dynamic random access memory, so that the space of the cache memory satisfies a preset condition;
the preset first threshold may be set according to actual needs; for example, the value of the first threshold may range from 90% to 98%. In this embodiment, a first threshold of 95% is taken as an example for detailed description.
Moving a QD stored in the Reg into the Cache indicates that one queue operation has just been processed. Therefore, in this embodiment, the activity of a queue may be graded by how long its QD has been stored in the Cache. For example, the queue whose QD has gone the longest time in the Cache without queue operation processing has the lowest activity and may be set to 0; when it is detected that a QD is moved from the Reg into the Cache, that queue is undergoing queue operation processing and is therefore active, so its activity may be set to 10; the activities of the remaining queues may be set to integer values from 1 to 9.
The preset second threshold may be set according to actual needs. In this embodiment, a second threshold of 0.5 is taken as an example for detailed description.
Step 203: Release the pointers of the queue descriptors moved out of the cache memory.
As an implementation, the space usage of the Cache is detected in real time to obtain a first detection result;
when the first detection result is greater than 95% and the activity of a queue whose QD is stored in the Cache is less than 0.5, that QD is moved into the DRAM;
the pointer of the QD moved out of the Cache is released.
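The eviction rule in steps 201 to 203 can be sketched as follows. The activity scores, threshold constants and helper names are placeholders for the quantities described above, not an API defined by this application, and the DRAM write itself is reduced to a comment.

```c
#include <stdint.h>
#include <stddef.h>

#define CACHE_SLOTS      1024u
#define FIRST_THRESHOLD  0.95   /* occupancy above which eviction starts (95% example) */
#define SECOND_THRESHOLD 0.5    /* queues less active than this may be evicted         */

typedef struct {
    int      valid;     /* slot currently holds a QD                      */
    double   activity;  /* 0 = idle longest, 10 = just touched (see text) */
    uint32_t qnum;      /* queue number owning this QD                    */
} cache_slot;

/* Hypothetical hook that copies one QD out to DRAM and frees its pointer. */
static void evict_to_dram(cache_slot *slot)
{
    /* ...write the descriptor to DRAM and update the Map table here... */
    slot->valid = 0;                      /* step 203: release the Cache pointer */
}

/* Steps 201-203: when Cache occupancy exceeds the first threshold, move the
 * QDs of low-activity queues out to DRAM until space is available again. */
static void check_cache_pressure(cache_slot slots[CACHE_SLOTS])
{
    size_t used = 0;
    for (size_t i = 0; i < CACHE_SLOTS; i++)
        used += slots[i].valid ? 1u : 0u;

    if ((double)used / CACHE_SLOTS <= FIRST_THRESHOLD)
        return;                           /* step 202 condition not met */

    for (size_t i = 0; i < CACHE_SLOTS; i++)
        if (slots[i].valid && slots[i].activity < SECOND_THRESHOLD)
            evict_to_dram(&slots[i]);
}
```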
FIG. 7 is a schematic flowchart of the implementation of moving a QD from the Cache to the DRAM. Referring to FIG. 7, the flow includes the following steps:
Step 301: Build a doubly linked list of the queue numbers corresponding to all the QDs stored in the Cache; if a trigger event occurs in which the QD corresponding to queue number qN is released from the Reg into the Cache, perform step 302; if a trigger event occurs in which the QD corresponding to queue number qM is moved from the Cache into the Reg, perform step 303;
Step 302: Add queue number qN to the tail of the linked list, increase the linked list depth by 1, and perform step 304;
Step 303: Remove queue number qM from the linked list, and connect the nodes immediately before and after queue number qM;
Step 304: Detect the linked list depth, that is, the number of queue numbers corresponding to the QDs present in the Cache (the number of queues), and determine whether the linked list depth is greater than the preset first threshold; if so, perform step 305; otherwise, perform step 307;
Step 305: Move the QD corresponding to the head queue number of the Cache active linked list into the DRAM;
Step 306: Release the pointer of the QD moved out of the Cache;
Step 307: Do not move any QD in the Cache.
To explain why the QD corresponding to the head queue number of the Cache active linked list is moved into the DRAM when the linked list depth is greater than the preset first threshold, the mapping relationship between queue numbers and QDs, the Cache active linked list and the Cache free linked list are first introduced.
FIG. 8a is a schematic diagram of the mapping relationship between queue numbers and QDs. Referring to FIG. 8a, the Cache linked list has a depth of 1024 and can store the QDs corresponding to 1024 queue numbers; the mapping between queue numbers qnum and the QDs in the Cache has a depth of 1024, and the queue numbers correspond one-to-one with the QDs in the Cache, as shown in the Cache-qnum table in FIG. 8a.
FIG. 8b is a schematic structural diagram of the Cache active linked list. Referring to FIG. 8b, the next-hop RAM of the Cache active linked list has a depth of 1024, and each entry stores the previous node (the previous cache pointer) and the next node (the next cache pointer) corresponding to a Cache pointer. The Cache active linked list also contains the head pointer and the tail pointer of the active linked list, which indicate the Cache addresses corresponding to the head and tail nodes of the active linked list. In FIG. 8b, the Cache active linked list has four nodes, which represent the connection relationship of Cache addresses 0, 1, 2 and 3.
FIG. 8c is a schematic structural diagram of the Cache free linked list. Referring to FIG. 8c, the Cache free linked list contains the head pointer and the tail pointer of the free linked list, which indicate the free Cache addresses corresponding to the head and tail nodes of the free linked list; the next-hop RAM of the Cache free linked list is used to manage the free addresses of the Cache.
An event in which queue operation processing is performed is then called an active event. The queue with the head queue number of the Cache active linked list is, among the queues of all queue numbers in the list, the one that has gone the longest time without an active event, that is, the queue with the lowest activity; the queue with the tail queue number of the Cache active linked list is the queue in which an active event most recently occurred. Therefore, when the linked list depth is greater than the preset first threshold, the QD corresponding to the head queue number of the Cache active linked list is moved into the DRAM.
Further, the specific implementation flow of moving a QD from the Cache to the DRAM is described in detail with reference to FIG. 8a, FIG. 8b and FIG. 8c.
When an event is detected in which the QD corresponding to queue number n is moved from the Reg into the Cache, an address cp is applied for from the Cache free linked list, the QD is written into the Cache at address cp, and queue number n is written into the Cache-qnum table; in the next-hop RAM of the Cache active linked list, cp is written into the next-node field addressed by the tail pointer of the Cache active linked list, the tail pointer of the Cache active linked list is written into the previous-node field addressed by cp, cp is updated to be the tail pointer of the active linked list, and the linked list depth is increased by 1;
when an event is detected in which the QD of queue number m is moved from address cp' in the Cache into the Reg, the QD corresponding to queue number m is read from address cp' in the Cache, a free address is applied for from the Reg free linked list, and the QD stored in the Cache is moved into the Reg; the next-hop RAM of the Cache active linked list is read at address cp' to obtain the previous hop x and the next hop y of queue number m, y is written into the next-node field addressed by x and x is written into the previous-node field addressed by y, that is, queue number m is removed from the active linked list and its neighbours x and y are connected; at the same time, the Cache active linked list depth is decreased by 1;
the supported number of active queues, that is, the first threshold, is preset as Cache_th. If the Cache linked list depth is greater than Cache_th, the head pointer of the Cache active linked list is used to read the QD stored in the Cache and the queue number in the Cache-qnum table, and the QD corresponding to that queue number is moved into the DRAM.
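The active-list maintenance described above is essentially an LRU-ordered doubly linked list over Cache addresses. The sketch below shows the append-at-tail and unlink operations under that reading; the array sizes, the NIL sentinel and the field names are assumptions, and error handling is omitted.

```c
#include <stdint.h>

#define CACHE_DEPTH 1024u
#define NIL         0xFFFFFFFFu          /* "no node" marker */

/* Next-hop RAM of the active list: one prev/next pair per Cache address. */
static uint32_t prev_node[CACHE_DEPTH], next_node[CACHE_DEPTH];
static uint32_t active_head = NIL, active_tail = NIL;
static uint32_t active_depth = 0;

/* A QD was released from the Reg into Cache address cp (steps 301-302):
 * append cp at the tail, i.e. mark it as the most recently active entry. */
static void active_list_append(uint32_t cp)
{
    prev_node[cp] = active_tail;
    next_node[cp] = NIL;
    if (active_tail != NIL)
        next_node[active_tail] = cp;
    else
        active_head = cp;                /* list was empty */
    active_tail = cp;
    active_depth++;
}

/* A QD at Cache address cp is moved back into the Reg (step 303):
 * unlink cp and reconnect its neighbours x and y. */
static void active_list_remove(uint32_t cp)
{
    uint32_t x = prev_node[cp], y = next_node[cp];
    if (x != NIL) next_node[x] = y; else active_head = y;
    if (y != NIL) prev_node[y] = x; else active_tail = x;
    active_depth--;
}
```

Under this reading, the head of the list always names the Cache entry that has gone longest without an active event, which is why step 305 evicts from the head once active_depth exceeds Cache_th.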
Further, in order to ensure that the Reg has enough space to store the QDs under queue operation processing and that queue operation processing proceeds normally, in Embodiment 4 of the processing method in a queue operation of the present invention, the space usage of the Reg may also be detected in real time.
FIG. 9 is a schematic flowchart of real-time detection of the space usage of the Reg. Referring to FIG. 9, the processing method in a queue operation of the present invention further includes:
Step 401: Detect the space usage of the register in real time to obtain a second detection result;
Step 402: When the second detection result is greater than a preset third threshold, stop outputting the packet descriptor in the to-be-processed packet information and stop performing the enqueue operation on the to-be-processed packet information, until the second detection result is less than a preset fourth threshold, at which point resume outputting the packet descriptor in the to-be-processed packet information and resume performing the enqueue operation on the to-be-processed packet information, where the preset fourth threshold is less than the preset third threshold.
Here, the preset third threshold may be set according to actual needs; for example, the value of the third threshold may range from 90% to 98%. In this embodiment, a third threshold of 95% is taken as an example for detailed description.
The preset fourth threshold may be set according to actual needs; for example, the value of the fourth threshold may range from 80% to 88%. In this embodiment, a fourth threshold of 85% is taken as an example for detailed description.
Specifically, the space usage of the Reg is detected in real time to obtain a second detection result;
when the second detection result is greater than 95%, the output of the packet descriptors in the packet information is stopped and the packet information enqueue flow is stopped, until the second detection result is less than 85%, at which point the output of the packet descriptors in the packet information and the packet information enqueue flow are resumed.
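This is a standard two-threshold (hysteresis) backpressure scheme; a minimal sketch is shown below, assuming the 95% and 85% example values. The state variable and function names are illustrative only.

```c
#include <stdbool.h>

#define THIRD_THRESHOLD  0.95   /* pause enqueuing above this Reg occupancy    */
#define FOURTH_THRESHOLD 0.85   /* resume enqueuing once occupancy drops here  */

static bool enqueue_paused = false;   /* backpressure state toward the ingress buffer */

/* Steps 401-402: hysteresis on the register occupancy. Pausing at one level
 * and resuming at a lower one avoids flapping around a single limit. */
static bool update_backpressure(unsigned reg_used, unsigned reg_total)
{
    double occupancy = (double)reg_used / (double)reg_total;

    if (!enqueue_paused && occupancy > THIRD_THRESHOLD)
        enqueue_paused = true;    /* stop emitting descriptors / enqueuing */
    else if (enqueue_paused && occupancy < FOURTH_THRESHOLD)
        enqueue_paused = false;   /* resume the enqueue flow               */

    return enqueue_paused;
}
```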
In Embodiment 5 of the processing method in a queue operation of the present invention, in order to illustrate the practical application of the method, the processing method in a queue operation of the present invention is described in detail with reference to application scenario diagrams.
The processing method for a queue operation of the present invention is mainly applied to a network device queue management system. FIG. 10 is one schematic diagram of the application scenario of Embodiment 5 of the processing method in a queue operation of the present invention. Referring to FIG. 10, the application scenario includes an ingress packet buffering module, an egress packet buffering module, a congestion avoidance module, an enqueue processing module, a queue scheduling module, a dequeue processing module, a packet buffer management module, a Map table, a QD management module and a QD buffering module; the QD management module includes a migration management module, and the QD buffering module includes the DRAM, the Reg and the Cache.
First, the migration flow when a QD is read during queue operation processing is described with reference to FIG. 10. FIG. 11 is a schematic diagram of the migration flow when a QD is read. Referring to FIG. 11, the flow specifically includes the following steps:
Step 501: The Map table receives query request information from the congestion avoidance module, the enqueue module or the dequeue module, where the query request information includes a queue number;
Step 502: The storage location information and address information of the QD corresponding to the queue number are queried in the Map table and sent to the QD management module;
Step 503: The migration management module in the QD management module processes the QD according to the storage location information and address information of the QD queried from the Map table; when the storage location information of the QD indicates that the QD is stored in the Reg, perform step 504; when it indicates that the QD is stored in the Cache, perform step 505; when it indicates that the QD is stored in the DRAM, perform step 508;
Step 504: Obtain the QD from the address determined by the storage location information and address information of the QD, and perform step 5010;
Step 505: Apply for a free pointer in the Reg;
Step 506: Obtain the QD from the address determined by the storage location information and address information of the QD, and move the QD stored in the Cache to the address pointed to by the free Reg pointer;
Step 507: Release the pointer of the QD that has been moved out of the Cache, and perform step 5010;
Step 508: Apply for a free pointer in the Reg;
Step 509: Obtain the QD from the address determined by the storage location information and address information of the QD, and move the QD stored in the DRAM to the address pointed to by the free Reg pointer;
Step 5010: Return the QD to the congestion avoidance module, the enqueue processing module or the dequeue processing module.
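The read path in steps 501 to 5010 can be condensed into one lookup-and-promote routine. The sketch below is a software analogy of the hardware flow under the assumptions that the DRAM QD area is indexed directly by queue number (as in the queue 65536 example above) and that releasing a Cache pointer can be modelled as clearing a valid flag; all storage arrays and helper names are hypothetical.

```c
#include <stdint.h>
#include <stdbool.h>

#define NUM_QUEUES  65536u
#define REG_SLOTS   64u
#define CACHE_SLOTS 1024u

typedef struct { uint32_t head, tail, depth; } queue_desc;
typedef enum { LOC_REG, LOC_CACHE, LOC_DRAM } qd_location;
typedef struct { qd_location loc; uint32_t addr; } map_entry;

static map_entry  map_table[NUM_QUEUES];
static queue_desc reg_mem[REG_SLOTS];
static queue_desc cache_mem[CACHE_SLOTS];
static bool       cache_valid[CACHE_SLOTS];
static queue_desc dram_mem[NUM_QUEUES];        /* QD area of the off-chip DRAM */

/* Trivial stand-in for the Reg free-pointer allocator (steps 505/508). */
static uint32_t reg_alloc(void) { static uint32_t n; return n++ % REG_SLOTS; }

/* Steps 501-5010: locate the QD for qnum, promote it into the register if it
 * currently sits in the Cache or the DRAM, update the Map table, return it. */
static queue_desc *fetch_qd(uint32_t qnum)
{
    map_entry *e = &map_table[qnum];

    if (e->loc == LOC_REG)
        return &reg_mem[e->addr];              /* step 504: already in the register */

    uint32_t reg_addr = reg_alloc();           /* steps 505/508: get a free Reg slot */
    if (e->loc == LOC_CACHE) {
        reg_mem[reg_addr] = cache_mem[e->addr];    /* step 506 */
        cache_valid[e->addr] = false;              /* step 507: free the Cache pointer */
    } else {
        reg_mem[reg_addr] = dram_mem[qnum];        /* step 509: DRAM indexed by queue number */
    }
    e->loc  = LOC_REG;                         /* refresh the mapping table */
    e->addr = reg_addr;
    return &reg_mem[reg_addr];                 /* step 5010 */
}
```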
Next, the enqueue operation processing flow is described with reference to FIG. 10. Packet information is received from the network and stored in the ingress packet buffering module, and the packet descriptor in the packet information is sent to the congestion avoidance module; the congestion avoidance module determines, according to the packet descriptor, the queue number of the queue to which the packet information belongs, and sends the queue number to the Map table to request a query for the QD corresponding to the queue number;
the Map table receives the query request from the congestion avoidance module; triggered by the query request, the Map table queries the storage location information and address information of the QD corresponding to the queue number and sends them to the QD management module;
the QD management module obtains the QD from the address determined by the storage location information and address information of the QD, and the migration management module in the QD management module moves the QD stored in the Cache or the DRAM into the Reg; the storage location information and address information of the QD corresponding to the queue number in the Map table are updated, and the QD together with its updated storage location information and address information is sent to the congestion avoidance module;
after receiving the QD and the updated storage location information and address information of the QD, the congestion avoidance module reads the egress port of the queue number and, according to the QD and the egress port of the queue number, uses the weighted random early detection algorithm to decide whether the incoming packet information can be enqueued; the decision result, the egress port of the queue number, the QD and the packet descriptor in the packet information are sent to the enqueue processing module through the enqueue operation pipeline; when the decision result indicates that the packet information is not to be enqueued, the congestion avoidance module decides, according to the queue number, whether the QD stored in the Reg needs to be released; when the queue number differs from all the queue numbers undergoing queue operation processing, the congestion avoidance module decides to release the QD stored in the Reg, and the migration management module moves the QD corresponding to the queue number stored in the Reg into the Cache;
the enqueue processing module receives the decision result, the egress port of the queue number, the QD and the packet descriptor in the packet information, and processes the packet information according to the decision result; when the decision result indicates that the packet information is not to be enqueued, the enqueue processing module reads the packet data from the ingress packet buffering module and directly discards it; when the decision result indicates that the packet information is to be enqueued, the enqueue processing module initiates a query request to the Map table to obtain the QD corresponding to the queue number and the updated storage location information and address information of the QD;
the enqueue processing module applies to the packet buffer management module for a packet buffer pointer and writes the packet information through the enqueue operation pipeline into the DRAM pointed to by the packet buffer pointer; after the enqueue operation processing is performed on the packet information, the enqueue processing module updates the QD and stores the updated QD at the address determined by the updated storage location information and address information of the QD; after the processing by the enqueue processing module is completed, the enqueued packet information is available for scheduling by the queue scheduling module;
the congestion avoidance module decides, according to the queue number, whether the QD corresponding to the queue number stored in the Reg needs to be released; when the queue number differs from all the queue numbers undergoing queue operation processing, the congestion avoidance module decides to release the QD stored in the Reg, and the migration management module moves the QD corresponding to the queue number stored in the Reg into the Cache.
Finally, the dequeue operation processing flow is described with reference to FIG. 10. The queue scheduling module obtains the port information and queue information waiting for scheduling, calculates a queue number through a scheduling algorithm according to the port information and queue information, and sends it to the dequeue processing module;
the dequeue processing module sends the queue number to the Map table to request a query for the QD corresponding to the queue number;
the Map table receives the query request from the dequeue processing module; triggered by the query request, the Map table queries the storage location information and address information of the QD corresponding to the queue number and sends them to the QD management module;
the QD management module obtains the QD from the address determined by the storage location information and address information of the QD, and the migration management module in the QD management module moves the QD stored in the Cache or the DRAM into the Reg; the storage location information and address information of the QD corresponding to the queue number in the Map table are updated, and the QD together with its updated storage location information and address information is sent to the dequeue processing module;
the dequeue processing module uses the head pointer of the queue in the QD to read the packet information, the packet buffer pointer and the next queued packet buffer pointer from the DRAM; after the dequeue operation processing is performed on the packet information, the dequeue processing module updates the QD and stores the updated QD at the address determined by the updated storage location information and address information of the QD;
the congestion avoidance module decides, according to the queue number, whether the QD corresponding to the queue number stored in the Reg needs to be released; when the queue number differs from all the queue numbers undergoing queue operation processing, the congestion avoidance module decides to release the QD stored in the Reg, and the migration management module moves the QD corresponding to the queue number stored in the Reg into the Cache.
To ensure that the Reg and the Cache have enough space to store the QDs under queue operation processing, the migration management module detects the space usage of the Reg and the Cache in real time; when the migration management module detects that the used space of the Cache is greater than the preset first threshold, it moves the QD of the queue with the lowest activity in the Cache into the DRAM and releases the pointer of the QD moved out of the Cache; when the migration management module detects that the used space of the Reg is greater than the preset third threshold, it applies backpressure to the ingress packet buffering module, stops outputting packet descriptors and stops the packet information enqueue flow, until it detects that the used space of the Reg is less than the preset fourth threshold, at which point it resumes outputting packet descriptors and resumes the packet information enqueue flow.
Further, FIG. 12 is another schematic diagram of the application scenario of Embodiment 5 of the processing method in a queue operation of the present invention. Referring to FIG. 12, a cell is received from the network; according to the port number carried by the cell, the storage location information and address information of the QD corresponding to the port number are queried in the Map table, the QD is read from the address determined by that storage location information and address information, and the tail pointer and queue depth of the queue are determined according to the QD, where the port number can be equated with the queue number; the cell is written to the tail of the queue, the tail pointer of the queue in the corresponding QD is updated to the storage address of the currently enqueued cell, the queue depth in the QD is increased by the length of the currently enqueued cell, the cell data is written into the cell data buffer area in the DRAM, and the cell pointer is written into the cell pointer buffer area in the DRAM; at this point the enqueue operation processing is complete. On dequeue, the output scheduling queue number is selected according to the inter-queue RR scheduling rule; according to the queue number, the storage location information and address information of the QD corresponding to the queue number are queried in the Map table, the QD is read from the address determined by that storage location information and address information, the head pointer and queue depth of the queue are determined according to the QD, the cell data is read from the DRAM using the head pointer of the queue and output, the next queued cell pointer is read from the DRAM using the head pointer of the queue, the head pointer of the queue in the QD is updated to that pointer, and the queue depth in the QD is updated to the queue depth in the QD minus the length of the currently dequeued cell; at this point the dequeue operation processing is complete.
It can be seen that the processing method in a queue operation of the present invention can also be applied to an apparatus in which cells are queued and output according to virtual output queues (VOQ). Therefore, the application of the processing method in a queue operation of the present invention is not limited to network device queue management systems; it can be applied to any system or apparatus that integrates a queue queuing management function.
The present invention further provides a processing apparatus in a queue operation, which is configured to implement the specific details of the processing method in a queue operation of the present invention and achieve the same effect.
FIG. 13 is a schematic structural diagram of Embodiment 1 of the processing apparatus in a queue operation according to the present invention. Referring to FIG. 13, the processing apparatus in a queue operation in this embodiment includes: an obtaining module 61, a query module 62, a first migration module 63 and a first processing module 64; where
the obtaining module 61 is configured to obtain the queue number of the queue to which the to-be-processed packet information belongs;
the query module 62 is configured to query, in the mapping table, the storage location information and address information of the queue descriptor corresponding to the queue number;
the first migration module 63 is configured to obtain the queue descriptor according to the storage location information and address information of the queue descriptor, move the queue descriptor into the register, and update the storage location information and address information of the queue descriptor corresponding to the queue number in the mapping table;
the first processing module 64 is configured to perform a queue operation on the to-be-processed packet information according to the queue descriptor, and after the queue operation is performed, update the queue descriptor according to the queue descriptor and the updated storage location information and address information of the queue descriptor.
Optionally, the first processing module 64 is specifically configured to: based on the queue descriptor and the queue number, when it is determined according to a preset congestion avoidance policy that the to-be-processed packet information is to be enqueued, perform an enqueue operation on the to-be-processed packet information according to the queue descriptor; or perform a dequeue operation on the to-be-processed packet information according to the queue descriptor.
FIG. 14 is one schematic diagram of the refined structure of the first processing module in the processing apparatus shown in FIG. 13. Referring to FIG. 14, when the queue operation is an enqueue operation, the first processing module 64 includes: an application unit 641, a storage unit 642 and a first update unit 643; where
the application unit 641 is configured to apply for a packet buffer pointer pointing to the dynamic random access memory;
the storage unit 642 is configured to store the to-be-processed packet information and the packet buffer pointer into the dynamic random access memory according to the packet buffer pointer;
the first update unit 643 is configured to update the queue descriptor according to the packet buffer pointer and the packet information of the performed enqueue operation, and store the updated queue descriptor at a target address, where the target address is the address determined by the updated storage location information and address information of the queue descriptor.
FIG. 15 is another schematic diagram of the refined structure of the first processing module in the processing apparatus shown in FIG. 13. Referring to FIG. 15, when the queue operation is a dequeue operation, the first processing module 64 includes: a reading unit 644, a release unit 645 and a second update unit 646; where
the reading unit 644 is configured to read, according to the queue descriptor, the to-be-processed packet information and the next queued packet buffer pointer from the dynamic random access memory, and dequeue the to-be-processed packet information;
the release unit 645 is configured to release the packet buffer pointer of the dequeued packet information;
the second update unit 646 is configured to update the queue descriptor according to the next queued packet buffer pointer and the packet information of the performed dequeue operation, and store the updated queue descriptor at a target address, where the target address is the address determined by the updated storage location information and address information of the queue descriptor.
FIG. 16 is a schematic structural diagram of Embodiment 2 of the processing apparatus in a queue operation according to the present invention. In addition to the obtaining module 61, the query module 62, the first migration module 63 and the first processing module 64, the processing apparatus in a queue operation of this embodiment further includes: a first detection module 65, a second migration module 66 and a mapping table updating module 67; where
the first detection module 65 is configured to detect the queue numbers of the queues corresponding to the queue descriptors that have not yet been updated;
the second migration module 66 is configured to, when the queue number of the queue corresponding to the updated queue descriptor differs from the queue numbers of the queues corresponding to all the queue descriptors that have not yet been updated, move the updated queue descriptor stored in the register into the cache memory;
the mapping table updating module 67 is configured to update, in the mapping table, the storage location information and address information of the updated queue descriptor corresponding to the queue number.
Since the QD undergoing queue operation processing needs to be moved into the Reg, and the QD that has completed queue operation processing and is stored in the Reg needs to be moved into the Cache, in order to ensure that the Cache has enough space to store the QDs under queue operation processing and that queue operation processing proceeds normally, in Embodiment 3 of the processing apparatus in a queue operation of the present invention, the space usage of the Cache may also be detected in real time.
Specifically, FIG. 17 is a schematic diagram of the function modules for real-time detection of the space usage of the Cache. Referring to FIG. 17, these function modules include: a second detection module 71, a third migration module 72 and a release module 73; where
the second detection module 71 is configured to detect the space usage of the cache memory in real time to obtain a first detection result;
the third migration module 72 is configured to, when the first detection result is greater than the preset first threshold, move the queue descriptors of queues that are stored in the cache memory and whose activity is less than the preset second threshold into the dynamic random access memory;
the release module 73 is configured to release the pointers of the queue descriptors moved out of the cache memory.
In order to ensure that the Reg has enough space to store the QDs under queue operation processing and that queue operation processing proceeds normally, in Embodiment 4 of the processing apparatus in a queue operation of the present invention, the space usage of the Reg may also be detected in real time.
Specifically, FIG. 18 is a schematic diagram of the function modules for real-time detection of the space usage of the Reg. Referring to FIG. 18, these function modules include: a third detection module 81 and a second processing module 82; where
the third detection module 81 is configured to detect the space usage of the register in real time to obtain a second detection result;
the second processing module 82 is configured to, when the second detection result is greater than the preset third threshold, stop outputting the packet descriptor in the to-be-processed packet information and stop performing the enqueue operation on the to-be-processed packet information, until the second detection result is less than the preset fourth threshold, at which point resume outputting the packet descriptor in the to-be-processed packet information and resume performing the enqueue operation on the to-be-processed packet information, where the preset fourth threshold is less than the preset third threshold.
In practical applications, the obtaining module 61, the query module 62, the first migration module 63, the first processing module 64, the first detection module 65, the second migration module 66, the mapping table updating module 67, the second detection module 71, the third migration module 72, the release module 73, the third detection module 81, the second processing module 82, as well as the application unit 641, the storage unit 642, the first update unit 643, the reading unit 644, the release unit 645 and the second update unit 646 may all be implemented by a central processing unit (CPU), a micro processor unit (MPU), a digital signal processor (DSP) or a field programmable gate array (FPGA) located in the mobile terminal.
An embodiment of the present invention further describes a computer storage medium, where the computer storage medium stores computer-executable instructions, and the computer-executable instructions are used to execute the processing method in a queue operation described in the foregoing embodiments. That is, after the computer-executable instructions are executed by a processor, the processing method in a queue operation provided by any one of the foregoing technical solutions can be implemented.
As an implementation, an embodiment of the present invention describes a computer storage medium, where the computer storage medium stores one or more programs, and the one or more programs can be executed by one or more processors to implement the following steps:
obtaining the queue number of the queue to which the to-be-processed packet information belongs;
querying, in a mapping table, the storage location information and address information of the queue descriptor corresponding to the queue number;
obtaining the queue descriptor according to the storage location information and address information of the queue descriptor, moving the queue descriptor into a register, and updating the storage location information and address information of the queue descriptor corresponding to the queue number in the mapping table;
performing a queue operation on the to-be-processed packet information according to the queue descriptor, and after the queue operation is performed, updating the queue descriptor according to the queue descriptor and the updated storage location information and address information of the queue descriptor.
As an implementation, after the step of updating the queue descriptor is performed, the one or more programs can also be executed by the one or more processors to implement the following steps:
detecting the queue numbers of the queues corresponding to the queue descriptors that have not yet been updated;
when the queue number of the queue corresponding to the updated queue descriptor differs from the queue numbers of the queues corresponding to all the queue descriptors that have not yet been updated, moving the updated queue descriptor stored in the register into a cache memory;
updating, in the mapping table, the storage location information and address information of the updated queue descriptor corresponding to the queue number.
Those skilled in the art should understand that the function of each program in the computer storage medium of this embodiment can be understood with reference to the related description of the processing method in a queue operation described in the foregoing embodiments.
在本申请所提供的几个实施例中,应该理解到,所揭露的设备和方法,可以通过其它的方式实现。以上所描述的设备实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,如:多个单元或组件可以结合,或可以集成到另一个系统, 或一些特征可以忽略,或不执行。另外,所显示或讨论的各组成部分相互之间的耦合、或直接耦合、或通信连接可以是通过一些接口,设备或单元的间接耦合或通信连接,可以是电性的、机械的或其它形式的。In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The device embodiments described above are merely illustrative. For example, the division of the unit is only a logical function division. In actual implementation, there may be another division manner, such as: multiple units or components may be combined, or Can be integrated into another system, Or some features can be ignored or not executed. In addition, the coupling, or direct coupling, or communication connection of the components shown or discussed may be indirect coupling or communication connection through some interfaces, devices or units, and may be electrical, mechanical or other forms. of.
上述作为分离部件说明的单元可以是、或也可以不是物理上分开的,作为单元显示的部件可以是、或也可以不是物理单元,即可以位于一个地方,也可以分布到多个网络单元上;可以根据实际的需要选择其中的部分或全部单元来实现本实施例方案的目的。The units described above as separate components may or may not be physically separated, and the components displayed as the unit may or may not be physical units, that is, may be located in one place or distributed to multiple network units; Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
另外,在本发明各实施例中的各功能单元可以全部集成在一个处理模块中,也可以是各单元分别单独作为一个单元,也可以两个或两个以上单元集成在一个单元中;上述集成的单元既可以采用硬件的形式实现,也可以采用硬件加软件功能单元的形式实现。In addition, each functional unit in each embodiment of the present invention may be integrated into one processing module, or each unit may be separately used as one unit, or two or more units may be integrated into one unit; the above integration The unit can be implemented in the form of hardware or in the form of hardware plus software functional units.
本领域普通技术人员可以理解:实现上述方法实施例的全部或部分步骤可以通过程序指令相关的硬件来完成,前述的程序可以存储于一计算机可读取存储介质中,该程序在执行时,执行包括上述方法实施例的步骤;而前述的存储介质包括:移动存储设备、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。A person skilled in the art can understand that all or part of the steps of implementing the above method embodiments may be completed by using hardware related to the program instructions. The foregoing program may be stored in a computer readable storage medium, and the program is executed when executed. The foregoing storage device includes the following steps: the foregoing storage medium includes: a mobile storage device, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk. A medium that can store program code.
以上所述,仅为本发明的较佳实施例而已,并非用于限定本发明的保护范围。凡在本发明的精神和范围之内所作的任何修改、等同替换和改进等,均包含在本发明的保护范围之内。The above is only the preferred embodiment of the present invention and is not intended to limit the scope of the present invention. Any modifications, equivalent substitutions and improvements made within the spirit and scope of the present invention are included in the scope of the present invention.
Industrial Applicability
In the technical solution of the embodiments of the present invention, the storage location and storage address of the QD corresponding to the queue number of the queue to which the message information to be processed belongs are looked up in the Map table; the QD is obtained according to its storage location and storage address and moved into the Reg, and the storage location of the QD corresponding to the queue number in the Map is updated to the Reg and its storage address to the corresponding address in the Reg; a queue operation is performed on the message information to be processed according to the QD, and after the queue operation the QD is updated according to the QD and its updated storage location and storage address. This ensures the timeliness of dynamic QD access during queue operations, improves QD access efficiency in queue operations, achieves fast QD access, and guarantees the system performance of a system with an integrated queue management function.
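The lookup-move-update flow summarized above can be illustrated with a short sketch. The following C fragment is only a minimal model of that sequence; the structure names (map_table, qd_regs, fetch_qd), the queue descriptor fields, and the register-allocation policy are illustrative assumptions and are not taken from the patent.

```c
#include <stdint.h>

/* Illustrative storage locations; the patent does not prescribe these encodings. */
enum qd_location { LOC_REG = 0, LOC_CACHE = 1, LOC_DRAM = 2 };

/* Queue descriptor: head/tail buffer pointers and queue length (assumed fields). */
typedef struct {
    uint32_t head_ptr;
    uint32_t tail_ptr;
    uint32_t length;
} queue_descriptor;

/* One mapping-table entry per queue number: where the QD currently lives. */
typedef struct {
    uint8_t  location;   /* LOC_REG, LOC_CACHE or LOC_DRAM */
    uint32_t address;    /* address within that storage    */
} map_entry;

#define NUM_QUEUES 1024
#define NUM_REGS   16

static map_entry        map_table[NUM_QUEUES];  /* Map table indexed by queue number      */
static queue_descriptor qd_regs[NUM_REGS];      /* on-chip register file for active QDs   */

/* Stub: in hardware this would read the QD from the cache or from DRAM. */
static queue_descriptor fetch_qd(uint8_t location, uint32_t address)
{
    (void)location; (void)address;
    queue_descriptor empty = {0, 0, 0};
    return empty;
}

/* Enqueue path sketched from the summarized flow: look up the QD via the map
 * table, move it into a register, update the map entry, then perform the
 * queue operation and update the QD in place. */
void enqueue(uint32_t queue_no, uint32_t pkt_buf_ptr)
{
    map_entry *e = &map_table[queue_no];

    /* 1. Query the storage location and address of the QD for this queue number. */
    queue_descriptor qd = fetch_qd(e->location, e->address);

    /* 2. Move the QD into a register and point the map entry at it. */
    uint32_t reg_idx = queue_no % NUM_REGS;      /* placeholder allocation policy */
    qd_regs[reg_idx] = qd;
    e->location = LOC_REG;
    e->address  = reg_idx;

    /* 3. Perform the queue operation and update the QD at its new location. */
    qd_regs[reg_idx].tail_ptr = pkt_buf_ptr;
    qd_regs[reg_idx].length  += 1;
}
```

In hardware the same bookkeeping is carried out by the queue management logic across registers, on-chip cache, and off-chip DRAM; the sketch only shows the order of the Map-table query, the move into the register, the map update, and the in-place QD update.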

Claims (15)

1. A processing method in a queue operation, the method comprising:
    acquiring a queue number of a queue to which message information to be processed belongs;
    querying, in a mapping table, storage location information and address information of a queue descriptor corresponding to the queue number;
    acquiring the queue descriptor according to the storage location information and the address information of the queue descriptor, moving the queue descriptor into a register, and updating the storage location information and the address information of the queue descriptor corresponding to the queue number in the mapping table; and
    performing a queue operation on the message information to be processed according to the queue descriptor, and after the queue operation, updating the queue descriptor according to the queue descriptor and the updated storage location information and address information of the queue descriptor.
2. The method according to claim 1, wherein performing a queue operation on the message information to be processed according to the queue descriptor comprises:
    when it is determined, based on the queue descriptor and the queue number and according to a preset congestion avoidance policy, that the message information to be processed is to be enqueued, performing an enqueue operation on the message information to be processed according to the queue descriptor; or
    performing a dequeue operation on the message information to be processed according to the queue descriptor.
3. The method according to claim 2, wherein, when the queue operation is an enqueue operation, performing a queue operation on the message information to be processed according to the queue descriptor, and after the queue operation, updating the queue descriptor according to the queue descriptor and the updated storage location information and address information of the queue descriptor, comprises:
    requesting a message buffer pointer pointing to a dynamic random access memory;
    storing, according to the message buffer pointer, the message information to be processed and the message buffer pointer into the dynamic random access memory; and
    updating the queue descriptor according to the message buffer pointer and the message information on which the enqueue operation has been performed, and storing the updated queue descriptor at a target address, the target address being an address determined by the updated storage location information and address information of the queue descriptor.
4. The method according to claim 2, wherein, when the queue operation is a dequeue operation, performing a queue operation on the message information to be processed according to the queue descriptor, and after the queue operation, updating the queue descriptor according to the queue descriptor and the updated storage location information and address information of the queue descriptor, comprises:
    reading, from the dynamic random access memory according to the queue descriptor, the message information to be processed and a buffer pointer of a next queued message, and dequeuing the message information to be processed;
    releasing the message buffer pointer of the dequeued message information; and
    updating the queue descriptor according to the buffer pointer of the next queued message and the message information on which the dequeue operation has been performed, and storing the updated queue descriptor at a target address, the target address being an address determined by the updated storage location information and address information of the queue descriptor.
5. The method according to claim 1, wherein, after updating the queue descriptor, the method further comprises:
    detecting queue numbers of queues corresponding to queue descriptors before the update;
    when the queue number of the queue corresponding to the updated queue descriptor is different from the queue numbers of the queues corresponding to all queue descriptors before the update, moving the updated queue descriptor stored in the register into a cache memory; and
    updating the storage location information and the address information of the updated queue descriptor corresponding to the queue number in the mapping table.
6. The method according to claim 5, wherein the method further comprises:
    detecting space usage of the cache memory in real time to obtain a first detection result;
    when the first detection result is greater than a preset first threshold, moving queue descriptors of queues that are stored in the cache memory and whose activity is less than a preset second threshold into a dynamic random access memory; and
    releasing pointers of the queue descriptors moved out of the cache memory.
7. The method according to claim 2, wherein the method further comprises:
    detecting space usage of the register in real time to obtain a second detection result; and
    when the second detection result is greater than a preset third threshold, stopping outputting message descriptors in the message information to be processed and stopping the enqueue operation on the message information to be processed, until the second detection result is less than a preset fourth threshold, at which point outputting the message descriptors in the message information to be processed and the enqueue operation on the message information to be processed are resumed, the preset fourth threshold being less than the preset third threshold.
8. A processing apparatus in a queue operation, the apparatus comprising: an acquisition module, a query module, a first moving module, and a first processing module; wherein
    the acquisition module is configured to acquire a queue number of a queue to which message information to be processed belongs;
    the query module is configured to query, in a mapping table, storage location information and address information of a queue descriptor corresponding to the queue number;
    the first moving module is configured to acquire the queue descriptor according to the storage location information and the address information of the queue descriptor, move the queue descriptor into a register, and update the storage location information and the address information of the queue descriptor corresponding to the queue number in the mapping table; and
    the first processing module is configured to perform a queue operation on the message information to be processed according to the queue descriptor, and after the queue operation, update the queue descriptor according to the queue descriptor and the updated storage location information and address information of the queue descriptor.
9. The apparatus according to claim 8, wherein the first processing module is configured to: when it is determined, based on the queue descriptor and the queue number and according to a preset congestion avoidance policy, that the message information to be processed is to be enqueued, perform an enqueue operation on the message information to be processed according to the queue descriptor; or perform a dequeue operation on the message information to be processed according to the queue descriptor.
10. The apparatus according to claim 9, wherein, when the queue operation is an enqueue operation, the first processing module comprises: a request unit, a storage unit, and a first update unit; wherein
    the request unit is configured to request a message buffer pointer pointing to a dynamic random access memory;
    the storage unit is configured to store, according to the message buffer pointer, the message information to be processed and the message buffer pointer into the dynamic random access memory; and
    the first update unit is configured to update the queue descriptor according to the message buffer pointer and the message information on which the enqueue operation has been performed, and store the updated queue descriptor at a target address, the target address being an address determined by the updated storage location information and address information of the queue descriptor.
11. The apparatus according to claim 9, wherein, when the queue operation is a dequeue operation, the first processing module comprises: a reading unit, a releasing unit, and a second update unit; wherein
    the reading unit is configured to read, from the dynamic random access memory according to the queue descriptor, the message information to be processed and a buffer pointer of a next queued message, and dequeue the message information to be processed;
    the releasing unit is configured to release the message buffer pointer of the dequeued message information; and
    the second update unit is configured to update the queue descriptor according to the buffer pointer of the next queued message and the message information on which the dequeue operation has been performed, and store the updated queue descriptor at a target address, the target address being an address determined by the updated storage location information and address information of the queue descriptor.
12. The apparatus according to claim 8, wherein the apparatus further comprises: a first detection module, a second moving module, and a mapping table update module; wherein
    the first detection module is configured to detect queue numbers of queues corresponding to queue descriptors before the update;
    the second moving module is configured to, when the queue number of the queue corresponding to the updated queue descriptor is different from the queue numbers of the queues corresponding to all queue descriptors before the update, move the updated queue descriptor stored in the register into a cache memory; and
    the mapping table update module is configured to update the storage location information and the address information of the updated queue descriptor corresponding to the queue number in the mapping table.
13. The apparatus according to claim 12, wherein the apparatus further comprises: a second detection module, a third moving module, and a releasing module; wherein
    the second detection module is configured to detect space usage of the cache memory in real time to obtain a first detection result;
    the third moving module is configured to, when the first detection result is greater than a preset first threshold, move queue descriptors of queues that are stored in the cache memory and whose activity is less than a preset second threshold into a dynamic random access memory; and
    the releasing module is configured to release pointers of the queue descriptors moved out of the cache memory.
14. The apparatus according to claim 9, wherein the apparatus further comprises: a third detection module and a second processing module; wherein
    the third detection module is configured to detect space usage of the register in real time to obtain a second detection result; and
    the second processing module is configured to, when the second detection result is greater than a preset third threshold, stop outputting message descriptors in the message information to be processed and stop the enqueue operation on the message information to be processed, until the second detection result is less than a preset fourth threshold, at which point outputting the message descriptors in the message information to be processed and the enqueue operation on the message information to be processed are resumed, the preset fourth threshold being less than the preset third threshold.
15. A computer storage medium, the computer storage medium storing computer-executable instructions, wherein the computer-executable instructions are used to perform the processing method in a queue operation according to any one of claims 1 to 7.
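To make the threshold behavior recited in claims 6, 7, 13, and 14 concrete, the following self-contained C sketch models the two checks: evicting queue descriptors of low-activity queues from the cache memory to DRAM when cache usage exceeds the first threshold, and pausing descriptor output and enqueueing while register usage exceeds the third threshold until it falls below the fourth. All threshold values, the activity metric, and the bookkeeping variables are illustrative assumptions and are not specified by the claims.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_QUEUES            1024
#define CACHE_HIGH_THRESHOLD   900  /* first threshold: occupied cache entries           */
#define ACTIVITY_THRESHOLD       4  /* second threshold: minimum activity to stay cached */
#define REG_HIGH_THRESHOLD      14  /* third threshold: occupied registers (pause)       */
#define REG_LOW_THRESHOLD        8  /* fourth threshold: occupied registers (resume)     */

/* Simplified bookkeeping standing in for the real cache/register controllers. */
static bool     qd_in_cache[NUM_QUEUES];   /* true if the queue's QD sits in the cache    */
static unsigned activity[NUM_QUEUES];      /* per-queue activity counter (assumed metric) */
static unsigned cache_in_use;              /* number of QDs currently held in the cache   */
static unsigned regs_in_use;               /* number of QDs currently held in registers   */
static bool     enqueue_paused;

/* Claims 6 and 13: when cache usage exceeds the first threshold, move QDs of
 * low-activity queues out to DRAM and release their cache pointers. */
void check_cache_pressure(void)
{
    if (cache_in_use <= CACHE_HIGH_THRESHOLD)
        return;
    for (uint32_t q = 0; q < NUM_QUEUES; q++) {
        if (qd_in_cache[q] && activity[q] < ACTIVITY_THRESHOLD) {
            qd_in_cache[q] = false;   /* QD now lives in DRAM (write-back omitted) */
            cache_in_use--;           /* cache pointer released                    */
        }
    }
}

/* Claims 7 and 14: pause descriptor output and enqueueing while register usage
 * is above the third threshold; resume once it falls below the fourth threshold. */
bool enqueue_allowed(void)
{
    if (regs_in_use > REG_HIGH_THRESHOLD)
        enqueue_paused = true;
    else if (regs_in_use < REG_LOW_THRESHOLD)
        enqueue_paused = false;
    return !enqueue_paused;
}
```

The hysteresis between the third and fourth thresholds prevents the enqueue path from toggling on every single register allocation or release; the lower resume threshold is required by the claims themselves.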
PCT/CN2017/088613 2016-12-13 2017-06-16 Processing method, device, and computer storage medium for queue operation WO2018107681A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611158994.5A CN108234348B (en) 2016-12-13 2016-12-13 Processing method and device in queue operation
CN201611158994.5 2016-12-13

Publications (1)

Publication Number Publication Date
WO2018107681A1 2018-06-21

Family

ID=62557883

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/088613 WO2018107681A1 (en) 2016-12-13 2017-06-16 Processing method, device, and computer storage medium for queue operation

Country Status (2)

Country Link
CN (1) CN108234348B (en)
WO (1) WO2018107681A1 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109656515A (en) * 2018-11-16 2019-04-19 深圳证券交易所 Operating method, device and the storage medium of queue message
CN112804156A (en) * 2019-11-13 2021-05-14 深圳市中兴微电子技术有限公司 Congestion avoidance method and device and computer readable storage medium
WO2021128104A1 (en) * 2019-12-25 2021-07-01 华为技术有限公司 Message buffering method, integrated circuit system, and storage medium
CN111526097B (en) * 2020-07-03 2020-10-30 新华三半导体技术有限公司 Message scheduling method, device and network chip
CN113343735B (en) * 2021-08-05 2021-11-05 深圳市成为信息技术有限公司 Tag processing method of reader-writer, reader-writer and storage medium
CN114844847A (en) * 2021-12-14 2022-08-02 合肥哈工轩辕智能科技有限公司 High-reliability real-time message distribution method and device
CN115277607B (en) * 2022-07-15 2023-12-26 天津市滨海新区信息技术创新中心 Two-stage mimicry judgment method under complex flow condition of heterogeneous system
CN117193669B (en) * 2023-11-06 2024-02-06 格创通信(浙江)有限公司 Discrete storage method, device and equipment for message descriptors and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102750245B (en) * 2012-05-29 2015-11-18 中国人民解放军国防科学技术大学 Message method of reseptance, message receiver module, Apparatus and system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060069854A1 (en) * 2004-09-30 2006-03-30 Sanjeev Jain Method and apparatus providing efficient queue descriptor memory access
CN103546392A (en) * 2012-07-12 2014-01-29 中兴通讯股份有限公司 Single queue cycle dispatching method and device
US20140181409A1 (en) * 2012-12-20 2014-06-26 Oracle International Corporation Method and system for queue descriptor cache management for a host channel adapter
CN103914341A (en) * 2013-01-06 2014-07-09 中兴通讯股份有限公司 Data queue de-queuing control method and device

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113454957A (en) * 2019-02-22 2021-09-28 华为技术有限公司 Memory management method and device
CN113454957B (en) * 2019-02-22 2023-04-25 华为技术有限公司 Memory management method and device
US11695710B2 (en) 2019-02-22 2023-07-04 Huawei Technologies Co., Ltd. Buffer management method and apparatus
CN112350996A (en) * 2020-10-15 2021-02-09 中国船舶重工集团公司第七一六研究所 Communication message analysis system and method adaptable to protocol upgrading
CN114401072A (en) * 2021-12-12 2022-04-26 西安电子科技大学 Dynamic cache control method and system for frame splitting and reordering queue based on HINOC protocol
CN114401072B (en) * 2021-12-12 2024-02-06 西安电子科技大学 Dynamic buffer control method and system for frame disassembly reordering queue based on HINOC protocol
CN115242726A (en) * 2022-07-27 2022-10-25 阿里巴巴(中国)有限公司 Queue scheduling method and device and electronic equipment
CN115242726B (en) * 2022-07-27 2024-03-01 阿里巴巴(中国)有限公司 Queue scheduling method and device and electronic equipment
CN115955441A (en) * 2022-11-22 2023-04-11 中国第一汽车股份有限公司 Management scheduling method and device based on TSN queue

Also Published As

Publication number Publication date
CN108234348B (en) 2020-09-25
CN108234348A (en) 2018-06-29

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17881177

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17881177

Country of ref document: EP

Kind code of ref document: A1