WO2016179968A1 - Queue management method and device, and storage medium - Google Patents


Info

Publication number
WO2016179968A1
Authority
WO
WIPO (PCT)
Prior art keywords
pointer
queue
chip
operation request
ram
Prior art date
Application number
PCT/CN2015/092929
Other languages
French (fr)
Chinese (zh)
Inventor
周谦 (Zhou Qian)
Original Assignee
深圳市中兴微电子技术有限公司 (Shenzhen ZTE Microelectronics Technology Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 深圳市中兴微电子技术有限公司 (Shenzhen ZTE Microelectronics Technology Co., Ltd.)
Publication of WO2016179968A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/50: Queue scheduling
    • H04L 47/62: Queue scheduling characterised by scheduling criteria
    • H04L 47/621: Individual queue per connection or flow, e.g. per VC
    • H04L 49/00: Packet switching elements
    • H04L 49/90: Buffering arrangements
    • H04L 49/9015: Buffering arrangements for supporting a linked list
    • H04L 49/9036: Common buffer combined with individual queues

Definitions

  • The present invention relates to queue management in the field of communication technologies, and in particular to a queue management method, apparatus, and storage medium.
  • In the related art, one implementation of queue management uses fixed allocation: each queue is allocated a fixed space, and the address spaces of different queues cannot be shared.
  • This method is simple to implement, but because the address spaces of different queues cannot be shared, space is easily wasted; moreover, when there are many queues, the space allocated to each queue is quite limited and the anti-burst capability is weak. Another implementation uses a shared space: a linked list is maintained for each queue, and the packet address spaces belonging to the same queue are linked together. This improves cache utilization, but the control is complicated, especially in multi-linked-list mode, and when facing instantaneous large-scale bursts of network traffic the anti-burst capability is still insufficient.
  • Embodiments of the present invention provide a queue management method, device, and storage medium, which can effectively avoid packet loss under traffic bursts while improving access efficiency and reducing power consumption.
  • An embodiment of the present invention provides a queue management method, where the method includes:
  • determining that the current operation request is an enqueue operation request for a message descriptor, and, according to the enqueue operation request and the buffering condition of the on-chip random access memory (RAM), storing the message descriptor to an off-chip double data rate synchronous dynamic random access memory (DDR) or to the on-chip RAM, and updating the pointer list information;
  • determining that the current operation request is a dequeue operation request for a message descriptor, acquiring location information, in the pointer list, of the queue head pointer corresponding to the dequeue operation request, reading the corresponding message descriptor according to the location information, and updating the pointer list information;
  • the location information includes on-chip RAM location information and off-chip DDR location information.
  • Storing the message descriptor to the off-chip DDR or the on-chip RAM according to the enqueue operation request and the buffering condition of the on-chip RAM includes:
  • when it is determined, according to the enqueue operation request, that the current on-chip RAM cache has reached its upper limit, calculating an off-chip DDR address and storing the message descriptor to the off-chip DDR according to the obtained address; otherwise, extracting the queue number in the enqueue operation request and storing the message descriptor in the on-chip RAM according to the queue number.
  • Updating the pointer list information when the current operation request is determined to be an enqueue operation request for a message descriptor includes:
  • extracting the queue number from the enqueue operation request, reading the corresponding queue tail pointer and free head pointer according to the queue number, updating the queue tail pointer to point to the address currently indicated by the free head pointer, and then pointing the free head pointer to the next address in the pointer list.
  • Updating the pointer list information when the current operation request is determined to be a dequeue operation request for a message descriptor includes:
  • extracting the queue number from the dequeue operation request, reading the corresponding queue head pointer and free tail pointer according to the queue number, updating the free tail pointer to point to the address currently indicated by the queue head pointer, and then pointing the queue head pointer to the next address in the pointer list.
  • Acquiring the location information, in the pointer list, of the queue head pointer corresponding to the dequeue operation request, and reading the corresponding message descriptor according to the location information, includes:
  • when the position of the queue head pointer in the pointer list is determined to be off-chip, calculating the off-chip DDR address and reading the corresponding message descriptor in the off-chip DDR according to the obtained address; otherwise, reading the corresponding message descriptor in the on-chip RAM according to the queue number.
  • An embodiment of the invention further provides a queue management device, the device comprising a first processing module and a second processing module; wherein
  • the first processing module is configured to determine that the current operation request is an enqueue operation request for a message descriptor, store the message descriptor to the off-chip double data rate synchronous dynamic random access memory (DDR) or to the on-chip random access memory (RAM) according to the enqueue operation request and the buffering condition of the on-chip RAM, and update the pointer list information;
  • the second processing module is configured to determine that the current operation request is a dequeue operation request for a message descriptor, acquire the location information, in the pointer list, of the queue head pointer corresponding to the dequeue operation request, read the corresponding message descriptor according to the location information, and update the pointer list information; wherein the location information includes on-chip RAM location information and off-chip DDR location information.
  • the first processing module is configured to: when it is determined, according to the enqueue operation request, that the current on-chip RAM cache has reached its upper limit, calculate an off-chip DDR address and store the message descriptor to the off-chip DDR according to the obtained off-chip DDR address; otherwise, extract the queue number in the enqueue operation request and store the message descriptor in the on-chip RAM according to the queue number.
  • the first processing module is configured to extract the queue number from the enqueue operation request, read the corresponding queue tail pointer and free head pointer according to the queue number, update the queue tail pointer to point to the address currently indicated by the free head pointer, and then point the free head pointer to the next address in the pointer list.
  • the second processing module is configured to extract the queue number from the dequeue operation request, read the corresponding queue head pointer and free tail pointer according to the queue number, update the free tail pointer to point to the address currently indicated by the queue head pointer, and then point the queue head pointer to the next address in the pointer list.
  • the second processing module is configured to extract the queue number from the dequeue operation request, read the corresponding queue head pointer according to the queue number, and, when the position of the queue head pointer in the pointer list is determined to be off-chip, calculate the off-chip DDR address and read the corresponding message descriptor in the off-chip DDR according to the obtained off-chip DDR address; otherwise, read the corresponding message descriptor in the on-chip RAM according to the queue number.
  • An embodiment of the present invention further provides a computer storage medium, where the computer storage medium stores a computer program configured to execute the queue management method described above.
  • In the queue management method, device, and storage medium provided by the embodiments of the present invention, when the current operation request is determined to be an enqueue operation request for a message descriptor, the message descriptor is stored in the off-chip DDR or the on-chip RAM according to the enqueue operation request and the buffering condition of the on-chip RAM, and the pointer list information is updated; when the current operation request is determined to be a dequeue operation request for a message descriptor, the location information, in the pointer list, of the queue head pointer corresponding to the dequeue operation request is acquired, the corresponding message descriptor is read according to the location information, and the pointer list information is updated; the location information includes on-chip RAM location information and off-chip DDR location information.
  • The queues are managed by means of the pointer list and combined on-chip plus off-chip storage, and the storage location of each message descriptor is determined according to the buffering condition of the on-chip RAM, thereby effectively improving access efficiency and reducing power consumption while avoiding packet loss under traffic bursts.
  • FIG. 1 is a schematic flowchart of a queue management method according to an embodiment of the present invention.
  • FIG. 2 is a schematic diagram of initializing a pointer list RAM according to an embodiment of the present invention
  • FIG. 3 is a schematic diagram of data relationships between respective RAMs according to an embodiment of the present invention.
  • FIG. 4 is a schematic flowchart of a queue management method according to Embodiment 2 of the present invention.
  • FIG. 5 is a schematic structural diagram of a queue management apparatus according to an embodiment of the present invention.
  • determining that the current operation request is an enqueue operation request for a message descriptor, storing the message descriptor to the off-chip DDR or the on-chip RAM according to the enqueue operation request and the buffering condition of the on-chip RAM, and updating the pointer list information; determining that the current operation request is a dequeue operation request for a message descriptor, acquiring the location information, in the pointer list, of the queue head pointer corresponding to the dequeue operation request, reading the corresponding message descriptor according to the location information, and updating the pointer list information; wherein the location information includes on-chip RAM location information and off-chip DDR location information.
  • FIG. 1 is a schematic flowchart of a queue management method according to an embodiment of the present invention. As shown in FIG. 1 , a queue management method according to an embodiment of the present invention includes:
  • Step 101: Determine that the current operation request is an enqueue operation request for a message descriptor, store the message descriptor to the off-chip DDR or the on-chip RAM according to the enqueue operation request and the buffering condition of the on-chip RAM, and update the pointer list information;
  • Storing the message descriptor to the off-chip DDR or the on-chip RAM according to the enqueue operation request and the buffering condition of the on-chip RAM includes: when it is determined, according to the enqueue operation request, that the current on-chip RAM cache has reached its upper limit, calculating an off-chip DDR address and storing the message descriptor to the off-chip DDR according to the obtained address; otherwise, extracting the queue number in the enqueue operation request and storing the message descriptor in the on-chip RAM according to the queue number.
  • Determining that the current on-chip RAM cache has reached its upper limit includes: determining whether the current on-chip RAM cache has reached its upper limit according to the preset buffer size of the on-chip RAM and the number of non-free pointers in the current pointer list, which is recorded in real time by a cache counter; if the number of non-free pointers in the current pointer list is less than the preset buffer size of the on-chip RAM, it is determined that the current on-chip RAM cache has not reached its upper limit; otherwise, it is determined that the current on-chip RAM cache has reached its upper limit.
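The occupancy check described above can be sketched in Python; the names here are illustrative, since the patent specifies no concrete interface:

```python
# Sketch of the on-chip cache occupancy check, assuming a simple counter
# model. ONCHIP_RAM_SIZE matches the 4K embodiment described in the text;
# the function name is hypothetical.

ONCHIP_RAM_SIZE = 4 * 1024  # preset buffer size of the on-chip RAM (4K entries)

def onchip_cache_full(non_free_pointer_count: int) -> bool:
    """Return True when the on-chip RAM cache has reached its upper limit.

    The cache counter tracks the number of non-free pointers in the pointer
    list in real time; the cache is full once that count is no longer below
    the preset on-chip buffer size.
    """
    return non_free_pointer_count >= ONCHIP_RAM_SIZE
```

An enqueue path would consult this check first and fall back to the off-chip DDR only when it returns True.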
  • the on-chip RAM is used to buffer message descriptor information, and its size can be set according to actual needs.
  • In this embodiment, the size of the on-chip RAM is 4K, that is, the addresses of the on-chip RAM range from 0 to 4K-1, and the depth of the on-chip RAM is 4×1024 addresses.
  • Calculating the off-chip DDR address includes: the off-chip DDR address may be computed as the base address plus the position offset within the linked list, where the base address is a system-configured base address at which message descriptors are stored in the off-chip DDR, and the position offset within the linked list is a relative address; for example, a pointer list position of 4K corresponds to an offset of 0, and a position of 4K+1 corresponds to an offset of 1.
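The address computation above can be sketched as follows; the base address value and all names are assumptions for illustration, as the patent only states that the base address is system-configured:

```python
# Sketch of the off-chip DDR address computation: base address plus the
# linked-list position offset. The on-chip RAM occupies pointer-list
# addresses 0..4K-1 in the embodiment, so pointer-list address 4K maps to
# offset 0, 4K+1 to offset 1, and so on.

ONCHIP_RAM_SIZE = 4 * 1024      # pointer-list addresses below this are on-chip
DDR_BASE_ADDRESS = 0x8000_0000  # hypothetical system-configured base address

def offchip_ddr_address(pointer_list_addr: int) -> int:
    """Map a pointer-list address in the off-chip region to a DDR address."""
    offset = pointer_list_addr - ONCHIP_RAM_SIZE  # 4K -> 0, 4K+1 -> 1, ...
    return DDR_BASE_ADDRESS + offset
```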
  • Updating the pointer list information includes:
  • A tail pointer RAM is used to store the queue tail pointers of all queues and to indicate the queue status of each queue, where the queue status is either empty or non-empty; the size/depth of the tail pointer RAM can be set according to the actual situation. In one embodiment, the depth of the tail pointer RAM is equal to the number of queues. In this embodiment, the depth of the tail pointer RAM is 256 addresses; after the tail pointer RAM is initialized, the highest bit of each entry is 1 (or all bits are 1), indicating that every initialized queue is empty;
  • A head pointer RAM is used to store the queue head pointers of all queues and to indicate the queue status of each queue, where the queue status is either empty or non-empty; the size/depth of the head pointer RAM can be set according to the actual situation. In one embodiment, the depth of the head pointer RAM is equal to the number of queues. In this embodiment, the depth of the head pointer RAM is 256 addresses; after the head pointer RAM is initialized, the highest bit of each entry is 1 (or all bits are 1), indicating that every initialized queue is empty.
  • A pointer linked list RAM is configured to store the pointer linked list, so that the queue pointer linked lists and the free pointer linked list share one RAM; this eliminates the need for a separate first-in first-out (FIFO) queue to store the free pointers, thereby saving hardware resources. The pointer linked list RAM size can be set according to actual needs.
  • In this embodiment, the on-chip RAM has a size of 4K; the pointer list contains, at each address, a pointer to the next address, and is initialized into a free pointer list as shown in FIG. 2, where ptr RAM is the pointer list RAM. After initialization, the addresses of the ptr RAM run from 0 to 8191, and the content stored at each address of the ptr RAM is the next address value, thereby forming a pointer linked list.
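The initialization described above can be sketched as one long free list; the wrap-around at the last entry and all names are assumptions for illustration:

```python
# Sketch of initializing the pointer list RAM (ptr RAM) as a single free
# list: each entry stores the next address, so address i points to i + 1.
# The depth of 8192 follows the embodiment; the wrap of the final entry
# back to 0 is an illustrative assumption.

PTR_RAM_DEPTH = 8192

def init_ptr_ram() -> list:
    """Initialize ptr RAM so entry i holds the next address i + 1."""
    return [(i + 1) % PTR_RAM_DEPTH for i in range(PTR_RAM_DEPTH)]

ptr_ram = init_ptr_ram()
free_head = 0                  # free head pointer after initialization
free_tail = PTR_RAM_DEPTH - 1  # free tail pointer after initialization
```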
  • The pointer list contains non-free pointers and free pointers, where the number of non-free pointers represents the total number of addresses occupied by message descriptors in the on-chip RAM and the off-chip DDR: each time a message descriptor is enqueued, the number of non-free pointers is incremented by one, and each time a message descriptor is dequeued, it is decremented by one.
  • FIG. 3 is a schematic diagram of the data relationships between the respective RAMs according to the embodiment of the present invention, in which head_ptr ram is the head pointer RAM, tail_ptr ram is the tail pointer RAM, ptr ram is the pointer list RAM, and desc ram is the on-chip RAM. The black entries in the pointer list RAM are the members of one queue; there are six queue members, which for convenience of explanation are marked 1 to 6 in order. Each data block stores the address information of the next data block; because label 6 is the last data block of the queue, its entry in the pointer list RAM is invalid data. From the head pointer and the tail pointer it can be seen that the queue number of this queue is 11: the address of the data block corresponding to label 1 is given by the value stored at head pointer RAM address 11, and the address of the data block corresponding to label 6 is given by the value stored at tail pointer RAM address 11.
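Walking such a queue through the pointer RAMs can be sketched as below; the concrete addresses are invented for illustration, since FIG. 3 only shows the structure:

```python
# Sketch of traversing one queue via the three pointer structures of FIG. 3:
# the head pointer RAM gives the first block address, each ptr RAM entry
# gives the next address, and the tail pointer RAM marks where to stop.
# Queue number 11 and the six addresses are illustrative values.

head_ptr_ram = {11: 100}  # queue 11 starts at address 100
tail_ptr_ram = {11: 105}  # queue 11 ends at address 105
ptr_ram = {100: 101, 101: 102, 102: 103, 103: 104, 104: 105}

def queue_addresses(queue_no: int) -> list:
    """Collect the data-block addresses of one queue, head to tail."""
    addr = head_ptr_ram[queue_no]
    tail = tail_ptr_ram[queue_no]
    addrs = [addr]
    while addr != tail:
        addr = ptr_ram[addr]  # follow the next-address link
        addrs.append(addr)
    return addrs
```

The entry at the tail address is never followed, which matches the text: the last block's pointer entry is invalid data.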
  • Step 102: Determine that the current operation request is a dequeue operation request for a message descriptor, acquire the location information, in the pointer list, of the queue head pointer corresponding to the dequeue operation request, read the corresponding message descriptor according to the location information, and update the pointer list information;
  • the location information includes on-chip RAM location information and off-chip DDR location information
  • Acquiring the location information, in the pointer list, of the queue head pointer corresponding to the dequeue operation request and reading the corresponding message descriptor according to the location information includes: when the position of the queue head pointer in the pointer list is determined to be off-chip, calculating the off-chip DDR address and reading the corresponding message descriptor in the off-chip DDR according to the obtained address; otherwise, reading the corresponding message descriptor in the on-chip RAM according to the queue number.
  • Updating the pointer list information includes: extracting the queue number from the dequeue operation request, reading the corresponding queue head pointer and free tail pointer according to the queue number, updating the free tail pointer to point to the address currently indicated by the queue head pointer, and then pointing the queue head pointer to the next address in the pointer list.
  • As shown in FIG. 4, the queue management method of Embodiment 2 of the present invention includes:
  • Step 401: Receive an operation request and determine whether it is an enqueue operation request or a dequeue operation request for a message descriptor; if it is an enqueue operation request, perform step 402; if it is a dequeue operation request, perform step 404.
  • Step 402 Store the message descriptor into an off-chip DDR or an on-chip RAM according to the enqueue operation request and the buffering condition of the on-chip RAM;
  • This step includes: when it is determined, according to the enqueue operation request, that the current on-chip RAM cache has reached its upper limit, calculating an off-chip DDR address and storing the message descriptor to the off-chip DDR according to the obtained address; otherwise, extracting the queue number in the enqueue operation request and storing the message descriptor in the on-chip RAM according to the queue number.
  • Determining that the current on-chip RAM cache has reached its upper limit includes: determining whether the current on-chip RAM cache has reached its upper limit according to the preset buffer size of the on-chip RAM and the number of non-free pointers in the current pointer list, which is recorded in real time by a cache counter; if the number of non-free pointers in the current pointer list is less than the preset buffer size of the on-chip RAM, it is determined that the current on-chip RAM cache has not reached its upper limit; otherwise, it is determined that the current on-chip RAM cache has reached its upper limit.
  • the on-chip RAM is used to buffer the message descriptor information, and the size thereof can be set according to actual needs.
  • In this embodiment, the size of the on-chip RAM is 4K, that is, the addresses of the on-chip RAM range from 0 to 4K-1, and the depth of the on-chip RAM is 4×1024 addresses.
  • Step 403 Update the pointer list information, and perform step 406;
  • This step includes: extracting the queue number from the enqueue operation request, reading the corresponding queue tail pointer in the tail pointer RAM and the free head pointer in the pointer list RAM according to the queue number, updating the queue tail pointer to point to the address currently indicated by the free head pointer, and then pointing the free head pointer to the next address in the pointer list;
  • The tail pointer RAM is used to store the queue tail pointers of all queues and to indicate the queue status of each queue, where the queue status is either empty or non-empty; the size/depth of the tail pointer RAM can be set according to the actual situation. In one embodiment, the depth of the tail pointer RAM is equal to the number of queues. In this embodiment, the depth of the tail pointer RAM is 256 addresses; after the tail pointer RAM is initialized, the highest bit of each entry is 1 (or all bits are 1), indicating that every initialized queue is empty;
  • A head pointer RAM is used to store the queue head pointers of all queues and to indicate the queue status of each queue, where the queue status is either empty or non-empty; the size/depth of the head pointer RAM can be set according to the actual situation. In one embodiment, the depth of the head pointer RAM is equal to the number of queues. In this embodiment, the depth of the head pointer RAM is 256 addresses; after the head pointer RAM is initialized, the highest bit of each entry is 1 (or all bits are 1), indicating that every initialized queue is empty.
  • The pointer linked list RAM is configured to store the pointer linked list, so that the queue pointer linked lists and the free pointer linked list share one RAM; this eliminates the need for a separate FIFO to store the free pointers, thereby saving hardware resources. The pointer linked list RAM size can be set according to actual needs.
  • In this embodiment, the size of the on-chip RAM is 4K; the pointer list contains, at each address, a pointer to the next address, and is initialized into a free pointer list, as shown in FIG. 2, where ptr RAM is the pointer list RAM. After initialization, the addresses of the ptr RAM run from 0 to 8191, and the content stored at each address is the next address value, thereby forming a pointer linked list. The pointer list contains non-free pointers and free pointers, where the number of non-free pointers represents the total number of addresses occupied by message descriptors in the on-chip RAM and the off-chip DDR; each time a message descriptor is enqueued, the number of non-free pointers is incremented by one.
  • The on-chip RAM has a one-to-one correspondence with a part of the pointer list RAM: the on-chip RAM stores the message descriptor information, and the pointer list RAM stores the corresponding address information. FIG. 3 is a schematic diagram of the data relationships between the respective RAMs according to the embodiment of the present invention.
  • In FIG. 3, head_ptr ram is the head pointer RAM, tail_ptr ram is the tail pointer RAM, ptr ram is the pointer list RAM, and desc ram is the on-chip RAM. The black entries in the pointer list RAM are the members of one queue; there are six queue members, which for convenience of explanation are marked 1 to 6 in order (all are virtual labels). Each data block stores the address information of the next data block; because label 6 is the last data block of the queue, its entry in the pointer list RAM is invalid data. From the head pointer and the tail pointer it can be seen that the queue number of this queue is 11: the address of the data block corresponding to label 1 is given by the value stored at head pointer RAM address 11, and the address of the data block corresponding to label 6 is given by the value stored at tail pointer RAM address 11.
  • Step 404 Acquire location information of a queue head pointer corresponding to the dequeuing operation request in the pointer list, and read a corresponding message descriptor according to the location information.
  • the location information includes on-chip RAM location information and off-chip DDR location information
  • This step includes: extracting the queue number from the dequeue operation request, reading the corresponding queue head pointer in the head pointer RAM according to the queue number, and, when the position of the queue head pointer in the pointer list is determined to be off-chip, calculating the off-chip DDR address and reading the corresponding message descriptor in the off-chip DDR according to the obtained address; otherwise, reading the corresponding message descriptor in the on-chip RAM according to the queue number.
  • Step 405 Update the pointer list information.
  • This step includes: extracting the queue number from the dequeue operation request, reading the corresponding queue head pointer and free tail pointer according to the queue number, updating the free tail pointer to point to the address currently indicated by the queue head pointer, and then pointing the queue head pointer to the next address in the pointer list.
  • Step 406 End the current processing flow.
  • FIG. 5 is a schematic structural diagram of a queue management apparatus according to an embodiment of the present invention.
  • As shown in FIG. 5, the queue management device includes: a first processing module 51 and a second processing module 52; wherein
  • the first processing module 51 is configured to determine that the current operation request is an enqueue operation request for a message descriptor, store the message descriptor to the off-chip DDR or the on-chip RAM according to the enqueue operation request and the buffering condition of the on-chip RAM, and update the pointer list information;
  • the second processing module 52 is configured to determine that the current operation request is a dequeue operation request for a message descriptor, acquire the location information, in the pointer list, of the queue head pointer corresponding to the dequeue operation request, read the corresponding message descriptor according to the location information, and update the pointer list information; wherein the location information includes on-chip RAM location information and off-chip DDR location information.
  • The first processing module 51 storing the message descriptor to the off-chip DDR or the on-chip RAM according to the enqueue operation request and the buffering condition of the on-chip RAM includes:
  • the first processing module 51 determining, according to the enqueue operation request, that the current on-chip RAM cache has reached its upper limit, calculating an off-chip DDR address, and storing the message descriptor to the off-chip DDR according to the obtained off-chip DDR address;
  • The first processing module 51 determining that the current on-chip RAM cache has reached its upper limit includes:
  • the first processing module 51 determining whether the current on-chip RAM cache has reached its upper limit according to the preset buffer size of the on-chip RAM and the number of non-free pointers in the current pointer list, which is recorded in real time by a cache counter; if the number of non-free pointers in the current pointer list is less than the preset buffer size of the on-chip RAM, it is determined that the current on-chip RAM cache has not reached its upper limit; otherwise, it is determined that the current on-chip RAM cache has reached its upper limit;
  • the on-chip RAM is used to buffer message descriptor information, and its size can be set according to actual needs.
  • In this embodiment, the size of the on-chip RAM is 4K, that is, the addresses of the on-chip RAM range from 0 to 4K-1, and the depth of the on-chip RAM is 4×1024 addresses.
  • The first processing module 51 updating the pointer list information includes:
  • the first processing module 51 extracting the queue number from the enqueue operation request, reading the corresponding queue tail pointer and free head pointer according to the queue number, updating the queue tail pointer to point to the address currently indicated by the free head pointer, and then pointing the free head pointer to the next address in the pointer list;
  • The tail pointer RAM is used to store the queue tail pointers of all queues and to indicate the queue status of each queue, where the queue status is either empty or non-empty; the size/depth of the tail pointer RAM can be set according to the actual situation. In one embodiment, the depth of the tail pointer RAM is equal to the number of queues. In this embodiment, the depth of the tail pointer RAM is 256 addresses; after the tail pointer RAM is initialized, the highest bit of each entry is 1 (or all bits are 1), indicating that every initialized queue is empty;
  • A head pointer RAM is used to store the queue head pointers of all queues and to indicate the queue status of each queue, where the queue status is either empty or non-empty; the size/depth of the head pointer RAM can be set according to the actual situation. In one embodiment, the depth of the head pointer RAM is equal to the number of queues. In this embodiment, the depth of the head pointer RAM is 256 addresses; after the head pointer RAM is initialized, the highest bit of each entry is 1 (or all bits are 1), indicating that every initialized queue is empty.
  • The pointer linked list RAM is used to store the pointer linked list, so that the queue pointer linked lists and the free pointer linked list share one RAM; this eliminates the need for a separate first-in first-out (FIFO) queue to store the free pointers, thereby saving hardware resources. The pointer linked list RAM size can be set according to actual needs.
  • In this embodiment, the on-chip RAM has a size of 4K; the pointer list contains, at each address, a pointer to the next address, and is initialized into a free pointer list, where the ptr RAM is the pointer linked list RAM. After initialization, the addresses of the ptr RAM run from 0 to 8191, and the content stored at each address is the next address value, thereby forming a pointer linked list.
  • The pointer list contains non-free pointers and free pointers, where the number of non-free pointers represents the total number of addresses occupied by message descriptors in the on-chip RAM and the off-chip DDR: each time a message descriptor is enqueued, the number of non-free pointers is incremented by one, and correspondingly, each time a message descriptor is dequeued, the number of non-free pointers is decremented by one;
  • The on-chip RAM has a one-to-one correspondence with a part of the pointer list RAM: the on-chip RAM stores the message descriptor information, and the pointer list RAM stores the corresponding address information;
  • FIG. 3 is a schematic diagram of the data relationships between the respective RAMs according to an embodiment of the present invention, in which head_ptr ram is the head pointer RAM, tail_ptr ram is the tail pointer RAM, ptr ram is the pointer list RAM, and desc ram is the on-chip RAM. The black entries in the pointer list RAM are the members of one queue; there are six queue members, which for convenience of explanation are marked 1 to 6 in order (all are virtual labels). Each data block stores the address information of the next data block; because label 6 is the last data block of the queue, its entry in the pointer list RAM is invalid data. From the head pointer and the tail pointer it can be seen that the queue number of this queue is 11: the address of the data block corresponding to label 1 is given by the value stored at head pointer RAM address 11, and the address of the data block corresponding to label 6 is given by the value stored at tail pointer RAM address 11.
  • The second processing module 52 acquiring the location information, in the pointer list, of the queue head pointer corresponding to the dequeue operation request and reading the corresponding message descriptor according to the location information includes:
  • the second processing module 52 extracting the queue number from the dequeue operation request, reading the corresponding queue head pointer according to the queue number, and, when the position of the queue head pointer in the pointer list is determined to be off-chip DDR, calculating the off-chip DDR address and reading the corresponding message descriptor in the off-chip DDR according to the obtained off-chip DDR address; otherwise, reading the corresponding message descriptor in the on-chip RAM according to the queue number.
  • when the current operation request is a dequeue operation request, updating the pointer linked list information includes:
  • the second processing module 52 extracts the queue number from the dequeue operation request, reads the corresponding queue head pointer and free tail pointer according to the queue number, updates the free tail pointer to point to the address currently indicated by the queue head pointer, and then points the queue head pointer to the next address in the pointer linked list
  • the first processing module 51 and the second processing module 52 proposed in the embodiments of the present invention may be implemented by a processor, or by a dedicated logic circuit; the processor may be a processor on a mobile terminal or a server
  • the processor may be a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), or a field programmable gate array (FPGA)
  • if the queue management method is implemented in the form of a software function module and sold or used as a stand-alone product, it may also be stored in a computer-readable storage medium
  • based on such an understanding, the technical solutions of the embodiments of the present invention may, in essence, be embodied in the form of a software product stored in a storage medium and including a number of instructions
  • for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the methods described in the various embodiments of the present invention
  • the foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disk
  • an embodiment of the present invention further provides a computer storage medium storing a computer program configured to execute the queue management method of the embodiments of the present invention


Abstract

Disclosed is a queue management method. The method comprises: determining that a current operation request is an enqueue operation request for message descriptors, storing the message descriptors into an off-chip double data rate synchronous dynamic random access memory (DDR) or an on-chip random access memory (RAM) according to the enqueue operation request and the caching status of the on-chip RAM, and updating pointer linked list information; and determining that a current operation request is a dequeue operation request for message descriptors, acquiring location information of the queue head pointer corresponding to the dequeue operation request in the pointer linked list, reading the corresponding message descriptors according to the location information, and updating the pointer linked list information, wherein the location information comprises on-chip RAM location information and off-chip DDR location information. Also disclosed are a queue management device and a storage medium.

Description

Queue management method, device, and storage medium
Technical Field
The present invention relates to queue management in the field of communication technologies, and in particular to a queue management method, device, and storage medium.
Background
As data traffic in networks grows, traffic bursts occur frequently. At present, one implementation of queue management uses fixed space allocation: each queue is allocated a fixed space, and the address spaces of different queues cannot be shared. This method is simple to implement, but because the address spaces cannot be shared, space is easily wasted; moreover, when the number of queues is large, the space allocated to each queue is quite limited and the ability to absorb bursts is weak. Another implementation uses a shared space, building a linked list for each queue that links together the packet address spaces belonging to the same queue. This method improves cache utilization, but its control is complex, especially with multiple linked lists, and in the face of instantaneous, very large burst traffic the ability to absorb bursts is still insufficient.
Therefore, providing a queue management solution that improves access efficiency while effectively avoiding packet loss under traffic bursts has become an urgent problem to be solved.
Summary of the Invention
In view of this, embodiments of the present invention are expected to provide a queue management method, device, and storage medium capable of effectively avoiding packet loss under traffic bursts while improving access efficiency and reducing power consumption.
To achieve the above objective, the technical solutions of the embodiments of the present invention are implemented as follows:
An embodiment of the present invention provides a queue management method, the method including:
determining that the current operation request is an enqueue operation request for a message descriptor, storing the message descriptor into an off-chip double data rate synchronous dynamic random access memory (DDR, Double Data Rate) or an on-chip random access memory (RAM, Random-Access Memory) according to the enqueue operation request and the caching status of the on-chip RAM, and updating pointer linked list information; and
determining that the current operation request is a dequeue operation request for a message descriptor, acquiring location information of the queue head pointer corresponding to the dequeue operation request in the pointer linked list, reading the corresponding message descriptor according to the location information, and updating the pointer linked list information; wherein the location information includes on-chip RAM location information and off-chip DDR location information.
In the above solution, storing the message descriptor into the off-chip DDR or the on-chip RAM according to the enqueue operation request and the caching status of the on-chip RAM includes:
upon determining, according to the enqueue operation request, that the cache of the current on-chip RAM has reached its upper limit, calculating an off-chip DDR address and storing the message descriptor into the off-chip DDR according to the obtained off-chip DDR address;
and, upon determining that the cache of the current on-chip RAM has not reached its upper limit, extracting the queue number from the enqueue operation request and storing the message descriptor into the on-chip RAM according to the queue number.
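For illustration only, the decision between the two branches above can be sketched as follows (this sketch is not part of the original disclosure; the 4k on-chip size follows the embodiments described later, and the count of non-free pointers stands in for the cache counter):

```python
ON_CHIP_SIZE = 4 * 1024  # on-chip RAM depth (4k addresses, per the embodiments)

def choose_storage(non_free_pointers):
    """Decide where the next message descriptor is stored.

    The cache counter tracks the number of non-free pointers in the
    pointer linked list; the on-chip cache is considered to have reached
    its upper limit once that count equals the on-chip RAM size.
    """
    if non_free_pointers < ON_CHIP_SIZE:
        return "on_chip_ram"   # cache below its upper limit
    return "off_chip_ddr"      # cache at its upper limit: spill to DDR
```

Because the decision depends only on the single counter, the enqueue path never needs to scan the linked list to know whether the on-chip cache is full.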
In the above solution, when it is determined that the current operation request is an enqueue operation request for a message descriptor, updating the pointer linked list information includes:
extracting the queue number from the enqueue operation request, reading the corresponding queue tail pointer and free head pointer according to the queue number, updating the queue tail pointer to point to the address currently indicated by the free head pointer, and then pointing the free head pointer to the next address in the pointer linked list.
In the above solution, when it is determined that the current operation request is a dequeue operation request for a message descriptor, updating the pointer linked list information includes:
extracting the queue number from the dequeue operation request, reading the corresponding queue head pointer and free tail pointer according to the queue number, updating the free tail pointer to point to the address currently indicated by the queue head pointer, and then pointing the queue head pointer to the next address in the pointer linked list.
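For illustration only, the enqueue and dequeue updates above amount to splicing one node between the free pointer chain and a queue chain. A sketch using plain dictionaries for the pointer RAMs (the write that links the old queue tail to the new node is an inferred detail implied by the linked-list structure rather than stated in the text; all names are illustrative):

```python
def enqueue_update(q, tail_ptr_ram, free, ptr_ram):
    """Enqueue: the node at the free head is appended at queue q's tail."""
    node = free["head"]
    ptr_ram[tail_ptr_ram[q]] = node  # old tail links to the new node (inferred)
    tail_ptr_ram[q] = node           # tail pointer -> address the free head indicated
    free["head"] = ptr_ram[node]     # free head -> next address in the pointer list

def dequeue_update(q, head_ptr_ram, free, ptr_ram):
    """Dequeue: queue q's head node is returned to the free chain's tail."""
    node = head_ptr_ram[q]
    ptr_ram[free["tail"]] = node     # old free tail links to the released node (inferred)
    free["tail"] = node              # free tail -> address the queue head indicated
    head_ptr_ram[q] = ptr_ram[node]  # queue head -> next address in the pointer list
```

For example, with queue 5 holding nodes 10→11 and a free chain 20→21, an enqueue moves node 20 to the queue's tail, and a subsequent dequeue returns node 10 to the free chain — each operation touches only a constant number of RAM entries.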
In the above solution, acquiring the location information of the queue head pointer corresponding to the dequeue operation request in the pointer linked list and reading the corresponding message descriptor according to the location information includes:
extracting the queue number from the dequeue operation request, reading the corresponding queue head pointer according to the queue number, and, upon determining that the position of the queue head pointer in the pointer linked list lies in the off-chip DDR, calculating the off-chip DDR address and reading the corresponding message descriptor from the off-chip DDR according to the obtained off-chip DDR address;
and, upon determining that the position of the queue head pointer in the pointer linked list lies in the on-chip RAM, reading the corresponding message descriptor from the on-chip RAM according to the queue number.
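For illustration only, the dequeue read path above can be sketched as follows (the convention that pointer-list addresses below the on-chip size lie in the on-chip RAM while higher addresses map into DDR is an assumption of this sketch, not a statement of the disclosed implementation):

```python
ON_CHIP_SIZE = 4 * 1024  # assumed split point of the pointer-list address range

def read_descriptor(queue_id, head_ptr_ram, on_chip_ram, off_chip_ddr, base_addr):
    """Read the message descriptor at the head of the given queue."""
    addr = head_ptr_ram[queue_id]
    if addr < ON_CHIP_SIZE:
        return on_chip_ram[addr]                  # head lies in the on-chip RAM
    ddr_addr = base_addr + (addr - ON_CHIP_SIZE)  # compute the off-chip DDR address
    return off_chip_ddr[ddr_addr]
```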
An embodiment of the present invention further provides a queue management device, the device including a first processing module and a second processing module; wherein
the first processing module is configured to determine that the current operation request is an enqueue operation request for a message descriptor, store the message descriptor into the off-chip double data rate synchronous dynamic random access memory DDR or the on-chip random access memory RAM according to the enqueue operation request and the caching status of the on-chip RAM, and update the pointer linked list information;
and the second processing module is configured to determine that the current operation request is a dequeue operation request for a message descriptor, acquire the location information of the queue head pointer corresponding to the dequeue operation request in the pointer linked list, read the corresponding message descriptor according to the location information, and update the pointer linked list information; wherein the location information includes on-chip RAM location information and off-chip DDR location information.
In the above solution, the first processing module is configured to, upon determining according to the enqueue operation request that the cache of the current on-chip RAM has reached its upper limit, calculate an off-chip DDR address and store the message descriptor into the off-chip DDR according to the obtained off-chip DDR address;
and, upon determining that the cache of the current on-chip RAM has not reached its upper limit, extract the queue number from the enqueue operation request and store the message descriptor into the on-chip RAM according to the queue number.
In the above solution, the first processing module is configured to extract the queue number from the enqueue operation request, read the corresponding queue tail pointer and free head pointer according to the queue number, update the queue tail pointer to point to the address currently indicated by the free head pointer, and then point the free head pointer to the next address in the pointer linked list.
In the above solution, the second processing module is configured to extract the queue number from the dequeue operation request, read the corresponding queue head pointer and free tail pointer according to the queue number, update the free tail pointer to point to the address currently indicated by the queue head pointer, and then point the queue head pointer to the next address in the pointer linked list.
In the above solution, the second processing module is configured to extract the queue number from the dequeue operation request, read the corresponding queue head pointer according to the queue number, and, upon determining that the position of the queue head pointer in the pointer linked list lies in the off-chip DDR, calculate the off-chip DDR address and read the corresponding message descriptor from the off-chip DDR according to the obtained off-chip DDR address;
and, upon determining that the position of the queue head pointer in the pointer linked list lies in the on-chip RAM, read the corresponding message descriptor from the on-chip RAM according to the queue number.
An embodiment of the present invention further provides a computer storage medium storing a computer program configured to execute the above queue management method of the embodiments of the present invention.
According to the queue management method, device, and storage medium provided by the embodiments of the present invention, it is determined that the current operation request is an enqueue operation request for a message descriptor, the message descriptor is stored into the off-chip DDR or the on-chip RAM according to the enqueue operation request and the caching status of the on-chip RAM, and the pointer linked list information is updated; it is determined that the current operation request is a dequeue operation request for a message descriptor, the location information of the queue head pointer corresponding to the dequeue operation request in the pointer linked list is acquired, the corresponding message descriptor is read according to the location information, and the pointer linked list information is updated; wherein the location information includes on-chip RAM location information and off-chip DDR location information. In this way, the queues are managed with a pointer linked list in an on-chip-plus-off-chip manner, and the storage location of each message descriptor is decided according to the caching status of the on-chip RAM, which improves access efficiency and reduces power consumption while effectively avoiding packet loss under traffic bursts.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of a queue management method according to Embodiment 1 of the present invention;
FIG. 2 is a schematic diagram of the initialization of the pointer linked list RAM according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the data relationships between the respective RAMs according to an embodiment of the present invention;
FIG. 4 is a schematic flowchart of a queue management method according to Embodiment 2 of the present invention;
FIG. 5 is a schematic structural diagram of a queue management device according to an embodiment of the present invention.
Detailed Description
In the embodiments of the present invention, it is determined that the current operation request is an enqueue operation request for a message descriptor, the message descriptor is stored into the off-chip DDR or the on-chip RAM according to the enqueue operation request and the caching status of the on-chip RAM, and the pointer linked list information is updated; it is determined that the current operation request is a dequeue operation request for a message descriptor, the location information of the queue head pointer corresponding to the dequeue operation request in the pointer linked list is acquired, the corresponding message descriptor is read according to the location information, and the pointer linked list information is updated; wherein the location information includes on-chip RAM location information and off-chip DDR location information.
Embodiment 1
FIG. 1 is a schematic flowchart of the queue management method according to Embodiment 1 of the present invention; as shown in FIG. 1, the queue management method of this embodiment includes:
Step 101: determining that the current operation request is an enqueue operation request for a message descriptor, storing the message descriptor into the off-chip DDR or the on-chip RAM according to the enqueue operation request and the caching status of the on-chip RAM, and updating the pointer linked list information;
Here, storing the message descriptor into the off-chip DDR or the on-chip RAM according to the enqueue operation request and the caching status of the on-chip RAM includes:
upon determining, according to the enqueue operation request, that the cache of the current on-chip RAM has reached its upper limit, calculating an off-chip DDR address and storing the message descriptor into the off-chip DDR according to the obtained off-chip DDR address;
and, upon determining that the cache of the current on-chip RAM has not reached its upper limit, extracting the queue number from the enqueue operation request and storing the message descriptor into the on-chip RAM according to the queue number.
In an embodiment, determining that the cache of the current on-chip RAM has reached its upper limit includes:
determining, according to the preset cache size of the on-chip RAM and the number of non-free pointers in the current pointer linked list recorded in real time by a cache counter, whether the cache of the current on-chip RAM has reached its upper limit: if the number of non-free pointers in the current pointer linked list is less than the preset cache size of the on-chip RAM, it is determined that the cache has not reached its upper limit; otherwise, it is determined that the cache has reached its upper limit.
Here, the on-chip RAM is used to cache message descriptor information, and its size can be set according to actual needs; in an embodiment, the size of the on-chip RAM is 4k, i.e. its addresses run from 0 to 4k-1, so the depth of the on-chip RAM is 4 × 1024 addresses.
In an embodiment, calculating the off-chip DDR address includes:
calculating the off-chip DDR address according to the size of the on-chip RAM and the number of non-free pointers in the current pointer linked list recorded by the cache counter; the off-chip DDR address may be the base address plus the position offset within the linked list, where the base address is the system-configured base address at which message descriptors are stored in the off-chip DDR, and the position offset within the linked list is a relative address: for the linked-list position 4k the offset is 0, and for the position 4k+1 the offset is 1.
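For illustration only, the calculation above reduces to one line (the reading that the in-list position offset is measured past the 4k on-chip region is inferred from the 4k / 4k+1 example in the text):

```python
def off_chip_ddr_addr(base_addr, list_position, on_chip_size=4 * 1024):
    """Off-chip DDR address = base address + position offset within the linked list."""
    return base_addr + (list_position - on_chip_size)
```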
In an embodiment, updating the pointer linked list information includes:
extracting the queue number from the enqueue operation request, reading the corresponding queue tail pointer in the tail pointer RAM and the free head pointer in the pointer linked list RAM according to the queue number, updating the queue tail pointer to point to the address currently indicated by the free head pointer, and then pointing the free head pointer to the next address in the pointer linked list.
Here, the tail pointer RAM is used to store the queue tail pointers of all queues and to indicate the queue status (empty or non-empty) of each queue; its size/depth can be set according to the actual situation; in an embodiment the depth of the tail pointer RAM equals the number of queues, and in this embodiment it is 256 addresses; after initialization, the highest bit of each entry (or the entire entry) is 1, indicating that the initialized queue status is empty.
Correspondingly, there is also a head pointer RAM used to store the queue head pointers of all queues and to indicate the queue status (empty or non-empty) of each queue; its size/depth can be set according to the actual situation; in an embodiment the depth of the head pointer RAM equals the number of queues, and in this embodiment it is 256 addresses; after initialization, the highest bit of each entry (or the entire entry) is 1, indicating that the initialized queue status is empty.
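For illustration only, the empty-queue indication described for the tail and head pointer RAMs can be sketched as follows (the 14-bit pointer width is an assumption, chosen so that one bit above the 13-bit address range 0 to 8191 can serve as the flag):

```python
PTR_WIDTH = 14                     # assumed: 13 address bits + 1 flag bit
EMPTY_FLAG = 1 << (PTR_WIDTH - 1)  # highest bit set => the queue is empty
NUM_QUEUES = 256                   # depth of the head/tail pointer RAMs

tail_ptr_ram = [EMPTY_FLAG] * NUM_QUEUES  # initialization: every queue empty

def queue_is_empty(qid):
    """A queue is empty while the highest bit of its tail pointer is set."""
    return bool(tail_ptr_ram[qid] & EMPTY_FLAG)
```

Writing a real tail address on the first enqueue clears the flag, marking the queue non-empty without any separate status table.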
The pointer linked list RAM is used to store the pointer linked list, so that the queue pointer linked lists and the free pointer linked list share one RAM; as a result, no separate first-in-first-out queue (FIFO, First Input First Output) is needed to store free pointers, which saves hardware resources. The size of the pointer linked list RAM can be set according to actual needs; in an embodiment the size of the on-chip RAM is 4k, and the pointer linked list consists of pointers to next addresses, initialized as one chain of free pointers. As shown in FIG. 2, ptr RAM is the pointer linked list RAM; after initialization its addresses run from 0 to 8191 and the content stored at each address is the next address value, thereby forming a pointer linked list. The pointer linked list includes non-free pointers and free pointers; the number of non-free pointers represents the total number of addresses occupied by message descriptors in the on-chip RAM and the off-chip DDR: each time a message descriptor is enqueued, the number of non-free pointers is incremented by one, and correspondingly, each time a message descriptor is dequeued, the number of non-free pointers is decremented by one. The on-chip RAM has a one-to-one correspondence with a part of the pointer linked list RAM: the on-chip RAM holds the message descriptor information, and the pointer linked list RAM holds the corresponding address information. FIG. 3 is a schematic diagram of the data relationships between the respective RAMs according to an embodiment of the present invention.
In FIG. 3, head_ptr ram is the head pointer RAM, tail_ptr ram is the tail pointer RAM, ptr ram is the pointer linked list RAM, and desc ram is the on-chip RAM. The black entries of the pointer linked list RAM are the members of one queue; there are six queue members, marked 1 to 6 in order for ease of explanation (all virtual tags), and each data block stores the address of the next data block. Because tag 6 is the last data block of the queue, its entry in the pointer linked list RAM is invalid data. From the head pointer and the tail pointer it can be seen that the queue number of this queue is 11: the address of the data block with tag 1 is determined by the value stored at head pointer RAM address 11, and the address of the data block with tag 6 is determined by the value stored at tail pointer RAM address 11.
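For illustration only, the initialization of the pointer linked list RAM into one chain of free pointers (FIG. 2) can be sketched as follows (the value written into the last entry is arbitrary here, since an entry at the chain's tail is never followed):

```python
PTR_RAM_DEPTH = 8192  # addresses 0..8191, per the embodiment

# Each entry stores the next address value, so after initialization the
# whole RAM forms a single chain of free pointers from address 0 to 8191.
ptr_ram = [addr + 1 for addr in range(PTR_RAM_DEPTH)]
ptr_ram[PTR_RAM_DEPTH - 1] = 0  # tail entry is never followed; any value would do

free_head, free_tail = 0, PTR_RAM_DEPTH - 1
```

Because the free pointers live in the same RAM as the queue pointers, allocating a node is just reading `ptr_ram[free_head]`, with no separate FIFO of free addresses.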
Step 102: determining that the current operation request is a dequeue operation request for a message descriptor, acquiring the location information of the queue head pointer corresponding to the dequeue operation request in the pointer linked list, reading the corresponding message descriptor according to the location information, and updating the pointer linked list information;
Here, the location information includes on-chip RAM location information and off-chip DDR location information.
Acquiring the location information of the queue head pointer corresponding to the dequeue operation request in the pointer linked list and reading the corresponding message descriptor according to the location information includes:
extracting the queue number from the dequeue operation request, reading the corresponding queue head pointer in the head pointer RAM according to the queue number, and, upon determining that the position of the queue head pointer in the pointer linked list lies in the off-chip DDR, calculating the off-chip DDR address and reading the corresponding message descriptor from the off-chip DDR according to the obtained off-chip DDR address;
and, upon determining that the position of the queue head pointer in the pointer linked list lies in the on-chip RAM, reading the corresponding message descriptor from the on-chip RAM according to the queue number.
In an embodiment, updating the pointer linked list information includes:
extracting the queue number from the dequeue operation request, reading the corresponding queue head pointer and free tail pointer according to the queue number, updating the free tail pointer to point to the address currently indicated by the queue head pointer, and then pointing the queue head pointer to the next address in the pointer linked list.
Embodiment 2
FIG. 4 is a schematic flowchart of the queue management method according to Embodiment 2 of the present invention; as shown in FIG. 4, the queue management method of this embodiment includes:
Step 401: receiving an operation request and judging whether it is an enqueue operation request or a dequeue operation request for a message descriptor; if it is an enqueue operation request, performing step 402; if it is a dequeue operation request, performing step 404.
Step 402: storing the message descriptor into the off-chip DDR or the on-chip RAM according to the enqueue operation request and the caching status of the on-chip RAM.
This step includes: upon determining, according to the enqueue operation request, that the cache of the current on-chip RAM has reached its upper limit, calculating an off-chip DDR address and storing the message descriptor into the off-chip DDR according to the obtained off-chip DDR address;
and, upon determining that the cache of the current on-chip RAM has not reached its upper limit, extracting the queue number from the enqueue operation request and storing the message descriptor into the on-chip RAM according to the queue number.
In an embodiment, determining that the cache of the current on-chip RAM has reached its upper limit includes:
determining, according to the preset cache size of the on-chip RAM and the number of non-free pointers in the current pointer linked list recorded in real time by the cache counter, whether the cache of the current on-chip RAM has reached its upper limit: if the number of non-free pointers in the current pointer linked list is less than the preset cache size of the on-chip RAM, it is determined that the cache has not reached its upper limit; otherwise, it is determined that the cache has reached its upper limit.
Here, the on-chip RAM is used to cache message descriptor information, and its size can be set according to actual needs; in this embodiment, the size of the on-chip RAM is 4k, i.e. its addresses run from 0 to 4k-1, so the depth of the on-chip RAM is 4 × 1024 addresses.
步骤403:更新指针链表信息,并执行步骤406;Step 403: Update the pointer list information, and perform step 406;
本步骤包括:提取所述入队操作请求中的队列号,依据所述队列号读取尾指针RAM中对应的队列尾指针及指针链表RAM中的空闲头指针,更新所述队列尾指针指向所述空闲头指针当前所指示的地址,然后将所述空闲头指针指向所述指针链表中的下一个地址;This step includes: extracting the queue number from the enqueue operation request, reading the corresponding queue tail pointer in the tail pointer RAM and the free head pointer in the pointer linked list RAM according to the queue number, updating the queue tail pointer to point to the address currently indicated by the free head pointer, and then pointing the free head pointer to the next address in the pointer linked list;
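A minimal sketch of this enqueue-side pointer update, modeling the shared pointer RAM as a Python list; the class and field names are assumptions for illustration, and the empty-queue branch (setting the head pointer) is inferred from the head/tail pointer RAM description rather than stated in this step:

```python
class PointerList:
    """Queue linked lists and the free linked list share one pointer RAM."""

    def __init__(self, depth: int):
        # Initialize an empty pointer linked list: each entry holds the next address.
        self.ptr_ram = [(i + 1) % depth for i in range(depth)]
        self.free_head = 0            # free head pointer
        self.tail_ptr = {}            # per-queue tail pointer RAM (queue_no -> address)
        self.head_ptr = {}            # per-queue head pointer RAM (queue_no -> address)

    def enqueue(self, queue_no: int) -> int:
        """Link the address at the free head into queue `queue_no`.

        Returns the address at which the message descriptor is stored.
        """
        addr = self.free_head
        if queue_no in self.tail_ptr:              # non-empty queue: link behind the tail
            self.ptr_ram[self.tail_ptr[queue_no]] = addr
        else:                                      # empty queue: head points here as well
            self.head_ptr[queue_no] = addr
        self.tail_ptr[queue_no] = addr             # tail now indicates the new address
        self.free_head = self.ptr_ram[addr]        # free head advances to the next address
        return addr
```

Because both linked lists live in the same RAM, claiming an address for a queue and removing it from the free list is a single pointer rewrite.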
这里,所述尾指针RAM用于存储所有队列的队列尾指针,并指示每个队列的队列状态;其中,所述队列状态包括:空和非空;所述尾指针RAM的大小/深度可依据实际情况进行设定,在一实施例中,所述尾指针RAM的深度等于队列数,在本实施例中,所述尾指针RAM的深度为256个地址,初始化所述尾指针RAM后的最高比特为1或全为1,可知初始化的队列状态为空;Here, the tail pointer RAM is used to store the queue tail pointers of all queues and to indicate the queue status of each queue, where the queue status includes empty and non-empty. The size/depth of the tail pointer RAM can be set according to the actual situation; in an embodiment, the depth of the tail pointer RAM equals the number of queues, and in this embodiment the depth of the tail pointer RAM is 256 addresses. After the tail pointer RAM is initialized, the highest bit of each entry is 1 (or all bits are 1), from which it can be seen that the initialized queue status is empty;
相应的,还存在头指针RAM用于存储所有队列的队列头指针,并指示每个队列的队列状态;其中,所述队列状态包括:空和非空;所述头指针RAM的大小/深度可依据实际情况进行设定,在一实施例中,所述头指针RAM的深度等于队列数,在本实施例中,所述头指针RAM的深度为256个地址,初始化所述头指针RAM后的最高比特为1或全为1,可知初始化的队列状态为空;Correspondingly, there is also a head pointer RAM used to store the queue head pointers of all queues and to indicate the queue status of each queue, where the queue status includes empty and non-empty. The size/depth of the head pointer RAM can be set according to the actual situation; in an embodiment, the depth of the head pointer RAM equals the number of queues, and in this embodiment the depth of the head pointer RAM is 256 addresses. After the head pointer RAM is initialized, the highest bit of each entry is 1 (or all bits are 1), from which it can be seen that the initialized queue status is empty;
所述指针链表RAM用于存储所述指针链表,使队列指针链表及空闲指针链表共享一块RAM,由此可以不用单独的FIFO来存储空闲指针,节省了硬件资源;所述指针链表RAM大小可依据实际需要进行设定,在本实施例中,所述片内RAM的大小为4k,所述指针链表包括了指向下一个地址的指针,初始化为一张空指针链表,如图2所示,ptr RAM为指针链表RAM,对其进行初始化,ptr RAM的地址从0~8191,ptr RAM中存储的内容是下一个地址值,由此形成一张指针链表形式;所述指针链表包括非空闲指针及空闲指针;其中,所述非空闲指针的数目表征了片内RAM及片外DDR中报文描述符所占用的地址的总数目,每入队一次报文描述符,非空闲指针的数目加一,相应的,每出队一次报文描述符,非空闲指针的数目减一;所述片内RAM与所述指针链表RAM的一部分存在一一对应关系,所述片内RAM中为报文描述符信息,所述指针链表RAM中为对应的地址信息;如图3所示为本发明实施例各个RAM间数据关系示意图;The pointer linked list RAM is used to store the pointer linked list, so that the queue pointer linked lists and the free pointer linked list share one RAM; a separate FIFO for storing free pointers is therefore unnecessary, which saves hardware resources. The size of the pointer linked list RAM can be set according to actual needs; in this embodiment, the size of the on-chip RAM is 4k. The pointer linked list contains, at each entry, a pointer to the next address, and is initialized as one empty pointer linked list. As shown in FIG. 2, ptr RAM is the pointer linked list RAM; after initialization, the addresses of the ptr RAM run from 0 to 8191, and the content stored at each ptr RAM address is the next address value, which forms a pointer linked list. The pointer linked list includes non-free pointers and free pointers, where the number of non-free pointers represents the total number of addresses occupied by message descriptors in the on-chip RAM and the off-chip DDR: each time a message descriptor is enqueued, the number of non-free pointers is incremented by one, and correspondingly, each time a message descriptor is dequeued, the number of non-free pointers is decremented by one. The on-chip RAM has a one-to-one correspondence with a part of the pointer linked list RAM: the on-chip RAM holds the message descriptor information, and the pointer linked list RAM holds the corresponding address information. FIG. 3 is a schematic diagram of the data relationship among the RAMs according to an embodiment of the present invention;
在图3中,head_ptr ram为头指针RAM,tail_ptr ram为尾指针RAM,ptr ram为指针链表RAM,desc ram为片内RAM;其中,指针链表RAM中黑色的为一个队列中的队列成员,共有6个队列成员,为了便于说明,按顺序分别标记为1至6(均为虚拟标记),前一个数据块中存储的是后一个数据块的地址信息,因为标号6是该队列的最后一个数据,因此该指针链表RAM中的数据为无效数据;根据头指针及尾指针可知该队列的队列号为11,标号1对应的数据块的地址由头指针RAM地址11存储的值确定,标号6对应的数据块地址由尾指针RAM地址11中存储的值确定。In FIG. 3, head_ptr ram is the head pointer RAM, tail_ptr ram is the tail pointer RAM, ptr ram is the pointer linked list RAM, and desc ram is the on-chip RAM. The black entries in the pointer linked list RAM are the members of one queue; there are six queue members in total, labeled 1 to 6 in order for ease of explanation (all labels are virtual). Each preceding data block stores the address information of the following data block; because label 6 is the last data of the queue, its entry in the pointer linked list RAM is invalid data. From the head pointer and the tail pointer it can be seen that the queue number of this queue is 11: the address of the data block corresponding to label 1 is determined by the value stored at head pointer RAM address 11, and the address of the data block corresponding to label 6 is determined by the value stored at tail pointer RAM address 11.
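The FIG. 3 relationship can be exercised by walking one queue from its head pointer through the ptr RAM to its tail pointer; the names and the sample chain below are assumptions for this sketch, not values from the figure:

```python
def walk_queue(head_ptr_ram, tail_ptr_ram, ptr_ram, queue_no):
    """Collect the addresses of a queue's members, head to tail.

    head_ptr_ram[queue_no] gives the first member's address; each ptr_ram
    entry gives the next member's address; the ptr_ram entry at the tail
    address is invalid data and must not be followed.
    """
    addrs = []
    addr = head_ptr_ram[queue_no]
    while True:
        addrs.append(addr)
        if addr == tail_ptr_ram[queue_no]:  # last member reached
            break
        addr = ptr_ram[addr]                # follow the link to the next data block
    return addrs
```

As in FIG. 3, the value stored at head pointer RAM address `queue_no` starts the walk and the value at tail pointer RAM address `queue_no` terminates it, so the invalid entry behind the tail is never read.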
步骤404:获取所述出队操作请求对应的队列头指针在所述指针链表中的位置信息,依据所述位置信息读取对应的报文描述符;Step 404: Acquire location information of a queue head pointer corresponding to the dequeuing operation request in the pointer list, and read a corresponding message descriptor according to the location information.
这里,所述位置信息包括片内RAM位置信息及片外DDR位置信息;Here, the location information includes on-chip RAM location information and off-chip DDR location information;
本步骤包括:提取所述入队操作请求中的队列号,依据所述队列号读取头指针RAM中对应的队列头指针,确定所述队列头指针在所述指针链表中的位置处于片外DDR时,计算片外DDR地址,依据得到的片外DDR地址读取片外DDR中对应的报文描述符;This step includes: extracting the queue number from the enqueue operation request, reading the corresponding queue head pointer in the head pointer RAM according to the queue number, and, when it is determined that the position of the queue head pointer in the pointer linked list is in the off-chip DDR, calculating the off-chip DDR address and reading the corresponding message descriptor from the off-chip DDR according to the obtained off-chip DDR address;
确定所述队列头指针在所述指针链表中的位置处于片内RAM时,依据所述队列号读取片内RAM中对应的报文描述符。When it is determined that the position of the queue head pointer in the pointer list is in the on-chip RAM, the corresponding message descriptor in the on-chip RAM is read according to the queue number.
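One way to read the location decision of step 404 is to map pointer linked list addresses below the on-chip RAM depth to the on-chip RAM and the remainder to the off-chip DDR. This mapping is an assumption for the sketch; the patent only states that an off-chip DDR address is calculated:

```python
ON_CHIP_DEPTH = 4 * 1024   # on-chip RAM depth (4k addresses, per the embodiment)

def locate_descriptor(head_addr: int):
    """Map a queue head pointer position to a storage location and a local address."""
    if head_addr < ON_CHIP_DEPTH:
        return ("on-chip RAM", head_addr)              # read the descriptor RAM directly
    # Off-chip position: derive an illustrative DDR address from the pointer position.
    return ("off-chip DDR", head_addr - ON_CHIP_DEPTH)
```

With an 8192-entry pointer linked list RAM and a 4k on-chip RAM, this split covers every pointer position exactly once.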
步骤405:更新指针链表信息;Step 405: Update the pointer list information.
本步骤包括:提取所述入队操作请求中的队列号,依据所述队列号读取对应的队列头指针及空闲尾指针,更新所述空闲尾指针指向所述队列头指针当前所指示的地址,然后将所述队列头指针指向所述指针链表中的下一个地址。This step includes: extracting the queue number from the enqueue operation request, reading the corresponding queue head pointer and free tail pointer according to the queue number, updating the free tail pointer to point to the address currently indicated by the queue head pointer, and then pointing the queue head pointer to the next address in the pointer linked list.
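A sketch of this dequeue-side update, mirroring the enqueue sketch; the names are illustrative, the freed address is returned so the caller can read the descriptor, and the empty-queue handling is an assumption inferred from the head/tail pointer RAM description:

```python
def dequeue(ptr_ram, head_ptr, tail_ptr, free_tail, queue_no):
    """Unlink a queue's head address and append it to the free linked list.

    ptr_ram is the shared pointer linked list RAM; head_ptr/tail_ptr map
    queue numbers to head/tail addresses; free_tail is the free list's
    current tail address. Returns (freed_addr, new_free_tail).
    """
    addr = head_ptr[queue_no]
    ptr_ram[free_tail] = addr            # free tail now points at the freed address
    if addr != tail_ptr[queue_no]:       # queue still has members after this one
        head_ptr[queue_no] = ptr_ram[addr]
    else:                                # queue becomes empty
        del head_ptr[queue_no], tail_ptr[queue_no]
    return addr, addr                    # the freed address becomes the new free tail
```

As with enqueue, returning an address to the free list is a single pointer rewrite in the shared RAM, which is what makes the separate free-pointer FIFO unnecessary.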
步骤406:结束本次处理流程。Step 406: End the current processing flow.
实施例三 Embodiment 3
图5为本发明实施例队列管理装置组成结构示意图;如图5所示,本发明实施例队列管理装置组成包括:第一处理模块51及第二处理模块52;其中,FIG. 5 is a schematic structural diagram of a queue management apparatus according to an embodiment of the present invention. As shown in FIG. 5, the queue management apparatus of the embodiment of the present invention includes a first processing module 51 and a second processing module 52, where:
所述第一处理模块51,配置为确定当前的操作请求为报文描述符的入队操作请求,依据所述入队操作请求及片内RAM的缓存情况,将所述报文描述符存储至片外DDR或片内RAM,并更新指针链表信息;The first processing module 51 is configured to determine that the current operation request is an enqueue operation request for a message descriptor, store the message descriptor into the off-chip DDR or the on-chip RAM according to the enqueue operation request and the caching condition of the on-chip RAM, and update the pointer linked list information;
所述第二处理模块52,配置为确定当前的操作请求为报文描述符的出队操作请求,获取所述出队操作请求对应的队列头指针在所述指针链表中的位置信息,依据所述位置信息读取对应的报文描述符,并更新指针链表信息;其中,所述位置信息包括片内RAM位置信息及片外DDR位置信息。The second processing module 52 is configured to determine that the current operation request is a dequeue operation request for a message descriptor, acquire the position information, in the pointer linked list, of the queue head pointer corresponding to the dequeue operation request, read the corresponding message descriptor according to the position information, and update the pointer linked list information, where the position information includes on-chip RAM position information and off-chip DDR position information.
在一实施例中,所述第一处理模块51依据所述入队操作请求及片内RAM的缓存情况,将所述报文描述符存储至片外DDR或片内RAM,包括:In an embodiment, the first processing module 51 stores the message descriptor to an off-chip DDR or an on-chip RAM according to the enqueue operation request and the buffering condition of the on-chip RAM, including:
所述第一处理模块51依据所述入队操作请求确定当前片内RAM的缓存已达上限时,计算片外DDR地址,并依据得到的片外DDR地址将所述报文描述符存储至片外DDR;When the first processing module 51 determines, according to the enqueue operation request, that the cache of the current on-chip RAM has reached the upper limit, it calculates an off-chip DDR address and stores the message descriptor into the off-chip DDR according to the obtained off-chip DDR address;
确定当前片内RAM的缓存未达到上限时,提取所述入队操作请求中的队列号,并依据所述队列号将所述报文描述符存储至片内RAM;Determining, when the current on-chip RAM cache does not reach the upper limit, extracting the queue number in the enqueue operation request, and storing the message descriptor to the on-chip RAM according to the queue number;
其中,所述第一处理模块51确定当前片内RAM的缓存已达上限包括:Here, the first processing module 51 determining that the cache of the current on-chip RAM has reached the upper limit includes:
所述第一处理模块51依据预设的片内RAM的缓存大小及缓存计数器实时记录的当前指针链表中非空闲指针的数量确定当前片内RAM的缓存是否已达上限,如果当前指针链表中非空闲指针的数量小于预设的片内RAM的缓存大小,确定当前片内RAM的缓存未达到上限,否则,确定当前片内RAM的缓存已达上限;The first processing module 51 determines whether the cache of the current on-chip RAM has reached the upper limit according to the preset cache size of the on-chip RAM and the number of non-free pointers in the current pointer linked list recorded in real time by a cache counter: if the number of non-free pointers in the current pointer linked list is less than the preset cache size of the on-chip RAM, it determines that the cache of the current on-chip RAM has not reached the upper limit; otherwise, it determines that the cache of the current on-chip RAM has reached the upper limit;
这里,所述片内RAM用于缓存报文描述符信息,其大小可依据实际需要进行设定,在一实施例中,所述片内RAM的大小为4k,即片内RAM的地址为0~4k-1,即所述片内RAM的深度为4×1024个地址。Here, the on-chip RAM is used to cache message descriptor information, and its size can be set according to actual needs. In an embodiment, the size of the on-chip RAM is 4k, that is, the addresses of the on-chip RAM range from 0 to 4k-1; in other words, the depth of the on-chip RAM is 4×1024 addresses.
在一实施例中,确定当前的操作请求为报文描述符的入队操作请求时,所述第一处理模块51更新指针链表信息包括:In an embodiment, when the current operation request is determined to be an enqueue operation request of the message descriptor, the first processing module 51 updates the pointer list information, including:
所述第一处理模块51提取所述入队操作请求中的队列号,依据所述队列号读取对应的队列尾指针及空闲头指针,更新所述队列尾指针指向所述空闲头指针当前所指示的地址,然后将所述空闲头指针指向所述指针链表中的下一个地址;The first processing module 51 extracts the queue number from the enqueue operation request, reads the corresponding queue tail pointer and free head pointer according to the queue number, updates the queue tail pointer to point to the address currently indicated by the free head pointer, and then points the free head pointer to the next address in the pointer linked list;
这里,尾指针RAM用于存储所有队列的队列尾指针,并指示每个队列的队列状态;其中,所述队列状态包括:空和非空;所述尾指针RAM的大小/深度可依据实际情况进行设定,在一实施例中,所述尾指针RAM的深度等于队列数,在本实施例中,所述尾指针RAM的深度为256个地址,初始化所述尾指针RAM后的最高比特为1或全为1,可知初始化的队列状态为空;Here, the tail pointer RAM is used to store the queue tail pointers of all queues and to indicate the queue status of each queue, where the queue status includes empty and non-empty. The size/depth of the tail pointer RAM can be set according to the actual situation; in an embodiment, the depth of the tail pointer RAM equals the number of queues, and in this embodiment the depth of the tail pointer RAM is 256 addresses. After the tail pointer RAM is initialized, the highest bit of each entry is 1 (or all bits are 1), from which it can be seen that the initialized queue status is empty;
相应的,还存在头指针RAM用于存储所有队列的队列头指针,并指示每个队列的队列状态;其中,所述队列状态包括:空和非空;所述头指针RAM的大小/深度可依据实际情况进行设定,在一实施例中,所述头指针RAM的深度等于队列数,在本实施例中,所述头指针RAM的深度为256个地址,初始化所述头指针RAM后的最高比特为1或全为1,可知初始化的队列状态为空;Correspondingly, there is also a head pointer RAM used to store the queue head pointers of all queues and to indicate the queue status of each queue, where the queue status includes empty and non-empty. The size/depth of the head pointer RAM can be set according to the actual situation; in an embodiment, the depth of the head pointer RAM equals the number of queues, and in this embodiment the depth of the head pointer RAM is 256 addresses. After the head pointer RAM is initialized, the highest bit of each entry is 1 (or all bits are 1), from which it can be seen that the initialized queue status is empty;
指针链表RAM用于存储所述指针链表,使队列指针链表及空闲指针链表共享一块RAM,由此可以不用单独的先入先出队列(FIFO,First Input First Output)来存储空闲指针,节省了硬件资源;所述指针链表RAM大小可依据实际需要进行设定,在一实施例中,所述片内RAM的大小为4k,所述指针链表包括了指向下一个地址的指针,初始化为一张空指针链表,如图2所示,ptr RAM为指针链表RAM,对其进行初始化,ptr RAM的地址从0~8191,ptr RAM中存储的内容是下一个地址值,由此形成一张指针链表形式;所述指针链表包括非空闲指针及空闲指针;其中,所述非空闲指针的数目表征了片内RAM及片外DDR中报文描述符所占用的地址的总数目,每入队一次报文描述符,非空闲指针的数目加一,相应的,每出队一次报文描述符,非空闲指针的数目减一;所述片内RAM与所述指针链表RAM的一部分存在一一对应关系,所述片内RAM中为报文描述符信息,所述指针链表RAM中为对应的地址信息;如图3所示为本发明实施例各个RAM间数据关系示意图;The pointer linked list RAM is used to store the pointer linked list, so that the queue pointer linked lists and the free pointer linked list share one RAM; a separate first-in first-out (FIFO) queue for storing free pointers is therefore unnecessary, which saves hardware resources. The size of the pointer linked list RAM can be set according to actual needs; in an embodiment, the size of the on-chip RAM is 4k. The pointer linked list contains, at each entry, a pointer to the next address, and is initialized as one empty pointer linked list. As shown in FIG. 2, ptr RAM is the pointer linked list RAM; after initialization, the addresses of the ptr RAM run from 0 to 8191, and the content stored at each ptr RAM address is the next address value, which forms a pointer linked list. The pointer linked list includes non-free pointers and free pointers, where the number of non-free pointers represents the total number of addresses occupied by message descriptors in the on-chip RAM and the off-chip DDR: each time a message descriptor is enqueued, the number of non-free pointers is incremented by one, and correspondingly, each time a message descriptor is dequeued, the number of non-free pointers is decremented by one. The on-chip RAM has a one-to-one correspondence with a part of the pointer linked list RAM: the on-chip RAM holds the message descriptor information, and the pointer linked list RAM holds the corresponding address information. FIG. 3 is a schematic diagram of the data relationship among the RAMs according to an embodiment of the present invention;
在图3中,head_ptr ram为头指针RAM,tail_ptr ram为尾指针RAM,ptr ram为指针链表RAM,desc ram为片内RAM;其中,指针链表RAM中黑色的为一个队列中的队列成员,共有6个队列成员,为了便于说明,按顺序分别标记为1至6(均为虚拟标记),前一个数据块中存储的是后一个数据块的地址信息,因为标号6是该队列的最后一个数据,因此该指针链表RAM中的数据为无效数据;根据头指针及尾指针可知该队列的队列号为11,标号1对应的数据块的地址由头指针RAM地址11存储的值确定,标号6对应的数据块地址由尾指针RAM地址11中存储的值确定。In FIG. 3, head_ptr ram is the head pointer RAM, tail_ptr ram is the tail pointer RAM, ptr ram is the pointer linked list RAM, and desc ram is the on-chip RAM. The black entries in the pointer linked list RAM are the members of one queue; there are six queue members in total, labeled 1 to 6 in order for ease of explanation (all labels are virtual). Each preceding data block stores the address information of the following data block; because label 6 is the last data of the queue, its entry in the pointer linked list RAM is invalid data. From the head pointer and the tail pointer it can be seen that the queue number of this queue is 11: the address of the data block corresponding to label 1 is determined by the value stored at head pointer RAM address 11, and the address of the data block corresponding to label 6 is determined by the value stored at tail pointer RAM address 11.
在一实施例中,所述第二处理模块52获取所述出队操作请求对应的队列头指针在所述指针链表中的位置信息,依据所述位置信息读取对应的报文描述符,包括:In an embodiment, the second processing module 52 acquiring the position information, in the pointer linked list, of the queue head pointer corresponding to the dequeue operation request and reading the corresponding message descriptor according to the position information includes:
所述第二处理模块52提取所述入队操作请求中的队列号,依据所述队列号读取对应的队列头指针,确定所述队列头指针在所述指针链表中的位置处于片外DDR时,计算片外DDR地址,依据得到的片外DDR地址读取片外DDR中对应的报文描述符;The second processing module 52 extracts the queue number from the enqueue operation request, reads the corresponding queue head pointer according to the queue number, and, when it determines that the position of the queue head pointer in the pointer linked list is in the off-chip DDR, calculates an off-chip DDR address and reads the corresponding message descriptor from the off-chip DDR according to the obtained off-chip DDR address;
确定所述队列头指针在所述指针链表中的位置处于片内RAM时,依据所述队列号读取片内RAM中对应的报文描述符。When it is determined that the position of the queue head pointer in the pointer list is in the on-chip RAM, the corresponding message descriptor in the on-chip RAM is read according to the queue number.
在一实施例中,确定当前的操作请求为报文描述符的出队操作请求时,所述第二处理模块52更新指针链表信息包括:In an embodiment, when it is determined that the current operation request is a dequeue operation request for a message descriptor, the second processing module 52 updating the pointer linked list information includes:
所述第二处理模块52提取所述入队操作请求中的队列号,依据所述队列号读取对应的队列头指针及空闲尾指针,更新所述空闲尾指针指向所述队列头指针当前所指示的地址,然后将所述队列头指针指向所述指针链表中的下一个地址。The second processing module 52 extracts the queue number from the enqueue operation request, reads the corresponding queue head pointer and free tail pointer according to the queue number, updates the free tail pointer to point to the address currently indicated by the queue head pointer, and then points the queue head pointer to the next address in the pointer linked list.
本发明实施例中提出的第一处理模块51及第二处理模块52都可以通过处理器来实现,当然也可通过具体的逻辑电路实现;其中所述处理器可以是移动终端或服务器上的处理器,在实际应用中,处理器可以为中央处理器(CPU)、微处理器(MPU)、数字信号处理器(DSP)或现场可编程门阵列(FPGA)等。Both the first processing module 51 and the second processing module 52 proposed in the embodiments of the present invention may be implemented by a processor, and may of course also be implemented by a specific logic circuit, where the processor may be a processor on a mobile terminal or a server. In practical applications, the processor may be a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), a field programmable gate array (FPGA), or the like.
本发明实施例中,如果以软件功能模块的形式实现上述队列管理方法,并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。基于这样的理解,本发明实施例的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机、服务器、或者网络设备等)执行本发明各个实施例所述方法的全部或部分。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read Only Memory,ROM)、磁碟或者光盘等各种可以存储程序代码的介质。这样,本发明实施例不限制于任何特定的硬件和软件结合。In the embodiments of the present invention, if the above queue management method is implemented in the form of a software function module and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the embodiments of the present invention, in essence, or the part contributing to the prior art, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the methods described in the embodiments of the present invention. The foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disc. Thus, the embodiments of the present invention are not limited to any specific combination of hardware and software.
相应地,本发明实施例还提供一种计算机存储介质,该计算机存储介质中存储有计算机程序,该计算机程序用于执行本发明实施例的上述队列管理方法。Correspondingly, the embodiment of the present invention further provides a computer storage medium, where the computer storage medium stores a computer program, and the computer program is used to execute the foregoing queue management method of the embodiment of the present invention.
以上所述仅为本发明的较佳实施例而已,并非用于限定本发明的保护范围。 The above is only the preferred embodiment of the present invention and is not intended to limit the scope of the present invention.

Claims (11)

  1. 一种队列管理方法,所述方法包括:A queue management method, the method comprising:
    确定当前的操作请求为报文描述符的入队操作请求,依据所述入队操作请求及片内随机存取存储器RAM的缓存情况,将所述报文描述符存储至片外双倍速率同步动态随机存储器DDR或片内RAM,并更新指针链表信息;Determining that the current operation request is an enqueue operation request of the message descriptor, and storing the message descriptor to the off-chip double rate synchronization according to the enqueue operation request and the buffering condition of the on-chip random access memory RAM Dynamic random access memory DDR or on-chip RAM, and update pointer list information;
    确定当前的操作请求为报文描述符的出队操作请求,获取所述出队操作请求对应的队列头指针在所述指针链表中的位置信息,依据所述位置信息读取对应的报文描述符,并更新指针链表信息;其中,所述位置信息包括片内RAM位置信息及片外DDR位置信息。Determining that the current operation request is a dequeue operation request of the message descriptor, acquiring location information, in the pointer list, of the queue head pointer corresponding to the dequeue operation request, reading the corresponding message descriptor according to the location information, and updating the pointer list information; wherein the location information includes on-chip RAM location information and off-chip DDR location information.
  2. 根据权利要求1所述方法,其中,所述依据所述入队操作请求及片内RAM的缓存情况,将所述报文描述符存储至片外DDR或片内RAM包括:The method of claim 1, wherein the storing the message descriptor to an off-chip DDR or an on-chip RAM according to the enqueue operation request and the buffering condition of the on-chip RAM comprises:
    依据所述入队操作请求确定当前片内RAM的缓存已达上限时,计算片外DDR地址,并依据得到的片外DDR地址将所述报文描述符存储至片外DDR;And determining, according to the enqueue operation request, that the current on-chip RAM has reached the upper limit, calculating an off-chip DDR address, and storing the message descriptor to an off-chip DDR according to the obtained off-chip DDR address;
    确定当前片内RAM的缓存未达到上限时,提取所述入队操作请求中的队列号,并依据所述队列号将所述报文描述符存储至片内RAM。When it is determined that the current on-chip RAM has not reached the upper limit, the queue number in the enqueue operation request is extracted, and the message descriptor is stored in the on-chip RAM according to the queue number.
  3. 根据权利要求1或2所述方法,其中,确定当前的操作请求为报文描述符的入队操作请求时,所述更新指针链表信息包括:The method according to claim 1 or 2, wherein when the current operation request is determined to be a queue operation request of the message descriptor, the update pointer list information includes:
    提取所述入队操作请求中的队列号,依据所述队列号读取对应的队列尾指针及空闲头指针,更新所述队列尾指针指向所述空闲头指针当前所指示的地址,然后将所述空闲头指针指向所述指针链表中的下一个地址。Extracting the queue number in the enqueue operation request, reading the corresponding queue tail pointer and idle head pointer according to the queue number, updating the queue tail pointer to point to the address currently indicated by the idle head pointer, and then pointing the idle head pointer to the next address in the pointer list.
  4. 根据权利要求1或2所述方法,其中,确定当前的操作请求为报文描述符的出队操作请求时,所述更新指针链表信息包括: The method according to claim 1 or 2, wherein when the current operation request is determined to be a dequeuing operation request of the message descriptor, the updated pointer list information includes:
    提取所述入队操作请求中的队列号,依据所述队列号读取对应的队列头指针及空闲尾指针,更新所述空闲尾指针指向所述队列头指针当前所指示的地址,然后将所述队列头指针指向所述指针链表中的下一个地址。Extracting a queue number in the enqueue operation request, reading a corresponding queue head pointer and an idle tail pointer according to the queue number, updating the idle tail pointer to an address indicated by the queue head pointer, and then The queue head pointer points to the next address in the pointer list.
  5. 根据权利要求1或2所述方法,其中,所述获取所述出队操作请求对应的队列头指针在所述指针链表中的位置信息,依据所述位置信息读取对应的报文描述符包括:The method according to claim 1 or 2, wherein the acquiring the location information, in the pointer list, of the queue head pointer corresponding to the dequeue operation request and reading the corresponding message descriptor according to the location information includes:
    提取所述入队操作请求中的队列号,依据所述队列号读取对应的队列头指针,确定所述队列头指针在所述指针链表中的位置处于片外DDR时,计算片外DDR地址,依据得到的片外DDR地址读取片外DDR中对应的报文描述符;Extracting the queue number in the enqueue operation request, reading the corresponding queue head pointer according to the queue number, calculating an off-chip DDR address when determining that the position of the queue head pointer in the pointer list is in the off-chip DDR, and reading the corresponding message descriptor in the off-chip DDR according to the obtained off-chip DDR address;
    确定所述队列头指针在所述指针链表中的位置处于片内RAM时,依据所述队列号读取片内RAM中对应的报文描述符。When it is determined that the position of the queue head pointer in the pointer list is in the on-chip RAM, the corresponding message descriptor in the on-chip RAM is read according to the queue number.
  6. 一种队列管理装置,所述装置包括:第一处理模块及第二处理模块;其中,A queue management device, the device comprising: a first processing module and a second processing module; wherein
    所述第一处理模块,配置为确定当前的操作请求为报文描述符的入队操作请求,依据所述入队操作请求及片内随机存取存储器RAM的缓存情况,将所述报文描述符存储至片外双倍速率同步动态随机存储器DDR或片内RAM,并更新指针链表信息;The first processing module is configured to determine that the current operation request is an enqueue operation request of the message descriptor, store the message descriptor into the off-chip double data rate synchronous dynamic random access memory (DDR) or the on-chip RAM according to the enqueue operation request and the caching condition of the on-chip random access memory (RAM), and update the pointer list information;
    所述第二处理模块,配置为确定当前的操作请求为报文描述符的出队操作请求,获取所述出队操作请求对应的队列头指针在所述指针链表中的位置信息,依据所述位置信息读取对应的报文描述符,并更新指针链表信息;其中,所述位置信息包括片内RAM位置信息及片外DDR位置信息。The second processing module is configured to determine that the current operation request is a dequeue operation request of the message descriptor, and obtain location information of the queue head pointer corresponding to the dequeue operation request in the pointer list, according to the The location information reads the corresponding message descriptor and updates the pointer list information; wherein the location information includes on-chip RAM location information and off-chip DDR location information.
  7. 根据权利要求6所述装置,其中,所述第一处理模块,配置为依据所述入队操作请求确定当前片内RAM的缓存已达上限时,计算片外DDR地址,并依据得到的片外DDR地址将所述报文描述符存储至片外DDR;The apparatus according to claim 6, wherein the first processing module is configured to: when determining, according to the enqueue operation request, that the cache of the current on-chip RAM has reached the upper limit, calculate an off-chip DDR address, and store the message descriptor into the off-chip DDR according to the obtained off-chip DDR address;
    确定当前片内RAM的缓存未达到上限时,提取所述入队操作请求中的队列号,并依据所述队列号将所述报文描述符存储至片内RAM。When it is determined that the current on-chip RAM has not reached the upper limit, the queue number in the enqueue operation request is extracted, and the message descriptor is stored in the on-chip RAM according to the queue number.
  8. 根据权利要求6或7所述装置,其中,所述第一处理模块,配置为提取所述入队操作请求中的队列号,依据所述队列号读取对应的队列尾指针及空闲头指针,更新所述队列尾指针指向所述空闲头指针当前所指示的地址,然后将所述空闲头指针指向所述指针链表中的下一个地址。The apparatus according to claim 6 or 7, wherein the first processing module is configured to extract the queue number in the enqueue operation request, read the corresponding queue tail pointer and idle head pointer according to the queue number, update the queue tail pointer to point to the address currently indicated by the idle head pointer, and then point the idle head pointer to the next address in the pointer list.
  9. 根据权利要求6或7所述装置,其中,所述第二处理模块,配置为提取所述入队操作请求中的队列号,依据所述队列号读取对应的队列头指针及空闲尾指针,更新所述空闲尾指针指向所述队列头指针当前所指示的地址,然后将所述队列头指针指向所述指针链表中的下一个地址。The apparatus according to claim 6 or 7, wherein the second processing module is configured to extract the queue number in the enqueue operation request, read the corresponding queue head pointer and idle tail pointer according to the queue number, update the idle tail pointer to point to the address currently indicated by the queue head pointer, and then point the queue head pointer to the next address in the pointer list.
  10. 根据权利要求6或7所述装置,其中,所述第二处理模块,配置为提取所述入队操作请求中的队列号,依据所述队列号读取对应的队列头指针,确定所述队列头指针在所述指针链表中的位置处于片外DDR时,计算片外DDR地址,依据得到的片外DDR地址读取片外DDR中对应的报文描述符;The apparatus according to claim 6 or 7, wherein the second processing module is configured to extract the queue number in the enqueue operation request, read the corresponding queue head pointer according to the queue number, calculate an off-chip DDR address when determining that the position of the queue head pointer in the pointer list is in the off-chip DDR, and read the corresponding message descriptor in the off-chip DDR according to the obtained off-chip DDR address;
    确定所述队列头指针在所述指针链表中的位置处于片内RAM时,依据所述队列号读取片内RAM中对应的报文描述符。When it is determined that the position of the queue head pointer in the pointer list is in the on-chip RAM, the corresponding message descriptor in the on-chip RAM is read according to the queue number.
  11. 一种计算机存储介质,所述计算机存储介质中存储有计算机可执行指令,该计算机可执行指令用于执行权利要求1至5任一项所述的队列管理方法。 A computer storage medium having stored therein computer executable instructions for performing the queue management method of any one of claims 1 to 5.
PCT/CN2015/092929 2015-05-13 2015-10-27 Queue management method and device, and storage medium WO2016179968A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510241030.6A CN106302238A (en) 2015-05-13 2015-05-13 A kind of queue management method and device
CN201510241030.6 2015-05-13

Publications (1)

Publication Number Publication Date
WO2016179968A1 true WO2016179968A1 (en) 2016-11-17

Family

ID=57247731

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/092929 WO2016179968A1 (en) 2015-05-13 2015-10-27 Queue management method and device, and storage medium

Country Status (2)

Country Link
CN (1) CN106302238A (en)
WO (1) WO2016179968A1 (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111124355A (en) * 2019-12-12 2020-05-08 东软集团股份有限公司 Information processing method and device, readable storage medium and electronic equipment
CN111143065A (en) * 2019-12-25 2020-05-12 杭州安恒信息技术股份有限公司 Data processing method, device, equipment and medium
CN111488496A (en) * 2020-04-30 2020-08-04 湖北师范大学 Sliding window based Tango tree construction method and system
CN112306945A (en) * 2019-07-30 2021-02-02 安徽寒武纪信息科技有限公司 Data synchronization method and device and related product
CN112615796A (en) * 2020-12-10 2021-04-06 北京时代民芯科技有限公司 Queue management system considering storage utilization rate and management complexity
CN113225307A (en) * 2021-03-18 2021-08-06 西安电子科技大学 Optimization method, system and terminal for pre-reading descriptors in offload engine network card
CN114189569A (en) * 2020-08-31 2022-03-15 华为技术有限公司 Data transmission method, device and system
CN114610661A (en) * 2022-03-10 2022-06-10 北京百度网讯科技有限公司 Data processing device and method and electronic equipment

Families Citing this family (11)

Publication number Priority date Publication date Assignee Title
CN107025184B (en) * 2016-02-01 2021-03-16 深圳市中兴微电子技术有限公司 Data management method and device
CN106982175B (en) * 2017-04-05 2019-08-23 数据通信科学技术研究所 A kind of communication control unit and communication control method based on RAM
CN108632171B (en) * 2017-09-07 2020-03-31 视联动力信息技术股份有限公司 Data processing method and device based on video network
CN109474543B (en) * 2017-09-07 2023-11-14 深圳市中兴微电子技术有限公司 Queue resource management method, device and storage medium
CN109697022B (en) * 2017-10-23 2022-03-04 深圳市中兴微电子技术有限公司 Method and device for processing message descriptor PD and computer readable storage medium
CN109656515A (en) * 2018-11-16 2019-04-19 深圳证券交易所 Operating method, device and the storage medium of queue message
CN111459417B (en) * 2020-04-26 2023-08-18 中国人民解放军国防科技大学 Non-lock transmission method and system for NVMeoF storage network
CN112822126B (en) * 2020-12-30 2022-08-26 苏州盛科通信股份有限公司 Message storage method, message in-out queue method and storage scheduling device
CN113157465B (en) * 2021-04-25 2022-11-25 无锡江南计算技术研究所 Message sending method and device based on pointer linked list
CN114785714B (en) * 2022-03-01 2023-08-22 阿里巴巴(中国)有限公司 Message transmission delay detection method, storage medium and equipment
CN114817091B (en) * 2022-06-28 2022-09-27 井芯微电子技术(天津)有限公司 FWFT FIFO system based on linked list, implementation method and equipment

Citations (3)

Publication number Priority date Publication date Assignee Title
US20090034549A1 (en) * 2007-08-01 2009-02-05 Texas Instruments Incorporated Managing Free Packet Descriptors in Packet-Based Communications
US20090034548A1 (en) * 2007-08-01 2009-02-05 Texas Instruments Incorporated Hardware Queue Management with Distributed Linking Information
CN102130833A (en) * 2011-03-11 2011-07-20 中兴通讯股份有限公司 Memory management method and system for traffic management chip linked lists in a high-speed router

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
US7822051B1 (en) * 2007-10-24 2010-10-26 Ethernity Networks Ltd. Method and system for transmitting packets
CN101499956B (en) * 2008-01-31 2012-10-10 中兴通讯股份有限公司 Hierarchical buffer management system and method
US8266344B1 (en) * 2009-09-24 2012-09-11 Juniper Networks, Inc. Recycling buffer pointers using a prefetch buffer
CN102957629B (en) * 2011-08-30 2015-07-08 华为技术有限公司 Method and device for queue management
CN103179050B (en) * 2011-12-20 2017-10-13 中兴通讯股份有限公司 Data packet enqueue and dequeue management method and data packet processing device

Cited By (15)

Publication number Priority date Publication date Assignee Title
CN112306945A (en) * 2019-07-30 2021-02-02 安徽寒武纪信息科技有限公司 Data synchronization method and device and related product
CN112306945B (en) * 2019-07-30 2023-05-12 安徽寒武纪信息科技有限公司 Data synchronization method and device and related products
CN111124355B (en) * 2019-12-12 2023-04-07 东软集团股份有限公司 Information processing method and device, readable storage medium and electronic equipment
CN111124355A (en) * 2019-12-12 2020-05-08 东软集团股份有限公司 Information processing method and device, readable storage medium and electronic equipment
CN111143065A (en) * 2019-12-25 2020-05-12 杭州安恒信息技术股份有限公司 Data processing method, device, equipment and medium
CN111143065B (en) * 2019-12-25 2023-08-22 杭州安恒信息技术股份有限公司 Data processing method, device, equipment and medium
CN111488496A (en) * 2020-04-30 2020-08-04 湖北师范大学 Sliding window based Tango tree construction method and system
CN111488496B (en) * 2020-04-30 2023-07-21 湖北师范大学 Sliding window-based Tango tree construction method and system
CN114189569A (en) * 2020-08-31 2022-03-15 华为技术有限公司 Data transmission method, device and system
CN114189569B (en) * 2020-08-31 2024-03-26 华为技术有限公司 Data transmission method, device and system
CN112615796A (en) * 2020-12-10 2021-04-06 北京时代民芯科技有限公司 Queue management system considering storage utilization rate and management complexity
CN112615796B (en) * 2020-12-10 2023-03-10 北京时代民芯科技有限公司 Queue management system considering storage utilization rate and management complexity
CN113225307A (en) * 2021-03-18 2021-08-06 西安电子科技大学 Optimization method, system and terminal for descriptor pre-reading in an offload engine network card
CN114610661A (en) * 2022-03-10 2022-06-10 北京百度网讯科技有限公司 Data processing device and method and electronic equipment
CN114610661B (en) * 2022-03-10 2024-06-11 北京百度网讯科技有限公司 Data processing device, method and electronic equipment

Also Published As

Publication number Publication date
CN106302238A (en) 2017-01-04

Similar Documents

Publication Publication Date Title
WO2016179968A1 (en) Queue management method and device, and storage medium
US20240171507A1 (en) System and method for facilitating efficient utilization of an output buffer in a network interface controller (nic)
WO2018076793A1 (en) Nvme device, and methods for reading and writing nvme data
WO2018107681A1 (en) Processing method, device, and computer storage medium for queue operation
CN107124286B (en) System and method for high-speed processing and interaction of mass data
US20140233588A1 (en) Large receive offload functionality for a system on chip
US11425057B2 (en) Packet processing
US20190044879A1 (en) Technologies for reordering network packets on egress
US20220066699A1 (en) Data read/write method and apparatus, and exchange chip and storage medium
US9584332B2 (en) Message processing method and device
US10205673B2 (en) Data caching method and device, and storage medium
WO2015184706A1 (en) Statistical counting device and implementation method therefor, and system having statistical counting device
CN114945009B (en) Method, device and system for communication between devices connected by PCIe bus
WO2016202158A1 (en) Message transmission method and device, and computer-readable storage medium
WO2016202113A1 (en) Queue management method, apparatus, and storage medium
WO2017133439A1 (en) Data management method and device, and computer storage medium
CN112698959A (en) Multi-core communication method and device
WO2014146468A1 (en) Method and apparatus for scheduling and buffering data packet, and computer storage medium
WO2017012096A1 (en) Computer device and data read-write method for computer device
CN111290979B (en) Data transmission method, device and system
WO2019109902A1 (en) Queue scheduling method and apparatus, communication device, and storage medium
CN100512218C (en) Transmitting method for data message
US20160320967A1 (en) Receive Side Packet Aggregation
CN116633879A (en) Data packet receiving method, device, equipment and storage medium
CN107846328B (en) Network rate real-time statistical method based on concurrent lock-free ring queue

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application
Ref document number: 15891667
Country of ref document: EP
Kind code of ref document: A1
NENP Non-entry into the national phase
Ref country code: DE
122 EP: PCT application non-entry in European phase
Ref document number: 15891667
Country of ref document: EP
Kind code of ref document: A1