CN113126911B - DDR3 SDRAM-based queue management method, medium and equipment

Info

Publication number
CN113126911B
Authority
CN
China
Prior art keywords: packet, length, current, write pointer, enqueue
Prior art date
Legal status
Active
Application number
CN202110270207.0A
Other languages
Chinese (zh)
Other versions
CN113126911A (en)
Inventor
邱智亮
张晓雯
潘伟涛
孙义雯
耿政琦
Current Assignee
Xidian University
Original Assignee
Xidian University
Priority date
Filing date
Publication date
Application filed by Xidian University
Priority to CN202110270207.0A
Publication of CN113126911A
Application granted
Publication of CN113126911B
Status: Active
Anticipated expiration

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0608Saving storage space on storage systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0638Organizing or formatting or addressing of data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671In-line storage system

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention belongs to the technical field of satellite communication and discloses a DDR3 SDRAM-based queue management method, medium and equipment. The cache space of the off-chip DDR3 SDRAM used for storing data packets is statically divided into N fixed cache regions of equal size according to the number of queues; a write pointer Wi and a read pointer Ri are set for each fixed cache region, corresponding to the enqueue and dequeue operations of data packets, respectively; when a data packet is enqueued, the associated enqueue processing operation is performed; when a data packet is dequeued, the associated dequeue processing operation is performed. The invention realizes a queue management scheme for an on-board switch handling high-speed packet services using off-chip DDR3 SDRAM, reduces the FPGA on-chip resources occupied by storing cache information, and improves the utilization of the DDR3 SDRAM cache space.

Description

DDR3 SDRAM-based queue management method, medium and equipment
Technical Field
The invention belongs to the technical field of satellite communication, and particularly relates to a DDR3 SDRAM-based queue management method, medium and equipment.
Background
At present, satellite communication plays an increasingly important role in the global Internet by virtue of its wide coverage, stable and reliable performance, flexible access, large capacity, wide frequency band, and cost that is insensitive to distance. The Internet is growing explosively: the network scale and number of users keep increasing, service types continue to multiply, and data traffic exhibits high load, high bandwidth, high burstiness and complex traffic patterns, which places higher requirements on the capacity, transmission rate and quality-of-service guarantees of current satellite Internet systems. The satellite-borne switch is the core equipment of a network node; its performance directly affects key parameters such as the delay and throughput that the satellite Internet can provide, and to a large extent constrains the development of the satellite Internet. Because the bandwidth and storage resources of a satellite-borne switch are very limited, and DDR3 SDRAM offers large storage capacity and high read/write speed, DDR3 SDRAM attached to an FPGA is increasingly used in the design of the queue manager inside the switch in order to support high-speed data services. Studying a DDR3-based queue management scheme for satellite-borne switching, with DDR3 serving as the data buffer, can effectively reduce resource consumption inside the FPGA chip and manage a large number of service flows; queue management techniques can further improve DDR3 bus utilization. Such a scheme accommodates both the switch's need for high throughput and the reality that storage resources on a satellite-borne switch are scarce, and is therefore of significant research interest.
The related prior patent technology is as follows. Xidian University discloses a queue management method and apparatus in its patent "A queue management method and apparatus for storing variable-length packets based on fixed-length units" (publication No. CN102377682B). The method stores variable-length packets in fixed-length units: it divides the storage space of a queue into basic cache units of equal size, sets a cache descriptor for each unit, and stores the descriptors in a cache descriptor table to form a linked list. However, because this prior art performs queue management by dividing the space into fixed-length units, and DDR3 has a large capacity, the storage space of each fixed-length unit is generally larger than that of on-chip fixed-length storage units; storing variable-length data frames in such units produces more internal fragmentation and reduces the utilization of the shared buffer. Moreover, the cache information of the fixed-length units and the information of the stored data packets are kept inside the FPGA chip, which occupies considerable on-chip resources, strains on-chip storage, further affects the data processing efficiency of the switch, and fails to reflect the advantage of off-chip caching.
Through the above analysis, the problems and defects existing in the prior art are as follows:
(1) The prior art performs queue management by dividing the space into fixed-length units. Because DDR3 has a large capacity, the storage space of each fixed-length unit is generally larger than that of on-chip fixed-length storage units; storing variable-length data frames in such units produces more internal fragmentation and reduces the utilization of the shared cache.
(2) In the prior art, the cache information of the fixed-length units and the information of the stored data packets are kept inside the FPGA chip, which occupies considerable on-chip resources, strains on-chip storage, further affects the data processing efficiency of the switch, and fails to reflect the advantage of off-chip caching.
The difficulty of solving these problems and defects is as follows: a new storage structure must be designed that moves the cache information and stored-packet information, which the prior art keeps inside the FPGA, into the DDR3 cache, so as to reduce the occupation of internal FPGA resources without affecting normal access to data packets; at the same time, while distinguishing multiple queues, the structure must reduce the cache fragmentation produced by fixed-length-unit storage and thereby improve cache utilization.
The significance of solving these problems and defects is as follows: storing data packets and related information in the large-capacity off-chip DDR3 cache helps reduce the occupation of resources inside the FPGA chip and makes it easier to realize a packet switch with higher switching capacity and more ports.
Disclosure of Invention
Aiming at the problems existing in the prior art, the invention provides a DDR3 SDRAM-based queue management method, medium and equipment.
The invention is realized as follows. The DDR3 SDRAM-based queue management method comprises the following steps:
Step one, statically dividing the cache space of the off-chip DDR3 SDRAM used for storing data packets into N fixed cache regions of equal size according to the number of queues.
The positive effects are as follows: the structure designed in this step cleanly separates the different queues within the cache space and allocates the cache space fairly among them, which facilitates the efficient and fair storage and reading of the data packets of the different queues in subsequent steps.
Step two, setting a write pointer Wi and a read pointer Ri for each fixed cache region, the write pointer Wi and the read pointer Ri corresponding to the enqueue and dequeue operations of data packets, respectively.
The positive effects are as follows: this step specifies the usage mechanism of the cache space within each queue, i.e., the basis on which data packets are stored at enqueue and read out at dequeue, which solves the cache fragmentation problem of the prior-art fixed-length units.
Step three, when a data packet is enqueued, performing the associated enqueue processing operation.
The positive effects are as follows: this step gives the way the cache space is used and the related operations in the enqueue case, realizing the enqueue function for data packets.
Step four, when a data packet is dequeued, performing the associated dequeue processing operation.
The positive effects are as follows: this step gives the way the cache space is used and the related operations in the dequeue case, realizing the dequeue function for data packets.
In step one, each fixed cache region corresponds to one logical queue and occupies consecutive addresses; the start head address Fi and the end tail address Li of each fixed cache region are recorded, and their difference is the size S of the fixed cache region, i.e., Li - Fi = S, where 0 ≤ i ≤ N-1.
Further, in step two, the write pointer Wi points to the current writable start position of the fixed cache region and the read pointer Ri points to the current readable start position of the fixed cache region, where Fi ≤ Ri ≤ Wi ≤ Li and 0 ≤ i ≤ N-1; at the initial moment, the write pointer and the read pointer coincide at the start head address Fi of the fixed cache region.
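As a minimal illustration of steps one and two, the per-queue state described above can be modeled as a small structure holding only Fi, Li, Wi and Ri. The sketch below is written in C; the queue count, region size and function names are assumptions for illustration, not values fixed by the invention.

```c
#include <stdint.h>

#define N_QUEUES    8            /* assumed number of queues N          */
#define REGION_SIZE 0x100000u    /* assumed size S of each fixed region */

/* Per-queue state kept on chip: region bounds plus the two pointers. */
typedef struct {
    uint32_t F;  /* start head address Fi of the fixed cache region    */
    uint32_t L;  /* end tail address Li, with L - F == S               */
    uint32_t W;  /* write pointer Wi: current writable start position  */
    uint32_t R;  /* read pointer Ri: current readable start position   */
} queue_state_t;

static queue_state_t q[N_QUEUES];

/* Statically divide the off-chip DDR3 buffer space into N equal fixed
 * regions and let both pointers coincide with Fi at the initial moment. */
static void init_queues(uint32_t ddr_base)
{
    for (int i = 0; i < N_QUEUES; i++) {
        q[i].F = ddr_base + (uint32_t)i * REGION_SIZE;
        q[i].L = q[i].F + REGION_SIZE;
        q[i].W = q[i].F;
        q[i].R = q[i].F;
    }
}
```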
Further, in step three, the associated enqueue processing operation is performed as follows:
(1) when a data packet is to be enqueued, extracting and outputting related control information such as the destination port number, priority and packet length;
(2) determining the enqueue queue number of the packet from the output destination port number and priority, and constructing a packet header field;
(3) querying the write pointer Wi and the read pointer Ri of the corresponding fixed cache region according to the enqueue queue number, and calculating the occupied portion of the current fixed cache region, i.e., the difference between the write pointer Wi and the read pointer Ri;
(4) performing the enqueue judgment according to the packet length;
(5) updating the write pointer Wi according to the judgment result;
(6) processing the data packet differently according to the judgment result.
Further, the enqueue judgment according to the packet length proceeds as follows (a code sketch follows this list):
if the occupied portion of the current fixed cache region is 0, the current fixed cache region is empty and the packet being enqueued is the first packet of the queue; the current packet length and header field length are recorded and enqueuing is allowed;
if the occupied portion of the current fixed cache region plus the packet length and header field length is greater than the size S of the fixed cache region, the queue would overflow and the enqueue fails;
if the occupied portion of the current fixed cache region plus the packet length and header field length is less than or equal to the size S of the fixed cache region, the queue does not overflow and the enqueue succeeds.
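A sketch of this enqueue judgment, using the assumed C types from the earlier sketch, might look as follows; occupied() computes the Wi/Ri difference including the wrapped case, and enqueue_allowed() is an illustrative name rather than part of the invention.

```c
/* Occupied portion of a region: the distance from Ri to Wi inside the
 * fixed cache region, accounting for a wrapped write pointer. */
static uint32_t occupied(const queue_state_t *s)
{
    return (s->W >= s->R) ? (s->W - s->R)
                          : (s->W - s->F) + (s->L - s->R);
}

/* Enqueue judgment according to the packet length, as described above:
 * an empty region accepts the packet as the queue's first packet; otherwise
 * the packet plus its header field must fit within the region size S. */
static int enqueue_allowed(const queue_state_t *s,
                           uint32_t pkt_len, uint32_t hdr_len)
{
    uint32_t used = occupied(s);
    uint32_t size = s->L - s->F;               /* region size S */

    if (used == 0)
        return 1;                              /* empty: first packet of the queue */
    return used + pkt_len + hdr_len <= size;   /* otherwise check for overflow     */
}
```

A practical implementation would also need to disambiguate the case where an exactly full region leaves Wi equal to Ri, for example by additionally tracking a per-queue packet count on chip; the method as described leaves this detail open.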
Further, the write pointer Wi is updated according to the judgment result as follows:
if the enqueue succeeds, the update of the write pointer Wi falls into two cases: when Wi plus the enqueue packet length and header field length is greater than the end tail address Li, the write pointer is updated to the current write pointer Wi + (enqueue packet length + header field length) - end tail address Li + start head address Fi; when Wi plus the enqueue packet length and header field length is less than or equal to the end tail address Li, the write pointer is updated to the current write pointer Wi + (enqueue packet length + header field length);
if the enqueue fails, the write pointer is not updated and keeps its original value.
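A corresponding sketch of the write pointer update, under the same assumed structure; the wrap-around branch implements the rule Wi + (packet length + header field length) - Li + Fi.

```c
/* Update the write pointer Wi after a successful enqueue; on failure the
 * caller simply leaves Wi unchanged. */
static void update_write_ptr(queue_state_t *s,
                             uint32_t pkt_len, uint32_t hdr_len)
{
    uint32_t adv = pkt_len + hdr_len;

    if (s->W + adv > s->L)
        s->W = s->W + adv - s->L + s->F;   /* wraps past Li: fold back to Fi */
    else
        s->W = s->W + adv;                 /* stays within the region        */
}
```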
Further, the data packet is processed differently according to the judgment result, as follows:
a packet whose enqueue succeeds under the packet-length-based enqueue judgment has its header field and packet data packed together and, after cross-clock-domain and bit-width conversion, is stored into the DDR3 SDRAM until the entire data transfer is complete; a packet whose enqueue fails under the packet-length-based enqueue judgment is discarded.
Further, in step four, when a data packet is dequeued, the associated dequeue processing operation is performed as follows:
1) at dequeue time, a dequeue queue number is first obtained according to some scheduling policy;
2) the write pointer Wi and the read pointer Ri of the corresponding fixed cache region are queried according to the dequeue queue number, and the dequeue judgment is made by calculating the occupied portion of the current fixed cache region, i.e., the difference between the write pointer Wi and the read pointer Ri;
if the occupied portion of the current fixed cache region is 0, the current fixed cache region is empty and there is no packet to dequeue; return to 1) and continue polling;
if the occupied portion of the current fixed cache region is not 0, the current fixed cache region is not empty and there is a packet to dequeue; continue with step 3);
3) the total stored length of the head packet of the current queue is queried according to the dequeue queue number;
4) the packed packet is read out in full from the DDR3 SDRAM according to the queried stored length, the packet data and the header field are split apart, and after cross-clock-domain and bit-width conversion they are moved onto the bus for output until the entire data transfer is complete;
5) the read pointer Ri is updated according to the total stored length of the dequeued packet (see the sketch after this list):
when Ri plus the dequeue packet length and header field length is greater than the end tail address Li, the read pointer is updated to the current read pointer Ri + (dequeue packet length + header field length) - end tail address Li + start head address Fi;
when Ri plus the dequeue packet length and header field length is less than or equal to the end tail address Li, the read pointer is updated to the current read pointer Ri + (dequeue packet length + header field length).
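The dequeue side can be sketched the same way, reusing occupied() and the assumed types from the enqueue sketches. ddr_read_packet() stands in for the DDR3 read path with its cross-clock-domain and bit-width conversion, and the scheduling policy that selects the queue is left outside the sketch, since the method does not fix it.

```c
/* Dequeue the head packet of one queue, following steps 1)-5) above.
 * head_total_len is the queried total stored length of the head packet
 * (packet data plus header field); returns 0 if the region is empty. */
static int dequeue_one(queue_state_t *s, uint32_t head_total_len,
                       void (*ddr_read_packet)(uint32_t addr, uint32_t len))
{
    if (occupied(s) == 0)
        return 0;                 /* empty region: keep polling other queues */

    /* read the packed header field + packet data starting at Ri
     * (a packet that wraps past Li would need two bursts) */
    ddr_read_packet(s->R, head_total_len);

    /* update Ri with the same wrap-around rule as the write pointer */
    if (s->R + head_total_len > s->L)
        s->R = s->R + head_total_len - s->L + s->F;
    else
        s->R = s->R + head_total_len;

    return 1;
}
```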
Another object of the present invention is to provide a storage medium that receives a program input by a user, the stored computer program causing an electronic device to execute the DDR3 SDRAM-based queue management method, which comprises the following steps:
step one, statically dividing the cache space of the off-chip DDR3 SDRAM used for storing data packets into N fixed cache regions of equal size according to the number of queues;
step two, setting a write pointer Wi and a read pointer Ri for each fixed cache region, the write pointer Wi and the read pointer Ri corresponding to the enqueue and dequeue operations of data packets, respectively;
step three, when a data packet is enqueued, performing the associated enqueue processing operation;
step four, when a data packet is dequeued, performing the associated dequeue processing operation.
It is another object of the present invention to provide a computer program product stored on a computer readable medium, comprising a computer readable program, which when executed on an electronic device, provides a user input interface to implement the DDR3 SDRAM-based queue management method.
Combining all the above technical schemes, the invention has the following advantages and positive effects: the invention performs static queue management of the DDR3 SDRAM with a fixed-allocation scheme, in which packet information and packet data are stored contiguously in the divided fixed cache regions. The invention mainly realizes a queue management scheme based on off-chip DDR3 SDRAM for high-speed packet services; compared with a traditional shared-cache queue management mechanism, it minimizes the FPGA on-chip resources occupied by storing cache information and improves the utilization of the DDR3 SDRAM cache space.
Meanwhile, the invention has the following technical effects:
1) Managing the DDR3 SDRAM queues with a fixed-allocation scheme keeps the effective data of variable-length packets stored contiguously, reduces the internal fragmentation caused by fixed-length-unit storage, and improves storage-space utilization.
2) Only the read/write pointers and the necessary packet information of each queue on the DDR3 SDRAM need to be maintained and stored on chip, which removes the per-unit cache information and packet information and relieves the shortage of on-chip resources.
3) Compared with dynamic cache management, storage with a fixed-allocation scheme is simpler to operate and has lower complexity.
Drawings
FIG. 1 is a flow chart of a DDR3 SDRAM-based queue management method provided by an embodiment of the invention.
Fig. 2 is an enqueuing flowchart provided by an embodiment of the present invention.
FIG. 3 is a flow chart of a write pointer update provided by an embodiment of the present invention.
Fig. 4 is a dequeue flowchart provided by an embodiment of the invention.
FIG. 5 is a flow chart of the read pointer update according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the following examples in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Aiming at the problems existing in the prior art, the invention provides a DDR3 SDRAM-based queue management method, medium and equipment, and the invention is described in detail below with reference to the accompanying drawings.
Those skilled in the art may perform other steps when applying the DDR3 SDRAM-based queue management method provided by the present invention; FIG. 1 shows only one specific embodiment of the method.
As shown in FIG. 1, the DDR3 SDRAM-based queue management method provided by the embodiment of the present invention includes:
S101: statically dividing the cache space of the off-chip DDR3 SDRAM used for storing data packets into N fixed cache regions of equal size according to the number of queues.
S102: setting a write pointer Wi and a read pointer Ri for each fixed cache region, the write pointer Wi and the read pointer Ri corresponding to the enqueue and dequeue operations of data packets, respectively.
S103: upon enqueuing a data packet, an associated enqueue processing operation is performed.
S104: when a data packet is dequeued, an associated dequeue processing operation is performed.
In S101 provided by the embodiment of the present invention, each fixed cache region corresponds to one logical queue and occupies consecutive addresses; the start head address Fi and the end tail address Li of each fixed cache region are recorded, and the difference between the two addresses is the size S of the fixed cache region, i.e., Li - Fi = S, where 0 ≤ i ≤ N-1.
In S102 provided by the embodiment of the present invention, the write pointer Wi points to the current writable start position of the fixed cache region and the read pointer Ri points to the current readable start position of the fixed cache region, where Fi ≤ Ri ≤ Wi ≤ Li and 0 ≤ i ≤ N-1; at the initial moment, the write pointer and the read pointer coincide at the start head address Fi of the fixed cache region.
In S103 provided by the embodiment of the present invention, the associated enqueue processing operation is performed as follows:
(1) when a data packet is to be enqueued, extracting and outputting related control information such as the destination port number, priority and packet length;
(2) determining the enqueue queue number of the packet from the output destination port number and priority, and constructing a packet header field in the following format:
Table 1. Packet header field: | packet length | enqueue queue number | reserved |
(3) querying the write pointer Wi and the read pointer Ri of the corresponding fixed cache region according to the enqueue queue number, and calculating the occupied portion of the current fixed cache region, i.e., the difference between the write pointer Wi and the read pointer Ri;
(4) performing the enqueue judgment according to the packet length:
if the occupied portion of the current fixed cache region is 0, the current fixed cache region is empty and the packet being enqueued is the first packet of the queue; the current packet length and header field length are recorded and enqueuing is allowed;
if the occupied portion of the current fixed cache region plus the packet length and header field length is greater than the size S of the fixed cache region, the queue would overflow and the enqueue fails;
if the occupied portion of the current fixed cache region plus the packet length and header field length is less than or equal to the size S of the fixed cache region, the queue does not overflow and the enqueue succeeds;
(5) updating the write pointer Wi according to the judgment result:
if the enqueue succeeds, the update of the write pointer Wi falls into two cases: when Wi plus the enqueue packet length and header field length is greater than the end tail address Li, the write pointer is updated to the current write pointer Wi + (enqueue packet length + header field length) - end tail address Li + start head address Fi; when Wi plus the enqueue packet length and header field length is less than or equal to the end tail address Li, the write pointer is updated to the current write pointer Wi + (enqueue packet length + header field length);
if the enqueue fails, the write pointer is not updated and keeps its original value.
(6) the data packet is processed differently according to the judgment result:
a packet whose enqueue succeeds under the packet-length-based enqueue judgment has its header field and packet data packed together and, after cross-clock-domain and bit-width conversion, is stored into the DDR3 SDRAM until the entire data transfer is complete; a packet whose enqueue fails under the packet-length-based enqueue judgment is discarded.
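As a small illustration, the header field of Table 1 above could be represented as a packed record; the field widths below are assumptions made for the sketch, since the invention does not specify them.

```c
#include <stdint.h>

/* Header field of Table 1 (illustrative widths, not specified by the invention). */
typedef struct {
    uint16_t packet_length;  /* length of the enqueued packet                      */
    uint8_t  queue_number;   /* enqueue queue number (destination port + priority) */
    uint8_t  reserved;       /* reserved field                                     */
} pkt_header_t;
```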
In S104 provided by the embodiment of the present invention, when a data packet is dequeued, the associated dequeue processing operation is performed as follows:
1) at dequeue time, a dequeue queue number is first obtained according to some scheduling policy;
2) the write pointer Wi and the read pointer Ri of the corresponding fixed cache region are queried according to the dequeue queue number, and the dequeue judgment is made by calculating the occupied portion of the current fixed cache region, i.e., the difference between the write pointer Wi and the read pointer Ri;
if the occupied portion of the current fixed cache region is 0, the current fixed cache region is empty and there is no packet to dequeue; return to 1) and continue polling;
if the occupied portion of the current fixed cache region is not 0, the current fixed cache region is not empty and there is a packet to dequeue; continue with step 3);
3) the total stored length of the head packet of the current queue is queried according to the dequeue queue number;
4) the packed packet is read out in full from the DDR3 SDRAM according to the queried stored length, the packet data and the header field are split apart, and after cross-clock-domain and bit-width conversion they are moved onto the bus for output until the entire data transfer is complete;
5) the read pointer Ri is updated according to the total stored length of the dequeued packet:
when Ri plus the dequeue packet length and header field length is greater than the end tail address Li, the read pointer is updated to the current read pointer Ri + (dequeue packet length + header field length) - end tail address Li + start head address Fi;
when Ri plus the dequeue packet length and header field length is less than or equal to the end tail address Li, the read pointer is updated to the current read pointer Ri + (dequeue packet length + header field length).
It should be noted that the embodiments of the present invention can be realized in hardware, software, or a combination of software and hardware. The hardware portion may be implemented using dedicated logic; the software portions may be stored in a memory and executed by a suitable instruction execution system, such as a microprocessor or special purpose design hardware. Those of ordinary skill in the art will appreciate that the apparatus and methods described above may be implemented using computer executable instructions and/or embodied in processor control code, such as provided on a carrier medium such as a magnetic disk, CD or DVD-ROM, a programmable memory such as read only memory (firmware), or a data carrier such as an optical or electronic signal carrier. The device of the present invention and its modules may be implemented by hardware circuitry, such as very large scale integrated circuits or gate arrays, semiconductors such as logic chips, transistors, etc., or programmable hardware devices such as field programmable gate arrays, programmable logic devices, etc., as well as software executed by various types of processors, or by a combination of the above hardware circuitry and software, such as firmware.
The foregoing describes only specific embodiments of the present invention, and the scope of the invention is not limited thereto; any modifications, equivalents, improvements and alternatives that would readily occur to those skilled in the art within the technical scope disclosed by the present invention shall fall within the scope of the present invention.

Claims (4)

1. A DDR3 SDRAM-based queue management method, characterized by comprising the following steps:
statically dividing the cache space of the off-chip DDR3 SDRAM used for storing data packets into N fixed cache regions of equal size according to the number of queues;
setting a write pointer Wi and a read pointer Ri for each fixed cache region, the write pointer Wi and the read pointer Ri corresponding to the enqueue and dequeue operations of data packets, respectively;
performing the associated enqueue processing operation when a data packet is enqueued;
performing the associated dequeue processing operation when a data packet is dequeued;
wherein, when a data packet is enqueued, the associated enqueue processing operation is performed as follows:
(1) when a data packet is to be enqueued, extracting and outputting the destination port number, the priority and control information related to the packet length;
(2) determining the enqueue queue number of the packet from the output destination port number and priority, and constructing a packet header field;
(3) querying the write pointer Wi and the read pointer Ri of the corresponding fixed cache region according to the enqueue queue number, and calculating the occupied portion of the current fixed cache region, i.e., the difference between the write pointer Wi and the read pointer Ri;
(4) performing the enqueue judgment according to the packet length;
(5) updating the write pointer Wi according to the judgment result;
(6) processing the data packet differently according to the judgment result;
the enqueue judgment is carried out according to the packet length, and the specific process is as follows:
if the occupied part of the current fixed buffer area is 0, indicating that the current fixed buffer area is empty, the current enqueuing packet is the first packet of the queue, recording the length of the current packet and the length of a header field, and allowing enqueuing;
if the occupied part of the current fixed buffer area plus the packet length and the head field length are greater than the size S of the fixed buffer area, indicating that the queue overflows, and failing to enqueue;
if the occupied part of the current fixed cache area is added with the packet length and the head field length is smaller than or equal to the size S of the fixed cache area, indicating that the queue does not overflow, and if the enqueue is successful;
the write pointer W is performed according to the judgment result i The updating of (a) comprises the following specific processes: if enqueuing is successful, write pointer W i The update of (2) is divided into two cases, one is when W i When the sum of the enqueue packet length and the header field length is greater than the end tail address Li, the write pointer should be updated to the current write pointer W i +enqueue packet length, header field length-end tail address li+start head address Fi; the other is when W i When the sum of the enqueue packet length and the header field length is less than or equal to the end tail address Li, the write pointer should be updated to the current write pointer W i +enqueue packet length and header field length;
if the enqueue fails, the write pointer is not updated, and the original value is maintained;
the data packet carries out different operation treatments according to the judging result, and the specific process is as follows: the head field and the packet data are packaged and combined by the packet which is successfully enqueued through the enqueuing judging process according to the packet length, and the packet is stored into the DDR3SDRAM through the cross-clock domain and bit width conversion processing until the whole data transmission is completed; the packets failed in enqueuing are discarded through the enqueuing judging process according to the packet length;
in the step of executing the related dequeuing processing operation when the data packet is dequeued, the related dequeuing processing operation is executed when the data packet is dequeued, and the specific process is as follows:
1) When dequeuing, firstly, a dequeue queue number is obtained according to a certain scheduling strategy;
2) Inquiring the write pointer W of the corresponding fixed buffer area according to the dequeue queue number i And a read pointer R i Dequeue judgment is carried out, and occupied part of the current fixed buffer area, namely a write pointer W is calculated i And a read pointer R i Is a difference in (2);
if the occupied part of the current fixed buffer area is 0, indicating that the current fixed buffer area is empty, and returning to 1) continuing polling without grouping to be dequeued;
if the occupied part of the current fixed buffer area is not 0, indicating that the current fixed buffer area is not empty, and continuing the step 3) if the current packet is to be dequeued;
3) Inquiring the storage total length of the first packet in the current queue according to the dequeue queue number;
4) The packed and combined packet is completely read out from DDR3SDRAM according to the storage length obtained by inquiry, packet data and a header field are split, and the packet data and the header field are moved to a bus for output through cross-clock domain and bit width conversion processing until the whole data transmission is completed;
5) Read pointer R based on total length of storage of dequeued packets i Is updated by:
when R is i When the sum of the dequeue packet length and the header field length is greater than the end tail address Li, the read pointer should be updated to the current read pointer R i +dequeue packet length, header field length-end tail address li+start head address Fi;
when R is i When the sum of the dequeue packet length and the header field length is less than or equal to the end tail address Li, the read pointer should be updated to the current read pointer R i +dequeue packet length and header field length.
2. The DDR3 SDRAM-based queue management method according to claim 1, wherein, in statically dividing the cache space of the off-chip DDR3 SDRAM used for storing data packets into N fixed cache regions of equal size according to the number of queues, each fixed cache region corresponds to one logical queue and occupies consecutive addresses; the start head address Fi and the end tail address Li of each fixed cache region are recorded, and the difference between the start head address Fi and the end tail address Li is the size S of the fixed cache region, i.e., Li - Fi = S, where 0 ≤ i ≤ N-1.
3. The DDR3 SDRAM-based queue management method according to claim 1, wherein, in setting a write pointer Wi and a read pointer Ri for each fixed cache region, the write pointer Wi and the read pointer Ri corresponding to the enqueue and dequeue operations of data packets respectively, the write pointer Wi points to the current writable start position of the fixed cache region and the read pointer Ri points to the current readable start position of the fixed cache region, where Fi ≤ Ri ≤ Wi ≤ Li and 0 ≤ i ≤ N-1; at the initial moment, the write pointer and the read pointer coincide at the start head address Fi of the fixed cache region.
4. A storage medium for receiving a user input program, the stored computer program causing an electronic device to execute the steps of the DDR3 SDRAM-based queue management method of any one of claims 1 to 3.
CN202110270207.0A 2021-03-12 2021-03-12 DDR3 SDRAM-based queue management method, medium and equipment Active CN113126911B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110270207.0A CN113126911B (en) 2021-03-12 2021-03-12 DDR3 SDRAM-based queue management method, medium and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110270207.0A CN113126911B (en) 2021-03-12 2021-03-12 DDR3 SDRAM-based queue management method, medium and equipment

Publications (2)

Publication Number Publication Date
CN113126911A CN113126911A (en) 2021-07-16
CN113126911B true CN113126911B (en) 2023-04-28

Family

ID=76773050

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110270207.0A Active CN113126911B (en) 2021-03-12 2021-03-12 DDR3 SDRAM-based queue management method, medium and equipment

Country Status (1)

Country Link
CN (1) CN113126911B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114827300B (en) * 2022-03-20 2023-09-01 西安电子科技大学 Data reliable transmission system, control method, equipment and terminal for hardware guarantee
CN115396384B (en) * 2022-07-28 2023-11-28 广东技术师范大学 Data packet scheduling method, system and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7730238B1 (en) * 2005-10-07 2010-06-01 Agere System Inc. Buffer management method and system with two thresholds
CN109144749A (en) * 2018-08-14 2019-01-04 苏州硅岛信息科技有限公司 A method of it is communicated between realizing multiprocessor using processor
CN112084136A (en) * 2020-07-23 2020-12-15 西安电子科技大学 Queue cache management method, system, storage medium, computer device and application
CN112385186A (en) * 2018-07-03 2021-02-19 华为技术有限公司 Apparatus and method for ordering data packets

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150281126A1 (en) * 2014-03-31 2015-10-01 Plx Technology, Inc. METHODS AND APPARATUS FOR A HIGH PERFORMANCE MESSAGING ENGINE INTEGRATED WITHIN A PCIe SWITCH

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7730238B1 (en) * 2005-10-07 2010-06-01 Agere System Inc. Buffer management method and system with two thresholds
CN112385186A (en) * 2018-07-03 2021-02-19 华为技术有限公司 Apparatus and method for ordering data packets
CN109144749A (en) * 2018-08-14 2019-01-04 苏州硅岛信息科技有限公司 A method of it is communicated between realizing multiprocessor using processor
CN112084136A (en) * 2020-07-23 2020-12-15 西安电子科技大学 Queue cache management method, system, storage medium, computer device and application

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Research on FPGA-Based High-Speed, Large-Capacity External Data Caching Technology; Zhang Wenwen; Wanfang dissertation database; 2019-06-03; pp. 1-92 *
LaminarIR: Compile-Time Queues for Structured Streams; Ko, Yousun et al.; ACM SIGPLAN Notices; 2015-06-30; pp. 121-130 *
Design and Implementation of a DDR3 Shared-Memory Switching Fabric for Satellite-Borne Switches; Wang Leitao et al.; Communications Technology; 2020-06-10 (No. 06); full text *

Also Published As

Publication number Publication date
CN113126911A (en) 2021-07-16

Similar Documents

Publication Publication Date Title
US8325603B2 (en) Method and apparatus for dequeuing data
US8982658B2 (en) Scalable multi-bank memory architecture
US8225026B2 (en) Data packet access control apparatus and method thereof
US10795837B2 (en) Allocation of memory buffers in computing system with multiple memory channels
CN113126911B (en) DDR3 SDRAM-based queue management method, medium and equipment
WO2021209051A1 (en) On-chip cache device, on-chip cache read/write method, and computer readable medium
US9841913B2 (en) System and method for enabling high read rates to data element lists
CN112084136B (en) Queue cache management method, system, storage medium, computer device and application
CN110058816B (en) DDR-based high-speed multi-user queue manager and method
KR20160117108A (en) Method and apparatus for using multiple linked memory lists
US10152434B2 (en) Efficient arbitration for memory accesses
CN115080455B (en) Computer chip, computer board card, and storage space distribution method and device
US9785367B2 (en) System and method for enabling high read rates to data element lists
WO2024082747A1 (en) Router having cache, routing and switching network system, chip, and routing method
CN111181874B (en) Message processing method, device and storage medium
Kornaros et al. A fully-programmable memory management system optimizing queue handling at multi gigabit rates
US7451182B2 (en) Coordinating operations of network and host processors
CN114186163A (en) Application layer network data caching method
CN114564420A (en) Method for sharing parallel bus by multi-core processor
US10067690B1 (en) System and methods for flexible data access containers
WO2024001414A1 (en) Message buffering method and apparatus, electronic device and storage medium
US11094368B2 (en) Memory, memory chip and memory data access method
Shi et al. Optimization of shared memory controller for multi-core system
CN117312013A (en) Interactive queue management method and device based on active writing back of message queue pointer
CN117991983A (en) High-speed SATA storage system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant