CN113126911A - Queue management method, medium and equipment based on DDR3SDRAM - Google Patents

Queue management method, medium and equipment based on DDR3 SDRAM

Info

Publication number
CN113126911A
Authority
CN
China
Prior art keywords
packet
length
enqueue
ddr3sdram
pointer
Prior art date
Legal status
Granted
Application number
CN202110270207.0A
Other languages
Chinese (zh)
Other versions
CN113126911B (en)
Inventor
邱智亮
张晓雯
潘伟涛
孙义雯
耿政琦
Current Assignee
Xidian University
Original Assignee
Xidian University
Priority date
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN202110270207.0A priority Critical patent/CN113126911B/en
Publication of CN113126911A publication Critical patent/CN113126911A/en
Application granted granted Critical
Publication of CN113126911B publication Critical patent/CN113126911B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 - Interfaces specially adapted for storage systems
    • G06F3/0602 - Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0608 - Saving storage space on storage systems
    • G06F3/0628 - Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0638 - Organizing or formatting or addressing of data
    • G06F3/0668 - Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671 - In-line storage system

Abstract

The invention belongs to the technical field of satellite communication and discloses a queue management method, medium and equipment based on DDR3 SDRAM. The buffer space of an off-chip DDR3 SDRAM used for storing data packets is statically divided into N equal-sized fixed buffer regions according to the number of queues; a write pointer Wi and a read pointer Ri are set for each fixed buffer region, corresponding respectively to packet enqueue and dequeue operations; when a data packet is enqueued, the relevant enqueue processing operations are executed; when a data packet is dequeued, the relevant dequeue processing operations are executed. The invention realizes an off-chip DDR3 SDRAM queue management scheme for the high-speed packet services of a satellite-borne switch, reduces the on-chip FPGA resources occupied by storing cache information, and improves the utilization of the DDR3 SDRAM buffer space.

Description

Queue management method, medium and equipment based on DDR3SDRAM
Technical Field
The invention belongs to the technical field of satellite communication, and particularly relates to a queue management method, medium and equipment based on DDR3 SDRAM.
Background
At present, satellite communication plays an increasingly important role in the global Internet thanks to its wide coverage, stable and reliable performance, flexible access, large-capacity broadband, and cost insensitivity to distance. With the explosive growth of the Internet, network scale and user numbers keep increasing, service types keep diversifying, and data traffic is characterized by high load, high bandwidth, high burstiness and complex flows, which places higher requirements on the capacity, transmission rate and quality-of-service guarantees of current satellite Internet systems. The satellite-borne switch, as the core device of a network node, directly determines important performance parameters such as the delay and throughput the satellite Internet can provide, and thus strongly constrains its development. Because the bandwidth and storage resources of a satellite-borne switch are very limited, and DDR3 SDRAM offers large storage capacity and high read/write speed, DDR3 SDRAM attached externally to an FPGA is increasingly used in the design of the queue manager inside the switch in order to support high-speed data services. Studying a DDR3-based queue management scheme for satellite-borne switching, with DDR3 as the data buffer, can effectively reduce the consumption of on-chip FPGA resources and manage a large number of service flows; queue management techniques can further improve DDR3 bus utilization. Considering the switch's high-throughput requirement and the scarcity of storage resources on a satellite-borne switch, this has important research significance.
The most closely related prior art is as follows: the patent document "Queue management method and apparatus for storing variable-length packets based on fixed-length cells" (publication No. CN102377682B), filed by Xidian University, discloses a queue management method and apparatus. On the basis of storing variable-length packets in fixed-length cells, the method divides the queue storage space into basic cache units of equal size, sets a cache descriptor for each unit, and stores the descriptors in a cache descriptor table to form a linked list. However, this prior art performs queue management by dividing fixed-length units. Because DDR3 has a large capacity, the fixed-length unit is generally larger than the fixed-length storage units used on chip, so storing variable-length data frames in these units produces more internal fragmentation and reduces the utilization of the shared cache. In addition, the cache information of the fixed-length units and the information of the stored data packets are both kept inside the FPGA, which occupies considerable on-chip resources, leads to a shortage of on-chip storage, further affects the data processing efficiency of the switch, and fails to exploit the advantages of off-chip caching.
Through the above analysis, the problems and defects of the prior art are as follows:
(1) The prior art performs queue management by dividing fixed-length units. Because DDR3 has a large capacity, the fixed-length unit is generally larger than the on-chip fixed-length storage units, so storing variable-length data frames in fixed-length units produces more internal fragmentation and reduces the utilization of the shared cache.
(2) In the prior art, the cache information of the fixed-length units and the information of the stored data packets are both kept in the FPGA chip, occupying considerable on-chip resources, causing a shortage of on-chip storage resources, further affecting the data processing efficiency of the switch, and failing to reflect the advantages of off-chip caching.
The difficulty in solving the above problems and defects is: a new storage structure needs to be designed so that the cache information and stored-packet information that the prior art keeps in the FPGA can be stored in the DDR3 cache, reducing the occupation of FPGA internal resources without affecting normal access to the data packets; at the same time, the structure must distinguish multiple queues, reduce the cache fragmentation produced by fixed-length-unit storage, and improve cache utilization.
The significance of solving these problems and defects is: storing the data packets and their related information in the large-capacity off-chip DDR3 cache reduces the occupation of on-chip FPGA resources, makes it easier to realize packet switches with higher switching capacity and more ports, and, by using the large off-chip cache, improves the reliability of a large-capacity packet switch and allows longer bursts of data to be tolerated.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a queue management method, medium and equipment based on DDR3 SDRAM.
The invention is realized in this way: a queue management method based on DDR3 SDRAM, comprising:
step one, statically dividing the buffer space of the off-chip DDR3 SDRAM used for storing data packets into N equal-sized fixed buffer regions according to the number of queues;
This step has the following positive effect: the designed structure distinguishes the different queues within the buffer space and distributes the buffer space fairly among them, which facilitates efficient and fair storage and reading of the data packets of the different queues.
step two, setting a write pointer Wi and a read pointer Ri for each fixed buffer region, the write pointer Wi and the read pointer Ri corresponding respectively to packet enqueue and dequeue operations;
This step has the following positive effect: it specifies how the buffer space under each queue is used, i.e., the basis on which packets are stored at enqueue and read at dequeue; this mechanism avoids the fixed-length-unit cache fragmentation of the prior art.
step three, when a data packet is enqueued, executing the relevant enqueue processing operations;
This step has the following positive effect: it gives the usage of the buffer space and the related operations in the enqueue case, realizing the packet enqueue function.
step four, when a data packet is dequeued, executing the relevant dequeue processing operations.
This step has the following positive effect: it gives the usage of the buffer space and the related operations in the dequeue case, realizing the packet dequeue function.
Further, in step one, each fixed buffer region corresponds to one logical queue and its addresses are contiguous; the start head address Fi and end tail address Li of each fixed buffer region are recorded, and their difference is the size S of the fixed buffer region, that is, Li - Fi = S, with 0 ≤ i ≤ N-1.
Further, in step two, the write pointer Wi points to the start position of the fixed buffer region that is currently writable, and the read pointer Ri points to the start position that is currently readable, where Fi ≤ Ri ≤ Wi ≤ Li and 0 ≤ i ≤ N-1; at the initial moment the write pointer and the read pointer coincide and both equal the start head address Fi of the fixed buffer region.
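The following minimal C sketch illustrates the per-queue bookkeeping described above: one record per fixed buffer region holding Fi, Li, Wi and Ri, initialized so that the write and read pointers coincide at Fi. The names (fixed_region_t, region_init) and the byte-addressed representation are assumptions made for illustration only, not the hardware implementation of the invention.

#include <stdint.h>

/* Bookkeeping for one fixed buffer region (one logical queue). */
typedef struct {
    uint32_t F;   /* start head address Fi                              */
    uint32_t L;   /* end tail address Li, so the region size S = L - F  */
    uint32_t W;   /* write pointer Wi: current writable start position  */
    uint32_t R;   /* read pointer Ri: current readable start position   */
} fixed_region_t;

/* Statically divide a buffer of total_size bytes into n equal fixed regions. */
static void region_init(fixed_region_t *q, uint32_t n, uint32_t total_size)
{
    uint32_t s = total_size / n;          /* size S of each fixed region      */
    for (uint32_t i = 0; i < n; i++) {
        q[i].F = i * s;                   /* regions are contiguous           */
        q[i].L = q[i].F + s;              /* Li - Fi = S                      */
        q[i].W = q[i].F;                  /* at the initial moment W = R = F  */
        q[i].R = q[i].F;
    }
}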
Further, in step three, the relevant enqueue processing operations are executed as follows:
(1) when a data packet is to be enqueued, extracting the relevant control information such as the output destination port number, the priority and the packet length;
(2) determining the queue number of the packet according to the output destination port number and the priority, and establishing a packet header field;
(3) querying the write pointer Wi and the read pointer Ri of the corresponding fixed buffer region according to the queue number, and calculating the occupied part of the current fixed buffer region, i.e. the difference between the write pointer Wi and the read pointer Ri;
(4) performing the enqueue judgment according to the packet length;
(5) updating the write pointer Wi according to the judgment result;
(6) processing the data packet differently according to the judgment result.
Further, the enqueue judgment according to the packet length proceeds as follows:
if the occupied part of the current fixed buffer region is 0, the region is empty and the current enqueue packet is the first packet of the queue; the current packet length and header field length are recorded and the enqueue is allowed;
if the occupied part of the current fixed buffer region plus the packet length and header field length is greater than the fixed buffer region size S, the queue would overflow and the enqueue fails;
if the occupied part of the current fixed buffer region plus the packet length and header field length is less than or equal to the fixed buffer region size S, the queue does not overflow and the enqueue succeeds.
Further, the write pointer Wi is updated according to the judgment result as follows:
if the enqueue succeeds, the update of the write pointer Wi splits into two cases: when Wi plus the enqueue packet length and header field length is greater than the end tail address Li, the write pointer is updated to the current write pointer Wi + enqueue packet length and header field length - end tail address Li + start head address Fi; when Wi plus the enqueue packet length and header field length is less than or equal to the end tail address Li, the write pointer is updated to the current write pointer Wi + enqueue packet length and header field length;
if the enqueue fails, the write pointer is not updated and keeps its original value.
Further, the data packet is processed differently according to the judgment result, as follows:
after the enqueue judgment according to the packet length, a packet that is successfully enqueued has its header field and packet data encapsulated and combined, and the combined packet data is stored into the DDR3 SDRAM through clock-domain-crossing and bit-width-conversion processing until the whole data transfer is completed; a packet that fails to enqueue is discarded after the enqueue judgment.
Further, in step four, when a data packet is dequeued, the relevant dequeue processing operations are executed as follows:
1) at dequeue time, a dequeue queue number is first obtained according to a certain scheduling strategy;
2) the write pointer Wi and the read pointer Ri of the corresponding fixed buffer region are queried according to the dequeue queue number, the dequeue judgment is performed, and the occupied part of the current fixed buffer region, i.e. the difference between the write pointer Wi and the read pointer Ri, is calculated;
if the occupied part of the current fixed buffer region is 0, the region is empty, no packet needs to be dequeued, and the procedure returns to 1) to continue polling;
if the occupied part of the current fixed buffer region is not 0, the region is not empty, a packet is to be dequeued, and the procedure continues with step 3);
3) the total storage length of the first packet in the current queue is queried according to the dequeue queue number;
4) the encapsulated and combined packet is read completely from the DDR3 SDRAM according to the queried storage length, the packet data and the header field are split, and the packet data is moved to the bus for output through clock-domain-crossing and bit-width-conversion processing until the whole data transfer is completed;
5) the read pointer Ri is updated according to the total storage length of the dequeued packet:
when Ri plus the dequeue packet length and header field length is greater than the end tail address Li, the read pointer is updated to the current read pointer Ri + dequeue packet length and header field length - end tail address Li + start head address Fi;
when Ri plus the dequeue packet length and header field length is less than or equal to the end tail address Li, the read pointer is updated to the current read pointer Ri + dequeue packet length and header field length.
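A corresponding C sketch of the dequeue judgment and read-pointer update, again built on the earlier fixed_region_t and region_occupied helpers; the actual DDR3 read, the header/data split and the clock-domain-crossing processing are elided, and stored_len is assumed to come from the header query in step 3).

/* Dequeue judgment and read-pointer update for the head packet of one queue. */
static bool region_dequeue(fixed_region_t *q, uint32_t stored_len)
{
    if (region_occupied(q) == 0)        /* region empty: nothing to dequeue,    */
        return false;                   /* caller returns to step 1) and polls  */

    if (q->R + stored_len > q->L)       /* wrap past end tail address Li        */
        q->R = q->R + stored_len - q->L + q->F;
    else
        q->R = q->R + stored_len;
    return true;
}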
Another object of the present invention is to provide a program storage medium for receiving user input, the stored computer program causing an electronic device to execute the DDR3 SDRAM-based queue management method, comprising the following steps:
step one, statically dividing the buffer space of the off-chip DDR3 SDRAM used for storing data packets into N equal-sized fixed buffer regions according to the number of queues;
step two, setting a write pointer Wi and a read pointer Ri for each fixed buffer region, the write pointer Wi and the read pointer Ri corresponding respectively to packet enqueue and dequeue operations;
step three, when a data packet is enqueued, executing the relevant enqueue processing operations;
step four, when a data packet is dequeued, executing the relevant dequeue processing operations.
It is another object of the present invention to provide a computer program product stored on a computer readable medium, comprising a computer readable program which, when executed on an electronic device, provides a user input interface to implement the DDR3 SDRAM-based queue management method.
Combining all the above technical schemes, the advantages and positive effects of the invention are: the invention manages the DDR3 SDRAM as static queues in a fixed-allocation manner, with the packet information and packet data stored contiguously in the divided fixed buffer regions. Compared with a traditional shared-cache queue management mechanism, the invention minimizes the on-chip FPGA resources occupied by storing cache information and improves the utilization of the DDR3 SDRAM buffer space.
Meanwhile, the invention also has the following technical effects:
1) Queue management of the DDR3 SDRAM in a fixed-allocation manner keeps the effective data of variable-length packets stored contiguously, reduces the internal fragmentation of fixed-length-unit storage, and improves storage-space utilization.
2) Only the read/write pointers and the necessary packet information of each queue in the DDR3 SDRAM need to be maintained on chip, which reduces the fixed-length-unit cache information and packet information kept on chip and relieves the shortage of on-chip resources.
3) Compared with dynamic cache management, storage in a fixed-allocation manner is simpler to operate and has low complexity.
Drawings
Fig. 1 is a flowchart of a method for queue management based on DDR3SDRAM according to an embodiment of the present invention.
Fig. 2 is a flow chart of enqueuing according to an embodiment of the present invention.
Fig. 3 is a flowchart of a write pointer update provided by an embodiment of the present invention.
Fig. 4 is a flow chart of dequeuing according to an embodiment of the present invention.
Fig. 5 is a flowchart of a read pointer update process according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In view of the problems in the prior art, the present invention provides a queue management method, medium, and apparatus based on DDR3SDRAM, and the present invention is described in detail below with reference to the accompanying drawings.
Those skilled in the art may also implement the DDR3 SDRAM-based queue management method with other steps; the method shown in Fig. 1 is merely one specific example of the queue management method provided by the present invention.
As shown in fig. 1, a method for managing a queue based on DDR3SDRAM according to an embodiment of the present invention includes:
s101: the buffer space of the off-chip DDR3SDRAM for storing data packets is statically divided into N fixed buffer intervals with equal size according to the number of queues.
S102: a write pointer Wi and a read pointer Ri are set for each fixed buffer region; the write pointer Wi and the read pointer Ri correspond respectively to packet enqueue and dequeue operations.
S103: upon enqueuing of a data packet, the associated enqueue processing operation is performed.
S104: upon dequeuing of a data packet, the associated dequeue processing operation is performed.
In S101 provided in the embodiment of the present invention, each fixed buffer region corresponds to one logical queue and its addresses are contiguous; the start head address Fi and end tail address Li of each fixed buffer region are recorded, and their difference is the fixed buffer region size S, that is, Li - Fi = S, with 0 ≤ i ≤ N-1.
In S102 provided in the embodiment of the present invention, the write pointer Wi points to the start position of the fixed buffer region that is currently writable, and the read pointer Ri points to the start position that is currently readable, where Fi ≤ Ri ≤ Wi ≤ Li and 0 ≤ i ≤ N-1; at the initial moment the write pointer and the read pointer coincide and both equal the start head address Fi of the fixed buffer region.
In S103 provided by the embodiment of the present invention, the relevant enqueue processing operations are executed as follows:
(1) when a data packet is to be enqueued, the relevant control information such as the output destination port number, the priority and the packet length is extracted;
(2) the queue number of the packet is determined according to the output destination port number and the priority, and a packet header field is established in the following format (an illustrative packing sketch follows step (6) below):
Table 1. Packet header field format: | packet length | queue number | reserved |
(3) the write pointer Wi and the read pointer Ri of the corresponding fixed buffer region are queried according to the queue number, and the occupied part of the current fixed buffer region, i.e. the difference between the write pointer Wi and the read pointer Ri, is calculated;
(4) the enqueue judgment is performed according to the packet length:
if the occupied part of the current fixed buffer region is 0, the region is empty and the current enqueue packet is the first packet of the queue; the current packet length and header field length are recorded and the enqueue is allowed;
if the occupied part of the current fixed buffer region plus the packet length and header field length is greater than the fixed buffer region size S, the queue would overflow and the enqueue fails;
if the occupied part of the current fixed buffer region plus the packet length and header field length is less than or equal to the fixed buffer region size S, the queue does not overflow and the enqueue succeeds;
(5) the write pointer Wi is updated according to the judgment result:
if the enqueue succeeds, the update of the write pointer Wi splits into two cases: when Wi plus the enqueue packet length and header field length is greater than the end tail address Li, the write pointer is updated to the current write pointer Wi + enqueue packet length and header field length - end tail address Li + start head address Fi; when Wi plus the enqueue packet length and header field length is less than or equal to the end tail address Li, the write pointer is updated to the current write pointer Wi + enqueue packet length and header field length;
if the enqueue fails, the write pointer is not updated and keeps its original value.
(6) the data packet is processed differently according to the judgment result:
after the enqueue judgment according to the packet length, a packet that is successfully enqueued has its header field and packet data encapsulated and combined, and the combined packet data is stored into the DDR3 SDRAM through clock-domain-crossing and bit-width-conversion processing until the whole data transfer is completed; a packet that fails to enqueue is discarded after the enqueue judgment.
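The header field established in step (2) (Table 1) can be packed, for example, as in the following C sketch. The bit widths of the packet-length, queue-number and reserved fields are assumptions chosen only for illustration; the invention does not fix them here.

#include <stdint.h>

/* Pack the packet header field of Table 1: | packet length | queue number | reserved |.
   Field widths (16-bit length, 8-bit queue number, 40 reserved bits) are illustrative. */
static uint64_t build_header_field(uint16_t pkt_len, uint8_t queue_no)
{
    uint64_t hdr = 0;
    hdr |= (uint64_t)pkt_len  << 48;    /* packet length                        */
    hdr |= (uint64_t)queue_no << 40;    /* queue number                         */
    /* low 40 bits remain zero: reserved field                                  */
    return hdr;
}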
In S104 provided in the embodiment of the present invention, when a data packet is dequeued, the relevant dequeue processing operations are executed as follows:
1) at dequeue time, a dequeue queue number is first obtained according to a certain scheduling strategy (one possible strategy is sketched after these steps);
2) the write pointer Wi and the read pointer Ri of the corresponding fixed buffer region are queried according to the dequeue queue number, the dequeue judgment is performed, and the occupied part of the current fixed buffer region, i.e. the difference between the write pointer Wi and the read pointer Ri, is calculated;
if the occupied part of the current fixed buffer region is 0, the region is empty, no packet needs to be dequeued, and the procedure returns to 1) to continue polling;
if the occupied part of the current fixed buffer region is not 0, the region is not empty, a packet is to be dequeued, and the procedure continues with step 3);
3) the total storage length of the first packet in the current queue is queried according to the dequeue queue number;
4) the encapsulated and combined packet is read completely from the DDR3 SDRAM according to the queried storage length, the packet data and the header field are split, and the packet data is moved to the bus for output through clock-domain-crossing and bit-width-conversion processing until the whole data transfer is completed;
5) the read pointer Ri is updated according to the total storage length of the dequeued packet:
when Ri plus the dequeue packet length and header field length is greater than the end tail address Li, the read pointer is updated to the current read pointer Ri + dequeue packet length and header field length - end tail address Li + start head address Fi;
when Ri plus the dequeue packet length and header field length is less than or equal to the end tail address Li, the read pointer is updated to the current read pointer Ri + dequeue packet length and header field length.
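Step 1) only requires "a certain scheduling strategy"; one possible choice is the simple round-robin polling sketched below (an assumption for illustration), which skips empty regions using region_occupied from the earlier sketch and returns -1 when every queue is empty.

/* Round-robin selection of the next dequeue queue number; *cursor keeps the
   position between calls. Returns -1 when all fixed regions are empty.       */
static int next_dequeue_queue(const fixed_region_t *q, uint32_t n, uint32_t *cursor)
{
    for (uint32_t tried = 0; tried < n; tried++) {
        uint32_t i = (*cursor + tried) % n;
        if (region_occupied(&q[i]) != 0) {       /* non-empty: schedule queue i */
            *cursor = (i + 1) % n;
            return (int)i;
        }
    }
    return -1;
}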
It should be noted that the embodiments of the present invention can be realized by hardware, software, or a combination of software and hardware. The hardware portion may be implemented using dedicated logic; the software portion may be stored in a memory and executed by a suitable instruction execution system, such as a microprocessor or specially designed hardware. Those skilled in the art will appreciate that the apparatus and methods described above may be implemented using computer-executable instructions and/or embodied in processor control code, such code being provided, for example, on a carrier medium such as a disk, CD- or DVD-ROM, programmable memory such as read-only memory (firmware), or a data carrier such as an optical or electronic signal carrier. The apparatus and its modules of the present invention may be implemented by hardware circuits such as very-large-scale integrated circuits or gate arrays, semiconductors such as logic chips and transistors, or programmable hardware devices such as field-programmable gate arrays and programmable logic devices, or by software executed by various types of processors, or by a combination of hardware circuits and software, for example firmware.
The above description is only intended to illustrate specific embodiments of the present invention and is not intended to limit its scope of protection; any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall fall within the scope of the appended claims.

Claims (10)

1. A queue management method based on DDR3SDRAM is characterized in that the queue management method based on DDR3SDRAM comprises:
statically dividing the buffer space of the off-chip DDR3 SDRAM used for storing data packets into N equal-sized fixed buffer regions according to the number of queues;
setting a write pointer Wi and a read pointer Ri for each fixed buffer region, the write pointer Wi and the read pointer Ri corresponding respectively to packet enqueue and dequeue operations;
when the data packet is enqueued, relevant enqueue processing operation is executed;
upon dequeuing of a data packet, the associated dequeue processing operation is performed.
2. The DDR3 SDRAM-based queue management method of claim 1, wherein the buffer space of the off-chip DDR3 SDRAM used for storing data packets is statically divided into N equal-sized fixed buffer regions according to the number of queues, each fixed buffer region corresponds to one logical queue and its addresses are contiguous, the start head address Fi and end tail address Li of each fixed buffer region are recorded, and their difference is the fixed buffer region size S, that is, Li - Fi = S, with 0 ≤ i ≤ N-1.
3. The DDR3 SDRAM-based queue management method of claim 1, wherein a write pointer Wi and a read pointer Ri are set for each fixed buffer region and correspond respectively to packet enqueue and dequeue operations; the write pointer Wi points to the start position of the fixed buffer region that is currently writable, and the read pointer Ri points to the start position that is currently readable, where Fi ≤ Ri ≤ Wi ≤ Li and 0 ≤ i ≤ N-1; at the initial moment the write pointer and the read pointer coincide and both equal the start head address Fi of the fixed buffer region.
4. The DDR3 SDRAM-based queue management method of claim 1, wherein executing the relevant enqueue processing operations when a data packet is enqueued comprises:
(1) when a data packet is to be enqueued, extracting the relevant control information such as the output destination port number, the priority and the packet length;
(2) determining the queue number of the packet according to the output destination port number and the priority, and establishing a packet header field;
(3) querying the write pointer Wi and the read pointer Ri of the corresponding fixed buffer region according to the queue number, and calculating the occupied part of the current fixed buffer region, i.e. the difference between the write pointer Wi and the read pointer Ri;
(4) performing the enqueue judgment according to the packet length;
(5) updating the write pointer Wi according to the judgment result;
(6) processing the data packet differently according to the judgment result.
5. The DDR3 SDRAM-based queue management method of claim 4, wherein the enqueue judgment according to the packet length proceeds as follows:
if the occupied part of the current fixed buffer region is 0, the region is empty and the current enqueue packet is the first packet of the queue; the current packet length and header field length are recorded and the enqueue is allowed;
if the occupied part of the current fixed buffer region plus the packet length and header field length is greater than the fixed buffer region size S, the queue would overflow and the enqueue fails;
if the occupied part of the current fixed buffer region plus the packet length and header field length is less than or equal to the fixed buffer region size S, the queue does not overflow and the enqueue succeeds.
6. The DDR3 SDRAM-based queue management method of claim 4, wherein the write pointer Wi is updated according to the judgment result as follows: if the enqueue succeeds, the update of the write pointer Wi splits into two cases: when Wi plus the enqueue packet length and header field length is greater than the end tail address Li, the write pointer is updated to the current write pointer Wi + enqueue packet length and header field length - end tail address Li + start head address Fi; when Wi plus the enqueue packet length and header field length is less than or equal to the end tail address Li, the write pointer is updated to the current write pointer Wi + enqueue packet length and header field length;
if the enqueue fails, the write pointer is not updated and keeps its original value.
7. The DDR3 SDRAM-based queue management method of claim 4, wherein the data packet is processed differently according to the judgment result, as follows: after the enqueue judgment according to the packet length, a packet that is successfully enqueued has its header field and packet data encapsulated and combined, and the combined packet data is stored into the DDR3 SDRAM through clock-domain-crossing and bit-width-conversion processing until the whole data transfer is completed; a packet that fails to enqueue is discarded after the enqueue judgment.
8. The DDR3 SDRAM-based queue management method of claim 1, wherein executing the relevant dequeue processing operations when a data packet is dequeued comprises:
1) at dequeue time, a dequeue queue number is first obtained according to a certain scheduling strategy;
2) the write pointer Wi and the read pointer Ri of the corresponding fixed buffer region are queried according to the dequeue queue number, the dequeue judgment is performed, and the occupied part of the current fixed buffer region, i.e. the difference between the write pointer Wi and the read pointer Ri, is calculated;
if the occupied part of the current fixed buffer region is 0, the region is empty, no packet needs to be dequeued, and the procedure returns to 1) to continue polling;
if the occupied part of the current fixed buffer region is not 0, the region is not empty, a packet is to be dequeued, and the procedure continues with step 3);
3) the total storage length of the first packet in the current queue is queried according to the dequeue queue number;
4) the encapsulated and combined packet is read completely from the DDR3 SDRAM according to the queried storage length, the packet data and the header field are split, and the packet data is moved to the bus for output through clock-domain-crossing and bit-width-conversion processing until the whole data transfer is completed;
5) the read pointer Ri is updated according to the total storage length of the dequeued packet:
when Ri plus the dequeue packet length and header field length is greater than the end tail address Li, the read pointer is updated to the current read pointer Ri + dequeue packet length and header field length - end tail address Li + start head address Fi;
when Ri plus the dequeue packet length and header field length is less than or equal to the end tail address Li, the read pointer is updated to the current read pointer Ri + dequeue packet length and header field length.
9. A program storage medium for receiving user input, the stored computer program causing an electronic device to execute the DDR3 SDRAM-based queue management method of any one of claims 1 to 8, comprising the following steps:
step one, statically dividing the buffer space of the off-chip DDR3 SDRAM used for storing data packets into N equal-sized fixed buffer regions according to the number of queues;
step two, setting a write pointer Wi and a read pointer Ri for each fixed buffer region, the write pointer Wi and the read pointer Ri corresponding respectively to packet enqueue and dequeue operations;
step three, when a data packet is enqueued, executing the relevant enqueue processing operations;
step four, when a data packet is dequeued, executing the relevant dequeue processing operations.
10. A computer program product stored on a computer readable medium, comprising a computer readable program which, when executed on an electronic device, provides a user input interface to implement the DDR3 SDRAM-based queue management method of any one of claims 1 to 8.
CN202110270207.0A 2021-03-12 2021-03-12 DDR3 SDRAM-based queue management method, medium and equipment Active CN113126911B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110270207.0A CN113126911B (en) 2021-03-12 2021-03-12 DDR3 SDRAM-based queue management method, medium and equipment


Publications (2)

Publication Number Publication Date
CN113126911A true CN113126911A (en) 2021-07-16
CN113126911B CN113126911B (en) 2023-04-28

Family

ID=76773050

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110270207.0A Active CN113126911B (en) 2021-03-12 2021-03-12 DDR3 SDRAM-based queue management method, medium and equipment

Country Status (1)

Country Link
CN (1) CN113126911B (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7730238B1 (en) * 2005-10-07 2010-06-01 Agere System Inc. Buffer management method and system with two thresholds
US20150281126A1 (en) * 2014-03-31 2015-10-01 Plx Technology, Inc. METHODS AND APPARATUS FOR A HIGH PERFORMANCE MESSAGING ENGINE INTEGRATED WITHIN A PCIe SWITCH
CN112385186A (en) * 2018-07-03 2021-02-19 华为技术有限公司 Apparatus and method for ordering data packets
CN109144749A (en) * 2018-08-14 2019-01-04 苏州硅岛信息科技有限公司 A method of it is communicated between realizing multiprocessor using processor
CN112084136A (en) * 2020-07-23 2020-12-15 西安电子科技大学 Queue cache management method, system, storage medium, computer device and application

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
KO, YOUSUN et al.: "LaminarIR: Compile-Time Queues for Structured Streams", ACM SIGPLAN Notices *
ZHANG Wenwen: "Research on FPGA High-Speed Large-Capacity External Data Caching Technology", Wanfang dissertation database *
WANG Leitao et al.: "Design and Implementation of a DDR3 Shared-Memory Switching Fabric for Satellite-Borne Switches", Communications Technology (通信技术) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114827300A (en) * 2022-03-20 2022-07-29 西安电子科技大学 Hardware-guaranteed data reliable transmission system, control method, equipment and terminal
CN114827300B (en) * 2022-03-20 2023-09-01 西安电子科技大学 Data reliable transmission system, control method, equipment and terminal for hardware guarantee
CN115396384A (en) * 2022-07-28 2022-11-25 广东技术师范大学 Data packet scheduling method, system and storage medium
CN115396384B (en) * 2022-07-28 2023-11-28 广东技术师范大学 Data packet scheduling method, system and storage medium

Also Published As

Publication number Publication date
CN113126911B (en) 2023-04-28

Similar Documents

Publication Publication Date Title
US8325603B2 (en) Method and apparatus for dequeuing data
US6307789B1 (en) Scratchpad memory
US7555579B2 (en) Implementing FIFOs in shared memory using linked lists and interleaved linked lists
US7277982B2 (en) DRAM access command queuing structure
WO2021209051A1 (en) On-chip cache device, on-chip cache read/write method, and computer readable medium
CN112084136B (en) Queue cache management method, system, storage medium, computer device and application
KR20160117108A (en) Method and apparatus for using multiple linked memory lists
CN113126911B (en) DDR3 SDRAM-based queue management method, medium and equipment
CN110058816B (en) DDR-based high-speed multi-user queue manager and method
CN111949568A (en) Message processing method and device and network chip
US20030056073A1 (en) Queue management method and system for a shared memory switch
TWI536772B (en) Directly providing data messages to a protocol layer
Kornaros et al. A fully-programmable memory management system optimizing queue handling at multi gigabit rates
CN111181874B (en) Message processing method, device and storage medium
CN116955247B (en) Cache descriptor management device and method, medium and chip thereof
CN111694777B (en) DMA transmission method based on PCIe interface
Nikologiannis et al. An FPGA-based queue management system for high speed networking devices
CN114564420A (en) Method for sharing parallel bus by multi-core processor
Mutter A novel hybrid memory architecture with parallel DRAM for fast packet buffers
US10067690B1 (en) System and methods for flexible data access containers
WO2024001414A1 (en) Message buffering method and apparatus, electronic device and storage medium
Shi et al. Optimization of shared memory controller for multi-core system
US7275145B2 (en) Processing element with next and previous neighbor registers for direct data transfer
CN114996011A (en) Method for realizing virtualized DMA controller supporting flexible resource allocation
US20200059437A1 (en) Link layer data packing and packet flow control scheme

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant