CN107220187A - Cache management method, device and field programmable gate array - Google Patents

Cache management method, device and field programmable gate array

Info

Publication number: CN107220187A
Authority: CN (China)
Legal status: Granted; Active
Application number: CN201710364480.3A
Other languages: Chinese (zh)
Other versions: CN107220187B
Inventor: 陈鹏 (Chen Peng)
Original and current assignee: Beijing Star Net Ruijie Networks Co Ltd
Application filed by Beijing Star Net Ruijie Networks Co Ltd
Priority to CN201710364480.3A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00: Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02: Addressing or allocation; Relocation
    • G06F 12/08: Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802: Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0866: Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches, for peripheral storage systems, e.g. disk cache

Abstract

The present invention provides a cache management method, a cache management device and a field programmable gate array, relating to the field of data caching technology, and aims to increase the bandwidth of a caching system. The method includes: when an enqueue packet is received, cutting the enqueue packet into at least one data fragment according to a preset fragment size and the length of the enqueue packet; assigning a data cache channel number to each data fragment; distributing the data fragment to the data cache controller of the corresponding data cache channel according to the data cache channel number assigned to the data fragment; writing the data fragment into the corresponding data cache under the control of the data cache controller; receiving the cache address and cache length returned by the data cache, and generating the buffer descriptor of each data fragment according to the data cache channel number assigned to the data fragment and the cache address and cache length returned by the data cache; and writing the buffer descriptor of each data fragment into a BD cache. The present invention is used for cache management.

Description

Cache management method, device and field programmable gate array
Technical field
The present invention relates to the field of data caching technology, and in particular to a cache management method, a cache management device and a field programmable gate array.
Background art
With the development of information technology and network technology, caching has become a popular and indispensable field. Caching technology enables high-speed data exchange and therefore greatly improves the response speed of a system. However, the storage capacity of a traditional cache memory is usually limited, so the content it can store is also relatively limited; managing the cache is therefore a hot research problem in this field.
Referring to Fig. 1, a caching system in the prior art includes: a packet input circuit 11, a cache interface circuit 12, a linked list management circuit 13, a scheduling circuit 14, a data output circuit 15, a data cache 16 and a linked list memory 17. The cache management process of the caching system shown in Fig. 1 is as follows. The data cache 16 is sliced into storage units of a fixed size (for example, 512B). When a packet is received, if the packet is smaller than or equal to the size of one storage unit (for example, a 256B packet), it occupies a single storage unit; if the packet is larger than one storage unit (for example, a 2KB packet), it is cut into multiple data fragments (for example, a 2KB packet is cut into four 512B fragments), and the fragments are each stored in a storage unit through the cache interface circuit 12; the linked list management circuit 13 then links the storage units holding the fragments into a linked list and sends it to the scheduling circuit 14 and the linked list memory 17, completing the write of the packet. When a packet needs to be read from the data cache 16, the scheduling circuit 14 reads the data from the corresponding storage units according to the linked list information in the linked list memory 17, the data cache 16 reclaims the corresponding storage units, and the data output circuit 15 splices the read data fragments into a complete packet and outputs it, completing the read of the packet.
Although the above cache management system can bring the performance of any number of queues close to the ideal, the bandwidth it provides is extremely limited. Specifically, when a 64-bit-wide DDR4 runs at 2400Mbps, the physical bandwidth is only 153Gbps, and the bandwidth utilization of a 64-bit-wide DDR controller on short 64-byte packets is very low, only about 20%; the actual bidirectional bandwidth it can provide is therefore only 153*0.2=30.72Gbps, or only about 15Gbps in each direction.
Summary of the invention
Embodiments of the invention provide a cache management method, a cache management device and a field programmable gate array, for increasing the bandwidth of a caching system.
To achieve the above object, embodiments of the invention adopt the following technical solutions:
In a first aspect, a cache management method is provided for managing a caching system, the caching system including multiple data cache channels, each data cache channel including a data cache and a data cache controller, and each data cache channel having a unique data cache channel number; the method includes:
when an enqueue packet is received, cutting the enqueue packet into at least one data fragment according to a preset fragment size and the length of the enqueue packet;
assigning the data cache channel number to each data fragment;
distributing the data fragment to the data cache controller of the corresponding data cache channel according to the data cache channel number assigned to the data fragment;
writing the data fragment into the corresponding data cache under the control of the data cache controller;
receiving the cache address and cache length returned by the data cache, and generating the buffer descriptor of each data fragment according to the data cache channel number assigned to the data fragment and the cache address and cache length returned by the data cache;
writing the buffer descriptor of each data fragment into a BD cache.
Optionally, assigning the data cache channel number to each data fragment includes:
assigning the multiple data cache channel numbers to the data fragments in a round-robin manner.
Optionally, the method further includes:
on power-up, randomly assigning to each queue a data cache channel number as the current-state data cache channel number of that queue;
after the multiple data cache channel numbers have been assigned to the data fragments in round-robin, updating the current-state data cache channel number of the queue to which the enqueue packet belongs to the data cache channel number following the one at which the round-robin ended;
the assigning of the multiple data cache channel numbers to the data fragments in round-robin includes:
assigning the multiple data cache channel numbers to the data fragments in round-robin, starting from the current-state data cache channel number of the queue to which the enqueue packet belongs.
Optionally, each data cache includes multiple memory banks, and each memory bank includes a recovery area and an unused area, where the recovery area stores the cache addresses of reclaimed cache space and the unused area stores the cache addresses of not-yet-used cache space; writing the data fragment into the corresponding data cache under the control of the data cache controller includes:
controlling the multiple memory banks in the data cache to allocate cache addresses to the data fragments in round-robin;
judging whether the recovery area of the memory bank of the data cache corresponding to the data cache channel number of each data fragment holds a cache address;
if so, taking a cache address from the recovery area and allocating it to the data fragment;
if not, taking a cache address from the unused area and allocating it to the data fragment;
writing the data fragment, under the control of the data cache controller, into the cache space indicated by the cache address allocated to it.
Optionally, the caching system includes multiple BD cache channels, each BD cache channel including a BD cache controller and a BD cache;
writing the buffer descriptor of each data fragment into the BD cache includes:
sorting the data cache channel numbers of the data fragments according to the order of the data fragments in the enqueue packet;
the BD cache controller of each BD cache channel writing the buffer descriptor of each data fragment into the corresponding BD cache according to the order of the data cache channel numbers of the data fragments.
Optionally, the BD cache controller of each BD cache channel writing the buffer descriptor of each data fragment into the corresponding BD cache according to the order of the data cache channel numbers includes (a software sketch of the block chaining follows this list):
dividing the multiple BD caches into multiple cache blocks according to a preset cache size;
allocating a cache block, a head cache address and a tail cache address to each queue, and recording the head cache address and tail cache address allocated to each queue; where different queues are allocated different cache blocks, and the cache space indicated by the head cache address and the tail cache address of any queue lies inside the cache block allocated to that queue;
when a buffer descriptor is received, obtaining the head cache address of a first queue, the first queue being the queue to which the enqueue packet of the data fragment of the received buffer descriptor belongs;
writing the received buffer descriptor into the cache space indicated by the head cache address of the first queue;
judging whether the remaining cache space of a first cache block is smaller than or equal to the size of one buffer descriptor, the first cache block being the cache block allocated to the first queue;
if not, updating the head cache address of the first queue to the next cache address in the first cache block;
if so, allocating a second cache block to the first queue, writing the block address of the second cache block into the first cache block, and updating the head cache address of the first queue to the first cache address in the second cache block;
writing the buffer descriptors of the data fragments in turn, in the order of their data cache channel numbers, into the cache space indicated by the head cache address of the first queue.
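To make the block chaining concrete, the following is a minimal software model of the head-address update when one BD is written (C; not from the patent: BD_SIZE, BLOCK_SIZE, the flat byte-array model of the BD cache and the sequential block allocator are all illustrative assumptions):

    #include <stdint.h>
    #include <string.h>

    #define BD_SIZE    16   /* bytes per buffer descriptor (assumed)        */
    #define BLOCK_SIZE 256  /* bytes per cache block, the preset cache size */

    static uint8_t  bd_cache[1 << 20];           /* model of the BD cache   */
    static uint32_t next_free_block = BLOCK_SIZE;

    static uint32_t alloc_block(void)            /* stand-in for the        */
    {                                            /* round-robin allocator   */
        uint32_t a = next_free_block;
        next_free_block += BLOCK_SIZE;
        return a;
    }

    /* Write one BD at the queue's head cache address and return the new
     * head address; when at most one BD of space would remain in the
     * block, chain a second block and store its address in the first. */
    uint32_t write_bd(uint32_t head_addr, const uint8_t bd[BD_SIZE])
    {
        memcpy(&bd_cache[head_addr], bd, BD_SIZE);
        uint32_t left = BLOCK_SIZE - head_addr % BLOCK_SIZE - BD_SIZE;
        if (left > BD_SIZE)
            return head_addr + BD_SIZE;          /* next address in block   */
        uint32_t nb = alloc_block();             /* second cache block      */
        memcpy(&bd_cache[head_addr + BD_SIZE], &nb, sizeof nb);
        return nb;                               /* first address in new block */
    }

With BLOCK_SIZE/BD_SIZE = 16 slots, each block holds 15 BDs plus one slot storing the address of the next block; this is the linked structure the dequeue side later walks back through the tail cache address.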
Optionally, the method further includes:
when allocating the second cache block to the first queue, selecting the cache block from among the multiple BD caches in round-robin.
Optionally, the method further includes (a sequential sketch of this dequeue flow is given after this list):
receiving a dequeue command, the dequeue command instructing that a dequeue packet be read back from the data cache and including the queue information of the dequeue packet;
obtaining a second queue according to the queue information of the dequeue packet, the second queue being the queue to which the dequeue packet belongs;
obtaining the tail cache address of the second queue;
reading the buffer descriptor in the cache space indicated by the tail cache address of the second queue;
judging whether all the buffer descriptors in a third cache block have been taken out, the third cache block being the cache block containing the cache space indicated by the tail cache address of the second queue;
if not, updating the tail cache address of the second queue to the next cache address in the third cache block;
if so, obtaining the cache block address stored in the third cache block, and updating the tail cache address of the second queue to the first cache address in the cache block indicated by that block address;
obtaining the number of data fragments contained in the dequeue packet according to the enqueue information of the second queue;
judging whether the number of buffer descriptors read from the cache space indicated by the tail cache address of the second queue equals the number of data fragments contained in the dequeue packet;
if so, distributing the buffer descriptors read from the cache space indicated by the tail cache address of the second queue to the data cache controllers of the corresponding data cache channels;
reading the data fragments from the corresponding data caches under the control of the data cache controllers, according to the buffer descriptors read from the cache space indicated by the tail cache address of the second queue;
outputting the data fragments in turn according to the order of the data cache channel numbers in the buffer descriptors read.
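A sequential software sketch of the dequeue flow above (C; all the helper functions are assumed stand-ins for the hardware blocks, declared extern only to keep the sketch compilable; the patent describes circuits, not software):

    #include <stdint.h>

    typedef struct { uint8_t chan; uint32_t addr; uint16_t len; } bd_t;

    extern unsigned fragments_in_packet(unsigned qid);  /* from enqueue info  */
    extern bd_t     read_bd_at_tail(unsigned qid);      /* reads one BD and   */
                                                        /* advances the tail, */
                                                        /* following the block*/
                                                        /* chain when a block */
                                                        /* has been emptied   */
    extern void     dispatch_to_channel(bd_t bd);       /* per-channel read   */
    extern void     emit_in_channel_order(unsigned qid, unsigned n);

    /* Dequeue one packet from queue qid, per the optional steps above. */
    void dequeue_packet(unsigned qid)
    {
        unsigned n = fragments_in_packet(qid);   /* fragments in the packet */
        for (unsigned i = 0; i < n; i++) {
            bd_t bd = read_bd_at_tail(qid);
            dispatch_to_channel(bd);             /* data cache controller   */
        }                                        /* fetches the fragment    */
        emit_in_channel_order(qid, n);           /* reassemble and output   */
    }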
Optionally, the method further includes:
recording the accumulated write data volume and accumulated read data volume of each data cache;
when a data cache is in the idle state: if its accumulated read data volume is zero, or its accumulated read data volume exceeds a preset data volume and a data fragment is detected that needs to be written, the data cache jumps to the write state and clears the accumulated read data volume; if its accumulated write data volume is zero and a data fragment needs to be read, the data cache jumps to the read state and clears the accumulated write data volume;
when a data cache is in the write state, judging whether its accumulated write data volume exceeds the preset data volume; when it does, the data cache jumps to the idle state;
when a data cache is in the read state, judging whether its accumulated read data volume exceeds the preset data volume; when it does, the data cache jumps to the idle state.
In a second aspect, a cache management device is provided for managing a caching system, the caching system including multiple data cache channels, each data cache channel including a data cache and a data cache controller, and each data cache channel having a unique data cache channel number; the cache management device includes:
an enqueue data slicing circuit, configured to, when an enqueue packet is received, cut the enqueue packet into at least one data fragment according to a preset fragment size and the length of the enqueue packet;
a data cache channel number query circuit, configured to assign the data cache channel number to each data fragment;
an enqueue distribution circuit, configured to distribute the data fragment to the data cache controller of the corresponding data cache channel according to the data cache channel number assigned to the data fragment;
the data cache controller, configured to write the data fragment into the corresponding data cache;
a linked list management circuit, configured to receive the cache address and cache length returned by the data cache, and generate the buffer descriptor of each data fragment according to the data cache channel number assigned to the data fragment and the cache address and cache length returned by the data cache;
a BD cache controller, configured to write the buffer descriptor of each data fragment into a BD cache.
Optionally, the caching system includes multiple BD cache channels, each BD cache channel including a BD cache controller and a BD cache; the cache management device further includes a fragment address reordering circuit;
the fragment address reordering circuit is configured to sort the data cache channel numbers of the data fragments according to the order of the data fragments in the enqueue packet;
the BD cache controller of each BD cache channel is configured to write the buffer descriptor of each data fragment into the corresponding BD cache according to the order of the data cache channel numbers of the data fragments.
Optionally, the BD cache controller is specifically configured to: divide the multiple BD caches into multiple cache blocks according to a preset cache size; allocate a cache block and a head cache address to each queue, where different queues are allocated different cache blocks and the cache space indicated by the head cache address of any queue lies inside the cache block allocated to that queue; when a buffer descriptor is received, obtain the head cache address of a first queue, the first queue being the queue to which the enqueue packet of the data fragment of the received buffer descriptor belongs; write the received buffer descriptor into the cache space indicated by the head cache address of the first queue, and record the head cache address first allocated to the first queue as the tail cache address of the first queue; judge whether the remaining cache space of a first cache block is smaller than or equal to the size of one buffer descriptor, the first cache block being the cache block allocated to the first queue; if not, update the head cache address of the first queue to the next cache address in the first cache block; if so, allocate a second cache block to the first queue, write the block address of the second cache block into the first cache block, and update the head cache address of the first queue to the first cache address in the second cache block; and write the buffer descriptors of the data fragments in turn, in the order of their data cache channel numbers, into the cache space indicated by the head cache address of the first queue.
Optionally, the caching system further includes a scheduling unit;
the scheduling unit is configured to record the enqueue information of each queue, receive a dequeue command, obtain a second queue according to the queue information of the dequeue packet, and obtain the number of data fragments contained in the dequeue packet according to the enqueue information of the second queue;
the cache management device further includes a dequeue distribution circuit and a dequeue data reassembly circuit;
the linked list management circuit is further configured to obtain the tail cache address of the second queue and distribute the tail cache address of the second queue to the BD cache controller; where the dequeue command instructs that a dequeue packet be read back from the data cache and includes the queue information of the dequeue packet, and the second queue is the queue to which the dequeue packet belongs;
the BD cache controller is further configured to read the buffer descriptor in the cache space indicated by the tail cache address of the second queue; judge whether all the buffer descriptors in a third cache block have been taken out, the third cache block being the cache block containing the cache space indicated by the tail cache address of the second queue; if not, update the tail cache address of the second queue to the next cache address in the third cache block; if so, obtain the cache block address stored in the third cache block and update the tail cache address of the second queue to the first cache address in the cache block indicated by that block address; and judge whether the number of buffer descriptors read from the cache space indicated by the tail cache address of the second queue equals the number of data fragments of the dequeue packet obtained from the length information of the dequeue packet;
the dequeue distribution circuit is configured to, when the number of buffer descriptors read from the cache space indicated by the tail cache address of the second queue equals the number of data fragments of the dequeue packet obtained from the length information of the dequeue packet, distribute the buffer descriptors read from the cache space indicated by the tail cache address of the second queue to the data cache controllers of the corresponding data cache channels;
the data cache controller is further configured to read the data fragments from the corresponding data cache according to the buffer descriptors read from the cache space indicated by the tail cache address of the second queue;
the dequeue data reassembly circuit is configured to output the data fragments in turn according to the order of the data cache channel numbers in the buffer descriptors read.
In a third aspect, a field programmable gate array is provided, including the cache management device of any implementation of the second aspect.
The cache management method provided by embodiments of the invention is used to manage a caching system that includes multiple data cache channels, each data cache channel including a data cache and a data cache controller. After an enqueue packet is received and cut into data fragments, each data fragment is first distributed, according to the data cache channel number assigned to it, to the data cache controller of the corresponding data cache channel, and is then written into the corresponding data cache under the control of that data cache controller; that is, the data cache controller in each data cache channel independently controls the writing of data into the data cache of that channel. Because the cache management method provided by the invention provides multiple independent data cache channels, it can offer higher bandwidth than a single data cache channel; and because the cache addresses of each channel are managed per channel rather than allocated centrally across channels, the bandwidth utilization of each channel can be maximized. The cache management method provided by embodiments of the invention can therefore increase the bandwidth of the caching system.
Brief description of the drawings
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative work.
Fig. 1 is a schematic diagram of a caching system in the prior art;
Fig. 2 is a first hardware framework diagram for the cache management method provided by an embodiment of the present invention;
Fig. 3 is a first flow chart of the steps of the cache management method provided by an embodiment of the present invention;
Fig. 4 is a schematic diagram of the pseudorandom number generation circuit provided by an embodiment of the present invention;
Fig. 5 is a circuit diagram of the random code generation module provided by an embodiment of the present invention;
Fig. 6 is a schematic diagram of the data cache provided by an embodiment of the present invention;
Fig. 7 is a first state transition diagram of the data cache provided by an embodiment of the present invention;
Fig. 8 is a second hardware framework diagram for the cache management method provided by an embodiment of the present invention;
Fig. 9 is a third hardware framework diagram for the cache management method provided by an embodiment of the present invention;
Fig. 10 is a second flow chart of the steps of the cache management method provided by an embodiment of the present invention;
Fig. 11 is a schematic diagram of the BD linked list provided by an embodiment of the present invention;
Fig. 12 is a third flow chart of the steps of the cache management method provided by an embodiment of the present invention;
Fig. 13 is a second state transition diagram of the data cache provided by an embodiment of the present invention;
Fig. 14 is a fourth hardware framework diagram for the cache management method provided by an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only some of the embodiments of the invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the scope of protection of the invention.
The general hardware framework of the application scenario of the cache management method provided by embodiments of the present invention is briefly described first.
Referring to Fig. 2, the hardware of the application scenario of the cache management method provided by an embodiment of the present invention includes, inside a field programmable gate array (FPGA): an enqueue data slicing circuit 201, a data cache channel number query circuit 202, an enqueue distribution circuit 203, a fragment address reordering circuit 204, a linked list management circuit 205, multiple buffer descriptor (BD) cache controllers 206 (Fig. 2 shows 2 BD cache controllers as an example), a dequeue distribution circuit 207, multiple data cache controllers 208 (Fig. 2 shows 4 data cache controllers as an example) and a dequeue data reassembly circuit 209; and, outside the FPGA: data caches 210 in one-to-one correspondence with the data cache controllers 208, BD caches 211 in one-to-one correspondence with the BD cache controllers 206, and a scheduling unit 212.
The connections among the units, modules and circuits in the above hardware framework are as follows: the enqueue data slicing circuit 201 is connected to the data cache channel number query circuit 202; the data cache channel number query circuit 202 is connected to the enqueue distribution circuit 203; the enqueue distribution circuit 203 is connected to each of the multiple data cache controllers 208 and to the fragment address reordering circuit 204; the fragment address reordering circuit 204 is connected to the linked list management circuit 205; the linked list management circuit 205 is connected to each of the multiple BD cache controllers 206 and to the scheduling unit 212; the dequeue distribution circuit 207 is connected to each of the multiple data cache controllers 208 and to the scheduling unit 212; the dequeue data reassembly circuit 209 is connected to each of the multiple data cache controllers 208; each data cache controller 208 is also connected to a data cache 210, forming a data cache channel; each BD cache controller 206 is also connected to a BD cache 211, forming a BD cache channel.
The main functions of the units, modules and circuits in the above hardware framework are:
Enqueue data slicing circuit 201: cuts the incoming packets of each queue into data fragments of a fixed size.
Data cache channel number query circuit 202: looks up the initial data cache channel number of each queue, and assigns data cache channel numbers to the data fragments of each queue starting from that initial channel number.
Enqueue distribution circuit 203: distributes each data fragment to its data cache channel according to the data cache channel number of the fragment, receives the cache addresses allocated to the data fragments by the data caches 210, and sends those cache addresses to the fragment address reordering circuit 204.
Fragment address reordering circuit 204: reorders the cache addresses of the data fragments according to the data cache channel numbers assigned to them, so that the order of the cache addresses matches the order of the data fragments in the packet.
Linked list management circuit 205: manages the head cache address and tail cache address of each queue; generates the BD of each data fragment from the data cache channel number assigned to the fragment and the cache address and cache length returned by the data cache 210, and forms a BD linked list from the BDs of the data fragments; and submits the enqueue information of each queue to the scheduling unit.
BD cache controller 206: manages the read and write operations of the BD cache, which stores BDs outside the FPGA.
Dequeue distribution circuit 207: distributes the dequeue information of each queue provided by the scheduling unit 212 to the data cache controllers 208 of the corresponding channels.
Data cache controller 208: manages the read and write operations of the data cache, and the allocation and reclamation of the cache addresses of the data fragments.
Data cache 210: stores the data fragments of each queue.
BD cache 211: stores the BDs of the data fragments.
Scheduling unit 212: records the enqueue information of each queue, completes the scheduling and dequeuing of each queue according to the traffic management (TM) scheduling policy and configuration, and submits the scheduled dequeue information to the dequeue distribution circuit 207.
Based on the above hardware framework, an embodiment of the present invention provides a cache management method for managing a caching system. As shown in Fig. 2 above, the caching system includes multiple data cache channels; each data cache channel includes a data cache and a data cache controller, and each data cache channel has a unique data cache channel number. Specifically, referring to Fig. 3, the cache management method provided by the embodiment of the present invention includes the following steps:
S31: when an enqueue packet is received, cut the enqueue packet into at least one data fragment according to a preset fragment size and the length of the enqueue packet.
The preset fragment size can be set according to the actual application scenario; the embodiment of the present invention does not limit its specific value. For example, the preset fragment size may be 128B, 256B, 512B, and so on.
For example, when the preset fragment size is 256B and the length of the enqueue packet is 1518B, since 1518B = 5 x 256B + 238B, the enqueue packet is cut into 6 data fragments whose sizes are, in order: 256B, 256B, 256B, 256B, 256B, 238B.
For example, when the preset fragment size is 256B and the length of the enqueue packet is 125B, the enqueue packet is cut into a single data fragment of 125B.
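The slicing rule of step S31 can be sketched in a few lines of C (a minimal sketch; PRESET_FRAGMENT_SIZE and the frag_lens output array are assumptions for the example, not names from the patent):

    #include <stddef.h>

    #define PRESET_FRAGMENT_SIZE 256  /* assumed preset fragment size, bytes */

    /* Cut an enqueue packet of pkt_len bytes into fragments of at most
     * PRESET_FRAGMENT_SIZE bytes each; returns the number of fragments. */
    size_t slice_packet(size_t pkt_len, size_t frag_lens[], size_t max_frags)
    {
        size_t n = 0;
        while (pkt_len > 0 && n < max_frags) {
            size_t cut = pkt_len < PRESET_FRAGMENT_SIZE ? pkt_len
                                                        : PRESET_FRAGMENT_SIZE;
            frag_lens[n++] = cut;
            pkt_len -= cut;
        }
        return n;  /* e.g. 1518B gives 6 fragments: 5 x 256B + 1 x 238B */
    }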
S32: assign a data cache channel number to each data fragment.
Optionally, assigning a data cache channel number to each data fragment in step S32 can be realized as follows:
a. On power-up, randomly assign to each queue a data cache channel number as the current-state data cache channel number of that queue.
b. Assign the multiple data cache channel numbers to the data fragments in round-robin, starting from the current-state data cache channel number of the queue.
c. After the multiple data cache channel numbers have been assigned to the data fragments in round-robin, update the current-state data cache channel number of the queue to which the enqueue packet belongs to the data cache channel number following the one at which the round-robin ended.
Randomly assigning each queue a current-state data cache channel number on power-up, and starting the round-robin from the current-state channel number of the queue to which the enqueue packet belongs, prevents all queues from starting to store data fragments in the same data cache channel; this avoids the situation where some data cache channels are overloaded while others sit idle, and therefore improves the efficiency of the data caches. A C sketch of this assignment follows.
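Steps a to c can be modeled as follows (a sketch; NUM_CHANNELS, the 64K queue table and the function names are assumptions, and only the round-robin starting point and the update rule come from the text above):

    #define NUM_CHANNELS 8        /* assumed number of data cache channels */
    #define NUM_QUEUES   (64 * 1024)

    /* Per-queue current-state channel number, randomly seeded on power-up
     * (see the pseudorandom number generation circuit described later). */
    static unsigned char queue_cur_chan[NUM_QUEUES];

    /* Assign channel numbers to the n_frags fragments of one enqueue
     * packet, round-robin from the queue's current-state channel, then
     * advance the current-state channel past the last number handed out. */
    void assign_channels(unsigned queue_id, unsigned n_frags,
                         unsigned char chans[])
    {
        unsigned c = queue_cur_chan[queue_id];
        for (unsigned i = 0; i < n_frags; i++) {
            chans[i] = (unsigned char)c;
            c = (c + 1) % NUM_CHANNELS;
        }
        queue_cur_chan[queue_id] = (unsigned char)c;
    }

Run against the example below: a queue whose current-state channel is 3 and whose packet cuts into 4 fragments gets channels 3, 4, 5, 6, and its current-state channel becomes 7.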
As an example, the principle of the above embodiment is explained with a caching system containing 8 data cache channels whose data cache channel numbers are 0, 1, 2, 3, 4, 5, 6, 7, and with enqueue packets arriving for queue 0, queue 1, queue 2 and queue 3.
First, if no random current-state data cache channel number is assigned to each queue on power-up, the current-state data cache channel numbers of the queues are identical, and the following situation occurs:
The current-state data cache channel number of queue 0 is set to 0, and the enqueue packet of queue 0 is cut into 4 data fragments; assigning the data cache channel numbers in round-robin, the channel numbers of the 4 data fragments are 0, 1, 2, 3.
The current-state data cache channel number of queue 1 is set to 0, and the enqueue packet of queue 1 is cut into 1 data fragment; the channel number of that data fragment is 0.
The current-state data cache channel number of queue 2 is set to 0, and the enqueue packet of queue 2 is cut into 2 data fragments; the channel numbers of the 2 data fragments are 0, 1.
The current-state data cache channel number of queue 3 is set to 0, and the enqueue packet of queue 3 is cut into 10 data fragments; the channel numbers of the 10 data fragments are 0, 1, 2, 3, 4, 5, 6, 7, 0, 1.
At this point, the number of times each data cache channel is used is shown in Table 1:

Data cache channel number:  0  1  2  3  4  5  6  7
Times used:                 5  4  2  2  1  1  1  1

Table 1
In the above example, if on power-up each queue is randomly assigned a data cache channel number as its current-state data cache channel number, the following situation occurs:
The current-state data cache channel number of queue 0 is randomly set to 3, and the enqueue packet of queue 0 is cut into 4 data fragments; assigning the data cache channel numbers in round-robin, the channel numbers of the 4 data fragments are 3, 4, 5, 6.
The current-state data cache channel number of queue 1 is randomly set to 0, and the enqueue packet of queue 1 is cut into 1 data fragment; the channel number of that data fragment is 0.
The current-state data cache channel number of queue 2 is randomly set to 7, and the enqueue packet of queue 2 is cut into 2 data fragments; the channel numbers of the 2 data fragments are 7, 0.
The current-state data cache channel number of queue 3 is randomly set to 4, and the enqueue packet of queue 3 is cut into 10 data fragments; the channel numbers of the 10 data fragments are 4, 5, 6, 7, 0, 1, 2, 3, 4, 5.
At this point, the number of times each data cache channel is used is shown in Table 2:

Data cache channel number:  0  1  2  3  4  5  6  7
Times used:                 3  1  1  2  3  3  2  2

Table 2
Comparing Table 1 and Table 2 shows that randomly assigning each queue a current-state data cache channel number on power-up, and starting the round-robin of channel numbers from the current-state channel number of the queue to which the enqueue packet belongs, balances the utilization of the data cache channels, thereby avoiding the situation where some data cache channels are overloaded while others sit idle.
Further, the embodiment of the present invention can assign the current-state data cache channel number of each queue by means of a pseudorandom number generation circuit, and, after the current-state data cache channel number has been randomly assigned to each queue, store the current-state data cache channel number of each queue in a static random access memory (SRAM).
For example, referring to Fig. 4, the pseudorandom number generation circuit includes a random code generation module 41 and a static random access memory 42. The write enable terminal PORTNUM_WR_EN of the random code generation module 41 is connected to the write enable terminal WR_EN of the static random access memory 42; the write data output terminal PORTNUM_WR_DIN of the random code generation module 41 is connected to the write data input terminal WR_DIN of the static random access memory 42; and the write address output terminal PORTNUM_WR_ADDR of the random code generation module 41 is connected to the write address input terminal WR_ADDR of the static random access memory 42. In addition, the static random access memory 42 also includes a read enable terminal RD_EN, a read data output terminal RD_DIN and a read address terminal RD_ADDR.
The embodiment of the present invention also provides a circuit diagram of the random code generation module. Specifically, referring to Fig. 5, the circuit of the random code generation module includes: 4 XOR units 411, 8 flip-flops 412 (a0, a1, a2, a3, a4, a5, a6, a7), 1 counter 413, 1 comparator 414 and 1 AND unit 415. The output of the AND unit 415 is the write data output terminal PORTNUM_WR_DIN of the random code generation module 41, the output of the comparator 414 is the write address output terminal PORTNUM_WR_ADDR of the random code generation module 41, and the output of the counter 413 is the write enable terminal PORTNUM_WR_EN of the random code generation module 41.
The polynomial expression of the circuit of the random code generation module 41 shown in Fig. 5 is:
f(x) = 1 + x^2 + x^3 + x^4 + x^8
For example, suppose the above pseudorandom number generation circuit is applied to a scenario with 8 data cache channel numbers and 64K queues; the static random access memory is then designed with depth M = 64K and width N = 3 (2^3 = 8), so the outputs of the three flip-flops a0, a1, a2 are taken as the value stored in the static random access memory.
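A behavioral C model of this LFSR (a sketch under the assumption that flip-flop a_i holds the coefficient of x^(i+1), so the taps of f(x) = 1 + x^2 + x^3 + x^4 + x^8 are a1, a2, a3 and a7; the non-zero seed is also an assumption):

    #include <stdint.h>

    static uint8_t lfsr_state = 0xFF;  /* assumed non-zero power-up seed */

    /* Shift the 8-bit LFSR once and return a 3-bit random channel
     * number taken from the outputs of a0, a1, a2 (width N = 3). */
    unsigned lfsr_next_channel(void)
    {
        uint8_t fb = ((lfsr_state >> 7) ^ (lfsr_state >> 3) ^
                      (lfsr_state >> 2) ^ (lfsr_state >> 1)) & 1u;
        lfsr_state = (uint8_t)((lfsr_state << 1) | fb);
        return lfsr_state & 0x7u;      /* 2^3 = 8 possible channel numbers */
    }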
Further, after step S32 has assigned the multiple data cache channel numbers to the data fragments in round-robin starting from the current-state data cache channel number of the queue to which the enqueue packet belongs, the method also includes:
updating the current-state data cache channel number of the queue to which the enqueue packet belongs to the data cache channel number following the one at which the round-robin ended.
For example, with a caching system containing 8 data cache channels numbered 0 to 7: the data cache channel numbers assigned to the four data fragments of queue 0 are 3, 4, 5, 6, so the current-state data cache channel number of queue 0 is updated to 7, the channel number following 6; the channel number assigned to the single data fragment of queue 1 is 0, so the current-state channel number of queue 1 is updated to 1, the channel number following 0; the channel numbers assigned to the two data fragments of queue 2 are 7, 0, so the current-state channel number of queue 2 is updated to 1, the channel number following 0; the channel numbers assigned to the ten data fragments of queue 3 are 4, 5, 6, 7, 0, 1, 2, 3, 4, 5, so the current-state channel number of queue 3 is updated to 6, the channel number following 5.
S33: distribute each data fragment to the data cache controller of the corresponding data cache channel according to the data cache channel number assigned to the data fragment.
For example, the enqueue packet of queue 4 is cut into four data fragments (data fragment 0, data fragment 1, data fragment 2, data fragment 3), and the data cache channel numbers assigned to the four data fragments are 3, 4, 5, 6; then data fragment 0 is distributed to the data cache controller 208 of data cache channel 3, data fragment 1 to the data cache controller 208 of data cache channel 4, data fragment 2 to the data cache controller 208 of data cache channel 5, and data fragment 3 to the data cache controller 208 of data cache channel 6.
S34: write the data fragment into the corresponding data cache under the control of the data cache controller.
Continuing the above example, each data fragment is written, under the control of the data cache controller 208 of its channel, into the data cache 210 of that channel: data fragment 0 into the data cache 210 of data cache channel 3, data fragment 1 into the data cache 210 of data cache channel 4, data fragment 2 into the data cache 210 of data cache channel 5, and data fragment 3 into the data cache 210 of data cache channel 6.
For example, the data cache may be a fourth-generation double data rate synchronous dynamic random access memory (DDR SDRAM4, or DDR4).
Generally, the memory space of a DDR4 is divided into numerous small storage spaces of a fixed size. DDR4 is very inefficient when caching short packets, and the more frequently it switches between read and write states the lower its efficiency; in addition, the efficiency of DDR4 drops sharply when changing rows within the same memory bank. To solve the problem of low data caching efficiency, the invention further provides the following ways of realizing step S34.
Referring to Fig. 6, each data cache 210 includes multiple memory banks (Fig. 6 illustrates a data cache 210 containing 4 memory banks: bank0, bank1, bank2, bank3), and each memory bank includes a recovery area and an unused area; the recovery area stores the cache addresses of reclaimed cache space, and the unused area stores the cache addresses of not-yet-used cache space. Writing the data fragment into the corresponding data cache under the control of the data cache controller in step S34 then includes:
controlling the multiple memory banks in the data cache to allocate cache addresses to the data fragments in round-robin.
Round-robin in the embodiment of the present invention means in turn and cyclically. Specifically, having the multiple memory banks allocate cache addresses to the data fragments in round-robin means starting from one of the memory banks and allocating cache addresses to the data fragments from each memory bank in turn; after the last memory bank has allocated a cache address, allocation starts again from the first memory bank.
judging whether the recovery area of the memory bank of the data cache corresponding to the data cache channel number of the data fragment holds a cache address;
if so, taking a cache address from the recovery area and allocating it to the data fragment;
if not, taking a cache address from the unused area and allocating it to the data fragment;
writing the data fragment, under the control of the data cache controller, into the cache space indicated by the cache address allocated to it.
Because the cache addresses of each bank of the data cache are managed independently in the embodiment of the present invention, and because a cache address is preferentially allocated to a data fragment from the recovery area of the bank, falling back to the unused area only when the recovery area holds no address, the embodiment of the present invention avoids reading cache addresses from the unused areas of the banks as far as possible, which improves the efficiency of the data caches.
For example, as shown in Fig. 6 above, the data cache of a data cache channel includes 4 banks (bank0, bank1, bank2, bank3); if the banks accessed by the previous cache address allocations were bank0, bank1 and bank2, then the next cache address allocated to a data fragment is taken from bank3. A simplified software model of this allocation follows.
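A per-bank allocation sketch (the stack-shaped free lists and the sizes are assumptions; the patent only requires rotating through the banks and trying the recovery area before the unused area):

    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_BANKS 4
    #define POOL_SIZE 1024            /* assumed addresses per area */

    typedef struct {
        uint32_t addrs[POOL_SIZE];
        int      count;               /* addresses currently held   */
    } addr_pool_t;

    typedef struct {
        addr_pool_t recovery;         /* reclaimed cache addresses  */
        addr_pool_t unused;           /* never-used cache addresses */
    } bank_t;

    static bank_t banks[NUM_BANKS];
    static int    last_bank = NUM_BANKS - 1;  /* bank used last time */

    /* Allocate a cache address for one data fragment: rotate to the
     * next bank, then prefer its recovery area over its unused area. */
    bool alloc_cache_addr(uint32_t *out)
    {
        last_bank = (last_bank + 1) % NUM_BANKS;
        bank_t *b = &banks[last_bank];
        if (b->recovery.count > 0) {              /* recovery area hit  */
            *out = b->recovery.addrs[--b->recovery.count];
            return true;
        }
        if (b->unused.count > 0) {                /* fall back          */
            *out = b->unused.addrs[--b->unused.count];
            return true;
        }
        return false;                             /* bank exhausted     */
    }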
Optionally, the above method also includes:
recording the accumulated write data volume and accumulated read data volume of each data cache.
When a data cache is in the idle state, if its accumulated read data volume is zero, or its accumulated read data volume exceeds a preset data volume and a data fragment is detected that needs to be written, the data cache jumps to the write state and clears the accumulated read data volume.
When a data cache is in the write state, it is judged whether its accumulated write data volume exceeds the preset data volume; when the accumulated write data volume of the data cache exceeds the preset data volume, the data cache jumps to the idle state.
Specifically, the switching of the read and write states of the data cache in the above embodiment is described in detail with reference to the state transition diagram shown in Fig. 7.
Transition 1: in the idle (IDLE) state, if there was no pending read request before (the accumulated read data volume is 0), the state machine jumps to the write state when a data fragment is detected that needs to be written; if there were pending read requests and the accumulated read data volume is greater than or equal to a preset data volume (for example, the preset data volume may be 1KB), the state machine also jumps to the write state when a data fragment is detected that needs to be written.
Transition 2: after entering the write state, the accumulated read data volume is cleared, and the bank accessed by the read/write command sent to the data cache before the current data fragment was allocated a cache address is determined. If that bank is bank3, the state machine jumps to the write state of bank0 (ST_GET_BA0) and takes a cache address in bank0 for the current data fragment; if that bank is bank0, it jumps to the write state of bank1 (ST_GET_BA1) and takes a cache address in bank1; if that bank is bank1, it jumps to the write state of bank2 (ST_GET_BA2) and takes a cache address in bank2; if that bank is bank2, it jumps to the write state of bank3 (ST_GET_BA3) and takes a cache address in bank3.
Transition 3: after the write state has allocated a cache address to the data fragment, the data fragment is written into the cache space corresponding to the allocated cache address, and the state machine jumps back to the write state.
Transition 4: in the write state, the length of the current data fragment is added to the accumulated write data volume, and it is judged whether the accumulated write data volume is greater than or equal to the preset data volume; if so, the state machine jumps to the idle state.
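The four transitions can be condensed into a small software state machine (a sketch; the bank sub-states ST_GET_BA0..3 are folded into ST_WRITE for brevity, and PRESET_VOLUME follows the 1KB example above):

    #include <stdbool.h>

    typedef enum { ST_IDLE, ST_WRITE, ST_READ } cache_state_t;

    typedef struct {
        cache_state_t state;
        unsigned long acc_read;   /* accumulated read data volume  */
        unsigned long acc_write;  /* accumulated write data volume */
    } cache_fsm_t;

    #define PRESET_VOLUME 1024    /* preset data volume, e.g. 1KB  */

    /* One evaluation of the read/write state machine of Fig. 7. */
    void fsm_step(cache_fsm_t *c, bool write_pending, bool read_pending)
    {
        switch (c->state) {
        case ST_IDLE:
            if (write_pending &&
                (c->acc_read == 0 || c->acc_read >= PRESET_VOLUME)) {
                c->acc_read = 0;            /* transitions 1 and 2  */
                c->state = ST_WRITE;
            } else if (read_pending &&
                       (c->acc_write == 0 || c->acc_write >= PRESET_VOLUME)) {
                c->acc_write = 0;           /* symmetric read entry */
                c->state = ST_READ;
            }
            break;
        case ST_WRITE:                      /* transition 4         */
            if (c->acc_write >= PRESET_VOLUME)
                c->state = ST_IDLE;
            break;
        case ST_READ:                       /* symmetric exit       */
            if (c->acc_read >= PRESET_VOLUME)
                c->state = ST_IDLE;
            break;
        }
    }

The caller is assumed to add each fragment's length to acc_write or acc_read as it is transferred; the hysteresis (only yielding after a preset volume has moved in one direction) is what keeps DDR4 from switching between reads and writes too frequently.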
S35: receive the cache address and cache length returned by the data cache, and generate the buffer descriptor of each data fragment according to the data cache channel number assigned to the data fragment and the cache address and cache length returned by the data cache.
Exemplary, the buffer descriptor of data fragmentation can be with as shown in table 3 below:
Data cache channel number | Cache address | Cache length | Head flag | Tail flag
3 | 0x12345678 | 0x100 | 1 | 0
4 | 0x23456780 | 0x100 | 0 | 0
5 | 0x34567800 | 0x100 | 0 | 0
6 | 0x00000000 | 0x100 | 0 | 0
7 | 0x5555555a | 0x100 | 0 | 0
0 | 0xaaaaaaaa | 0x0ee | 0 | 1
Table 3
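Read as a record, each row of Table 3 packs five fields. A hedged C rendering of one BD (the field widths are chosen to fit the example values; the patent does not fix exact bit widths):

```c
#include <stdint.h>

/* One buffer descriptor (BD) per data slice, mirroring the columns of Table 3. */
struct buffer_descriptor {
    uint8_t  channel;    /* data cache channel number        */
    uint32_t addr;       /* cache address returned           */
    uint16_t len;        /* cache length, e.g. 0x100 bytes   */
    uint8_t  head : 1;   /* 1 on the first slice of a packet */
    uint8_t  tail : 1;   /* 1 on the last slice of a packet  */
};
```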
S36: write the buffer descriptor of each data slice into the BD cache and store the enqueue information of each queue.
Specifically, writing the buffer descriptors of the data slices into the BD cache in step S36 may be implemented by the following steps:
S361: sort the data cache channel numbers of the data slices according to the order of the slices within the enqueued packet.
Exemplarily, as shown in Fig. 8, the hardware framework of the cache management method provided by the present invention further includes a first-in first-out (FIFO) memory 81 for storing the order of the data cache channel numbers of the data slices, and multiple FIFO memories 82 for storing the cache addresses returned by the data caches; the number of FIFO memories 82 equals the number of data cache controllers 208, and each FIFO memory 82 is connected to one data cache controller 208.
Under the hardware framework of Fig. 8, when data cache channel numbers are allocated to the data slices in step S32, the allocated channel numbers are stored into FIFO memory 81 in the order of the slices within the enqueued packet.
For example, the channel numbers allocated to the 6 data slices of an enqueued packet may be as shown in Table 4 below:
Data slice number | 1 | 2 | 3 | 4 | 5 | 6
Data cache channel number | 3 | 4 | 5 | 6 | 7 | 0
Table 4
The data cache channel numbers are then written into the FIFO memory in the order 3, 4, 5, 6, 7, 0.
Because the refresh timing of the data caches differs, the data slices are of unequal length, and read and write operations alternate asynchronously, the order in which the channels return slice addresses does not necessarily match the order of the slices' channel numbers. To keep the BDs of the slices of one enqueued packet from being scrambled, as shown in Fig. 8, the cache address returned by each data cache channel can first be recorded in its FIFO memory 82, and the buffer descriptors of the slices are then generated in order, matching each returned cache address and cache length against the channel-number order of the slices recorded in FIFO memory 81.
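A minimal C sketch of this reordering, assuming simple FIFO helpers (struct fifo and fifo_pop are illustrative assumptions): fifo81 holds channel numbers in slice order, and fifo82[ch] holds the addresses channel ch returned. The same structure is reused on the dequeue side later in the text.

```c
#include <stdint.h>

struct fifo;                         /* assumed FIFO type...        */
uint32_t fifo_pop(struct fifo *f);   /* ...popping the oldest entry */

struct bd_entry { unsigned channel; uint32_t addr; };  /* two fields used here */

/* Rebuild BDs in slice order: fifo81 yields channel numbers in the order
 * the slices were cut (FIFO memory 81); fifo82[ch] yields the addresses
 * channel ch returned, in return order (FIFO memories 82). Popping fifo81
 * first fixes the BD order even though channels complete at different times. */
void emit_bds_in_order(struct fifo *fifo81, struct fifo *fifo82[],
                       struct bd_entry *out, int nslices)
{
    for (int i = 0; i < nslices; i++) {
        unsigned ch = (unsigned)fifo_pop(fifo81);  /* slice order       */
        uint32_t addr = fifo_pop(fifo82[ch]);      /* per-channel order */
        out[i].channel = ch;
        out[i].addr = addr;
    }
}
```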
S362: write the buffer descriptors of the data slices into the BD cache, one by one, in the order of the slices' data cache channel numbers.
Specifically, the BDs of the data slices of the enqueued packet can be sent to the BD cache according to the channel-number order recorded in FIFO memory 81, and the buffer descriptors are finally written into the BD cache in the order in which they are received.
The cache management method provided by the embodiments of the invention is used to manage a caching system that contains multiple data cache channels, each consisting of one data cache and one data cache controller. After an enqueued packet is received and cut into data slices, each slice is first distributed, according to its allocated data cache channel number, to the data cache controller of the corresponding channel, and is then written into the corresponding data cache under that controller's control; in other words, the data cache controller in each channel independently controls the writes into its own data cache. Because the method provides multiple independent data cache channels, it can offer higher bandwidth than a single cache channel; and compared with centrally managing the cache addresses of multiple channels, it maximizes the bandwidth utilization of each channel. The cache management method provided by the embodiments of the invention can therefore improve the bandwidth of the caching system.
It should also be noted that the cache management method provided by the above embodiment places no restriction on the number of queues; the embodiment can therefore implement cache management for any number of queues, and the number of queues in the cache management system can be extended arbitrarily.
Further, BDs are commonly managed as a linked list over a single channel. As performance requirements rise, however, the bidirectional BD accesses of a 100G system under short-packet line-rate traffic can reach 300 Mpps, which a single-channel cache can hardly sustain. Quad Data Rate (QDR) memory could raise the BD access bandwidth, but QDR is too expensive, and the many-queue case would require a large-capacity QDR, so it is impractical in real deployments.
To solve the above problem, an embodiment of the invention further provides a caching system with multiple BD cache channels. Specifically, referring to Fig. 9, which is a hardware architecture diagram of the cache management method provided by this embodiment, the architecture includes: an enqueue unit 92 composed of the data slicing circuit 201, data cache channel number query circuit 202, enqueue distribution circuit 203 and slice address reorder circuit 204 of the caching system; and a BD cache unit 91 composed of a linked-list management circuit 205, multiple BD cache controllers 206 and multiple BD caches. The linked-list management circuit 205 includes an initialization unit 2051 and an on-chip cache 2052. The BD cache controllers 206 and the BD caches correspond one to one, forming multiple BD cache channels (Fig. 9 illustrates a caching system with two such channels).
When the cache management method provided by this embodiment is applied to a caching system as shown in Fig. 9, which includes multiple BD cache channels, each consisting of one BD cache controller and one BD cache, writing the BDs of the data slices into the BD caches in channel-number order in step S362 above includes:
the BD cache controller of each BD cache channel writing the buffer descriptors of the data slices into its corresponding BD cache, in the order of the slices' data cache channel numbers.
In the caching system provided by this embodiment, the BD cache controller in each of the multiple BD cache channels independently performs the read and write operations on that channel's BD cache. Increasing the number of BD cache channels increases the system's BD cache bandwidth; and because each channel's BD cache controller manages the reads and writes of its own BD cache independently, the utilization of each BD cache channel can be maximized. The embodiment therefore improves the efficiency of the BD caches.
Specifically, referring to Fig. 10, when the caching system includes multiple BD cache channels, writing the BDs of the data slices into the BD caches in channel-number order in step S362 above may be implemented by the following steps:
S101: divide the multiple BD caches into multiple cache blocks according to a preset cache size.
Exemplarily, the preset cache size may be 2 KB.
S102: allocate cache blocks, a head cache address and a tail cache address to each queue, and record the head cache address and tail cache address allocated to each queue.
Different queues are allocated different cache blocks, and the cache spaces indicated by a queue's head cache address and tail cache address are the same cache space within the cache blocks allocated to that queue.
Optionally, the head and tail cache addresses allocated to each queue can be recorded in the on-chip cache 2052 of the linked-list management circuit 205.
Exemplarily, assume the cache space of the BD cache in a BD cache channel is 2 GB, the preset cache size is 2 KB, and the BD of one data slice is 8 bytes; the BD cache then contains 2^20 (about one million) cache blocks, and one cache block can hold the BDs of 256 data slices. The format in which the on-chip cache 2052 records each queue's head and tail cache addresses can be as shown in Table 5 below:
Cache block address [19:0] | Offset address [7:0]
Table 5
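Under the layout of Table 5, a BD's position is the 20-bit cache block address concatenated with an 8-bit offset (256 BDs of 8 bytes each per 2 KB block). A small C sketch of that composition, assuming the byte address is simply block, offset and BD size multiplied out:

```c
#include <stdint.h>

#define BD_SIZE       8u     /* bytes per BD (8-byte BDs assumed above) */
#define BDS_PER_BLOCK 256u   /* 2 KB block / 8 B BD                     */

/* Byte address of a BD from the {cache block address[19:0], offset[7:0]} pair. */
static uint32_t bd_byte_addr(uint32_t block, uint32_t offset)
{
    return (block * BDS_PER_BLOCK + offset) * BD_SIZE;
}
```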
In addition, allocating cache blocks, a head cache address and a tail cache address to each queue in step S102 amounts to configuring each queue's cache block address in the on-chip cache 2052 and zeroing each queue's offset address.
Further, allocating cache blocks to the queues in step S102 may specifically be:
randomly allocating the cache blocks of the multiple BD caches to the queues.
That is, the queues are randomly distributed across the multiple BD cache channels.
Exemplarily, assume 5 queues (queue 0, queue 1, queue 2, queue 3, queue 4) and 2 BD cache channels (BD cache channel 0 and BD cache channel 1); the random distribution of the queues over the BD cache channels can then be as shown in Table 6 below:
Queue | BD cache channel
0 | 0
1 | 0
2 | 1
3 | 1
4 | 0
Table 6
Randomly allocating the cache blocks of the multiple BD caches to the queues spreads the queues randomly over the BD cache channels, which prevents some channels from being overloaded while others sit idle, balances the traffic across the BD cache channels, and improves BD cache performance.
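One minimal way to realize this random placement at initialization time could be the following sketch (illustrative only; the patent does not prescribe the random source):

```c
#include <stdlib.h>

#define NUM_QUEUES      5   /* queues 0-4, as in Table 6 */
#define NUM_BD_CHANNELS 2   /* BD cache channels 0 and 1 */

static unsigned queue_channel[NUM_QUEUES];

/* Randomly pin each queue to a BD cache channel at start-up. */
static void init_queue_placement(void)
{
    for (int q = 0; q < NUM_QUEUES; q++)
        queue_channel[q] = (unsigned)(rand() % NUM_BD_CHANNELS);
}
```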
S103: when a buffer descriptor is received, obtain the head cache address of the first queue.
The first queue is the queue of the enqueued packet containing the data slice to which the received buffer descriptor belongs.
Exemplarily, if enqueued packet 1 belongs to queue 1, data slice 1 is one of the slices cut from packet 1, and BD1 is the buffer descriptor of data slice 1, then when BD1 is received, the head cache address of queue 1 is obtained.
S104: write the received buffer descriptor into the cache space indicated by the head cache address of the first queue.
S105: judge whether the remaining cache space of the first cache block is less than or equal to the size of one buffer descriptor.
The first cache block is the cache block allocated to the first queue.
Specifically, if each queue's head and tail cache addresses are recorded in the format of Table 5, the judgment in step S105 can be made by checking whether offset address + 1 equals 255.
In step S105, if the remaining cache space of the first cache block is greater than the size of one buffer descriptor (offset address + 1 is not equal to 255), step S106 is performed; if the remaining cache space of the first cache block is less than or equal to the size of one buffer descriptor (offset address + 1 equals 255), step S107 is performed.
S106: update the head cache address of the first queue to the next cache address after the current head cache address within the first cache block.
S107: allocate a second cache block to the first queue, write the cache block address of the second cache block into the first cache block, and update the head cache address of the first queue to the first cache address in the second cache block.
Optionally, when the second cache block is allocated to the first queue in step S107, the cache blocks of the multiple BD caches are allocated to the first queue in rotation.
For example, if the caching system includes 2 BD caches (BD cache 0 and BD cache 1) and the first cache block of queue 0 belongs to BD cache 0, then when a second cache block is allocated to queue 0, the second cache block is chosen from BD cache 1.
In the above embodiment, allocating the second cache block to the first queue by rotating over the cache blocks of the multiple BD caches means that the BD cache channels take turns supplying cache blocks. This rotation spreads the BDs of the data slices evenly across the BD cache channels, which prevents some channels from being overloaded while others sit idle, balances the traffic across the BD cache channels, and improves the efficiency of the BD caches.
S108: write the buffer descriptors of the data slices, in the order of their data cache channel numbers, into the cache spaces indicated by the head cache address of the first queue.
Exemplarily, the BD linked list of a BD cache can be as shown in Fig. 11: a chain of BD blocks, each holding the BDs of multiple data slices, with the last storage slot of each BD block holding the cache block address of the next block.
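A hedged C sketch of this head-side enqueue (steps S103-S107): BDs fill the current block slot by slot, and when the block is full a fresh block is taken from the other BD cache channel and its address is written into the old block's last slot, chaining the blocks into the list of Fig. 11. The helper names (bd_write, alloc_block, block_write_link) are illustrative assumptions, and two BD cache channels are assumed as in Fig. 9.

```c
#include <stdint.h>

struct buffer_descriptor;                 /* BD record, as sketched after Table 3 */
void bd_write(unsigned ch, uint32_t block, uint32_t off,
              const struct buffer_descriptor *bd);       /* assumed helpers */
uint32_t alloc_block(unsigned ch);
void block_write_link(unsigned ch, uint32_t block, uint32_t next_block);

struct queue_head {
    unsigned channel;   /* BD cache channel of the current head block */
    uint32_t block;     /* head cache block address                   */
    uint32_t offset;    /* next free BD slot within the block         */
};

/* Head-side enqueue of one BD (steps S103-S107). */
void bd_enqueue(struct queue_head *q, const struct buffer_descriptor *bd)
{
    bd_write(q->channel, q->block, q->offset, bd);       /* S104 */
    if (q->offset + 1 == 255) {                          /* S105, per the text's check */
        unsigned next_ch = 1u - q->channel;              /* rotate the two channels (S107) */
        uint32_t next = alloc_block(next_ch);
        block_write_link(q->channel, q->block, next);    /* chain address in the last slot */
        q->channel = next_ch;
        q->block   = next;
        q->offset  = 0;
    } else {
        q->offset++;                                     /* S106 */
    }
}
```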
Further, because the cache management method provided by this embodiment only needs to store the head and tail cache addresses allocated to each queue in the on-chip cache 2052 of the linked-list management circuit 205 when managing the BD linked lists of the data slices, it saves storage resources and reduces manufacturing cost.
Based on the enqueue data flow above, the dequeue flow of the cache management method provided by this embodiment is described in detail below. Specifically, referring to Fig. 12, the cache management method further includes:
S201: receive a dequeue command.
The dequeue command instructs that a dequeuing packet be retrieved from the data caches, and includes the queue information of the dequeuing packet.
S202: obtain the second queue according to the queue information of the dequeuing packet.
The second queue is the queue to which the dequeuing packet belongs.
That is, the queue of the dequeuing packet is obtained from the packet's queue information.
S203: obtain the tail cache address of the second queue.
S204: read the buffer descriptor in the cache space indicated by the tail cache address of the second queue.
S205: judge whether all buffer descriptors in the third cache block have been taken out.
The third cache block is the cache block containing the cache space indicated by the tail cache address of the second queue.
In step S205, if the buffer descriptors in the third cache block have not all been taken out, step S206 is performed; if they have all been taken out, step S207 is performed.
S206: update the tail cache address of the second queue to the next cache address after the current tail cache address within the third cache block.
S207: obtain the cache block address stored in the third cache block, and update the tail cache address of the second queue to the first cache address in the cache block indicated by that address.
The cache block address obtained from the third cache block in step S207 is the chain address that was written into the block during enqueue in step S107 of the above embodiment.
S208: obtain, from the enqueue information of the second queue, the number of data slices contained in the dequeuing packet.
S209: judge whether the number of buffer descriptors read from the cache spaces indicated by the tail cache address of the second queue equals the number of data slices contained in the dequeuing packet.
For example, if the size of the dequeuing packet is 2 KB and its data slices are 64 bytes each, 32 data slices need to be read.
In step S209, if the number of buffer descriptors read from the cache spaces indicated by the tail cache address of the second queue equals the number of data slices of the dequeuing packet, step S210 is performed; if it is smaller, the flow returns to step S203 and steps S203-S209 are repeated until the number of buffer descriptors read equals the number of data slices of the dequeuing packet.
S210: distribute the buffer descriptors read from the cache spaces indicated by the tail cache address of the second queue to the data cache controllers of the corresponding data cache channels.
S211: under the control of the data cache controllers, read the data slices from the corresponding data caches according to the buffer descriptors read from the cache spaces indicated by the tail cache address of the second queue.
While step S211 is performed, the cache management method provided by this embodiment further includes:
recording the accumulated write data volume and accumulated read data volume of each data cache;
when any data cache is in the idle state, its accumulated write data volume is zero, and a data slice needs to be read, the data cache jumping to the read state and clearing its accumulated write data volume;
when any data cache is in the read state, checking whether its accumulated read data volume exceeds the preset amount, and jumping to the idle state once it does.
Specifically, the state transitions of the data caches in the above embodiment are described in detail below with reference to the state-machine transition diagram of Fig. 13, which extends the state machine of Fig. 7.
Transition 5: in the idle state, if no write request was pending beforehand (the accumulated write data volume is 0) and a data slice is detected that needs to be read, the state machine jumps to the read state; if a write request was handled beforehand and the accumulated write data volume is greater than or equal to the preset amount, the state machine likewise jumps to the read state when a data slice needs to be read.
Transition 6: in the read state, the accumulated write data volume is cleared to 0, and data slices are read according to the dequeued BDs. The length of each slice read is added to the accumulated read data volume; as long as the current accumulated read data volume is below the preset amount, the machine stays in the read state.
Transition 7: in the read state, if the current accumulated read data volume is greater than or equal to the preset amount, the machine returns to the idle state.
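Transitions 5-7 are the read-side mirror of transitions 1-4; under the same definitions as the earlier write-side sketch, a complementary handler could look like this (again an illustrative reconstruction, not the patent's implementation):

```c
/* Read half of the per-cache state machine (transitions 5-7),
 * mirroring on_write_request() from the earlier sketch. */
static void on_read_request(struct cache_fsm *f, size_t slice_len)
{
    if (f->state == ST_IDLE &&
        (f->acc_write == 0 || f->acc_write >= PRESET_AMOUNT)) {
        f->acc_write = 0;              /* transitions 5-6: enter read state */
        f->state = ST_READ;
    }
    if (f->state == ST_READ) {
        f->acc_read += slice_len;      /* transition 6: accumulate reads    */
        if (f->acc_read >= PRESET_AMOUNT)
            f->state = ST_IDLE;        /* transition 7: back to idle        */
    }
}
```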
S212: output the data slices in sequence according to the order of the data cache channel numbers in the buffer descriptors that were read.
Steps S201-S209 above form the BD read-out flow of the cache management method provided by this embodiment, and steps S210-S212 form the process of reading the data slices from the data caches according to those BDs and outputting them.
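A hedged C sketch of the tail-side walk (steps S203-S207) under the same assumptions as the enqueue sketch: BDs are consumed from the tail block, and when a block drains, the chain address stored at enqueue time leads to the next block while the empty block is recycled (as noted further below). The helpers bd_read, block_read_link and recycle_block are illustrative, and blocks are assumed to alternate strictly between two BD cache channels, as the rotation of step S107 produces.

```c
#include <stdint.h>

struct buffer_descriptor {   /* minimal fields; fuller layout after Table 3 */
    unsigned channel;
    uint32_t addr;
    uint16_t len;
};

void bd_read(unsigned ch, uint32_t block, uint32_t off,
             struct buffer_descriptor *out);               /* assumed helpers */
uint32_t block_read_link(unsigned ch, uint32_t block);
void recycle_block(unsigned ch, uint32_t block);

struct queue_tail {
    unsigned channel;   /* BD cache channel of the current tail block */
    uint32_t block;     /* tail cache block address                   */
    uint32_t offset;    /* next BD slot to consume                    */
};

/* Tail-side walk for one dequeuing packet (steps S203-S209): consume n BDs,
 * following the chain address written at enqueue time when a block drains. */
void bd_dequeue(struct queue_tail *q, struct buffer_descriptor *out, int n)
{
    for (int i = 0; i < n; i++) {
        bd_read(q->channel, q->block, q->offset, &out[i]);         /* S204 */
        if (q->offset + 1 == 255) {                                /* S205 */
            uint32_t next = block_read_link(q->channel, q->block); /* S207 */
            recycle_block(q->channel, q->block);   /* empty block is reusable  */
            q->channel = 1u - q->channel;          /* blocks alternate channels */
            q->block   = next;
            q->offset  = 0;
        } else {
            q->offset++;                                           /* S206 */
        }
    }
}
```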
Similarly, because the refresh timing of the data caches differs, the data slices are of unequal length, and read and write operations alternate asynchronously, the order in which the data cache channels return slices does not necessarily match the order of the slices within the dequeuing packet, so the returned slices must be reordered before output. For this purpose, the embodiment of the invention further provides a hardware framework for the cache management method. Specifically, as shown in Fig. 14, this framework again includes a FIFO memory 81 for storing the order of the data cache channel numbers of the data slices and multiple FIFO memories 82 for storing the cache addresses returned by the data caches, with as many FIFO memories 82 as data cache controllers 208, each FIFO memory 82 connected to one data cache controller 208. In step S209, if the number of buffer descriptors read from the cache spaces indicated by the tail cache address of the second queue equals the number of data slices of the dequeuing packet, the data slices returned by each data cache channel are first stored in the corresponding FIFO memory 82, and the slices are then output according to the channel-number order of the BDs stored in FIFO memory 81.
The above method ensures that, when a single queue is read and written continuously, all operations on a data cache are sequential, so performance reaches its maximum. Even when all queues transmit packets in turn, the data caches can still be written at enqueue and read continuously at dequeue. With single-channel 16-bit DDR devices, the effective DDR utilization of continuous reads exceeds 80%, and scattered writes can approach 40%; therefore, even when all queues transmit packets in turn, the method used by the present invention keeps the DDR bandwidth utilization above 50%. Moreover, the blocks storing BDs can be reused: once all BDs in a block have been taken out, the block can be recycled for later use.
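As a rough sanity check of the "above 50%" figure (our arithmetic under a simplifying assumption, not the patent's): if a channel spends roughly half its time on continuous reads at about 80% utilization and half on scattered writes at about 40%, the average utilization is

\[ \tfrac{1}{2}\times 80\% + \tfrac{1}{2}\times 40\% = 60\% > 50\%. \]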
Yet another embodiment of the invention provides a cache management device corresponding to the cache management method of the above embodiments. It should be noted that the explanations given above for the cache management method apply to the cache management device described below. Likewise, the cache management device provided by this embodiment is used to manage a caching system that includes multiple data cache channels, each consisting of one data cache and one data cache controller, each data cache channel having a unique data cache channel number. Specifically, the cache management device includes:
an enqueue data slicing circuit, configured to cut an enqueued packet into at least one data slice according to a preset slice size and the length of the packet when the packet is received;
a data cache channel number query circuit, configured to allocate data cache channel numbers to the data slices;
an enqueue distribution circuit, configured to distribute each data slice, according to the data cache channel number allocated to it, to the data cache controller of the corresponding data cache channel;
a data cache controller, configured to write the data slices into the corresponding data cache;
a linked-list management circuit, configured to receive the cache address and cache length returned by the data cache, and to generate the buffer descriptor of each data slice from the data cache channel number allocated to the slice and the cache address and cache length returned by the data cache;
a BD cache controller, configured to write the buffer descriptors of the data slices into the BD cache.
Optionally, the caching system includes multiple BD cache channels, each consisting of one BD cache controller and one BD cache; the cache management device then further includes a slice address reorder circuit;
the slice address reorder circuit is configured to sort the data cache channel numbers of the data slices according to the order of the slices within the enqueued packet;
the BD cache controller of each BD cache channel is configured to write the buffer descriptors of the data slices into its corresponding BD cache in the order of the slices' data cache channel numbers.
Optionally, the BD cache controllers are specifically configured to: divide the multiple BD caches into multiple cache blocks according to a preset cache size; allocate cache blocks and a head cache address to each queue, where different queues are allocated different cache blocks and the cache space indicated by a queue's head cache address lies within the cache blocks allocated to that queue; on receiving a buffer descriptor, obtain the head cache address of the first queue, the first queue being the queue of the enqueued packet containing the data slice of the received buffer descriptor; write the received buffer descriptor into the cache space indicated by the head cache address of the first queue, and record the head cache address allocated to the first queue as the queue's tail cache address; judge whether the remaining cache space of the first cache block, the cache block allocated to the first queue, is less than or equal to the size of one buffer descriptor; if not, update the head cache address of the first queue to the next cache address after the current head cache address within the first cache block; if so, allocate a second cache block to the first queue, write the cache block address of the second cache block into the first cache block, and update the head cache address of the first queue to the first cache address in the second cache block; and write the buffer descriptors of the data slices, in the order of their data cache channel numbers, into the cache spaces indicated by the head cache address of the first queue.
Optionally, the caching system further includes a scheduling unit;
the scheduling unit is configured to record the enqueue information of each queue, receive a dequeue command, obtain the second queue according to the queue information of the dequeuing packet, and obtain the number of data slices contained in the dequeuing packet from the enqueue information of the second queue;
the cache management device further includes a dequeue distribution circuit and a dequeue data reassembly circuit;
the linked-list management circuit is further configured to obtain the tail cache address of the second queue and distribute it to the BD cache controllers; the dequeue command instructs that a dequeuing packet be retrieved from the data caches and includes the queue information of the dequeuing packet; the second queue is the queue to which the dequeuing packet belongs;
the BD cache controllers are further configured to: read the buffer descriptors in the cache space indicated by the tail cache address of the second queue; judge whether all buffer descriptors in the third cache block, the cache block containing the cache space indicated by the tail cache address of the second queue, have been taken out; if not, update the tail cache address of the second queue to the next cache address after the current tail cache address within the third cache block; if so, obtain the cache block address stored in the third cache block and update the tail cache address of the second queue to the first cache address in the cache block indicated by that address; and judge whether the number of buffer descriptors read from the cache spaces indicated by the tail cache address of the second queue equals the number of data slices of the dequeuing packet obtained from the packet's length information;
the dequeue distribution circuit is configured to, when the number of buffer descriptors read from the cache spaces indicated by the tail cache address of the second queue equals the number of data slices of the dequeuing packet obtained from the packet's length information, distribute the buffer descriptors read from those cache spaces to the data cache controllers of the corresponding data cache channels;
the data cache controllers are further configured to read the data slices from the corresponding data caches according to the buffer descriptors read from the cache spaces indicated by the tail cache address of the second queue;
the dequeue data reassembly circuit is configured to output the data slices in sequence according to the order of the data cache channel numbers in the buffer descriptors that were read.
An embodiment of the invention further provides a field programmable gate array, including the cache management device provided by any of the above embodiments.
A field programmable gate array (FPGA) is a highly integrated, high-performance programmable chip. The internal circuit functions of an FPGA are programmable: with a hardware description language (HDL) and dedicated design tools, extremely complex circuit functions can be implemented flexibly inside the FPGA, making it suitable for high-speed, high-density, high-end digital logic circuit design. When the cache management device of the above embodiments is implemented inside an FPGA, it occupies few logic resources and, without any third-party control device such as a central processing unit, achieves in-service upgrading of the FPGA at minimal cost while providing dual functionality on a single board.
Based on the cache management method, device and FPGA provided by the above embodiments, the embodiments of the invention can exploit the FPGA's abundant input/output (I/O) resources and internal cache resources, using per-channel data caches and multi-channel BD caches to achieve high-performance cache management for any number of queues.
The foregoing is only a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that can readily occur to those skilled in the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be defined by the scope of the claims.

Claims (14)

1. A cache management method, characterized in that it is used to manage a caching system, the caching system comprising multiple data cache channels, any data cache channel comprising one data cache and one data cache controller, and each data cache channel having a unique data cache channel number; the method comprising:
when an enqueued packet is received, cutting the enqueued packet into at least one data slice according to a preset slice size and the length of the enqueued packet;
allocating the data cache channel numbers to the data slices;
distributing each data slice, according to the data cache channel number allocated to the data slice, to the data cache controller of the corresponding data cache channel;
writing the data slice into the corresponding data cache under the control of the data cache controller;
receiving the cache address and cache length returned by the data cache, and generating the buffer descriptor of each data slice according to the data cache channel number allocated to the data slice and the cache address and cache length returned by the data cache;
writing the buffer descriptor of each data slice into a buffer descriptor (BD) cache.
2. The method according to claim 1, characterized in that allocating the data cache channel numbers to the data slices comprises:
allocating the multiple data cache channel numbers to the data slices in rotation.
3. The method according to claim 2, characterized in that the method further comprises:
at power-up, randomly allocating one data cache channel number to each queue as the current-state data cache channel number of the queue;
after the multiple data cache channel numbers have been allocated to the data slices in rotation, updating the current-state data cache channel number of the queue of the enqueued packet to the data cache channel number following the one at which the rotation ended;
wherein allocating the multiple data cache channel numbers to the data slices in rotation comprises:
allocating the multiple data cache channel numbers to the data slices in rotation, starting from the current-state data cache channel number of the queue of the enqueued packet.
4. The method according to claim 1, characterized in that any data cache comprises multiple memory banks, any memory bank comprising one recovery area and one unused area, wherein the recovery area is used to store the cache addresses of reclaimed cache spaces and the unused area is used to store the cache addresses of unused cache spaces; and writing the data slice into the corresponding data cache under the control of the data cache controller comprises:
controlling the multiple memory banks in the data cache to allocate cache addresses to the data slices in rotation;
judging whether the recovery area of the memory bank of the data cache corresponding to the data cache channel number of each data slice holds a cache address;
if so, taking a cache address from the recovery area and allocating it to the data slice;
if not, taking a cache address from the unused area and allocating it to the data slice;
writing each data slice, under the control of the data cache controller, into the cache space indicated by the cache address allocated to it.
5. The method according to claim 1, characterized in that the caching system comprises multiple BD cache channels, any BD cache channel comprising one BD cache controller and one BD cache;
writing the buffer descriptor of each data slice into the BD cache comprises:
sorting the data cache channel numbers of the data slices according to the order of the data slices within the enqueued packet;
the BD cache controller of each BD cache channel writing the buffer descriptors of the data slices into the corresponding BD cache in the order of the data cache channel numbers of the data slices.
6. The method according to claim 5, characterized in that the BD cache controller of each BD cache channel writing the buffer descriptors of the data slices into the corresponding BD cache in the order of the data cache channel numbers of the data slices comprises:
dividing the multiple BD caches into multiple cache blocks according to a preset cache size;
allocating the cache blocks, a head cache address and a tail cache address to each queue, and recording the head cache address and tail cache address allocated to each queue; wherein different queues are allocated different cache blocks, and the cache spaces indicated by the head cache address and tail cache address of any queue are the same cache space within the cache blocks allocated to that queue;
when a buffer descriptor is received, obtaining the head cache address of a first queue; the first queue being the queue of the enqueued packet containing the data slice to which the received buffer descriptor belongs;
writing the received buffer descriptor into the cache space indicated by the head cache address of the first queue;
judging whether the remaining cache space of a first cache block is less than or equal to the size of one buffer descriptor; the first cache block being the cache block allocated to the first queue;
if not, updating the head cache address of the first queue to the next cache address after the head cache address within the first cache block;
if so, allocating a second cache block to the first queue, writing the cache block address of the second cache block into the remaining cache space of the first cache block, and updating the head cache address of the first queue to the first cache address in the second cache block;
writing the buffer descriptors of the data slices, in the order of their data cache channel numbers, into the cache spaces indicated by the head cache address of the first queue.
7. The method according to claim 6, characterized in that the method further comprises:
when allocating the second cache block to the first queue, allocating the cache blocks of the multiple BD caches to the first queue in rotation.
8. The method according to claim 6, characterized in that the method further comprises:
receiving a dequeue command; the dequeue command instructing that a dequeuing packet be retrieved from the data caches and including the queue information of the dequeuing packet;
obtaining a second queue according to the queue information of the dequeuing packet; the second queue being the queue to which the dequeuing packet belongs;
obtaining the tail cache address of the second queue;
reading the buffer descriptor in the cache space indicated by the tail cache address of the second queue;
judging whether all buffer descriptors in a third cache block have been taken out; the third cache block being the cache block containing the cache space indicated by the tail cache address of the second queue;
if not, updating the tail cache address of the second queue to the next cache address after the tail cache address within the third cache block;
if so, obtaining the cache block address stored in the third cache block, and updating the tail cache address of the second queue to the first cache address in the cache block indicated by the cache block address stored in the third cache block;
obtaining, from the enqueue information of the second queue, the number of data slices contained in the dequeuing packet;
judging whether the number of buffer descriptors read from the cache spaces indicated by the tail cache address of the second queue equals the number of data slices contained in the dequeuing packet;
if so, distributing the buffer descriptors read from the cache spaces indicated by the tail cache address of the second queue to the data cache controllers of the corresponding data cache channels;
reading the data slices from the corresponding data caches under the control of the data cache controllers, according to the buffer descriptors read from the cache spaces indicated by the tail cache address of the second queue;
outputting the data slices in sequence according to the order of the data cache channel numbers in the buffer descriptors that were read.
9. The method according to any one of claims 1-8, characterized in that the method further comprises:
recording the accumulated write data volume and accumulated read data volume of each data cache;
when any data cache is in the idle state, if the accumulated read data volume of the data cache is zero, or the accumulated read data volume exceeds the preset amount, and a data slice is detected that needs to be written, the data cache jumping to the write state and clearing the accumulated read data volume; and when the accumulated write data volume of the data cache is zero and a data slice needs to be read, the data cache jumping to the read state and clearing the accumulated write data volume;
when any data cache is in the write state, judging whether the accumulated write data volume of the data cache exceeds the preset amount, and when the accumulated write data volume exceeds the preset amount, the data cache jumping to the idle state;
when any data cache is in the read state, judging whether the accumulated read data volume of the data cache exceeds the preset amount, and when the accumulated read data volume exceeds the preset amount, the data cache jumping to the idle state.
10. A cache management device, characterized in that it is used to manage a caching system, the caching system comprising multiple data cache channels, any data cache channel comprising one data cache and one data cache controller, and each data cache channel having a unique data cache channel number; the cache management device comprising:
an enqueue data slicing circuit, configured to cut an enqueued packet into at least one data slice according to a preset slice size and the length of the enqueued packet when the enqueued packet is received;
a data cache channel number query circuit, configured to allocate the data cache channel numbers to the data slices;
an enqueue distribution circuit, configured to distribute each data slice, according to the data cache channel number allocated to the data slice, to the data cache controller of the corresponding data cache channel;
a data cache controller, configured to write the data slice into the corresponding data cache;
a linked-list management circuit, configured to receive the cache address and cache length returned by the data cache, and generate the buffer descriptor of each data slice according to the data cache channel number allocated to the data slice and the cache address and cache length returned by the data cache;
a BD cache controller, configured to write the buffer descriptor of each data slice into a BD cache.
11. The device according to claim 10, characterized in that the caching system comprises multiple BD cache channels, any BD cache channel comprising one BD cache controller and one BD cache; the cache management device further comprising a slice address reorder circuit;
the slice address reorder circuit being configured to sort the data cache channel numbers of the data slices according to the order of the data slices within the enqueued packet;
the BD cache controller of each BD cache channel being configured to write the buffer descriptors of the data slices into the corresponding BD cache in the order of the data cache channel numbers of the data slices.
12. The device according to claim 11, characterized in that the BD cache controllers are specifically configured to: divide the multiple BD caches into multiple cache blocks according to a preset cache size; allocate the cache blocks to each queue, and allocate a head cache address to each queue, wherein different queues are allocated different cache blocks and the cache space indicated by the head cache address of any queue lies within the cache blocks allocated to that queue; when a buffer descriptor is received, obtain the head cache address of a first queue, the first queue being the queue of the enqueued packet containing the data slice to which the received buffer descriptor belongs; write the received buffer descriptor into the cache space indicated by the head cache address of the first queue, and record the head cache address allocated to the first queue as the tail cache address of the first queue; judge whether the remaining cache space of a first cache block is less than or equal to the size of one buffer descriptor, the first cache block being the cache block allocated to the first queue; if not, update the head cache address of the first queue to the next cache address after the head cache address within the first cache block; if so, allocate a second cache block to the first queue, write the cache block address of the second cache block into the first cache block, and update the head cache address of the first queue to the first cache address in the second cache block; and write the buffer descriptors of the data slices, in the order of their data cache channel numbers, into the cache spaces indicated by the head cache address of the first queue.
13. The device according to claim 12, characterized in that the caching system further comprises a scheduling unit;
the scheduling unit being configured to record the enqueue information of each queue, receive a dequeue command, obtain a second queue according to the queue information of the dequeuing packet, and obtain, from the enqueue information of the second queue, the number of data slices contained in the dequeuing packet;
the cache management device further comprising a dequeue distribution circuit and a dequeue data reassembly circuit;
the linked-list management circuit being further configured to obtain the tail cache address of the second queue and distribute the tail cache address of the second queue to the BD cache controllers; wherein the dequeue command instructs that a dequeuing packet be retrieved from the data caches and includes the queue information of the dequeuing packet, and the second queue is the queue to which the dequeuing packet belongs;
the BD cache controllers being further configured to: read the buffer descriptors in the cache space indicated by the tail cache address of the second queue; judge whether all buffer descriptors in a third cache block have been taken out, the third cache block being the cache block containing the cache space indicated by the tail cache address of the second queue; if not, update the tail cache address of the second queue to the next cache address after the tail cache address within the third cache block; if so, obtain the cache block address stored in the third cache block and update the tail cache address of the second queue to the first cache address in the cache block indicated by that address; and judge whether the number of buffer descriptors read from the cache spaces indicated by the tail cache address of the second queue equals the number of data slices of the dequeuing packet obtained according to the length information of the dequeuing packet;
the dequeue distribution circuit being configured to, when the number of buffer descriptors read from the cache spaces indicated by the tail cache address of the second queue equals the number of data slices of the dequeuing packet obtained according to the length information of the dequeuing packet, distribute the buffer descriptors read from the cache spaces indicated by the tail cache address of the second queue to the data cache controllers of the corresponding data cache channels;
the data cache controllers being further configured to read the data slices from the corresponding data caches according to the buffer descriptors read from the cache spaces indicated by the tail cache address of the second queue;
the dequeue data reassembly circuit being configured to output the data slices in sequence according to the order of the data cache channel numbers in the buffer descriptors that were read.
14. A field programmable gate array, characterized by comprising the cache management device according to any one of claims 10-13.
CN201710364480.3A 2017-05-22 2017-05-22 Cache management method and device and field programmable gate array Active CN107220187B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710364480.3A CN107220187B (en) 2017-05-22 2017-05-22 Cache management method and device and field programmable gate array

Publications (2)

Publication Number Publication Date
CN107220187A true CN107220187A (en) 2017-09-29
CN107220187B CN107220187B (en) 2020-06-16

Family

ID=59945433

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710364480.3A Active CN107220187B (en) 2017-05-22 2017-05-22 Cache management method and device and field programmable gate array

Country Status (1)

Country Link
CN (1) CN107220187B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5664116A (en) * 1995-07-07 1997-09-02 Sun Microsystems, Inc. Buffering of data for transmission in a computer communication system interface
CN1680929A (en) * 2004-04-08 2005-10-12 华为技术有限公司 Data buffer designing method with multiple channels and device thereof
CN101094183A (en) * 2007-07-25 2007-12-26 杭州华三通信技术有限公司 Buffer memory management method and device
CN101187896A (en) * 2007-12-14 2008-05-28 中兴通讯股份有限公司 On-spot programmable gate array data cache management method
CN102377682A (en) * 2011-12-12 2012-03-14 西安电子科技大学 Queue management method and device based on variable-length packets stored in fixed-size location
CN104021091A (en) * 2014-05-26 2014-09-03 西安交通大学 Multichannel data caching implementation method based on FPGA/CPLD
CN105162724A (en) * 2015-07-30 2015-12-16 华为技术有限公司 Data enqueue and dequeue method and queue management unit
CN105975209A (en) * 2016-04-26 2016-09-28 浪潮(北京)电子信息产业有限公司 Multichannel data write-in method and system

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5664116A (en) * 1995-07-07 1997-09-02 Sun Microsystems, Inc. Buffering of data for transmission in a computer communication system interface
CN1680929A (en) * 2004-04-08 2005-10-12 华为技术有限公司 Dada buffer designing method with multiple channels and device thereof
CN101094183A (en) * 2007-07-25 2007-12-26 杭州华三通信技术有限公司 Buffer memory management method and device
CN101187896A (en) * 2007-12-14 2008-05-28 中兴通讯股份有限公司 On-spot programmable gate array data cache management method
CN102377682A (en) * 2011-12-12 2012-03-14 西安电子科技大学 Queue management method and device based on variable-length packets stored in fixed-size locations
CN104021091A (en) * 2014-05-26 2014-09-03 西安交通大学 Multichannel data caching implementation method based on FPGA/CPLD
CN105162724A (en) * 2015-07-30 2015-12-16 华为技术有限公司 Data enqueue and dequeue method and queue management unit
CN105975209A (en) * 2016-04-26 2016-09-28 浪潮(北京)电子信息产业有限公司 Multichannel data write-in method and system

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108762676A (en) * 2018-05-24 2018-11-06 安徽雷索信息科技有限公司 A multi-channel big data storage method and system
CN108848530A (en) * 2018-07-10 2018-11-20 网宿科技股份有限公司 Method, apparatus and dispatch server for obtaining network resources
CN109343799A (en) * 2018-09-28 2019-02-15 中国电子科技集团公司第五十二研究所 Continuous ultrahigh-speed data unloading system
CN109343799B (en) * 2018-09-28 2022-04-01 中国电子科技集团公司第五十二研究所 Continuous ultrahigh-speed data unloading system
CN111459852B (en) * 2019-01-22 2023-05-05 阿里巴巴集团控股有限公司 Cache control method and device and electronic equipment
CN111459852A (en) * 2019-01-22 2020-07-28 阿里巴巴集团控股有限公司 Cache control method and device and electronic equipment
CN110958331A (en) * 2019-12-27 2020-04-03 视联动力信息技术股份有限公司 Data transmission method and terminal
CN111541624A (en) * 2020-04-13 2020-08-14 上海航天计算机技术研究所 Space Ethernet cache processing method
CN111782578A (en) * 2020-05-29 2020-10-16 西安电子科技大学 Cache control method, system, storage medium, computer equipment and application
CN111651377A (en) * 2020-06-28 2020-09-11 中国人民解放军国防科技大学 Elastic shared cache architecture for on-chip message processing
CN111651377B (en) * 2020-06-28 2022-05-20 中国人民解放军国防科技大学 Elastic shared buffer for on-chip message processing
WO2022143678A1 (en) * 2020-12-30 2022-07-07 苏州盛科通信股份有限公司 Message storage method, method for adding message to queue, method for deleting message from queue, and storage scheduling apparatus
CN113595932A (en) * 2021-08-06 2021-11-02 上海金仕达软件科技有限公司 Method for processing out-of-order data messages and application-specific integrated circuit
CN115190089A (en) * 2022-05-26 2022-10-14 中科驭数(北京)科技有限公司 Message storage method, device, equipment and storage medium
CN115190089B (en) * 2022-05-26 2024-03-22 中科驭数(北京)科技有限公司 Message storage method, device, equipment and storage medium
CN116418734A (en) * 2023-06-09 2023-07-11 湖北微源卓越科技有限公司 Low-delay packet sending method and device
CN116418734B (en) * 2023-06-09 2023-08-18 湖北微源卓越科技有限公司 Low-delay packet sending method and device

Also Published As

Publication number Publication date
CN107220187B (en) 2020-06-16

Similar Documents

Publication Publication Date Title
CN107220187A (en) Cache management method, device and field programmable gate array
EP3149595B1 (en) Systems and methods for segmenting data structures in a memory system
CN103914341B (en) Data queue goes out group management-control method and device
CN107204198A (en) The control method and device of high speed access Double Data Rate synchronous DRAM
CN102985909B (en) Method and apparatus for providing highly scalable network storage for well-formed objects
CN102103548B (en) Method and device for increasing read-write rate of double data rate synchronous dynamic random access memory
CN101751980B (en) Embedded programmable memory based on memory IP core
CN101499956B (en) Hierarchical buffer management system and method
US11700209B2 (en) Multi-path packet descriptor delivery scheme
CN103136120B (en) Row buffer operation policy determination method and device, and bank partitioning method and device
CN102707788B (en) Content search system and method for keeping its power consumption below a specified power limit
EP3758318A1 (en) Shared memory mesh for switching
CN101916227A (en) RLDRAM SIO storage access control method and device
CN106951488A (en) A log recording method and device
US20090002864A1 (en) Memory Controller for Packet Applications
EP3356945B1 (en) Computer device provided with processing in memory and narrow access ports
US11385900B2 (en) Accessing queue data
CN106254270A (en) A queue management method and device
CN104461956B (en) Method, apparatus and system for accessing synchronous dynamic RAM
CN101848150B (en) Method and device for maintaining count value of multicast counter
WO2023179619A1 (en) Neural network caching method, system, and device and storage medium
CN103902471B (en) Data cache processing method and apparatus
US10067868B2 (en) Memory architecture determining the number of replicas stored in memory banks or devices according to a packet size
CN101566933B (en) Method and device for configuring cache, electronic equipment and data read-write equipment
CN110096456A (en) A high-rate, large-capacity caching method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant