WO2016058355A1 - A data caching method, apparatus, and storage medium - Google Patents

A data caching method, apparatus, and storage medium

Info

Publication number
WO2016058355A1
Authority
WO
WIPO (PCT)
Prior art keywords
cell
dequeued
splicing
count value
pointer
Prior art date
Application number
PCT/CN2015/077639
Other languages
English (en)
French (fr)
Inventor
赵姣
赖明亮
田浩暄
常艳蕊
Original Assignee
深圳市中兴微电子技术有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市中兴微电子技术有限公司
Priority to EP15850914.1A (EP3206123B1)
Priority to US15/519,073 (US10205673B2)
Priority to JP2017520382A (JP6340481B2)
Publication of WO2016058355A1 publication Critical patent/WO2016058355A1/zh

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/50Queue scheduling
    • H04L47/62Queue scheduling characterised by scheduling criteria
    • H04L47/6245Modifications to standard FIFO or LIFO
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0806Multiuser, multiprocessor or multiprocessing cache systems
    • G06F12/0815Cache consistency protocols
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/90Buffering arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/10Providing a specific technical effect
    • G06F2212/1008Correctness of operation, e.g. memory ordering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/15Use in a specific computing environment
    • G06F2212/154Networked environment
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/60Details of cache memory
    • G06F2212/608Details relating to cache mapping

Definitions

  • the present invention relates to the field of data communication technologies, and in particular, to a data caching method, apparatus, and storage medium in a packet switching network.
  • FIG. 1 is a schematic diagram of large-scale switching network data processing.
  • Cells can be processed in two ways: fixed length and variable length. Variable-length cells use cache resources and bus resources more efficiently than fixed-length cells, so variable-length cell processing is generally adopted.
  • the cell cache in the switching network element chip is mainly used to store cell body data while a cell waits for its addressing result and for scheduled output.
  • the required size of the buffer space is small;
  • the cache space is calculated according to the switching capacity of the switching network element chip to avoid large-scale packet loss in the case of network congestion.
  • Existing cell buffering methods generally provision a storage space with redundancy according to the number of links, the fiber length, and the data transmission rate. However, as the switching network scale and data rate grow, cache space resources must grow correspondingly; for example, when the data rate increases from 12.5G to 25G, the cache capacity must be doubled to guarantee lossless forwarding. Moreover, with variable-length cell switching, the utilization of cache resources is low, below 50% when minimum-length cells are stored; FIG. 2 is a schematic diagram of storage space utilization for cells of different lengths.
  • embodiments of the present invention are directed to providing a data caching method, apparatus, and storage medium, which can effectively improve utilization of cache resources and bus resources.
  • An embodiment of the present invention provides a data caching method, where the method includes:
  • in the current Kth cycle, determining that the cell to be dequeued can be dequeued, scheduling the cell to be dequeued, obtaining the actual value of the number of splicing units occupied by the cell to be dequeued, and storing the cell to be dequeued, in a cell-splicing manner, into a register with the same bit width as the bus;
  • determining that the cell to be dequeued can be dequeued is performed when the first back-pressure count value of the (K-1)th cycle is less than or equal to a first preset threshold; the first back-pressure count value of the (K-1)th cycle is obtained from the estimated number of splicing units occupied by the last dequeued cell, the number of splicing units the bus can transmit per cycle, and the first back-pressure count value of the (K-2)th cycle; K is a positive integer.
  • before storing the cell into the corresponding first-in first-out queue according to the input port number of the cell, the method further includes: extracting the cell length information and cell version information carried by the cell, and obtaining, from the cell length information and cell version information, the number of cache units occupied by the cell.
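The cache-unit count above follows directly from the cell length and the cache-unit width. A minimal sketch; the 32-byte unit width is an assumed example, not a value from the patent:

```python
import math

# Assumed width of one cache unit (in the patent, one Nth of a RAM address).
CACHE_UNIT_BYTES = 32

def cache_units_needed(cell_length_bytes: int) -> int:
    """Number of cache units a cell of the given length occupies."""
    return math.ceil(cell_length_bytes / CACHE_UNIT_BYTES)
```

For example, a 100-byte cell would occupy four 32-byte cache units under this assumption.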
  • the storing the cell to the corresponding first in first out queue according to the input port number of the cell includes:
  • storing the cell into the first-in first-out queue according to the number of cache units it occupies; if the valid cache units occupied by the cell do not cross an address, updating the enqueue sub-pointer of the first-in first-out queue and releasing the idle address; if the valid cache units occupied by the cell cross an address, updating the tail pointer and the enqueue sub-pointer of the first-in first-out queue.
  • the method further includes:
  • the first back-pressure count value is corrected according to the actual number of splicing units occupied by the dequeued cell and the number of cache units occupied by the dequeued cell.
  • storing the cells to be dequeued, in a cell-splicing manner, into a register with the same bit width as the bus includes:
  • finding the write pointer according to the actual number of splicing units occupied by the cell to be dequeued, and storing the cell into the register corresponding to the write pointer; if that register already contains a valid cell, splicing the cell to be dequeued with the valid cell by splicing units, recording the cell splicing information, and updating the write pointer.
  • the embodiment of the present invention further provides a data cache device, where the device includes: a cache module and a processing module;
  • the cache module is configured to store the cell to a corresponding first in first out queue according to an input port number of the cell;
  • the processing module is configured to: in the current Kth cycle, determine that the cell to be dequeued can be dequeued, schedule the cell to be dequeued, obtain the actual value of the number of splicing units occupied by the cell to be dequeued, and store the cell to be dequeued, in a cell-splicing manner, into a register with the same bit width as the bus for data transmission;
  • determining that the cell to be dequeued can be dequeued is performed when the first back-pressure count value of the (K-1)th cycle is less than or equal to a first preset threshold; the first back-pressure count value of the (K-1)th cycle is obtained from the estimated number of splicing units occupied by the last dequeued cell, the number of splicing units the bus can transmit per cycle, and the first back-pressure count value of the (K-2)th cycle; K is a positive integer.
  • the device further includes: an acquiring module configured to extract the cell length information and cell version information carried by the cell, and to obtain, from the cell length information and cell version information, the number of cache units occupied by the cell.
  • the device further includes: a correction module configured to correct the first back-pressure count value according to the actual number of splicing units occupied by the cell to be dequeued and the number of cache units occupied by the cell to be dequeued.
  • the embodiment of the invention further provides a data caching method, the method comprising:
  • restoring data spliced in a cell-splicing manner into independent cells;
  • in the current Kth cycle, determining that the cell to be dequeued can be dequeued, and scheduling the cell to be dequeued;
  • determining that the cell to be dequeued can be dequeued is performed when the second back-pressure count value of the (K-1)th cycle is less than or equal to a second preset threshold; the second back-pressure count value of the (K-1)th cycle is obtained from the estimated number of splicing units occupied when the last dequeued cell was dequeued, the number of splicing units the bus can transmit per cycle, and the second back-pressure count value of the (K-2)th cycle; K is a positive integer.
  • the data spliced in the cell-splicing manner is restored into independent cells.
  • the embodiment of the present invention further provides a data cache device, where the device includes: a restore module, a storage module, and a scheduling module;
  • the restoring module is configured to restore data spliced in a cell-splicing manner into independent cells;
  • the storage module is configured to store the cell to a corresponding first in first out queue according to an output port number of the cell;
  • the scheduling module is configured to determine that a cell to be dequeued can be dequeued in a current Kth cycle, and schedule a cell to be dequeued to be dequeued;
  • determining that the cell to be dequeued can be dequeued is performed when the second back-pressure count value of the (K-1)th cycle is less than or equal to a second preset threshold; the second back-pressure count value of the (K-1)th cycle is obtained from the estimated number of splicing units occupied when the last dequeued cell was dequeued, the number of splicing units the bus can transmit per cycle, and the second back-pressure count value of the (K-2)th cycle; K is a positive integer.
  • the embodiment of the present invention further provides a computer storage medium, where the computer storage medium stores a computer program, and the computer program is used to execute the data caching method of the embodiment of the present invention.
  • The data caching method, device, and storage medium provided by the embodiments of the present invention store a cell into the corresponding first-in first-out (FIFO) queue according to the input port number of the cell; in the current Kth cycle, it is determined that the cell to be dequeued can be dequeued, the cell is scheduled to dequeue, the actual value of the number of splicing units occupied by the cell to be dequeued is obtained, and the cell to be dequeued is stored, in a cell-splicing manner, into a register with the same bit width as the bus; determining that the cell to be dequeued can be dequeued is performed when the first back-pressure count value of the (K-1)th cycle is less than or equal to a first preset threshold, and the first back-pressure count value of the (K-1)th cycle is obtained from the estimated number of splicing units occupied when the last dequeued cell was dequeued, the number of splicing units the bus can transmit per cycle, and the first back-pressure count value of the (K-2)th cycle.
  • FIG. 1 is a schematic diagram of data processing of a large-scale exchange network
  • FIG. 2 is a schematic diagram of storage space utilization for cells of different lengths;
  • FIG. 3 is a schematic diagram of a data caching method according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of a cell cache according to the present invention.
  • Figure 5 is a schematic diagram of cell splicing according to the present invention.
  • FIG. 6 is a schematic diagram of a data caching method according to Embodiment 2 of the present invention.
  • FIG. 7 is a schematic structural diagram of a data cache apparatus according to an embodiment of the present invention.
  • FIG. 8 is a schematic structural diagram of a data cache apparatus according to Embodiment 2 of the present invention.
  • The cell is stored into the corresponding first-in first-out queue according to the input port number of the cell; in the current Kth cycle, it is determined that the cell to be dequeued can be dequeued, the cell to be dequeued is scheduled to dequeue, the actual value of the number of splicing units occupied by the cell to be dequeued is obtained, and the cell to be dequeued is stored, in a cell-splicing manner, into a register with the same bit width as the bus for data transmission; determining that the cell to be dequeued can be dequeued is performed when the first back-pressure count value of the (K-1)th cycle is less than or equal to a first preset threshold, and the first back-pressure count value of the (K-1)th cycle is obtained from the estimated number of splicing units occupied when the last dequeued cell was dequeued, the number of splicing units the bus can transmit per cycle, and the first back-pressure count value of the (K-2)th cycle; K is a positive integer.
  • FIG. 3 is a schematic flowchart of a data caching method according to an embodiment of the present invention.
  • the data caching method process in this embodiment includes:
  • Step 301 Store the cell to a corresponding first in first out queue according to an input port number of the cell;
  • the method further includes: extracting information such as the cell length and cell version carried in the cell header of the cell, and acquiring, from the cell length information and cell version information, the number of cache units occupied by the cell;
  • a cache unit is one Nth of each address of a random access memory (RAM); that is, each RAM address is divided into a fixed N parts, each part being a cache unit; N is a positive integer whose value can be set according to the data forwarding rate;
  • the method further includes: while storing the cell into the corresponding first-in first-out queue according to the input port number of the cell, storing the cell information of the cell into the corresponding first-in first-out queue;
  • the cell information includes: a length of the cell, a cell version, and a number of cache units occupied by the cell.
  • the step includes: obtaining the tail pointer, enqueue sub-pointer, and idle address of the first-in first-out queue corresponding to the input port number of the cell;
  • when it is determined that the first-in first-out queue is empty or the cache unit at the tail pointer is already full, the cell is divided, from high bits to low bits, into M groups of data with the same bit width as a cache unit, and the groups are written to the idle address in order from high bits to low bits;
  • when it is determined that the first-in first-out queue is not empty and the cache unit at the tail pointer is not full, the cell is divided, from high bits to low bits, into M groups of data with the same bit width as a cache unit, and the M groups are written in order from high bits to low bits starting at the position of the enqueue sub-pointer: the group containing the highest bits of the cell is written at the position of the enqueue sub-pointer, and the last group of the divided cell is written into the cache unit numbered (enqueue sub-pointer minus 1) at the idle address;
  • the valid cache units occupied by the cell are the cache units actually occupied by the cell;
  • the valid cache units occupied by the cell do not cross an address when: the number of cache units actually occupied by the cell plus the enqueue sub-pointer is not greater than M, and the enqueue sub-pointer is not 0;
  • the valid cache units occupied by the cell cross an address when: the first-in first-out queue is empty, or the cache unit at the tail pointer is already full, or the number of cache units actually occupied by the cell plus the enqueue sub-pointer is greater than M;
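The enqueue branching above can be sketched as follows. This is a simplified illustration, not the patented layout: M is an assumed number of cache units per RAM address, the idle-address pool is a counter, and cells spanning more than one extra address are not modeled:

```python
M = 4  # assumed number of cache units per RAM address

class FifoQueue:
    def __init__(self):
        self.addresses = []      # RAM addresses held by the queue, in order
        self.enq_sub = 0         # enqueue sub-pointer within the tail address
        self.next_free_addr = 0  # stand-in for the idle-address pool

    def enqueue(self, units_needed: int) -> None:
        """Record where a cell occupying `units_needed` cache units lands."""
        if not self.addresses or self.enq_sub == 0:
            # queue empty, or tail address already full: start a fresh idle address
            self.addresses.append(self.next_free_addr)
            self.next_free_addr += 1
            self.enq_sub = units_needed % M
        elif self.enq_sub + units_needed <= M:
            # fits in the tail address: no address crossing
            self.enq_sub = (self.enq_sub + units_needed) % M
        else:
            # crosses into a new idle address: tail pointer advances too
            self.addresses.append(self.next_free_addr)
            self.next_free_addr += 1
            self.enq_sub = (self.enq_sub + units_needed) % M
```

A sub-pointer of 0 here plays the role of "tail cache unit full", matching the crossing conditions listed above.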
  • Step 302: In the current Kth cycle, determine that the cell to be dequeued can be dequeued, schedule the cell to be dequeued, obtain the actual value of the number of splicing units occupied by the cell to be dequeued, and store the cell to be dequeued, in a cell-splicing manner, into a register with the same bit width as the bus;
  • the bit width of a splicing unit is one Xth of the cache unit bit width and can be set according to the data forwarding rate, so that data is not lost under minimum register conditions and bus resources are fully utilized, with no empty cycles; X is a positive integer;
  • determining that the cell to be dequeued can be dequeued is performed when the first back-pressure count value of the (K-1)th cycle is less than or equal to the first preset threshold; the first back-pressure count value of the (K-1)th cycle is obtained from the estimated number of splicing units occupied when the last dequeued cell was dequeued, the number of splicing units the bus can transmit per cycle, and the first back-pressure count value of the (K-2)th cycle; K is a positive integer;
  • the first back-pressure count value is used as the basis for judging whether the cells to be dequeued in the queue are allowed to dequeue in the next cycle;
  • first back-pressure count value of the (K-1)th cycle = first back-pressure count value of the (K-2)th cycle + estimated number of splicing units occupied when the last dequeued cell was dequeued - number of splicing units the bus can transmit per cycle;
  • estimated number of splicing units occupied when the last dequeued cell was dequeued = number of cache units occupied by the last dequeued cell × X.
  • When the current cycle is the first cycle, the back-pressure count value of the preceding cycle is 0; therefore, the first back-pressure count value of the first cycle can be obtained directly from the estimated number of splicing units occupied by the first dequeued cell and the number of splicing units the bus can transmit per cycle, and whether the next cell to be dequeued is allowed to dequeue is judged from the first back-pressure count value of the first cycle.
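The cycle-by-cycle back-pressure update above can be sketched as a simple counter. X, the per-cycle bus capacity, and the threshold are illustrative values, and the clamp at 0 is an assumption the patent leaves implicit:

```python
X = 2                    # assumed splicing units per cache unit
BUS_UNITS_PER_CYCLE = 4  # assumed splicing units the bus transmits per cycle
THRESHOLD = 4            # assumed first preset threshold

def next_backpressure(prev_count: int, dequeued_cache_units: int) -> int:
    """count(K-1) = count(K-2) + estimated splicing units of the last dequeue
    - splicing units the bus can transmit per cycle (floored at 0 here)."""
    estimate = dequeued_cache_units * X
    return max(0, prev_count + estimate - BUS_UNITS_PER_CYCLE)

def may_dequeue(count: int) -> bool:
    """A cell may dequeue next cycle when the count is at or below threshold."""
    return count <= THRESHOLD
```

With these numbers, dequeuing a 3-unit cell from an empty counter leaves a residue of 2, which still permits the next dequeue; a subsequent 4-unit cell pushes the count past the threshold and stalls the queue.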
  • If the cells to be dequeued in the queue are not allowed to dequeue in the Kth cycle, the data in the register is transmitted at the number of splicing units the bus can transmit per cycle until the first back-pressure count value of some Gth cycle is less than or equal to the first preset threshold; it is then determined that the cell to be dequeued can dequeue in the (G+1)th cycle, where G is a positive integer greater than K.
  • scheduling the cell to be dequeued includes: obtaining the head pointer, secondary head pointer, and dequeue sub-pointer of the corresponding first-in first-out queue according to the dequeue port number; calculating the address range and number of cache units to be read according to the number of cache units occupied by the cell to be dequeued and the dequeue sub-pointer; and recombining the data in the cache units occupied by the cell to be dequeued into one cell;
  • the dequeue port number is the same as the input port number.
  • the method further includes:
  • Correcting the first back-pressure count value according to the actual value of the number of splicing units occupied by the cell to be dequeued: the actual value is usually less than or equal to the estimated value of the number of splicing units occupied by the dequeued cell; therefore, when the actual value differs from the estimated value, the correction includes subtracting from the first back-pressure count value the difference between the estimated value and the actual value, and then comparing the corrected first back-pressure count value with the first preset threshold to determine whether the cells to be dequeued in the queue are allowed to dequeue in the next cycle.
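The correction step above removes the overshoot between the whole-cache-unit estimate and the actual splicing-unit count. A minimal sketch, with the estimate and actual value passed in directly:

```python
def correct_backpressure(count: int, estimated_units: int, actual_units: int) -> int:
    """Subtract the over-estimate (estimated - actual) from the
    back-pressure count, per the correction described above."""
    return count - (estimated_units - actual_units)
```

For instance, if a dequeue was estimated at 8 splicing units but actually occupied 7, a count of 6 is corrected down to 5 before the threshold comparison.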
  • two or more registers are included in the embodiment of the present invention to form a register group;
  • the register includes Y virtual units, that is, the register is divided into Y virtual units, and the bit width of the virtual unit is the same as the bit width of one splicing unit;
  • storing the cells to be dequeued, in a cell-splicing manner, into a register with the same bit width as the bus includes:
  • FIG. 5 is a schematic diagram of cell splicing according to the present invention.
  • the cell splicing information includes: a splicing position, a cell header identifier, a cell tail identifier, and a cell valid identifier; wherein the splicing location identifies a boundary of two cells;
  • the register is set to allow at most two different cells to be spliced; therefore, the cell valid identifier includes a first cell valid identifier and a second cell valid identifier, and when a second cell has not yet been input to the register and the virtual units contained in the register are not full, the second cell valid identifier is invalid.
  • when a register in the register group contains valid cell data, all data in the register corresponding to the read pointer of the register group is output to the cell buffer output, carrying the cell splicing information, as shown in FIG. 5; the read pointer advances in units of registers, and after the data of register 0 is output, the read pointer points to register 1.
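The splicing step packs variable-length cells back to back into a bus-width register. The sketch below tracks a write pointer over Y virtual units and records splice boundaries; Y, the two-cell limit, and the (start, length) record format are illustrative assumptions, not the patent's encoding:

```python
Y = 8  # assumed virtual units per register (register width / splicing-unit width)

class SpliceRegister:
    def __init__(self):
        self.used = 0          # write pointer, in virtual units
        self.splice_info = []  # (start_unit, length_units) per spliced cell

    def try_splice(self, cell_units: int) -> bool:
        """Append a cell of `cell_units` splicing units; False if it cannot fit
        or the register already holds two different cells (the stated maximum)."""
        if len(self.splice_info) >= 2 or self.used + cell_units > Y:
            return False
        self.splice_info.append((self.used, cell_units))  # boundary of the cells
        self.used += cell_units
        return True
```

The recorded start positions serve as the splicing-position information that identifies the boundary between the two cells.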
  • FIG. 6 is a schematic diagram of a data caching method according to Embodiment 2 of the present invention.
  • the data caching method process of this embodiment includes:
  • Step 601: Restore data spliced in a cell-splicing manner into independent cells;
  • the data spliced in the cell-splicing manner in the first embodiment is restored into independent cells;
  • the step includes: restoring data spliced in a cell-splicing manner into independent cells according to the cell splicing information carried in the data;
  • the cell splicing information includes: a splicing location, a cell header identifier, a cell tail identifier, and a cell valid identifier; wherein the splicing location identifies a boundary of two cells.
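On the receive side, the carried splicing information is sufficient to cut a register payload back into independent cells. A minimal sketch, assuming the splicing information is represented as (start, length) records over the payload:

```python
def unsplice(payload: bytes, splice_info):
    """Split a spliced register payload back into independent cells
    using the (start, length) splicing records carried with the data."""
    return [payload[start:start + length] for start, length in splice_info]
```

For example, a 7-byte payload with records [(0, 3), (3, 4)] yields the two original cells.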
  • Step 602: Store the cell into the corresponding first-in first-out queue according to the output port number of the cell;
  • the method further includes: extracting information such as the cell length and cell version carried in the cell header of the cell, and acquiring, from the cell length information and cell version information, the number of cache units occupied by the cell;
  • a cache unit is one Nth of each RAM address; that is, each RAM address is divided into a fixed N parts, each part being a cache unit; N is a positive integer whose value can be set according to the data forwarding rate;
  • the method further includes: while storing the cell into the corresponding first-in first-out queue according to the output port number of the cell, storing the cell information of the cell into the corresponding first-in first-out queue.
  • the cell information includes: a length of the cell, a cell version, and a number of cache units occupied by the cell.
  • the step includes: obtaining the tail pointer, enqueue sub-pointer, and idle address of the first-in first-out queue corresponding to the output port number according to the output port number of the cell, and storing the cell into the first-in first-out queue according to the number of cache units it occupies; if the valid cache units occupied by the cell do not cross an address, updating the enqueue sub-pointer of the first-in first-out queue and releasing the idle address; if the valid cache units occupied by the cell cross an address, updating the tail pointer and the enqueue sub-pointer of the first-in first-out queue;
  • when it is determined that the first-in first-out queue is empty or the cache unit at the tail pointer is already full, the cell is divided, from high bits to low bits, into M groups of data with the same bit width as a cache unit, and the groups are written to the idle address in order from high bits to low bits;
  • when it is determined that the first-in first-out queue is not empty and the cache unit at the tail pointer is not full, the cell is divided, from high bits to low bits, into M groups of data with the same bit width as a cache unit, and the M groups are written in order from high bits to low bits starting at the position of the enqueue sub-pointer: the group containing the highest bits of the cell is written at the position of the enqueue sub-pointer, and the last group of the divided cell is written into the cache unit numbered (enqueue sub-pointer minus 1) at the idle address;
  • the valid cache units occupied by the cell are the cache units actually occupied by the cell;
  • the valid cache units occupied by the cell do not cross an address when: the number of cache units actually occupied by the cell plus the enqueue sub-pointer is not greater than M, and the enqueue sub-pointer is not 0;
  • the valid cache units occupied by the cell cross an address when: the first-in first-out queue is empty, or the cache unit at the tail pointer is already full, or the number of cache units actually occupied by the cell plus the enqueue sub-pointer is greater than M;
  • Step 603: In the current Kth cycle, determine that the cell to be dequeued can be dequeued, and schedule the cell to be dequeued;
  • determining that the cell to be dequeued can be dequeued is performed when the second back-pressure count value of the (K-1)th cycle is less than or equal to the second preset threshold; the second back-pressure count value of the (K-1)th cycle is obtained from the estimated number of splicing units occupied when the last dequeued cell was dequeued, the number of splicing units the bus can transmit per cycle, and the second back-pressure count value of the (K-2)th cycle; K is a positive integer;
  • the bit width of a splicing unit is one Xth of the cache unit bit width and can be set according to the data forwarding rate, so that data is not lost under minimum register conditions and bus resources are fully utilized, with no empty cycles; X is a positive integer;
  • the second back-pressure count value is used as the basis for judging whether the cells to be dequeued in the queue are allowed to dequeue in the next cycle;
  • second back-pressure count value of the (K-1)th cycle = second back-pressure count value of the (K-2)th cycle + estimated number of splicing units occupied when the last dequeued cell was dequeued - number of splicing units the bus can transmit per cycle;
  • estimated number of splicing units occupied when the last dequeued cell was dequeued = number of cache units occupied by the last dequeued cell × X.
  • When the current cycle is the first cycle, the back-pressure count value of the preceding cycle is 0; therefore, the second back-pressure count value of the first cycle can be obtained directly from the estimated number of splicing units occupied by the first dequeued cell and the number of splicing units the bus can transmit per cycle, and whether the next cell to be dequeued is allowed to dequeue is judged from the second back-pressure count value of the first cycle.
  • If the cells to be dequeued in the queue are not allowed to dequeue in the Kth cycle, the data in the register is transmitted at the number of splicing units the bus can transmit per cycle until the second back-pressure count value of some Gth cycle is less than or equal to the second preset threshold; it is then determined that the cell to be dequeued can dequeue in the (G+1)th cycle, where G is a positive integer greater than K.
  • scheduling the cell to be dequeued includes: obtaining the head pointer, secondary head pointer, and dequeue sub-pointer of the corresponding first-in first-out queue according to the dequeue port number; calculating the address range and number of cache units to be read according to the number of cache units occupied by the cell to be dequeued and the dequeue sub-pointer; and recombining the data in the cache units occupied by the cell to be dequeued into one cell;
  • the updated dequeue sub-pointer is the original dequeue sub-pointer plus the number of cache units occupied by the cell; if the sum is greater than N, the updated value is the sum minus N. If the dequeue sub-pointer plus the number of cache units occupied by the cell is not greater than N, the head pointer need not be updated; if the sum is greater than N, the head pointer is updated to the secondary head pointer; the cell to be dequeued is the first cell of the first-in first-out queue;
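The modular pointer update above can be written out directly. A sketch, with N (cache units per address) as an assumed value and the head pointers passed in as plain integers:

```python
N = 4  # assumed number of cache units per RAM address

def update_dequeue_pointers(head: int, secondary_head: int,
                            deq_sub: int, cell_units: int):
    """Return (new_head, new_deq_sub) after dequeuing a cell that occupies
    `cell_units` cache units: if the sub-pointer sum exceeds N, it wraps by N
    and the head pointer advances to the secondary head pointer."""
    total = deq_sub + cell_units
    if total > N:
        return secondary_head, total - N
    return head, total
```

With N = 4, a dequeue of 2 units from sub-pointer 3 wraps to sub-pointer 1 and advances the head pointer; from sub-pointer 1 it stays within the same address.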
  • the dequeue port number is the same as the output port number.
  • the method further includes:
  • the correction includes: subtracting from the second back-pressure count value the difference between the estimated value and the actual value, and comparing the corrected second back-pressure count value with the second preset threshold to determine whether the cells to be dequeued in the queue are allowed to dequeue in the next cycle.
  • FIG. 7 is a schematic structural diagram of a data cache device according to an embodiment of the present invention. As shown in FIG. 7, the device includes: a cache module 71 and a processing module 72;
  • the cache module 71 is configured to store the cell to a corresponding first in first out queue according to an input port number of the cell;
  • the processing module 72 is configured to determine, in the current Kth cycle, that the cell to be dequeued can be dequeued, schedule the cell to dequeue, obtain the actual value of the number of splicing units occupied by the cell to be dequeued, and store the cell to be dequeued, by cell splicing, into a register with the same bit width as the bus for data transmission;
  • the determination that the cell to be dequeued can be dequeued is made on the basis that the first back-pressure count value of the (K-1)th cycle is less than or equal to the first preset threshold, the first back-pressure count value of the (K-1)th cycle being obtained from the estimated number of splicing units occupied when the last dequeued cell was dequeued, the number of splicing units the bus can transmit per cycle, and the first back-pressure count value of the (K-2)th cycle;
  • K is a positive integer;
  • the cache unit is one Nth of each address of the RAM; that is, the RAM is divided into a fixed number N of parts, each of which is one cache unit; N is a positive integer whose value can be set according to the data forwarding rate;
  • the bit width of the splicing unit is one Xth of the bit width of a cache unit and can be set according to the data forwarding rate, etc., guaranteeing both that no data is lost with a minimum number of registers and that bus resources are fully used with no empty slots; X is a positive integer.
  • the apparatus further includes: an obtaining module 73, configured to extract the cell length information and cell version information carried in the cell header of the cell, and obtain the number of cache units occupied by the cell according to the cell length information and the cell version information;
  • the cache module 71 is further configured to store the cell information of the cell into the corresponding first-in first-out queue; the cell information includes: the length of the cell, the cell version, and the number of cache units occupied by the cell.
  • the cache module 71 storing the cell into the corresponding first-in first-out queue according to the input port number of the cell includes:
  • the cache module 71 obtains, according to the input port number of the cell, the tail pointer, enqueue sub-pointer, and an idle address of the first-in first-out queue corresponding to the input port number, stores the cell into the first-in first-out queue according to the number of cache units each address holds, and reads the valid cache units occupied by the cell. If the valid cache units do not cross an address, the enqueue sub-pointer of the first-in first-out queue is updated and the idle address is released; if the valid cache units occupied by the cell cross an address, the tail pointer and the enqueue sub-pointer of the first-in first-out queue are updated;
  • the cache module 71 storing the cell into the first-in first-out queue according to the number of cache units each address holds includes:
  • when the cache module 71 determines that the first-in first-out queue is empty or the cache unit at the tail pointer is already full, the cell is divided, from high bits to low bits, into M groups of data with the same bit width as a cache unit, and written to an idle address in order from high bits to low bits;
  • when the cache module 71 determines that the first-in first-out queue is not empty and the cache unit at the tail pointer is not full, the cell is divided, from high bits to low bits, into M groups of data with the same bit width as a cache unit, and the M groups are written in order from high bits to low bits starting at the position of the enqueue sub-pointer; the position of the enqueue sub-pointer receives the group containing the cell's highest bits, and the cache unit of the idle address numbered enqueue sub-pointer minus 1 receives the last group of the divided cell;
  • the valid cache units occupied by the cell are the cache units actually occupied by the cell;
  • the valid cache units not crossing an address means: the number of cache units actually occupied by the cell plus the enqueue sub-pointer is not greater than M, and the enqueue sub-pointer is not 0;
  • the valid cache units occupied by the cell crossing an address means: the first-in first-out queue is empty, or the cache unit at the tail pointer is already full, or the number of cache units actually occupied by the cell plus the enqueue sub-pointer is greater than M;
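The address-crossing rules above reduce to two predicates. The following is a minimal Python sketch under the definitions in the text (M is the number of cache units per RAM address; function and parameter names are illustrative assumptions):

```python
def crosses_address(fifo_empty, tail_unit_full, occupied_units, enq_subptr, m):
    # The valid cache units cross an address when the FIFO is empty, the cache
    # unit at the tail pointer is already full, or the cache units occupied by
    # the cell plus the enqueue sub-pointer exceed M.
    return fifo_empty or tail_unit_full or (occupied_units + enq_subptr > m)

def within_address(occupied_units, enq_subptr, m):
    # No crossing: the occupied units plus the sub-pointer stay within M and
    # the sub-pointer is non-zero (a zero sub-pointer means a fresh address).
    return occupied_units + enq_subptr <= m and enq_subptr != 0
```

For example, with M = 4, a cell occupying 2 cache units enqueued at sub-pointer 1 stays within the current address, while the same cell at sub-pointer 3 crosses into the next address.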
  • the first back-pressure count value of the (K-1)th cycle serves as the basis for deciding whether the cell to be dequeued in the queue is allowed to dequeue in the Kth cycle;
  • the first back-pressure count value of the (K-1)th cycle = the first back-pressure count value of the (K-2)th cycle + the estimated number of splicing units occupied when the last dequeued cell was dequeued - the number of splicing units the bus can transmit per cycle;
  • the estimated number of splicing units occupied when the last dequeued cell was dequeued = the number of cache units occupied by the last dequeued cell multiplied by X.
  • when K is 2, the first back-pressure count value of the (K-2)th cycle is 0, so the first back-pressure count value of the first cycle can be obtained directly from the estimated number of splicing units occupied by the first dequeued cell and the number of splicing units the bus can transmit per cycle, and whether the next cell to be dequeued is allowed to dequeue is decided according to the first back-pressure count value of the first cycle.
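The back-pressure recursion above can be sketched in a few lines of Python. This is a simplified model of the formula only (the parameter names and the sample values below are illustrative assumptions, not taken from the embodiment; idle-cycle behavior when the count would go negative is not modeled):

```python
def backpressure(prev_count, last_cell_cache_units, x, bus_units_per_cycle):
    # count(K-1) = count(K-2)
    #            + estimated splicing units of the last dequeued cell
    #            - splicing units the bus can transmit per cycle,
    # where the estimate is (cache units occupied by that cell) * X.
    estimate = last_cell_cache_units * x
    return prev_count + estimate - bus_units_per_cycle

def may_dequeue(count, threshold):
    # A cell may dequeue in cycle K when count(K-1) <= the preset threshold.
    return count <= threshold
```

For example, with X = 4 and a bus that moves 8 splicing units per cycle, a cell occupying 3 cache units leaves a count of `backpressure(0, 3, 4, 8) == 4` behind it; with a threshold of 4 the next cell may still dequeue.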
  • the processing module 72 is further configured to, when the first back-pressure count value of the (K-1)th cycle is greater than the first preset threshold, disallow dequeuing of the cell to be dequeued in the queue in the Kth cycle, and transmit the data in the register according to the number of splicing units the bus can transmit per cycle, until the first back-pressure count value of the Gth cycle is less than or equal to the first preset threshold, at which point it is determined that the cell to be dequeued can be dequeued in the (G+1)th cycle; where G is a positive integer greater than K.
  • the processing module 72 scheduling the cell to be dequeued includes:
  • the processing module 72 obtains the head pointer, secondary head pointer, and dequeue sub-pointer of the corresponding first-in first-out queue according to the dequeue port number; calculates, from the number of cache units occupied by the cell to be dequeued in the queue and the dequeue sub-pointer, the number range and count of the cache units to be read; recombines the data in the cache units occupied by the cell to be dequeued into one cell; and updates the dequeue sub-pointer to the sum of the original dequeue sub-pointer and the number of cache units occupied by the cell. If the sum is greater than N, the sub-pointer is updated to the sum minus N. If the sum of the dequeue sub-pointer and the number of cache units occupied by the cell is not greater than N, the head pointer need not be updated; if the sum is greater than N, the head pointer is updated to the secondary head pointer; where the cell to be dequeued is the first cell of the first-in first-out queue;
  • the dequeue port number is the same as the input port number.
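The dequeue pointer arithmetic above can be sketched as follows (Python; N is the number of cache units per address, and the sample values in the usage note are illustrative assumptions):

```python
def units_to_read(deq_subptr, cell_units, n):
    # Numbers of the cache units holding the cell, wrapping modulo N when the
    # read crosses into the next address.
    return [(deq_subptr + i) % n for i in range(cell_units)]

def update_dequeue(deq_subptr, cell_units, head, secondary_head, n):
    # New sub-pointer is the old one plus the cell's cache units, minus N on
    # overflow; the head pointer advances to the secondary head pointer only
    # when the sum exceeds N (i.e. the read crossed an address boundary).
    total = deq_subptr + cell_units
    if total > n:
        return total - n, secondary_head
    return total, head
```

With N = 4, a 4-unit cell read from sub-pointer 3 occupies units 3, 0, 1, 2 and advances the head pointer; a 2-unit cell read from sub-pointer 1 leaves the head pointer unchanged.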
  • the apparatus further includes: a correction module 74, configured to correct the first back-pressure count value according to the actual number of splicing units occupied by the cell to be dequeued and the number of cache units occupied by the cell to be dequeued;
  • the actual value of the number of splicing units occupied by the cell to be dequeued is usually less than or equal to the estimated value; therefore, when the actual value differs from the estimated value, the correction includes subtracting from the first back-pressure count value the difference between the estimated value and the actual value. The corrected first back-pressure count value is compared with the first preset threshold to determine whether the cell to be dequeued in the queue is allowed to dequeue in the next cycle.
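The correction step amounts to subtracting the over-estimate from the running count (a minimal Python sketch; the names and sample values are illustrative assumptions):

```python
def correct_backpressure(count, estimated_units, actual_units):
    # The estimate (cache units * X) can only over-count the splicing units a
    # cell really occupies, so when the actual value is smaller, subtract the
    # surplus before comparing the count with the threshold again.
    return count - (estimated_units - actual_units)
```

For example, if a cell was estimated at 12 splicing units but actually occupied 9, a count of 10 is corrected down to 7; when estimate and actual value agree, the count is unchanged.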
  • in the embodiment of the present invention, two or more registers form a register set;
  • a register contains Y virtual units, i.e. the register is divided into Y virtual units, and the bit width of a virtual unit is the same as the bit width of one splicing unit;
  • the processing module 72 storing the cell to be dequeued, by cell splicing, into a register with the same bit width as the bus includes:
  • the processing module 72 obtains the write pointer of the register set and stores the cell into the register corresponding to the write pointer according to the actual number of splicing units occupied by the cell to be dequeued. If the register contains a valid cell, the cell to be dequeued is spliced with the valid cell in units of splicing units, the cell splicing information is recorded, and the write pointer is updated to the sum of the original write pointer and the number of splicing units occupied by the cell to be dequeued; when the sum is greater than or equal to Y, the sum minus Y becomes the new write pointer; the write pointer steps in splicing units;
  • the cell splicing information includes: the splicing position, a cell header identifier, a cell tail identifier, and a cell valid identifier; the splicing position identifies the boundary between two cells;
  • the register is set to allow at most two different cells to be spliced; therefore, the cell valid identifier includes a first cell valid identifier and a second cell valid identifier. When the second cell has not yet been written into the register and the virtual units of the register are not yet full, the second cell valid identifier is invalid.
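The write-pointer wrap described above can be sketched as a single update (Python; Y is the number of virtual units per register, and the sample values are illustrative assumptions):

```python
def advance_write_pointer(write_ptr, spliced_units, y):
    # The write pointer steps in splicing units; when the sum reaches or
    # passes Y (the virtual units in one register) it wraps by subtracting Y,
    # i.e. the remainder of the cell lands in the next register of the set.
    total = write_ptr + spliced_units
    return total - y if total >= y else total
```

With Y = 8, writing 3 splicing units at pointer 5 wraps the pointer to 0; writing 4 units at pointer 5 wraps it to 1; writing 3 units at pointer 2 simply advances it to 5.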
  • the processing module 72 is further configured to, when a register in the register set contains valid cell data, output all data in the register corresponding to the read pointer of the register set to the cell-cache output end, carrying the cell splicing information; as shown in FIG. 5, the read pointer advances register by register, and after the data of register 0 is output, the read pointer points to register 1.
  • FIG. 8 is a schematic structural diagram of a data cache apparatus according to Embodiment 2 of the present invention. As shown in FIG. 8, the apparatus includes: a restoration module 81, a storage module 82, and a scheduling module 83;
  • the restoring module 81 is configured to restore data spliced in a cell splicing manner as an independent cell
  • the storage module 82 is configured to store the cell to a corresponding first in first out queue according to an output port number of the cell;
  • the scheduling module 83 is configured to determine that the cells to be dequeued can be dequeued in the current Kth cycle, and schedule the cells to be dequeued to be dequeued;
  • the determination that the cell to be dequeued can be dequeued is made on the basis that the second back-pressure count value of the (K-1)th cycle is less than or equal to the second preset threshold, the second back-pressure count value of the (K-1)th cycle being obtained from the estimated number of splicing units occupied when the last dequeued cell was dequeued, the number of splicing units the bus can transmit per cycle, and the second back-pressure count value of the (K-2)th cycle; K is a positive integer.
  • the restoring module 81 restores the data spliced in a cell splicing manner into independent cells, including:
  • the restoration module 81 restores the data spliced by cell splicing into independent cells according to the cell splicing information carried in the data;
  • the cell splicing information includes: a splicing location, a cell header identifier, a cell tail identifier, and a cell valid identifier; wherein the splicing location identifies a boundary of two cells.
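Restoration at the output end amounts to splitting a bus word at the recorded boundary. The following is a simplified Python sketch (real splicing information also carries the header and tail identifiers; the field names and list representation here are assumptions for illustration):

```python
def restore_cells(units, splice_pos, first_valid, second_valid):
    # 'units' is one bus word as a list of splicing units; 'splice_pos' marks
    # the boundary between the two cells that may share the word. The two
    # valid identifiers gate which of the cells actually exist.
    cells = []
    if first_valid:
        cells.append(units[:splice_pos])
    if second_valid:
        cells.append(units[splice_pos:])
    return cells
```

A word holding two spliced cells is split at the boundary into two independent cells; a word holding one cell (second valid identifier invalid) is returned whole.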
  • the apparatus further includes: an extraction module 84, configured to extract the cell length, cell version, and other information carried in the cell header of the cell, and obtain the number of cache units occupied by the cell according to the cell length information and the cell version information;
  • the cache unit is one Nth of each address of the RAM; that is, the RAM is divided into a fixed number N of parts, each of which is one cache unit; N is a positive integer whose value can be set according to the data forwarding rate;
  • the storage module 82 is further configured to store the cell information of the cell into the corresponding first-in first-out queue; the cell information includes: the length of the cell, the cell version, and the number of cache units occupied by the cell.
  • the storage module 82 stores the cells into the corresponding first-in first-out queue according to the output port number of the cell, including:
  • the storage module 82 obtains, according to the output port number of the cell, the tail pointer, enqueue sub-pointer, and an idle address of the first-in first-out queue corresponding to the output port number, stores the cell into the first-in first-out queue according to the number of cache units each address holds, and reads the valid cache units occupied by the cell. If the valid cache units do not cross an address, the enqueue sub-pointer of the first-in first-out queue is updated and the idle address is released; if the valid cache units occupied by the cell cross an address, the tail pointer and the enqueue sub-pointer of the first-in first-out queue are updated;
  • the storage module 82 storing the cell into the first-in first-out queue according to the number of cache units each address holds includes:
  • when the storage module 82 determines that the first-in first-out queue is empty or the cache unit at the tail pointer is already full, the cell is divided, from high bits to low bits, into M groups of data with the same bit width as a cache unit, and written to an idle address in order from high bits to low bits;
  • when the storage module 82 determines that the first-in first-out queue is not empty and the cache unit at the tail pointer is not full, the cell is divided, from high bits to low bits, into M groups of data with the same bit width as a cache unit, and the M groups are written in order from high bits to low bits starting at the position of the enqueue sub-pointer; the position of the enqueue sub-pointer receives the group containing the cell's highest bits, and the cache unit of the idle address numbered enqueue sub-pointer minus 1 receives the last group of the divided cell;
  • the valid cache units occupied by the cell are the cache units actually occupied by the cell;
  • the valid cache units not crossing an address means: the number of cache units actually occupied by the cell plus the enqueue sub-pointer is not greater than M, and the enqueue sub-pointer is not 0;
  • the valid cache units occupied by the cell crossing an address means: the first-in first-out queue is empty, or the cache unit at the tail pointer is already full, or the number of cache units actually occupied by the cell plus the enqueue sub-pointer is greater than M;
  • the bit width of the splicing unit is one Xth of the bit width of a cache unit and can be set according to the data forwarding rate, etc., guaranteeing both that no data is lost with a minimum number of registers and that bus resources are fully used with no empty slots;
  • X is a positive integer;
  • the second back-pressure count value serves as the basis for deciding whether the cell to be dequeued in the queue is allowed to dequeue in the next cycle;
  • the second back-pressure count value of the (K-1)th cycle = the second back-pressure count value of the (K-2)th cycle + the estimated number of splicing units occupied when the last dequeued cell was dequeued - the number of splicing units the bus can transmit per cycle;
  • the estimated number of splicing units occupied when the last dequeued cell was dequeued = the number of cache units occupied by the last dequeued cell multiplied by X.
  • since the second back-pressure count value of the (K-2)th cycle is 0 when the first cell is dequeued, the second back-pressure count value of the first cycle can be obtained directly from the estimated number of splicing units occupied by the first dequeued cell and the number of splicing units the bus can transmit per cycle, and whether the next cell to be dequeued is allowed to dequeue is decided according to the second back-pressure count value of the first cycle.
  • the scheduling module 83 is further configured to, when the second back-pressure count value of the (K-1)th cycle is greater than the second preset threshold, disallow dequeuing of the cell to be dequeued in the queue in the Kth cycle and transmit the data in the register according to the number of splicing units the bus can transmit per cycle, until the second back-pressure count value of the Gth cycle is less than or equal to the second preset threshold, at which point it is determined that the cell to be dequeued can be dequeued in the (G+1)th cycle; where G is a positive integer greater than K.
  • the scheduling module 83 scheduling the cell to be dequeued includes:
  • the scheduling module 83 obtains the head pointer, secondary head pointer, and dequeue sub-pointer of the corresponding first-in first-out queue according to the dequeue port number; calculates, from the number of cache units occupied by the cell to be dequeued in the queue and the dequeue sub-pointer, the number range and count of the cache units to be read; recombines the data in the cache units occupied by the cell to be dequeued into one cell and transmits it to the bus; and updates the dequeue sub-pointer to the sum of the original dequeue sub-pointer and the number of cache units occupied by the cell. If the sum is greater than N, the sub-pointer is updated to the sum minus N. If the sum of the dequeue sub-pointer and the number of cache units occupied by the cell is not greater than N, the head pointer need not be updated; if the sum is greater than N, the head pointer is updated to the secondary head pointer; where the cell to be dequeued is the first cell of the first-in first-out queue;
  • the dequeue port number is the same as the output port number.
  • the apparatus further includes: a correction module 85, configured to, after the cell to be dequeued is scheduled to dequeue, obtain the actual value of the number of splicing units occupied by the cell to be dequeued and correct the second back-pressure count value according to the actual value; the correction includes subtracting from the second back-pressure count value the difference between the estimated value and the actual value. The corrected second back-pressure count value is compared with the second preset threshold to determine whether the cell to be dequeued in the queue is allowed to dequeue in the next cycle.
  • the cache module, processing module, obtaining module, correction module, restoration module, storage module, scheduling module, and extraction module in the data cache device proposed in the embodiments of the present invention may all be implemented by a processor, or of course by a specific logic circuit; the processor may be a processor in a mobile terminal or a server. In practical applications, the processor may be a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), or a field programmable gate array (FPGA).
  • if the data caching method is implemented in the form of a software function module and sold or used as a stand-alone product, it may also be stored in a computer-readable storage medium.
  • based on such an understanding, the technical solution of the embodiments of the present invention may, in essence, be embodied in the form of a software product stored in a storage medium and including a number of instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the methods described in the embodiments of the present invention.
  • the foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disk. Thus, the embodiments of the invention are not limited to any specific combination of hardware and software.
  • the embodiment of the present invention further provides a computer storage medium, where the computer storage medium stores a computer program, and the computer program is used to execute the data caching method of the embodiment of the present invention.


Abstract

The present invention discloses a data caching method: a cell is stored into the corresponding first-in first-out queue according to its input port number; in the current Kth cycle it is determined that the cell to be dequeued can be dequeued, the cell is scheduled to dequeue, the actual value of the number of splicing units occupied by the cell is obtained, and the cell is stored, by cell splicing, into a register with the same bit width as the bus. The determination that the cell to be dequeued can be dequeued is made on the basis that the first back-pressure count value of the (K-1)th cycle is less than or equal to the first preset threshold, the first back-pressure count value of the (K-1)th cycle being obtained from the estimated number of splicing units occupied when the last dequeued cell was dequeued, the number of splicing units the bus can transmit per cycle, and the first back-pressure count value of the (K-2)th cycle. The present invention also discloses a data caching device and a storage medium.

Description

Data caching method, device, and storage medium

Technical Field

The present invention relates to the field of data communication technology, and in particular to a data caching method, device, and storage medium in a packet-switched network.

Background

In the field of data communication, the efficiency and size of the data cache space of a switch fabric chip in a packet-switched network, together with the width of the data bus, are critical to the chip's performance, area, and power consumption. To save internal bus resources and cache resources and to improve switching efficiency, a data packet is generally cut into a certain number of cells, transmitted to the switch fabric chip for cell switching and replication, and then reassembled into a data packet. After automatic address lookup and buffering at the input and output ends, a cell passes from the ingress port to the egress port, completing the forwarding and replication functions; FIG. 1 is a schematic diagram of data processing in a large-scale switching network. As the smallest unit of transmission and switching, cells can be processed with fixed or variable length; variable-length cells use cache resources and bus resources more efficiently than fixed-length cells, so variable-length processing is generally adopted.

The cell cache in the switch fabric chip is mainly used to store cell body data while cells wait for the automatic addressing result and for scheduled output. When the network is under no traffic pressure, the required cache space is small; but when the network becomes congested, the cache must absorb the serial data packets in flight on a hundred meters of optical fiber. The cache space is dimensioned according to the switching capacity of the switch fabric chip, to avoid large-scale packet loss under congestion. Existing cell caching approaches generally derive a storage space with some redundancy from the number of links, the fiber length, and the data rate; but as the scale of the switching network and the data rate grow, the cache space must grow accordingly. For example, when the data rate rises from 12.5G to 25G, the cache capacity must double to guarantee lossless forwarding. Moreover, under the premise of variable-length cell switching, cache utilization is also low, below 50% when minimum-size cells are stored; FIG. 2 is a schematic diagram of storage space utilization for cells of different lengths.

In summary, providing a data caching method and device for packet-switched networks that improves the utilization of cache resources and bus resources has become a problem to be solved urgently.
Summary

In view of this, embodiments of the present invention are expected to provide a data caching method, device, and storage medium that can effectively improve the utilization of cache resources and bus resources.

To this end, the technical solution of the embodiments of the present invention is implemented as follows:

An embodiment of the present invention provides a data caching method, the method including:

storing a cell into the corresponding first-in first-out queue according to the input port number of the cell;

determining, in the current Kth cycle, that a cell to be dequeued can be dequeued, scheduling the cell to dequeue, obtaining the actual value of the number of splicing units occupied by the cell, and storing the cell, by cell splicing, into a register with the same bit width as the bus;

where the determination that the cell to be dequeued can be dequeued is made on the basis that the first back-pressure count value of the (K-1)th cycle is less than or equal to a first preset threshold, the first back-pressure count value of the (K-1)th cycle being obtained from the estimated number of splicing units occupied when the last dequeued cell was dequeued, the number of splicing units the bus can transmit per cycle, and the first back-pressure count value of the (K-2)th cycle; K is a positive integer.
In the above solution, before storing the cell into the corresponding first-in first-out queue according to its input port number, the method further includes: extracting the cell length information and cell version information carried by the cell, and obtaining the number of cache units occupied by the cell according to the cell length information and the cell version information.

In the above solution, storing the cell into the corresponding first-in first-out queue according to its input port number includes:

obtaining, according to the input port number of the cell, the tail pointer, enqueue sub-pointer, and an idle address of the first-in first-out queue corresponding to the input port number; storing the cell into the first-in first-out queue according to the number of cache units each address holds; reading the valid cache units occupied by the cell; if the valid cache units do not cross an address, updating the enqueue sub-pointer of the first-in first-out queue and releasing the idle address; and if the valid cache units occupied by the cell cross an address, updating the tail pointer and the enqueue sub-pointer of the first-in first-out queue.

In the above solution, after the cell to be dequeued is scheduled to dequeue, the method further includes:

correcting the first back-pressure count value according to the actual number of splicing units occupied by the cell to be dequeued and the number of cache units occupied by the cell to be dequeued.

In the above solution, storing the cell to be dequeued, by cell splicing, into a register with the same bit width as the bus includes:

looking up the write pointer; storing the cell into the register corresponding to the write pointer according to the actual number of splicing units occupied by the cell to be dequeued; and, if the register contains a valid cell, splicing the cell to be dequeued with the valid cell in units of splicing units, recording the cell splicing information, and updating the write pointer.
An embodiment of the present invention further provides a data caching device, the device including a cache module and a processing module, where

the cache module is configured to store a cell into the corresponding first-in first-out queue according to the input port number of the cell;

the processing module is configured to determine, in the current Kth cycle, that a cell to be dequeued can be dequeued, schedule the cell to dequeue, obtain the actual value of the number of splicing units occupied by the cell, and store the cell, by cell splicing, into a register with the same bit width as the bus for data transmission;

where the determination that the cell to be dequeued can be dequeued is made on the basis that the first back-pressure count value of the (K-1)th cycle is less than or equal to a first preset threshold, the first back-pressure count value of the (K-1)th cycle being obtained from the estimated number of splicing units occupied when the last dequeued cell was dequeued, the number of splicing units the bus can transmit per cycle, and the first back-pressure count value of the (K-2)th cycle; K is a positive integer.

In the above solution, the device further includes an obtaining module, configured to extract the cell length information and cell version information carried by the cell, and obtain the number of cache units occupied by the cell according to the cell length information and the cell version information.

In the above solution, the device further includes a correction module, configured to correct the first back-pressure count value according to the actual number of splicing units occupied by the cell to be dequeued and the number of cache units occupied by the cell to be dequeued.
An embodiment of the present invention further provides a data caching method, the method including:

restoring data spliced by cell splicing into independent cells;

storing each cell into the corresponding first-in first-out queue according to the output port number of the cell;

determining, in the current Kth cycle, that a cell to be dequeued can be dequeued, and scheduling the cell to dequeue;

where the determination that the cell to be dequeued can be dequeued is made on the basis that the second back-pressure count value of the (K-1)th cycle is less than or equal to a second preset threshold, the second back-pressure count value of the (K-1)th cycle being obtained from the estimated number of splicing units occupied when the last dequeued cell was dequeued, the number of splicing units the bus can transmit per cycle, and the second back-pressure count value of the (K-2)th cycle; K is a positive integer.

In the above solution, restoring data spliced by cell splicing into independent cells includes:

restoring the data spliced by cell splicing into independent cells according to the cell splicing information carried in the data.
An embodiment of the present invention further provides a data caching device, the device including a restoration module, a storage module, and a scheduling module, where

the restoration module is configured to restore data spliced by cell splicing into independent cells;

the storage module is configured to store each cell into the corresponding first-in first-out queue according to the output port number of the cell;

the scheduling module is configured to determine, in the current Kth cycle, that a cell to be dequeued can be dequeued, and schedule the cell to dequeue;

where the determination that the cell to be dequeued can be dequeued is made on the basis that the second back-pressure count value of the (K-1)th cycle is less than or equal to a second preset threshold, the second back-pressure count value of the (K-1)th cycle being obtained from the estimated number of splicing units occupied when the last dequeued cell was dequeued, the number of splicing units the bus can transmit per cycle, and the second back-pressure count value of the (K-2)th cycle; K is a positive integer.

An embodiment of the present invention further provides a computer storage medium storing a computer program, the computer program being used to execute the above data caching method of the embodiments of the present invention.

With the data caching method, device, and storage medium provided by the embodiments of the present invention, a cell is stored into the corresponding first-in first-out (FIFO) queue according to its input port number; in the current Kth cycle it is determined that the cell to be dequeued can be dequeued, the cell is scheduled to dequeue, the actual value of the number of splicing units occupied by the cell is obtained, and the cell is stored, by cell splicing, into a register with the same bit width as the bus; the determination that the cell can be dequeued is made on the basis that the first back-pressure count value of the (K-1)th cycle is less than or equal to the first preset threshold, that count value being obtained from the estimated number of splicing units occupied when the last dequeued cell was dequeued, the number of splicing units the bus can transmit per cycle, and the first back-pressure count value of the (K-2)th cycle. In this way, storing cells by cell splicing into a register with the same bit width as the bus improves bus utilization, and the proactive back-pressure processing before cells are dequeued guarantees the cell-switching efficiency and accuracy of the switch fabric chip.
Brief Description of the Drawings

FIG. 1 is a schematic diagram of data processing in a large-scale switching network;

FIG. 2 is a schematic diagram of storage space utilization for cells of different lengths;

FIG. 3 is a schematic diagram of a data caching method according to Embodiment 1 of the present invention;

FIG. 4 is a schematic diagram of cell caching according to the present invention;

FIG. 5 is a schematic diagram of cell splicing according to the present invention;

FIG. 6 is a schematic diagram of a data caching method according to Embodiment 2 of the present invention;

FIG. 7 is a schematic structural diagram of a data caching device according to Embodiment 1 of the present invention;

FIG. 8 is a schematic structural diagram of a data caching device according to Embodiment 2 of the present invention.
Detailed Description

In the embodiments of the present invention, a cell is stored into the corresponding first-in first-out queue according to its input port number; in the current Kth cycle it is determined that the cell to be dequeued can be dequeued, the cell is scheduled to dequeue, the actual value of the number of splicing units occupied by the cell is obtained, and the cell is stored, by cell splicing, into a register with the same bit width as the bus for data transmission; the determination that the cell to be dequeued can be dequeued is made on the basis that the first back-pressure count value of the (K-1)th cycle is less than or equal to the first preset threshold, that count value being obtained from the estimated number of splicing units occupied when the last dequeued cell was dequeued, the number of splicing units the bus can transmit per cycle, and the first back-pressure count value of the (K-2)th cycle; K is a positive integer.
FIG. 3 is a schematic flowchart of the data caching method of Embodiment 1 of the present invention, at the cell-cache input end. As shown in FIG. 3, the flow of the method of this embodiment includes:

Step 301: store the cell into the corresponding first-in first-out queue according to the input port number of the cell;

Before this step, the method further includes: extracting the cell length, cell version, and other information carried in the cell header of the cell, and obtaining the number of cache units occupied by the cell according to the cell length information and the cell version information;

Here, a cache unit is one Nth of each address of a random access memory (RAM); that is, the RAM is divided into a fixed number N of parts, each of which is one cache unit; N is a positive integer whose value can be set according to the data forwarding rate;

While the cell is stored into the corresponding first-in first-out queue according to its input port number, the method further includes: storing the cell information of the cell into the corresponding first-in first-out queue, the cell information including the length of the cell, the cell version, and the number of cache units occupied by the cell.

In one embodiment, this step includes: obtaining, according to the input port number of the cell, the tail pointer, enqueue sub-pointer, and an idle address of the first-in first-out queue corresponding to the input port number; storing the cell into the first-in first-out queue according to the number of cache units each address holds; reading the valid cache units occupied by the cell; if the valid cache units do not cross an address, updating the enqueue sub-pointer of the first-in first-out queue and releasing the idle address; and if the valid cache units occupied by the cell cross an address, updating the tail pointer and the enqueue sub-pointer of the first-in first-out queue. FIG. 4 is a schematic diagram of cell caching according to the present invention;

Storing the cell into the first-in first-out queue according to the number of cache units each address holds includes:

when the first-in first-out queue is empty or the cache unit at the tail pointer is already full, dividing the cell, from high bits to low bits, into M groups of data with the same bit width as a cache unit, and writing the cell to an idle address in order from high bits to low bits;

when the first-in first-out queue is not empty and the cache unit at the tail pointer is not full, dividing the cell, from high bits to low bits, into M groups of data with the same bit width as a cache unit, and writing the M groups of data in order from high bits to low bits starting at the position of the enqueue sub-pointer; the position of the enqueue sub-pointer receives the group containing the cell's highest bits, and the cache unit of the idle address numbered enqueue sub-pointer minus 1 receives the last group of the divided cell;

Here, M equals the number of cache units contained in one address, i.e. M = N; M is a positive integer;

the valid cache units occupied by the cell are the cache units actually occupied by the cell;

the valid cache units not crossing an address means: the number of cache units actually occupied by the cell plus the enqueue sub-pointer is not greater than M, and the enqueue sub-pointer is not 0;

the valid cache units occupied by the cell crossing an address means: the first-in first-out queue is empty, or the cache unit at the tail pointer is already full, or the number of cache units actually occupied by the cell plus the enqueue sub-pointer is greater than M;

when the number of cache units actually occupied by the cell is less than M, the last group of the divided cell is invalid.
Step 302: determine, in the current Kth cycle, that the cell to be dequeued can be dequeued, schedule the cell to dequeue, obtain the actual value of the number of splicing units occupied by the cell, and store the cell, by cell splicing, into a register with the same bit width as the bus;

Here, the bit width of a splicing unit is one Xth of the bit width of a cache unit and can be set according to the data forwarding rate, etc., guaranteeing both that no data is lost with a minimum number of registers and that bus resources are fully used with no empty slots; X is a positive integer;

The determination that the cell to be dequeued can be dequeued is made on the basis that the first back-pressure count value of the (K-1)th cycle is less than or equal to the first preset threshold, the first back-pressure count value of the (K-1)th cycle being obtained from the estimated number of splicing units occupied when the last dequeued cell was dequeued, the number of splicing units the bus can transmit per cycle, and the first back-pressure count value of the (K-2)th cycle; K is a positive integer;

The first back-pressure count value serves as the basis for deciding whether the cell to be dequeued in the queue is allowed to dequeue in the next cycle;

the first back-pressure count value of the (K-1)th cycle = the first back-pressure count value of the (K-2)th cycle + the estimated number of splicing units occupied when the last dequeued cell was dequeued - the number of splicing units the bus can transmit per cycle;

the estimated number of splicing units occupied when the last dequeued cell was dequeued = the number of cache units occupied by the last dequeued cell multiplied by X.

When K is 1, i.e. when the first cell to be dequeued leaves the first-in first-out queue, no other cell has been dequeued before it, so the first back-pressure count value is still 0, and the cell can be dequeued and the subsequent operations performed directly;

When K is 2, i.e. when the first cell to be dequeued has already left the queue, no other cell was dequeued before it, so the first back-pressure count value of the (K-2)th cycle is 0; the first back-pressure count value of the first cycle can therefore be obtained directly from the estimated number of splicing units occupied by the first dequeued cell and the number of splicing units the bus can transmit per cycle, and whether the next cell to be dequeued is allowed to dequeue is decided according to the first back-pressure count value of the first cycle.

In one embodiment, when the first back-pressure count value of the (K-1)th cycle is greater than the first preset threshold, the cell to be dequeued in the queue is not allowed to dequeue in the Kth cycle, and the data in the register is transmitted according to the number of splicing units the bus can transmit per cycle, until the first back-pressure count value of the Gth cycle is less than or equal to the first preset threshold, at which point it is determined that the cell to be dequeued can be dequeued in the (G+1)th cycle; G is a positive integer greater than K.

In one embodiment, scheduling the cell to be dequeued includes: obtaining the head pointer, secondary head pointer, and dequeue sub-pointer of the corresponding first-in first-out queue according to the dequeue port number; calculating, from the number of cache units occupied by the cell to be dequeued in the queue and the dequeue sub-pointer, the number range and count of the cache units to be read; recombining the data in the cache units occupied by the cell into one cell; and updating the dequeue sub-pointer to the sum of the original dequeue sub-pointer and the number of cache units occupied by the cell; if the sum is greater than N, it is updated to the sum minus N; if the sum of the dequeue sub-pointer and the number of cache units occupied by the cell is not greater than N, the head pointer need not be updated; if the sum is greater than N, the head pointer is updated to the secondary head pointer; the cell to be dequeued is the first cell of the first-in first-out queue;

Here, the dequeue port number is the same as the input port number.

In one embodiment, after the cell to be dequeued is scheduled to dequeue, the method further includes:

correcting the first back-pressure count value according to the actual value of the number of splicing units occupied by the cell to be dequeued. Since the actual value is usually less than or equal to the estimated value, when the actual value differs from the estimated value the correction includes: subtracting from the first back-pressure count value the difference between the estimated value and the actual value. The corrected first back-pressure count value is compared with the first preset threshold to determine whether the cell to be dequeued in the queue is allowed to dequeue in the next cycle.
In one embodiment of the present invention, two or more registers form a register set;

a register contains Y virtual units, i.e. the register is divided into Y virtual units, and the bit width of a virtual unit is the same as the bit width of one splicing unit;

storing the cell to be dequeued, by cell splicing, into a register with the same bit width as the bus includes:

obtaining the write pointer of the register set, and storing the cell into the register corresponding to the write pointer according to the actual number of splicing units occupied by the cell to be dequeued; if the register contains a valid cell, splicing the cell to be dequeued with the valid cell in units of splicing units, recording the cell splicing information, and updating the write pointer to the sum of the original write pointer and the number of splicing units occupied by the cell to be dequeued; when the sum is greater than or equal to Y, the sum minus Y becomes the new write pointer; the write pointer steps in splicing units. FIG. 5 is a schematic diagram of cell splicing according to the present invention;

Here, the cell splicing information includes: the splicing position, a cell header identifier, a cell tail identifier, and a cell valid identifier; the splicing position identifies the boundary between two cells;

In the embodiment of the present invention the register is set to allow at most two different cells to be spliced; therefore, the cell valid identifier includes a first cell valid identifier and a second cell valid identifier. When the second cell has not yet been written into the register and the virtual units of the register are not yet full, the second cell valid identifier is invalid.

In one embodiment, when a register in the register set contains valid cell data, all data in the register corresponding to the read pointer of the register set is output to the cell-cache output end, carrying the cell splicing information; as shown in FIG. 5, the read pointer advances register by register, and after the data of register 0 is output, the read pointer points to register 1.
FIG. 6 is a schematic diagram of the data caching method of Embodiment 2 of the present invention, at the cell-cache output end. As shown in FIG. 6, the flow of the method of this embodiment includes:

Step 601: restore data spliced by cell splicing into independent cells;

Here, when the data output from the registers arrives, the first operation to perform is restoring the data spliced by cell splicing into independent cells;

This step includes: restoring the data spliced by cell splicing into independent cells according to the cell splicing information carried in the data;

The cell splicing information includes: the splicing position, a cell header identifier, a cell tail identifier, and a cell valid identifier; the splicing position identifies the boundary between two cells.
Step 602: store each cell into the corresponding first-in first-out queue according to the output port number of the cell;

Before this step, the method further includes: extracting the cell length, cell version, and other information carried in the cell header of the cell, and obtaining the number of cache units occupied by the cell according to the cell length information and the cell version information;

Here, a cache unit is one Nth of each address of the RAM; that is, the RAM is divided into a fixed number N of parts, each of which is one cache unit; N is a positive integer whose value can be set according to the data forwarding rate;

While the cell is stored into the corresponding first-in first-out queue according to its output port number, the method further includes: storing the cell information of the cell into the corresponding first-in first-out queue, the cell information including the length of the cell, the cell version, and the number of cache units occupied by the cell.

In one embodiment, this step includes: obtaining, according to the output port number of the cell, the tail pointer, enqueue sub-pointer, and an idle address of the first-in first-out queue corresponding to the output port number; storing the cell into the first-in first-out queue according to the number of cache units each address holds; reading the valid cache units occupied by the cell; if the valid cache units do not cross an address, updating the enqueue sub-pointer of the first-in first-out queue and releasing the idle address; and if the valid cache units occupied by the cell cross an address, updating the tail pointer and the enqueue sub-pointer of the first-in first-out queue;

Storing the cell into the first-in first-out queue according to the number of cache units each address holds includes:

when the first-in first-out queue is empty or the cache unit at the tail pointer is already full, dividing the cell, from high bits to low bits, into M groups of data with the same bit width as a cache unit, and writing the cell to an idle address in order from high bits to low bits;

when the first-in first-out queue is not empty and the cache unit at the tail pointer is not full, dividing the cell, from high bits to low bits, into M groups of data with the same bit width as a cache unit, and writing the M groups of data in order from high bits to low bits starting at the position of the enqueue sub-pointer; the position of the enqueue sub-pointer receives the group containing the cell's highest bits, and the cache unit of the idle address numbered enqueue sub-pointer minus 1 receives the last group of the divided cell;

Here, M equals the number of cache units contained in one address, i.e. M = N; M is a positive integer;

the valid cache units occupied by the cell are the cache units actually occupied by the cell;

the valid cache units not crossing an address means: the number of cache units actually occupied by the cell plus the enqueue sub-pointer is not greater than M, and the enqueue sub-pointer is not 0;

the valid cache units occupied by the cell crossing an address means: the first-in first-out queue is empty, or the cache unit at the tail pointer is already full, or the number of cache units actually occupied by the cell plus the enqueue sub-pointer is greater than M;

when the number of cache units actually occupied by the cell is less than M, the last group of the divided cell is invalid.
Step 603: determine, in the current Kth cycle, that the cell to be dequeued can be dequeued, and schedule the cell to dequeue;

Here, the determination that the cell to be dequeued can be dequeued is made on the basis that the second back-pressure count value of the (K-1)th cycle is less than or equal to the second preset threshold, the second back-pressure count value of the (K-1)th cycle being obtained from the estimated number of splicing units occupied when the last dequeued cell was dequeued, the number of splicing units the bus can transmit per cycle, and the second back-pressure count value of the (K-2)th cycle; K is a positive integer;

The bit width of a splicing unit is one Xth of the bit width of a cache unit and can be set according to the data forwarding rate, etc., guaranteeing both that no data is lost with a minimum number of registers and that bus resources are fully used with no empty slots; X is a positive integer;

The second back-pressure count value serves as the basis for deciding whether the cell to be dequeued in the queue is allowed to dequeue in the next cycle;

the second back-pressure count value of the (K-1)th cycle = the second back-pressure count value of the (K-2)th cycle + the estimated number of splicing units occupied when the last dequeued cell was dequeued - the number of splicing units the bus can transmit per cycle;

the estimated number of splicing units occupied when the last dequeued cell was dequeued = the number of cache units occupied by the last dequeued cell multiplied by X.

When K is 1, i.e. when the first cell to be dequeued leaves the first-in first-out queue, no other cell has been dequeued before it, so the second back-pressure count value is still 0, and the cell can be dequeued and the subsequent operations performed directly;

When K is 2, i.e. when the first cell to be dequeued has already left the queue, no other cell was dequeued before it, so the second back-pressure count value of the (K-2)th cycle is 0; the second back-pressure count value of the first cycle can therefore be obtained directly from the estimated number of splicing units occupied by the first dequeued cell and the number of splicing units the bus can transmit per cycle, and whether the next cell to be dequeued is allowed to dequeue is decided according to the second back-pressure count value of the first cycle.

In one embodiment, when the second back-pressure count value of the (K-1)th cycle is greater than the second preset threshold, the cell to be dequeued in the queue is not allowed to dequeue in the Kth cycle, and the data in the register is transmitted according to the number of splicing units the bus can transmit per cycle, until the second back-pressure count value of the Gth cycle is less than or equal to the second preset threshold, at which point it is determined that the cell to be dequeued can be dequeued in the (G+1)th cycle; G is a positive integer greater than K.

In one embodiment, scheduling the cell to be dequeued includes: obtaining the head pointer, secondary head pointer, and dequeue sub-pointer of the corresponding first-in first-out queue according to the dequeue port number; calculating, from the number of cache units occupied by the cell to be dequeued in the queue and the dequeue sub-pointer, the number range and count of the cache units to be read; recombining the data in the cache units occupied by the cell into one cell and transmitting it to the bus; and updating the dequeue sub-pointer to the sum of the original dequeue sub-pointer and the number of cache units occupied by the cell; if the sum is greater than N, it is updated to the sum minus N; if the sum of the dequeue sub-pointer and the number of cache units occupied by the cell is not greater than N, the head pointer need not be updated; if the sum is greater than N, the head pointer is updated to the secondary head pointer; the cell to be dequeued is the first cell of the first-in first-out queue;

Here, the dequeue port number is the same as the output port number.

In one embodiment, after the cell to be dequeued is scheduled to dequeue, the method further includes:

obtaining the actual value of the number of splicing units occupied by the cell to be dequeued, and correcting the second back-pressure count value according to the actual value. Since the actual value is usually less than or equal to the estimated value, when the actual value differs from the estimated value the correction includes: subtracting from the second back-pressure count value the difference between the estimated value and the actual value. The corrected second back-pressure count value is compared with the second preset threshold to determine whether the cell to be dequeued in the queue is allowed to dequeue in the next cycle.
FIG. 7 is a schematic structural diagram of the data caching device of Embodiment 1 of the present invention. As shown in FIG. 7, the device includes a cache module 71 and a processing module 72, where

the cache module 71 is configured to store a cell into the corresponding first-in first-out queue according to the input port number of the cell;

the processing module 72 is configured to determine, in the current Kth cycle, that a cell to be dequeued can be dequeued, schedule the cell to dequeue, obtain the actual value of the number of splicing units occupied by the cell, and store the cell, by cell splicing, into a register with the same bit width as the bus for data transmission;

where the determination that the cell to be dequeued can be dequeued is made on the basis that the first back-pressure count value of the (K-1)th cycle is less than or equal to the first preset threshold, the first back-pressure count value of the (K-1)th cycle being obtained from the estimated number of splicing units occupied when the last dequeued cell was dequeued, the number of splicing units the bus can transmit per cycle, and the first back-pressure count value of the (K-2)th cycle; K is a positive integer;

a cache unit is one Nth of each address of the RAM; that is, the RAM is divided into a fixed number N of parts, each of which is one cache unit; N is a positive integer whose value can be set according to the data forwarding rate;

the bit width of a splicing unit is one Xth of the bit width of a cache unit and can be set according to the data forwarding rate, etc., guaranteeing both that no data is lost with a minimum number of registers and that bus resources are fully used with no empty slots; X is a positive integer.

In one embodiment, the device further includes: an obtaining module 73, configured to extract the cell length information and cell version information carried in the cell header of the cell, and obtain the number of cache units occupied by the cell according to the cell length information and the cell version information;

Correspondingly, the cache module 71 is further configured to store the cell information of the cell into the corresponding first-in first-out queue; the cell information includes: the length of the cell, the cell version, and the number of cache units occupied by the cell.
In an embodiment, the cache module 71 storing a cell into the corresponding FIFO queue according to the input port number of the cell includes:
the cache module 71 acquires, according to the input port number of the cell, the tail pointer, enqueue sub-pointer and idle address of the FIFO queue corresponding to that input port number; stores the cell into the FIFO queue, taking as the unit of length the number of buffer units contained in each address; and reads the valid buffer units occupied by the cell; if the valid buffer units do not cross an address boundary, it updates the enqueue sub-pointer of the FIFO queue and releases the idle address; if the valid buffer units occupied by the cell cross an address boundary, it updates the tail pointer and the enqueue sub-pointer of the FIFO queue;
here, the cache module 71 storing the cell into the FIFO queue, taking as the unit of length the number of buffer units contained in each address, includes:
when the cache module 71 determines that the FIFO queue is empty or the buffer units at the tail pointer are already full, it divides the cell, from the most significant bits to the least significant bits, into M groups of data each as wide as a buffer unit, and writes the cell into the idle address in order from the most significant bits to the least significant bits;
when it determines that the FIFO queue is not empty and the buffer units at the tail pointer are not full, it divides the cell, from the most significant bits to the least significant bits, into M groups of data each as wide as a buffer unit, and writes the M groups of the cell, in order from the most significant bits, starting at the position of the enqueue sub-pointer: the group containing the most significant bits of the cell is written at the position of the enqueue sub-pointer, and the last group of the divided cell is written into the buffer unit of the idle address whose number is the enqueue sub-pointer minus 1.
Here, M equals the number of buffer units contained in one address, i.e. M = N; M is a positive integer.
The valid buffer units occupied by the cell are the buffer units actually occupied by the cell.
The valid buffer units do not cross an address boundary when the number of buffer units actually occupied by the cell plus the enqueue sub-pointer is not greater than M and the enqueue sub-pointer is non-zero.
The valid buffer units occupied by the cell cross an address boundary when the FIFO queue is empty, or the buffer units at the tail pointer are already full, or the number of buffer units actually occupied by the cell plus the enqueue sub-pointer is greater than M.
When the number of buffer units actually occupied by the cell is less than M, the last group of data into which the cell is divided is invalid.
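The enqueue split described above can be sketched as follows. This is a simplified byte-level model under assumed names; the real design operates on bit groups, and the wrap of the write position into the idle address is reduced here to modular arithmetic.

```python
def split_cell(cell, unit_bytes, M):
    """Divide a cell into M buffer-unit-sized groups, most significant part
    first; only ceil(len(cell)/unit_bytes) groups are valid, the remaining
    groups are invalid padding."""
    groups = [cell[i * unit_bytes:(i + 1) * unit_bytes] for i in range(M)]
    valid = -(-len(cell) // unit_bytes)   # buffer units actually occupied
    return groups, valid

def write_positions(enq_sub_ptr, valid_units, M):
    """Buffer-unit indices written for the valid groups, starting at the
    enqueue sub-pointer and wrapping (modulo M) into the idle address."""
    return [(enq_sub_ptr + i) % M for i in range(valid_units)]
```

With M = 4 and 3-byte buffer units, an 8-byte cell occupies 3 valid groups, and writing from sub-pointer 2 touches positions 2, 3 and 0.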
In an embodiment, the first back-pressure count value of cycle K-1 serves as the basis for deciding whether the cell to be dequeued from the queue is allowed to dequeue in cycle K.
First back-pressure count value of cycle K-1 = first back-pressure count value of cycle K-2 + estimated number of splicing units occupied by the previously dequeued cell - number of splicing units the bus can transfer per cycle.
Estimated number of splicing units occupied by the previously dequeued cell = number of buffer units occupied by that cell multiplied by X.
When K is 1, i.e. when the first cell to be dequeued from the FIFO queue dequeues, no other cell has dequeued before it, so the first back-pressure count value remains 0 and the cell to be dequeued can dequeue directly, with the subsequent operations performed.
When K is 2, i.e. the first cell to be dequeued has already dequeued, no other cell dequeued before the first one, so the first back-pressure count value of cycle K-2 is 0; therefore, the first back-pressure count value of cycle 1 can be obtained directly from the estimated number of splicing units occupied by the first dequeued cell and the number of splicing units the bus can transfer per cycle, and whether the next cell to be dequeued is allowed to dequeue is decided according to the first back-pressure count value of cycle 1.
In an embodiment, the processing module 72 is further configured such that, when the first back-pressure count value of cycle K-1 is greater than the first preset threshold, the cell to be dequeued from the queue is not allowed to dequeue in cycle K; the data in the registers is transferred at the number of splicing units the bus can transfer per cycle, until the first back-pressure count value of some cycle G is less than or equal to the first preset threshold, whereupon it is determined that the cell to be dequeued can dequeue in cycle G+1; G is a positive integer greater than K.
In an embodiment, the processing module 72 scheduling the cell to be dequeued to dequeue includes:
the processing module 72 acquires, according to the dequeue port number, the head pointer, secondary head pointer and dequeue sub-pointer of the corresponding FIFO queue; computes, according to the number of buffer units occupied by the cell to be dequeued from the queue and the dequeue sub-pointer, the number range and count of the buffer units to be read; recombines the data in the buffer units occupied by the cell to be dequeued into one cell; updates the dequeue sub-pointer to the sum of the original dequeue sub-pointer and the number of buffer units occupied by the cell, and, if the sum is greater than N, to the sum minus N; if the sum of the dequeue sub-pointer and the number of buffer units occupied by the cell is not greater than N, the head pointer need not be updated; if the sum is greater than N, the head pointer is updated to the secondary head pointer; the cell to be dequeued is the first cell of the FIFO queue.
Here, the dequeue port number is identical to the input port number.
In an embodiment, the device further includes a correction module 74, configured to correct the first back-pressure count value according to the actual number of splicing units occupied by the cell to be dequeued and the number of buffer units occupied by the cell to be dequeued.
Here, since the actual number of splicing units occupied by the dequeued cell is usually less than or equal to the estimated number, when the actual value differs from the estimate the correction includes: subtracting from the first back-pressure count value the difference between the estimate and the actual value. The corrected first back-pressure count value is then compared with the first preset threshold to decide whether the cell to be dequeued from the queue is allowed to dequeue in the next cycle.
In an embodiment of the present invention, two or more registers form a register group.
Each register contains Y virtual units, i.e. the register is divided into Y virtual units, the bit width of a virtual unit being equal to the bit width of one splicing unit.
The processing module 72 storing the cell to be dequeued, by way of cell splicing, into a register whose bit width equals the bus bit width includes:
the processing module 72 acquires the write pointer of the register group and, according to the actual number of splicing units occupied by the cell to be dequeued, stores the cell into the register corresponding to the write pointer; if that register contains a valid cell, it splices the cell to be dequeued with the valid cell in units of splicing units and records the cell splicing information; it updates the write pointer to the sum of the number of splicing units occupied by the cell to be dequeued and the original write pointer, and when the sum is greater than or equal to Y, the sum minus Y becomes the new write pointer; the write pointer steps in units of splicing units.
Here, the cell splicing information includes the splice position, the cell head flag, the cell tail flag and the cell valid flags, where the splice position marks the boundary between two cells.
In this embodiment of the present invention, the register is set to allow at most two different cells to be spliced; the cell valid flags therefore include a first-cell valid flag and a second-cell valid flag, and when the second cell has not yet been input into the register and the virtual units of the register are not yet full, the second-cell valid flag is invalid.
In an embodiment, the processing module 72 is further configured to, upon determining that a register in the register group contains valid cell data, output all the data in the register corresponding to the read pointer of the register group to the cell-buffer output, carrying the cell splicing information; as shown in Fig. 5, the read pointer changes in units of registers: after the data of register 0 is output, the read pointer points to register 1.
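The write-pointer update and splicing record above can be sketched as follows. This is a hypothetical model: `Y`, the field names and the flag conventions are assumptions, not the patent's register layout.

```python
Y = 8  # virtual (splicing) units per register (assumed example value)

def splice_in(write_ptr, cell_units):
    """Place a dequeued cell at the write pointer, advance the pointer in
    splicing-unit steps modulo Y, and record splicing information; the
    splice position marks the boundary between the at most two cells
    sharing the register."""
    new_ptr = write_ptr + cell_units
    if new_ptr >= Y:
        new_ptr -= Y                      # wrap within the register group
    info = {
        "splice_pos": write_ptr,          # boundary between the two cells
        "head_flag": True,                # cell head present
        "tail_flag": True,                # cell tail present
        "first_valid": True,              # first-cell valid flag
        "second_valid": write_ptr != 0,   # set only when spliced after another cell
    }
    return new_ptr, info
```

For example, writing a 5-unit cell at pointer 6 wraps the pointer to 3 and records splice position 6, while writing a 4-unit cell into an empty register leaves the second-cell valid flag unset.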
Fig. 8 is a schematic diagram of the composition of a data caching device according to a second embodiment of the present invention. As shown in Fig. 8, the device includes a restoration module 81, a storage module 82 and a scheduling module 83, wherein
the restoration module 81 is configured to restore data spliced by way of cell splicing into independent cells;
the storage module 82 is configured to store a cell into the corresponding FIFO queue according to the output port number of the cell;
the scheduling module 83 is configured to determine, in the current cycle K, that a cell to be dequeued can dequeue, and to schedule the cell to be dequeued to dequeue;
here, it is determined that the cell to be dequeued can dequeue when the second back-pressure count value of cycle K-1 is less than or equal to a second preset threshold; the second back-pressure count value of cycle K-1 is obtained from the estimated number of splicing units occupied by the previously dequeued cell, the number of splicing units the bus can transfer per cycle, and the second back-pressure count value of cycle K-2; K is a positive integer.
In an embodiment, the restoration module 81 restoring data spliced by way of cell splicing into independent cells includes:
the restoration module 81 restores the spliced data into independent cells according to the cell splicing information carried in the data;
the cell splicing information includes the splice position, the cell head flag, the cell tail flag and the cell valid flags, where the splice position marks the boundary between two cells.
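The restoration step can be sketched as below; this is a simplified byte-level model with hypothetical field names, splitting the bus data at the recorded splice position.

```python
def restore_cells(data, info):
    """Split spliced bus data back into independent cells at the recorded
    splice position, keeping only the cells whose valid flag is set."""
    pos = info["splice_pos"]
    cells = []
    if info["first_valid"]:
        cells.append(data[:pos])   # cell before the boundary
    if info["second_valid"]:
        cells.append(data[pos:])   # cell after the boundary
    return cells
```

For instance, 5 bytes spliced at position 3 with both valid flags set are restored into a 3-byte cell and a 2-byte cell.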
In an embodiment, the device further includes an extraction module 84, configured to extract information such as the cell length and cell version carried in the cell header, and to obtain from the cell length information and cell version information the number of buffer units occupied by the cell.
Here, a buffer unit is one N-th of each RAM address, i.e. the RAM is divided into N fixed parts, each part being one buffer unit; N is a positive integer and may be set according to, for example, the data forwarding rate.
Correspondingly, the storage module 82 is further configured to store the cell information of the cell into the corresponding FIFO queue; the cell information includes the length of the cell, the cell version, and the number of buffer units occupied by the cell.
In an embodiment, the storage module 82 storing a cell into the corresponding FIFO queue according to the output port number of the cell includes:
the storage module 82 acquires, according to the output port number of the cell, the tail pointer, enqueue sub-pointer and idle address of the FIFO queue corresponding to that output port number; stores the cell into the FIFO queue, taking as the unit of length the number of buffer units contained in each address; and reads the valid buffer units occupied by the cell; if the valid buffer units do not cross an address boundary, it updates the enqueue sub-pointer of the FIFO queue and releases the idle address; if the valid buffer units occupied by the cell cross an address boundary, it updates the tail pointer and the enqueue sub-pointer of the FIFO queue;
here, the storage module 82 storing the cell into the FIFO queue, taking as the unit of length the number of buffer units contained in each address, includes:
when the storage module 82 determines that the FIFO queue is empty or the buffer units at the tail pointer are already full, it divides the cell, from the most significant bits to the least significant bits, into M groups of data each as wide as a buffer unit, and writes the cell into the idle address in order from the most significant bits to the least significant bits;
when the storage module 82 determines that the FIFO queue is not empty and the buffer units at the tail pointer are not full, it divides the cell, from the most significant bits to the least significant bits, into M groups of data each as wide as a buffer unit, and writes the M groups of the cell, in order from the most significant bits, starting at the position of the enqueue sub-pointer: the group containing the most significant bits of the cell is written at the position of the enqueue sub-pointer, and the last group of the divided cell is written into the buffer unit of the idle address whose number is the enqueue sub-pointer minus 1.
Here, M equals the number of buffer units contained in one address, i.e. M = N; M is a positive integer.
The valid buffer units occupied by the cell are the buffer units actually occupied by the cell.
The valid buffer units do not cross an address boundary when the number of buffer units actually occupied by the cell plus the enqueue sub-pointer is not greater than M and the enqueue sub-pointer is non-zero.
The valid buffer units occupied by the cell cross an address boundary when the FIFO queue is empty, or the buffer units at the tail pointer are already full, or the number of buffer units actually occupied by the cell plus the enqueue sub-pointer is greater than M.
When the number of buffer units actually occupied by the cell is less than M, the last group of data into which the cell is divided is invalid.
In an embodiment, the bit width of a splicing unit is one X-th of the bit width of a buffer unit and may be set according to, for example, the data forwarding rate, such that no data is lost with a minimum number of registers while the bus resources are fully utilized without idle cycles; X is a positive integer.
The second back-pressure count value serves as the basis for deciding whether the cell to be dequeued from the queue is allowed to dequeue in the next cycle.
Second back-pressure count value of cycle K-1 = second back-pressure count value of cycle K-2 + estimated number of splicing units occupied by the previously dequeued cell - number of splicing units the bus can transfer per cycle.
Estimated number of splicing units occupied by the previously dequeued cell = number of buffer units occupied by that cell multiplied by X.
When K is 1, i.e. when the first cell to be dequeued from the FIFO queue dequeues, no other cell has dequeued before it, so the second back-pressure count value remains 0 and the cell to be dequeued can dequeue directly, with the subsequent operations performed.
When K is 2, i.e. the first cell to be dequeued has already dequeued, no other cell dequeued before the first one, so the second back-pressure count value of cycle K-2 is 0; therefore, the second back-pressure count value of cycle 1 can be obtained directly from the estimated number of splicing units occupied by the first dequeued cell and the number of splicing units the bus can transfer per cycle, and whether the next cell to be dequeued is allowed to dequeue is decided according to the second back-pressure count value of cycle 1.
In an embodiment, the scheduling module 83 is further configured such that, when the second back-pressure count value of cycle K-1 is greater than the second preset threshold, the cell to be dequeued from the queue is not allowed to dequeue in cycle K; the data in the registers is transferred at the number of splicing units the bus can transfer per cycle, until the second back-pressure count value of some cycle G is less than or equal to the second preset threshold, whereupon it is determined that the cell to be dequeued can dequeue in cycle G+1; G is a positive integer greater than K.
In an embodiment, the scheduling module 83 scheduling the cell to be dequeued to dequeue includes:
the scheduling module 83 acquires, according to the egress port number, the head pointer, secondary head pointer and dequeue sub-pointer of the corresponding FIFO queue; computes, according to the number of buffer units occupied by the cell to be dequeued from the queue and the dequeue sub-pointer, the number range and count of the buffer units to be read; recombines the data in the buffer units occupied by the cell to be dequeued into one cell and transfers it onto the bus; updates the dequeue sub-pointer to the sum of the original dequeue sub-pointer and the number of buffer units occupied by the cell, and, if the sum is greater than N, to the sum minus N; if the sum of the dequeue sub-pointer and the number of buffer units occupied by the cell is not greater than N, the head pointer need not be updated; if the sum is greater than N, the head pointer is updated to the secondary head pointer; the cell to be dequeued is the first cell of the FIFO queue.
Here, the egress port number is identical to the output port number.
In an embodiment, the device further includes an amendment module 85, configured to, after the cell to be dequeued is scheduled to dequeue,
acquire the actual number of splicing units occupied by the dequeued cell, and correct the second back-pressure count value according to the actual value. Here, since the actual number of splicing units occupied by the dequeued cell is usually less than or equal to the estimated number, when the actual value differs from the estimate the correction includes: subtracting from the second back-pressure count value the difference between the estimate and the actual value. The corrected second back-pressure count value is then compared with the second preset threshold to decide whether the cell to be dequeued from the queue is allowed to dequeue in the next cycle.
The cache module, processing module, acquisition module, correction module, restoration module, storage module, scheduling module, extraction module and amendment module in the data caching devices proposed in the embodiments of the present invention may all be implemented by a processor, or of course by specific logic circuits; the processor may be a processor on a mobile terminal or a server, and in practical applications may be a central processing unit (CPU), a micro processing unit (MPU), a digital signal processor (DSP), a field-programmable gate array (FPGA), or the like.
In the embodiments of the present invention, when the data caching method above is implemented in the form of software function modules and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the embodiments of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to perform all or part of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk or an optical disc. Thus, the embodiments of the present invention are not limited to any specific combination of hardware and software.
Correspondingly, an embodiment of the present invention further provides a computer storage medium in which a computer program is stored, the computer program being used to perform the data caching method of the embodiments of the present invention described above.
The above are merely preferred embodiments of the present invention and are not intended to limit the protection scope of the present invention.

Claims (13)

  1. A data caching method, the method comprising:
    storing a cell into a corresponding first-in first-out (FIFO) queue according to an input port number of the cell;
    determining, in a current cycle K, that a cell to be dequeued can dequeue, scheduling the cell to be dequeued to dequeue, acquiring an actual number of splicing units occupied by the cell to be dequeued, and storing the cell to be dequeued, by way of cell splicing, into a register whose bit width equals a bus bit width;
    wherein it is determined that the cell to be dequeued can dequeue when a first back-pressure count value of cycle K-1 is less than or equal to a first preset threshold; the first back-pressure count value of cycle K-1 is obtained from an estimated number of splicing units occupied by a previously dequeued cell, a number of splicing units the bus can transfer per cycle, and the first back-pressure count value of cycle K-2; K is a positive integer.
  2. The method according to claim 1, wherein before storing the cell into the corresponding FIFO queue according to the input port number of the cell, the method further comprises: extracting cell length information and cell version information carried in the cell, and obtaining, according to the cell length information and the cell version information, a number of buffer units occupied by the cell.
  3. The method according to claim 1 or 2, wherein storing the cell into the corresponding FIFO queue according to the input port number of the cell comprises:
    acquiring, according to the input port number of the cell, a tail pointer, an enqueue sub-pointer and an idle address of the FIFO queue corresponding to the input port number; storing the cell into the FIFO queue, taking as the unit of length the number of buffer units contained in each address; reading valid buffer units occupied by the cell; if the valid buffer units do not cross an address boundary, updating the enqueue sub-pointer of the FIFO queue and releasing the idle address; if the valid buffer units occupied by the cell cross an address boundary, updating the tail pointer and the enqueue sub-pointer of the FIFO queue.
  4. The method according to claim 2, wherein after scheduling the cell to be dequeued to dequeue, the method further comprises:
    correcting the first back-pressure count value according to the actual number of splicing units occupied by the cell to be dequeued and the number of buffer units occupied by the cell to be dequeued.
  5. The method according to claim 1 or 2, wherein storing the cell to be dequeued, by way of cell splicing, into the register whose bit width equals the bus bit width comprises:
    looking up a write pointer; storing the cell, according to the actual number of splicing units occupied by the cell to be dequeued, into a register corresponding to the write pointer; if the register contains a valid cell, splicing the cell to be dequeued with the valid cell in units of splicing units, recording cell splicing information, and updating the write pointer.
  6. A data caching device, the device comprising a cache module and a processing module, wherein
    the cache module is configured to store a cell into a corresponding FIFO queue according to an input port number of the cell;
    the processing module is configured to determine, in a current cycle K, that a cell to be dequeued can dequeue, schedule the cell to be dequeued to dequeue, acquire an actual number of splicing units occupied by the cell to be dequeued, and store the cell to be dequeued, by way of cell splicing, into a register whose bit width equals a bus bit width, for data transfer;
    wherein it is determined that the cell to be dequeued can dequeue when a first back-pressure count value of cycle K-1 is less than or equal to a first preset threshold; the first back-pressure count value of cycle K-1 is obtained from an estimated number of splicing units occupied by a previously dequeued cell, a number of splicing units the bus can transfer per cycle, and the first back-pressure count value of cycle K-2; K is a positive integer.
  7. The device according to claim 6, further comprising an acquisition module configured to extract cell length information and cell version information carried in the cell, and to obtain, according to the cell length information and the cell version information, a number of buffer units occupied by the cell.
  8. The device according to claim 7, further comprising a correction module configured to correct the first back-pressure count value according to the actual number of splicing units occupied by the cell to be dequeued and the number of buffer units occupied by the cell to be dequeued.
  9. A data caching method, the method comprising:
    restoring data spliced by way of cell splicing into independent cells;
    storing a cell into a corresponding FIFO queue according to an output port number of the cell;
    determining, in a current cycle K, that a cell to be dequeued can dequeue, and scheduling the cell to be dequeued to dequeue;
    wherein it is determined that the cell to be dequeued can dequeue when a second back-pressure count value of cycle K-1 is less than or equal to a second preset threshold; the second back-pressure count value of cycle K-1 is obtained from an estimated number of splicing units occupied by a previously dequeued cell, a number of splicing units the bus can transfer per cycle, and the second back-pressure count value of cycle K-2; K is a positive integer.
  10. The method according to claim 9, wherein restoring data spliced by way of cell splicing into independent cells comprises:
    restoring the spliced data into independent cells according to cell splicing information carried in the data.
  11. A data caching device, the device comprising a restoration module, a storage module and a scheduling module, wherein
    the restoration module is configured to restore data spliced by way of cell splicing into independent cells;
    the storage module is configured to store a cell into a corresponding FIFO queue according to an output port number of the cell;
    the scheduling module is configured to determine, in a current cycle K, that a cell to be dequeued can dequeue, and to schedule the cell to be dequeued to dequeue;
    wherein it is determined that the cell to be dequeued can dequeue when a second back-pressure count value of cycle K-1 is less than or equal to a second preset threshold; the second back-pressure count value of cycle K-1 is obtained from an estimated number of splicing units occupied by a previously dequeued cell, a number of splicing units the bus can transfer per cycle, and the second back-pressure count value of cycle K-2; K is a positive integer.
  12. A computer storage medium storing computer-executable instructions for performing the data caching method according to any one of claims 1 to 5.
  13. A computer storage medium storing computer-executable instructions for performing the data caching method according to any one of claims 9 to 10.

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP15850914.1A EP3206123B1 (en) 2014-10-14 2015-04-28 Data caching method and device, and storage medium
US15/519,073 US10205673B2 (en) 2014-10-14 2015-04-28 Data caching method and device, and storage medium
JP2017520382A JP6340481B2 (ja) 2014-10-14 2015-04-28 データキャッシング方法、装置及び記憶媒体

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410542710.7A CN105573711B (zh) 2014-10-14 2014-10-14 一种数据缓存方法及装置
CN201410542710.7 2014-10-14

Publications (1)

Publication Number Publication Date
WO2016058355A1 true WO2016058355A1 (zh) 2016-04-21




Also Published As

Publication number Publication date
EP3206123B1 (en) 2020-04-29
JP2017532908A (ja) 2017-11-02
CN105573711A (zh) 2016-05-11
CN105573711B (zh) 2019-07-19
EP3206123A1 (en) 2017-08-16
EP3206123A4 (en) 2017-10-04
US10205673B2 (en) 2019-02-12
JP6340481B2 (ja) 2018-06-06
US20170237677A1 (en) 2017-08-17

