WO2006084417A1 - Buffer management method based on a bitmap table - Google Patents

Buffer management method based on a bitmap table

Info

Publication number
WO2006084417A1
WO2006084417A1 PCT/CN2005/002220
Authority
WO
WIPO (PCT)
Prior art keywords
address
cache
area
bitmap
fifo
Prior art date
Application number
PCT/CN2005/002220
Other languages
English (en)
French (fr)
Inventor
Jingjie Cui
Yu Lin
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. filed Critical Huawei Technologies Co., Ltd.
Publication of WO2006084417A1 publication Critical patent/WO2006084417A1/zh
Priority to US11/773,733 priority Critical patent/US7733892B2/en

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/0223User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F12/023Free address space management

Definitions

  • the present invention relates to the field of network communication technologies, and in particular, to a cache management method based on a bitmap table.
  • the data-frame forwarding process of a shared-memory store-and-forward switching chip mainly includes three steps: receiving and buffering the data frame, forwarding the data, and sending the data frame and reclaiming its buffer.
  • Receiving and buffering the data frame: data frames entering the chip from an external port are buffered into the shared cache 2 via the input interface 1; the shared cache 2 is generally RAM (random access memory).
  • the allocation of data frames within the shared cache 2 is managed by the cache management 4 through cache address pointers.
  • during subsequent forwarding, the body of the data frame always remains in the shared cache 2; only the cache address pointer is passed around the chip until the forwarding instruction reaches the output interface 3, which reads the data frame from the shared cache 2 according to the cache address pointer and sends it to the external port.
  • Data forwarding: when the data frame is written to the shared cache 2, the input interface 1 extracts the forwarding information from the frame, and this information is sent to the data forwarding channel 5 together with the frame's cache address pointer.
  • the data forwarding channel 5 performs a forwarding search operation on the data frame according to the forwarding information it receives, obtains the destination port of the data frame, and transmits the buffer address pointer together with the destination port information to the output interface 3.
  • the output interface 3 reads the data frame from the shared cache 2 according to the buffer address pointer transmitted from the data forwarding channel 5, and sends the data frame to the external output port according to the destination port information of the data frame.
  • Sending the data frame and reclaiming the buffer: the cache management 4 then reclaims the corresponding cache.
  • Method 1: a FIFO (first-in, first-out) queue is used to allocate and reclaim the cache address pointers of the shared cache.
  • a FIFO queue is used to store all free cache address pointers of the shared cache.
  • Each FIFO unit in the FIFO queue stores a free cache address pointer.
  • the depth of the FIFO queue should equal the total number of cache units in the shared cache, so that when all cache units are idle the FIFO can hold the cache address pointers of every unit. The "FIFO read address" indicates the first available free cache address pointer in the shared cache, and the "FIFO write address" indicates the FIFO unit in which a reclaimed cache address pointer should be stored.
  • when the cache management 4 allocates a buffer for a data frame, a free cache address pointer is read out according to the "FIFO read address" and the number of free pointers in the FIFO queue decreases by one; when a data frame stored in the cache has been sent to the external port and its cache address pointer must be reclaimed, the pointer is written into the FIFO unit indicated by the "FIFO write address" and the number of free pointers increases by one.
  • for example, suppose the next allocatable cache address pointer indicated by the "FIFO read address" is stored in the FIFO unit at queue address 90, and the "FIFO write address" indicates that the next reclaimed pointer should be written to the FIFO unit at queue address 56. The total number of cache address pointers stored in the FIFO queue is then 8K - (90 - 56).
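For illustration only (not part of the patent text), the Method 1 pointer pool described above can be sketched in software; the class and method names below are assumptions:

```python
from collections import deque

class FifoPointerPool:
    """Sketch of Method 1: one FIFO queue holding every free cache
    address pointer of the shared cache (hypothetical helper)."""

    def __init__(self, total_units):
        # At reset every cache unit is free, so the FIFO is "full"
        # and holds all pointers; read and write addresses coincide.
        self.fifo = deque(range(total_units))

    def allocate(self):
        # One FIFO access yields one free pointer ("FIFO read address").
        return self.fifo.popleft()

    def reclaim(self, ptr):
        # The reclaimed pointer is written at the "FIFO write address".
        self.fifo.append(ptr)

pool = FifoPointerPool(8 * 1024)
p = pool.allocate()
pool.reclaim(p)
# With read address 90 and write address 56 in an 8K-deep queue,
# the number of stored pointers is 8K - (90 - 56).
```

A single queue access allocates or reclaims one pointer, matching the one-access-per-operation behavior described above.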
  • with this method, a single access to the FIFO queue allocates or reclaims one cache address pointer, so the allocation/reclamation capability is strong and the shared cache is managed efficiently.
  • however, the width and depth of the FIFO queue grow sharply as the number of cache units of the shared cache increases, and so does the RAM consumed per managed cache unit; the method is therefore unsuitable for switching chips whose shared cache has a large storage space.
  • Method 2: a Bitmap table combined with a FIFO is used to allocate and reclaim the cache address pointers. The Bitmap table is a two-dimensional table in which each cache address pointer corresponds to 1 bit.
  • the Bitmap table can be implemented using RAM.
  • the usual mapping between cache address pointers and the Bitmap table is: the upper n bits of the pointer form the row address of the Bitmap table, and the lower m bits form the column address. When the bit selected by the row and column addresses is 1, the corresponding cache address pointer is occupied; when it is 0, the pointer is free. Searching the Bitmap table directly for a free cache address pointer can take many clock cycles, so a small FIFO queue is used in combination to manage the pointers.
  • specifically, some free cache address pointers are pre-stored in the FIFO, and two thresholds are set: a search threshold (low) and a reclaim threshold (high). When the number of free pointers stored in the FIFO exceeds the reclaim threshold, pointers above the threshold are returned to the Bitmap table and their corresponding bits are set to 0; when it falls below the search threshold, free pointers are automatically searched out of the Bitmap table, stored in the FIFO, and their corresponding bits are set to 1.
  • in the initial state, the total number of cache units in the shared cache is 8K and the Bitmap table is (2⁸ × 2⁵) bits. Since cache address pointers 0-31 are pre-stored in the FIFO queue, their corresponding bits in the Bitmap table are set to "1".
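The pointer-to-bitmap mapping for this 8K-unit example can be illustrated with a short sketch (helper names are assumptions, not from the patent): a 13-bit pointer whose upper 8 bits select one of 2⁸ rows and whose lower 5 bits select one of 2⁵ columns.

```python
ROW_BITS, COL_BITS = 8, 5  # 2^8 rows x 2^5 columns = 8K bits

def ptr_to_row_col(ptr):
    # Upper 8 bits -> row address, lower 5 bits -> column address.
    return ptr >> COL_BITS, ptr & ((1 << COL_BITS) - 1)

def row_col_to_ptr(row, col):
    # Recombine row and column addresses into the 13-bit pointer.
    return (row << COL_BITS) | col

row, col = ptr_to_row_col(0b1011001110101)
assert row_col_to_ptr(row, col) == 0b1011001110101
```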
  • the buffer address pointer corresponding to the bit of the "1" bitmap table is occupied, and the buffer address pointer corresponding to the bit of "0" is not occupied.
  • a cache address pointer being occupied here means either that the pointer is a free pointer currently stored in the FIFO, or that the cache unit corresponding to the pointer is occupied by a data frame.
  • under normal conditions this method can allocate or reclaim one cache address pointer per clock cycle, which alleviates the slowness of searching the Bitmap table directly for free pointers. However, if the number of consecutively allocated or reclaimed pointers exceeds the FIFO depth and drives the FIFO to its limit state, the Bitmap table must be read and written directly, which can greatly reduce the access efficiency of the cache address pointers.
  • when this method is used to manage the shared cache, the RAM consumed per cache unit is about 1 bit and does not change with the number of cache units, so resource consumption is small. However, the time taken to search for free cache address pointers is unpredictable; especially when the FIFO is in its limit state, the ability to allocate and reclaim pointers is poor and the management efficiency of the shared cache drops sharply. This method is likewise unsuitable for switching chips with a large switching bandwidth.
  • to maximize cache management efficiency while consuming as few resources as possible, the present invention provides a bitmap-table-based cache management method, which includes:
  • a. dividing the bitmap table into a plurality of regions each including at least 1 bit;
  • b. determining and recording the idle state of each region according to the unoccupied state of the bits in that region;
  • c. managing the cache address pointers according to the recorded idle state of each region. Wherein:
  • the bitmap table is divided into a plurality of regions in units of rows of the bitmap table.
  • the idle state of each area is determined and recorded according to the unoccupied state of the bits in each area as follows:
  • the bitmap table is an n-row table;
  • the bitmap table is divided into n regions in units of rows of the bitmap table; and a row address of the bitmap table is an address of each region.
  • an area including at least one bit of an unoccupied state is determined as a free area.
  • the address of the free area is stored in a first in first out queue or stack.
  • suppose the bitmap table is an n-row table; the depth of the first-in-first-out queue is then n and its width is m, where m = log₂n when log₂n is an integer and m = int(log₂n) + 1 otherwise; n and m are positive integers.
  • the read address of the FIFO queue indicates the address of the next free area stored in the FIFO queue
  • the write address of the first-in-first-out queue indicates the storage unit of the queue in which the next free-area address will be stored.
  • the initial state of the first-in-first-out queue is the full state, storing the address of every area.
  • the cache address pointers are managed according to the recorded idle states of the areas as follows: read one stored free-area address; determine from it the corresponding region of the bitmap table, and determine the cache address pointer of a free cache unit from an unoccupied bit in that region; allocate the pointer and set its bit in the bitmap table to the occupied state; if the region no longer contains an unoccupied bit, delete its stored address.
  • the method further includes: setting the bit corresponding to a reclaimed cache address pointer to the unoccupied state; and, upon determining that the region contains the predetermined number of unoccupied bits and that its address is not stored, storing the address of the region.
  • the method further includes: when a cache address pointer needs to be allocated and reclaimed at the same time, directly allocating the reclaimed cache address pointer.
  • the present invention divides the bitmap table into several regions and stores the address of the free area.
  • when a cache address pointer is allocated, as long as a free cache unit exists in the cache, at least one free cache address pointer can certainly be obtained from a stored free-area address, so the whole allocation process is fixed and easy to control. When RAM is used to implement the FIFO that stores the free-area addresses, managing each cache unit consumes about 1 bit of RAM — little resource, and essentially independent of the number of cache units. The present invention completes one cache-address reclamation on average every 3 clock cycles and, when the RAM has one read port and one write port, one pointer allocation on average every 2 clock cycles, so allocation and reclamation are efficient. The technical solution provided by the present invention thus improves the controllability of the cache management process and, while consuming as few resources as possible, maximizes cache management efficiency, making the invention well suited to switching chips with a large switching bandwidth.
  • FIG. 1 is a schematic diagram of the overall framework of a shared-memory store-and-forward switching chip
  • FIG. 2 is a schematic diagram of an initial state of a FIFO queue in the prior art
  • FIG. 3 is a schematic diagram of a FIFO queue in a normal operation process in the prior art
  • FIG. 4 is a schematic diagram of an initial state of a Bitmap table in the prior art
  • FIG. 5 is a schematic diagram of a Bitmap table in a normal operation process in the prior art
  • FIG. 6 is a schematic diagram showing an initial state of a Bitmap table of the present invention.
  • Figure 7 is a schematic diagram showing the initial state of the FIFO queue of the present invention.
  • Figure 8 is timing diagram 1 of allocating a cache address pointer according to the present invention.
  • FIG. 9 is a timing diagram of reclaiming a cache address pointer according to the present invention.
  • Figure 10 is timing diagram 2 of allocating a cache address pointer according to the present invention.
  • the core of the present invention is: dividing the bitmap table into several regions each including at least one bit, determining and storing the idle state of each region according to the unoccupied state of the bits in that region, and managing the cache address pointers according to the stored idle states of the regions.
  • the cache management method of the present invention is based on a Bitmap table. Therefore, the present invention is applicable to any Bitmap table used when managing a cache.
  • the cache in the present invention includes a shared cache.
  • the present invention first divides the Bitmap table into a plurality of regions. Then, according to the unoccupied state of the bits contained in each region, it determines whether the region is a free region and records the free regions, e.g. by storing their addresses. When a cache address pointer is to be allocated, the stored address of a free region leads to the corresponding region of the Bitmap table, where at least one free cache address pointer is certain to be obtained.
  • the set of stored free regions should change dynamically as the occupancy of the bits in each region changes during pointer allocation and reclamation, guaranteeing both that every stored free region contains at least one unoccupied bit and that every region containing the predetermined number of unoccupied bits is stored.
  • taking division of the Bitmap table with one row as the unit as an example, the region division of the Bitmap table is described in detail below.
  • suppose the shared cache has 8K cache units and its Bitmap table is a two-dimensional table of (2⁸ × 2⁵) bits. Dividing the Bitmap table with one row as the unit yields 2⁸ regions, and the address of each region can be taken as the corresponding row address of the Bitmap table.
  • if the number of unoccupied bits contained in a region reaches the predetermined number, the region is determined to be a free region and its address, i.e. the corresponding row address of the Bitmap table, is stored. The address of a free region can be stored in a FIFO queue or on a stack.
  • the predetermined number can be as small as 1 and should be smaller than the total number of columns of the Bitmap table; it can be chosen according to the actual needs of the communication system.
  • to distinguish it from the FIFO queue of the prior art, the FIFO queue storing the addresses of the free regions is called the row-idle indication FIFO in this embodiment.
  • if the Bitmap table corresponding to the shared cache with 8K cache units is (2⁸ × 2⁵) bits and the row-idle indication FIFO stores the free-region addresses, the depth of the FIFO queue should be 2⁸ and its width should be 8 bits.
  • the "FIFO read address" of the row-idle indication FIFO indicates the address of the next region stored in the queue that contains at least one unoccupied bit, i.e. the corresponding row address of the Bitmap table; the "FIFO write address" indicates the FIFO unit in which a row address should be stored when that row of the Bitmap table changes from containing no unoccupied bits to containing at least one.
  • on system reset, no data frame is stored in the cache and every bit of the Bitmap table is unoccupied; the row-idle indication FIFO then holds the row addresses of all rows of the Bitmap table. It stores 256 row addresses and is in the "full" state, with its "FIFO read address" and "FIFO write address" both equal to 0. Since the FIFO depth equals the number of rows of the Bitmap table, the row-idle indication FIFO can never overflow.
  • when a cache address pointer must be allocated for a data frame, a row address is read from the row-idle indication FIFO according to the "FIFO read address", and the corresponding row of the Bitmap table is searched for an unoccupied bit. The unoccupied bit obtained determines one free cache address pointer; the data frame is stored in the cache at that pointer, the bit is changed from the unoccupied to the occupied state, and the row is checked for at least one remaining unoccupied bit. If the row still contains an unoccupied bit, the region is still a free region, its address stays in the row-idle indication FIFO, the "FIFO read address" does not change, and this allocation of a cache address pointer ends. If all bits of the row are now occupied, the region is no longer a free region, its address should no longer be stored in the row-idle indication FIFO, the "FIFO read address" is incremented by one, and this allocation ends.
  • when a data frame has been sent to the external port and its cache address pointer must be reclaimed, the bit of the Bitmap table corresponding to the pointer is located, and it is determined whether all bits in that row are occupied. If they are, then after the bit corresponding to the reclaimed pointer is changed from the occupied to the unoccupied state, the row address must be stored in the row-idle indication FIFO: it is written into the FIFO unit indicated by the "FIFO write address", the "FIFO write address" is incremented by one, and this reclamation of a cache address pointer ends. If not all bits in the row are occupied, the bit is simply changed to the unoccupied state and this reclamation ends.
  • if a cache address pointer must be allocated at the same moment one must be reclaimed (when a data frame is sent to the external port), the pointer being reclaimed can be allocated directly, with no operation on the row-idle indication FIFO or the Bitmap table, further improving the allocation and reclamation efficiency of the cache address pointers.
  • when the row-idle indication FIFO is implemented with 1RW RAM (one read/write port), allocation proceeds as in Figure 8: in clock cycle 1, if the FIFO is non-empty (i.e. free cache units exist), a row address is read out according to the "FIFO read address" and, after the RAM response time, is valid in clock cycle 2. In clock cycle 3, the Bitmap table is accessed with this row address and the corresponding row of information is read.
  • in this embodiment the Bitmap table is implemented with RAM and has 256 rows and 32 columns, so the row of information read from the Bitmap table is 32 bits; after the RAM response time, the row read is valid in clock cycle 4.
  • a bit of unoccupied status is searched from the read line of information, and a free buffer address pointer corresponding to the bit is determined.
  • in this embodiment the cache address pointer is 13 bits and is determined by the row and column addresses of the Bitmap table: the upper 8 bits of the pointer are the row address and the lower 5 bits are the column address.
  • in clock cycle 6, the unoccupied bit found in the row information is changed to the occupied state, and the modified row information is written back to the corresponding row of the Bitmap table.
  • it is then determined whether every bit of the written-back row information is occupied. If so, all cache address pointers of that row of the Bitmap table have been allocated, the row address should no longer be stored in the row-idle indication FIFO, and the FIFO's "FIFO read address" is incremented by one to point to the next valid row-address entry. If the written-back information still contains an unoccupied bit, the row still contains at least one free pointer, the row address should remain stored in the row-idle indication FIFO, and the "FIFO read address" is not operated on.
  • if the row-idle indication FIFO is 1RW RAM, reclamation proceeds as in Figure 9: in clock cycle 1, the upper 8 bits of the reclaimed pointer are used as the read address to read a row of information from the Bitmap table; after the RAM response time, the row read is valid in clock cycle 2. In clock cycle 3, the lower 5 bits of the reclaimed pointer are used as the column address, the corresponding bit of the row information is changed to the unoccupied state, and the modified row information is written back to the corresponding row of the Bitmap table.
  • if every bit of the row read from the Bitmap table was occupied, all pointers of that row had been allocated before this reclamation and the row address was not stored in the row-idle indication FIFO; after reclamation the row contains 1 unoccupied bit, so the row address is stored into the FIFO unit indicated by the "FIFO write address", which is then incremented by one. If the row information read from the Bitmap table already contained an unoccupied bit, not all pointers of that row were allocated before this reclamation, the row address is already stored in the row-idle indication FIFO, and the "FIFO write address" is not operated on.
  • as is evident from the above, the present invention reclaims one free cache address pointer every 3 clock cycles on average, maximizing the reclamation efficiency of the cache address pointers at low resource cost.
  • if the row-idle indication FIFO is implemented with 1R1W RAM (one read port and one write port) and stores at least 3 row addresses, the search for cache address pointers can be pipelined, further increasing the allocation rate. The specific allocation process is described below with reference to FIG. 10.
  • in clock cycles 1, 2 and 3, three row addresses are read consecutively from the row-idle indication FIFO according to its "FIFO read address", the address being incremented by one for each read; after the RAM response time, they are valid in clock cycles 2, 3 and 4 respectively. In clock cycles 3, 4 and 5 respectively, the Bitmap table is accessed with each row address read, and the corresponding rows of information are read consecutively. In this embodiment the Bitmap table is implemented with RAM and has 2⁸ rows and 32 columns, so each of the three rows read is 32 bits; after the RAM response time, the three rows of information are valid in clock cycles 4, 5 and 6 respectively.
  • in this embodiment the cache address pointer is 13 bits and is determined by the row and column addresses of the Bitmap table: the upper 8 bits of the pointer are the row address and the lower 5 bits are the column address.
  • the unoccupied bits in the three rows of information are modified to the occupied state, and then the modified three rows of information are respectively written back to the row corresponding to the Bitmap table.
  • as is evident from the above, the present invention can allocate one free cache address pointer every 2 clock cycles, meeting that design requirement; with low resource consumption it maximizes the allocation efficiency of the cache address pointers and improves the management capability of the cache.
  • in the above embodiment the addresses of the free regions are stored in a FIFO queue; they can also be stored on a stack. When a stack is used to implement the cache management, the principle and process are basically the same as described above and are not detailed in this embodiment.
  • the above embodiment describes a Bitmap table of 2ⁿ rows; the Bitmap table may also have n rows, where n is a positive integer. The Bitmap table is then divided into n regions, and the width m of the FIFO queue is: m = log₂n when log₂n is an integer, and m = int(log₂n) + 1 when log₂n is not an integer.
  • other implementation processes are basically the same as described above and are not described in more detail in this embodiment.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Information Transfer Systems (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Description

A Bitmap-Table-Based Cache Management Method
Technical Field
The present invention relates to the field of network communication technologies, and in particular to a cache management method based on a bitmap table.
Background Art
The overall framework of a shared-memory store-and-forward switching chip is shown in Figure 1.
The data-frame forwarding process of such a chip mainly includes three steps: receiving and buffering the data frame, forwarding the data, and sending the data frame and reclaiming its buffer.
Receiving and buffering the data frame: data frames entering the chip from an external port are buffered into the shared cache 2 via the input interface 1; the shared cache 2 is generally RAM (random access memory). The allocation of data frames within the shared cache 2 is managed by the cache management 4 through cache address pointers. During subsequent forwarding, the body of the data frame always remains in the shared cache 2; only the cache address pointer is passed around the chip until the forwarding instruction reaches the output interface 3, which reads the data frame from the shared cache 2 according to the pointer and sends it to the external port.
Data forwarding: when the data frame is written to the shared cache 2, the input interface 1 extracts the forwarding information from the frame; this information is sent, together with the frame's cache address pointer, to the data forwarding channel 5, which performs a forwarding lookup to obtain the frame's destination port and passes the pointer and the destination-port information on to the output interface 3.
Sending the data frame and reclaiming the buffer: the output interface 3 reads the data frame from the shared cache 2 according to the cache address pointer received from the data forwarding channel 5 and sends it to the external output port according to the frame's destination-port information. The cache management 4 then reclaims the corresponding cache.
In the above process, the cache management 4 manages the allocation and reclamation of the shared cache's cache address pointers mainly in one of two ways:
Method 1: a FIFO (first-in, first-out) queue is used to allocate and reclaim the cache address pointers of the shared cache.
Specifically, one FIFO queue holds all free cache address pointers of the shared cache, each FIFO unit storing one free pointer. The depth of the FIFO queue should equal the total number of cache units in the shared cache, so that when every cache unit is idle the queue can hold the pointers of all of them. The "FIFO read address" indicates the first available free cache address pointer in the shared cache, and the "FIFO write address" indicates the FIFO unit in which a reclaimed pointer should be stored. When the cache management 4 allocates a pointer for a data frame, a free pointer is read out according to the "FIFO read address" and the number of free pointers in the queue decreases by one; when a data frame stored in the cache is sent to the external port and its pointer must be reclaimed, the pointer is written into the FIFO unit indicated by the "FIFO write address" and the number of free pointers increases by one.
The FIFO queue can be implemented with RAM. If the total number of cache units in the shared cache is 2^m, then m × 2^m bits of RAM are needed for the FIFO queue; that is, for 8K cache units the FIFO queue requires 13 bit × 8K = 104 Kbits of RAM, an average of 13 bits of RAM per cache unit.
On system reset, the initial state of the FIFO queue is as shown in Figure 2. In Figure 2, with the total number of cache units set to 8K, no cache unit of the shared cache holds a data frame at reset, so the FIFO queue holds all 8K cache address pointers; the FIFO is in the "full" state and the "FIFO read address" and "FIFO write address" are equal, both 0.
A FIFO queue in normal operation is shown in Figure 3. In Figure 3, the next allocatable pointer indicated by the "FIFO read address" is stored in the FIFO unit at queue address 90, and the "FIFO write address" indicates that the next reclaimed pointer should be written to the FIFO unit at queue address 56; the total number of pointers stored in the queue is then 8K - (90 - 56).
With this method, one access to the FIFO queue allocates or reclaims one cache address pointer, so the allocation/reclamation capability is strong and the shared cache is managed efficiently. However, the width and depth of the FIFO queue grow sharply with the number of cache units, and so does the RAM consumed per managed cache unit, so the method is unsuitable for switching chips whose shared cache has a large storage space.
Method 2: a Bitmap table combined with a FIFO is used to allocate and reclaim the cache address pointers of the shared cache.
The Bitmap table is a two-dimensional table in which each cache address pointer corresponds to 1 bit; the Bitmap table can be implemented with RAM.
The usual mapping between cache address pointers and the Bitmap table is: the upper n bits of the pointer form the row address and the lower m bits form the column address. When the bit selected by the row and column addresses is 1, the corresponding pointer is occupied; when it is 0, the pointer is free. Searching the Bitmap table directly for a free pointer may take many clock cycles, so a small FIFO queue is used in combination to manage the pointers.
Specifically, some free pointers of the shared cache are pre-stored in the FIFO, and two thresholds are set: a search threshold (low) and a reclaim threshold (high). When the number of free pointers stored in the FIFO exceeds the reclaim threshold, pointers above it are returned to the Bitmap table and their corresponding bits are set to 0; when it falls below the search threshold, free pointers are automatically searched out of the Bitmap table, stored in the FIFO, and their corresponding bits are set to 1.
With this management method, the initial state of the Bitmap table on system reset is shown in Figure 4. In Figure 4, the shared cache has 8K cache units and the Bitmap table is (2⁸ × 2⁵) bits; since pointers 0-31 are stored in the FIFO queue, their corresponding bits in the Bitmap table are set to "1".
A Bitmap table in normal operation is shown in Figure 5. In Figure 5, a bit of "1" means the corresponding pointer is occupied and a bit of "0" that it is not. Here, an occupied pointer is either a free pointer currently stored in the FIFO, or a pointer whose cache unit is occupied by a data frame.
To allocate a pointer, one is read from the FIFO according to the "FIFO read address" and the number of pointers stored in the FIFO decreases by one; to reclaim a pointer, it is written into the FIFO according to the "FIFO write address" and the number increases by one.
Under normal conditions the method allocates or reclaims one pointer per clock cycle, alleviating the slowness of searching the Bitmap table directly for free pointers. But if the number of consecutively allocated or reclaimed pointers exceeds the FIFO depth and drives the FIFO to its limit state, the Bitmap table must be read and written directly, which can greatly reduce the access efficiency of the pointers.
With this method the RAM consumed per managed cache unit is about 1 bit and does not vary with the number of units, so resource consumption is small; but the time to search for free pointers is unpredictable, and in the FIFO's limit state the allocation/reclamation capability is poor and the management efficiency of the shared cache drops sharply. This method is likewise unsuitable for switching chips with a large switching bandwidth.
Summary of the Invention
The object of the present invention is to provide a bitmap-table-based cache management method that maximizes cache management efficiency while consuming as few resources as possible.
To achieve this object, the bitmap-table-based cache management method provided by the present invention includes:
a. dividing the bitmap table into a plurality of regions each including at least 1 bit;
b. determining and recording the idle state of each region according to the unoccupied state of the bits in that region;
c. managing the cache address pointers according to the recorded idle state of each region. Wherein:
The bitmap table is divided into the plurality of regions in units of rows of the bitmap table.
The idle state of each region is determined and recorded according to the unoccupied state of its bits as follows:
b1. determining the address of each region;
b2. determining as a free region any region containing at least a predetermined number of unoccupied bits;
b3. storing the addresses of the free regions.
The bitmap table is an n-row table; it is divided into n regions in units of rows of the bitmap table, and the row addresses of the bitmap table are the addresses of the regions. Furthermore, a region containing at least 1 unoccupied bit is determined to be a free region.
The addresses of the free regions are stored in a first-in-first-out queue or a stack.
Here, with the bitmap table set as an n-row table, the depth of the first-in-first-out queue is n and its width is m, where m = log₂n when log₂n is an integer and m = int(log₂n) + 1 when log₂n is not an integer; n and m are positive integers.
The read address of the first-in-first-out queue indicates the address of the next free region stored in the queue; the write address of the queue indicates the storage unit of the queue in which the next free-region address will be stored; the initial state of the queue is the full state, storing the address of every region. The cache address pointers are managed according to the recorded idle states of the regions as follows:
reading one stored free-region address;
determining from the read address the corresponding region of the bitmap table, and determining the cache address pointer of a free cache unit from an unoccupied bit in that region;
allocating the cache address pointer and setting its bit in the bitmap table to the occupied state; and deleting the stored address of the region when it no longer contains an unoccupied bit.
The method further includes:
setting the bit in the bitmap table corresponding to a reclaimed cache address pointer to the unoccupied state;
and, upon determining that the region contains the predetermined number of unoccupied bits and that its address is not stored, storing the address of the region.
The method further includes: when a cache address pointer must be allocated and reclaimed at the same time, directly allocating the reclaimed pointer.
As is evident from the above description of the technical solution, by dividing the bitmap table into several regions and storing the addresses of the free regions, the present invention guarantees that, as long as a free cache unit exists in the cache, at least one free cache address pointer can be obtained from a stored free-region address, making the whole pointer-allocation process fixed and easy to control. When RAM is used to implement the FIFO storing the free-region addresses, managing each cache unit consumes about 1 bit of RAM — little resource, and essentially independent of the number of cache units. The invention completes one cache-address reclamation on average every 3 clock cycles and, when the RAM has one read port and one write port, one pointer allocation on average every 2 clock cycles, so pointers are allocated and reclaimed efficiently. The technical solution provided by the present invention thus improves the controllability of the cache management process and, while consuming as few resources as possible, maximizes cache management efficiency, making the invention well suited to switching chips with a large switching bandwidth.
Brief Description of the Drawings
Figure 1 is a schematic diagram of the overall framework of a shared-memory store-and-forward switching chip;
Figure 2 is a schematic diagram of the initial state of the FIFO queue in the prior art;
Figure 3 is a schematic diagram of a FIFO queue in normal operation in the prior art;
Figure 4 is a schematic diagram of the initial state of the Bitmap table in the prior art;
Figure 5 is a schematic diagram of a Bitmap table in normal operation in the prior art;
Figure 6 is a schematic diagram of the initial state of the Bitmap table of the present invention;
Figure 7 is a schematic diagram of the initial state of the FIFO queue of the present invention;
Figure 8 is timing diagram 1 of allocating a cache address pointer according to the present invention;
Figure 9 is a timing diagram of reclaiming a cache address pointer according to the present invention;
Figure 10 is timing diagram 2 of allocating a cache address pointer according to the present invention.
Detailed Description
The core of the present invention is: dividing the bitmap table into several regions each including at least 1 bit, determining and storing the idle state of each region according to the unoccupied state of its bits, and managing the cache address pointers according to the stored idle states of the regions. The technical solution provided by the present invention is further described below on the basis of this core idea.
The cache management method of the present invention is based on a Bitmap table, so the invention is applicable wherever a Bitmap table is used in managing a cache. The cache in the present invention includes a shared cache.
The present invention first divides the Bitmap table into a plurality of regions and then, according to the unoccupied state of the bits contained in each region, determines whether the region is a free region and records the free regions, e.g. by storing their addresses. When a cache address pointer is to be allocated, the stored address of a free region leads to the corresponding region of the Bitmap table, where at least one free pointer is certain to be obtained. The set of stored free regions should change dynamically as the occupancy of the bits in each region changes during pointer allocation and reclamation, guaranteeing both that every stored free region contains an unoccupied bit and that every region containing the predetermined number of unoccupied bits is stored.
Region division of the Bitmap table is described in detail below, taking division with one row as the unit as an example.
Suppose the shared cache has 8K cache units and its Bitmap table is a two-dimensional table of (2⁸ × 2⁵) bits. Dividing the Bitmap table with one row as the unit yields 2⁸ regions, and the address of each region can be taken as the corresponding row address of the Bitmap table.
If the number of unoccupied bits contained in a region reaches the predetermined number, the region is determined to be a free region and its address, i.e. the corresponding row address of the Bitmap table, is stored. Free-region addresses can be stored in a FIFO queue or on a stack.
The predetermined number can be as small as 1 and should be smaller than the total number of columns of the Bitmap table; it can be chosen according to the actual needs of the communication system.
To distinguish it from the prior-art FIFO queue, the FIFO queue storing the free-region addresses is called the row-idle indication FIFO in this embodiment.
If the Bitmap table corresponding to the 8K-unit shared cache is (2⁸ × 2⁵) bits and the row-idle indication FIFO stores the free-region addresses, the depth of the FIFO queue should be 2⁸ and its width should be 8 bits.
If the row-idle indication FIFO is implemented with RAM and the shared cache has 2^x cache units, the RAM consumed per managed cache unit is (2^x + (x-5) × 2^(x-5)) / 2^x = 1 + (x-5)/32, i.e. about 1 bit — little resource. When the number of cache units increases substantially, the per-unit RAM changes very little and can be regarded as essentially independent of the number of cache units.
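The per-unit RAM formula above follows from adding the 2^x-bit Bitmap table to a row-idle FIFO of depth 2^(x-5) (one entry per 32-column row) and width x-5 bits; it can be verified numerically (function name assumed):

```python
# Per-unit RAM for 2^x cache units:
# bitmap = 2^x bits, row-idle FIFO = (x-5) * 2^(x-5) bits,
# giving (2^x + (x-5)*2^(x-5)) / 2^x = 1 + (x-5)/32 bits per unit.
def ram_per_unit(x):
    total = 2 ** x + (x - 5) * 2 ** (x - 5)
    return total / 2 ** x

assert ram_per_unit(13) == 1 + (13 - 5) / 32   # 8K units -> 1.25 bits/unit
```

The per-unit cost grows only by 1/32 bit each time the cache doubles, which is why it is "essentially independent" of the number of cache units.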
为方便描述, 下面以预定个数为 1、 空闲区域地址存储在行空闲指示 FIFO队列为例, 对本发明的緩存管理的过程进行说明。
行空闲指示 FIFO的 "FIFO读地址" 指示队列中存储的下一个至少包 含 1个未占用状态的比特位的区域的地址, 即 Bitmap表中相应的行地址; "FIFO写地址" 指示当 Bitmap表中某行由不包括未占用状态的比特位, 而转变为至少包含 1个未占用状态的比特位时, 该行对应的区域的地址即 该行的行地址应存储的 FIFO单元。
在系统复位时, 緩存中未存储任何数据帧, Bitmap表中所有的比特位 都为未占用状态,如附图 6所示。此时,行空闲指示 FIFO中应保存 Bitmap 表中所有行的行地址, 如附图 Ί所示, 行空闲指示 FIFO中保存了 256个 行地址, 行空闲指示 FIFO处于 "满" 状态, 行空闲指示 FIFO的 "FIFO 读地址" 与 "FIFO写地址" 相同, 均为 0。 由于 FIFO的深度为 Bitmap 表的行数, 所以行空闲指示 FIFO不会发生溢出现象。
在需要为数据帧分配緩存地址指针时, 根据 "FIFO读地址"从行空闲 指示 FIFO中读取一个行地址,根据该行地址到 Bitmap表相应的行中去查找 获取未占用状态的比特位, 从获取的该未占用状态的比特位能够确定一个 空闲緩存地址指针, 将数据帧根据该緩存地址指针存储在緩存中, Bitmap 表中该未占用状态的比特位的状态应由未占用状态转变为占用状态, 并判 断该比特位所在的行是否还至少包含一个未占用状态的比特位, 如果该行 至少包含一个未占用状态的比特位, 说明该区域仍然为空闲区域, 其地址 仍然应存储在行空闲指示 FIFO中, 行空闲指示 FIFO的 "FIFO读地址" 不 发生改变, 本次缓存地址指针的分配过程结束; 如果该行中所有的比特位 都为占用状态, 则说明该区域不再为空闲区域, 该区域的地址不应该再存 储在行空闲指示 FIFO中, "FIFO读地址"加一。 本次緩存地址指针的分配 过程结束。
在数据帧发送到外部端口, 需要回收緩存地址指针时, 根据该需要回 收的緩存地址指针确定该指针对应的 Bitmap表中的比特位 , 判断该比特位 所在的行中的所有比特位是否都为占用状态, 如果该行中所有的比特位都 为占用状态, 则在将回收的緩存地址指针对应的比特位的占用状态转变为 未占用状态后, 需要将该比特位所在行的行地址存储在行空闲指示 FIFO 中,根据 "FIFO写地址"将该行的行地址存储在对应的 FIFO单元中, "FIFO 写地址" 加一, 本次緩存地址指针的回收过程结束; 如果该比特位所在的 行中的所有比特位不都为占用状态, 则直接将该比特位的占用状态转变为 未占用状态, 本次緩存地址指针的回收过程结束。
If, when a data frame is sent to the external port and its cache address pointer needs to be recovered, a cache address pointer also needs to be allocated, the pointer to be recovered can be allocated directly, with no operation performed on the row free indication FIFO or the Bitmap table, which further improves the efficiency of cache address pointer allocation and recovery.
When the row free indication FIFO of the present invention is implemented in RAM, and the RAM is a 1RW RAM (one read/write port), the specific implementation of the above allocation of cache address pointers is described below with reference to Fig. 8.
In Fig. 8, in clock cycle 1, if the row free indication FIFO is not empty, i.e. there are free cache units in the cache, a row address is read from the row free indication FIFO according to the "FIFO read address"; after the response time of the RAM, the row address read out is valid in clock cycle 2.
In clock cycle 3, the Bitmap table is accessed with that row address, and the corresponding row of information in the Bitmap table is read out. In this embodiment the Bitmap table is implemented in RAM and has 256 rows and 32 columns, so the row of information read from the Bitmap table is 32 bits; after the response time of the RAM, the row of information read out is valid in clock cycle 4.
In clock cycle 5, an unoccupied bit is searched for in the row of information read out, and the free cache address pointer corresponding to that bit is determined. In this embodiment the cache address pointer is 13 bits and can be determined from the row address and column address of the Bitmap table: for example, the high 8 bits of the cache address pointer are the row address of the Bitmap table, and the low 5 bits are the column address of the Bitmap table.
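The composition of the 13-bit pointer from an 8-bit row address and a 5-bit column address can be sketched as follows; the helper names are illustrative, not part of the embodiment:

```python
def make_pointer(row: int, col: int) -> int:
    """13-bit cache address pointer: high 8 bits = row, low 5 bits = column."""
    assert 0 <= row < 256 and 0 <= col < 32
    return (row << 5) | col

def split_pointer(ptr: int):
    """Recover (row, column) from a 13-bit cache address pointer."""
    return ptr >> 5, ptr & 0x1F
```

This makes the pointer itself a direct index into both the shared cache and the Bitmap table, so no translation table is needed.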
In clock cycle 6, the unoccupied bit found in the row of information is changed to the occupied state, and the modified row of information is written back to the corresponding row of the Bitmap table.
It is then judged whether all the bits of the row of information written back are occupied. If they are all occupied, all the cache address pointers corresponding to the bits of that row of the Bitmap table have been allocated, the row address should no longer be stored in the row free indication FIFO, and the "FIFO read address" of the row free indication FIFO is incremented by one to point at the next valid row address index. If the row of information written back still contains unoccupied bits, the row of the Bitmap table contains at least one unoccupied bit, the row address should remain stored in the row free indication FIFO, no operation is performed on the "FIFO read address" of the row free indication FIFO, and the row address remains stored in the row free indication FIFO.
If the row free indication FIFO is a 1RW RAM, the specific implementation of the above recovery of cache address pointers is shown in Fig. 9:
In Fig. 9, in clock cycle 1, a row of information is read from the Bitmap table using the high 8 bits of the cache address pointer to be recovered as the read address; after the response time of the RAM, the row of information read out is valid in clock cycle 2.
In clock cycle 3, using the low 5 bits of the recovered cache address pointer as the column address, the corresponding bit of the row of information read out is changed to the unoccupied state, and the modified row of information is written back to the corresponding row of the Bitmap table.
If all the bits of the row of information read from the Bitmap table are occupied, all the cache address pointers corresponding to the bits of that row of the Bitmap table were allocated before this cache pointer was recovered, and the row address is not stored in the row free indication FIFO. After the cache address pointer is recovered, the row contains one unoccupied bit, so the row address should be stored in the row free indication FIFO: the row address is stored in the FIFO unit indicated by the "FIFO write address", and the "FIFO write address" is incremented by one. If the row of information read from the Bitmap table contains unoccupied bits, then not all the cache address pointers corresponding to the bits of that row of the Bitmap table were allocated before this cache pointer was recovered, the row address is already stored in the row free indication FIFO, and no operation is performed on the "FIFO write address" of the row free indication FIFO.
As is evident from the above description of Fig. 9, the present invention can recover a free cache address pointer every 3 clock cycles on average, maximizing the efficiency of cache address pointer recovery while consuming few resources.
If the row free indication FIFO is implemented with a 1R1W RAM (one read port and one write port), and the row free indication FIFO stores at least 3 row addresses, the search for cache address pointers can be pipelined, further increasing the rate of cache address pointer allocation. The specific implementation of cache address pointer allocation is described below with reference to Fig. 10.
In Fig. 10, in clock cycles 1, 2 and 3, three row addresses are read consecutively from the row free indication FIFO according to its "FIFO read address"; after the response time of the RAM, the row addresses read out are valid in clock cycles 2, 3 and 4 respectively. During the consecutive reads of row addresses, the "FIFO read address" is incremented by one for each read operation performed.
In clock cycles 3, 4 and 5 respectively, the Bitmap table is accessed with the corresponding row address read out, and the rows of information in the Bitmap table corresponding to the three row addresses are read out consecutively. In this embodiment the Bitmap table is implemented in RAM and has 2^8 rows and 32 columns, so each of the three rows of information read from the Bitmap table is 32 bits; after the response time of the RAM, the three rows of information read out are valid in clock cycles 4, 5 and 6 respectively.
In clock cycles 5, 6 and 7, an unoccupied bit is searched for in each of the three rows of information read out, and the free cache address pointers corresponding to the three bits are determined. In this embodiment the cache address pointer is 13 bits and can be determined from the row address and column address of the Bitmap table: for example, the high 8 bits of the cache address pointer are the row address of the Bitmap table, and the low 5 bits are the column address of the Bitmap table.
In clock cycles 6, 7 and 8, the unoccupied bit found in each of the three rows of information is changed to the occupied state, and the modified rows of information are written back to the corresponding rows of the Bitmap table.
For each of the three rows of information written back, it is judged whether all of its bits are occupied. If they are all occupied, all the cache address pointers corresponding to the bits of that row of the Bitmap table have been allocated, the row address should no longer be stored in the row free indication FIFO, and the "FIFO read address" of the row free indication FIFO is incremented by one to point at the next valid row address index. If a row of information written back still contains unoccupied bits, the row of the Bitmap table contains at least one unoccupied bit, the row address should remain stored in the row free indication FIFO, no operation is performed on the "FIFO read address" of the row free indication FIFO, and the row address remains stored in the row free indication FIFO.
As is evident from the above description of Fig. 10, the present invention can allocate a free cache address pointer every 2 clock cycles on average, meeting the design requirement of allocating one cache address pointer every 2 clock cycles, maximizing the efficiency of cache address pointer allocation while consuming few resources, and improving the cache management capability.
The above embodiments are described with the addresses of free regions stored in a FIFO queue; the addresses of free regions may equally be stored in a stack. When a stack is used to implement the cache management, the implementation principle and process are essentially the same as described above and are not described in detail in this embodiment.
In the above embodiments the Bitmap table is described as having 2^n rows; the Bitmap table may also have n rows, where n is a positive integer. In that case the Bitmap table should be divided into n regions, and the width m of the FIFO queue is: m = log2 n when log2 n is an integer, and m = int(log2 n) + 1 when log2 n is not an integer. The other implementation processes are essentially the same as described above and are not described in detail in this embodiment.
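The width rule for an n-row table amounts to taking the ceiling of log2 n; a small sketch (function name assumed) makes the two cases explicit:

```python
import math

def fifo_width(n: int) -> int:
    """Width m of the row free indication FIFO for an n-row bitmap table:
    m = log2(n) when log2(n) is an integer, otherwise int(log2(n)) + 1."""
    lg = math.log2(n)
    return int(lg) if lg.is_integer() else int(lg) + 1
```

For the 2^8-row embodiment this gives a width of 8 bits, matching the FIFO dimensions stated earlier.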
Although the present invention has been described by way of embodiments, those of ordinary skill in the art will appreciate that the invention admits many variations and modifications without departing from its spirit, and it is intended that the appended claims cover such variations and modifications.

Claims

1. A bitmap-table-based cache management method, characterized by comprising:
a. dividing a bitmap table into several regions each comprising at least 1 bit;
b. determining and recording a free state of each region according to the unoccupied state of the bits in each region;
c. managing cache address pointers according to the recorded free state of each region.
2. The bitmap-table-based cache management method according to claim 1, characterized in that:
the bitmap table is divided into several regions taking a row of the bitmap table as the unit.
3. The bitmap-table-based cache management method according to claim 1 or 2, characterized in that the free state of each region is determined and recorded according to the unoccupied state of the bits in each region by the following steps:
b1. determining the address of each region;
b2. determining a region containing at least a predetermined number of unoccupied bits to be a free region;
b3. storing the address of the free region.
4. The bitmap-table-based cache management method according to claim 3, characterized in that:
the bitmap table is a table of n rows; the bitmap table is divided into n regions taking a row of the bitmap table as the unit; and the row addresses of the bitmap table are the addresses of the regions.
5. The bitmap-table-based cache management method according to claim 3, characterized in that:
a region containing at least 1 unoccupied bit is determined to be a free region.
6. The bitmap-table-based cache management method according to claim 3, characterized in that:
the address of the free region is stored in a first-in-first-out queue or a stack.
7. The bitmap-table-based cache management method according to claim 6, characterized in that:
the bitmap table is a table of n rows; the depth of the first-in-first-out queue is n and its width is m, where m = log2 n when log2 n is an integer and m = int(log2 n) + 1 when log2 n is not an integer, n and m being positive integers;
the read address of the first-in-first-out queue indicates the address of the next free region stored in the first-in-first-out queue;
the write address of the first-in-first-out queue indicates the storage unit of the first-in-first-out queue in which the address of the next free region is to be stored;
the initial state of the first-in-first-out queue is the full state in which the addresses of all regions are stored.
8. The bitmap-table-based cache management method according to claim 3, characterized in that cache address pointers are managed according to the recorded free state of each region by the following steps:
reading one of the stored free region addresses;
determining the region of the bitmap table corresponding to the read address, and determining the cache address pointer of a free cache unit according to an unoccupied bit in that region;
allocating the cache address pointer, and setting the bit in the bitmap table corresponding to the cache address pointer to the occupied state;
deleting the stored address of the region when it is determined that the number of unoccupied bits contained in the region has not reached the predetermined number.
9. The bitmap-table-based cache management method according to claim 3, characterized by further comprising:
setting the bit in the bitmap table corresponding to a recovered cache address pointer to the unoccupied state;
storing the address of the region when it is determined that the number of unoccupied bits contained in the region reaches the predetermined number and the address of the region is not stored.
10. The bitmap-table-based cache management method according to claim 3, characterized by further comprising:
directly allocating the recovered cache address pointer when a cache address pointer needs to be allocated and recovered at the same time.
PCT/CN2005/002220 2005-01-05 2005-12-16 Méthode de gestion de tampon basée sur une table de bitmap WO2006084417A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/773,733 US7733892B2 (en) 2005-01-05 2007-07-05 Buffer management method based on a bitmap table

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN200510000145.2 2005-01-05
CNB2005100001452A CN100449504C (zh) 2005-01-05 2005-01-05 一种基于bitmap表的缓存管理方法

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US11/773,733 Continuation US7733892B2 (en) 2005-01-05 2007-07-05 Buffer management method based on a bitmap table

Publications (1)

Publication Number Publication Date
WO2006084417A1 true WO2006084417A1 (fr) 2006-08-17

Family

ID=36792892

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2005/002220 WO2006084417A1 (fr) 2005-01-05 2005-12-16 Méthode de gestion de tampon basée sur une table de bitmap

Country Status (3)

Country Link
US (1) US7733892B2 (zh)
CN (1) CN100449504C (zh)
WO (1) WO2006084417A1 (zh)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101488919B (zh) 2009-02-13 2011-07-06 华为技术有限公司 存储地址分配方法和装置
CN101848135B (zh) * 2009-03-24 2011-12-28 华为技术有限公司 芯片的统计数据的管理方法和装置
CN101551736B (zh) 2009-05-20 2010-11-03 杭州华三通信技术有限公司 基于地址指针链表的缓存管理装置和方法
CN102196573A (zh) 2010-03-10 2011-09-21 中兴通讯股份有限公司 Pucch的无线资源分配方法及无线资源管理器
CN102411543B (zh) * 2011-11-21 2014-12-03 华为技术有限公司 缓存地址的处理方法和装置
WO2013155673A1 (zh) * 2012-04-17 2013-10-24 中兴通讯股份有限公司 片内共享缓存的管理方法及装置
CN102662868B (zh) * 2012-05-02 2015-08-19 中国科学院计算技术研究所 用于处理器的动态组相联高速缓存装置及其访问方法
CN103856445B (zh) * 2012-11-30 2018-10-16 北京北广科技股份有限公司 基于udp的语音数据业务的数据传输方法、装置和系统
CN103605478B (zh) * 2013-05-17 2016-12-28 华为技术有限公司 存储地址标示、配置方法和数据存取方法及系统
CN104133784B (zh) * 2014-07-24 2017-08-29 大唐移动通信设备有限公司 一种报文缓存管理方法与装置
US10136384B1 (en) * 2014-10-14 2018-11-20 Altera Corporation Methods and apparatus for performing buffer fill level controlled dynamic power scaling
CN104484129A (zh) * 2014-12-05 2015-04-01 盛科网络(苏州)有限公司 一读一写存储器、多读多写存储器及其读写方法
DE102015209486A1 (de) * 2015-05-22 2016-11-24 Robert Bosch Gmbh FIFO Speicher mit im Betrieb veränderbarem Speicherbereich
CN106250321B (zh) 2016-07-28 2019-03-01 盛科网络(苏州)有限公司 2r1w存储器的数据处理方法及数据处理系统
US11080255B2 (en) * 2018-07-09 2021-08-03 Oracle International Corporation Space-efficient bookkeeping for database applications
CN109284234B (zh) * 2018-09-05 2020-12-04 珠海昇生微电子有限责任公司 一种存储地址分配方法及系统
CN112835834B (zh) * 2019-11-25 2024-03-19 瑞昱半导体股份有限公司 数据传输系统
CN113535633A (zh) * 2020-04-17 2021-10-22 深圳市中兴微电子技术有限公司 一种片上缓存装置和读写方法

Citations (3)

Publication number Priority date Publication date Assignee Title
EP0859534A2 (en) * 1997-02-12 1998-08-19 Kabushiki Kaisha Toshiba ATM switch
CN1552023A (zh) * 2001-07-05 2004-12-01 松下电器产业株式会社 记录设备、介质、方法及相关的计算机程序
CN1661568A (zh) * 2004-02-24 2005-08-31 中国科学院声学研究所 一种嵌入式环境下音像录放装置的文件系统

Family Cites Families (21)

Publication number Priority date Publication date Assignee Title
US5005167A (en) * 1989-02-03 1991-04-02 Bell Communications Research, Inc. Multicast packet switching method
US5371885A (en) * 1989-08-29 1994-12-06 Microsoft Corporation High performance file system
US5068892A (en) * 1989-10-31 1991-11-26 At&T Bell Laboratories Route based network management
US5535197A (en) * 1991-09-26 1996-07-09 Ipc Information Systems, Inc. Shared buffer switching module
JPH07321815A (ja) * 1994-05-24 1995-12-08 Nec Corp 共有バッファ型atmスイッチおよびその同報制御方法
US5659794A (en) * 1995-03-31 1997-08-19 Unisys Corporation System architecture for improved network input/output processing
US5781549A (en) * 1996-02-23 1998-07-14 Allied Telesyn International Corp. Method and apparatus for switching data packets in a data network
US6175900B1 (en) * 1998-02-09 2001-01-16 Microsoft Corporation Hierarchical bitmap-based memory manager
JP3451424B2 (ja) * 1998-03-13 2003-09-29 富士通株式会社 共通バッファメモリ制御装置
US6310875B1 (en) * 1998-03-30 2001-10-30 Nortel Networks Limited Method and apparatus for port memory multicast common memory switches
US6657959B1 (en) * 1998-06-27 2003-12-02 Intel Corporation Systems and methods for implementing ABR with guaranteed MCR
US7210001B2 (en) * 1999-03-03 2007-04-24 Adaptec, Inc. Methods of and apparatus for efficient buffer cache utilization
TW445730B (en) * 1999-11-30 2001-07-11 Via Tech Inc Output queuing scheme for forwarding packets in sequence
US6658437B1 (en) * 2000-06-05 2003-12-02 International Business Machines Corporation System and method for data space allocation using optimized bit representation
TW513635B (en) * 2000-11-24 2002-12-11 Ibm Method and structure for variable-length frame support in a shared memory switch
JP2002281080A (ja) * 2001-03-19 2002-09-27 Fujitsu Ltd パケットスイッチ装置およびマルチキャスト送出方法
US7031331B2 (en) * 2001-08-15 2006-04-18 Riverstone Networks, Inc. Method and system for managing packets in a shared memory buffer that serves multiple output links
JP3698079B2 (ja) * 2001-08-22 2005-09-21 日本電気株式会社 データ転送方法、データ転送装置及びプログラム
US7417986B1 (en) * 2001-09-04 2008-08-26 Cisco Technology, Inc. Shared buffer switch interface
TW580619B (en) * 2002-04-03 2004-03-21 Via Tech Inc Buffer control device and the management method
US7003597B2 (en) * 2003-07-09 2006-02-21 International Business Machines Corporation Dynamic reallocation of data stored in buffers based on packet size

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
EP0859534A2 (en) * 1997-02-12 1998-08-19 Kabushiki Kaisha Toshiba ATM switch
CN1552023A (zh) * 2001-07-05 2004-12-01 松下电器产业株式会社 记录设备、介质、方法及相关的计算机程序
CN1661568A (zh) * 2004-02-24 2005-08-31 中国科学院声学研究所 一种嵌入式环境下音像录放装置的文件系统

Also Published As

Publication number Publication date
US7733892B2 (en) 2010-06-08
CN100449504C (zh) 2009-01-07
CN1819544A (zh) 2006-08-16
US20070274303A1 (en) 2007-11-29

Similar Documents

Publication Publication Date Title
WO2006084417A1 (fr) Méthode de gestion de tampon basée sur une table de bitmap
CN104090847B (zh) 一种固态存储设备的地址分配方法
US7315550B2 (en) Method and apparatus for shared buffer packet switching
US9769081B2 (en) Buffer manager and methods for managing memory
US20110252215A1 (en) Computer memory with dynamic cell density
US9841913B2 (en) System and method for enabling high read rates to data element lists
US20090187681A1 (en) Buffer controller and management method thereof
CN101231619A (zh) 一种基于非连续页的动态内存管理方法
US20010007565A1 (en) Packet receiving method on a network with parallel and multiplexing capability
US8281103B2 (en) Method and apparatus for allocating storage addresses
US20070086428A1 (en) Network packet storage method and network packet transmitting apparatus using the same
US11425057B2 (en) Packet processing
WO2022062524A1 (zh) 内存管理方法、装置、设备和存储介质
CN101499956A (zh) 分级缓冲区管理系统及方法
US20170017423A1 (en) System And Method For Enabling High Read Rates To Data Element Lists
US7035988B1 (en) Hardware implementation of an N-way dynamic linked list
JPH11143779A (ja) 仮想記憶装置におけるページング処理システム
EP1471430B1 (en) Stream memory manager
US20160232125A1 (en) Storage apparatus and method for processing plurality of pieces of client data
US8812782B2 (en) Memory management system and memory management method
JP2004527024A (ja) 多重チャネルを有するデータメモリアクセス用のスケジューラ
CN113821191A (zh) 一种可配置fifo深度的装置及方法
US8645597B2 (en) Memory block reclaiming judging apparatus and memory block managing system
US9116814B1 (en) Use of cache to reduce memory bandwidth pressure with processing pipeline
EP4290386A1 (en) Packet cache system and method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 11773733

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

WWP Wipo information: published in national office

Ref document number: 11773733

Country of ref document: US

122 Ep: pct application non-entry in european phase

Ref document number: 05818659

Country of ref document: EP

Kind code of ref document: A1