US9026735B1 - Method and apparatus for automated division of a multi-buffer - Google Patents

Method and apparatus for automated division of a multi-buffer

Info

Publication number
US9026735B1
Authority
US
United States
Prior art keywords
buffer
address
memory
divider
memory utilization
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US13/678,304
Inventor
Ruven Torok
Oren Shafrir
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Marvell Israel MISL Ltd
Original Assignee
Marvell Israel MISL Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Marvell Israel MISL Ltd filed Critical Marvell Israel MISL Ltd
Priority to US13/678,304
Assigned to MARVELL ISRAEL (M.I.S.L) LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: SHAFRIR, OREN; TOROK, RUVEN
Application granted
Publication of US9026735B1
Active legal-status Critical Current
Adjusted expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 5/00 Methods or arrangements for data conversion without changing the order or content of the data handled
    • G06F 5/06 Methods or arrangements for data conversion without changing the order or content of the data handled for changing the speed of data flow, i.e. speed regularising or timing, e.g. delay lines, FIFO buffers; over- or underrun control therefor
    • G06F 5/065 Partitioned buffers, e.g. allowing multiple independent queues, bidirectional FIFO's
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/0223 User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F 12/023 Free address space management
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0806 Multiuser, multiprocessor or multiprocessing cache systems
    • G06F 12/084 Multiuser, multiprocessor or multiprocessing cache systems with a shared cache
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0866 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
    • G06F 12/0871 Allocation or management of cache space
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2205/00 Indexing scheme relating to group G06F5/00; Methods or arrangements for data conversion without changing the order or content of the data handled
    • G06F 2205/06 Indexing scheme relating to groups G06F5/06 - G06F5/16
    • G06F 2205/063 Dynamically variable buffer size

Definitions

  • the technology described herein relates generally to buffer management and more particularly to on-the-fly buffer partitioning.
  • a data buffer is a region in a memory, such as Random Access Memory (RAM) or cache memory, which can be used to temporarily hold data while that data is being moved from one location to another or awaiting consumption. Buffers are often used when there is a difference between the rate at which data is received and the rate at which that data can be processed. Buffers can also be implemented in systems where data processing is asynchronous, where delays may be present between data receipt and data processing.
  • a system includes a buffer memory comprising a shared memory space, where the memory space is shared between at least a first buffer and a second buffer, and where a dynamic delineation of the memory space between the first buffer and the second buffer is identified by a divider address.
  • a dynamic buffer control circuit includes a control memory that is configured to store the divider address, a first memory utilization metric associated with the first buffer, and a second memory utilization metric associated with the second buffer.
  • a system further includes one or more comparator circuits configured to compare the first memory utilization metric and the second memory utilization metric, where the dynamic buffer control circuit changes the divider address based on the comparison.
  • a method includes determining a first memory utilization of a first buffer among a plurality of buffers sharing a memory space and determining a second memory utilization of a second buffer among the plurality of buffers.
  • the memory space is repartitioned by redefining a division of memory space between the first buffer and the second buffer based on the first memory utilization and the second memory utilization.
  • a network data transport device includes a plurality of network ports and a hardware-implemented multi-buffer.
  • the hardware-implemented multi-buffer includes a buffer memory comprising a shared memory space, where the memory space is shared between at least a first buffer and a second buffer, and where a dynamic delineation of the memory space between the first buffer and the second buffer is identified by a divider address.
  • a dynamic buffer control circuit includes a control memory that is configured to store the divider address, a first memory utilization metric associated with the first buffer, and a second memory utilization metric associated with the second buffer.
  • the transport device further includes one or more comparator circuits configured to compare the first memory utilization metric and the second memory utilization metric, where the dynamic buffer control circuit changes the divider address based on the comparison.
  • FIGS. 1A and 1B are diagrams depicting performance of a dynamic repartitioning of a memory space that includes a plurality of buffers in accordance with an embodiment of the disclosure.
  • FIG. 2 is a block diagram depicting a hardware-implemented multi-buffer in accordance with an embodiment of the disclosure.
  • FIG. 3 is a block diagram depicting internal hardware components of a hardware-implemented multi-buffer in accordance with an embodiment of the disclosure.
  • FIG. 4 is a diagram depicting relative positions of head pointers, tail pointers, and a divider address in accordance with an embodiment of the disclosure.
  • FIG. 5 is a block diagram depicting example divider address moving control logic in accordance with an embodiment of the disclosure.
  • FIG. 6 is a block diagram depicting control logic that includes handling of wrap-around conditions in accordance with an embodiment of the disclosure.
  • FIG. 7 is a block diagram depicting control logic periodically adjusting a divider address in accordance with an embodiment of the disclosure.
  • FIG. 8 is a block diagram depicting a receiver that includes a dynamic FIFO control in accordance with an embodiment of the disclosure.
  • FIG. 9 is a flow diagram depicting steps of a method in accordance with an embodiment of the disclosure.
  • FIGS. 1A and 1B are diagrams depicting performance of a dynamic repartitioning of a memory space that includes a plurality of buffers in accordance with an embodiment of the disclosure.
  • a shared memory space 102 includes a plurality of buffers that include a first buffer 104 and a second buffer 106 .
  • the shared memory space 102 is divided into the plurality of buffers 104 , 106 according to one or more dividers, such as the divider identified at 108 .
  • the shared memory space 102 may be repartitioned by redefining the division of memory space between the first buffer 104 and the second buffer 106 , as identified by the position of the divider 108 .
  • Such a repartitioning is depicted in the repositioning of the divider from a first address 110 to a second address 112 .
  • Such repartitioning increases the amount of memory available to the first buffer 104 while reducing the amount of memory available to the second buffer 106 .
  • the white portions of a buffer represent utilized buffer memory, while the shaded areas represent unused buffer memory.
  • the memory space 102 is repartitioned by redefining a division of the memory space 102 , as indicated by the divider 108 , based on the first memory utilization and the second memory utilization.
  • the division of the memory space 102 between the first buffer 104 and the second buffer 106 is defined by the divider 108 , where the divider is associated with an address in the memory space 102 between the first buffer 104 and the second buffer 106 .
  • the division of the memory space 102 is redefined by changing the address of the divider 108 in the memory space 102 , such as from address 110 to address 112 , as depicted in FIGS. 1A and 1B .
  • the amount of memory allocated to each of the first buffer 104 and the second buffer 106 changes, so that the first buffer 104 is allocated more memory while the second buffer 106 is allocated less memory in FIG. 1B after movement of the divider 108 from address 110 to address 112 .
  • additional buffers may be encoded on the shared memory space 102 , where the boundaries of those additional buffers are delineated by additional dividers associated with memory addresses.
  • the memory space 102 is encoded with a third buffer among the plurality of buffers.
  • a third memory utilization is determined for the third buffer.
  • a division of the memory space 102 between the second buffer 106 and the third buffer is redefined by changing a second divider associated with an address of the memory space 102 between the second buffer 106 and the third buffer. The address of the second divider is changed based on the determined utilization of the second buffer 106 as well as the determined third memory utilization.
  • Dynamic buffer memory allocation can provide advantages over static buffers.
  • Static buffers may be permanently sized or may be sized at the beginning of operations based on expected utilization, for instance.
  • Dynamic buffer memory allocation as depicted in the embodiment of FIGS. 1A and 1B can offer improved performance through adjustment of buffer sizes during operation of those buffers. Buffer sizing can have a significant effect on system performance. When a buffer is filled, that buffer can no longer accept data, slowing the overall rate at which data can be received. When a buffer is not full and can accept more data, that buffer can continue to accept data from a source, regardless of the rate of consumption or processing of the received data by a target.
  • Static buffers, while relatively simple in structure, cannot adapt to address the current state of data processing. When certain buffers are being heavily utilized, while other buffers are being little utilized, the heavily utilized buffers may reach a full state, regardless of the existence of unused buffer memory in the little utilized buffers. While the relative sizes of static buffers can be manually adjusted to address such a scenario when the future buffer utilization is known, such static buffers may be a less than optimal solution when future buffer utilization is unknown. Dynamic buffer memory allocation, as disclosed in certain embodiments herein, enables adaptation to the current buffer usage environment by increasing the amount of memory allocated to heavily utilized buffers to avoid buffer overruns by reducing the memory allocated to buffers that are currently being utilized at low levels of capacity.
  • a buffer for buffering packets from a first flow that is being received at a given point in time faster than the received packets can be re-transmitted, such as during a spike in traffic, could run out of buffer space, resulting in packet loss, without on-the-fly dynamic buffer memory allocation adding memory to the allocation to the filling buffer.
  • the buffer may run empty, resulting in unused memory space that can be reallocated to a buffer that is currently in a higher state of usage.
  • the buffers 104 , 106 of FIGS. 1A and 1B may take a variety of forms.
  • the buffers 104 , 106 may be encoded onto different types of memory spaces, such as a Random Access Memory (RAM) memory space or a cache memory space.
  • in FIGS. 1A and 1B , low memory addresses are depicted at the top of the memory space, with high memory addresses being depicted at the bottom of the memory space.
  • Each of the buffers 104 , 106 is filled from low memory addresses toward high memory addresses.
  • the first buffer 104 is prevented from accessing memory past the divider address 108
  • the second buffer 106 is prevented from accessing memory before the divider address 108 .
  • the buffers may be implemented using a variety of data structures such as a stack, a queue, a cache, a look-up table, as well as others.
  • the buffers 104 , 106 take the form of circular first-in-first-out (FIFO) queues.
  • in such a FIFO buffer, a first data element provided to the FIFO buffer by a data source, such as a packet source, is the first data element provided to the data destination when the data destination, such as a packet destination, is available to accept the first data element.
  • the FIFO buffer may be implemented as a circular buffer, whereupon filling the FIFO buffer to a maximum address in the memory space assigned to the FIFO buffer, the FIFO buffer continues storing additional received data elements at the minimum addresses of the FIFO buffer, provided the data elements from those minimum addresses have been previously transmitted and cleared from the FIFO buffer.
  • Such a configuration can ensure continued operation of the FIFO buffer after filling to the maximum address assigned to the FIFO buffer, as long as sufficient free memory is available at the lower addresses of the FIFO buffer. It is noted that a FIFO buffer can also be filled from high addresses to low addresses, for example.
  • FIG. 2 is a block diagram depicting a hardware-implemented multi-buffer in accordance with one embodiment of the disclosure.
  • the hardware-implemented multi-buffer 202 includes a buffer memory comprising a memory space 204 .
  • the memory space 204 is divided between a first buffer 206 and a second buffer 208 , where a dynamic delineation of the memory space 204 between the first buffer 206 and the second buffer 208 is identified by a divider address 210 .
  • a dynamic buffer control unit 212 controls the relative sizes of the first buffer 206 and the second buffer 208 during operation of the multi-buffer 202 by adjusting the divider address 210 based on monitoring of the utilization of the first buffer 206 and the second buffer 208 , respectively.
  • FIG. 3 is a block diagram depicting internal hardware components of a hardware-implemented multi-buffer in accordance with one embodiment of the disclosure.
  • the dynamic buffer control circuit 302 includes a control memory 304 .
  • the control memory 304 is configured to store a divider address 306 that corresponds to a division between the first buffer and the second buffer in a memory space.
  • the control memory 304 is further configured to store a first memory utilization metric 308 .
  • the first memory utilization metric 308 may be directed to an amount of allocated memory that the first buffer is using.
  • the control memory is also configured to store a second memory utilization metric 310 .
  • the second memory utilization metric 310 is directed to an amount of allocated memory that the second buffer is using.
  • the dynamic buffer control unit 302 further includes one or more comparator circuits 312 that are configured to compare the first memory utilization metric 308 and the second memory utilization metric 310 , where the dynamic buffer control circuit 302 changes the divider address 306 based on that comparison, such as via control logic 314 included in the dynamic buffer control circuit 302 .
  • the first memory utilization metric 308 and the second memory utilization metric 310 can take a variety of forms.
  • the utilization metrics 308 , 310 identify a total amount of memory utilized by the buffers associated with the respective utilization metrics.
  • the relative sizes of the buffers can be adjusted through changing the divider address 306 .
  • Alternative embodiments may adjust the relative sizes of the buffers based on absolute amounts of memory not used, proportional amounts of allocated memory used, or any other suitable measure of utilization.
  • the first memory utilization metric for a first FIFO buffer tracks a first head pointer address and a first tail pointer address associated with the first FIFO buffer.
  • the first tail pointer address points to a data element that has been in the first FIFO buffer the longest, and the first head pointer address points to a data element that has been added to the FIFO buffer most recently.
  • the current memory utilization of the first FIFO buffer can be determined based on the first tail pointer address and the first head pointer address.
  • the memory utilization of a second FIFO buffer can be similarly tracked and determined through storage of a second head pointer address and a second tail pointer address.
  • FIG. 4 is a diagram depicting relative positions of head pointers, tail pointers, and a divider address in accordance with an embodiment of the disclosure.
  • a memory space 402 is encoded with a plurality of FIFO buffers, including FIFO1 404 and FIFO2 406 . Lower addresses of the memory space 402 are depicted at the top of FIG. 4 , while higher addresses are depicted at the bottom of FIG. 4 .
  • An upper tail pointer (UT) 408 is tracked by a dynamic buffer control circuit. UT 408 identifies the location of a data element in FIFO1 404 that has been in FIFO1 404 the longest. The data element at UT 408 will be the next data element to be transmitted from FIFO1 404 .
  • An upper head pointer (UH) 410 is also tracked by the dynamic buffer control circuit and identifies the location of a data element in FIFO1 404 that has most recently been added to FIFO1 404 .
  • the data element at UH 410 is at the end of the FIFO1 404 queue.
  • the memory utilization of FIFO1 404 can be determined by subtracting the UT 408 address from the UH 410 address.
  • Corresponding pointers are tracked for FIFO 2 406 .
  • a lower tail pointer (LT) 412 is tracked by the dynamic buffer control circuit.
  • LT 412 identifies the location of a data element in FIFO2 406 that has been in FIFO2 406 the longest.
  • the data element at LT 412 will be the next data element to be transmitted from FIFO2 406 .
  • a lower head pointer (LH) 414 is also tracked by the dynamic buffer control circuit and identifies the location of a data element in FIFO2 406 that has most recently been added to FIFO2 406 .
  • the data element at LH 414 is at the end of the FIFO2 406 queue.
  • the memory utilization of FIFO2 406 can be determined by subtracting the LT 412 address from the LH 414 address.
  • a divider address 416 delineates the boundary between FIFO1 404 and FIFO2 406 .
  • when inputting data elements into FIFO1, those data elements may not be inputted at a higher address than the divider address 416 , and thus, UH 410 cannot extend below the divider address 416 .
  • when there is no room to place an inputted data element at the end of the FIFO1 404 queue because UH 410 has reached the divider address 416 , the inputted data element can be placed at the low address end of FIFO1 404 (i.e., at the divider between FIFO1 404 and a previous buffer or at the beginning of the set of buffers when FIFO1 404 is the first buffer in the memory space 402 ).
  • UH 410 is moved to point at the newly inputted data element at the low address end of FIFO1 404 , and FIFO1 404 is then in a wrap-around condition where UT 408 is greater than UH 410 .
  • a dynamic buffer control circuit may treat wrap-around conditions as a special case when determining whether to expand or contract allocated buffer memories, as described further below.
  • FIG. 5 is a block diagram depicting example divider address moving control logic in one embodiment of the disclosure.
  • a dynamic buffer control circuit 502 includes a control memory 504 for storage of certain metrics.
  • the stored metrics include a divider address (D) 506 , first memory utilization metrics (UH, UT) 508 associated with a first buffer, and second memory utilization metrics (LH, LT) 510 associated with a second buffer.
  • a set of one or more comparator circuits 512 is configured to make comparisons to determine whether the dynamic buffer control circuit 502 should adjust the divider address 506 between a first buffer and a second buffer to change the amount of memory allocated to those buffers.
  • control logic 514 directs the one or more comparator circuits 512 to perform a determination of whether (D−UH)<(LT−D). In other words, the comparator 512 is tasked with determining whether the unused, higher address portion of the first buffer is less than the unused, lower address portion of the second buffer. If the unused portion of the first buffer is less than the unused portion of the second buffer, then the divider address 506 is moved down, to a higher address, enabling the first buffer to take advantage of some of the unused memory previously allocated to the second buffer.
  • conversely, if the unused portion of the first buffer is greater than the unused portion of the second buffer, then the divider address 506 is moved up, to a lower address, enabling the second buffer to take advantage of some of the unused memory previously allocated to the first buffer.
  • the size of the change in the divider address 506 can be based on the magnitude of the difference between (D−UH) and (LT−D) or may be a predetermined amount.
  • the comparator circuits 512 are configured to make comparisons to determine whether the dynamic buffer control circuit 502 should adjust the divider address 506 based on the total amount of unused space in each buffer.
  • the control logic 514 directs the one or more comparator circuits 512 to perform a determination of whether (FirstFifoSize−(UH−UT))<(SecondFifoSize−(LH−LT)). In other words, the comparator 512 is tasked with determining whether the unused portion of the second buffer is greater than the unused portion of the first buffer.
  • if the unused portion of the second buffer is greater than the unused portion of the first buffer, then the divider address 506 is moved down, to a higher address, enabling the first buffer to take advantage of some of the unused memory previously allocated to the second buffer. If the unused portion of the first buffer is greater than the unused portion of the second buffer, then the divider address 506 is moved up, to a lower address, enabling the second buffer to take advantage of some of the unused memory previously allocated to the first buffer.
  • Divider addresses between other buffers can be similarly managed.
  • the memory space is divided among the first buffer, the second buffer, and a third buffer.
  • the dynamic delineation of the memory space among the first buffer, the second buffer, and the third buffer is identified by the divider address 506 , which identifies the boundary between the first buffer and the second buffer, and a second divider address, which identifies a boundary between the second buffer and the third buffer.
  • the control memory 504 is configured to store the second divider address and one or more third memory utilization metrics associated with the third buffer (e.g., a third buffer head address and a third buffer tail address).
  • the one or more comparator circuits 512 are configured to compare one or more second memory utilization metrics 510 to one or more third memory utilization metrics, and the dynamic buffer control circuit 502 is configured to change the second divider address based on that comparison.
  • FIG. 6 is a block diagram depicting control logic that includes handling of wrap-around conditions in accordance with an embodiment of the disclosure.
  • the control logic 602 of a dynamic buffer control circuit 604 directs that a divider address 606 should not be moved when one of the two buffers whose boundary is identified by the divider address 606 is in a wrap-around condition.
  • a control memory 608 stores the divider address 606 , first memory utilization metrics 610 , and second memory utilization metrics 612 .
  • the control logic 602 directs the comparator circuits 614 to determine whether either of the first buffer or the second buffer is in a wrap-around condition.
  • a wrap-around condition at a buffer can be detected when the head pointer for the buffer is at a lower address than the tail pointer for the buffer. Thus, if the comparator circuits 614 determine that (UH>UT and LH>LT) is false, then the divider address 606 between the first buffer and the second buffer is not to be moved. If neither the first buffer nor the second buffer is in a wrap-around condition, then the control logic continues operations to determine whether the divider address 606 is to be moved.
  • FIG. 7 is a block diagram depicting control logic periodically adjusting a divider address in accordance with an embodiment of the disclosure. Periodic adjustment of memory allocations during operation of a set of buffers enables adjustment of those memory allocations in accordance with current system conditions. Spacing adjustments at least a predetermined period of time apart ensures that changes are not made too rapidly in a manner that may disrupt system operation.
  • a dynamic buffer control circuit 702 periodically adjusts a divider address 704 stored by a control memory 706 according to a timer threshold 708 .
  • the dynamic buffer control circuit 702 includes a timer 710 .
  • Control logic 712 utilizes comparator circuits 714 to determine whether it is time to consider adjusting the divider address 704 .
  • that determination is made by performing a modulo operation on the timer value 710 to determine whether the remainder present after dividing the timer value 710 by the timer threshold value 708 is equal to zero.
  • when it is time to adjust, if (D−UH) is determined by the comparator circuits 714 to be less than (LT−D), the dynamic buffer control circuit 702 moves the divider down by increasing the divider address 704 , and if (D−UH) is determined by the comparator circuits 714 to be greater than (LT−D), the dynamic buffer control circuit 702 moves the divider up by decreasing the divider address 704 .
  • a dynamic buffer control circuit 702 may consider other constraints in determining whether divider addresses 704 should be moved. For example, a dynamic buffer control circuit 702 may consider a minimum buffer size constraint.
  • the control memory 706 stores a minimum size threshold (ST), a first divider address (D) 704 , and a second divider address (SD).
  • the comparator circuits 714 determine whether SD−D<ST, and the dynamic buffer control circuit 702 is configured to prevent change of D or SD when SD−D<ST is true. Similar operations can be performed to prevent any buffer from exceeding a maximum size threshold.
  • a dynamic buffer control circuit can be utilized in a variety of environments.
  • a dynamic buffer control circuit may be utilized in a network receiver that receives data associated with a plurality of ports. Each port is associated with a different application or hardware element at a computing system associated with the receiver.
  • the data handling or consumption rates of the different applications and hardware elements may vary, and the rates at which data is received on the different ports may vary as well, making a set of buffers, one assigned to each receiving port, beneficial in maintaining data flow.
  • FIG. 8 is a block diagram depicting a receiver that includes a dynamic FIFO control, in accordance with an embodiment of the disclosure.
  • a receiver 802 is disposed in a network device 800 such as a switch, server or gateway that is configured to receive data through a number of input ports 801 .
  • the receiver 802 includes a memory space 804 that is divided among a plurality of FIFO buffers. Dynamic delineations of the memory space 804 into the FIFO buffers are identified by divider addresses that are stored and managed by a dynamic FIFO control hardware module 806 .
  • the dynamic FIFO control 806 performs certain operations as described herein above, such as divider adjustments, timer monitoring, wrap-around monitoring, and minimum and maximum buffer size monitoring.
  • data is received at the receiver 802 , where the received data includes an indication of the port to which that data is destined.
  • the receiver 802 stores that received data in a FIFO buffer that is associated with the port identified by the received data.
  • a write control circuit 808 directs the received data to an address in the memory space 804 that is associated with the correct FIFO buffer for the identified port. Specifically, the write control circuit 808 writes the data to the address that follows the current head pointer address of the correct FIFO buffer, increases that head pointer address, and informs the dynamic FIFO control 806 of the head pointer address change, so that the dynamic FIFO control 806 can adjust the associated buffer utilization metrics accordingly.
  • the write control circuit 808 receives the divider addresses from the dynamic FIFO control 806 , so that data is not written to the correct FIFO at an address that is beyond the divider address that identifies the end of the memory allocation of the correct FIFO. If writing to the next address of the correct FIFO would run beyond that divider address, then the write control circuit 808 is configured to write the received data at the beginning of the correct FIFO, resulting in a wrap-around condition.
  • the receiver 802 also includes a read control circuit 810 that controls access to data stored in the memory space 804 by applications or hardware of the computer system associated with the receiver 802 .
  • the read control circuit 810 tracks tail pointers for each of the FIFO buffers indicating the data element that has been in those buffers the longest and is next to be outputted from the receiver 802 .
  • the read control circuit 810 accesses data at the tail address associated with the buffer for that port, and the read control circuit 810 outputs the accessed data to the requesting application or hardware via output ports 812 .
  • the read control circuit 810 further increments the tail address for that buffer, as long as such an increment would not pass a divider address, provided by the dynamic FIFO control 806 , associated with that buffer. If such a divider address would be exceeded, the read control circuit resets the tail address to the top (e.g., lowest address) of the buffer, ending a wrap-around condition for that buffer. The read control circuit 810 provides the updated tail address to the dynamic FIFO control 806 so that the appropriate buffer utilization metrics can be updated (an illustrative software sketch of this write/read wrap-around handling appears after this list).
  • FIG. 9 is a flow diagram depicting steps of a method in accordance with an embodiment of the disclosure.
  • a determination is made of a first memory utilization of a first buffer among a plurality of buffers sharing a memory space.
  • a second memory utilization is determined for a second buffer among the plurality of buffers.
  • the memory space is repartitioned by redefining a division of the memory space between the first buffer and the second buffer based on the first memory utilization and the second memory utilization.
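The write-control and read-control behavior described for FIG. 8 can be approximated in software as follows. This is a simplified, illustrative sketch only: the names rx_fifo_t, rx_write, and rx_read are hypothetical, the per-port partition layout and word-sized elements are assumptions, and the actual device performs these steps in hardware circuits.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_PORTS 4
#define MEM_WORDS 4096

/* Hypothetical per-port FIFO descriptor inside a shared memory space.
 * base and divider bound the partition, and the head and tail pointers are
 * what the write and read control report to the dynamic FIFO control. */
typedef struct {
    uint32_t base, divider;   /* partition bounds: [base, divider)     */
    uint32_t head, tail;      /* next write slot / oldest stored word  */
    uint32_t count;           /* occupancy, used to detect full/empty  */
} rx_fifo_t;

static uint32_t shared_mem[MEM_WORDS];
static rx_fifo_t fifos[NUM_PORTS];   /* assumed to be initialized elsewhere
                                        with each port's base/divider/head/tail */

/* Write control: store a word in the FIFO of the destination port, wrapping
 * the head back to the partition base rather than writing past the divider. */
static bool rx_write(unsigned port, uint32_t word)
{
    rx_fifo_t *f = &fifos[port];
    if (f->count >= f->divider - f->base)
        return false;                               /* buffer full */
    shared_mem[f->head] = word;
    f->head = (f->head + 1 >= f->divider) ? f->base : f->head + 1;
    f->count++;        /* the head pointer change would be reported to the
                          dynamic FIFO control at this point */
    return true;
}

/* Read control: return the oldest word for the port and advance the tail,
 * wrapping to the partition base when the divider would be passed. */
static bool rx_read(unsigned port, uint32_t *word)
{
    rx_fifo_t *f = &fifos[port];
    if (f->count == 0)
        return false;                               /* buffer empty */
    *word = shared_mem[f->tail];
    f->tail = (f->tail + 1 >= f->divider) ? f->base : f->tail + 1;
    f->count--;        /* the tail pointer change would be reported to the
                          dynamic FIFO control at this point */
    return true;
}
```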

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System (AREA)

Abstract

Systems and methods are provided for a hardware-implemented multi-buffer. A system includes a buffer memory comprising a shared memory space, where the memory space is shared between a first buffer and a second buffer, and where a dynamic delineation of the memory space between the first buffer and the second buffer is identified by a divider address. A dynamic buffer control circuit includes a control memory that is configured to store the divider address, a first memory utilization metric associated with the first buffer, and a second memory utilization metric associated with the second buffer. A system further includes one or more comparator circuits configured to compare the first memory utilization metric and the second memory utilization metric, where the dynamic buffer control circuit changes the divider address based on the comparison.

Description

CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to U.S. Provisional Patent Application No. 61/561,383, filed Nov. 18, 2011, entitled “Auto Division of Multi FIFO,” which is herein incorporated in its entirety.
FIELD
The technology described herein relates generally to buffer management and more particularly to on-the-fly buffer partitioning.
BACKGROUND
A data buffer is a region in a memory, such as Random Access Memory (RAM) or cache memory, which can be used to temporarily hold data while that data is being moved from one location to another or awaiting consumption. Buffers are often used when there is a difference between the rate at which data is received and the rate at which that data can be processed. Buffers can also be implemented in systems where data processing is asynchronous, where delays may be present between data receipt and data processing.
The description above is presented as a general overview of related art in this field and should not be construed as an admission that any of the information it contains constitutes prior art against the present patent application.
SUMMARY
Examples of systems and methods are provided for a hardware-implemented multi-buffer. A system includes a buffer memory comprising a shared memory space, where the memory space is shared between at least a first buffer and a second buffer, and where a dynamic delineation of the memory space between the first buffer and the second buffer is identified by a divider address. A dynamic buffer control circuit includes a control memory that is configured to store the divider address, a first memory utilization metric associated with the first buffer, and a second memory utilization metric associated with the second buffer. A system further includes one or more comparator circuits configured to compare the first memory utilization metric and the second memory utilization metric, where the dynamic buffer control circuit changes the divider address based on the comparison.
As another example, a method includes determining a first memory utilization of a first buffer among a plurality of buffers sharing a memory space and determining a second memory utilization of a second buffer among the plurality of buffers. The memory space is repartitioned by redefining a division of memory space between the first buffer and the second buffer based on the first memory utilization and the second memory utilization.
As a further example, a network data transport device includes a plurality of network ports and a hardware-implemented multi-buffer. The hardware-implemented multi-buffer includes a buffer memory comprising a shared memory space, where the memory space is shared between at least a first buffer and a second buffer, and where a dynamic delineation of the memory space between the first buffer and the second buffer is identified by a divider address. A dynamic buffer control circuit includes a control memory that is configured to store the divider address, a first memory utilization metric associated with the first buffer, and a second memory utilization metric associated with the second buffer. The transport device further includes one or more comparator circuits configured to compare the first memory utilization metric and the second memory utilization metric, where the dynamic buffer control circuit changes the divider address based on the comparison.
BRIEF DESCRIPTION OF THE DRAWINGS
FIGS. 1A and 1B are diagrams depicting performance of a dynamic repartitioning of a memory space that includes a plurality of buffers in accordance with an embodiment of the disclosure.
FIG. 2 is a block diagram depicting a hardware-implemented multi-buffer in accordance with an embodiment of the disclosure.
FIG. 3 is a block diagram depicting internal hardware components of a hardware-implemented multi-buffer in accordance with an embodiment of the disclosure.
FIG. 4 is a diagram depicting relative positions of head pointers, tail pointers, and a divider address in accordance with an embodiment of the disclosure.
FIG. 5 is a block diagram depicting example divider address moving control logic in accordance with an embodiment of the disclosure.
FIG. 6 is a block diagram depicting control logic that includes handling of wrap-around conditions in accordance with an embodiment of the disclosure.
FIG. 7 is a block diagram depicting control logic periodically adjusting a divider address in accordance with an embodiment of the disclosure.
FIG. 8 is a block diagram depicting a receiver that includes a dynamic FIFO control in accordance with an embodiment of the disclosure.
FIG. 9 is a flow diagram depicting steps of a method in accordance with an embodiment of the disclosure.
DETAILED DESCRIPTION
FIGS. 1A and 1B are diagrams depicting performance of a dynamic repartitioning of a memory space that includes a plurality of buffers in accordance with an embodiment of the disclosure. A shared memory space 102 includes a plurality of buffers that include a first buffer 104 and a second buffer 106. The shared memory space 102 is divided into the plurality of buffers 104, 106 according to one or more dividers, such as the divider identified at 108. During operation of the plurality of buffers 104, 106, the shared memory space 102 may be repartitioned by redefining the division of memory space between the first buffer 104 and the second buffer 106, as identified by the position of the divider 108. Such a repartitioning is depicted in the repositioning of the divider from a first address 110 to a second address 112. Such repartitioning increases the amount of memory available to the first buffer 104 while reducing the amount of memory available to the second buffer 106.
In one embodiment of the disclosure, a determination is made as to a first memory utilization 114 in the first buffer among a plurality of buffers sharing a memory space 102. In FIG. 1 and subsequent figures, the white portions of a buffer represent utilized buffer memory, while the shaded areas represent unused buffer memory. A determination is further made of a second memory utilization 116 of a second buffer among the plurality of buffers. The memory space 102 is repartitioned by redefining a division of the memory space 102, as indicated by the divider 108, based on the first memory utilization and the second memory utilization.
The division of the memory space 102 between the first buffer 104 and the second buffer 106 is defined by the divider 108, where the divider is associated with an address in the memory space 102 between the first buffer 104 and the second buffer 106. During repartitioning, the division of the memory space 102 is redefined by changing the address of the divider 108 in the memory space 102, such as from address 110 to address 112, as depicted in FIGS. 1A and 1B. Upon moving the divider 108, the amount of memory allocated to each of the first buffer 104 and the second buffer 106 changes, so that the first buffer 104 is allocated more memory while the second buffer 106 is allocated less memory in FIG. 1B after movement of the divider 108 from address 110 to address 112.
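As an illustrative aid (not part of the claimed hardware), the following C sketch models a shared memory space split by a single divider address; the type and function names, such as multi_buffer_t and move_divider, are hypothetical. Moving the divider toward higher addresses enlarges the first buffer and shrinks the second, mirroring the move from address 110 to address 112.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical model of a shared memory space split by one divider address.
 * The first buffer occupies [base, divider); the second occupies
 * [divider, limit). */
typedef struct {
    uint32_t base;     /* lowest address of the shared space           */
    uint32_t divider;  /* boundary between the first and second buffer */
    uint32_t limit;    /* one past the highest address of the space    */
} multi_buffer_t;

/* Move the divider by a signed step. A positive step (toward higher
 * addresses) enlarges the first buffer and shrinks the second. */
static bool move_divider(multi_buffer_t *mb, int32_t step)
{
    int64_t next = (int64_t)mb->divider + step;
    if (next <= (int64_t)mb->base || next >= (int64_t)mb->limit)
        return false;   /* never collapse either buffer entirely */
    mb->divider = (uint32_t)next;
    return true;
}
```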
In one embodiment of the disclosure, additional buffers may be encoded on the shared memory space 102, where the boundaries of those additional buffers are delineated by additional dividers associated with memory addresses. For example, in one embodiment, the memory space 102 is encoded with a third buffer among the plurality of buffers. A third memory utilization is determined for the third buffer. A division of the memory space 102 between the second buffer 106 and the third buffer is redefined by changing a second divider associated with an address of the memory space 102 between the second buffer 106 and the third buffer. The address of the second divider is changed based on the determined utilization of the second buffer 106 as well as the determined third memory utilization.
Dynamic buffer memory allocation, as depicted in FIGS. 1A and 1B, can provide advantages over static buffers. Static buffers may be permanently sized or may be sized at the beginning of operations based on expected utilization, for instance. Dynamic buffer memory allocation, as depicted in the embodiment of FIGS. 1A and 1B, can offer improved performance through adjustment of buffer sizes during operation of those buffers. Buffer sizing can have a significant effect on system performance. When a buffer is filled, that buffer can no longer accept data, slowing the overall rate at which data can be received. When a buffer is not full and can accept more data, that buffer can continue to accept data from a source, regardless of the rate of consumption or processing of the received data by a target.
Static buffers, while relatively simple in structure, cannot adapt to address the current state of data processing. When certain buffers are being heavily utilized, while other buffers are being little utilized, the heavily utilized buffers may reach a full state, regardless of the existence of unused buffer memory in the little utilized buffers. While the relative sizes of static buffers can be manually adjusted to address such a scenario when the future buffer utilization is known, such static buffers may be a less than optimal solution when future buffer utilization is unknown. Dynamic buffer memory allocation, as disclosed in certain embodiments herein, enables adaptation to the current buffer usage environment by increasing the amount of memory allocated to heavily utilized buffers to avoid buffer overruns by reducing the memory allocated to buffers that are currently being utilized at low levels of capacity.
For example, a buffer for buffering packets from a first flow that is being received at a given point in time faster than the received packets can be re-transmitted, such as during a spike in traffic, could run out of buffer space, resulting in packet loss, without on-the-fly dynamic buffer memory allocation adding memory to the allocation to the filling buffer. Conversely, when the device is able to re-transmit packets faster than the packets are being received, the buffer may run empty, resulting in unused memory space that can be reallocated to a buffer that is currently in a higher state of usage.
The buffers 104, 106 of FIGS. 1A and 1B may take a variety of forms. For example, the buffers 104, 106 may be encoded onto different types of memory spaces, such as a Random Access Memory (RAM) memory space or a cache memory space. In the example of FIGS. 1A and 1B, low memory addresses are depicted at the top of the memory space, with high memory addresses being depicted at the bottom of the memory space. Each of the buffers 104, 106 is filled from low memory addresses toward high memory addresses. The first buffer 104 is prevented from accessing memory past the divider address 108, and the second buffer 106 is prevented from accessing memory before the divider address 108.
The buffers may be implemented using a variety of data structures such as a stack, a queue, a cache, a look-up table, as well as others. In an embodiment of the disclosure, the buffers 104, 106 take the form of circular first-in-first-out (FIFO) queues. In such a FIFO buffer, a first data element provided to the FIFO buffer by a data source, such as a packet source, is the first data element provided to the data destination, when the data destination, such as a packet destination, is available to accept the first data element. The FIFO buffer may be implemented as a circular buffer, whereupon filling the FIFO buffer to a maximum address in the memory space assigned to the FIFO buffer, the FIFO buffer continues storing additional received data elements at the minimum addresses of the FIFO buffer, provided the data elements from those minimum addresses have been previously transmitted and cleared from the FIFO buffer. Such a configuration can ensure continued operation of the FIFO buffer after filling to the maximum address assigned to the FIFO buffer, as long as sufficient free memory is available at the lower addresses of the FIFO buffer. It is noted that a FIFO buffer can also be filled from high addresses to low addresses, for example.
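A rough software analogue of such a circular FIFO confined to its partition is sketched below. The structure and function names are illustrative assumptions, and here the head field denotes the next write position rather than the most recently written element, a slight variation on the convention used in the description.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical circular FIFO confined to the partition [base, divider). */
typedef struct {
    uint32_t base;           /* lowest address allocated to this FIFO       */
    uint32_t divider;        /* first address belonging to the next buffer  */
    uint32_t head;           /* next slot to be written                     */
    uint32_t tail;           /* oldest stored element, next to be read      */
    uint32_t count;          /* current occupancy                           */
    uint8_t  mem[1 << 12];   /* backing storage; addresses are assumed to
                                fit within this array for the sketch        */
} circ_fifo_t;

/* Append one element; wrap the head back to the partition base when it
 * reaches the divider, as long as unconsumed data would not be overwritten. */
static bool fifo_push(circ_fifo_t *f, uint8_t value)
{
    if (f->count >= f->divider - f->base)
        return false;                      /* the partition is full */
    f->mem[f->head] = value;
    f->head = (f->head + 1 >= f->divider) ? f->base : f->head + 1;
    f->count++;
    return true;
}
```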
FIG. 2 is a block diagram depicting a hardware-implemented multi-buffer in accordance with one embodiment of the disclosure. The hardware-implemented multi-buffer 202 includes a buffer memory comprising a memory space 204. The memory space 204 is divided between a first buffer 206 and a second buffer 208, where a dynamic delineation of the memory space 204 between the first buffer 206 and the second buffer 208 is identified by a divider address 210. In an embodiment, a dynamic buffer control unit 212 controls the relative sizes of the first buffer 206 and the second buffer 208 during operation of the multi-buffer 202 by adjusting the divider address 210 based on monitoring of the utilization of the first buffer 206 and the second buffer 208, respectively.
FIG. 3 is a block diagram depicting internal hardware components of a hardware-implemented multi-buffer in accordance with one embodiment of the disclosure. The dynamic buffer control circuit 302 includes a control memory 304. The control memory 304 is configured to store a divider address 306 that corresponds to a division between the first buffer and the second buffer in a memory space. The control memory 304 is further configured to store a first memory utilization metric 308. For example, the first memory utilization metric 308 may be directed to an amount of allocated memory that the first buffer is using. The control memory is also configured to store a second memory utilization metric 310. For example, in one embodiment, the second memory utilization metric 310 is directed to an amount of allocated memory that the second buffer is using. The dynamic buffer control unit 302 further includes one or more comparator circuits 312 that are configured to compare the first memory utilization metric 308 and the second memory utilization metric 310, where the dynamic buffer control circuit 302 changes the divider address 306 based on that comparison, such as via control logic 314 included in the dynamic buffer control circuit 302.
The first memory utilization metric 308 and the second memory utilization metric 310 can take a variety of forms. In one embodiment, the utilization metrics 308, 310 identify a total amount of memory utilized by the buffers associated with the respective utilization metrics. When a disparity in size between the first memory utilization metric 308 and the second memory utilization metric 310 exists, the relative sizes of the buffers can be adjusted through changing the divider address 306. Alternative embodiments may adjust the relative sizes of the buffers based on absolute amounts of memory not used, proportional amounts of allocated memory used, or any other suitable measure of utilization.
In another embodiment, the first memory utilization metric for a first FIFO buffer tracks a first head pointer address and a first tail pointer address associated with the first FIFO buffer. The first tail pointer address points to a data element that has been in the first FIFO buffer the longest, and the first head pointer address points to a data element that has been added to the FIFO buffer most recently. The current memory utilization of the first FIFO buffer can be determined based on the first tail pointer address and the first head pointer address. The memory utilization of a second FIFO buffer can be similarly tracked and determined through storage of a second head pointer address and a second tail pointer address.
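For illustration, the head/tail bookkeeping described above can be reduced to a small helper. The sketch below follows the description's rule that occupancy in the normal case is the head address minus the tail address (UH minus UT); the wrapped branch is an assumed extension using the partition size, and the function name is hypothetical.

```c
#include <stdint.h>

/* Occupancy of a FIFO from its head and tail pointer addresses. The normal
 * case is simply head - tail; the wrapped case (head below tail) subtracts
 * the free gap, tail - head, from the partition size. */
static uint32_t fifo_utilization(uint32_t head, uint32_t tail,
                                 uint32_t fifo_size)
{
    if (head >= tail)
        return head - tail;               /* head has not wrapped            */
    return fifo_size - (tail - head);     /* wrapped: free gap is tail - head */
}
```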
FIG. 4 is a diagram depicting relative positions of head pointers, tail pointers, and a divider address in accordance with an embodiment of the disclosure. A memory space 402 is encoded with a plurality of FIFO buffers, including FIFO1 404 and FIFO2 406. Lower addresses of the memory space 402 are depicted at the top of FIG. 4, while higher addresses are depicted at the bottom of FIG. 4. An upper tail pointer (UT) 408 is tracked by a dynamic buffer control circuit. UT 408 identifies the location of a data element in FIFO1 404 that has been in FIFO1 404 the longest. The data element at UT 408 will be the next data element to be transmitted from FIFO1 404. An upper head pointer (UH) 410 is also tracked by the dynamic buffer control circuit and identifies the location of a data element in FIFO1 404 that has most recently been added to FIFO1 404. The data element at UH 410 is at the end of the FIFO1 404 queue. The memory utilization of FIFO1 404 can be determined by subtracting the UT 408 address from the UH 410 address.
Corresponding pointers are tracked for FIFO 2 406. Specifically, a lower tail pointer (LT) 412 is tracked by the dynamic buffer control circuit. LT 412 identifies the location of a data element in FIFO2 406 that has been in FIFO2 406 the longest. The data element at LT 412 will be the next data element to be transmitted from FIFO2 406. A lower head pointer (LH) 414 is also tracked by the dynamic buffer control circuit and identifies the location of a data element in FIFO2 406 that has most recently been added to FIFO2 406. The data element at LH 414 is at the end of the FIFO2 406 queue. The memory utilization of FIFO2 406 can be determined by subtracting the LT 412 address from the LH 414 address.
A divider address 416 delineates the boundary between FIFO1 404 and FIFO2 406. When inputting data elements into FIFO1, those data elements may not be inputted at a higher address than the divider address 416, and thus, UH 410 cannot extend below the divider address 416. In implementations using a circular FIFO queue, when a data element is inputted, and there is no room to place the inputted data element at the end of the FIFO1 404 queue because UH 410 has reached the divider address 416, the inputted data element can be placed at the low address end of FIFO1 404 (i.e., at the divider between FIFO1 404 and a previous buffer or at the beginning of the set of buffers when FIFO1 404 is the first buffer in the memory space 402). UH 410 is moved to point at the newly inputted data element at the low address end of FIFO1 404, and FIFO1 404 is then in a wrap-around condition where UT 408 is greater than UH 410. A dynamic buffer control circuit may treat wrap-around conditions as a special case when determining whether to expand or contract allocated buffer memories, as described further below.
FIG. 5 is a block diagram depicting example divider address moving control logic in one embodiment of the disclosure. A dynamic buffer control circuit 502 includes a control memory 504 for storage of certain metrics. The stored metrics include a divider address (D) 506, first memory utilization metrics (UH, UT) 508 associated with a first buffer, and second memory utilization metrics (LH, LT) 510 associated with a second buffer. A set of one or more comparator circuits 512 is configured to make comparisons to determine whether the dynamic buffer control circuit 502 should adjust the divider address 506 between a first buffer and a second buffer to change the amount of memory allocated to those buffers. In one embodiment of the disclosure, the control logic 514 directs the one or more comparator circuits 512 to perform a determination of whether (D−UH)<(LT−D). In other words, the comparator 512 is tasked with determining whether the unused, higher address portion of the first buffer is less than the unused, lower address portion of the second buffer. If the unused portion of the first buffer is less than the unused portion of the second buffer, then the divider address 506 is moved down, to a higher address, enabling the first buffer to take advantage of some of the unused memory previously allocated to the second buffer. If the unused portion of the first buffer is greater than the unused portion of the second buffer, then the divider address 506 is moved up, to a lower address, enabling the second buffer to take advantage of some of the unused memory previously allocated to the first buffer. The size of the change in the divider address 506 can be based on the magnitude of the difference between (D−UH) and (LT−D) or may be a predetermined amount.
In another embodiment of the disclosure, the comparator circuits 512 are configured to make comparisons to determine whether the dynamic buffer control circuit 502 should adjust the divider address 506 based on the total amount of unused space in each buffer. The control logic 514 directs the one or more comparator circuits 512 to perform a determination of whether (FirstFifoSize−(UH−UT))<(SecondFifoSize−(LH−LT)). In other words, the comparator 512 is tasked with determining whether the unused portion of the second buffer is greater than the unused portion of the first buffer. If the unused portion of the second buffer is greater than the unused portion of the first buffer, then the divider address 506 is moved down, to a higher address, enabling the first buffer to take advantage of some of the unused memory previously allocated to the second buffer. If the unused portion of the first buffer is greater than the unused portion of the second buffer, then the divider address 506 is moved up, to a lower address, enabling the second buffer to take advantage of some of the unused memory previously allocated to the first buffer.
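The two comparison rules above can be expressed in software as follows. This is an illustrative sketch only, since the patent implements the comparisons with comparator circuits; the function names and the enum are assumptions.

```c
#include <stdint.h>

typedef enum {
    MOVE_NONE,
    MOVE_DOWN,   /* divider moves to a higher address: buffer 1 grows */
    MOVE_UP      /* divider moves to a lower address: buffer 2 grows  */
} divider_move_t;

/* First rule of FIG. 5: compare the unused gap above the first buffer's
 * head, (D - UH), with the unused gap below the second buffer's tail,
 * (LT - D). */
static divider_move_t compare_adjacent_gaps(uint32_t D, uint32_t UH,
                                            uint32_t LT)
{
    if ((D - UH) < (LT - D)) return MOVE_DOWN;
    if ((D - UH) > (LT - D)) return MOVE_UP;
    return MOVE_NONE;
}

/* Second rule: compare total unused space in each FIFO,
 * (FirstFifoSize - (UH - UT)) versus (SecondFifoSize - (LH - LT)). */
static divider_move_t compare_total_unused(uint32_t first_size, uint32_t UH,
                                           uint32_t UT, uint32_t second_size,
                                           uint32_t LH, uint32_t LT)
{
    uint32_t unused1 = first_size - (UH - UT);
    uint32_t unused2 = second_size - (LH - LT);
    if (unused1 < unused2) return MOVE_DOWN;
    if (unused1 > unused2) return MOVE_UP;
    return MOVE_NONE;
}
```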
Divider addresses between other buffers can be similarly managed. For example, in one embodiment, the memory space is divided among the first buffer, the second buffer, and a third buffer. The dynamic delineation of the memory space among the first buffer, the second buffer, and the third buffer is identified by the divider address 506, which identifies the boundary between the first buffer and the second buffer, and a second divider address, which identifies a boundary between the second buffer and the third buffer. The control memory 504 is configured to store the second divider address and one or more third memory utilization metrics associated with the third buffer (e.g., a third buffer head address and a third buffer tail address). The one or more comparator circuits 512 are configured to compare one or more second memory utilization metrics 510 to one or more third memory utilization metrics, and the dynamic buffer control circuit 502 is configured to change the second divider address based on that comparison.
FIG. 6 is a block diagram depicting control logic that includes handling of wrap-around conditions in accordance with an embodiment of the disclosure. In the embodiment of FIG. 6, the control logic 602 of a dynamic buffer control circuit 604 directs that a divider address 606 should not be moved when one of the two buffers whose boundary is identified by the divider address 606 is in a wrap-around condition. A control memory 608 stores the divider address 606, first memory utilization metrics 610, and second memory utilization metrics 612. The control logic 602 directs the comparator circuits 614 to determine whether either of the first buffer or the second buffer is in a wrap-around condition. A wrap-around condition at a buffer can be detected when the head pointer for the buffer is at a lower address than the tail pointer for the buffer. Thus, if the comparator circuits 614 determine that (UH>UT and LH>LT) is false, then the divider address 606 between the first buffer and the second buffer is not to be moved. If neither the first buffer nor the second buffer is in a wrap-around condition, then the control logic continues operations to determine whether the divider address 606 is to be moved. If (D−UH) is determined by the comparator circuits 614 to be less than (LT−D), then the dynamic buffer control circuit 604 moves the divider down by increasing the divider address 606, and if (D−UH) is determined by the comparator circuits 614 to be greater than (LT−D), then the dynamic buffer control circuit 604 moves the divider up by decreasing the divider address 606.
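The wrap-around guard of FIG. 6 can be layered on top of the same comparison. The sketch below freezes the divider unless both buffers satisfy the head-above-tail test; the function and type names are chosen for illustration.

    #include <stdint.h>

    typedef struct { uint32_t head, tail; } fifo_ptrs_t;

    /* Move the divider only when (UH > UT and LH > LT) holds, i.e. neither
       buffer is in a wrap-around condition. */
    static void maybe_adjust_divider(uint32_t *d,
                                     const fifo_ptrs_t *first,
                                     const fifo_ptrs_t *second,
                                     uint32_t step)
    {
        if (!(first->head > first->tail && second->head > second->tail))
            return;                                 /* wrap-around: leave D unchanged */

        uint32_t first_slack  = *d - first->head;   /* D - UH */
        uint32_t second_slack = second->tail - *d;  /* LT - D */

        if (first_slack < second_slack)
            *d += step;                             /* divider down, to a higher address */
        else if (first_slack > second_slack)
            *d -= step;                             /* divider up, to a lower address    */
    }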
FIG. 7 is a block diagram depicting control logic periodically adjusting a divider address in accordance with an embodiment of the disclosure. Periodic adjustment of memory allocations during operation of a set of buffers enables those allocations to track current system conditions. Spacing adjustments at least a predetermined period of time apart ensures that changes are not made so rapidly as to disrupt system operation. In one embodiment, a dynamic buffer control circuit 702 periodically adjusts a divider address 704 stored by a control memory 706 according to a timer threshold 708. The dynamic buffer control circuit 702 includes a timer 710. Control logic 712 utilizes comparator circuits 714 to determine whether it is time to consider adjusting the divider address 704. In one embodiment, that determination is made by performing a modulo operation on the timer value 710 to determine whether the remainder after dividing the timer value 710 by the timer threshold value 708 is equal to zero. When MOD(Timer, Threshold)=0, it is time to consider adjusting the divider address 704. At that time, if (D−UH) is determined by the comparator circuits 714 to be less than (LT−D), then the dynamic buffer control circuit 702 moves the divider down by increasing the divider address 704, and if (D−UH) is determined by the comparator circuits 714 to be greater than (LT−D), then the dynamic buffer control circuit 702 moves the divider up by decreasing the divider address 704.
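The timer gate can be modeled as a modulo test on a free-running counter, assuming a nonzero threshold; the function below is a sketch only, and the commented usage refers to the illustrative maybe_adjust_divider helper sketched above.

    #include <stdint.h>

    /* Returns nonzero when MOD(Timer, Threshold) == 0, i.e. when a divider
       adjustment should be considered.  Threshold is assumed nonzero. */
    static int adjustment_due(uint32_t timer, uint32_t threshold)
    {
        return (timer % threshold) == 0;
    }

    /* Typical use in a control loop:
     *
     *     timer++;
     *     if (adjustment_due(timer, threshold))
     *         maybe_adjust_divider(&d, &first, &second, STEP);
     */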
In other embodiments of the disclosure, a dynamic buffer control circuit 702 may consider other constraints in determining whether divider addresses 704 should be moved. For example, a dynamic buffer control circuit 702 may consider a minimum buffer size constraint. In such an embodiment, the control memory 706 stores a minimum size threshold (ST), a first divider address (D) 704, and a second divider address (SD). The comparator circuits 714 determine whether SD−D<ST, and the dynamic buffer control circuit 702 is configured to prevent change of D or SD when SD−D<ST is true. Similar operations can be performed to prevent any buffer from exceeding a maximum size threshold.
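A minimum-size check can then veto any proposed divider move, as in the following sketch; set_divider_checked is an illustrative helper, not an element of the disclosure.

    #include <stdint.h>

    /* Accept a proposed value for the first divider D only if the buffer
       bounded by D and the second divider SD would keep at least ST words,
       i.e. reject the move whenever (SD - proposed_d) < ST. */
    static void set_divider_checked(uint32_t *d, uint32_t sd, uint32_t st,
                                    uint32_t proposed_d)
    {
        if ((sd - proposed_d) >= st)
            *d = proposed_d;
        /* otherwise leave D unchanged; a maximum-size test works the same way
           with the inequality reversed */
    }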
A dynamic buffer control circuit can be utilized in a variety of environments. For example, a dynamic buffer control circuit may be utilized in a network receiver that receives data associated with a plurality of ports. Each port is associated with a different application or hardware element at a computing system associated with the receiver. The data handling or consumption rates of the different applications and hardware elements may vary, and the rates at which data is received on the different ports may vary as well, making a set of buffers, one assigned to each receiving port, beneficial in maintaining data flow.
FIG. 8 is a block diagram depicting a receiver that includes a dynamic FIFO control, in accordance with an embodiment of the disclosure. In an embodiment, a receiver 802 is disposed in a network device 800 such as a switch, server or gateway that is configured to receive data through a number of input ports 801. The receiver 802 includes a memory space 804 that is divided among a plurality of FIFO buffers. Dynamic delineations of the memory space 804 into the FIFO buffers are identified by divider addresses that are stored and managed by a dynamic FIFO control hardware module 806. The dynamic FIFO control 806 performs certain operations as described herein above, such as divider adjustments, timer monitoring, wrap-around monitoring, and minimum and maximum buffer size monitoring.
In operation, data is received at the receiver 802, where the received data includes an indication of the port to which that data is destined. The receiver 802 stores that received data in a FIFO buffer that is associated with the port identified by the received data. A write control circuit 808 directs the received data to an address in the memory space 804 that is associated with the correct FIFO buffer for the identified port. Specifically, the write control circuit 808 writes the data to the address that follows the current head pointer address of the correct FIFO buffer, increases that head pointer address, and informs the dynamic FIFO control 806 of the head pointer address change, so that the dynamic FIFO control 806 can adjust the associated buffer utilization metrics accordingly. The write control circuit 808 receives the divider addresses from the dynamic FIFO control 806, so that data is not written to the correct FIFO at an address that is beyond the divider address that identifies the end of the memory allocation of the correct FIFO. If writing to the next address of the correct FIFO would run beyond that divider address, then the write control circuit 808 is configured to write the received data at the beginning of the correct FIFO, resulting in a wrap-around condition.
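The write path can be approximated by a conventional circular-buffer write that treats the divider address as the last word of the FIFO's allocation. The fifo_t layout and fifo_write helper below are assumptions for illustration; the wrap-back to the buffer base corresponds to the wrap-around condition described above.

    #include <stdint.h>
    #include <stdbool.h>

    typedef struct {
        uint32_t base;     /* lowest address allocated to this FIFO              */
        uint32_t divider;  /* highest address allocated to this FIFO (inclusive) */
        uint32_t head;     /* next address to write                              */
        uint32_t tail;     /* oldest unread address                              */
    } fifo_t;

    /* Write one word at the head, then advance the head, wrapping back to the
       base rather than crossing the divider.  mem points to the shared memory
       space and is indexed by absolute address.  Returns false when full. */
    static bool fifo_write(fifo_t *f, uint32_t *mem, uint32_t word)
    {
        uint32_t next = (f->head == f->divider) ? f->base : f->head + 1;

        if (next == f->tail)      /* advancing would overtake the tail: full */
            return false;

        mem[f->head] = word;      /* store at the current head address */
        f->head = next;           /* head change is reported to the dynamic FIFO
                                     control so utilization metrics stay current */
        return true;
    }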
The receiver 802 also includes a read control circuit 810 that controls access to data stored in the memory space 804 by applications or hardware of the computer system associated with the receiver 802. The read control circuit 810 tracks tail pointers for each of the FIFO buffers indicating the data element that has been in those buffers the longest and is next to be outputted from the receiver 802. When an application or hardware is ready to receive data transmitted to a particular port, the read control circuit 810 is notified, the read control circuit 810 accesses data at the tail address associated with the buffer for that port, and the read control circuit 810 outputs the accessed data to the requesting application or hardware via output ports 812. The read control circuit 810 further increments the tail address for that buffer, as long as such an increment would not pass a divider address, provided by the dynamic FIFO control 806, associated with that buffer. If such a divider address would be exceeded, the read control circuit resets the tail address to the top (e.g., lowest address) of the buffer, ending a wrap-around condition for that buffer. The read control circuit 810 provides the updated tail address to the dynamic FIFO control 806 so that the appropriate buffer utilization metrics can be updated.
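The matching read path, using the same assumed fifo_t layout, outputs the oldest word at the tail and wraps the tail past the divider back to the base, which ends a wrap-around condition.

    #include <stdint.h>
    #include <stdbool.h>

    typedef struct {
        uint32_t base;     /* lowest address allocated to this FIFO              */
        uint32_t divider;  /* highest address allocated to this FIFO (inclusive) */
        uint32_t head;     /* next address to write                              */
        uint32_t tail;     /* oldest unread address                              */
    } fifo_t;

    /* Read the oldest word and advance the tail, wrapping back to the base
       instead of crossing the divider.  mem points to the shared memory space,
       indexed by absolute address.  Returns false if the FIFO is empty. */
    static bool fifo_read(fifo_t *f, const uint32_t *mem, uint32_t *word)
    {
        if (f->tail == f->head)   /* empty */
            return false;

        *word = mem[f->tail];
        f->tail = (f->tail == f->divider) ? f->base : f->tail + 1;
        /* the tail change is likewise reported to the dynamic FIFO control */
        return true;
    }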
FIG. 9 is a flow diagram depicting steps of a method in accordance with an embodiment of the disclosure. At 902, a determination is made of a first memory utilization of a first buffer among a plurality of buffers sharing a memory space. At 904, a second memory utilization is determined for a second buffer among the plurality of buffers. At 906, the memory space is repartitioned by redefining a division of the memory space between the first buffer and the second buffer based on the first memory utilization and the second memory utilization.
This application uses examples to illustrate the invention. The patentable scope of the invention may include other examples.

Claims (30)

What is claimed is:
1. A hardware-implemented multi-buffer, comprising:
a buffer memory comprising a shared memory space, wherein the memory space is shared between at least a first buffer and a second buffer, and wherein a dynamic delineation of the memory space between the first buffer and the second buffer is identified by a divider address;
a dynamic buffer control circuit, wherein the dynamic buffer control circuit comprises:
a control memory configured to store:
the divider address;
a first memory utilization metric associated with the first buffer; and
a second memory utilization metric associated with the second buffer; and
one or more comparator circuits configured to compare the first memory utilization metric and the second memory utilization metric, wherein the dynamic buffer control circuit is configured to change the divider address based on the comparison.
2. The multi-buffer of claim 1, wherein the multi-buffer is configured to prevent the first buffer from accessing buffer memory past the divider address, and wherein the multi-buffer is configured to prevent the second buffer from accessing buffer memory before the divider address.
3. The multi-buffer of claim 1, wherein the first buffer and the second buffer are first-in-first-out buffers;
wherein the control memory is configured to store an upper head address, an upper tail address, a lower head address, and a lower tail address.
4. The multi-buffer of claim 3, wherein the one or more comparator circuits are configured to determine whether:

(D−UH)<(LT−D);
wherein D is the divider address, UH is the upper head address, and LT is the lower tail address;
wherein the dynamic buffer control circuit is configured to change the divider address in a first direction when (D−UH)<(LT−D) is true, and wherein the dynamic buffer control circuit is configured to change the divider address in a second direction when (D−UH)<(LT−D) is not true.
5. The multi-buffer of claim 3, wherein the one or more comparator circuits are configured to determine whether:
UH>UT and LH>LT;
wherein UT is the upper tail address and LH is the lower head address;
wherein the dynamic buffer control circuit is configured to prevent change of the divider address when UT>UH or LT>LH is true.
6. The multi-buffer of claim 3, further comprising:
a write control circuit configured to direct incoming data to one of the first buffer or the second buffer, wherein the write control circuit is configured to instruct the dynamic buffer control circuit to increment the upper head address when the incoming data is directed to the first buffer, and wherein the write control circuit is configured to instruct the dynamic buffer control circuit to increment the lower head address when the incoming data is directed to the second buffer.
7. The multi-buffer of claim 3, further comprising:
a read control circuit configured to read data from one of the first buffer or the second buffer, wherein the read control circuit is configured to instruct the dynamic buffer control circuit to increment the upper tail address when data is read from the first buffer, and wherein the read control circuit is configured to instruct the dynamic buffer control circuit to increment the lower tail address when data is read from the second buffer.
8. The multi-buffer of claim 1, wherein the memory space is divided among the first buffer, the second buffer, and a third buffer, wherein dynamic delineation of the memory space among the first buffer, the second buffer, and the third buffer is identified by the divider address and a second divider address; and
wherein the control memory is further configured to store the second divider address and a third memory utilization metric associated with the third buffer; and
wherein the one or more comparator circuits are configured to compare the second memory utilization metric and the third memory utilization metric, wherein the dynamic buffer control circuit changes the second divider address based on the comparison of the second memory utilization metric and the third memory utilization metric.
9. The multi-buffer of claim 8, wherein the control memory is further configured to store a size threshold value;
wherein the one or more comparator circuits are configured to determine whether:

(SD−D)<T;
wherein D is the divider address, SD is the second divider address, and T is the size threshold value;
wherein the dynamic buffer control circuit is configured to prevent change of the divider address or the second divider address when (SD−D)<T is true.
10. The multi-buffer of claim 1, wherein the control memory is configured to store a timing threshold;
wherein the dynamic buffer control circuit further comprises:
a timing circuit;
wherein the one or more comparator circuits are configured to repeat the comparison of the first memory utilization metric and the second memory utilization metric based on the timing circuit and the timing threshold.
11. A method, comprising:
determining a first memory utilization of a first buffer among a plurality of buffers sharing a memory space, wherein the first memory utilization is determined based on an upper head address and an upper tail address;
determining a second memory utilization of a second buffer among the plurality of buffers, wherein the second memory utilization is determined based on a lower head address and a lower tail address, and wherein the first buffer and the second buffer are first-in-first-out buffers; and
repartitioning the memory space by redefining a division of the memory space between the first buffer and the second buffer based on the first memory utilization and the second memory utilization, wherein the division of the memory space is defined by a divider, the divider being associated with a divider address in the memory space between the first buffer and the second buffer, wherein the division of the memory space is redefined by changing the divider address in the memory space, and wherein the redefining comprises:
changing the divider address in a first direction when (D−UH)<(LT−D), and
changing the divider address in a second direction when (D−UH)>(LT−D), wherein D is the divider address, UH is the upper head address, and LT is the lower tail address.
12. The method of claim 11, wherein the first buffer is prevented from accessing memory past the divider address, and wherein the second buffer is prevented from accessing memory before the divider address.
13. The method of claim 11, wherein the redefining further comprises:
determining whether the first buffer or the second buffer is in a wrap-around condition; and
halting the redefining when the first buffer or the second buffer is in the wrap-around condition.
14. The method of claim 11, further comprising:
determining a third memory utilization in a third buffer among the plurality of buffers; and
redefining a division of the memory space between the second buffer and the third buffer by changing a second divider associated with an address between the second buffer and the third buffer based on the second memory utilization and the third memory utilization.
15. The method of claim 14, wherein the redefining the division between the second buffer and the third buffer further comprises:
determining a prospective address for the second divider;
determining a distance between the first divider and the prospective address; and
halting the redefining of the division between the second buffer and the third buffer when the distance is less than a minimum size threshold distance.
16. The method of claim 11, further comprising:
waiting a predefined period of time;
re-determining the first memory utilization and the second memory utilization;
performing another redefining of the division of memory space based on the re-determined first memory utilization and the re-determined second memory utilization.
17. A method, comprising:
determining a first memory utilization of a first buffer among a plurality of buffers sharing a memory space, wherein the first memory utilization is determined based on an upper head address and an upper tail address;
determining a second memory utilization of a second buffer among the plurality of buffers, wherein the second memory utilization is determined based on a lower head address and a lower tail address, and wherein the first buffer and the second buffer are first-in-first-out buffers; and
repartitioning the memory space by redefining a division of the memory space between the first buffer and the second buffer based on the first memory utilization and the second memory utilization, wherein the division of the memory space is defined by a divider, the divider being associated with a divider address in the memory space between the first buffer and the second buffer, wherein the division of the memory space is redefined by changing the divider address in the memory space, and wherein the redefining comprises:
changing the divider address in a first direction when (FirstFifoSize−(UH−UT))<(SecondFifoSize−(LH−LT)); and
changing the divider address in a second direction when (FirstFifoSize−(UH−UT))>(SecondFifoSize−(LH−LT));
wherein UH is the upper head address, UT is the upper tail address, LH is the lower head address, LT is the lower tail address, FirstFifoSize is a memory allocation to the first buffer, and SecondFifoSize is a memory allocation to the second buffer.
18. The method of claim 17, wherein the first buffer is prevented from accessing memory past the divider address, and wherein the second buffer is prevented from accessing memory before the divider address.
19. The method of claim 17, wherein the redefining further comprises:
determining whether the first buffer or the second buffer is in a wrap-around condition; and
halting the redefining when the first buffer or the second buffer is in the wrap-around condition.
20. The method of claim 17, further comprising:
determining a third memory utilization in a third buffer among the plurality of buffers; and
redefining a division of the memory space between the second buffer and the third buffer by changing a second divider associated with an address between the second buffer and the third buffer based on the second memory utilization and the third memory utilization.
21. The method of claim 20, wherein the redefining the division between the second buffer and the third buffer further comprises:
determining a prospective address for the second divider;
determining a distance between the first divider and the prospective address; and
halting the redefining of the division between the second buffer and the third buffer when the distance is less than a minimum size threshold distance.
22. The method of claim 17, further comprising:
waiting a predefined period of time;
re-determining the first memory utilization and the second memory utilization;
performing another redefining of the division of memory space based on the re-determined first memory utilization and the re-determined second memory utilization.
23. A network device, comprising:
a plurality of network ports; and
a hardware-implemented multi-buffer, comprising:
a buffer memory comprising a shared memory space, wherein the memory space is shared between at least a first buffer and a second buffer, wherein a dynamic delineation of the memory space between the first buffer and the second buffer is identified by a divider address, and wherein the first buffer is associated with a first of the plurality of network ports and the second buffer is associated with a second of the plurality of network ports;
a dynamic buffer control circuit, wherein the dynamic buffer control circuit comprises:
a control memory configured to store:
the divider address;
a first memory utilization metric associated with the first buffer; and
a second memory utilization metric associated with the second buffer; and
one or more comparator circuits configured to compare the first memory utilization metric and the second memory utilization metric, wherein the dynamic buffer control circuit changes the divider address based on the comparison.
24. A method, comprising:
determining a first memory utilization of a first buffer among a plurality of buffers sharing a memory space;
determining a second memory utilization of a second buffer among the plurality of buffers;
repartitioning the memory space by redefining a division of the memory space between the first buffer and the second buffer based on the first memory utilization and the second memory utilization, wherein the division of the memory space is defined by a divider, the divider being associated with a divider address in the memory space between the first buffer and the second buffer, and wherein the division of the memory space is redefined by changing the divider address in the memory space;
determining a third memory utilization in a third buffer among the plurality of buffers; and
redefining a division of the memory space between the second buffer and the third buffer by changing a second divider associated with an address between the second buffer and the third buffer based on the second memory utilization and the third memory utilization, wherein the redefining the division between the second buffer and the third buffer further comprises:
determining a prospective address for the second divider,
determining a distance between the first divider and the prospective address, and
halting the redefining of the division between the second buffer and the third buffer when the distance is less than a minimum size threshold distance.
25. The method of claim 24, wherein the first buffer is prevented from accessing memory past the divider address, and wherein the second buffer is prevented from accessing memory before the divider address.
26. The method of claim 24, wherein the first buffer and the second buffer are first-in-first-out buffers;
wherein the first memory utilization is determined based on an upper head address and an upper tail address; and
wherein the second memory utilization is determined based on a lower head address and a lower tail address.
27. The method of claim 26, wherein the redefining further comprises:
changing the divider address in a first direction when (D−UH)<(LT−D); and
changing the divider address in a second direction when (D−UH)>(LT−D);
wherein D is the divider address, UH is the upper head address, and LT is the lower tail address.
28. The method of claim 26, wherein the redefining further comprises:
changing the divider address in a first direction when (FirstFifoSize−(UH−UT))<(SecondFifoSize−(LH−LT)); and
changing the divider address in a second direction when (FirstFifoSize−(UH−UT))>(SecondFifoSize−(LH−LT));
wherein UH is the upper head address, UT is the upper tail address, LH is the lower head address, LT is the lower tail address, FirstFifoSize is a memory allocation to the first buffer, and SecondFifoSize is a memory allocation to the second buffer.
29. The method of claim 26, wherein the redefining the division of the memory space between the first buffer and the second buffer further comprises:
determining whether the first buffer or the second buffer is in a wrap-around condition; and
halting the redefining when the first buffer or the second buffer is in the wrap-around condition.
30. The method of claim 24, further comprising:
waiting a predefined period of time;
re-determining the first memory utilization and the second memory utilization;
performing another redefining of the division of the memory space between the first buffer and the second buffer based on the re-determined first memory utilization and the re-determined second memory utilization.
US13/678,304 2011-11-18 2012-11-15 Method and apparatus for automated division of a multi-buffer Active 2033-07-23 US9026735B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/678,304 US9026735B1 (en) 2011-11-18 2012-11-15 Method and apparatus for automated division of a multi-buffer

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201161561383P 2011-11-18 2011-11-18
US13/678,304 US9026735B1 (en) 2011-11-18 2012-11-15 Method and apparatus for automated division of a multi-buffer

Publications (1)

Publication Number Publication Date
US9026735B1 true US9026735B1 (en) 2015-05-05

Family

ID=53001820

Family Applications (1)

Application Number Title Priority Date Filing Date
US13/678,304 Active 2033-07-23 US9026735B1 (en) 2011-11-18 2012-11-15 Method and apparatus for automated division of a multi-buffer

Country Status (1)

Country Link
US (1) US9026735B1 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6441917B1 (en) * 1996-01-10 2002-08-27 Canon Kabushiki Kaisha Buffer memory managing method and printing apparatus using the method
US6094695A (en) * 1998-03-11 2000-07-25 Texas Instruments Incorporated Storage buffer that dynamically adjusts boundary between two storage areas when one area is full and the other has an empty data register

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9483410B1 (en) 2015-11-20 2016-11-01 International Business Machines Corporation Utilization based multi-buffer dynamic adjustment management
US9571578B1 (en) 2015-11-20 2017-02-14 International Business Machines Corporation Utilization based multi-buffer self-calibrated dynamic adjustment management
US9442674B1 (en) 2015-11-20 2016-09-13 International Business Machines Corporation Using a plurality of sub-buffers and a free segment list to allocate segments to a plurality of threads to use for writing data
US9798466B2 (en) 2015-11-20 2017-10-24 International Business Machines Corporation Using a plurality of sub-buffers and a free segment list to allocate segments to a plurality of threads to use for writing data
US9852075B2 (en) 2015-11-20 2017-12-26 International Business Machines Corporation Allocate a segment of a buffer to each of a plurality of threads to use for writing data
US10176101B2 (en) 2015-11-20 2019-01-08 International Business Machines Corporation Allocate a segment of a buffer to each of a plurality of threads to use for writing data
US20180367844A1 (en) * 2015-12-18 2018-12-20 Telefonaktiebolaget Lm Ericsson (Publ) Video playback buffer control
US10856046B2 (en) * 2015-12-18 2020-12-01 Telefonaktiebolaget Lm Ericsson (Publ) Video playback buffer control
WO2017112225A1 (en) * 2015-12-24 2017-06-29 Intel Corporation Soc fabric extensions for configurable memory maps through memory range screens and selectable address flattening
US9898222B2 (en) 2015-12-24 2018-02-20 Intel IP Corporation SoC fabric extensions for configurable memory maps through memory range screens and selectable address flattening
EP3352087A4 (en) * 2016-12-05 2018-08-22 Huawei Technologies Co., Ltd. Control method for data read/write command in nvme over fabric framework, device and system
US10838665B2 (en) 2016-12-05 2020-11-17 Huawei Technologies Co., Ltd. Method, device, and system for buffering data for read/write commands in NVME over fabric architecture
EP3825857A1 (en) * 2016-12-05 2021-05-26 Huawei Technologies Co., Ltd. Method, device, and system for controlling data read/write command in nvme over fabric architecture
US11762581B2 (en) * 2016-12-05 2023-09-19 Huawei Technologies Co., Ltd. Method, device, and system for controlling data read/write command in NVMe over fabric architecture

Similar Documents

Publication Publication Date Title
US9026735B1 (en) Method and apparatus for automated division of a multi-buffer
US11784760B2 (en) Method and system for contiguous HARQ memory management with memory splitting
US8225026B2 (en) Data packet access control apparatus and method thereof
US10193831B2 (en) Device and method for packet processing with memories having different latencies
US9929970B1 (en) Efficient resource tracking
US10248350B2 (en) Queue management method and apparatus
US7904677B2 (en) Memory control device
US8886741B2 (en) Receive queue models to reduce I/O cache consumption
KR20030053445A (en) Method and apparatus for buffer partitioning without loss of data
US11916790B2 (en) Congestion control measures in multi-host network adapter
US9063841B1 (en) External memory management in a network device
US10061513B2 (en) Packet processing system, method and device utilizing memory sharing
US7409624B2 (en) Memory command unit throttle and error recovery
US10038652B2 (en) Self tuning buffer allocation in a shared-memory switch
US10031884B2 (en) Storage apparatus and method for processing plurality of pieces of client data
US10990447B1 (en) System and method for controlling a flow of storage access requests
US10067868B2 (en) Memory architecture determining the number of replicas stored in memory banks or devices according to a packet size
CN111970213A (en) Queuing system
US20100054272A1 (en) Storage device capable of accommodating high-speed network using large-capacity low-speed memory
US20150301963A1 (en) Dynamic Temporary Use of Packet Memory As Resource Memory
US7802148B2 (en) Self-correcting memory system
EP3771164B1 (en) Technologies for providing adaptive polling of packet queues
JP2021144324A (en) Communication device, method for controlling communication device, and integrated circuit
US20230010161A1 (en) Expandable Queue
JP2014194672A (en) Memory control device and memory control method

Legal Events

Date Code Title Description
AS Assignment

Owner name: MARVELL ISRAEL (M.I.S.L) LTD., ISRAEL

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:TOROK, RUVEN;SHAFRIR, OREN;REEL/FRAME:029311/0682

Effective date: 20121115

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8