US20180113639A1 - Method and system for efficient variable length memory frame allocation - Google Patents


Info

Publication number
US20180113639A1
US20180113639A1 (Application No. US 15/335,014)
Authority
US
United States
Prior art keywords
frame
frames
super
super frame
allocated
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US15/335,014
Inventor
Horia Simionescu
Eugene Saghi
Sridhar Rao Veerla
Panthini Pandit
Timothy Hoglund
Gowrisankar RADHAKRISHNAN
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Avago Technologies International Sales Pte Ltd
Original Assignee
Avago Technologies General IP Singapore Pte Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Avago Technologies General IP Singapore Pte Ltd filed Critical Avago Technologies General IP Singapore Pte Ltd
Priority to US15/335,014 priority Critical patent/US20180113639A1/en
Assigned to AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD. reassignment AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: PANDIT, PANTHINI, VEERLA, SRIDHAR RAO, HOGLUND, TIMOTHY, RADHAKRISHNAN, GOWRISANKAR, SAGHI, EUGENE, SIMIONESCU, HORIA
Publication of US20180113639A1 publication Critical patent/US20180113639A1/en
Assigned to AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED reassignment AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED MERGER (SEE DOCUMENT FOR DETAILS). Assignors: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.
Assigned to AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED reassignment AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED CORRECTIVE ASSIGNMENT TO CORRECT THE EXECUTION DATE OF THE MERGER AND APPLICATION NOS. 13/237,550 AND 16/103,107 FROM THE MERGER PREVIOUSLY RECORDED ON REEL 047231 FRAME 0369. ASSIGNOR(S) HEREBY CONFIRMS THE MERGER. Assignors: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601: Interfaces specially adapted for storage systems
    • G06F3/0602: Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0604: Improving or facilitating administration, e.g. storage management
    • G06F3/0614: Improving the reliability of storage systems
    • G06F3/0619: Improving the reliability of storage systems in relation to data integrity, e.g. data losses, bit errors
    • G06F3/0628: Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0629: Configuration or reconfiguration of storage systems
    • G06F3/0631: Configuration or reconfiguration of storage systems by allocating resources to storage systems
    • G06F3/0638: Organizing or formatting or addressing of data
    • G06F3/0644: Management of space entities, e.g. partitions, extents, pools
    • G06F3/0646: Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
    • G06F3/065: Replication mechanisms
    • G06F3/0655: Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0659: Command handling arrangements, e.g. command buffers, queues, command scheduling
    • G06F3/0668: Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/067: Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
    • G06F3/0671: In-line storage system
    • G06F3/0673: Single storage device
    • G06F3/0683: Plurality of storage devices
    • G06F3/0689: Disk arrays, e.g. RAID, JBOD
    • G06F12/00: Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02: Addressing or allocation; Relocation
    • G06F12/08: Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802: Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0864: Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches using pseudo-associative means, e.g. set-associative or hashing
    • G06F12/0866: Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
    • G06F12/0871: Allocation or management of cache space
    • G06F12/0875: Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with dedicated cache, e.g. instruction or stack
    • G06F12/0893: Caches characterised by their organisation or structure
    • G06F12/0895: Caches characterised by their organisation or structure of parts of caches, e.g. directory or tag array
    • G06F2212/00: Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/10: Providing a specific technical effect
    • G06F2212/1016: Performance improvement
    • G06F2212/30: Providing cache or TLB in specific location of a processing system
    • G06F2212/302: In image processor or graphics adapter
    • G06F2212/45: Caching of specific data in cache memory
    • G06F2212/452: Instruction code
    • G06F2212/60: Details of cache memory
    • G06F2212/604: Details relating to cache allocation
    • G06F2212/62: Details of cache specific to multiprocessor cache arrangements
    • G06F2212/621: Coherency control relating to peripheral accessing, e.g. from DMA or I/O device

Definitions

  • the present disclosure is generally directed toward computer memory allocation techniques.
  • FIG. 1 is a block diagram depicting a computing system in accordance with at least some embodiments of the present disclosure
  • FIG. 2 is a block diagram depicting details of an illustrative RAID controller in accordance with at least some embodiments of the present disclosure
  • FIG. 3 is a block diagram depicting a first illustrative data structure used in accordance with at least some embodiments of the present disclosure
  • FIG. 4 is a block diagram depicting a second illustrative data structure used in accordance with at least some embodiments of the present disclosure
  • FIG. 5 is a block diagram depicting a third illustrative data structure used in accordance with at least some embodiments of the present disclosure.
  • FIG. 6 is a flow diagram depicting a method of responding to a frame allocation request in accordance with at least some embodiments of the present disclosure
  • FIG. 7 is a flow diagram depicting a method of allocating additional super frames from a stack of free super frames in accordance with at least some embodiments of the present disclosure
  • FIG. 8 is a flow diagram depicting an additional method of responding to a frame allocation request in accordance with at least some embodiments of the present disclosure.
  • FIG. 9 is a flow diagram depicting a method of releasing a super frame back to a stack of free super frames in accordance with at least some embodiments of the present disclosure.
  • the present disclosure describes a frame allocation method in which frames are allocated from variable-sized pools called super frames.
  • although frames and super frames are described with respect to specific sizes or ranges of sizes, it should be appreciated that embodiments of the present disclosure are not limited to particular frame sizes or super frame sizes. Indeed, while a typical allocation of 2 Kbyte and 128 Byte super frames will be described, it should not be construed as limiting embodiments of the present disclosure.
  • a super frame may refer to a large frame that contains at least two sub frames of a particular size or range of sizes (e.g., 64 Bytes/sub frame).
  • a super frame of size 2 Kbyte may contain 32 contiguous 64 Byte sub frames (2048/64 = 32).
  • a 128 Byte super frame may contain two 64 Byte sub frames.
  • the 2 Kbyte and 128 Byte super frames are used for illustrative purposes; it should be appreciated that super frames of any size can be used (e.g., any power of 2 can be used as a super frame size).
  • a state of each sub frame within a super frame is maintained and indicated within a bitmap.
  • one bit within the bitmap may be used to indicate whether a particular sub frame is currently in use. Additional bits in the bitmap can be used to indicate the usage type for the sub frame, for instance, whether a sub frame is used for a Local Message ID (LMID) or some other memory type. If, for example, all of the sub frames used for LMIDs need to be identified, the information stored in these additional bits becomes quite useful.
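The per-sub-frame state described above can be sketched as a small bitmap structure. The following is a minimal illustration, assuming (as one possible layout, not taken from the patent text) two bits per sub frame: an "in use" bit plus a usage-type bit marking LMID sub frames. The class and field widths are hypothetical.

```python
# Hypothetical sketch: 2 bits of state per sub frame in a super frame.
BITS_PER_SUBFRAME = 2
IN_USE = 0b01   # bit 0: sub frame currently allocated
IS_LMID = 0b10  # bit 1: sub frame holds a Local Message ID (LMID)

class SuperFrameBitmap:
    def __init__(self, num_subframes=32):  # 32 sub frames in a 2 Kbyte super frame
        self.num_subframes = num_subframes
        self.bits = 0  # all sub frames start free

    def mark_used(self, idx, is_lmid=False):
        state = IN_USE | (IS_LMID if is_lmid else 0)
        self.bits |= state << (idx * BITS_PER_SUBFRAME)

    def clear(self, idx):
        mask = 0b11 << (idx * BITS_PER_SUBFRAME)
        self.bits &= ~mask

    def in_use(self, idx):
        return bool((self.bits >> (idx * BITS_PER_SUBFRAME)) & IN_USE)

    def lmid_subframes(self):
        # The usage-type bits make "find all LMID sub frames" a simple scan.
        return [i for i in range(self.num_subframes)
                if (self.bits >> (i * BITS_PER_SUBFRAME)) & IS_LMID]

    def all_clear(self):
        return self.bits == 0
```

The `all_clear` check is what later lets the allocator decide when the parent super frame can be released.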
  • a super frame can be provisioned from various types of memory (e.g., SRAM or DRAM) and can be characterized as slow-access or fast-access memory.
  • a super frame pool may be configured to contain all super frames of the same or similar type and same or similar access type (e.g., all SRAM slow access super frames may be combined in a common super frame pool whereas other super frames are combined in other super frame pools).
  • a frame or sub frame allocation request can be configured to indicate the desired or required pool type (e.g., 2 Kbyte, 128 Byte, etc.), the desired or required access type (slow or fast), and the requested frame size, which is typically expressed as an exponent of 2 (e.g., '0' indicates 1 sub frame, '1' indicates 2 sub frames, '2' indicates 4 sub frames, etc.). It should be appreciated that a separate stack of super frames can be maintained for each pool.
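A request carrying these fields might be modeled as below. This is a hedged sketch: the `AllocRequest` tuple and `subframes_requested` helper are hypothetical names, and only the exponent convention ('0' for 1 sub frame, '1' for 2, '2' for 4) comes from the text.

```python
from collections import namedtuple

# Hypothetical request layout: pool type, access type, size exponent.
AllocRequest = namedtuple("AllocRequest", ["pool_type", "access_type", "size_exp"])

def subframes_requested(req):
    # The requested size is an exponent of 2:
    # 0 -> 1 sub frame, 1 -> 2 sub frames, 2 -> 4 sub frames, ...
    return 1 << req.size_exp

req = AllocRequest(pool_type="2KB", access_type="fast", size_exp=2)
```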
  • for each pool type and access type, a super frame tracker is maintained.
  • the tracker contains the ID of each super frame that is currently allocated but not fully used, along with the usage count for that super frame.
  • when a super frame is allocated, an entry is added to the appropriate index in the tracker table. For example, if a super frame is allocated from the fast-access, 2 Kbyte pool, the super frame ID and the number of sub frames used (e.g., the usage count) are added to the 2 Kbyte pool, fast-access tracker at index 4.
  • the super frame bitmap can also be updated to indicate which sub frame is currently in use.
  • the tracker also maintains the usage count.
  • the usage count may indicate which sub frame is available next. For example, a count of 1 indicates that the sub frame at index 0 is in use, whereas a count of 2 indicates that the sub frames at indices 0 and 1 are in use. This avoids the need to search for free sub frames within the tracker.
  • the sub frame indexed by the count is the next frame that can be allocated.
  • the super frame ID is stored in the allocation pointer specific to that frame size (e.g., 64 Byte, 128 Byte, 256 Byte, 512 Byte, or 1 Kbyte).
  • the count is incremented each time a sub frame is allocated. If the usage count becomes equal to the size of the super frame, the super frame ID is removed from the tracker, indicating that the super frame cannot be used further (e.g., the super frame is completely allocated).
  • the sub frame allocation is performed as a forward lookup (e.g., from 0 to the maximum number of sub frames available). Sub frames are allocated until all of the sub frames belonging to a particular super frame are exhausted, even if some sub frames are freed or released in the meantime; freed sub frames are not re-allocated for further requests until the entire super frame becomes free and is re-used for allocation. This is ensured by the usage counter in the super frame tracker being only incremented, never decremented, even when a sub frame is freed or released.
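The forward-only tracker behavior can be sketched as follows. Class and method names are illustrative, and the default 32-sub-frame capacity assumes the 2 Kbyte super frame with 64 Byte sub frames used as the running example.

```python
class SuperFrameTracker:
    """Sketch of a super frame tracker: usage counts only ever increment,
    so the next free sub frame is always the one at index `count`."""

    def __init__(self, capacity=32):
        self.capacity = capacity
        self.counts = {}  # super frame ID -> usage count

    def add(self, super_frame_id):
        self.counts[super_frame_id] = 0

    def allocate_subframe(self, super_frame_id):
        idx = self.counts[super_frame_id]  # no search needed
        self.counts[super_frame_id] = idx + 1
        if self.counts[super_frame_id] == self.capacity:
            # Fully consumed: drop from the tracker so this super frame
            # is never offered again until it is released as a whole.
            del self.counts[super_frame_id]
        return idx  # index of the sub frame just handed out

    def free_subframe(self, super_frame_id):
        # Deliberately never decrements the count: freed sub frames are
        # not re-allocated until the entire super frame becomes free.
        pass
```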
  • after a super frame is allocated, it can be used to fulfill requests for frames with sizes ranging from 64 Bytes to 2 Kbytes (only power-of-two sizes are valid).
  • when an allocation request is received, a linear search from the 64 Byte index up to the largest frame size for the pool is performed to see whether a super frame is available. If a super frame ID at a particular index is valid and the super frame has the required number of sub frames to satisfy the request, the allocation is completed from that index. This ensures that there is no internal fragmentation. If the entirety of the request cannot be satisfied from any index, a new super frame is allocated and its super frame ID is added to the index corresponding to the request size.
  • the subsequent frame allocation requests could be fulfilled from the same super frame as long as there are sufficient sub frames left within the super frame.
  • when the number of sub frames available becomes less than the size associated with the allocation pointer, the super frame ID is moved from the current allocation pointer to the allocation pointer designated for the lower size.
  • for subsequent requests, the allocation pointers are scanned upward until another super frame that can fulfill the request is found. If no super frame is found, or none of the super frames found can fulfill the request, a new super frame is allocated from the stack and the frame allocation process continues as described.
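The pointer scan and fallback to the free stack might look like the following sketch, under the assumption of one allocation pointer per frame size and a per-super-frame count of free bytes; all names are hypothetical.

```python
SIZES = [64, 128, 256, 512, 1024]  # allocation pointer indices, in bytes

def find_super_frame(pointers, free_bytes, request_bytes, free_stack):
    """Scan allocation pointers from the smallest size upward; if no
    partially used super frame can satisfy the request, pull a new one
    from the stack of free super frames."""
    for size in SIZES:
        sf_id = pointers.get(size)
        if sf_id is not None and free_bytes[sf_id] >= request_bytes:
            return sf_id
    sf_id = free_stack.pop()          # allocate a fresh super frame
    free_bytes[sf_id] = 2048          # assumed 2 Kbyte super frame pool
    pointers[request_bytes] = sf_id   # park it at the request-size index
    return sf_id
```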
  • when a sub frame is freed or released, the corresponding bits in the parent super frame bitmap are cleared. Furthermore, if the entire bitmap becomes clear for a particular super frame, the super frame is released (e.g., that particular super frame's super frame ID is pushed back onto the allocation stack).
  • a super frame may not be configured to be freed directly. Rather, it is freed when all of its bits are cleared as part of the process of freeing its sub frames.
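The release path above can be sketched in a few lines: clear the sub frame's bit in the parent bitmap, and only once the whole bitmap is clear push the super frame ID back onto the free stack. Names are illustrative, and a one-bit-per-sub-frame bitmap is assumed for brevity.

```python
def free_subframe(bitmaps, free_stack, super_frame_id, subframe_idx):
    # Clear this sub frame's in-use bit in the parent super frame bitmap.
    bitmaps[super_frame_id] &= ~(1 << subframe_idx)
    if bitmaps[super_frame_id] == 0:
        # Entire bitmap clear: the super frame itself is released and
        # its ID is pushed back onto the allocation stack.
        free_stack.append(super_frame_id)
```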
  • the frame allocation mechanisms described herein can support super frames with particular characteristics, such as association with slow virtual disks or fast virtual disks.
  • the super frames can also be characterized by the memory they are allocated from (e.g., SRAM, DRAM, etc.) or by super frame size (e.g., 2 Kbyte versus 128 Byte). Characterizing the super frames helps ensure that a module requiring frames that are freed quickly uses only such frames, so that the super frame can be freed without being blocked by slower requests.
  • the disclosed frame allocation mechanisms provide a more efficient allocation strategy than most existing or known frame allocation techniques.
  • the disclosed frame allocation mechanisms can cater to the needs of hardware caching acceleration where frames of various sizes and various characteristics are required.
  • the computing system 100 is shown to include a host system 104 , a controller 108 (e.g., a RAID controller), and a storage array 112 having a plurality of storage devices 136 a -N therein.
  • the system 100 may utilize any type of data storage architecture; the particular architecture depicted and described herein (e.g., a RAID architecture) should not be construed as limiting.
  • in a RAID-0 (also referred to as a RAID level 0) scheme, data blocks are stored in order across one or more of the storage devices 136 a -N without redundancy. This effectively means that none of the data blocks are copies of another data block and there is no parity block to recover from failure of a storage device 136 .
  • a RAID-1 (also referred to as a RAID level 1) scheme uses one or more of the storage devices 136 a -N to store a data block and an equal number of additional mirror devices for storing copies of the stored data block.
  • Higher level RAID schemes can further segment the data into bits, bytes, or blocks for storage across multiple storage devices 136 a -N.
  • One or more of the storage devices 136 a -N may also be used to store error correction or parity information.
  • a single unit of storage can be spread across multiple devices 136 a -N and such a unit of storage may be referred to as a stripe.
  • a stripe may include the related data written to multiple devices 136 a -N as well as the parity information written to a parity storage device 136 a -N.
  • in a RAID-5 (also referred to as a RAID level 5) scheme, the data being stored is segmented into blocks for storage across multiple devices 136 a -N, with a single parity block for each stripe distributed in a particular configuration across the multiple devices 136 a -N.
  • This scheme can be compared to a RAID-6 (also referred to as a RAID level 6) scheme in which dual parity blocks are determined for a stripe and are distributed across each of the multiple devices 136 a -N in the array 112 .
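The single parity block a RAID-5 stripe carries is the XOR of the stripe's data blocks, which is what allows any one lost block to be rebuilt from the survivors. A small illustration (not taken from the patent):

```python
from functools import reduce

def xor_blocks(blocks):
    # Byte-wise XOR across equal-length blocks; this is the parity block.
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"\x0f\x0f", b"\xf0\x00", b"\x33\x3c"]
parity = xor_blocks(data)
# Rebuild the second block from the surviving blocks plus parity.
rebuilt = xor_blocks([data[0], data[2], parity])
```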
  • One of the functions of the RAID controller 108 is to make the multiple storage devices 136 a -N in the array 112 appear to a host system 104 as a single high capacity disk drive.
  • the RAID controller 108 may be configured to automatically distribute data supplied from the host system 104 across the multiple storage devices 136 a -N (potentially with parity information) without ever exposing the manner in which the data is actually distributed to the host system 104 .
  • the host system 104 is shown to include a processor 116 , an interface 120 , and memory 124 . It should be appreciated that the host system 104 may include additional components without departing from the scope of the present disclosure.
  • the host system 104 in some embodiments, corresponds to a user computer, laptop, workstation, server, collection of servers, or the like. Thus, the host system 104 may or may not be designed to receive input directly from a human user.
  • the processor 116 of the host system 104 may include a microprocessor, central processing unit (CPU), collection of microprocessors, or the like.
  • the memory 124 may be designed to store instructions that enable functionality of the host system 104 when executed by the processor 116 .
  • the memory 124 may also store data that is eventually written by the host system 104 to the storage array 112 . Further still, the memory 124 may be used to store data that is retrieved from the storage array 112 .
  • Illustrative memory 124 devices may include, without limitation, volatile or non-volatile computer memory (e.g., flash memory, RAM, DRAM, ROM, EEPROM, etc.).
  • the interface 120 of the host system 104 enables the host system 104 to communicate with the RAID controller 108 via a host interface 128 of the RAID controller 108 .
  • the interface 120 and host interface(s) 128 may be of a same or similar type (e.g., utilize a common protocol, a common communication medium, etc.) such that commands issued by the host system 104 are receivable at the RAID controller 108 and data retrieved by the RAID controller 108 is transmittable back to the host system 104 .
  • the interfaces 120 , 128 may correspond to parallel or serial computer interfaces that utilize wired or wireless communication channels.
  • the interfaces 120 , 128 may include hardware that enables such wired or wireless communications.
  • the communication protocol used between the host system 104 and the RAID controller 108 may correspond to any type of known host/memory control protocol.
  • Non-limiting examples of protocols that may be used between interfaces 120 , 128 include SAS, SATA, SCSI, FibreChannel (FC), iSCSI, ATA over Ethernet, InfiniBand, or the like.
  • the RAID controller 108 may provide the ability to represent the entire storage array 112 to the host system 104 as a single high volume data storage device. Any known mechanism can be used to accomplish this task.
  • the RAID controller 108 may help to manage the storage devices 136 a -N (which can be hard disk drives, solid-state drives, or combinations thereof) so that they operate as a logical unit.
  • the RAID controller 108 may be physically incorporated into the host device 104 as a Peripheral Component Interconnect (PCI) expansion card (e.g., a PCI Express (PCIe) card) or the like. In such situations, the RAID controller 108 may be referred to as a RAID adapter.
  • the storage devices 136 a -N in the storage array 112 may be of similar types or may be of different types without departing from the scope of the present disclosure.
  • the storage devices 136 a -N may be co-located with one another or may be physically located in different geographical locations.
  • the nature of the storage interface 132 may depend upon the types of storage devices 136 a -N used in the storage array 112 and the desired capabilities of the array 112 .
  • the storage interface 132 may correspond to a virtual interface or an actual interface. As with the other interfaces described herein, the storage interface 132 may include serial or parallel interface technologies. Examples of the storage interface 132 include, without limitation, SAS, SATA, SCSI, FC, iSCSI, ATA over Ethernet, InfiniBand, or the like.
  • the RAID controller 108 is shown to include the host interface(s) 128 and storage interface(s) 132 .
  • the RAID controller 108 is also shown to include a processor 204 , memory 208 , one or more drivers 212 , and a power source 216 .
  • the processor 204 may include an Integrated Circuit (IC) chip or multiple IC chips, a CPU, a microprocessor, or the like.
  • the processor 204 may be configured to execute instructions in memory 208 that are shown to include frame allocation instructions 224 , bitmap management instructions 228 , index management instructions 232 , and frame type analysis instructions 236 .
  • the processor 204 may modify one or more data entries (e.g., bit values) in a super frame bitmap 220 that is shown to be maintained internally to the RAID controller 108 . It should be appreciated, however, that some or all of the super frame bitmap 220 may be stored and/or maintained external to the RAID controller 108 . Alternatively or additionally, the super frame bitmap 220 may be stored or contained within memory 208 of the RAID controller 108 .
  • the memory 208 may be volatile and/or non-volatile in nature. As indicated above, the memory 208 may include any hardware component or collection of hardware components that are capable of storing instructions and communicating those instructions to the processor 204 for execution. Non-limiting examples of memory 208 include RAM, ROM, flash memory, EEPROM, variants thereof, combinations thereof, and the like.
  • the instructions stored in memory 208 are shown to be different instruction sets, but it should be appreciated that the instructions can be combined into a smaller number of instruction sets without departing from the scope of the present disclosure.
  • the frame allocation instructions 224 when executed, may enable the processor 204 to respond to frame allocation requests, identify available super frames and sub frames therein, allocate such super frames or sub frames as appropriate, and communicate that such an allocation has occurred.
  • the bitmap management instructions 228 when executed, may enable the processor 204 to recognize that the frame allocation instructions 224 have allocated a super frame or sub frame. Based on that recognition, the bitmap management instructions 228 may adjust values for entries 240 a -M within the super frame bitmap 220 . For instance, when a new super frame is allocated for a frame allocation request, the bitmap management instructions 228 may change a bit value for a corresponding entry 240 a -M of the now-allocated super frame in the bitmap 220 . If a super frame is cleared and no longer allocated, then the corresponding entry 240 a -M in the bitmap 220 may be changed back to an original value indicating non-allocation.
  • the index management instructions 232 , when executed, may enable the processor 204 to manage usage counts for super frames allocated by the frame allocation instructions 224 .
  • the index management instructions 232 may increment or update a count assigned to the allocated super frame. If the usage count becomes equal to the size of the super frame, then the corresponding super frame ID can be removed from being tracked by the index management instructions. Such an action may indicate that the super frame is no longer eligible for further use or allocation.
  • the frame type analysis instructions 236 , when executed, may enable the processor 204 to analyze frames and characteristics thereof. For instance, the frame type analysis instructions 236 may determine whether a particular super frame or sub frame is a fast or slow type of super frame or sub frame. The frame type analysis instructions 236 may alternatively or additionally enable the processor 204 to determine whether the super frame or sub frame is being allocated from a particular memory type (e.g., SRAM, DRAM, etc.).
  • the super frame 300 is shown to include a plurality of sub frames 304 , which could be organized into a plurality of 64 Byte columns.
  • Each sub frame 304 may be of a particular size and the size of one sub frame 304 does not necessarily need to be the same as the size of other sub frames 304 .
  • Illustrative sizes of sub frames 304 can be 64 Bytes, 128 Bytes, 256 Bytes, 512 Bytes, or 1 Kbyte.
  • adjacent sub frames may be assigned sub frame IDs incrementally. That is, adjacent sub frames may have sequential sub frame IDs.
  • the sub frames 304 may have different characteristics than other sub frames 304 .
  • the sub frames 304 which are allocated for a particular allocation request may depend upon the size of the sub frame and the frame size identified in the allocation request. It may be desirable for the frame allocation instructions 224 to identify sub frames 304 which have a size greater than or equal to the frame size identified in the allocation request and allocate the next available sub frame having the appropriate size.
  • the frame allocation instructions 224 may be designed to allocate sub frames in a forward lookup manner, meaning that sub frames 304 within the super frame 300 are allocated in order until every sub frame 304 within the super frame 300 has been allocated.
  • the frame allocation instructions 224 may perform a linear search, from the smallest frame size index up to the largest frame size for the pool, until a super frame from the pool of available super frames that can accommodate the frame request is identified. This search may be completed using a search index that helps ensure there is no internal fragmentation of the super frame.
  • the index may be maintained and updated as super frames are used and sub frames therefrom are allocated.
  • the index may include usage counters for super frames and the index may be maintained by the index management instructions 232 . If the entirety of a request cannot be satisfied from any of the indices, then a new super frame is allocated and the super frame ID is added to the index corresponding to the size request.
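  • The index lookup with a fallback to a fresh super frame, as described above, might be sketched roughly as follows; all names and the 32-sub-frame geometry are assumptions for illustration:

```python
SUB_FRAMES_PER_SUPER = 32  # assumed: 2 Kbyte super frame of 64 Byte sub frames

def allocate(index, free_stack, n_sub_frames):
    """index holds [super_frame_id, usage_count] entries for partially used
    super frames; free_stack holds IDs of fully free super frames."""
    for entry in index:
        if SUB_FRAMES_PER_SUPER - entry[1] >= n_sub_frames:
            first = entry[1]            # count names the next free sub frame
            entry[1] += n_sub_frames
            return entry[0], first      # (super frame ID, first offset)
    # No index entry can satisfy the request: allocate a new super frame
    # and add its ID, with its usage count, to the index.
    sf_id = free_stack.pop()
    index.append([sf_id, n_sub_frames])
    return sf_id, 0

index, stack = [], [7, 6, 5]
assert allocate(index, stack, 4) == (5, 0)   # new super frame 5
assert allocate(index, stack, 8) == (5, 4)   # same super frame, next offset
```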
  • the sub frames 304 may also have usage information stored therein.
  • data contained within each corresponding sub frame 304 unit may be updated to reflect the allocation and/or type of allocation.
  • each sub frame 304 may have one or a set of bits stored therein (or associated therewith) that reflect a usage condition of the corresponding sub frame.
  • the super frame 400 of FIG. 4 is shown to have a corresponding size of 128 Bytes and is constructed of X sub frames 404 .
  • the super frame 400 is organized similarly to super frame 300 except that super frame 400 has a different number of sub frames 404 and the number of columns 408 may be different from the number of columns in the super frame 300 .
  • Each sub frame 404 may be designed for allocation in response to a frame allocation request. Depending upon the size requested in the frame allocation request, a different number of sub frames 404 may be allocated to fulfill the request.
  • the sub frames 404 may be allocated linearly (e.g., lower numbered sub frames 404 may be allocated before higher numbered sub frames 404 ) if the size of such sub frames 404 allows.
  • the sub frames 404 may also have usage information stored therein.
  • data contained within each corresponding sub frame 404 unit may be updated to reflect the allocation.
  • each sub frame 404 may have one or a set of bits stored therein (or associated therewith) that reflect a usage condition of the corresponding sub frame.
  • the super frame 400 still corresponds to a set of consecutively numbered sub frames 404 .
  • the data structure 500 may correspond to an example of the super frame bitmap 220 without departing from the scope of the present disclosure.
  • the data structure 500 may correspond to part or all of an index used to track super frame usage.
  • the data structure 500 is shown to include a number of fields that enable tracking of super frame allocations.
  • the fields included in the data structure 500 include a pool type field 504 , an access type field 508 , a frame size field 512 , a frame ID field 516 , and a usage count field 520 .
  • a data structure 500 in the format depicted in FIG. 5 may be used as a super frame tracker.
  • the super frame tracker may contain the super frame identifier (in the frame ID field 516 ) that is currently allocated and not fully used.
  • a usage count may also be updated to reflect the incomplete usage.
  • an entry can be added to the appropriate index in the super frame tracker.
  • the super frame ID 516 and the number of sub frames used (which may also be referred to as the usage count 520 ) are added to the 2 Kbyte pool, Fast access tracker into index #4.
  • the bitmap 220 can also be updated to indicate which sub frame is currently in use and the super frame to which the sub frame belongs.
  • the data structure 500 may also be used to maintain the ongoing usage count in the usage count field 520 .
  • the usage count field 520 may also reflect which sub frame is available for the next allocation request. For example, count “1” may indicate that the sub frame at index 0 is in use whereas count “2” may indicate that the sub frames at indices 0 and 1 are both in use. This type of count system helps avoid the need for searching all free sub frames within the tracker. Rather, the sub frame indexed with the count would correspond to the next available sub frame that is free for allocation. Thus, tracking of available and non-available sub frames can be completed with a single Byte of data, thereby avoiding the need to search every single sub frame to determine whether it is available (or not).
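  • The count-as-next-free-index scheme described above can be illustrated with a one-line helper; the names here are hypothetical:

```python
def next_free_sub_frame(usage_count, super_frame_size):
    """The usage count itself indexes the next free sub frame, so no
    per-sub-frame search is required."""
    return usage_count if usage_count < super_frame_size else None

assert next_free_sub_frame(0, 32) == 0      # nothing in use yet
assert next_free_sub_frame(2, 32) == 2      # sub frames 0 and 1 in use
assert next_free_sub_frame(32, 32) is None  # super frame fully used
```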
  • the pool type field 504 provides information related to whether a particular super frame is retrieved from or belongs to a set of relatively large super frames (e.g., 2 Kbyte super frames) or whether the particular super frame is retrieved from or belongs to a set of relatively small super frames (e.g., 128 Byte super frames). This information may be represented using one or several bits or it may be represented using a string (e.g., an alphanumeric string).
  • the frame allocation instructions 224 , bitmap management instructions 228 , and index management instructions 232 may all work cooperatively to help simultaneously analyze allocation requests and update the appropriate data structures (e.g., bitmap 220 and data structures 300 , 400 , 500 ).
  • the super frame ID is stored in the allocation pointer specific to that frame size as defined in the frame size field 512 . For instance, if the 64 byte sub frame is allocated from a super frame, then the frame ID 516 entry for corresponding frame size 512 entry is updated to include the identifier of the super frame from which the sub frame was allocated.
  • the corresponding usage count 520 is incremented by the index management instructions 232 .
  • if the usage count becomes equal to the size of the super frame, then the super frame ID is removed, which indicates that the super frame is no longer available for use.
  • a RAID controller 108 or components thereof can be configured to perform some or all of the features described herein.
  • the described functions can be performed in a component other than a RAID controller 108 .
  • the described functions can be performed within a host system 104 or in some other memory controller other than a RAID controller 108 .
  • the method begins when a controller 108 receives a frame allocation request from a host system 104 (step 604 ).
  • the frame allocation request may be received in one or many packets of data. Alternatively or additionally, the frame request may be received in some other non-packet format.
  • the frame allocation request may include an indication of a size of frame required to fulfill the request (e.g., a frame request size) along with possibly other information pertinent to the frame request (e.g., access type requested, pool type requested, etc.).
  • the controller 108 may invoke the frame allocation instructions 224 to allocate a super frame from a stack of free super frames (step 608 ).
  • the specific super frame that is chosen by the frame allocation instructions 224 may be chosen to match the frame request size, the access type requested, and/or the pool type requested.
  • the frame allocation instructions 224 and/or index management instructions 232 may update appropriate entries in the bitmap 220 (step 612 ) and within the data structures 300 , 400 , or 500 to reflect the allocation of the chosen super frame. Furthermore, an identifier associated with the chosen super frame (e.g., a super frame ID) may be determined by the frame allocation instructions 224 (step 616 ) and that super frame ID may be entered into the appropriate data structures 300 , 400 , 500 to reflect that the super frame has been allocated and sub frames from that super frame have been allocated.
  • the super frame (or sub frames therein) are enabled to store data in connection with the frame allocation request (step 620 ). This data may be stored in any storage device 136 a -N or the like that is associated with the allocated super frame/sub frame.
  • the method begins with the frame allocation instructions 224 analyzing a frame allocation request after a super frame has already been partially allocated for a previous frame request.
  • the frame allocation instructions 224 analyze subsequent frame allocation requests with respect to remaining frames (step 704 ).
  • the frame allocation instructions 224 will identify/determine that the remaining sub frames within an allocated super frame are insufficient to store the data in connection with the recently-received frame allocation request (step 708 ).
  • the frame allocation instructions 224 will allocate a second super frame from the stack of free super frames (step 712 ). If necessary, the frame allocation instructions 224 may allocate multiple super frames to accommodate a frame request in which the requested frame size is larger than can be supported with a single super frame.
  • the frame allocation instructions 224 and/or index management instructions 232 may update appropriate entries in the bitmap 220 (step 716 ) and within the data structures 300 , 400 , or 500 to reflect the allocation of the second super frame (and possibly other super frames). Furthermore, an identifier associated with the second super frame (e.g., a super frame ID #2) may be determined by the frame allocation instructions 224 (step 720 ) and that super frame ID may be entered into the appropriate data structures 300 , 400 , 500 to reflect that the super frame has been allocated and sub frames from that super frame have been allocated.
  • the super frame (or sub frames therein) are enabled to store data in connection with the frame allocation request (step 724 ). This data may be stored in any storage device 136 a -N or the like that is associated with the allocated super frame/sub frame.
  • the method begins when a frame allocation request is received at the controller 108 (step 804 ).
  • the frame allocation request received in this step may define one or multiple characteristics associated with the desired frame or frame type.
  • the allocation request may indicate a desired frame usage type (e.g., LMID or other memory type), desired frame access type (e.g., Slow or Fast), desired frame size, and/or desired pool type (e.g., 2 Kbyte versus 128 Byte).
  • the frame allocation instructions 224 may then determine whether a full super frame is necessary to accommodate the frame allocation request (step 808 ). If the query of step 808 is answered affirmatively, then the method continues with the frame allocation instructions 224 searching/traversing the data structure 500 starting from Index 0 (step 812 ). As the frame allocation instructions 224 search the data structure 500 , the frame allocation instructions 224 determine whether the frame allocation request can be satisfied from the index currently being analyzed (step 816 ). If the answer to this query is negative, then the Index is incremented (step 820 ) and the analysis of step 816 is repeated as long as the current Index is not greater than a predefined maximum Index (step 824 ).
  • the frame allocation instructions 224 and/or the index management instructions 232 will obtain a new super frame, set the appropriate super frame ID, update the tracker information, update the bitmap 220 for the appropriate sub frames being allocated from within the super frame, and then increment the usage count for the super frame having the sub frames allocated from therein (step 828 ). As discussed above, the amount by which the usage count is incremented will depend upon the sub frame that is allocated and the size of the allocated sub frame. The method then proceeds by returning the allocated sub frame for data storage (step 832 ).
  • Returning to step 816 , if a sub frame is identified from an already-allocated super frame prior to the Index reaching the maximum index, then the appropriately sized sub frame from the already-allocated super frame is allocated. This results in the frame allocation instructions 224 and/or the index management instructions 232 setting the super frame ID and the sub frame ID for the allocated sub frame and then incrementing the usage count for the allocated sub frame (step 844 ). Thereafter, the index management instructions 232 will determine whether the usage count is greater than or equal to the maximum number of frames for the pool being analyzed (step 848 ). If the usage count is greater than or equal to the maximum number of frames for the pool, then the tracker index is invalidated (step 852 ), after which the method proceeds to step 832 .
  • In step 856 , the index management instructions 232 determine whether the Index is equal to the current index. If this query is answered negatively, then the method proceeds to step 832 . If the query of step 856 is answered affirmatively, then the index management instructions 232 invalidate the current index and set the tracker to a new target index that corresponds to an index of the super frame ID that was set in step 844 (step 860 ). Thereafter, the method proceeds to step 832 .
  • Returning to step 808 , if a full frame is requested, then the frame allocation instructions 224 will allocate a new super frame from the stack of free super frames (step 836 ). Thereafter or simultaneous therewith, all of the bits in the super frame bitmap are initialized. During this initialization, the bits in the super frame bitmap have their corresponding sub frame IDs set equal to the super frame ID times the super frame size (step 840 ). This ensures that all of the sub frames within the newly allocated super frame maintain continuous addressing, which ultimately increases the speed with which sub frames are analyzed for later distribution toward a frame allocation request. Thereafter, the method proceeds to step 832 .
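  • The continuous-addressing initialization of step 840 can be sketched as follows, under the assumption that the super frame size is expressed in sub frames; the function name is illustrative:

```python
def init_sub_frame_ids(super_frame_id, super_frame_size):
    """Derive consecutive sub frame IDs from the super frame ID, so that
    addressing stays continuous across the whole super frame."""
    base = super_frame_id * super_frame_size  # step 840
    return [base + offset for offset in range(super_frame_size)]

# Super frame 2 with 32 sub frames owns sub frame IDs 64 through 95.
ids = init_sub_frame_ids(2, 32)
assert ids[0] == 64 and ids[-1] == 95
```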
  • the method begins when a request is received at the controller 108 to free a super frame (step 904 ). This request may be initiated by the host system 104 or some other component in the system 100 .
  • a super frame has its sub frames and their corresponding information analyzed (step 908 ). This analysis may be performed by the frame allocation instructions 224 , the index management instructions 232 , or some other component of the controller 108 .
  • the appropriate bits (or data fields) in the super frame bitmap are then cleared (step 912 ). Thereafter, an inquiry is made as to whether or not all of the bitmap has been cleared (step 916 ). If so, then the super frame is released back to the stack or pool of free super frames (step 920 ). If not, then the method will simply end (step 924 ) without releasing the super frame back to the stack or pool of free super frames.
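  • The release flow of steps 908-924 might be sketched as follows; the function and variable names are assumptions for illustration only:

```python
def free_sub_frames(sub_frame_bits, sub_frame_indices, free_stack, super_frame_id):
    """Clear the requested bits; release the super frame back to the free
    stack only when its whole bitmap is clear."""
    for i in sub_frame_indices:
        sub_frame_bits[i] = 0              # step 912: clear the bits
    if not any(sub_frame_bits):            # step 916: entire bitmap cleared?
        free_stack.append(super_frame_id)  # step 920: release super frame
        return True
    return False                           # step 924: end without releasing

bits, stack = [1, 1, 0, 0], []
assert free_sub_frames(bits, [0], stack, 9) is False  # bit 1 still set
assert free_sub_frames(bits, [1], stack, 9) is True   # now fully clear
assert stack == [9]
```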

Abstract

A system and method for efficient variable length memory frame allocation are described. The method is described to include receiving a frame allocation request from a host system, allocating a super frame from a stack of free super frames for the frame allocation request, the super frame comprising a set of consecutively numbered frames, updating entries in a super frame bitmap to indicate that the super frame has been allocated from the stack of free super frames, determining a super frame identifier for the allocated super frame, and enabling the super frame or the set of consecutively numbered frames to be allocated for storing data in connection with the frame allocation request or subsequent frame allocation requests from the host system.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This Non-Provisional Patent Application claims the benefit of U.S. Provisional Patent Application No. 62/410,752, filed Oct. 20, 2016, the entire disclosure of which is hereby incorporated herein by reference.
  • FIELD OF THE DISCLOSURE
  • The present disclosure is generally directed toward computer memory allocation techniques.
  • BACKGROUND
  • Traditional dynamic memory allocation schemes require significant memory to maintain metadata and also significant computation to search for a free block or to free a used block. Advanced caching algorithms require many sizes of fixed memory blocks to be allocated at run time. The lifetime of these blocks varies based on the usage of a given block, for example whether it stores a temporary state of cache or whether it is used to issue write/read requests to devices. Typically, dynamic memory allocation for such use cases is not optimal. The memory allocation strategy has to be simple, fast, and easy to debug when issues arise in allocation algorithms. Apart from that, memory blocks need clear separation in terms of their life spans.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The present disclosure is described in conjunction with the appended figures, which are not necessarily drawn to scale:
  • FIG. 1 is a block diagram depicting a computing system in accordance with at least some embodiments of the present disclosure;
  • FIG. 2 is a block diagram depicting details of an illustrative RAID controller in accordance with at least some embodiments of the present disclosure;
  • FIG. 3 is a block diagram depicting a first illustrative data structure used in accordance with at least some embodiments of the present disclosure;
  • FIG. 4 is a block diagram depicting a second illustrative data structure used in accordance with at least some embodiments of the present disclosure;
  • FIG. 5 is a block diagram depicting a third illustrative data structure used in accordance with at least some embodiments of the present disclosure;
  • FIG. 6 is a flow diagram depicting a method of responding to a frame allocation request in accordance with at least some embodiments of the present disclosure;
  • FIG. 7 is a flow diagram depicting a method of allocating additional super frames from a stack of free super frames in accordance with at least some embodiments of the present disclosure;
  • FIG. 8 is a flow diagram depicting an additional method of responding to a frame allocation request in accordance with at least some embodiments of the present disclosure; and
  • FIG. 9 is a flow diagram depicting a method of releasing a super frame back to a stack of free super frames in accordance with at least some embodiments of the present disclosure.
  • DETAILED DESCRIPTION
  • The ensuing description provides embodiments only, and is not intended to limit the scope, applicability, or configuration of the claims. Rather, the ensuing description will provide those skilled in the art with an enabling description for implementing the described embodiments. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the appended claims.
  • Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and this disclosure.
  • As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprise,” “comprises,” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The term “and/or” includes any and all combinations of one or more of the associated listed items.
  • As will be discussed in further detail herein, embodiments of the present disclosure contemplate a frame allocation method in which frames are allocated based on variable sized pools called super frames. Although frames and super frames are described with respect to specific sizes or ranges of sizes, it should be appreciated that embodiments of the present disclosure are not limited to particular frame sizes or super frame sizes. Indeed, while a typical allocation of 2 Kbyte and 128 Byte super frames will be described, it should not be construed as limiting embodiments of the present disclosure. In some embodiments, a super frame, as used herein, may refer to a large frame that contains at least two sub frames of a particular size or range of sizes (e.g., 64 Bytes/sub frame). As a non-limiting example, a super frame of size 2 Kbyte may contain approximately 32 contiguous 64 Byte sub frames. As another non-limiting example, a 128 Byte super frame may contain two 64 Byte sub frames. The 2 Kbyte and 128 Byte super frames are used purely for illustration; it should be appreciated that super frames of any frame size can be used (e.g., a power of 2 can be used to determine any possible super frame size).
  • In some embodiments, a state of each sub frame within a super frame is maintained and indicated within a bitmap. As a non-limiting example, one bit within the bitmap may be used to indicate if a particular sub frame is currently in use (or not). Additional bits in the bitmap can also be used to indicate the usage type for the sub frame. For instance, additional bits in the bitmap can be used to indicate whether a sub frame is used for a Local Message ID (LMID) or some other memory type is being used. If there is a need to find out all the sub frames that are used for LMIDs, for example, then this information stored in the additional bits becomes quite useful.
  • In some embodiments, a super frame can be provisioned from various types of memory like SRAM/DRAM or characterized by Slow versus Fast access memory. A super frame pool may be configured to contain all super frames of the same or similar type and same or similar access type (e.g., all SRAM slow access super frames may be combined in a common super frame pool whereas other super frames are combined in other super frame pools).
  • In some embodiments, a frame or sub frame allocation request can be configured to indicate the desired or required pool type (e.g., 2 Kbytes, 128 Bytes, etc.), followed by the desired or required access type of Slow or Fast, and the requested frame size, which is typically expressed as a power-of-two exponent (e.g., ‘0’ indicates 1 sub frame, ‘1’ indicates 2 sub frames, and ‘2’ indicates 4 sub frames, etc.). It should be appreciated that a separate stack of super frames can be maintained for each pool.
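  • One possible encoding of such a request, with the size expressed as a power-of-two exponent, is sketched below; the field names are assumptions, not taken from the disclosure:

```python
def decode_request(pool_type, access_type, size_exponent):
    """'0' -> 1 sub frame, '1' -> 2 sub frames, '2' -> 4 sub frames, ..."""
    return {
        "pool": pool_type,       # e.g., "2K" or "128B"
        "access": access_type,   # e.g., "Fast" or "Slow"
        "sub_frames": 1 << size_exponent,  # power-of-two sub frame count
    }

assert decode_request("2K", "Fast", 0)["sub_frames"] == 1
assert decode_request("2K", "Fast", 2)["sub_frames"] == 4
```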
  • In some embodiments, for each pool type and access type, a super frame tracker is maintained. The tracker contains the super frame ID that is currently allocated and not fully used and the usage count for the super frame. Whenever a sub frame is allocated from a super frame, an entry is added to the appropriate index in the tracker table. For example, if a super frame is allocated from the Fast Frame, 2 Kbyte Pool, the super frame ID and the number of sub frames used (e.g., the usage count) are added to the 2 Kbyte Pool, Fast access tracker into index 4. The super frame bitmap can also be updated to indicate which sub frame is currently in use.
  • In some embodiments, the tracker also maintains the usage count. The usage count may indicate which sub frame is available next. For example, count 1 indicates that the sub frame at index 0 is in use whereas count 2 indicates that the sub frames at indices 0 and 1 are in use. This avoids the need to search for free sub frames within the tracker. The sub frame indexed with the count would be the next frame that can be allocated.
  • Based on the size of first request to be serviced from a freshly allocated super frame, the super frame ID is stored in the allocation pointer specific to that frame size: e.g., 64, 128, 256, 512 and 1 K. There would not necessarily need to be a tracker for the largest super frame in the pool. For example, assume there is no tracker for the 2 Kbyte super frame. If there is a request for 2 Kbyte frame, then the entirety of the super frame is allocated directly from the super frame stack and since it is used in full, there is no need for it to go into the tracker. On the other hand, whenever a sub frame is allocated from the tracker, the count is incremented. If the usage count becomes equal to the size of the super frame, then the super frame ID is removed from the tracker. This indicates that this super frame cannot be used further (e.g., the super frame is completely allocated).
  • In some embodiments, the sub frame allocation is performed in terms of forward lookup (e.g., from 0 to the maximum sub frames available). Sub frames are allocated until all the sub frames belonging to a particular super frame are exhausted (e.g., even if some sub frames get freed or released) because the sub frames would not be re-allocated for further requests until the entire super frame becomes free and is re-used for allocation. This is ensured by the fact that the usage counter in the super frame tracker is only ever incremented and never decremented, even when a sub frame is freed or released.
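  • The forward-only invariant described above can be illustrated with a minimal tracker sketch; the class and method names are hypothetical:

```python
class Tracker:
    """Usage counter is only ever incremented, never decremented."""
    def __init__(self, size):
        self.size = size   # sub frames in the super frame
        self.count = 0     # monotonically non-decreasing

    def allocate(self, n):
        if self.count + n > self.size:
            return None    # caller must fall back to a new super frame
        start = self.count
        self.count += n
        return start

    def release(self, n):
        pass  # deliberately does NOT rewind the counter

t = Tracker(4)
assert t.allocate(2) == 0
t.release(2)                  # freeing does not make sub frames reusable
assert t.allocate(2) == 2     # allocation still moves forward
assert t.allocate(1) is None  # exhausted until the super frame is freed whole
```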
  • After a super frame is allocated, it can be used to fulfill requests for frames with sizes ranging from 64 Bytes to 2 Kbytes (only power-of-two sizes are valid). When a sub frame needs to be allocated, a linear search from the 64 Byte index up to the largest frame size for the pool is performed to see if a super frame is available. If a super frame ID is valid in a particular index and if it has the required number of sub frames to satisfy the request, the allocation is completed from this index. This makes sure that there is no internal fragmentation. As expected, if the entirety of the request cannot be satisfied from any of the indices, then a new super frame is allocated and the super frame ID is added to the index corresponding to the request size.
  • Subsequent frame allocation requests can be fulfilled from the same super frame as long as there are sufficient sub frames left within the super frame. If, after an allocation is performed from a particular index, the number of sub frames available is less than the size associated with the allocation pointer, then the super frame ID is moved from the current allocation pointer to the allocation pointer designated for the lower size.
  • When a new request cannot be fulfilled from a super frame located at the lowest allocation pointer, the allocation pointers are scanned upward until another super frame that can fulfill the request is found. If no super frame is found or none of the super frames found can fulfill the request, a new super frame is allocated from the stack and the frame allocation process continues as described.
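  • One possible reading of this upward scan, assuming per-size allocation pointers, is sketched below; all structures and names here are illustrative assumptions:

```python
SIZES = [64, 128, 256, 512, 1024]  # assumed allocation pointer tiers (Bytes)

def find_pointer(pointers, request_bytes):
    """pointers maps frame size -> super frame ID (or None).
    Scan upward from the smallest size that can satisfy the request."""
    for size in SIZES:
        if size >= request_bytes and pointers.get(size) is not None:
            return pointers[size]
    return None  # no super frame found: allocate a new one from the stack

ptrs = {64: None, 128: None, 256: 11, 512: None, 1024: 42}
assert find_pointer(ptrs, 100) == 11     # 128 is empty, 256 holds frame 11
assert find_pointer(ptrs, 600) == 42     # scanned up to the 1024 pointer
assert find_pointer(ptrs, 2048) is None  # fall back to the free stack
```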
  • In some embodiments, when an allocated sub frame gets released the corresponding bits in the parent super frame bitmap get cleared. Furthermore, if the entire bitmap becomes clear for a particular super frame, then the super frame gets released (e.g., that particular super frame's super frame ID is pushed back into the allocation stack). A super frame may not be configured to be freed directly. Rather, it can be freed when all of the bits are cleared as part of the freeing process of sub frames.
  • In some embodiments, when an allocation is requested for a frame size which is the same as the super frame size, then there is no search involved. A new super frame is immediately allocated directly from the free stack, the request is granted, and all the bits corresponding to the super frame are set to indicate that the super frame is completely used.
  • It should be appreciated that the frame allocation mechanisms described herein can have particular characteristics like Slow virtual disks, Fast virtual disks, etc. The frame allocation mechanisms can also have characteristics like allocation from SRAM, DRAM, etc., or may be based on the size of the super frame (e.g., whether 2 Kbyte or 128 Bytes). Characterizing the super frames helps ensure that a module that requires frames that get freed up quickly uses only such frames, so that the super frame gets freed without being blocked by slow requests.
  • As can be appreciated, the disclosed frame allocation mechanisms provide a more efficient allocation strategy than most existing or known frame allocation techniques. The disclosed frame allocation mechanisms can cater to the needs of hardware caching acceleration where frames of various sizes and various characteristics are required.
  • With reference now to FIG. 1, additional details of a computing system 100 capable of implementing frame allocation techniques will be described in accordance with at least some embodiments of the present disclosure. The computing system 100 is shown to include a host system 104, a controller 108 (e.g., a RAID controller), and a storage array 112 having a plurality of storage devices 136 a-N therein. The system 100 may utilize any type of data storage architecture. The particular architecture depicted and described herein (e.g., a RAID architecture) should not be construed as limiting embodiments of the present disclosure. If implemented as a RAID architecture, however, it should be appreciated that any type of RAID scheme may be employed (e.g., RAID-0, RAID-1, RAID-2, . . . , RAID-5, RAID-6, etc.).
  • In a RAID-0 (also referred to as a RAID level 0) scheme, data blocks are stored in order across one or more of the storage devices 136 a-N without redundancy. This effectively means that none of the data blocks are copies of another data block and there is no parity block to recover from failure of a storage device 136. A RAID-1 (also referred to as a RAID level 1) scheme, on the other hand, uses one or more of the storage devices 136 a-N to store a data block and an equal number of additional mirror devices for storing copies of a stored data block. Higher level RAID schemes can further segment the data into bits, bytes, or blocks for storage across multiple storage devices 136 a-N. One or more of the storage devices 136 a-N may also be used to store error correction or parity information.
  • A single unit of storage can be spread across multiple devices 136 a-N and such a unit of storage may be referred to as a stripe. A stripe, as used herein and as is well known in the data storage arts, may include the related data written to multiple devices 136 a-N as well as the parity information written to a parity storage device 136 a-N. In a RAID-5 (also referred to as a RAID level 5) scheme, the data being stored is segmented into blocks for storage across multiple devices 136 a-N with a single parity block for each stripe distributed in a particular configuration across the multiple devices 136 a-N. This scheme can be compared to a RAID-6 (also referred to as a RAID level 6) scheme in which dual parity blocks are determined for a stripe and are distributed across each of the multiple devices 136 a-N in the array 112.
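The single-parity arrangement described above can be illustrated with a short sketch. This is an illustrative example only (the helper name `xor_blocks` is an assumption, not part of the disclosure); real controllers compute parity in hardware.

```python
# Sketch of RAID-5 style single-parity computation. The parity block
# is the byte-wise XOR of the data blocks in a stripe, so any one
# lost block can be rebuilt by XOR-ing the survivors with the parity.

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# A stripe of three data blocks plus one parity block.
data = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]
parity = xor_blocks(data)

# Recover a lost block from the other blocks and the parity.
recovered = xor_blocks([data[0], data[2], parity])
assert recovered == data[1]
```

A RAID-6 scheme extends this by computing a second, independent parity block per stripe so that any two lost blocks can be recovered.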
  • One of the functions of the RAID controller 108 is to make the multiple storage devices 136 a-N in the array 112 appear to a host system 104 as a single high capacity disk drive. Thus, the RAID controller 108 may be configured to automatically distribute data supplied from the host system 104 across the multiple storage devices 136 a-N (potentially with parity information) without ever exposing the manner in which the data is actually distributed to the host system 104.
  • In the depicted embodiment, the host system 104 is shown to include a processor 116, an interface 120, and memory 124. It should be appreciated that the host system 104 may include additional components without departing from the scope of the present disclosure. The host system 104, in some embodiments, corresponds to a user computer, laptop, workstation, server, collection of servers, or the like. Thus, the host system 104 may or may not be designed to receive input directly from a human user.
  • The processor 116 of the host system 104 may include a microprocessor, central processing unit (CPU), collection of microprocessors, or the like. The memory 124 may be designed to store instructions that enable functionality of the host system 104 when executed by the processor 116. The memory 124 may also store data that is eventually written by the host system 104 to the storage array 112. Further still, the memory 124 may be used to store data that is retrieved from the storage array 112. Illustrative memory 124 devices may include, without limitation, volatile or non-volatile computer memory (e.g., flash memory, RAM, DRAM, ROM, EEPROM, etc.).
  • The interface 120 of the host system 104 enables the host system 104 to communicate with the RAID controller 108 via a host interface 128 of the RAID controller 108. In some embodiments, the interface 120 and host interface(s) 128 may be of a same or similar type (e.g., utilize a common protocol, a common communication medium, etc.) such that commands issued by the host system 104 are receivable at the RAID controller 108 and data retrieved by the RAID controller 108 is transmittable back to the host system 104. The interfaces 120, 128 may correspond to parallel or serial computer interfaces that utilize wired or wireless communication channels. The interfaces 120, 128 may include hardware that enables such wired or wireless communications. The communication protocol used between the host system 104 and the RAID controller 108 may correspond to any type of known host/memory control protocol. Non-limiting examples of protocols that may be used between interfaces 120, 128 include SAS, SATA, SCSI, FibreChannel (FC), iSCSI, ATA over Ethernet, InfiniBand, or the like.
  • The RAID controller 108 may provide the ability to represent the entire storage array 112 to the host system 104 as a single high volume data storage device. Any known mechanism can be used to accomplish this task. The RAID controller 108 may help to manage the storage devices 136 a-N (which can be hard disk drives, solid-state drives, or combinations thereof) so as to operate as a logical unit. In some embodiments, the RAID controller 108 may be physically incorporated into the host device 104 as a Peripheral Component Interconnect (PCI) expansion (e.g., PCI Express (PCIe)) card or the like. In such situations, the RAID controller 108 may be referred to as a RAID adapter.
  • The storage devices 136 a-N in the storage array 112 may be of similar types or may be of different types without departing from the scope of the present disclosure. The storage devices 136 a-N may be co-located with one another or may be physically located in different geographical locations. The nature of the storage interface 132 may depend upon the types of storage devices 136 a-N used in the storage array 112 and the desired capabilities of the array 112. The storage interface 132 may correspond to a virtual interface or an actual interface. As with the other interfaces described herein, the storage interface 132 may include serial or parallel interface technologies. Examples of the storage interface 132 include, without limitation, SAS, SATA, SCSI, FC, iSCSI, ATA over Ethernet, InfiniBand, or the like.
  • With reference now to FIG. 2, additional details of a RAID controller 108 will be described in accordance with at least some embodiments of the present disclosure. The RAID controller 108 is shown to include the host interface(s) 128 and storage interface(s) 132. The RAID controller 108 is also shown to include a processor 204, memory 208, one or more drivers 212, and a power source 216.
  • The processor 204 may include an Integrated Circuit (IC) chip or multiple IC chips, a CPU, a microprocessor, or the like. The processor 204 may be configured to execute instructions in memory 208 that are shown to include frame allocation instructions 224, bitmap management instructions 228, index management instructions 232, and frame type analysis instructions 236. Furthermore, in connection with executing the bitmap management instructions, the processor 204 may modify one or more data entries (e.g., bit values) in a super frame bitmap 220 that is shown to be maintained internally to the RAID controller 108. It should be appreciated, however, that some or all of the super frame bitmap 220 may be stored and/or maintained external to the RAID controller 108. Alternatively or additionally, the super frame bitmap 220 may be stored or contained within memory 208 of the RAID controller 108.
  • The memory 208 may be volatile and/or non-volatile in nature. As indicated above, the memory 208 may include any hardware component or collection of hardware components that are capable of storing instructions and communicating those instructions to the processor 204 for execution. Non-limiting examples of memory 208 include RAM, ROM, flash memory, EEPROM, variants thereof, combinations thereof, and the like.
  • The instructions stored in memory 208 are shown to be different instruction sets, but it should be appreciated that the instructions can be combined into a smaller number of instruction sets without departing from the scope of the present disclosure. The frame allocation instructions 224, when executed, may enable the processor 204 to respond to frame allocation requests, identify available super frames and sub frames therein, allocate such super frames or sub frames as appropriate, and communicate that such an allocation has occurred.
  • The bitmap management instructions 228, when executed, may enable the processor 204 to recognize that the frame allocation instructions 224 have allocated a super frame or sub frame. Based on that recognition, the bitmap management instructions 228 may adjust values for entries 240 a-M within the super frame bitmap 220. For instance, when a new super frame is allocated for a frame allocation request, the bitmap management instructions 228 may change a bit value for a corresponding entry 240 a-M of the now-allocated super frame in the bitmap 220. If a super frame is cleared and no longer allocated, then the corresponding entry 240 a-M in the bitmap 220 may be changed back to an original value indicating non-allocation.
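The bitmap bookkeeping described above might be sketched as follows. This is a minimal illustration under assumed names (`SuperFrameBitmap` and its methods are not the controller's actual interface):

```python
class SuperFrameBitmap:
    """Minimal sketch of the super frame bitmap 220: one bit per
    super frame, where 1 = allocated and 0 = free (the original,
    non-allocated value)."""

    def __init__(self, num_super_frames):
        self.bits = [0] * num_super_frames

    def mark_allocated(self, super_frame_id):
        self.bits[super_frame_id] = 1

    def mark_free(self, super_frame_id):
        # Cleared back to the original value indicating non-allocation.
        self.bits[super_frame_id] = 0

    def is_allocated(self, super_frame_id):
        return self.bits[super_frame_id] == 1

bitmap = SuperFrameBitmap(8)
bitmap.mark_allocated(3)
assert bitmap.is_allocated(3)
bitmap.mark_free(3)
assert not bitmap.is_allocated(3)
```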
  • The index management instructions 232, when executed, may enable the processor 204 to manage usage counts for super frames allocated by the frame allocation instructions 224. In particular, as a new super frame becomes freshly allocated, the index management instructions 232 may increment or update a count assigned to the allocated super frame. If the usage count becomes equal to the size of the super frame, then the corresponding super frame ID can be removed from being tracked by the index management instructions. Such an action may indicate that the super frame is no longer eligible for further use or allocation.
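The usage-count behavior described above might look like the following sketch, assuming a dictionary-based tracker (the function and parameter names are illustrative, not the controller's actual interface):

```python
# Sketch of the usage-count bookkeeping: when a super frame's count
# reaches its capacity (in sub frames), its ID is dropped from the
# tracker so it is no longer eligible for further allocation.

def record_allocation(tracker, super_frame_id, capacity, used=1):
    """Increment the usage count; remove the ID once fully used."""
    tracker[super_frame_id] = tracker.get(super_frame_id, 0) + used
    if tracker[super_frame_id] >= capacity:
        del tracker[super_frame_id]  # no longer tracked for allocation

tracker = {}
record_allocation(tracker, 7, capacity=4)
assert tracker[7] == 1
record_allocation(tracker, 7, capacity=4, used=3)
assert 7 not in tracker  # fully used: removed from tracking
```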
  • The frame type analysis instructions 236, when executed, may enable the processor 204 to analyze frames and characteristics thereof. For instance, the frame type analysis instructions 236 may determine whether a particular super frame or sub frame is a fast or slow type of super frame or sub frame. The frame type analysis instructions 236 may alternatively or additionally enable a processor 204 to determine whether the super frame or sub frame is being allocated from a particular memory type (e.g., SRAM, DRAM, etc.).
  • With reference now to FIG. 3, additional details of an illustrative 2 KB super frame 300 data structure will be described in accordance with at least some embodiments of the present disclosure. The super frame 300 is shown to include a plurality of sub frames 304, which could be organized into a plurality of 64 Byte columns. Each sub frame 304 may be of a particular size and the size of one sub frame 304 does not necessarily need to be the same as the size of other sub frames 304. Illustrative sizes of sub frames 304 can be 64 Bytes, 128 Bytes, 256 Bytes, 512 Bytes, or 1 Kbyte. In some embodiments, adjacent sub frames may be assigned sub frame IDs incrementally. That is, adjacent sub frames may have sequential sub frame IDs. Some of the sub frames 304 may have different characteristics than other sub frames 304. In some embodiments, the sub frames 304 which are allocated for a particular allocation request may depend upon the size of the sub frame and the frame size identified in the allocation request. It may be desirable for the frame allocation instructions 224 to identify sub frames 304 which have a size greater than or equal to the frame size identified in the allocation request and allocate a next available sub frame having the appropriate size. Furthermore, the frame allocation instructions 224 may be designed to allocate sub frames in a forward lookup manner, meaning that sub frames 304 within the super frame 300 are allocated in order until every sub frame 304 within the super frame 300 has been allocated. When a frame needs to be allocated, the frame allocation instructions 224 may perform a linear search until the largest frame size from the pool of available super frames that can accommodate the frame request is identified. This search may be completed using a search index that helps ensure there is no internal fragmentation of the super frame. The index may be maintained and updated as super frames are used and sub frames therefrom are allocated.
The index may include usage counters for super frames and the index may be maintained by the index management instructions 232. If the entirety of a request cannot be satisfied from any entry in the index, then a new super frame is allocated and its super frame ID is added to the index corresponding to the requested size.
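The forward-lookup search and fallback to a fresh super frame might be sketched as follows. All structure and names here are assumptions for illustration (a per-size dictionary of usage counts and a free stack of super frame IDs):

```python
# Illustrative sketch of the forward-lookup allocation search: scan
# the per-size trackers for an already-allocated super frame whose
# remaining sub frames can satisfy the request; otherwise pull a new
# super frame from the free stack and add it to the index.

SUB_FRAME_SIZES = [64, 128, 256, 512, 1024]  # bytes, per FIG. 3

def allocate(request_size, index, free_stack, capacity=32):
    # Linear search over sub frame sizes that can hold the request.
    for size in SUB_FRAME_SIZES:
        if size < request_size:
            continue
        for sf_id, used in index.get(size, {}).items():
            if used < capacity:
                index[size][sf_id] = used + 1
                return sf_id
    # Nothing fits: allocate a fresh super frame for this size class.
    sf_id = free_stack.pop()
    size = next(s for s in SUB_FRAME_SIZES if s >= request_size)
    index.setdefault(size, {})[sf_id] = 1
    return sf_id

index, free_stack = {}, [11, 10]
first = allocate(100, index, free_stack)   # needs the 128-byte class
second = allocate(100, index, free_stack)  # reuses the same super frame
assert first == second == 10
assert index[128][10] == 2
```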
  • The sub frames 304 may also have usage information stored therein. In particular, as sub frames 304 are allocated, the data contained within each corresponding sub frame 304 unit may be updated to reflect the allocation and/or type of allocation. As a non-limiting example, each sub frame 304 may have one or a set of bits stored therein (or associated therewith) that reflect a usage condition of the corresponding sub frame. Such information may be stored using 2 bits of data (e.g., 00=unused sub frame; 01=sub frame used for LMID; 10=sub frame used for Scatter Gather Lists (SGLs)). As shown in FIG. 3, however, the super frame 300 still corresponds to a set of consecutively numbered sub frames 304.
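The 2-bit usage encoding above can be sketched with simple shift-and-mask helpers. The packing of all sub frame states into one word, and the helper names, are assumptions for illustration:

```python
# Sketch of the 2-bit per-sub-frame usage encoding described above
# (00 = unused, 01 = used for LMID, 10 = used for SGL), packed two
# bits per sub frame into a single word.

UNUSED, USED_LMID, USED_SGL = 0b00, 0b01, 0b10

def set_usage(word, sub_frame_index, state):
    shift = sub_frame_index * 2
    word &= ~(0b11 << shift)        # clear the old 2-bit state
    return word | (state << shift)  # write the new state

def get_usage(word, sub_frame_index):
    return (word >> (sub_frame_index * 2)) & 0b11

word = 0
word = set_usage(word, 0, USED_LMID)
word = set_usage(word, 3, USED_SGL)
assert get_usage(word, 0) == USED_LMID
assert get_usage(word, 3) == USED_SGL
assert get_usage(word, 1) == UNUSED
```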
  • With reference now to FIG. 4, additional details of another super frame 400 will be described in accordance with at least some embodiments of the present disclosure. The super frame 400 shown in FIG. 4 is shown to have a corresponding size of 128 Bytes and is constructed of X sub frames 404. The super frame 400 is organized similarly to super frame 300 except that super frame 400 has a different number of sub frames 404 and the number of columns 408 may be different from the number of columns in the super frame 300. Each sub frame 404 may be designed for allocation in response to a frame allocation request. Depending upon the size requested in the frame allocation request, a different number of sub frames 404 may be allocated to fulfill the request. The sub frames 404 may be allocated linearly (e.g., lower numbered sub frames 404 may be allocated before higher numbered sub frames 404) if the sizes of such sub frames 404 allow.
  • The sub frames 404 may also have usage information stored therein. In particular, as sub frames 404 are allocated, the data contained within each corresponding sub frame 404 unit may be updated to reflect the allocation. As a non-limiting example, each sub frame 404 may have one or a set of bits stored therein (or associated therewith) that reflect a usage condition of the corresponding sub frame. Such information may be stored using 2 bits of data (e.g., 00=unused sub frame; 01=sub frame used for LMID; 10=sub frame used for SGL). As shown in FIG. 4, however, the super frame 400 still corresponds to a set of consecutively numbered sub frames 404.
  • With reference now to FIG. 5, additional details of a data structure 500 used to store super frame information will be described in accordance with at least some embodiments of the present disclosure. The data structure 500 may correspond to an example of the super frame bitmap 220 without departing from the scope of the present disclosure. Alternatively or additionally, the data structure 500 may correspond to part or all of an index used to track super frame usage. In particular, the data structure 500 is shown to include a number of fields that enable tracking of super frame allocations. The fields included in the data structure 500 include a pool type field 504, an access type field 508, a frame size field 512, a frame ID field 516, and a usage count field 520. In some embodiments, for each pool type and access type, a data structure 500 in the format depicted in FIG. 5 may be used as a super frame tracker. The super frame tracker may contain the super frame identifier (in the frame ID field 516) that is currently allocated and not fully used. In such a scenario, a usage count may also be updated to reflect the incomplete usage. Whenever a frame is allocated, an entry can be added to the appropriate index in the super frame tracker. As a non-limiting example, if a super frame is allocated from a fast frame, 2 Kbyte pool, then the super frame ID 516 and the number of sub frames used (which may also be referred to as the usage count 520) are added to the 2 Kbyte pool, Fast access tracker into index #4. The bitmap 220 can also be updated to indicate which sub frame is currently in use and the super frame to which the sub frame belongs.
  • The data structure 500 may also be used to maintain the ongoing usage count in the usage count field 520. The usage count field 520 may also reflect which sub frame is available for the next allocation request. For example, count "1" may indicate that the sub frame at index 0 is in use whereas count "2" may indicate that the sub frames at indices 0 and 1 are both in use. This type of count system helps avoid the need for searching all free sub frames within the tracker. Rather, the sub frame indexed with the count would correspond to the next available sub frame that is free for allocation. Thus, tracking of available and non-available sub frames can be completed with a single Byte of data, thereby avoiding the need to search every single sub frame to determine whether it is available (or not).
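Because sub frames are allocated in order, the count doubles as a pointer to the next free sub frame. A minimal sketch, with an assumed helper name:

```python
# Sketch of the single-byte usage count doubling as a "next free
# sub frame" pointer: with in-order allocation, the sub frame at
# index == count is always the next one free, so no search over
# individual sub frames is needed.

def next_free_sub_frame(usage_count, total_sub_frames):
    """Return the next free sub frame index, or None if full."""
    if usage_count >= total_sub_frames:
        return None  # super frame fully used
    return usage_count  # count N means indices 0..N-1 are in use

assert next_free_sub_frame(0, 32) == 0
assert next_free_sub_frame(2, 32) == 2   # indices 0 and 1 in use
assert next_free_sub_frame(32, 32) is None
```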
  • The pool type field 504 provides information related to whether a particular super frame is retrieved from or belongs to a set of relatively large super frames (e.g., 2 Kbyte super frames) or whether the particular super frame is retrieved from or belongs to a set of relatively small super frames (e.g., 128 Byte super frames). This information may be represented using one or several bits or it may be represented using a string (e.g., an alphanumeric string). The frame allocation instructions 224, bitmap management instructions 228, and index management instructions 232 may all work cooperatively to help simultaneously analyze allocation requests and update the appropriate data structures (e.g., bitmap 220 and data structures 300, 400, 500).
  • Based on the size of the first request to be serviced from a freshly allocated super frame, the super frame ID is stored in the allocation pointer specific to that frame size as defined in the frame size field 512. For instance, if the 64 byte sub frame is allocated from a super frame, then the frame ID 516 entry for corresponding frame size 512 entry is updated to include the identifier of the super frame from which the sub frame was allocated.
  • As can be seen, there is no tracker for the largest frame in the pool. That is, there is no particular need for a tracker for the entire 2 Kbyte super frame if a request consumes the entirety of that super frame storage. Rather, if there is such a request, then the super frame is allocated directly from the super frame stack and since it is in full use, there is no need to parse which sub frames were allocated from the super frame and which were not (since all were allocated).
  • Conversely, whenever a sub frame is allocated from the data structure 500, the corresponding usage count 520 is incremented by the index management instructions 232. When the usage count becomes equal to the size of the super frame, then the super frame ID is removed, which indicates that the super frame is no longer available for use.
  • With reference now to FIGS. 6-9, additional details of frame allocation and associated bitmap and tracker/index management will be described in accordance with at least some embodiments of the present disclosure. Although certain steps will be described as being performed by particular components, it should be appreciated that embodiments of the present disclosure are not so limited. In particular, a RAID controller 108 or components thereof can be configured to perform some or all of the features described herein. Alternatively or additionally, the described functions can be performed in a component other than a RAID controller 108. For instance, the described functions can be performed within a host system 104 or in some other memory controller other than a RAID controller 108.
  • With reference initially to FIG. 6, a method of responding to a frame allocation request will be described in accordance with at least some embodiments of the present disclosure. The method begins when a controller 108 receives a frame allocation request from a host system 104 (step 604). The frame allocation request may be received in one or many packets of data. Alternatively or additionally, the frame request may be received in some other non-packet format. The frame allocation request may include an indication of a size of frame required to fulfill the request (e.g., a frame request size) along with possibly other information pertinent to the frame request (e.g., access type requested, pool type requested, etc.).
  • In response to receiving the frame allocation request, the controller 108 may invoke the frame allocation instructions 224 to allocate a super frame from a stack of free super frames (step 608). The specific super frame that is chosen by the frame allocation instructions 224 may be chosen to match the frame request size, the access type requested, and/or the pool type requested.
  • After or as the super frame is allocated, the frame allocation instructions 224 and/or index management instructions 232 may update appropriate entries in the bitmap 220 (step 612) and within the data structures 300, 400, or 500 to reflect the allocation of the chosen super frame. Furthermore, an identifier associated with the chosen super frame (e.g., a super frame ID) may be determined by the frame allocation instructions 224 (step 616) and that super frame ID may be entered into the appropriate data structures 300, 400, 500 to reflect that the super frame has been allocated and sub frames from that super frame have been allocated. Once allocated, the super frame (or sub frames therein) are enabled to store data in connection with the frame allocation request (step 620). This data may be stored in any storage device 136 a-N or the like that is associated with the allocated super frame/sub frame.
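The FIG. 6 flow (steps 604-620) might be sketched end to end as follows, with illustrative names that are assumptions rather than the controller's actual interface:

```python
# Sketch of the FIG. 6 flow: pop a free super frame from the stack
# (step 608), mark it allocated in the bitmap (step 612), and return
# its identifier for use in storing data (steps 616-620).

def handle_frame_allocation_request(free_stack, bitmap):
    super_frame_id = free_stack.pop()   # step 608: allocate from stack
    bitmap[super_frame_id] = 1          # step 612: update bitmap entry
    return super_frame_id               # step 616: super frame ID

free_stack = [4, 2, 9]
bitmap = [0] * 16
sf_id = handle_frame_allocation_request(free_stack, bitmap)
assert sf_id == 9
assert bitmap[9] == 1
assert free_stack == [4, 2]
```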
  • With reference now to FIG. 7, a method of allocating additional super frames from a stack of free super frames will be described in accordance with at least some embodiments of the present disclosure. The method begins with the frame allocation instructions 224 analyzing a frame allocation request after a super frame has already been partially allocated for a previous frame request. The frame allocation instructions 224 analyze subsequent frame allocation requests with respect to remaining frames (step 704). In this particular scenario, the frame allocation instructions 224 will identify/determine that the remaining sub frames within an allocated super frame are insufficient to store the data in connection with the recently-received frame allocation request (step 708).
  • In response to making this determination, the frame allocation instructions 224 will allocate a second super frame from the stack of free super frames (step 712). If necessary, the frame allocation instructions 224 may allocate multiple super frames to accommodate a frame request in which the requested frame size is larger than can be supported with a single super frame.
  • After or as the second super frame is allocated, the frame allocation instructions 224 and/or index management instructions 232 may update appropriate entries in the bitmap 220 (step 716) and within the data structures 300, 400, or 500 to reflect the allocation of the second super frame (and possibly other super frames). Furthermore, an identifier associated with the second super frame (e.g., a super frame ID #2) may be determined by the frame allocation instructions 224 (step 720) and that super frame ID may be entered into the appropriate data structures 300, 400, 500 to reflect that the super frame has been allocated and sub frames from that super frame have been allocated. Once allocated, the super frame (or sub frames therein) are enabled to store data in connection with the frame allocation request (step 724). This data may be stored in any storage device 136 a-N or the like that is associated with the allocated super frame/sub frame.
  • With reference now to FIG. 8, additional details of a method of responding to a frame allocation request will be described in accordance with at least some embodiments of the present disclosure. The method begins when a frame allocation request is received at the controller 108 (step 804). As with other frame allocation requests described herein, the frame allocation request received in this step may define one or multiple characteristics associated with the desired frame or frame type. In particular, the allocation request may indicate a desired frame usage type (e.g., LMID or other memory type), desired frame access type (e.g., Slow or Fast), desired frame size, and/or desired pool type (e.g., 2 Kbyte versus 128 Byte).
  • The frame allocation instructions 224 may then determine whether a full super frame is necessary to accommodate the frame allocation request (step 808). If the query of step 808 is answered negatively, then the method continues with the frame allocation instructions 224 searching/traversing the data structure 500 starting from Index 0 (step 812). As the frame allocation instructions 224 search the data structure 500, the frame allocation instructions 224 determine whether the frame allocation request can be satisfied from the index currently being analyzed (step 816). If the answer to this query is negative, then the Index is incremented (step 820) and the analysis of step 816 is repeated as long as the current Index is not greater than a predefined maximum Index (step 824). If no available sub frame or super frame is found before the Index exceeds the maximum Index, then the frame allocation instructions 224 and/or the index management instructions 232 will obtain a new super frame, set the appropriate super frame ID, update the tracker information, update the bitmap 220 for the appropriate sub frames being allocated from within the super frame, and then increment the usage count for the super frame having the sub frames allocated from therein (step 828). As discussed above, the amount by which the usage count is incremented will depend upon the sub frame that is allocated and the size of the allocated sub frame. The method then proceeds by returning the allocated sub frame for data storage (step 832).
  • Referring back to step 816, if a sub frame is identified from an already-allocated super frame prior to the Index reaching the maximum index, then the appropriately sized sub frame from the already-allocated super frame is allocated. This results in the frame allocation instructions 224 and/or the index management instructions 232 setting the super frame ID and the sub frame ID for the allocated sub frame and then incrementing the usage count for the allocated sub frame (step 844). Thereafter, the index management instructions 232 will determine whether the usage count is greater than or equal to the maximum number of frames for the pool being analyzed (step 848). If the usage count is greater than or equal to the maximum number of frames for the pool, then the tracker index is invalidated (step 852), after which the method proceeds to step 832.
  • On the other hand, if the usage count is less than the maximum number of frames for the pool, then the method proceeds with the index management instructions 232 determining whether the Index is equal to the current index (step 856). If this query is answered negatively, then the method proceeds to step 832. If the query of step 856 is answered affirmatively, then the index management instructions 232 invalidate the current index, set the tracker to a new target index that corresponds to an index of the super frame ID that was set in step 844 (step 860). Thereafter, the method proceeds to step 832.
  • Referring back to step 808, if a full frame is requested, then the frame allocation instructions 224 will allocate a new super frame from the stack of free super frames (step 836). Thereafter or simultaneous therewith, all of the bits in the super frame bitmap are initialized. During this initialization, the bits in the super frame bitmap have their corresponding sub frame IDs set equal to the super frame ID times the super frame size (step 840). This ensures that all of the sub frames within the newly allocated super frame maintain continuous addressing, which ultimately increases the speed with which sub frames are analyzed for later distribution toward a frame allocation request. Thereafter, the method proceeds to step 832.
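The continuous-addressing property of step 840 can be shown with a short sketch, assuming the super frame size is expressed as sub frames per super frame (the helper name is hypothetical):

```python
# Sketch of the sub frame ID initialization in step 840: numbering a
# new super frame's sub frames starting at super_frame_id times the
# super frame size keeps sub frame IDs continuous across super frames.

def init_sub_frame_ids(super_frame_id, frames_per_super_frame):
    base = super_frame_id * frames_per_super_frame
    return list(range(base, base + frames_per_super_frame))

# Super frame 2 with 32 sub frames: IDs 64..95, continuing exactly
# where super frame 1 (IDs 32..63) left off.
ids = init_sub_frame_ids(2, 32)
assert ids[0] == 64 and ids[-1] == 95
assert len(ids) == 32
```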
  • With reference now to FIG. 9, details of a method of releasing a super frame back to a stack of free super frames will be described in accordance with at least some embodiments of the present disclosure. The method begins when a request is received at the controller 108 to free a super frame (step 904). This request may be initiated by the host system 104 or some other component in the system 100.
  • In response to receiving the request, the super frame's sub frames and their corresponding information are analyzed (step 908). This analysis may be performed by the frame allocation instructions 224, the index management instructions 232, or some other component of the controller 108. The appropriate bits (or data fields) in the super frame bitmap are then cleared (step 912). Thereafter, an inquiry is made as to whether or not all of the bitmap has been cleared (step 916). If so, then the super frame is released back to the stack or pool of free super frames (step 920). If not, then the method will simply end (step 924) without releasing the super frame back to the stack or pool of free super frames.
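The release flow of FIG. 9 might look like the following sketch, where a per-super-frame bitmap and free stack are assumed structures for illustration:

```python
# Sketch of the FIG. 9 release flow: clear the freed sub frames'
# bits (step 912), and only return the super frame to the free
# stack (step 920) once every bit for it has been cleared (step 916).

def free_sub_frames(bitmap, sub_frame_indices, super_frame_id, free_stack):
    for idx in sub_frame_indices:
        bitmap[idx] = 0
    if not any(bitmap):
        free_stack.append(super_frame_id)  # fully cleared: release
        return True
    return False  # still partially in use: end without releasing

bitmap = [1, 1, 0, 0]
free_stack = []
assert not free_sub_frames(bitmap, [0], 5, free_stack)  # bit 1 still set
assert free_sub_frames(bitmap, [1], 5, free_stack)
assert free_stack == [5]
```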
  • Specific details were given in the description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments.
  • While illustrative embodiments of the disclosure have been described in detail herein, it is to be understood that the inventive concepts may be otherwise variously embodied and employed, and that the appended claims are intended to be construed to include such variations, except as limited by the prior art.

Claims (20)

What is claimed is:
1. A method for efficient variable length memory frame allocation, the method comprising:
receiving a frame allocation request from a host system;
allocating a super frame from a stack of free super frames for the frame allocation request, the super frame comprising a set of consecutively numbered frames;
updating entries in a super frame bitmap to indicate that the super frame has been allocated from the stack of free super frames;
determining a super frame identifier for the allocated super frame; and
enabling the super frame or the set of consecutively numbered frames to be allocated to storing data in connection with the frame allocation request or subsequent frame allocation requests from the host system.
2. The method of claim 1, wherein the set of consecutively numbered frames are allocated to subsequent requests.
3. The method of claim 2, wherein the frame allocation request corresponds to a request for data storage in an amount less than a total amount of data than can be stored in the super frame and wherein frames from the set of consecutively numbered frames are allocated in order until a sufficient number of frames have been allocated to accommodate the request for data storage in the amount less than the total amount of data that can be stored in the super frame.
4. The method of claim 3, wherein further subsequent frame allocation requests are analyzed to determine whether remaining frames in the set of consecutively numbered frames are sufficient to accommodate the further subsequent frame allocation requests.
5. The method of claim 4, further comprising:
first analyzing the further subsequent frame allocation requests with respect to remaining frames in the set of consecutively numbered frames; and
in the event that the remaining frames in the set of consecutively numbered frames are insufficient to store data in connection with the further subsequent frame allocation requests, then, in response thereto, allocating a second super frame from the stack of free super frames for the further subsequent frame allocation requests, the second super frame comprising a second set of consecutively numbered frames.
6. The method of claim 5, wherein the second set of consecutively numbered frames are sequentially numbered with respect to the set of consecutively numbered frames belonging to the super frame.
7. The method of claim 5, further comprising:
updating entries in the super frame bitmap to indicate that the second super frame has been allocated from the stack of free super frames; and
determining a second super frame identifier for the allocated second super frame.
8. The method of claim 1, further comprising:
determining that all frames in the set of consecutively numbered frames belonging to the super frame are no longer required for allocation to data;
marking all of the frames in the set of consecutively numbered frames belonging to the super frame as available; and
returning the super frame back to the stack of free super frames.
9. The method of claim 1, further comprising:
assigning the super frame to a register from a set of registers based on an amount of unallocated frames in the set of consecutively numbered frames; and
enabling data allocation decisions to be made based on an ordered analysis of the set of registers, wherein a first register in the set of registers that is analyzed in the ordered analysis is assigned to a first super frame having fewer unallocated frames than a second super frame that is assigned to a second register in the set of registers.
10. The method of claim 1, wherein the set of consecutively numbered frames in the super frame are designated as being either a fast access type or a slow access type.
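For illustration only, the allocation scheme recited in claims 1-8 (a super frame popped from a stack of free super frames, a bitmap tracking allocation, frames handed out consecutively to successive requests, and the super frame returned when all frames are released) can be sketched as follows. The class name, the super frame size of 8 frames, and all identifiers are assumptions for the sketch, not part of the claims:

```python
FRAMES_PER_SUPER_FRAME = 8  # assumed super frame size for the sketch

class SuperFrameAllocator:
    def __init__(self, num_super_frames):
        # Stack of free super frame identifiers (top of stack is the end).
        self.free_stack = list(range(num_super_frames - 1, -1, -1))
        # Super frame bitmap: True means the super frame is allocated.
        self.bitmap = [False] * num_super_frames
        # Identifier of the currently open super frame and its next unused frame.
        self.current = None
        self.next_frame = 0

    def allocate_frames(self, count):
        """Return `count` consecutively numbered frame ids, popping a fresh
        super frame from the free stack when the remaining frames in the
        current super frame are insufficient (claims 4-5)."""
        frames = []
        while count > 0:
            if self.current is None or self.next_frame == FRAMES_PER_SUPER_FRAME:
                sf = self.free_stack.pop()       # allocate a super frame
                self.bitmap[sf] = True           # update the super frame bitmap
                self.current, self.next_frame = sf, 0
            take = min(count, FRAMES_PER_SUPER_FRAME - self.next_frame)
            base = self.current * FRAMES_PER_SUPER_FRAME + self.next_frame
            frames.extend(range(base, base + take))
            self.next_frame += take
            count -= take
        return frames

    def release_super_frame(self, sf):
        """Once no frame in the super frame is still required (claim 8),
        mark it available and return it to the stack of free super frames."""
        self.bitmap[sf] = False
        self.free_stack.append(sf)
```

In this sketch, a first request for 3 frames yields frames 0-2 of super frame 0, and a subsequent request for 7 frames consumes the remaining 5 frames of super frame 0 and spills into super frame 1, whose frames are sequentially numbered relative to the first (claim 6).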
11. A computing system, comprising:
a processor; and
computer memory coupled to the processor, the computer memory including instructions that are executable by the processor, the instructions comprising:
instructions that receive and process a frame allocation request from a host system;
instructions that allocate a super frame from a stack of free super frames for the frame allocation request, the super frame comprising a set of consecutively numbered frames;
instructions that update entries in a super frame bitmap to indicate that the super frame has been allocated from the stack of free super frames;
instructions that determine a super frame identifier for the allocated super frame; and
instructions that enable the super frame or the set of consecutively numbered frames to be allocated to storing data in connection with the frame allocation request or subsequent frame allocation requests from the host system.
12. The computing system of claim 11, wherein the set of consecutively numbered frames are allocated to subsequent requests.
13. The computing system of claim 12, wherein the frame allocation request corresponds to a request for data storage in an amount less than a total amount of data that can be stored in the super frame and wherein frames from the set of consecutively numbered frames are allocated in order until a sufficient number of frames have been allocated to accommodate the request for data storage in the amount less than the total amount of data that can be stored in the super frame.
14. The computing system of claim 13, wherein further subsequent frame allocation requests are analyzed to determine whether remaining frames in the set of consecutively numbered frames are sufficient to accommodate the further subsequent frame allocation requests.
15. The computing system of claim 14, wherein the instructions further enable the processor to:
first analyze the further subsequent frame allocation requests with respect to remaining frames in the set of consecutively numbered frames; and
in the event that the remaining frames in the set of consecutively numbered frames are insufficient to store data in connection with the further subsequent frame allocation requests, then, in response thereto, allocate a second super frame from the stack of free super frames for the further subsequent frame allocation requests, the second super frame comprising a second set of consecutively numbered frames.
16. The computing system of claim 15, wherein the second set of consecutively numbered frames are sequentially numbered with respect to the set of consecutively numbered frames belonging to the super frame.
17. The computing system of claim 15, wherein the instructions further enable the processor to:
update entries in the super frame bitmap to indicate that the second super frame has been allocated from the stack of free super frames; and
determine a second super frame identifier for the allocated second super frame.
18. The computing system of claim 11, wherein the instructions further enable the processor to:
determine that all frames in the set of consecutively numbered frames belonging to the super frame are no longer required for allocation to data;
mark all of the frames in the set of consecutively numbered frames belonging to the super frame as available; and
return the super frame back to the stack of free super frames.
19. The computing system of claim 11, wherein the instructions further enable the processor to:
assign the super frame to a register from a set of registers based on an amount of unallocated frames in the set of consecutively numbered frames; and
enable data allocation decisions to be made based on an ordered analysis of the set of registers, wherein a first register in the set of registers that is analyzed in the ordered analysis is assigned to a first super frame having fewer unallocated frames than a second super frame that is assigned to a second register in the set of registers.
20. The computing system of claim 11, wherein the set of consecutively numbered frames in the super frame are designated as being either a fast access type or a slow access type.
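The ordered register analysis of claims 9 and 19 can also be sketched for illustration: registers are examined in order of fewest unallocated frames first, so a nearly full super frame is preferred for new allocations and drains sooner, allowing it to be returned to the free stack. The function names and the dict-based bookkeeping are assumptions for the sketch:

```python
def order_registers(super_frames):
    """super_frames: mapping of super frame id -> count of unallocated frames.
    Returns ids in the claimed ordered analysis: the super frame with the
    fewest unallocated frames is analyzed first."""
    return sorted(super_frames, key=lambda sf: super_frames[sf])

def pick_super_frame(super_frames, frames_needed):
    """Scan registers in order and return the first super frame whose
    unallocated frames can accommodate the request, or None if a fresh
    super frame must be popped from the free stack instead."""
    for sf in order_registers(super_frames):
        if super_frames[sf] >= frames_needed:
            return sf
    return None
```

For example, with super frames holding 5, 2, and 8 unallocated frames, a request for 3 frames skips the nearly full super frame (only 2 left) and lands on the one with 5, leaving the emptiest super frame untouched.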
US15/335,014 2016-10-20 2016-10-26 Method and system for efficient variable length memory frame allocation Abandoned US20180113639A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/335,014 US20180113639A1 (en) 2016-10-20 2016-10-26 Method and system for efficient variable length memory frame allocation

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201662410752P 2016-10-20 2016-10-20
US15/335,014 US20180113639A1 (en) 2016-10-20 2016-10-26 Method and system for efficient variable length memory frame allocation

Publications (1)

Publication Number Publication Date
US20180113639A1 true US20180113639A1 (en) 2018-04-26

Family

ID=61969606

Family Applications (5)

Application Number Title Priority Date Filing Date
US15/335,014 Abandoned US20180113639A1 (en) 2016-10-20 2016-10-26 Method and system for efficient variable length memory frame allocation
US15/335,030 Active 2036-11-29 US10108359B2 (en) 2016-10-20 2016-10-26 Method and system for efficient cache buffering in a system having parity arms to enable hardware acceleration
US15/335,025 Abandoned US20180113810A1 (en) 2016-10-20 2016-10-26 Method and system for efficient hashing optimized for hardware accelerated caching
US15/335,039 Active 2036-11-25 US10078460B2 (en) 2016-10-20 2016-10-26 Memory controller utilizing scatter gather list techniques
US15/335,037 Active US10223009B2 (en) 2016-10-20 2016-10-26 Method and system for efficient cache buffering supporting variable stripe sizes to enable hardware acceleration

Family Applications After (4)

Application Number Title Priority Date Filing Date
US15/335,030 Active 2036-11-29 US10108359B2 (en) 2016-10-20 2016-10-26 Method and system for efficient cache buffering in a system having parity arms to enable hardware acceleration
US15/335,025 Abandoned US20180113810A1 (en) 2016-10-20 2016-10-26 Method and system for efficient hashing optimized for hardware accelerated caching
US15/335,039 Active 2036-11-25 US10078460B2 (en) 2016-10-20 2016-10-26 Memory controller utilizing scatter gather list techniques
US15/335,037 Active US10223009B2 (en) 2016-10-20 2016-10-26 Method and system for efficient cache buffering supporting variable stripe sizes to enable hardware acceleration

Country Status (1)

Country Link
US (5) US20180113639A1 (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10250473B2 (en) * 2016-11-29 2019-04-02 Red Hat Israel, Ltd. Recovery from a networking backend disconnect
US10642821B2 (en) * 2017-03-17 2020-05-05 Apple Inc. Elastic data storage system
US10282116B2 (en) * 2017-07-19 2019-05-07 Avago Technologies International Sales Pte. Limited Method and system for hardware accelerated cache flush
US20190087111A1 (en) * 2017-09-15 2019-03-21 Seagate Technology Llc Common logical block addressing translation layer for a storage array
US10496297B2 (en) * 2017-11-21 2019-12-03 Micron Technology, Inc. Data categorization based on invalidation velocities
US10970205B2 (en) * 2018-05-31 2021-04-06 Micron Technology, Inc. Logical-to-physical data structures for tracking logical block addresses indicative of a collision
US10789176B2 (en) * 2018-08-09 2020-09-29 Intel Corporation Technologies for a least recently used cache replacement policy using vector instructions
US11061676B2 (en) 2019-04-24 2021-07-13 International Business Machines Corporation Scatter gather using key-value store
WO2021050883A1 (en) * 2019-09-12 2021-03-18 Oracle International Corporation Accelerated building and probing of hash tables using symmetric vector processing
US11106585B2 (en) * 2019-10-31 2021-08-31 EMC IP Holding Company, LLC System and method for deduplication aware read cache in a log structured storage array

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5931920A (en) 1997-08-05 1999-08-03 Adaptec, Inc. Command interpreter system in an I/O controller
US6175900B1 (en) 1998-02-09 2001-01-16 Microsoft Corporation Hierarchical bitmap-based memory manager
KR100528967B1 (en) 2002-12-18 2005-11-15 한국전자통신연구원 Apparatus and method for controlling memory allocation for variable sized packets
EP1619584A1 (en) 2004-02-13 2006-01-25 Jaluna SA Memory allocation
US7730239B2 (en) 2006-06-23 2010-06-01 Intel Corporation Data buffer management in a resource limited environment
US8266116B2 (en) 2007-03-12 2012-09-11 Broadcom Corporation Method and apparatus for dual-hashing tables
US9280609B2 (en) 2009-09-08 2016-03-08 Brocade Communications Systems, Inc. Exact match lookup scheme
US8430283B2 (en) * 2011-08-04 2013-04-30 Julian Jaeyoon CHUNG T-shirts hanger
US9134909B2 (en) 2011-08-30 2015-09-15 International Business Machines Corporation Multiple I/O request processing in a storage system
US8938603B2 (en) 2012-05-31 2015-01-20 Samsung Electronics Co., Ltd. Cache system optimized for cache miss detection
US9489955B2 (en) * 2014-01-30 2016-11-08 Qualcomm Incorporated Indicating frame parameter reusability for coding vectors

Also Published As

Publication number Publication date
US20180113633A1 (en) 2018-04-26
US10223009B2 (en) 2019-03-05
US20180113634A1 (en) 2018-04-26
US20180113810A1 (en) 2018-04-26
US10078460B2 (en) 2018-09-18
US20180113635A1 (en) 2018-04-26
US10108359B2 (en) 2018-10-23

Similar Documents

Publication Publication Date Title
US20180113639A1 (en) Method and system for efficient variable length memory frame allocation
US9495294B2 (en) Enhancing data processing performance by cache management of fingerprint index
US10922235B2 (en) Method and system for address table eviction management
US9785575B2 (en) Optimizing thin provisioning in a data storage system through selective use of multiple grain sizes
US9665485B2 (en) Logical and physical block addressing for efficiently storing data to improve access speed in a data deduplication system
EP3168737A2 (en) Distributed multimode storage management
EP3729251A1 (en) Virtualized ocssds spanning physical ocssd channels
US20140195725A1 (en) Method and system for data storage
US8775766B2 (en) Extent size optimization
US20150227468A1 (en) Combining virtual mapping metadata and physical space mapping metadata
US10430329B2 (en) Quality of service aware storage class memory/NAND flash hybrid solid state drive
US8935304B2 (en) Efficient garbage collection in a compressed journal file
US9348748B2 (en) Heal leveling
US20130024616A1 (en) Storage System and Its Logical Unit Management Method
US11740816B1 (en) Initial cache segmentation recommendation engine using customer-specific historical workload analysis
US10929032B1 (en) Host hinting for smart disk allocation to improve sequential access performance
US10649906B2 (en) Method and system for hardware accelerated row lock for a write back volume
US10698621B2 (en) Block reuse for memory operations
US20190339898A1 (en) Method, system and computer program product for managing data storage in data storage systems
US10528438B2 (en) Method and system for handling bad blocks in a hardware accelerated caching solution
US11144445B1 (en) Use of compression domains that are more granular than storage allocation units
US11907123B2 (en) Flash memory garbage collection
US20150143041A1 (en) Storage control apparatus and control method
US11288204B2 (en) Logical and physical address field size reduction by alignment-constrained writing technique
EP4287028A1 (en) Storage device providing high purge performance and memory block management method thereof

Legal Events

Date Code Title Description
AS Assignment

Owner name: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SIMIONESCU, HORIA;SAGHI, EUGENE;VEERLA, SRIDHAR RAO;AND OTHERS;SIGNING DATES FROM 20160920 TO 20160921;REEL/FRAME:040148/0240

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

AS Assignment

Owner name: AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED, SINGAPORE

Free format text: MERGER;ASSIGNOR:AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.;REEL/FRAME:047231/0369

Effective date: 20180509

AS Assignment

Owner name: AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED, SINGAPORE

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE EXECUTION DATE OF THE MERGER AND APPLICATION NOS. 13/237,550 AND 16/103,107 FROM THE MERGER PREVIOUSLY RECORDED ON REEL 047231 FRAME 0369. ASSIGNOR(S) HEREBY CONFIRMS THE MERGER;ASSIGNOR:AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.;REEL/FRAME:048549/0113

Effective date: 20180905

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION