WO2017019129A1 - Computing system cache - Google Patents

Computing system cache Download PDF

Info

Publication number
WO2017019129A1
Authority
WO
WIPO (PCT)
Prior art keywords
cache
data blocks
host
caching
listing
Prior art date
Application number
PCT/US2016/024278
Other languages
French (fr)
Inventor
Dhanaraj MARUTHACHALAM
Shanmugaraja NALLASAMY
Piyush Prakash MOGHE
Original Assignee
Hewlett Packard Enterprise Development Lp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hewlett Packard Enterprise Development Lp
Publication of WO2017019129A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0866 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches, for peripheral storage systems, e.g. disk cache
    • G06F12/0871 Allocation or management of cache space

Definitions

  • FIG. 1 illustrates a computing environment comprising a host computing system for caching a plurality of data blocks, according to an example of the present subject matter
  • FIG. 2 illustrates a computing environment comprising various components of a host computing system for caching a plurality of data blocks, according to an example of the present subject matter
  • FIG. 3 illustrates a computing environment for caching a plurality of data blocks, according to an example of the present subject matter
  • FIG. 4 illustrates an example method for caching a plurality of data blocks, according to an example of the present subject matter
  • FIG. 5 illustrates another example method for caching a plurality of data blocks, according to an example of the present subject matter.
  • FIG. 6 is a block diagram of a network environment implementing a non-transitory computer-readable medium for caching data blocks, according to an example of the present subject matter.
  • a host computing system may store data in a storage array. Such storage array may be in communication with the host computing system. Once stored, the data may be accessed by the host computing system. The host computing system may access the storage array for performing write and read operations onto the data stored in the storage array.
  • both the host computing system and the storage array may comprise multiple caches at different layers, where each cache may function independently, without coordinating with other caches. As a consequence, each cache may end up caching the same data.
  • Caching of the same data within all or some of the caches may reduce the effective storage space of the caches, and would be costly and inefficient. It may also increase power consumption, adversely affect the performance of the system, and add caching and latency overheads. Furthermore, the same caching technique or mechanism may be implemented at the different layers, which in turn may require additional computational capacity. Also, when multiple caches are present in a computing system, data gets flushed from each cache to a lower-level cache, thereby causing performance overheads.
  • Both the host computing system and the storage array may comprise multiple caches.
  • multiple listings of data blocks may be generated, with one listing for each cache.
  • Each listing lists a different set of data blocks for caching. Consequently, each cache caches a different set of data blocks.
  • Examples of the caches distributed between the host computing system and the storage array include, but are not limited to, a host cache and a storage cache.
  • the host cache may be associated with a caching priority greater than a caching priority of the storage array.
  • an access latency of the host cache is lower than an access latency of the storage cache.
  • Access latency may be understood as time taken to access data.
  • two listings are generated, one for the host cache and the other for the storage cache. Each of the listings lists a different set of data blocks for caching within the respective host cache and storage cache. Thus, multiple copies of the same data are not cached concurrently within the host cache and the storage cache. As a result, memory within the caches of the host computing system and the storage array would be efficiently utilized.
  • a listing of data blocks may be generated for each cache.
  • the listings may be generated based on a number of times each of the data blocks has been accessed for data operations, and a caching attribute corresponding to each of the host cache and the storage cache. For instance, for generating a listing for the host cache, a set of data blocks that are most accessed is identified from amongst the data blocks. Further, for generating a listing for the storage cache, a set of data blocks that are accessed most after the set of data blocks identified for the host cache, is identified. The listing for the storage cache lists the next most accessed set of data blocks.
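The two-tier selection described above can be sketched in Python. The function name, block identifiers, and capacities below are illustrative assumptions for the example, not details from the patent:

```python
# Hypothetical sketch: split the most-accessed blocks between a host
# listing and a storage listing so the two listings never overlap.

def generate_listings(access_counts, host_capacity, storage_capacity):
    """access_counts: dict mapping block id -> number of accesses.
    host_capacity / storage_capacity: blocks each cache can hold."""
    # Rank block ids by access count, most accessed first.
    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    # The host cache (higher caching priority) gets the most-accessed set.
    host_listing = ranked[:host_capacity]
    # The storage cache gets the next-most-accessed set, so the same
    # block is never listed for both caches.
    storage_listing = ranked[host_capacity:host_capacity + storage_capacity]
    return host_listing, storage_listing

host, storage = generate_listings(
    {"b1": 90, "b2": 75, "b3": 60, "b4": 40, "b5": 10},
    host_capacity=2, storage_capacity=2)
# host -> ["b1", "b2"], storage -> ["b3", "b4"]; "b5" is not cached.
```

Because the storage listing starts where the host listing ends, each cache ends up holding a disjoint slice of the ranked blocks.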
  • the generated listings are communicated to the host cache.
  • the listings may be periodically generated and communicated to the host cache.
  • caching of data blocks is initiated in the host cache in accordance with the listing corresponding to the host cache, i.e., the set of data blocks identified for the host cache are cached at the host cache.
  • the host cache may further communicate the listing corresponding to the storage cache to the storage cache for caching.
  • listings for such additional caches may also be generated and provided to respective caches for caching of data blocks.
  • The above approaches are further described with reference to FIGS. 1 to 6. It should be noted that the description and figures merely illustrate the principles of the present subject matter. It may be understood that various arrangements may be devised that, although not explicitly described or shown herein, embody the principles of the present subject matter. Further, while aspects of the described system and method for implementing a caching technique or mechanism may be implemented in any number of different computing systems, environments, and/or implementations, the examples and implementations are described in the context of the following system(s).
  • FIG. 1 illustrates a computing environment 100 comprising a host computing system 102 for caching a plurality of data blocks, according to an example of the present subject matter.
  • the host computing system 102 may be deployed in an environment having a network-based system for data storage and communication.
  • the host computing system 102 may be implemented on a computing device, such as a laptop computer, a desktop computer, a workstation, or a server.
  • the host computing system 102 includes a host cache 104 and a caching engine 106.
  • the host cache 104 may include a plurality of data blocks which store data.
  • the host computing system 102 may be in communication with a storage array (not shown in FIG.1 ).
  • the storage array may include a storage cache.
  • the host cache 104 may be associated with a caching priority greater than a caching priority of the storage cache.
  • the host computing system 102 may initiate a caching technique or mechanism on the data blocks referenced in the host cache 104.
  • the caching engine 106 may obtain a caching attribute from the host cache 104.
  • the caching engine 106 may also obtain, from the storage cache, its caching attribute.
  • Caching attributes may be considered as attributes corresponding to respective caches, and may include cache identity, type of memory, and available memory. It should be noted that the present caching attributes are illustrative, and should not be construed as limitations on the present subject matter.
  • the caching engine 106 may determine a reference count indicating the number of times each of the data blocks has been accessed for data operations. Subsequently, the caching engine 106 may generate a host listing for the host cache 104 based on the number of times each data block has been accessed and its corresponding caching attribute. Similarly, the caching engine 106 may generate a storage listing for the storage cache based on the number of times each data block has been accessed and its corresponding caching attribute. As mentioned above, the caching engine 106 may periodically generate the host listing and the storage listing.
  • a first set of data blocks that are most accessed is identified from amongst the plurality of data blocks.
  • the number of data blocks in the first set may depend on available memory within the host cache 104.
  • the host listing provides or lists the first set of data blocks for caching within the host cache 104. In such a manner, the host listing would include the set of data blocks, i.e., the first set of data blocks, which have been accessed the most times for performing data operations, such as read-write operations.
  • a second set of data blocks that are accessed most after the first set of data blocks is identified from amongst the plurality of data blocks.
  • the number of data blocks in the second set may depend on available memory within the storage cache.
  • the storage listing provides the second set of data blocks for caching within the storage cache.
  • the caching engine 106 may initiate caching of data blocks in the host cache 104 in accordance with the host listing, and in the storage cache in accordance with the storage listing.
  • FIG. 2 illustrates a computing environment 200 comprising various components of a host computing system 102 for caching a plurality of data blocks, according to an example of the present subject matter.
  • the host computing system 102 may be implemented on a computing device, such as a laptop computer, a desktop computer, a workstation, or a server.
  • the host computing system 102 includes interface(s) 202 and memory 204.
  • the interface(s) 202 may include a variety of interfaces, for example, interfaces for data input and output devices, referred to as I/O devices, storage devices, network devices, and the like.
  • the interface(s) 202 facilitate communication between the host computing system 102 and various other computing devices.
  • the memory 204 may include any non-transitory computer-readable medium including, for example, volatile memory, such as RAM, or non-volatile memory, such as EPROM, flash memory, and the like.
  • the host computing system 102 further includes a host cache 104 and a secondary host cache 206.
  • the host cache 104 may comprise a plurality of data blocks which store data, where each data block may be of equal size.
  • the host cache 104 may cache data blocks that have been accessed the maximum number of times for data operations, such as read-write operations, and the secondary host cache 206 may cache data blocks that are accessed most after the data blocks cached in the host cache 104.
  • examples of the secondary host cache 206 include, but are not limited to, a Solid-State Drive (SSD) based cache and a Direct-Attached Storage (DAS) based cache.
  • the host computing system 102 may further include engine(s) 208 and data 210.
  • the engine(s) 208 may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the engine(s) 208.
  • programming for the engine(s) 208 may be processor executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the engine(s) 208 may include a processing resource (for example, one or more processors), to execute such instructions.
  • the machine-readable storage medium may store instructions that, when executed by the processing resource, implement engine(s) 208.
  • the host computing system 102 may include the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the host computing system 102 and the processing resource.
  • the engine(s) 208 may be implemented by electronic circuitry.
  • the data 210 includes data that is either predefined or generated as a result of the functionalities implemented by any of the engine(s) 208.
  • the engine(s) 208 include a caching engine 106 and other engine(s) 212.
  • the other engine(s) 212 may implement functionalities that supplement applications or functions performed by the host computing system 102.
  • the data 210 may include caching attributes 214, caching information 216, and other data 218.
  • the host computing system 102 may be in communication with a storage array 220. Data that is either generated or accessed by the host computing system 102 may be stored in the storage array 220. Once stored, the host computing system 102 may access the storage array 220 for performing write and read operations onto the data stored in the storage array 220.
  • the storage array 220 may be either within, or coupled to the computing device on which the host computing system 102 is implemented, through a communication channel 226.
  • the storage array 220 may comprise a storage cache 222 and a secondary storage cache 224.
  • Example of the secondary storage cache 224 includes a Solid-State Drive (SSD) based cache.
  • the present subject matter can also be applied to any environment (other than storage), where multi-level caches exist within a same system or across systems. For instance, multi-level caches across client and server systems and multi-level caches within host/server between application cache and system/kernel caches.
  • the caching attributes 214 may include information related to each of the host cache 104, the secondary host cache 206, the storage cache 222, and the secondary storage cache 224.
  • the information may include caching attributes corresponding to the respective caches.
  • a caching attribute may include cache identity, type of memory, and available memory.
  • the caching information 216 may include information related to the plurality of data blocks, such as size of the data blocks and a number of times each of the plurality of data blocks has been accessed for data operations, such as read-write operations.
  • the other data 218 may include data generated and saved by the engine(s) 208 for implementing various functionalities of the host computing system 102.
  • the above described caches of the host computing system 102 and the storage array 220 may be organized as a hierarchy of cache levels, with a first level cache at the top of the hierarchy. Further, the caches in the hierarchy are ordered by their access latency and caching priority, such that the first level cache is associated with lowest access latency, and consequently highest caching priority. Access latency may be understood as time taken to access data.
  • the host cache 104 is a first level cache (C1 )
  • the secondary host cache 206 is a second level cache (C2)
  • the storage cache 222 is a third level cache (C3)
  • the secondary storage cache 224 is a fourth level cache (C4).
  • the host cache 104 (i.e., C1 ) is associated with a caching priority greater than a caching priority of the secondary host cache 206 (C2), the storage cache 222 (C3), and the secondary storage cache 224 (C4). Further, an access latency of the host cache 104 is lower than access latencies of the other caches.
  • the host computing system 102 may initiate a caching operation.
  • the caching operation may be directed for the plurality of data blocks within the host cache 104.
  • The caching operation may be implemented using a variety of caching techniques, which include, but are not limited to, a Least Recently Used (LRU) technique. Such techniques aim at keeping recently accessed data blocks at the top of a cache and data blocks that have been accessed least recently at the bottom of the cache.
  • the caching operation may facilitate in identifying data blocks to be cached within different caches.
  • the caching engine 106 may determine presence of caches distributed between the host computing system 102 and the storage array 220. For instance, presence of caches, such as the host cache 104, the secondary host cache 206, the storage cache 222, and the secondary storage cache 224 may be determined. Upon determining the presence of the caches, the caching engine 106 may obtain a corresponding caching attribute from each of the host cache 104, the secondary host cache 206, the storage cache 222, and the secondary storage cache 224.
  • caching attributes may be considered as attributes corresponding to respective caches, and may include cache identity, type of memory, and available memory.
  • the caching engine 106 may obtain the corresponding caching attribute from the storage cache 222 and the secondary storage cache 224 by way of a communication command over the communication channel 226.
  • Examples of the communication channel 226 include, but are not limited to, Storage Area Network (SAN) protocols and Network Attached Storage (NAS) protocols.
  • the SAN protocols include Fibre Channel (FC), Internet Small Computer System Interface (iSCSI), and Serial Attached SCSI (SAS) on a media, such as Ethernet and FC.
  • NAS protocols include Network File System (NFS), Common Internet File System (CIFS), The Hypertext Transfer Protocol (HTTP), File Transfer Protocol (FTP), and the like.
  • the host computing system 102 becomes aware of caching and hardware capabilities of the storage cache 222 and the secondary storage cache 224 within the storage array 220.
  • the caching engine 106 may store the caching attributes in the caching attributes 214 for future reference.
  • the caching engine 106 may determine a number of times each of the plurality of data blocks referenced in the host cache 104 has been accessed for data operations.
  • the caching engine 106 may maintain a Least Recently Used (LRU) list to track data blocks that are referenced recently in the host cache 104.
  • the caching engine 106 may also determine a reference count indicating the number of times each of the plurality of data blocks has been accessed.
  • a counter may be assigned to every data block that is a part of the LRU list and incremented by one each time a reference is made to that data block in the host cache 104. In such a manner, the caching engine 106 is able to determine most recently accessed data blocks and least recently accessed data blocks.
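The LRU list plus per-block reference counter described above might be sketched as follows; this is a minimal illustration assuming an `OrderedDict`-based tracker, and the class and method names are not from the patent:

```python
from collections import OrderedDict

class LruTracker:
    """Illustrative tracker: LRU ordering plus a reference count
    that is incremented on every reference to a data block."""

    def __init__(self):
        self._lru = OrderedDict()  # block id -> reference count

    def reference(self, block_id):
        # Increment the counter each time the block is referenced ...
        count = self._lru.pop(block_id, 0) + 1
        # ... and move the block to the most-recently-used end.
        self._lru[block_id] = count

    def most_accessed(self, n):
        # Block ids ranked by reference count, most accessed first.
        return sorted(self._lru, key=self._lru.get, reverse=True)[:n]

t = LruTracker()
for b in ["a", "b", "a", "c", "a", "b"]:
    t.reference(b)
# t.most_accessed(2) -> ["a", "b"]  (a referenced 3 times, b twice)
```

The pop/reinsert keeps the `OrderedDict` in recency order while the stored value carries the access count, so both "least recently used" and "most accessed" queries are available from one structure.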
  • the caching engine 106 may store the information related to access count of the data blocks in the caching information 216.
  • the caching engine 106 may then generate a listing of data blocks for each cache. Each listing lists a different set of data blocks. A listing for a cache may be generated based on the number of times each data block has been accessed, the size of the data blocks, and available memory within that cache. In the present example, the caching engine 106 generates a host listing for the host cache 104, such that the host listing provides data blocks for caching within the host cache 104. For generating the host listing, a first set of data blocks that are most accessed is identified from amongst the plurality of data blocks.
  • the first set of data blocks is identified based on the number of times each data block has been accessed, the size of the data blocks, and the available memory within the host cache 104. For instance, if the total number of data blocks referenced recently in the host cache 104 is 500, such that each data block has a size of 4 kilobytes (KB), and the available memory within the host cache 104 is 1 megabyte (MB), then the first set of data blocks may include 256 data blocks that have been accessed the maximum number of times for data operations. In such a manner, the host listing would include the first set of data blocks, which have been accessed the most times for performing data operations, such as read-write operations.
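The 256-block figure in this example follows from dividing the available cache memory by the block size, which can be checked in one line:

```python
# Worked check of the sizing example: 1 MB of available host-cache
# memory divided into 4 KB data blocks gives 256 cacheable blocks.

def first_set_size(available_bytes, block_bytes):
    return available_bytes // block_bytes

assert first_set_size(1 * 1024 * 1024, 4 * 1024) == 256
```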
  • the caching engine 106 generates a secondary host listing for the secondary host cache 206 based on a similar approach as that for the host cache 104.
  • the secondary host listing is generated based on identifying the second set of data blocks that are most accessed after the first set of data blocks, from amongst the plurality of data blocks.
  • the second set of data blocks is identified based on the number of times each data block has been accessed, the size of the data blocks, and the available memory within the secondary host cache 206.
  • the caching engine 106 may generate a storage listing for the storage cache 222, and a secondary storage listing for the secondary storage cache 224.
  • the storage listing lists a third set of data blocks that are accessed most after the first and second set of data blocks, for caching within the storage cache 222.
  • the secondary storage listing lists a fourth set of data blocks that are accessed most after the first, second, and third set of data blocks, for caching within the secondary storage cache 224.
  • the listings may be generated based on the caching priority associated with each cache.
  • the caching engine 106 may periodically generate the host listing, the secondary host listing, the storage listing, and the secondary storage listing. For this, the caching engine 106 may periodically determine the number of times each data block has been accessed.
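Generating one listing per cache level from the per-block access counts and each cache's available memory (a caching attribute) might be sketched as below; the function name, block ids, block size, and per-cache memory figures are illustrative assumptions:

```python
# Illustrative sketch: one listing per cache level (C1..C4), ordered by
# caching priority, sized from each cache's available memory.

def generate_level_listings(access_counts, block_size, available_memory):
    """available_memory: bytes free in each cache, highest priority first."""
    # Rank block ids by access count, most accessed first.
    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    listings, start = [], 0
    for mem in available_memory:
        # Each cache level receives the next-most-accessed slice, sized
        # to its available memory, so no block appears in two listings.
        capacity = mem // block_size
        listings.append(ranked[start:start + capacity])
        start += capacity
    return listings

counts = {f"b{i}": 100 - i for i in range(8)}  # b0 is most accessed
c1, c2, c3, c4 = generate_level_listings(counts, 4096, [8192] * 4)
# c1 (host cache) -> ["b0", "b1"]; c4 (secondary storage cache) -> ["b6", "b7"]
```

Running this periodically, as the text describes, would regenerate all four listings from the latest reference counts.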
  • the caching engine 106 may initiate caching of data blocks in the caches according to respective listings. For example, the caching engine 106 may provide all the listings to the host cache 104. In an example, the caching engine 106 may periodically communicate the listings to the host cache 104. The host cache 104 may further communicate the secondary host listing to the secondary host cache 206, and the storage listing and the secondary storage listing to the storage cache 222 and the secondary storage cache 224, respectively. In an example, the host cache 104 may communicate the storage listing and the secondary storage listing to the storage cache 222 and the secondary storage cache 224, respectively, using the communication channel 226.
  • the host cache 104 may cache data blocks in the host cache 104 in accordance with the host listing.
  • the secondary host cache 206, the storage cache 222, and the secondary storage cache 224 may cache data blocks in accordance with respective listings.
  • the secondary host cache 206, the storage cache 222, and the secondary storage cache 224 may cache all or some of the data blocks listed in the respective listings. Therefore, the host cache 104 acts as a master layer and manages caching of data blocks at other caches (slave layers).
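The master/slave arrangement above could be sketched as follows, assuming hypothetical cache objects with a `cache_blocks()` method; none of these names come from the patent:

```python
# Illustrative sketch: the host cache (master layer) receives every
# listing and forwards each remaining listing to its slave cache.

class SimpleCache:
    def __init__(self, name):
        self.name = name
        self.blocks = []

    def cache_blocks(self, listing):
        self.blocks = list(listing)

class HostCacheMaster(SimpleCache):
    """The host cache (C1) caches its own listing directly and
    distributes the other listings to the slave caches."""

    def distribute(self, listings, slaves):
        self.cache_blocks(listings[self.name])
        for slave in slaves:
            slave.cache_blocks(listings[slave.name])

c1 = HostCacheMaster("C1")
c2, c3, c4 = SimpleCache("C2"), SimpleCache("C3"), SimpleCache("C4")
listings = {"C1": ["b0", "b1"], "C2": ["b2"], "C3": ["b3"], "C4": ["b4"]}
c1.distribute(listings, [c2, c3, c4])
# c1.blocks -> ["b0", "b1"]; c3.blocks -> ["b3"]
```

In the patent's arrangement the forwarding to C3 and C4 would travel over the communication channel 226 rather than a method call; the sketch only shows the master-layer control flow.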
  • each host computing system may generate listings for the storage array 220 based on a number of caches within the storage array 220.
  • the storage array 220 may sort or merge the listings.
  • the most recently accessed data blocks are cached within the host cache 104, the second most recently accessed data blocks (i.e., when considered with respect to the first set of data blocks) are cached within the secondary host cache 206, the third most recently accessed data blocks (i.e., when considered with respect to the first and second sets of data blocks) are cached within the storage cache 222, and the fourth most recently accessed data blocks are cached within the secondary storage cache 224.
  • since each listing lists a different set of data blocks, multiple copies of the same data are not cached concurrently within the host cache 104, the secondary host cache 206, the storage cache 222, and the secondary storage cache 224.
  • available memory within the caches is efficiently utilized.
  • the caching operation is implemented or initiated on the host cache 104 and not on all the caches, thereby reducing overhead of executing the caching operation at each cache.
  • the Read-ahead caching operation may be implemented at the host cache 104 and not at other caches.
  • the flushing operation may be delayed to take advantage of a full-stripe write, i.e., flushing of consecutive data blocks may be delayed to accumulate the data blocks for a full stripe.
  • the number of times that a parity computation is performed may be minimized.
  • the consecutive data blocks may get flushed directly to the storage array 220 for forming a full-stripe data.
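The delayed, full-stripe flush described above can be sketched as an accumulator that flushes only once a stripe's worth of consecutive blocks is pending, so parity would be computed once per stripe rather than once per block. The class, stripe width, and flush callback are illustrative assumptions:

```python
# Illustrative sketch: delay flushing until a full stripe of
# consecutive data blocks has accumulated.

class StripeAccumulator:
    def __init__(self, stripe_width, flush_full_stripe):
        self.stripe_width = stripe_width
        self.flush_full_stripe = flush_full_stripe  # called with a full stripe
        self.pending = []

    def add(self, block):
        self.pending.append(block)
        # Hold blocks until a full stripe is ready; a single parity
        # computation then covers the whole stripe.
        if len(self.pending) == self.stripe_width:
            self.flush_full_stripe(self.pending)
            self.pending = []

flushed = []
acc = StripeAccumulator(4, flushed.append)
for blk in range(10):
    acc.add(blk)
# flushed -> [[0, 1, 2, 3], [4, 5, 6, 7]]; blocks 8 and 9 remain pending.
```

A real implementation would also need a timeout or size bound so partially filled stripes are eventually flushed; the sketch omits that for brevity.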
  • FIG. 3 illustrates a computing environment 300 for caching a plurality of data blocks, according to an example of the present subject matter.
  • the computing environment 300 includes a host computing system 102 connected to a storage array 220 via a communication channel 226.
  • an example of communication over the communication channel 226 is a Small Computer System Interface (SCSI) command.
  • Exchange of information between the host computing system 102 and the storage array 220 takes place via the communication channel 226.
  • the host computing system 102 includes a host cache 104, a Solid-State Drive (SSD) based cache 302, and a Direct-Attached Storage (DAS) based cache 304.
  • the host cache 104 may comprise a plurality of data blocks.
  • the storage array 220 includes a storage cache 222 and SSD based cache 306.
  • the caches are organized in a hierarchical manner, where the host cache 104 is a first level cache in the hierarchy and the SSD based cache 306 is a last level cache. Further, the host cache 104 is associated with a caching priority greater than a caching priority of the SSD based cache 302, the storage cache 222, and the SSD based cache 306. Moreover, an access latency of the host cache 104 is lower than access latencies of the other caches.
  • the host computing system 102 may initiate a caching operation on the plurality of data blocks referenced in the host cache 104. On initiation of the caching operation, the host computing system 102 may obtain a caching attribute from the host cache 104. In a similar manner, the host computing system 102 may also obtain a corresponding caching attribute from each of the SSD based cache 302, the DAS based cache 304, the storage cache 222, and the SSD based cache 306.
  • the caching attribute may include one of cache identity, type of memory and available memory of any one of the host cache, the storage cache, a secondary host cache, and a secondary storage cache.
  • the host computing system 102 may obtain the caching attributes from the caches within the storage array 220 via the communication channel 226. The host computing system 102 may then periodically generate a listing of data blocks for each cache based on the caching attributes. Upon generating the listings, the host computing system 102 may initiate caching of data blocks in the caches in accordance with respective listings.
  • FIGS. 4 and 5 illustrate methods 400 and 500, respectively, for caching a plurality of data blocks, according to an example implementation of the present subject matter.
  • the order in which the methods are described is not intended to be construed as a limitation, and any number of the described method blocks may be combined in any order to implement the aforementioned methods, or an alternative method.
  • methods 400 and 500 may be implemented by a processing resource or computing device(s) through any suitable hardware, non-transitory machine-readable instructions, or combination thereof.
  • methods 400 and 500 may be performed by programmed computing devices, such as host computing system 102 as depicted in FIGS. 1 -3. Furthermore, the methods 400 and 500 may be executed based on instructions stored in a non-transitory computer readable medium.
  • the non-transitory computer readable medium may include, for example, digital memories, magnetic storage media, such as one or more magnetic disks and magnetic tapes, hard drives, or optically readable digital data storage media.
  • the methods 400 and 500 are described below with reference to the host computing system 102 as described above, other suitable systems for the execution of these methods can also be utilized. Additionally, implementation of these methods is not limited to such examples.
  • the method 400 includes identifying a first set of data blocks that are most accessed and a second set of data blocks that are accessed most after the first set of data blocks, from amongst a plurality of data blocks referenced in a host cache of a host computing system, where the first set of data blocks and the second set of data blocks are identified based on a number of times each of the plurality of data blocks has been accessed for data operations, such as read-write operations.
  • the first set of data blocks is identified for the host cache and the second set of data blocks is identified for the other cache.
  • the host cache may be in communication with the other cache over a communication channel.
  • the other cache may comprise a secondary host cache within the host computing system, a storage cache associated with a storage array in communication with the host computing system, or a secondary storage cache within the storage array.
  • the first set of data blocks and the second set of data blocks are identified based on the number of times each data block has been accessed and a caching attribute corresponding to the host cache and the other cache.
  • the caching attribute may include cache identity, type of memory, and available memory.
  • the caching engine 106 of the host computing system 102 may identify the first set of data blocks that are most accessed and the second set of data blocks that are accessed most after the first set of data blocks, from amongst a plurality of data blocks referenced in the host cache 104 of the host computing system 102.
  • a first listing and a second listing are generated for the host cache and the other cache, respectively, such that the first listing lists the first set of data blocks and the second listing lists the second set of data blocks.
  • the first listing and the second listing may be periodically generated.
  • the first listing would include the set of data blocks, i.e., the first set of data blocks, which have been accessed the most times for performing data operations.
  • the number of data blocks in the first set may depend on available memory within the host cache 104.
  • the second listing would include the second set of data blocks.
  • the number of data blocks in the second set of data blocks may depend on available memory within the other cache.
  • the caching engine 106 generates the first listing and the second listing for the host cache and the other cache, respectively.
  • the first set of data blocks is cached according to the first listing into the host cache
  • the second set of data blocks is cached according to the second listing into the other cache.
  • the first listing and the second listing are provided to the host cache 104.
  • the host cache 104 caches the data blocks in accordance with the first listing.
  • data blocks which are accessed the most times are cached within the host cache 104.
  • the host cache 104 provides the second listing to the other cache for caching.
  • the other cache caches the data blocks in accordance with the second listing.
  • the caching engine 106 may initiate caching of the first set of data blocks at the host cache 104 and the second set of data blocks at the other cache.
  • a first set of data blocks is identified, from amongst the plurality of data blocks, as most accessed.
  • the first set of data blocks may be identified based on a number of times each data block has been accessed and a caching attribute of the host cache.
  • the caching attribute may include cache identity, type of memory, and available memory.
  • a counter may be assigned to every data block that is referenced in the host cache 104 and incremented by one each time a reference is made to that data block. Based on the counters, the first set of data blocks may be identified.
  • the caching engine 106 of the host computing system 102 may identify the first set of data blocks, from amongst a plurality of data blocks referenced in the host cache 104 of the host computing system 102.
  • a first listing for the host cache is generated, such that the first listing lists the first set of data blocks.
  • the first listing would include the data blocks which have been accessed the most times.
  • the caching engine 106 generates the first listing for the host cache 104 based on the number of times each data block has been accessed and its caching attribute.
  • At block 506, at least a second set of data blocks, other than the first set of data blocks, is identified for at least one other cache from the remaining plurality of data blocks.
  • the second set of data blocks is identified based on a caching attribute of the other cache and the number of times each of the plurality of data blocks has been accessed.
  • the other cache may include at least one of a secondary host cache 206 within the host computing system 102, a storage cache 222 associated with a storage array 220 in communication with the host computing system 102, and a secondary storage cache 224 within the storage array 220.
  • caching attributes are attributes corresponding to the respective caches, and may include cache identity, type of memory, and available memory.
  • the sets of data blocks are identified. For example, if there are two caches, then two sets of data blocks are identified. In another example, if there are three caches, then three sets of data blocks are identified.
  • a set of data blocks for a cache is identified based on a caching priority of that cache. For instance, if a first cache (C1) has a caching priority greater than a caching priority of a second cache (C2), then a set of data blocks that is identified for the first cache (C1) includes data blocks that are accessed most, and a set of data blocks that is identified for the second cache (C2) includes data blocks that are accessed most after the data blocks identified for the first cache (C1).
  • a second set of data blocks other than the first set of data blocks is identified from the remaining plurality of data blocks.
  • the second set of data blocks includes data blocks that are accessed most after the first set of data blocks.
  • the caching engine 106 identifies the second set of data blocks other than the first set of data blocks for the other cache, from the remaining plurality of data blocks.
  • the caching engine 106 may determine the second set of data blocks based on the caching attribute of the other cache and the number of times each of data block has been accessed.
  • a second listing is generated for the other cache, such that the second listing lists the second set of data blocks.
  • a listing is generated for each cache.
  • for example, the second listing may list a second set of data blocks and, if a third cache is present, a third listing may list a third set of data blocks.
  • the listings may be generated based on the caching priority associated with each cache.
  • the caching engine 106 of the host computing system 102 may generate the second listing for the other cache, such that the second listing lists the second set of data blocks.
  • the first listing for the host cache 104 includes a set of data blocks that are most accessed and the second listing for the other cache includes a set of data blocks that are accessed most after the set of data blocks identified for the host cache 104.
  • caching of data blocks is initiated in the host cache in accordance with the first listing, and in the other cache in accordance with the second listing.
  • the caching engine 106 may provide the first listing and the second listing to the host cache 104.
  • the host cache 104 may further communicate the second listing to the other cache using a communication channel 226.
  • Example of the communication channel 226 includes, but is not limited to, a Small Computer System Interface (SCSI) command.
  • the method blocks of the caching operation described above may be repeated to periodically generate the listings for the caches. Further, the listings may be periodically communicated to the host cache 104 for initiating caching of data blocks.
  • FIG. 6 illustrates a block diagram of a network environment 600 implementing a non-transitory computer-readable medium, for caching data blocks, in accordance with an example of the present subject matter.
  • the network environment 600 may comprise at least a portion of a public networking environment or a private networking environment, or a combination thereof.
  • the network environment 600 includes a processing resource 602 communicatively coupled to a non-transitory computer readable medium 604, hereinafter referred to as computer readable medium 604, through a communication link 606.
  • the processing resource 602 can be a computing device, such as a host computing system 102.
  • the computer readable medium 604 can be, for example, an internal memory device of the computing device or an external memory device.
  • the communication link 606 may be a direct communication link, such as any memory read/write interface.
  • the communication link 606 may be an indirect communication link, such as a network interface.
  • the processing resource 602 can access the computer readable medium 604 through a network 608.
  • the network 608 may be a single network or a combination of multiple networks and may use a variety of different communication protocols.
  • the processing resource 602 and the computer readable medium 604 may also be coupled to data sources 610 through the communication link 606, and/or to communication devices 612 over the network 608.
  • the coupling with the data sources 610 enables receiving the requested data in an offline environment.
  • the coupling with the communication devices 612 enables receiving the requested data in an online environment.
  • the computer readable medium 604 includes a set of computer readable instructions, implementing a caching module 614.
  • the set of computer readable instructions can be accessed by the processing resource 602 through the communication link 606 and subsequently executed to perform acts for caching the data blocks.
  • the execution of the instructions by the processing resource 602 has been described with reference to various components introduced earlier with reference to description of FIGS. 1, 2, and 3.
  • the caching module 614 may initiate a caching operation for a computing environment comprising a plurality of caches distributed between a host computing system 102 storing data in data blocks and a storage array 220.
  • the host computing system 102 may be in communication with the storage array 220 through a communication channel 226.
  • Example of the communication channel 226 includes a Small Computer System Interface (SCSI) command.
  • examples of the plurality of caches distributed between the host computing system 102 and the storage array 220 include, but are not limited to, a host cache 104, a secondary host cache 206, a storage cache 222, and a secondary storage cache 224.
  • the host cache 104 and the secondary host cache 206 are associated with the host computing system 102
  • the storage cache 222 and the secondary storage cache 224 are associated with the storage array 220.
  • the caching module 614 may determine a number of times each of the data blocks has been accessed for data operations, such as read-write operations. To determine the number of times each of the data blocks has been accessed, the caching module 614 may determine a reference count indicating the number of times each of the data blocks has been accessed. In an example, a counter may be assigned to every data block and the counter may be incremented by one each time a reference is made to that data block.
  • the caching module 614 may obtain a caching attribute from each cache.
  • caching attributes are attributes corresponding to the respective caches, and may include cache identity, type of memory, and available memory.
  • the caching attributes may be obtained by way of a communication command over the communication channel 226.
  • the caching module 614 may generate listings of data blocks for caching the data blocks.
  • the caching module 614 generates a primary listing for the host cache 104. The primary listing provides a first set of data blocks which are accessed the maximum number of times for the data operations. Further, the caching module 614 may generate secondary listings for other caches for data blocks other than the first set of data blocks.
  • the caching module 614 initiates caching of data blocks in the host cache 104 in accordance with the primary listing, and in the other caches in accordance with the secondary listings.
  • Although the caching technique or mechanism for the plurality of data blocks has been described in language specific to structural features and/or methods, it is to be understood that the present subject matter is not necessarily limited to the specific features or methods described. Rather, the specific features and methods are disclosed and explained in the context of a few implementations of the caching technique or mechanism for the plurality of data blocks.
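The listing-generation steps enumerated above can be sketched in code. The following Python sketch is purely illustrative (the `partition_blocks` function, its names, and the sample figures are assumptions, not part of the original disclosure): caches are visited in descending order of caching priority, and each receives the most-accessed remaining data blocks, limited by its available memory.

```python
def partition_blocks(access_counts, caches, block_size):
    """Sketch of the listing generation described above.

    access_counts: {block_id: number of times accessed}
    caches: list of (cache_name, available_memory_in_bytes), ordered
            by descending caching priority (host cache first)
    Returns {cache_name: [block_ids]} -- one listing per cache.
    """
    # Order block ids by access count, most accessed first.
    ordered = sorted(access_counts, key=access_counts.get, reverse=True)
    listings = {}
    start = 0
    for name, available_memory in caches:
        capacity = available_memory // block_size  # blocks that fit
        listings[name] = ordered[start:start + capacity]
        start += capacity
    return listings

# Illustrative figures: two caches, 8 KB available each, 4 KB blocks.
caches = [("host_cache", 8192), ("storage_cache", 8192)]
counts = {"b%d" % i: 100 - i for i in range(6)}  # b0 accessed most
listings = partition_blocks(counts, caches, block_size=4096)
print(listings)  # host cache receives the two most-accessed blocks
```

As described in the bullets above, each cache thus caches a different set of data blocks, with the host cache holding the most-accessed set.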


Abstract

Examples of computing system cache are described herein. According to an example, a first set of data blocks that are most accessed and a second set of data blocks that are accessed most after the first set of data blocks are identified from amongst a plurality of data blocks referenced in a host cache of a host computing system. The sets of data blocks are identified based on the number of times each data block has been accessed. Further, a first listing and a second listing are generated for the host cache and another cache, respectively. The first listing lists the first set of data blocks and the second listing lists the second set of data blocks. Thereafter, the first set of data blocks is cached into the host cache according to the first listing and the second set of data blocks is cached into the other cache according to the second listing.

Description

COMPUTING SYSTEM CACHE
BACKGROUND
[0001] Organizations rely on and produce ever-increasing amounts of data, which may be stored for varying time periods. The data is generally stored in persistent storages either within, or coupled to, computer systems. However, owing to the slow speed of the persistent storages, the average access time for reading data may be high. To reduce the average access time, a copy of frequently or recently accessed data is cached in a cache memory. A cache memory is relatively small, but faster than a persistent storage. Whenever a data request is received by a computer system, the cache memory, instead of the persistent storage, is queried first to check if the requested data is in the cache memory. If the requested data is in the cache memory, then the data is fetched more quickly, which may reduce the average access time for reading the data.
BRIEF DESCRIPTION OF THE DRAWINGS
[0002] The following detailed description references the drawings, wherein:
[0003] FIG. 1 illustrates a computing environment comprising a host computing system for caching a plurality of data blocks, according to an example of the present subject matter;
[0004] FIG. 2 illustrates a computing environment comprising various components of a host computing system for caching a plurality of data blocks, according to an example of the present subject matter;
[0005] FIG. 3 illustrates a computing environment for caching a plurality of data blocks, according to an example of the present subject matter;
[0006] FIG. 4 illustrates an example method for caching a plurality of data blocks, according to an example of the present subject matter;
[0007] FIG. 5 illustrates another example method for caching a plurality of data blocks, according to an example of the present subject matter; and
[0008] FIG. 6 is a block diagram of a network environment implementing a non-transitory computer-readable medium for caching data blocks, according to an example of the present subject matter.
DETAILED DESCRIPTION
[0009] Generally, a host computing system may store data in a storage array. Such a storage array may be in communication with the host computing system. Once stored, the data may be accessed by the host computing system. The host computing system may access the storage array for performing write and read operations on the data stored in the storage array. In such systems, both the host computing system and the storage array may comprise multiple caches at different layers, where each cache may function independently, without coordinating with the other caches. As a consequence, each cache may end up caching the same data.
[0010] Caching of the same data within all or some of the caches may reduce the effective storage space of the caches, and would be costly and inefficient. This may also result in an increase in power consumption, and may adversely affect the performance of the system and add to caching and latency overheads. Furthermore, the same caching technique or mechanism may also be implemented at the different layers, which in turn may require additional computational capacity. Also, when multiple caches are present in a computer system, data gets flushed from each cache to a lower level cache, thereby causing performance overheads.
[0011] Approaches for implementing caching techniques for a host computing system and a storage array are described. Both the host computing system and the storage array may comprise multiple caches. In an example, for effecting caching of data blocks within the caches of the host computing system and the storage array, multiple listings of data blocks may be generated, with one listing for each cache. Each listing lists a different set of data blocks for caching. Consequently, each cache caches a different set of data blocks. Examples of the caches distributed between the host computing system and the storage array include, but are not limited to, a host cache and a storage cache. The host cache may be associated with a caching priority greater than a caching priority of the storage cache. Further, an access latency of the host cache is lower than an access latency of the storage cache. Access latency may be understood as the time taken to access data. In the present example, two listings are generated, one for the host cache and the other for the storage cache. Each of the listings lists a different set of data blocks for caching within the respective host cache and the storage cache. Thus, multiple copies of the same data are not cached concurrently within the host cache and the storage cache. As a result, memory within the caches of the host computing system and the storage array would be efficiently utilized.
[0012] In accordance with an example, a listing of data blocks may be generated for each cache. In an example, the listings may be generated based on a number of times each of the data blocks has been accessed for data operations, and a caching attribute corresponding to each of the host cache and the storage cache. For instance, for generating a listing for the host cache, a set of data blocks that are most accessed is identified from amongst the data blocks. Further, for generating a listing for the storage cache, a set of data blocks that are accessed most after the set of data blocks identified for the host cache, is identified. The listing for the storage cache lists the next most accessed set of data blocks.
[0013] Thereafter, the generated listings are communicated to the host cache. In an example, the listings may be periodically generated and communicated to the host cache. Once communicated, caching of data blocks is initiated in the host cache in accordance with the listing corresponding to the host cache, i.e., the set of data blocks identified for the host cache is cached at the host cache. The host cache may further communicate the listing corresponding to the storage cache to the storage cache for caching. In a similar manner, if the host computing system and the storage array include additional caches, listings for such additional caches may accordingly be generated and provided to the respective caches for caching of data blocks.
[0014] With the approaches described above, multiple copies of same data are not cached concurrently within the host cache and the storage cache. As a result, memory within the host cache and the storage cache would be efficiently utilized. Further, since the most accessed data blocks are cached in the host cache, Input/Output (I/O) response time is also reduced.
[0015] The various approaches are further described in conjunction with the following figures. It should be understood that the description and figures merely illustrate the principles of the present subject matter. Further, various arrangements may be devised that, although not explicitly described or shown herein, embody the principles of the present subject matter and are included within its scope.
[0016] The above approaches are further described with reference to FIGS. 1 to 6. It should be noted that the description and figures merely illustrate the principles of the present subject matter. It may be understood that various arrangements may be devised that, although not explicitly described or shown herein, embody the principles of the present subject matter. Further, while aspects of the described system and method for implementing a caching technique or mechanism can be implemented in any number of different computing systems, environments, and/or implementations, the examples and implementations are described in the context of the following system(s).
[0017] FIG. 1 illustrates a computing environment 100 comprising a host computing system 102 for caching a plurality of data blocks, according to an example of the present subject matter. In an example, the host computing system 102 may be deployed in an environment having a network-based system for data storage and communication. The host computing system 102 may be implemented on a computing device, such as a laptop computer, a desktop computer, a workstation, or a server. In the present example, the host computing system 102 includes a host cache 104 and a caching engine 106. The host cache 104 may include a plurality of data blocks which store data. Further, the host computing system 102 may be in communication with a storage array (not shown in FIG. 1). The storage array may include a storage cache. In an example, the host cache 104 may be associated with a caching priority greater than a caching priority of the storage cache.
[0018] In operation, the host computing system 102 may initiate a caching technique or mechanism on the data blocks referenced in the host cache 104. On initiation of the caching technique or mechanism, the caching engine 106 may obtain a caching attribute from the host cache 104. In a similar manner, the caching engine 106 may also obtain, from the storage cache its caching attribute. Caching attributes may be considered as attributes corresponding to respective caches, and may include cache identity, type of memory, and available memory. It should be noted that the present caching attributes are illustrative, and should not be construed as limitations onto the present subject matter.
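By way of illustration, the caching attribute described above may be pictured as a small record. The following Python sketch is illustrative only; the `CachingAttribute` name and its field names are assumptions, not part of the original disclosure:

```python
from dataclasses import dataclass

@dataclass
class CachingAttribute:
    """Attributes a cache reports to the caching engine, as described
    above: a cache identity, the type of memory, and the available memory."""
    cache_id: str
    memory_type: str       # e.g. "DRAM" for a host cache, "SSD" for others
    available_memory: int  # bytes currently available for caching

# Illustrative attributes for a host cache and a storage cache.
host_attr = CachingAttribute("C1", "DRAM", 1 << 20)      # 1 MB available
storage_attr = CachingAttribute("C3", "SSD", 16 << 20)   # 16 MB available
print(host_attr.available_memory)  # 1048576
```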
[0019] Returning to the present caching technique or mechanism, the caching engine 106 may determine a reference count indicating the number of times each of the data blocks has been accessed for data operations. Subsequently, the caching engine 106 may generate a host listing for the host cache 104 based on the number of times each data block has been accessed and its corresponding caching attribute. Similarly, the caching engine 106 may generate a storage listing for the storage cache based on the number of times each data block has been accessed and its corresponding caching attribute. As mentioned above, the caching engine 106 may periodically generate the host listing and the storage listing.
[0020] In the present example, for generating the host listing, a first set of data blocks that are most accessed is identified from amongst the plurality of data blocks. In an example, the number of data blocks in the first set may depend on available memory within the host cache 104. The host listing provides or lists the first set of data blocks for caching within the host cache 104. In such a manner, the host listing would include the set of data blocks, i.e., the first set of data blocks, which have been accessed the most times for performing data operations, such as read-write operations.
[0021] Further, for generating the storage listing, a second set of data blocks that are accessed most after the first set of data blocks is identified from amongst the plurality of data blocks. The number of data blocks in the second set may depend on available memory within the storage cache. The storage listing provides the second set of data blocks for caching within the storage cache. Subsequently, the caching engine 106 may initiate caching of data blocks in the host cache 104 in accordance with the host listing, and in the storage cache in accordance with the storage listing.
[0022] It should be noted that as a result of the approaches described above, data blocks which are accessed the most times are determined and cached within the host cache 104 and not within both the host cache 104 and the storage cache. The next set of data blocks which have been less frequently accessed in recent time (i.e., when considered with respect to the first set of data blocks) is cached within the storage cache. In this manner, multiple copies of the same data are not cached concurrently within the host cache 104 and the storage cache. As a result, available memory within the host cache 104 and the storage cache would be efficiently utilized. Further, cost and power consumption associated with implementing the caching technique or mechanism for the host computing system 102 and the storage array are substantially reduced. These and other aspects are described further in detail in conjunction with FIG. 2 and FIG. 3.
[0023] FIG. 2 illustrates a computing environment 200 comprising various components of a host computing system 102 for caching a plurality of data blocks, according to an example of the present subject matter. The host computing system 102 may be implemented on a computing device, such as a laptop computer, a desktop computer, a workstation, or a server. The host computing system 102 includes interface(s) 202 and memory 204. The interface(s) 202 may include a variety of interfaces, for example, interfaces for data input and output devices, referred to as I/O devices, storage devices, network devices, and the like. The interface(s) 202 facilitate communication between the host computing system 102 and various other computing devices. The memory 204 may include any non-transitory computer-readable medium including, for example, volatile memory, such as RAM, or non-volatile memory, such as EPROM, flash memory, and the like.
[0024] The host computing system 102 further includes a host cache 104 and a secondary host cache 206. The host cache 104 may comprise a plurality of data blocks which store data, where size of each data block may be equal. In an example, the host cache 104 may cache data blocks that have been accessed maximum number of times for data operations, such as read-write operations, and the secondary host cache 206 may cache data blocks that are accessed most after the data blocks cached into the host cache 104. Further, examples of the secondary host cache 206 include, but are not limited to, a Solid-State Drive (SSD) based cache and a Direct-Attached Storage (DAS) based cache.
[0025] The host computing system 102 may further include engine(s) 208 and data 210. The engine(s) 208 may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the engine(s) 208. In examples described herein, such combinations of hardware and programming may be implemented in a number of different ways. For example, the programming for the engine(s) 208 may be processor executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the engine(s) 208 may include a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement engine(s) 208. In such examples, the host computing system 102 may include the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the host computing system 102 and the processing resource. In other examples, the engine(s) 208 may be implemented by electronic circuitry.
[0026] The data 210 includes data that is either predefined or generated as a result of the functionalities implemented by any of the engine(s) 208. In an example, the engine(s) 208 include a caching engine 106 and other engine(s) 212. The other engine(s) 212 may implement functionalities that supplement applications or functions performed by the host computing system 102. Further, the data 210 may include caching attributes 214, caching information 216, and other data 218.
[0027] According to the present example, the host computing system 102 may be in communication with a storage array 220. Data that is either generated or accessed by the host computing system 102 may be stored in the storage array 220. Once stored, the host computing system 102 may access the storage array 220 for performing write and read operations onto the data stored in the storage array 220. In an example, the storage array 220 may be either within, or coupled to the computing device on which the host computing system 102 is implemented, through a communication channel 226. Further, the storage array 220 may comprise a storage cache 222 and a secondary storage cache 224. Example of the secondary storage cache 224 includes a Solid-State Drive (SSD) based cache.
[0028] Although the present subject matter has been described in the context of the storage array 220 (a Storage Area Network (SAN)), any other storage system, such as Network Attached Storage (NAS) or Direct Attached Storage (DAS), would also be within the scope of the present subject matter. Further, the present subject matter can also be applied to any environment (other than storage) where multi-level caches exist within a same system or across systems, for instance, multi-level caches across client and server systems, and multi-level caches within a host/server between application caches and system/kernel caches.
[0029] In an example, the caching attributes 214 may include information related to each of the host cache 104, the secondary host cache 206, the storage cache 222, and the secondary storage cache 224. The information may include caching attributes corresponding to the respective caches. A caching attribute may include cache identity, type of memory, and available memory. Further, the caching information 216 may include information related to the plurality of data blocks, such as size of the data blocks and a number of times each of the plurality of data blocks has been accessed for data operations, such as read-write operations. The other data 218 may include data generated and saved by the engine(s) 208 for implementing various functionalities of the host computing system 102.
[0030] In an example, the above described caches of the host computing system 102 and the storage array 220 may be organized as a hierarchy of cache levels, with a first level cache at the top of the hierarchy. Further, the caches in the hierarchy are ordered by their access latency and caching priority, such that the first level cache is associated with lowest access latency, and consequently highest caching priority. Access latency may be understood as time taken to access data. In the present example, the host cache 104 is a first level cache (C1 ), the secondary host cache 206 is a second level cache (C2), the storage cache 222 is a third level cache (C3), and the secondary storage cache 224 is a fourth level cache (C4). Accordingly, the host cache 104 (i.e., C1 ) is associated with a caching priority greater than a caching priority of the secondary host cache 206 (C2), the storage cache 222 (C3), and the secondary storage cache 224 (C4). Further, an access latency of the host cache 104 is lower than access latencies of the other caches.
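The hierarchy described above may be sketched as follows. The latency figures below are purely illustrative assumptions, used only to show that ordering the caches by ascending access latency yields the caching priority order, with the first level cache at the top:

```python
# Illustrative access latencies (nanoseconds) for the four caches in
# the example above; lower latency implies higher caching priority.
caches = {
    "host_cache (C1)": 100,
    "secondary_host_cache (C2)": 1_000,
    "storage_cache (C3)": 10_000,
    "secondary_storage_cache (C4)": 100_000,
}

# Ordering by ascending access latency gives the cache levels: the
# first level cache has the lowest latency and the highest priority.
hierarchy = sorted(caches, key=caches.get)
for level, name in enumerate(hierarchy, start=1):
    print("level %d: %s" % (level, name))
```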
[0031] In operation, the host computing system 102 may initiate a caching operation. The caching operation may be directed for the plurality of data blocks within the host cache 104. The caching operation may be implemented using a variety of caching techniques, which include, but are not limited to, a Least Recently Used (LRU) technique. Such techniques, such as LRU, aim at keeping recently accessed data blocks at the top of a cache and data blocks that have been accessed least recently at the bottom of the cache. In an example, the caching operation may facilitate in identifying data blocks to be cached within different caches.
[0032] On initiation of the caching operation, the caching engine 106 may determine presence of caches distributed between the host computing system 102 and the storage array 220. For instance, presence of caches, such as the host cache 104, the secondary host cache 206, the storage cache 222, and the secondary storage cache 224 may be determined. Upon determining the presence of the caches, the caching engine 106 may obtain a corresponding caching attribute from each of the host cache 104, the secondary host cache 206, the storage cache 222, and the secondary storage cache 224. A caching attribute may be considered as attributes corresponding to respective caches, and may include cache identity, type of memory, and available memory.
[0033] In an example, the caching engine 106 may obtain the corresponding caching attribute from the storage cache 222 and the secondary storage cache 224 by way of a communication command over the communication channel 226. Examples of the communication channel 226 include, but are not limited to, Storage Area Network (SAN) protocols and Network Attached Storage (NAS) protocols. In an example, the SAN protocols include Fibre Channel (FC), Internet Small Computer System Interface (iSCSI), and Serial Attached SCSI (SAS) on a media, such as Ethernet and FC. Further, examples of NAS protocols include Network File System (NFS), Common Internet File System (CIFS), The Hypertext Transfer Protocol (HTTP), File Transfer Protocol (FTP), and the like. In such a manner, the host computing system 102 becomes aware of caching and hardware capabilities of the storage cache 222 and the secondary storage cache 224 within the storage array 220. In the present example, the caching engine 106 may store the caching attributes in the caching attributes 214 for future reference.
[0034] Subsequently, the caching engine 106 may determine a number of times each of the plurality of data blocks referenced in the host cache 104 has been accessed for data operations. In an example, the caching engine 106 may maintain a Least Recently Used (LRU) list to track data blocks that are referenced recently in the host cache 104. The caching engine 106 may also determine a reference count indicating the number of times each of the plurality of data blocks has been accessed. In an example, a counter may be assigned to every data block that is a part of the LRU list and incremented by one each time a reference is made to that data block in the host cache 104. In such a manner, the caching engine 106 is able to determine the most recently accessed data blocks and the least recently accessed data blocks. In an example, the caching engine 106 may store information related to the access count of the data blocks in the caching information 216.
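A minimal sketch of this tracking, assuming an in-memory map from block identifiers to reference counts (the class and method names are illustrative):

```python
from collections import OrderedDict

class ReferenceCountingLRU:
    """Tracks which blocks were referenced recently and how often each was accessed."""

    def __init__(self):
        self._counts = OrderedDict()  # block_id -> reference count

    def reference(self, block_id):
        # Move the block to the most-recently-used end and bump its counter.
        count = self._counts.pop(block_id, 0)
        self._counts[block_id] = count + 1

    def by_access_count(self):
        """Block ids ordered from most accessed to least accessed."""
        return sorted(self._counts, key=self._counts.get, reverse=True)
```

The access-count ordering produced by `by_access_count` is what the listing-generation step described next would consume.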
[0035] Returning to the present caching technique, the caching engine 106 may then generate a listing of data blocks for each cache. Each listing lists a different set of data blocks. A listing for a cache may be generated based on the number of times each data block has been accessed, the size of the data blocks, and the available memory within that cache. In the present example, the caching engine 106 generates a host listing for the host cache 104, such that the host listing provides data blocks for caching within the host cache 104. For generating the host listing, a first set of data blocks that are most accessed is identified from amongst the plurality of data blocks. According to the present example, the first set of data blocks is identified based on the number of times each data block has been accessed, the size of the data blocks, and the available memory within the host cache 104. For instance, if the total number of data blocks referenced recently in the host cache 104 is 500, each data block has a size of 4 kilobytes (KB), and the available memory within the host cache 104 is 1 megabyte (MB), then the first set of data blocks may include the 256 data blocks that have been accessed the maximum number of times for data operations. In such a manner, the host listing would include the first set of data blocks, which have been accessed the most number of times for performing data operations, such as read-write operations.
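The sizing arithmetic in the example above can be expressed directly; `first_set_size` is an illustrative helper, not part of the described system:

```python
def first_set_size(total_blocks, block_size_bytes, available_bytes):
    """How many of the most-accessed blocks fit in the cache's available memory."""
    return min(total_blocks, available_bytes // block_size_bytes)

# The paragraph's example: 500 referenced blocks of 4 KB each, with 1 MB
# available in the host cache, yields a first set of 256 blocks.
print(first_set_size(500, 4 * 1024, 1 * 1024 * 1024))  # 256
```

When the available memory exceeds the total referenced data, the first set simply contains every referenced block.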
[0036] Further, the caching engine 106 generates a secondary host listing for the secondary host cache 206 based on a similar approach to that used for the host cache 104. The secondary host listing is generated by identifying a second set of data blocks that are most accessed after the first set of data blocks, from amongst the plurality of data blocks. The second set of data blocks is identified based on the number of times each data block has been accessed, the size of the data blocks, and the available memory within the secondary host cache 206.
[0037] In the manner as described above, cache listings for other cache levels are also generated. For example, the caching engine 106 may generate a storage listing for the storage cache 222, and a secondary storage listing for the secondary storage cache 224. The storage listing lists a third set of data blocks that are accessed most after the first and second set of data blocks, for caching within the storage cache 222. The secondary storage listing lists a fourth set of data blocks that are accessed most after the first, second, and third set of data blocks, for caching within the secondary storage cache 224. The listings may be generated based on the caching priority associated with each cache. In an example, the caching engine 106 may periodically generate the host listing, the secondary host listing, the storage listing, and the secondary storage listing. For this, the caching engine 106 may periodically determine the number of times each data block has been accessed.
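Under the stated priority ordering, generating the four listings amounts to slicing the access-ordered block list by each cache's capacity. The sketch below assumes each cache is described by a name and a capacity in blocks; these names are illustrative:

```python
def generate_listings(blocks_by_access, caches):
    """Partition blocks (most accessed first) across caches.

    `caches` lists (name, capacity_in_blocks) pairs in decreasing caching
    priority, so each cache receives a disjoint, progressively less
    frequently accessed set of blocks.
    """
    listings, start = {}, 0
    for name, capacity in caches:
        listings[name] = blocks_by_access[start:start + capacity]
        start += capacity
    return listings
```

For instance, with caches `[("host", 2), ("secondary_host", 2), ("storage", 3)]`, the two most accessed blocks land in the host listing, the next two in the secondary host listing, and so on, with no block appearing in two listings.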
[0038] To this end, the caching engine 106 may initiate caching of data blocks in the caches according to respective listings. For example, the caching engine 106 may provide all the listings to the host cache 104. In an example, the caching engine 106 may periodically communicate the listings to the host cache 104. The host cache 104 may further communicate the secondary host listing to the secondary host cache 206, and the storage listing and the secondary storage listing to the storage cache 222 and the secondary storage cache 224, respectively. In an example, the host cache 104 may communicate the storage listing and the secondary storage listing to the storage cache 222 and the secondary storage cache 224, respectively, using the communication channel 226.
[0039] Once the host listing is provided to the host cache 104, the host cache 104 may cache data blocks in the host cache 104 in accordance with the host listing. Similarly, the secondary host cache 206, the storage cache 222, and the secondary storage cache 224 may cache data blocks in accordance with respective listings. In an example, the secondary host cache 206, the storage cache 222, and the secondary storage cache 224 may cache all or some of the data blocks listed in the respective listings. Therefore, the host cache 104 acts as a master layer and manages caching of data blocks at other caches (slave layers).
[0040] Although a single host computing system 102 is described in communication with the storage array 220, in an example implementation, multiple host computing systems may be implemented in communication with the storage array 220. In such a scenario, each host computing system may generate listings for the storage array 220 based on a number of caches within the storage array 220. On receiving the listings from the multiple host computing systems, the storage array 220 may sort or merge the listings.
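One way the storage array might merge listings received from multiple hosts is to keep each block once, in first-seen order; this is an illustrative sketch of such a merge rather than a mandated policy:

```python
def merge_listings(per_host_listings):
    """Merge per-host listings so each block is cached only once."""
    merged, seen = [], set()
    for listing in per_host_listings:
        for block in listing:
            if block not in seen:
                seen.add(block)
                merged.append(block)
    return merged
```

A block requested by several hosts therefore occupies storage-cache memory only once, consistent with the duplicate-avoidance goal described below.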
[0041] As a result of the approaches described above, the most recently accessed data blocks are cached within the host cache 104, the second most recently accessed data blocks (i.e., when considered with respect to the first set of data blocks) are cached within the secondary host cache 206, the third most recently accessed data blocks (i.e., when considered with respect to the first and second sets of data blocks) are cached within the storage cache 222, and the fourth most recently accessed data blocks (i.e., when considered with respect to the first, second, and third sets of data blocks) are cached within the secondary storage cache 224.
[0042] Since each listing lists a different set of data blocks, multiple copies of the same data are not cached concurrently within the host cache 104, the secondary host cache 206, the storage cache 222, and the secondary storage cache 224. Thus, the available memory within the caches is efficiently utilized. Further, the caching operation is implemented or initiated on the host cache 104 and not on all the caches, thereby reducing the overhead of executing the caching operation at each cache.
[0043] Moreover, as described above, since the caches are ordered by their access latency and caching priority, the listings are generated based on the caching priority associated with each cache. Thus, access latency for the most recently accessed data blocks is significantly reduced. This provides for significant reduction in storage as well as caching and latency overheads. As a consequence, performance and efficiency of the host computing system 102 are significantly improved.
[0044] Although general concepts related to the claimed subject matter have been described in conjunction with the LRU caching operation, other similar caching operations, such as a Read-ahead caching operation for sequential reads, and a flushing operation, would also be within the scope of the present subject matter. In an example, the Read-ahead caching operation may be implemented at the host cache 104 and not at other caches.
[0045] Further, according to the flushing operation, when a cache, for example, the host cache 104, reaches its capacity to cache data blocks, the data blocks may be flushed to lower caches. In the present example, the data blocks may be flushed from the host cache 104 directly to the storage cache 222, thereby avoiding multi-level flushes. Consequently, performance overhead is significantly reduced. In an example, some Logical Unit Number (LUN) attributes, such as Redundant Array of Independent Disks (RAID) type, stripe/strip size, and the like, may be propagated from the storage array 220 to the host computing system 102 for initiation of the caching operation. For example, the flushing operation may be delayed to take advantage of a full-stripe write, i.e., flushing of consecutive data blocks may be delayed to accumulate the data blocks of a full stripe. As a result, the number of times that a parity computation is performed may be minimized. In case of multiple caches in the host computing system 102 and the storage array 220, the consecutive data blocks may get flushed directly to the storage array 220 to form full-stripe data.
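The delayed full-stripe flush can be sketched as follows, assuming the stripe size (in blocks) has been propagated from the storage array as a LUN attribute; the class name and return convention are illustrative assumptions:

```python
class StripeAwareFlusher:
    """Accumulates dirty blocks and flushes only complete stripes.

    Flushing a full stripe lets parity be computed once from the new
    data, avoiding a read-modify-write cycle per individual block.
    """

    def __init__(self, blocks_per_stripe):
        self.blocks_per_stripe = blocks_per_stripe
        self.pending = {}  # stripe index -> set of block offsets in that stripe

    def add_dirty_block(self, block_number):
        stripe, offset = divmod(block_number, self.blocks_per_stripe)
        offsets = self.pending.setdefault(stripe, set())
        offsets.add(offset)
        if len(offsets) == self.blocks_per_stripe:
            del self.pending[stripe]
            return stripe   # stripe is complete: flush it now
        return None         # keep accumulating consecutive blocks
```

With four blocks per stripe, blocks 0 to 2 are held back, and adding block 3 completes stripe 0 and triggers its flush as a single full-stripe write.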
[0046] Further, although a single secondary cache is shown in both the host computing system 102 and the storage array 220, there may be more than one secondary cache in both the host computing system 102 and the storage array 220, as illustrated in FIG. 3.
[0047] FIG. 3 illustrates a computing environment 300 for caching a plurality of data blocks, according to an example of the present subject matter. As shown in FIG. 3, the computing environment 300 includes a host computing system 102 connected to a storage array 220 via a communication channel 226. Example of the communication channel 226 includes a Small Computer System Interface (SCSI) command. Exchange of information between the host computing system 102 and the storage array 220 takes place via the communication channel 226. The host computing system 102 includes a host cache 104, a Solid-State Drive (SSD) based cache 302, and a Direct-Attached Storage (DAS) based cache 304. The host cache 104 may comprise a plurality of data blocks. Further, the storage array 220 includes a storage cache 222 and an SSD based cache 306. As can be seen in FIG. 3, the caches are organized in a hierarchical manner, where the host cache 104 is a first level cache in the hierarchy and the SSD based cache 306 is a last level cache. Further, the host cache 104 is associated with a caching priority greater than the caching priorities of the SSD based cache 302, the storage cache 222, and the SSD based cache 306. Moreover, an access latency of the host cache 104 is lower than the access latencies of the other caches.
[0048] In operation, the host computing system 102 may initiate a caching operation on the plurality of data blocks referenced in the host cache 104. On initiation of the caching operation, the host computing system 102 may obtain a caching attribute from the host cache 104. In a similar manner, the host computing system 102 may also obtain a corresponding caching attribute from each of the SSD based cache 302, the DAS based cache 304, the storage cache 222, and the SSD based cache 306. The caching attribute may include one of cache identity, type of memory and available memory of any one of the host cache, the storage cache, a secondary host cache, and a secondary storage cache. In an example, the host computing system 102 may obtain the caching attributes from the caches within the storage array 220 via the communication channel 226. The host computing system 102 may then periodically generate a listing of data blocks for each cache based on the caching attributes. Upon generating the listings, the host computing system 102 may initiate caching of data blocks in the caches in accordance with respective listings.
[0049] FIGS. 4 and 5 illustrate methods 400 and 500, respectively, for caching a plurality of data blocks, according to an example implementation of the present subject matter. The order in which the methods are described is not intended to be construed as a limitation, and any number of the described method blocks may be combined in any order to implement the aforementioned methods, or an alternative method. Furthermore, the methods 400 and 500 may be implemented by a processing resource or computing device(s) through any suitable hardware, non-transitory machine readable instructions, or a combination thereof.
[0050] It may also be understood that methods 400 and 500 may be performed by programmed computing devices, such as the host computing system 102 as depicted in FIGS. 1-3. Furthermore, the methods 400 and 500 may be executed based on instructions stored in a non-transitory computer readable medium. The non-transitory computer readable medium may include, for example, digital memories, magnetic storage media, such as one or more magnetic disks and magnetic tapes, hard drives, or optically readable digital data storage media. Although the methods 400 and 500 are described below with reference to the host computing system 102 as described above, other suitable systems for the execution of these methods can also be utilized. Additionally, implementation of these methods is not limited to such examples.
[0051] With reference to the method 400 as depicted in FIG. 4, at block 402, the method 400 includes identifying a first set of data blocks that are most accessed and a second set of data blocks that are accessed most after the first set of data blocks, from amongst a plurality of data blocks referenced in a host cache of a host computing system, where the first set of data blocks and the second set of data blocks are identified based on a number of times each of the plurality of data blocks has been accessed for data operations, such as read-write operations. In an example, the first set of data blocks is identified for the host cache and the second set of data blocks is identified for another cache. In an example, the host cache may be in communication with the other cache over a communication channel.
[0052] In an example, the other cache may comprise a secondary host cache within the host computing system, a storage cache associated with a storage array in communication with the host computing system, or a secondary storage cache within the storage array. Further, the first set of data blocks and the second set of data blocks are identified based on the number of times each data block has been accessed and a caching attribute corresponding to each of the host cache and the other cache. The caching attribute may include cache identity, type of memory, and available memory. In an example, the caching engine 106 of the host computing system 102 may identify the first set of data blocks that are most accessed and the second set of data blocks that are accessed most after the first set of data blocks, from amongst a plurality of data blocks referenced in the host cache 104 of the host computing system 102.
[0053] At block 404, a first listing and a second listing are generated for the host cache and the other cache, respectively, such that the first listing lists the first set of data blocks and the second listing lists the second set of data blocks. In an example, the first listing and the second listing may be periodically generated. In such a manner, the first listing would include the set of data blocks, i.e., the first set of data blocks, which have been accessed the most number of times for performing data operations. In an example, the number of data blocks in the first set may depend on the available memory within the host cache 104. Further, the second listing would include the second set of data blocks. In an example, the number of data blocks in the second set may depend on the available memory within the other cache. In an example, the caching engine 106 generates the first listing and the second listing for the host cache and the other cache, respectively.
[0054] At block 406, the first set of data blocks is cached according to the first listing into the host cache, and the second set of data blocks is cached according to the second listing into the other cache. In an example, the first listing and the second listing are provided to the host cache 104. The host cache 104 caches the data blocks in accordance with the first listing. Thus, data blocks which are accessed most number of times are cached within the host cache 104. Further, the host cache 104 provides the second listing to the other cache for caching. The other cache caches the data blocks in accordance with the second listing. In an example, the caching engine 106 may initiate caching of the first set of data blocks at the host cache 104 and the second set of data blocks at the other cache.
[0055] With reference to the method 500 as depicted in FIG. 5, at block 502, for caching a plurality of data blocks referenced in a host cache of a host computing system, a first set of data blocks is identified, from amongst the plurality of data blocks, as most accessed. For instance, the first set of data blocks may be identified based on a number of times each data block has been accessed and a caching attribute of the host cache. The caching attribute may include cache identity, type of memory, and available memory. In an example, a counter may be assigned to every data block that is referenced in the host cache 104 and incremented by one each time a reference is made to that data block. Based on the counters, the first set of data blocks may be identified. In an example, the caching engine 106 of the host computing system 102 may identify the first set of data blocks, from amongst a plurality of data blocks referenced in the host cache 104 of the host computing system 102.
[0056] At block 504, a first listing for the host cache is generated, such that the first listing lists the first set of data blocks. Thus, the first listing would include the data blocks which have been accessed the most number of times. In an example, the caching engine 106 generates the first listing for the host cache 104 based on the number of times each data block has been accessed and its caching attribute.
[0057] At block 506, at least a second set of data blocks other than the first set of data blocks is identified for at least one other cache, from the remaining plurality of data blocks. In an example, the second set of data blocks is identified based on a caching attribute of the other cache and the number of times each of the plurality of data blocks has been accessed. In an example, the other cache may include at least one of a secondary host cache 206 within the host computing system 102, a storage cache 222 associated with a storage array 220 in communication with the host computing system 102, and a secondary storage cache 224 within the storage array 220. In an example, a caching attribute describes a respective cache, and may include cache identity, type of memory, and available memory. Further, based on the number of caches, the sets of data blocks are identified. For example, if there are two caches, then two sets of data blocks are identified. In another example, if there are three caches, then three sets of data blocks are identified.
[0058] Further, a set of data blocks for a cache is identified based on a caching priority of that cache. For instance, if a first cache (C1 ) has a caching priority greater than a caching priority of a second cache (C2), then a set of data blocks that is identified for the first cache (C1 ) includes data blocks that are accessed most, and a set of data blocks that is identified for the second cache (C2) includes data blocks that are accessed most after the data blocks identified for the first cache (C1 ).
[0059] In the present example, in case the other cache includes the secondary host cache 206, a second set of data blocks other than the first set of data blocks is identified from the remaining plurality of data blocks. The second set of data blocks includes data blocks that are accessed most after the first set of data blocks. In an example, the caching engine 106 identifies the second set of data blocks other than the first set of data blocks for the other cache, from the remaining plurality of data blocks. In an example, the caching engine 106 may determine the second set of data blocks based on the caching attribute of the other cache and the number of times each data block has been accessed.
[0060] At block 508, a second listing is generated for the other cache, such that the second listing lists the second set of data blocks. In an example, a listing is generated for each cache. Thus, if there are two caches, two listings would be generated, i.e., a second listing and a third listing. The second listing may list a second set of data blocks and the third listing may list a third set of data blocks. In an example, the listings may be generated based on the caching priority associated with each cache. Further, the caching engine 106 of the host computing system 102 may generate the second listing for the other cache, such that the second listing lists the second set of data blocks. Thus, the first listing for the host cache 104 includes a set of data blocks that are most accessed and the second listing for the other cache includes a set of data blocks that are accessed most after the set of data blocks identified for the host cache 104.
[0061] At block 510, caching of data blocks is initiated in the host cache in accordance with the first listing, and in the other cache in accordance with the second listing. For example, the caching engine 106 may provide the first listing and the second listing to the host cache 104. The host cache 104 may further communicate the second listing to the other cache using a communication channel 226. Example of the communication channel 226 includes, but is not limited to, a Small Computer System Interface (SCSI) command. Once the first listing is provided to the host cache 104, the host cache 104 may cache data blocks in accordance with the first listing. Similarly, the other cache may cache data blocks in accordance with the second listing.
[0062] The method blocks of the caching operation described above may be repeated to periodically generate the listings for the caches. Further, the listings may be periodically communicated to the host cache 104 for initiating caching of data blocks.
[0063] FIG. 6 illustrates a block diagram of a network environment 600 implementing a non-transitory computer-readable medium, for caching data blocks, in accordance with an example of the present subject matter. The network environment 600 may comprise at least a portion of a public networking environment or a private networking environment, or a combination thereof. In one implementation, the network environment 600 includes a processing resource 602 communicatively coupled to a non-transitory computer readable medium 604, hereinafter referred to as computer readable medium 604, through a communication link 606. In an example, the processing resource 602 can be a computing device, such as a host computing system 102.
[0064] The computer readable medium 604 can be, for example, an internal memory device of the computing device or an external memory device. In one implementation, the communication link 606 may be a direct communication link, such as any memory read/write interface. In another implementation, the communication link 606 may be an indirect communication link, such as a network interface. In such a case, the processing resource 602 can access the computer readable medium 604 through a network 608. The network 608 may be a single network or a combination of multiple networks and may use a variety of different communication protocols.
[0065] The processing resource 602 and the computer readable medium 604 may also be coupled to data sources 610 through the communication link 606, and/or to communication devices 612 over the network 608. The coupling with the data sources 610 enables receiving the requested data in an offline environment, and the coupling with the communication devices 612 enables receiving the requested data in an online environment.
[0066] In one implementation, the computer readable medium 604 includes a set of computer readable instructions, implementing a caching module 614. The set of computer readable instructions, referred to as instructions hereinafter, can be accessed by the processing resource 602 through the communication link 606 and subsequently executed to perform acts for caching the data blocks. For discussion purposes, the execution of the instructions by the processing resource 602 has been described with reference to various components introduced earlier with reference to the description of FIGS. 1, 2, and 3.
[0067] On execution by the processing resource 602, the caching module 614 may initiate a caching operation for a computing environment comprising a plurality of caches distributed between a host computing system 102 storing data in data blocks and a storage array 220. In an example, the host computing system 102 may be in communication with the storage array 220 through a communication channel 226. Example of the communication channel 226 includes a Small Computer System Interface (SCSI) command. Further, examples of the plurality of caches distributed between the host computing system 102 and the storage array 220 include, but are not limited to, a host cache 104, a secondary host cache 206, a storage cache 222, and a secondary storage cache 224. In an example, the host cache 104 and the secondary host cache 206 are associated with the host computing system 102, and the storage cache 222 and the secondary storage cache 224 are associated with the storage array 220.
[0068] Subsequently, the caching module 614 may determine a number of times each of the data blocks has been accessed for data operations, such as read-write operations. To determine the number of times each of the data blocks has been accessed, the caching module 614 may determine a reference count indicating the number of times each of the data blocks has been accessed. In an example, a counter may be assigned to every data block and the counter may be incremented by one each time a reference is made to that data block.
[0069] Thereafter, the caching module 614 may obtain a caching attribute from each cache. A caching attribute describes a respective cache, and may include cache identity, type of memory, and available memory. In an example, the caching attributes may be obtained by way of a communication command over the communication channel 226. Once the caching attributes are obtained, the caching module 614 may generate listings of data blocks for caching the data blocks. In an example, the caching module 614 generates a primary listing for the host cache 104. The primary listing provides a first set of data blocks which are accessed the maximum number of times for the data operations. Further, the caching module 614 may generate secondary listings for other caches for data blocks other than the first set of data blocks. Finally, the caching module 614 initiates caching of data blocks in the host cache 104 in accordance with the primary listing, and in the other caches in accordance with the secondary listings.
[0070] Although implementations of caching technique or mechanism for the plurality of data blocks have been described in language specific to structural features and/or methods, it is to be understood that the present subject matter is not necessarily limited to the specific features or methods described. Rather, the specific features and methods are disclosed and explained in the context of a few implementations for caching technique or mechanism for the plurality of data blocks.

Claims

What is claimed is:
1. A host computing system comprising:
a host cache; and
a caching engine, wherein the caching engine is to,
generate a host listing for the host cache and a storage listing for a storage cache of a storage array in communication with the host computing system, wherein each of the host listing and the storage listing provides data blocks for caching within the respective host cache and the storage cache, and wherein the host listing and the storage listing are generated based on:
a number of times each of a plurality of data blocks in the host cache has been accessed; and
a caching attribute corresponding to each of the host cache and the storage cache.
2. The host computing system as claimed in claim 1, wherein to generate the host listing, the caching engine is to:
identify a first set of data blocks from amongst the plurality of data blocks that are most accessed, based on available memory within the host cache; and generate the host listing, wherein the host listing provides the first set of data blocks for caching within the host cache.
3. The host computing system as claimed in claim 2, wherein to generate the storage listing, the caching engine is to:
identify a second set of data blocks from amongst the plurality of data blocks that are accessed most after the first set of data blocks, based on available memory within the storage cache; and
generate the storage listing, wherein the storage listing provides the second set of data blocks for caching within the storage cache.
4. The host computing system as claimed in claim 3, wherein the caching engine is to further:
determine presence of a secondary host cache associated with the host computing system;
identify a third set of data blocks from amongst the plurality of data blocks that are accessed most after the first set of data blocks and that are accessed more frequently than the second set of data blocks, based on available memory within the secondary host cache; and
generate a secondary host listing for the secondary host cache, wherein the secondary host listing provides the third set of data blocks for caching within the secondary host cache.
5. The host computing system as claimed in claim 4, wherein the caching engine is to further:
determine presence of a secondary storage cache associated with the storage array;
identify a fourth set of data blocks from amongst the plurality of data blocks that are accessed most after the first set of data blocks, the second set of data blocks, and the third set of data blocks, based on a caching attribute of the secondary storage cache; and
generate a secondary storage listing for the secondary storage cache, wherein the secondary storage listing provides the fourth set of data blocks for caching within the secondary storage cache.
6. The host computing system as claimed in claim 1, wherein the caching attribute is one of cache identity, type of memory and available memory of any one of the host cache, the storage cache, a secondary host cache, and a secondary storage cache.
7. The host computing system as claimed in claim 4, wherein the secondary host cache comprises one of a Solid-State Drive (SSD) based cache and a Direct-Attached Storage (DAS) based cache.
8. The host computing system as claimed in claim 1, wherein caching priorities of the host cache, a secondary host cache of the host computing system, the storage cache, and a secondary storage cache of the storage array, are in a decreasing order.
9. The host computing system as claimed in claim 5, wherein the secondary storage cache comprises a Solid-State Drive (SSD) based cache.
10. A method comprising:
identifying a first set of data blocks that are most accessed and a second set of data blocks that are accessed most after the first set of data blocks, from amongst a plurality of data blocks referenced in a host cache of a host computing system, wherein the first set of data blocks and the second set of data blocks are identified based on a number of times each of the plurality of data blocks has been accessed for data operations;
generating a first listing for the host cache and a second listing for other cache in communication with the host cache, such that the first listing lists the first set of data blocks and the second listing lists the second set of data blocks; and
caching of the first set of data blocks according to the first listing into the host cache, and of the second set of data blocks according to the second listing into the other cache.
11. The method as claimed in claim 10, wherein the identifying of the first set of data blocks and the second set of data blocks is based on a caching attribute corresponding to each of the host cache and the other cache.
12. The method as claimed in claim 10, wherein the other cache comprises one of a secondary host cache within the host computing system, a storage cache associated with a storage array in communication with the host computing system, and a secondary storage cache within the storage array.
13. A non-transitory machine-readable storage medium having instructions executable by a processing resource to:
for a computing environment comprising a plurality of caches distributed between a host computing system storing data in data blocks and a storage array, determine a number of times each of the data blocks has been accessed for data operations;
generate a primary listing for a host cache associated with the host computing system, wherein the primary listing provides a first set of data blocks which are accessed the maximum number of times for the data operations;
generate secondary listings for other caches for data blocks other than the first set of data blocks; and
initiate caching of data blocks in the host cache in accordance with the primary listing, and in the other caches in accordance with the secondary listings.
14. The non-transitory machine-readable storage medium as claimed in claim 13, wherein the primary listing and the secondary listings are generated based on caching attributes associated with the host cache and the other caches, respectively.
15. The non-transitory machine-readable storage medium as claimed in claim 14, wherein the caching attributes are obtained by the host computing system using a Small Computer System Interface (SCSI) command.
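Claim 15 recites obtaining the caching attributes via a Small Computer System Interface (SCSI) command. The claims do not specify which SCSI command is used or the layout of its response, so the sketch below parses a purely hypothetical 8-byte response buffer carrying the three caching attributes named in claim 6 (cache identity, type of memory, available memory); the byte layout and memory-type codes are invented for illustration only.

```python
# Hypothetical sketch of extracting caching attributes from a SCSI
# command response, per claims 14-15. The 8-byte big-endian layout
# (2-byte cache id, 2-byte memory-type code, 4-byte available memory
# in MiB) is NOT from the application; it is assumed for illustration.
import struct

MEMORY_TYPES = {0: "DRAM", 1: "SSD", 2: "DAS"}  # hypothetical codes

def parse_caching_attributes(buf):
    """Unpack the three caching attributes named in claim 6 from a
    hypothetical 8-byte big-endian response buffer."""
    cache_id, mem_type, avail_mib = struct.unpack(">HHI", buf)
    return {
        "cache_identity": cache_id,
        "type_of_memory": MEMORY_TYPES.get(mem_type, "unknown"),
        "available_memory_mib": avail_mib,
    }

# Example response: cache id 7, SSD-backed, 4096 MiB available.
response = struct.pack(">HHI", 7, 1, 4096)
attrs = parse_caching_attributes(response)
print(attrs)
```

In a real system the buffer would come from an actual SCSI inquiry issued by the host to the storage array; the parsing step shown here is only one plausible way the host could turn such a response into the attribute values used to size the primary and secondary listings.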
PCT/US2016/024278 2015-07-24 2016-03-25 Computing system cache WO2017019129A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN3822/CHE/2015 2015-07-24
IN3822CH2015 2015-07-24

Publications (1)

Publication Number Publication Date
WO2017019129A1 (en) 2017-02-02

Family ID=57884881

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2016/024278 WO2017019129A1 (en) 2015-07-24 2016-03-25 Computing system cache

Country Status (1)

Country Link
WO (1) WO2017019129A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050138292A1 (en) * 2002-04-01 2005-06-23 Douglas Sullivan Provision of a victim cache within a storage cache heirarchy
US7039765B1 (en) * 2002-12-19 2006-05-02 Hewlett-Packard Development Company, L.P. Techniques for cache memory management using read and write operations
US20130111133A1 (en) * 2011-10-31 2013-05-02 International Business Machines Corporation Dynamically adjusted threshold for population of secondary cache
US20140013052A1 (en) * 2012-07-06 2014-01-09 Seagate Technology Llc Criteria for selection of data for a secondary cache
US20150046633A1 (en) * 2013-08-12 2015-02-12 Kabushiki Kaisha Toshiba Cache control method and storage device

Similar Documents

Publication Publication Date Title
US9495294B2 (en) Enhancing data processing performance by cache management of fingerprint index
US9817765B2 (en) Dynamic hierarchical memory cache awareness within a storage system
US9430404B2 (en) Thinly provisioned flash cache with shared storage pool
EP2891051B1 (en) Block-level access to parallel storage
US10296255B1 (en) Data migration techniques
US10310980B2 (en) Prefetch command optimization for tiered storage systems
US8782335B2 (en) Latency reduction associated with a response to a request in a storage system
JP7116381B2 (en) Dynamic relocation of data using cloud-based ranks
US9817865B2 (en) Direct lookup for identifying duplicate data in a data deduplication system
US20160034394A1 (en) Methods and systems for using predictive cache statistics in a storage system
US9069680B2 (en) Methods and systems for determining a cache size for a storage system
US20160350012A1 (en) Data source and destination timestamps
US10853252B2 (en) Performance of read operations by coordinating read cache management and auto-tiering
US11068299B1 (en) Managing file system metadata using persistent cache
US9298397B2 (en) Nonvolatile storage thresholding for ultra-SSD, SSD, and HDD drive intermix
US9864688B1 (en) Discarding cached data before cache flush
EP3995968B1 (en) A storage server, a method of operating the same storage server and a data center including the same storage server
US11315028B2 (en) Method and apparatus for increasing the accuracy of predicting future IO operations on a storage system
WO2017019129A1 (en) Computing system cache
US20210011851A1 (en) Determining pre-fetching per storage unit on a storage system
US10366014B1 (en) Fast snap copy
KR20160127449A (en) Distributed file system based clustering using high speed semiconductor storage device
US11907541B2 (en) Adaptive read prefetch to reduce host latency and increase bandwidth for sequential read streams
US11436151B2 (en) Semi-sequential drive I/O performance
US10565068B1 (en) Primary array data dedup/compression using block backup statistics

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 16830953; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 16830953; Country of ref document: EP; Kind code of ref document: A1