US6957294B1 - Disk volume virtualization block-level caching - Google Patents

Disk volume virtualization block-level caching

Info

Publication number
US6957294B1
US6957294B1 US10/295,161 US29516102A
Authority
US
United States
Prior art keywords
volume
caching
disk
memory
cache
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related, expires
Application number
US10/295,161
Inventor
Michael J. Saunders
Vincent S. Yip
Joseph P. Neill
Richard Grzegorek
James R. Hunter
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Unisys Corp
Original Assignee
Unisys Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US10/295,161 priority Critical patent/US6957294B1/en
Application filed by Unisys Corp filed Critical Unisys Corp
Assigned to UNISYS CORPORATION reassignment UNISYS CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: GRZEGOREK, RICHARD, HUNTER, JAMES R., JR., NEILL, JOSEPH P., SAUNDERS, MICHAEL J., YIP, VINCENT S.
Application granted granted Critical
Publication of US6957294B1 publication Critical patent/US6957294B1/en
Assigned to UNISYS HOLDING CORPORATION, UNISYS CORPORATION reassignment UNISYS HOLDING CORPORATION RELEASE BY SECURED PARTY Assignors: CITIBANK, N.A.
Assigned to UNISYS CORPORATION, UNISYS HOLDING CORPORATION reassignment UNISYS CORPORATION RELEASE BY SECURED PARTY Assignors: CITIBANK, N.A.
Assigned to DEUTSCHE BANK TRUST COMPANY AMERICAS, AS COLLATERAL TRUSTEE reassignment DEUTSCHE BANK TRUST COMPANY AMERICAS, AS COLLATERAL TRUSTEE PATENT SECURITY AGREEMENT (PRIORITY LIEN) Assignors: UNISYS CORPORATION
Assigned to DEUTSCHE BANK TRUST COMPANY AMERICAS, AS COLLATERAL TRUSTEE reassignment DEUTSCHE BANK TRUST COMPANY AMERICAS, AS COLLATERAL TRUSTEE PATENT SECURITY AGREEMENT (JUNIOR LIEN) Assignors: UNISYS CORPORATION
Assigned to GENERAL ELECTRIC CAPITAL CORPORATION, AS AGENT reassignment GENERAL ELECTRIC CAPITAL CORPORATION, AS AGENT SECURITY AGREEMENT Assignors: UNISYS CORPORATION
Assigned to UNISYS CORPORATION reassignment UNISYS CORPORATION RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: DEUTSCHE BANK TRUST COMPANY
Assigned to UNISYS CORPORATION reassignment UNISYS CORPORATION RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: DEUTSCHE BANK TRUST COMPANY AMERICAS, AS COLLATERAL TRUSTEE
Assigned to WELLS FARGO BANK, NATIONAL ASSOCIATION, AS COLLATERAL TRUSTEE reassignment WELLS FARGO BANK, NATIONAL ASSOCIATION, AS COLLATERAL TRUSTEE PATENT SECURITY AGREEMENT Assignors: UNISYS CORPORATION
Assigned to UNISYS CORPORATION reassignment UNISYS CORPORATION RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: WELLS FARGO BANK, NATIONAL ASSOCIATION (SUCCESSOR TO GENERAL ELECTRIC CAPITAL CORPORATION)
Assigned to UNISYS CORPORATION reassignment UNISYS CORPORATION RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: WELLS FARGO BANK, NATIONAL ASSOCIATION
Adjusted expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0866Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F2003/0697Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers device management, e.g. handlers, drivers, I/O schedulers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/31Providing disk cache in a specific location of a storage system
    • G06F2212/311In host system
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems

Definitions

  • the invention relates generally to the field of managing storage assets. More particularly, the invention provides a system and method for caching based on volumes rather than disks or files.
  • Storage consolidation is one way in which the expanding needs of businesses are being addressed.
  • Storage consolidation means centralizing and sharing storage resources among a number of application servers.
  • Storage consolidation is often enabled by a Storage Area Network (SAN).
  • a SAN provides high-speed connections between servers and storage units so that many servers can share capacity residing on a single storage subsystem.
  • One drawback, however, is the cost: these storage subsystems are expensive.
  • Disk drives are notoriously slow because they are mechanical devices, i.e., the disk has to spin, and the read/write heads have to move across the disk. Latencies are enormous in comparison to speeds at which memory accesses can occur. To address these performance issues, caching is frequently employed.
  • Caching is a way of speeding up access to frequently used information for faster response.
  • a cache can be a reserved section of main memory or it can be an independent high-speed storage device.
  • a memory cache is a small block of high-speed memory located between the CPU and the main memory. By keeping as much frequently-accessed information as possible in high-speed memory, the processor avoids the need to access slower memory.
  • a disk cache is a mechanism for improving the time it takes to read from or write to a hard disk. The disk cache may be part of the hard disk or it can be a specified portion of memory.
  • Disk caching works under the same principle as memory caching, that is, the most recently accessed data from the disk (as well as a certain number of sectors adjacent thereto) is stored in a memory buffer. When an application needs to access data from the disk, it first checks the disk cache to see if the data is there. Disk caching can dramatically improve the performance of applications, because accessing a byte of data in memory can be thousands of times faster than accessing a byte of data on a hard disk. Disk caching is typically done on a physical disk or file basis.
  • Because disk caching is done on a physical disk or file basis, inefficient use of storage resources may result.
  • For example, one client may be assigned disk drive one for data storage and another client may be assigned disk drive two for data storage. If client one needs more space for storage, client one can't use part of client two's available disk space, because all of disk two is assigned to client two.
  • Another disk drive (e.g., disk three) must be assigned to client one, leading to potentially inefficient use of storage resources: client two might only be using 20% of its storage capacity and client one might only be using 50% of its storage capacity (disks one and three).
  • Caching systems are also not readily tunable. In order to change the caching characteristics, the system typically must be taken down and re-initialized. Hence, disk caching can be inflexible and inefficient.
  • It would be helpful if caching could be based on client needs rather than on physical devices. It would also be helpful if storage usage and caching characteristics could be dynamically tuned based on client data usage patterns. The present invention addresses these needs.
  • the present invention provides systems and methods, whereby a pool of global memory can be allocated among a set of client/servers so that each volume associated with a client/server is allocated a portion of cache memory in the global cache memory space.
  • the amount of memory to be used for caching the volume's input/output operations (I/Os), the cache type, the cache replacement policy, and the maximum cache I/O read size are individually settable, can be specified by volume and can be changed dynamically without stopping volume caching.
  • the cache page size can also be specified by volume, but caching must be inactive for initial setting or any changes. Volume caching I/O statistics may be collected on an individual volume basis.
  • FIG. 1 is a block diagram of a Storage Area Network
  • FIG. 2 is a block diagram of a Storage Area Network in accordance with one embodiment of the present invention.
  • FIG. 3 is a block diagram of a volume-based disk caching system in accordance with one embodiment of the invention.
  • FIG. 4 is a flow diagram of a method for volume-based caching in accordance with one embodiment of the invention.
  • FIG. 1 is a block diagram of an exemplary Storage Area Network (SAN).
  • Client/servers 10 a , 10 b , 10 c etc. are in communication with a storage controller 8 .
  • Storage controller 8 controls access of client/servers 10 a , 10 b , 10 c etc. to storage units 40 a , 40 b and 40 c .
  • storage controller 8 assigns one or more particular storage units to a client/server.
  • storage unit 40 a is assigned to client/server 10 a
  • storage unit 40 b is assigned to client/server 10 b and so on.
  • FIG. 2 illustrates a Storage Area Network in accordance with one embodiment of the present invention.
  • Client/servers 10 a , 10 b , 10 c etc. are in communication with a resource manager 14 .
  • Resource manager 14 controls access of client/servers 10 a , 10 b , 10 c , etc. to storage assets 40 .
  • Storage assets 40 include one or more storage units, which in FIG. 2 , are represented by storage devices 40 a , 40 b , 40 c , etc.
  • storage devices 40 a , 40 b , 40 c , etc. are partitioned into one or more volumes (represented in FIG. 2 as V 1 42 , V 2 44 , V 3 46 , V 4 48 and V 5 50 ).
  • Each client/server 10 a , 10 b , 10 c is associated with one or more volumes, V 1 42 , V 2 44 , V 3 46 , V 4 48 or V 5 50 , so that, for example, client/server 10 a may be assigned to volume 1 (V 1 ) 42 of storage unit 40 a and client/server 10 b may be assigned to volume 2 (V 2 ) 44 of the same storage unit 40 a and so on. As described further below, each volume is assigned a separate volume disk caching space (not shown in FIG. 2 ).
  • Attributes of the volume disk caching space include the following: the amount of memory to be used for caching the volume's input/output operations (I/Os), the cache page size (the size of each cache page for the volume, where the size of the cache page is a multiple of the hosting computing device's operating system page size), the cache type (which type of caching method (e.g., write through, write back, etc.) will be used for the volume), the cache replacement policy (e.g., Least Frequently Used versus Least Recently Used), and the maximum cache read I/O size (i.e., the largest read I/O operation that will be cached).
  • I/O statistics may be collected by volume. I/O statistics include read hits per volume, read misses per volume and the like.
  • FIG. 3 illustrates an environment in which a system for volume-based disk caching in accordance with one embodiment of the invention may be implemented.
  • Client/servers 10 a , 10 b , 10 c , etc. are communicatively coupled via network 11 to a computing device 12 on which is resident resource manager 14 . Coupled to computing device 12 are storage assets 40 .
  • the computing device on which resource manager 14 is resident may be any computing device 12 including, but not limited to: a personal computer (PC), an automated teller machine, a server computer, a hand-held or laptop device, a multi-processor system, a microprocessor-based system, programmable consumer electronics, a network PC, a minicomputer, a mainframe computer, and the like.
  • the computing device 12 may include resource manager 14 , an operating system 15 and a system memory 28 connected by a system bus 31 .
  • the computing device 12 may include a variety of computer readable media.
  • Computer readable media can be any available media that can be accessed by the computing device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
  • Computer readable media may comprise computer storage media and communication media.
  • Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data.
  • Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CDROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computing device 12 .
  • Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
  • In one embodiment, computing device 12 includes a number of resource managers 14 (e.g., 12 or fewer).
  • System memory 28 may be volatile (such as RAM), non-volatile (such as ROM, flash memory, etc.) or some combination of the two.
  • System memory 28 in one embodiment of the invention includes global cache memory space 30 .
  • Global cache memory space 30 includes free global cache memory space 32 (memory not assigned to any specific volume) and volume disk caching space 38 , represented in FIG. 3 as partitioned into individual volume disk caching spaces V 1 33 , V 2 34 , V 3 35 , V 4 36 and V 5 37 .
  • Free global cache memory space 32 is global cache memory space 30 that has not been assigned to volumes. Individual volume disk caching spaces are assigned to or associated with individual storage device volumes. In FIG. 3 , individual volume disk caching space V 1 33 has been assigned to volume 1 V 1 42 of storage device 40 a , volume disk caching space V 2 34 has been assigned to volume 2 V 2 44 of storage device 40 a , and so on.
  • global cache memory space 30 may include any number of individual volume disk caching spaces; the individual volume disk caching spaces represented in FIG. 3 by V 1 33 , V 2 34 , V 3 35 , V 4 36 and V 5 37 are merely exemplary.
  • individual volume disk caching spaces V 1 33 , V 2 34 , V 3 35 , V 4 36 and V 5 37 may be of the same size (occupy the same amount of memory) or may be of different size (occupy different amounts of memory).
  • Storage devices 40 may be any suitable kind of storage devices that can be partitioned into logical volumes.
  • the resource manager 14 manages storage assets 40 , (represented in FIG. 3 by storage devices 40 a , 40 b , 40 c , etc.), from an enterprise-wide perspective by virtualizing storage; that is, storage is not defined in terms of discrete devices or subsystems but as a collective pool of storage capacity.
  • the pool can be allocated among a number of client/servers 10 a , 10 b , 10 c , etc. according to rules established by, for example, a storage administrator. These rules (not shown) may be stored in a datastore.
  • storage assets 40 include disks 40 a , 40 b and 40 c .
  • Disk 40 a has been partitioned into volume 1 V 1 42 and volume 2 V 2 44 .
  • Disk 40 b has been partitioned into volume 3 V 3 46 and volume 4 V 4 48 and disk 40 c comprises volume 5 V 5 50 .
  • Disks 40 a , 40 b and 40 c may be partitioned into any suitable number of volumes.
  • a single volume may span any suitable number of disks; for example, volume 1 in another embodiment may include disks 40 a and 40 b , and so on.
  • the resource manager 14 is in the data path so that all traffic (generally in the form of input/output request packets 54 ) between the client/servers 10 a , 10 b , 10 c , etc. and the storage assets 40 , is routed through network 11 to resource manager 14 .
  • the resource manager 14 in one embodiment of the invention includes a volume disk cache driver 22 .
  • the resource manager 14 may also include a centralized management facility (CMF) 24 .
  • the volume disk cache driver 22 may be implemented as an upper level filter of a logical disk manager 18 (such as, but not limited to, the Logical Disk Manager (LDM) for Microsoft® Windows® 2000) so that input/output requests are intercepted by the volume disk cache driver 22 before the request is passed on to the logical disk manager 18 and from the logical disk manager 18 to the disk driver 20 .
  • the volume disk cache driver 22 determines if the data that is the subject of the I/O request packet 54 is present in the individual volume disk caching space V 1 33 , V 2 34 , V 3 35 , V 4 36 or V 5 37 associated with the volume specified in the I/O request packet 54 .
  • If the data is present, the data is retrieved from the individual volume disk caching space V 1 33 , V 2 34 , V 3 35 , V 4 36 or V 5 37 and the I/O request is never passed on to the logical disk manager 18 and disk driver 20 .
  • If the data is not present, the I/O request is passed on to the logical disk manager 18 and disk driver 20 and the I/O operation is performed.
  • Upon return, disk driver 20 and logical disk manager 18 return control to volume disk cache driver 22 .
  • The volume disk cache driver 22 then stores the retrieved data in the individual volume disk caching space V 1 33 , V 2 34 , V 3 35 , V 4 36 or V 5 37 associated with the volume from which the data was retrieved.
  • the logical disk manager 18 may, for example, be a subsystem of Windows® 2000 that consists of user mode and device driver mode components and typically manages storage devices such as storage devices 40 a , 40 b , 40 c , etc.
  • By designing the volume disk cache driver 22 at this level (i.e., as an upper level filter of the logical disk manager 18 ), all the device-specific characteristics (whether the device is a simple volume, spanned volume, RAID volume, striping-enabled volume, etc.) are managed by the logical disk manager 18 and are transparent to the volume disk cache driver 22 .
  • the volume disk cache driver 22 performs the functions of the logical disk manager 18 and disk driver 20 in addition to volume disk caching.
  • the volume disk cache driver 22 is implemented in accordance with the standard Microsoft® Windows® 2000 Driver Model.
  • the Microsoft® Windows® 2000 model provides a framework for device drivers that operate in Microsoft® Windows® 2000 and later MICROSOFT operating systems.
  • the volume disk cache driver 22 can be offered as a stand-alone product offering enhanced disk caching capabilities on any MICROSOFT Windows® server or on any server that adheres to the Microsoft® Windows® 2000 model.
  • the volume disk cache driver 22 is designed to improve the data access speed of client/servers 10 a , 10 b , 10 c , etc. across the network 11 by implementing a volume-based cache, thereby providing client/servers 10 a , 10 b , 10 c , etc. high-speed access to frequently used data.
  • volume 1 V 1 42 may be associated with individual disk caching space (i.e., memory) V 1 33 , volume 2 V 2 44 with individual disk caching space (i.e., memory) V 2 34 , volume 3 V 3 46 with individual disk caching space (i.e., memory) V 3 35 , volume 4 V 4 48 with individual disk caching space (i.e., memory) V 4 36 , volume 5 V 5 50 with individual disk caching space (i.e., memory) V 5 37 and so on.
  • the individual disk caching spaces 38 are allocated in the global cache memory 30 .
  • the volume disk cache driver 22 enables the client/servers 10 a , 10 b , 10 c , etc. to read data from and write data to system memory 28 instead of from physical disks 40 a , 40 b , 40 c , etc.
  • the CMF 24 provides an interface between a user and the resource manager 14 .
  • user commands input at console 52 may be translated by the CMF 24 into commands that initiate cache functions.
  • the CMF 24 may control allocation and de-allocation of global cache memory space 30 and individual volume disk cache spaces V 1 33 , V 2 34 , V 3 35 , V 4 36 and V 5 37 through setting or changing the value of attributes.
  • the CMF 24 in one embodiment can increment and decrement the size of both global cache memory space 30 and volume disk cache spaces V 1 33 , V 2 34 , V 3 35 , V 4 36 and V 5 37 at any time that the resource manager 14 is running, without interruption of disk caching.
  • changing the size of global cache memory space 30 affects the amount of physical memory available to the operating system 15 and any applications running on the system.
  • changing the size of an individual volume disk cache space such as V 1 33 , V 2 34 , V 3 35 , V 4 36 or V 5 37 , for example, may change the distribution of the currently allocated global cache memory space 30 .
  • the CMF 24 in one embodiment of the invention will validate any parameters representing values of volume-based attributes of the individual volume disk caches that it passes to volume disk cache driver 22 .
  • the CMF 24 will also retain all global cache memory space 30 and individual volume disk cache space V 1 33 , V 2 34 , V 3 35 , V 4 36 and V 5 37 attributes across CMF 24 instantiations (e.g., across computing device 12 reboots).
  • the CMF 24 may also reissue the proper commands to the volume disk cache driver 22 to reconfigure the memory space 30 and individual disk volume caching spaces V 1 33 , V 2 34 , V 3 35 , V 4 36 and V 5 37 .
  • Attributes of the individual volume disk caching spaces V 1 33 , V 2 34 , V 3 35 , V 4 36 and V 5 37 include the following: the amount of memory to be used for caching the volume's input/output operations (I/Os), the cache page size (the size of each cache page for the volume, where the size of the cache page is a multiple of the hosting computing device's operating system page size), the cache type (which type of caching method (e.g., write through, write back, etc.) will be used for the volume), the cache replacement policy (e.g., Least Frequently Used, Least Recently Used, etc.) and the maximum cache read I/O size (i.e., the largest read I/O operation that will be cached).
  • I/O statistics may be collected by volume. I/O statistics include read hits per volume, read misses per volume and the like.
  • FIG. 4 is a flow diagram of a method for setting up volume-based caching in accordance with one embodiment of the invention.
  • storage assets 40 are partitioned into volumes, (e.g., physical device 40 a is partitioned into logical volume 1 V 1 42 and logical volume 2 V 2 44 , physical device 40 b is partitioned into logical volume 3 V 3 46 and logical volume 4 V 4 48 and physical device 40 c comprises logical volume 5 V 5 50 ).
  • a user-defined subset of system memory 28 is reserved for volume disk caching.
  • This space is referred to as global cache memory space 30 .
  • In one embodiment, this subset of system memory 28 is reserved for volume disk caching when the system is initialized. Because global cache memory space is obtained by reserving a portion of the computing device 12's system memory 28 , less memory is available for operating system 15 (e.g., the WINDOWS® operating system) or for application use.
  • a system administrator typically assigns the actual amount of physical memory allocated as global cache memory space 30 when the volume disk cache driver 22 is installed. In one embodiment, the amount of memory allocated to global cache memory space 30 can be increased or decreased (e.g., by a system administrator using console 52 to interface with CMF 24 ) as needs dictate.
  • Portions of the global cache memory space 30 may be allocated to individual volumes: volume 1 V 1 42 , volume 2 V 2 44 , volume 3 V 3 46 , volume 4 V 4 48 , volume 5 V 5 50 , etc.
  • Each cached volume can have a different amount of cache memory allocated for its use. The amount of cache memory space assigned to a particular volume can be changed without interrupting caching.
  • At step 406 one or more sets of parameters or attribute values that determine the volume disk caching characteristics for each particular volume are received. Attributes of the volume disk caching space may include any or all of the attributes described above with respect to FIGS. 2 and 3 . In addition, I/O statistics may be collected by volume. I/O statistics include read hits per volume, read misses per volume and the like. In one embodiment of the invention, the volume-based attributes are input from console 52 , received by the CMF 24 , validated and formatted into commands sent to the volume disk cache driver 22 .
  • the attribute values for each volume are independent of the attribute values for all other volumes, so that, for example, 10 Gigabytes of memory may be reserved for volume 1 V 1 42 caching and 100 Gigabytes of memory may be reserved for volume 2 V 2 44 caching.
  • the page size for V 1 33 may be set at 4K and the page size for V 2 34 may be set at 16K or 64K. In one embodiment of the invention, page size is a multiple of 4K.
  • caching is activated.
  • an I/O request packet 54 is received (intercepted).
  • I/O request packet 54 directed to one of storage devices 40 a , 40 b , 40 c , etc. is received at resource manager 14 from client/server 10 a , 10 b , or 10 c , etc. via the network 11 .
  • Each I/O request packet 54 includes, in one embodiment, parameters including: volume identifier, request size and start logical sector address.
  • Volume identifier in one embodiment of the invention is a unique identifier used to distinguish between volumes.
  • volume identifier may refer to which volume, volume 1 V 1 42 , volume 2 V 2 44 , volume 3 V 3 46 , volume 4 V 4 48 or volume 5 V 5 50 , the request is directed.
  • Request size refers to the size of the data requested and start logical sector address refers to the location on the storage device where the requested information can be found. The start logical sector address may also determine where data in the volume disk cache space is to be read.
  • Based on the parameters in the I/O request packet 54 , the volume disk cache driver 22 knows where to read, store and update the data, both in the individual volume disk cache space V 1 33 , V 2 34 , V 3 35 , V 4 36 or V 5 37 and on the logical volume, volume 1 V 1 42 , volume 2 V 2 44 , volume 3 V 3 46 , volume 4 V 4 48 or volume 5 V 5 50 .
  • the volume disk cache space 38 is filled. With each read request, the volume disk cache driver 22 checks to see if the region being read has already been stored in the specified individual volume disk cache space. If all the data is in the specified individual volume disk cache space, the data is moved into a buffer associated with the input I/O request packet 54 and the user request is completed.
  • the resource manager 14 directs the I/O request packet 54 to the volume disk cache driver 22 .
  • If the requested data is not in the individual volume disk caching space V 1 33 , V 2 34 , V 3 35 , V 4 36 or V 5 37 assigned to the volume specified in the I/O request packet 54 , the data will be retrieved from the specified volume at step 412 and returned to the requestor at step 414 .
  • To retrieve the data, a read command is generated and issued to the volume.
  • The read preferably starts with the first page needed to fetch data into the specified individual volume disk caching space V 1 33 , V 2 34 , V 3 35 , V 4 36 or V 5 37 and will proceed to the last page needed to satisfy the read request.
  • Within the span of such a read, any succeeding pages that are already in the cache preferably are treated as not being present (marked as invalid). This may be more efficient, as one large read is likely to require less time than the multiple smaller reads needed if all non-contiguous page hits for a given user request were honored.
  • the read request generated by the volume disk cache driver 22 is built on cache page boundaries so that a disk cache page is the smallest unit of accessible cached data for a particular volume.
  • the starting address will be rounded down to a modulo-page-size boundary and the length rounded up to a modulo-page-size, such that a minimum of the entire requested region is read, preferably providing a baseline look-ahead capability.
  • Other predictive methods of read look-ahead may be employed, such as gathering statistics on patterns of volume access in order to derive heuristics for determining the size of the read region.
  • If the I/O request packet 54 includes a read or write request and the data is not in the individual volume disk caching space V 1 33 , V 2 34 , V 3 35 , V 4 36 or V 5 37 , the missing data will be stored to the specified individual volume disk caching space.
  • A write request will update the data previously stored in the specified individual volume disk cache space V 1 33 , V 2 34 , V 3 35 , V 4 36 or V 5 37 as well as on the storage media volume, volume 1 V 1 42 , volume 2 V 2 44 , volume 3 V 3 46 , volume 4 V 4 48 or volume 5 V 5 50 , allowing subsequent reads of the altered data to again benefit from having the data in volume disk caching space 38 .
  • In one embodiment, the volume disk cache driver 22 maintains a history of each time each cache page is accessed and, when pages must be purged, the oldest or least-recently-used pages are purged first; a sketch of this bookkeeping follows this list. In another embodiment, the volume disk cache driver 22 may apply heuristics based on statistics gathered on volume access to decide which pages to purge.
  • the methods and system described above may be embodied in the form of program code (i.e., instructions) stored on a computer-readable medium, such as a floppy diskette, CD-ROM, DVD-ROM, DVD-RAM, hard disk drive, or any other machine-readable storage medium, wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention.
  • the present invention may also be embodied in the form of program code that is transmitted over some transmission medium, such as over electrical wiring or cabling, through fiber optics, over a network, including the Internet or an intranet, or via any other form of transmission, wherein, when the program code is received and loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention.
  • When implemented on a general-purpose processor, the program code combines with the processor to provide a unique apparatus that operates analogously to specific logic circuits.
  • the program code may be implemented in a high level programming language, such as, for example, C, C++, or Java. Alternatively, the program code may be implemented in assembly or machine language. In any case, the language may be a compiled or an interpreted language.
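The least-recently-used purging described in the bullets above can be pictured with a small bookkeeping structure. The C++ sketch below is illustrative only: the patent publishes no source code, and the names used here (LruPageIndex, touch, purgeOldest) are hypothetical stand-ins.

    #include <cstdint>
    #include <list>
    #include <unordered_map>

    // Hypothetical LRU bookkeeping for cache pages: every access moves a page
    // to the front of a recency list; purging takes pages from the back.
    class LruPageIndex {
    public:
        void touch(std::uint64_t pageNo) {            // record an access (hit or fill)
            auto it = pos_.find(pageNo);
            if (it != pos_.end()) order_.erase(it->second);
            order_.push_front(pageNo);
            pos_[pageNo] = order_.begin();
        }
        bool purgeOldest(std::uint64_t& pageNo) {     // evict the least-recently-used page
            if (order_.empty()) return false;
            pageNo = order_.back();
            order_.pop_back();
            pos_.erase(pageNo);
            return true;
        }
    private:
        std::list<std::uint64_t> order_;              // front = most recently used
        std::unordered_map<std::uint64_t,
                           std::list<std::uint64_t>::iterator> pos_;
    };

A driver organized this way would call touch on each page hit or fill and purgeOldest whenever a volume's caching space is full; as the text notes, statistics-driven heuristics could replace this policy entirely.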

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The present invention provides systems and methods for allocating a pool of global memory among a set of client/servers so that storage volumes associated with a plurality of client/servers are each allocated a portion of the pool of global memory for caching of data from that volume. The amount of memory to be used for caching the volume's input/output operations (I/Os), the page size, the cache type, the cache replacement policy and the maximum cache read I/O size can be specified by volume. The amount of memory to be used for caching the volume's input/output operations, the cache type, the cache replacement policy and the maximum cache read I/O size can be changed dynamically by changing volume-based attributes.

Description

FIELD OF THE INVENTION
The invention relates generally to the field of managing storage assets. More particularly, the invention provides a system and method for caching based on volumes rather than disks or files.
BACKGROUND
Information is critical to the success of nearly every kind of business imaginable. Until recently, direct-attached storage typically provided capacity to applications running on a server. Typically, this meant one or more disk drives connected via a Small Computer System Interface (SCSI) located inside the server or connected externally to the server. Today, businesses are finding that these legacy storage architectures no longer meet their needs. In addition to a dramatic increase in the need for capacity, today's businesses may require data sharing, high performance, high availability and cost control.
Storage consolidation is one way in which the expanding needs of businesses are being addressed. Storage consolidation means centralizing and sharing storage resources among a number of application servers. Storage consolidation is often enabled by a Storage Area Network (SAN). A SAN provides high-speed connections between servers and storage units so that many servers can share capacity residing on a single storage subsystem. One drawback, however, is the cost: these storage subsystems are expensive.
Another approach to solving capacity problems is to improve performance. Disk drives are notoriously slow because they are mechanical devices, i.e., the disk has to spin, and the read/write heads have to move across the disk. Latencies are enormous in comparison to speeds at which memory accesses can occur. To address these performance issues, caching is frequently employed.
Caching is a way of speeding up access to frequently used information for faster response. A cache can be a reserved section of main memory or it can be an independent high-speed storage device.
A memory cache is a small block of high-speed memory located between the CPU and the main memory. By keeping as much frequently-accessed information as possible in high-speed memory, the processor avoids the need to access slower memory. A disk cache is a mechanism for improving the time it takes to read from or write to a hard disk. The disk cache may be part of the hard disk or it can be a specified portion of memory.
Disk caching works under the same principle as memory caching, that is, the most recently accessed data from the disk (as well as a certain number of sectors adjacent thereto) is stored in a memory buffer. When an application needs to access data from the disk, it first checks the disk cache to see if the data is there. Disk caching can dramatically improve the performance of applications, because accessing a byte of data in memory can be thousands of times faster than accessing a byte of data on a hard disk. Disk caching is typically done on a physical disk or file basis.
Typically, in a storage subsystem either all the disks in a storage subsystem are cached or they are not, and all are cached in the same way. Hence, if one client does not require caching, but another client does, client one's disk will have to be cached so that client two's disk can be cached. As another example, perhaps each of 100 clients is assigned 1/100 of the available memory space for caching but client one's data is accessed in much larger segments than is client two's data. Regardless, both clients' data will be accessed in the same way (typically by accessing a certain number of sectors or clusters).
Because disk caching is done on a physical disk or file basis, inefficient use of storage resources may result. For example, one client may be assigned disk drive one for data storage and another client may be assigned disk drive two for data storage. If client one needs more space for storage, client one can't use part of client two's available disk space, because all of disk two is assigned to client two. Another disk drive (e.g., disk three) must be assigned to client one, leading to potentially inefficient use of storage resources: client two might only be using 20% of its storage capacity and client one might only be using 50% of its storage capacity (disks one and three).
Finally, caching systems are not readily tunable. In order to change the caching characteristics, the system typically must be taken down and re-initialized. Hence, disk caching can be inflexible and inefficient.
It would be helpful if caching could be based on client needs rather than on physical devices. It would also be helpful if storage usage and caching characteristics could be dynamically tuned based on client data usage patterns. The present invention addresses these needs.
SUMMARY OF THE INVENTION
The present invention provides systems and methods, whereby a pool of global memory can be allocated among a set of client/servers so that each volume associated with a client/server is allocated a portion of cache memory in the global cache memory space. The amount of memory to be used for caching the volume's input/output operations (I/Os), the cache type, the cache replacement policy, and the maximum cache I/O read size are individually settable, can be specified by volume and can be changed dynamically without stopping volume caching. The cache page size can also be specified by volume, but caching must be inactive for initial setting or any changes. Volume caching I/O statistics may be collected on an individual volume basis.
BRIEF DESCRIPTION OF THE DRAWINGS
The foregoing summary, as well as the following detailed description of embodiments of the invention, is better understood when read in conjunction with the appended drawings. For the purpose of illustrating the invention, there is shown in the drawings exemplary constructions of the invention; however, the invention is not limited to the specific methods and instrumentalities disclosed. In the drawings:
FIG. 1 is a block diagram of a Storage Area Network;
FIG. 2 is a block diagram of a Storage Area Network in accordance with one embodiment of the present invention;
FIG. 3 is a block diagram of a volume-based disk caching system in accordance with one embodiment of the invention; and
FIG. 4 is a flow diagram of a method for volume-based caching in accordance with one embodiment of the invention.
DETAILED DESCRIPTION OF THE INVENTION
FIG. 1 is a block diagram of an exemplary Storage Area Network (SAN). Client/servers 10 a, 10 b, 10 c etc. are in communication with a storage controller 8. Storage controller 8 controls access of client/servers 10 a, 10 b, 10 c etc. to storage units 40 a, 40 b and 40 c. Typically, storage controller 8 assigns one or more particular storage units to a client/server. Hence, for example, in FIG. 1 perhaps storage unit 40 a is assigned to client/server 10 a, storage unit 40 b is assigned to client/server 10 b and so on.
FIG. 2 illustrates a Storage Area Network in accordance with one embodiment of the present invention. Client/servers 10 a, 10 b, 10 c etc. are in communication with a resource manager 14. Resource manager 14 controls access of client/servers 10 a, 10 b, 10 c, etc. to storage assets 40. Storage assets 40 include one or more storage units, which, in FIG. 2, are represented by storage devices 40 a, 40 b, 40 c, etc. In accordance with the present invention, however, storage devices 40 a, 40 b, 40 c, etc. are partitioned into one or more volumes (represented in FIG. 2 as V1 42, V2 44, V3 46, V4 48 and V5 50). Each client/server 10 a, 10 b, 10 c is associated with one or more volumes, V1 42, V2 44, V3 46, V4 48 or V5 50, so that, for example, client/server 10 a may be assigned to volume 1 (V1) 42 of storage unit 40 a and client/server 10 b may be assigned to volume 2 (V2) 44 of the same storage unit 40 a and so on. As described further below, each volume is assigned a separate volume disk caching space (not shown in FIG. 2). Because client/server 10 a is assigned to a separate volume than is client/server 10 b and each volume has its own volume disk caching space, even though client/servers 10 a and 10 b share the same physical disk 40 a, client/server 10 a is not able to access client/server 10 b's data and vice versa.

Attributes of the volume disk caching space include the following: the amount of memory to be used for caching the volume's input/output operations (I/Os), the cache page size (the size of each cache page for the volume, where the size of the cache page is a multiple of the hosting computing device's operating system page size), the cache type (which type of caching method (e.g., write through, write back, etc.) will be used for the volume), the cache replacement policy (e.g., Least Frequently Used versus Least Recently Used), and the maximum cache read I/O size (i.e., the largest read I/O operation that will be cached). The amount of memory to be used for caching the volume's input/output operations (I/Os) and the maximum cache read I/O size can be changed dynamically for a volume without stopping caching. I/O statistics may be collected by volume. I/O statistics include read hits per volume, read misses per volume and the like.
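To make the attribute list above concrete, the following C++ sketch models it as a per-volume record. This is a reading of the text under assumptions, not the patent's own code: the type and field names are invented, and the rule noted in the comments (everything but the page size changeable while caching runs) follows the summary above.

    #include <cstddef>

    // Hypothetical per-volume attribute record; names are illustrative.
    enum class CacheType { WriteThrough, WriteBack };
    enum class ReplacementPolicy { LeastRecentlyUsed, LeastFrequentlyUsed };

    struct VolumeCacheAttributes {
        std::size_t cacheBytes;          // memory used for caching this volume's I/Os
        std::size_t pageBytes;           // cache page size; per the text, settable
                                         // only while caching for the volume is inactive
        CacheType type;                  // caching method used for the volume
        ReplacementPolicy policy;        // which pages to evict first
        std::size_t maxCachedReadBytes;  // largest read I/O that will be cached
    };

    // The cache page size must be a multiple of the hosting computing
    // device's operating system page size.
    bool validPageSize(std::size_t pageBytes, std::size_t osPageBytes) {
        return osPageBytes > 0 && pageBytes > 0 && pageBytes % osPageBytes == 0;
    }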
FIG. 3 illustrates an environment in which a system for volume-based disk caching in accordance with one embodiment of the invention may be implemented. Client/servers 10 a, 10 b, 10 c, etc. are communicatively coupled via network 11 to a computing device 12 on which is resident resource manager 14. Coupled to computing device 12 are storage assets 40. The computing device on which resource manager 14 is resident may be any computing device 12 including, but not limited to: a personal computer (PC), an automated teller machine, a server computer, a hand-held or laptop device, a multi-processor system, a microprocessor-based system, programmable consumer electronics, a network PC, a minicomputer, a mainframe computer, and the like. The computing device 12 may include resource manager 14, an operating system 15 and a system memory 28 connected by a system bus 31. The computing device 12 may include a variety of computer readable media. Computer readable media can be any available media that can be accessed by the computing device 12 and includes both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CDROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computing device 12. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. In one embodiment, computing device 12 includes a number of resource managers 14 (e.g., 12 or fewer).
System memory 28 may be volatile (such as RAM), non-volatile (such as ROM, flash memory, etc.) or some combination of the two. System memory 28 in one embodiment of the invention includes global cache memory space 30. Global cache memory space 30 includes free global cache memory space 32 (memory not assigned to any specific volume) and volume disk caching space 38, represented in FIG. 3 as partitioned into individual volume disk caching spaces V1 33, V2 34, V3 35, V4 36 and V5 37. Free global cache memory space 32 is global cache memory space 30 that has not been assigned to volumes. Individual volume disk caching spaces are assigned to or associated with individual storage device volumes. In FIG. 3, individual volume disk caching space V1 33 has been assigned to volume 1 V1 42 of storage device 40 a, volume disk caching space V2 34 has been assigned to volume 2 V2 44 of storage device 40 a and so on. It will be understood that global cache memory space 30 may include any number of individual volume disk caching spaces; the individual volume disk caching spaces represented in FIG. 3 by V1 33, V2 34, V3 35, V4 36 and V5 37 are merely exemplary. Similarly, individual volume disk caching spaces V1 33, V2 34, V3 35, V4 36 and V5 37 may be of the same size (occupy the same amount of memory) or may be of different sizes (occupy different amounts of memory).
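One way to picture the relationship between global cache memory space 30, free space 32 and the per-volume spaces 33-37 is as a fixed pool carved into per-volume shares. The C++ sketch below is a hypothetical illustration; the class and method names are invented, and the actual allocator is not described at this level of detail.

    #include <cstddef>
    #include <map>
    #include <string>

    // Minimal sketch of the global cache memory pool: a fixed reservation of
    // system memory, part of which is assigned to individual volumes and the
    // remainder of which stays free.
    class GlobalCachePool {
    public:
        explicit GlobalCachePool(std::size_t totalBytes)
            : total_(totalBytes), used_(0) {}

        // Set (or grow/shrink) a volume's share; fails if the pool would overflow.
        bool setVolumeShare(const std::string& volumeId, std::size_t bytes) {
            std::size_t current = shares_.count(volumeId) ? shares_[volumeId] : 0;
            std::size_t proposed = used_ - current + bytes;
            if (proposed > total_) return false;
            used_ = proposed;
            shares_[volumeId] = bytes;
            return true;
        }

        std::size_t freeBytes() const { return total_ - used_; } // unassigned space

    private:
        std::size_t total_, used_;
        std::map<std::string, std::size_t> shares_;  // per-volume disk caching spaces
    };

Growing or shrinking a share through something like setVolumeShare mirrors the dynamic resizing the text later attributes to the CMF 24, which can occur while caching continues to run.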
Storage devices 40 may be any suitable kind of storage devices that can be partitioned into logical volumes. The resource manager 14 manages storage assets 40 (represented in FIG. 3 by storage devices 40 a, 40 b, 40 c, etc.) from an enterprise-wide perspective by virtualizing storage; that is, storage is not defined in terms of discrete devices or subsystems but as a collective pool of storage capacity. The pool can be allocated among a number of client/servers 10 a, 10 b, 10 c, etc. according to rules established by, for example, a storage administrator. These rules (not shown) may be stored in a datastore. For example, in FIG. 3, storage assets 40 include disks 40 a, 40 b and 40 c. Disk 40 a has been partitioned into volume 1 V1 42 and volume 2 V2 44. Disk 40 b has been partitioned into volume 3 V3 46 and volume 4 V4 48 and disk 40 c comprises volume 5 V5 50. It should be understood that the way disks 40 a, 40 b and 40 c are partitioned in FIG. 3 is merely exemplary. Disks 40 a, 40 b, 40 c, etc. may be partitioned into any suitable number of volumes. Similarly, a single volume may span any suitable number of disks; for example, volume 1 in another embodiment may include disks 40 a and 40 b, and so on.
The resource manager 14 is in the data path so that all traffic (generally in the form of input/output request packets 54) between the client/servers 10 a, 10 b, 10 c, etc. and the storage assets 40 is routed through network 11 to resource manager 14. The resource manager 14 in one embodiment of the invention includes a volume disk cache driver 22. The resource manager 14 may also include a centralized management facility (CMF) 24.
The volume disk cache driver 22 may be implemented as an upper level filter of a logical disk manager 18 (such as, but not limited to, the Logical Disk Manager (LDM) for Microsoft® Windows® 2000) so that input/output requests are intercepted by the volume disk cache driver 22 before the request is passed on to the logical disk manager 18 and from the logical disk manager 18 to the disk driver 20. The volume disk cache driver 22 determines if the data that is the subject of the I/O request packet 54 is present in the individual volume disk caching space V1 33, V2 34, V3 35, V4 36 or V5 37 associated with the volume specified in the I/O request packet 54. If the data is present, the data is retrieved from the individual volume disk caching space V1 33, V2 34, V3 35, V4 36 or V5 37 and the I/O request is never passed on to the logical disk manager 18 and disk driver 20. If the data is not present in the individual volume disk caching space V1 33, V2 34, V3 35, V4 36 or V5 37 associated with the volume specified in the I/O request packet 54, the I/O request is passed on to the logical disk manager 18 and disk driver 20 and the I/O operation is performed. Upon return, however, disk driver 20 and logical disk manager 18 return control to volume disk cache driver 22. The volume disk cache driver 22 then stores the retrieved data in the individual volume disk caching space V1 33, V2 34, V3 35, V4 36 or V5 37 associated with the volume from which the data was retrieved.
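The intercept-then-forward flow just described can be summarized in simplified C++. This is a synchronous sketch with invented names (IoRequest, VolumeCache, readThroughCache, writeThroughCache); a real Windows 2000-era filter driver would pass I/O request packets (IRPs) down the driver stack rather than make direct function calls.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct IoRequest {                 // stand-in for an I/O request packet 54
        int volumeId;                  // which logical volume the request targets
        std::uint64_t startSector;     // start logical sector address
        std::size_t bytes;             // request size
    };

    class VolumeCache {                // one instance per volume disk caching space
    public:
        virtual bool lookup(const IoRequest& r, std::vector<std::uint8_t>& out) = 0;
        virtual void store(const IoRequest& r, const std::vector<std::uint8_t>& d) = 0;
        virtual ~VolumeCache() = default;
    };

    // Read path: a hit is served from the volume's caching space and never
    // reaches the logical disk manager or disk driver; a miss is forwarded
    // and the caching space is populated when the lower drivers return.
    std::vector<std::uint8_t> readThroughCache(
            VolumeCache& cache, const IoRequest& req,
            std::vector<std::uint8_t> (*lowerStackRead)(const IoRequest&)) {
        std::vector<std::uint8_t> data;
        if (cache.lookup(req, data))
            return data;
        data = lowerStackRead(req);
        cache.store(req, data);
        return data;
    }

    // Write path (write-through flavor): update the cached copy and the
    // volume, so later reads of the altered data still hit the cache.
    void writeThroughCache(
            VolumeCache& cache, const IoRequest& req,
            const std::vector<std::uint8_t>& data,
            void (*lowerStackWrite)(const IoRequest&, const std::vector<std::uint8_t>&)) {
        cache.store(req, data);
        lowerStackWrite(req, data);
    }

The write-through variant shown matches one of the cache types named earlier; a write-back variant would defer the lower-stack write.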
The logical disk manager 18 may, for example, be a subsystem of Windows® 2000 that consists of user mode and device driver mode components and typically manages storage devices such as storage devices 40 a, 40 b, 40 c, etc.
By designing the volume disk cache driver 22 at this level, (i.e., as an upper level filter of the logical disk manager 18), all the device-specific characteristics (whether the device is a simple-volume, spanned volume, RAID volume, striping-enabled volume, etc.) are managed by the logical disk manager 18 and are transparent to the volume disk cache driver 22. In another embodiment, the volume disk cache driver 22 performs the functions of the logical disk manager 18 and disk driver 20 in addition to volume disk caching.
In one embodiment of the invention, the volume disk cache driver 22 is implemented in accordance with the standard Microsoft® Windows® 2000 Driver Model. The Microsoft® Windows® 2000 model provides a framework for device drivers that operate in Microsoft® Windows® 2000 and later MICROSOFT operating systems. By implementing volume disk cache driver 22 in accordance with the Microsoft® Windows® 2000 model, the volume disk cache driver 22 can be offered as a stand-alone product offering enhanced disk caching capabilities on any MICROSOFT Windows® server or on any server that adheres to the Microsoft® Windows® 2000 model.
The volume disk cache driver 22 is designed to improve the data access speed of client/servers 10 a, 10 b, 10 c, etc. across the network 11 by implementing a volume-based cache, thereby providing client/servers 10 a, 10 b, 10 c, etc. high-speed access to frequently used data. Once data is read from a logical volume, the data is stored in cache memory. Subsequent access to that same data will retrieve it from the cache memory instead of requiring a direct access to the volume on which the data is stored. As described above, each individual formatted volume, volume 1 V1 42, volume 2 V2 44, volume 3 V3 46, volume 4 V4 48, volume 5 V5 50, etc., enabled to the resource manager 14 may have its own cache memory: for example, volume 1 V1 42 may be associated with individual disk caching space (i.e., memory) V1 33, volume 2 V2 44 with individual disk caching space (i.e., memory) V2 34, volume 3 V3 46 with individual disk caching space (i.e., memory) V3 35, volume 4 V4 48 with individual disk caching space (i.e., memory) V4 36, volume 5 V5 50 with individual disk caching space (i.e., memory) V5 37 and so on. The individual disk caching spaces 38 are allocated in the global cache memory 30. The volume disk cache driver 22 enables the client/servers 10 a, 10 b, 10 c, etc. to read data from and write data to system memory 28 instead of physical disks 40 a, 40 b, 40 c, etc.
In one embodiment, the CMF 24 provides an interface between a user and the resource manager 14. In one embodiment, user commands input at console 52 may be translated by the CMF 24 into commands that initiate cache functions. The CMF 24 may control allocation and de-allocation of global cache memory space 30 and individual volume disk cache spaces V1 33, V2 34, V3 35, V4 36 and V5 37 through setting or changing the value of attributes. The CMF 24 in one embodiment can increment and decrement the size of both global cache memory space 30 and volume disk cache spaces V1 33, V2 34, V3 35, V4 36 and V5 37 at any time that the resource manager 14 is running, without interruption of disk caching. It will be understood that changing the size of global cache memory space 30 affects the amount of physical memory available to the operating system 15 and any applications running on the system. Similarly, changing the size of an individual volume disk cache space, such as V1 33, V2 34, V3 35, V4 36 or V5 37, for example, may change the distribution of the currently allocated global cache memory space 30.
The CMF 24 in one embodiment of the invention will validate any parameters representing values of volume-based attributes of the individual volume disk caches that it passes to volume disk cache driver 22. In one embodiment, the CMF 24 will also retain all global cache memory space 30 and individual volume disk cache space V1 33, V2 34, V3 35, V4 36 and V5 37 attributes across CMF 24 instantiations (e.g., across computing device 12 reboots). The CMF 24 may also reissue the proper commands to the volume disk cache driver 22 to reconfigure the memory space 30 and individual volume disk caching spaces V1 33, V2 34, V3 35, V4 36 and V5 37.
Attributes of the individual volume disk caching spaces V1 33, V2 34, V3 35, V4 36 and V5 37 include the following: the amount of memory to be used for caching the volume's input/output operations (I/Os), the cache page size (the size of each cache page for the volume, where the size of the cache page is a multiple of the hosting computing device's operating system page size), the cache type (which type of caching method (e.g., write through, write back, etc.) will be used for the volume), the cache replacement policy (e.g., Least Frequently Used, Least Recently Used, etc.) and the maximum cache read I/O size (i.e., the largest read I/O operation that will be cached). The amount of memory to be used for caching the volume's input/output operations (I/Os) and the maximum cache read I/O size can be changed dynamically for a volume without stopping caching. I/O statistics may be collected by volume. I/O statistics include read hits per volume, read misses per volume and the like.
FIG. 4 is a flow diagram of a method for setting up volume-based caching in accordance with one embodiment of the invention. Referring now concurrently to FIGS. 3 and 4, at step 402 storage assets 40 are partitioned into volumes, (e.g., physical device 40 a is partitioned into logical volume 1 V1 42 and logical volume 2 V2 44, physical device 40 b is partitioned into logical volume 3 V3 46 and logical volume 4 V4 48 and physical device 40 c comprises logical volume 5 V5 50).
At step 404 a user-defined subset of system memory 28 is reserved for volume disk caching. This space is referred to as global cache memory space 30. In one embodiment, this subset of system memory 28 is reserved for volume disk caching when the system is initialized. Because global cache memory space is obtained by reserving a portion of the computing device 12's system memory 28, less memory is available for the operating system 15 (e.g., the WINDOWS® operating system) or for application use. A system administrator typically assigns the actual amount of physical memory allocated as global cache memory space 30 when the volume disk cache driver 22 is installed. In one embodiment, the amount of memory allocated to global cache memory space 30 can be increased or decreased (e.g., by a system administrator using console 52 to interface with the CMF 24) as needs dictate.
Once the global cache memory space 30 has been allocated, portions of the global cache memory space 30 may be allocated to individual volumes, volume 1 V1 42, volume 2 V2 44, volume 3 V3 46, volume 4 V4 48, volume 5 V5 50, etc. Each cached volume can have a different amount of cache memory allocated for its use. The amount of cache memory space assigned to a particular volume can be changed without interrupting caching.
At step 406, one or more sets of parameters or attribute values that determine the volume disk caching characteristics for each particular volume are received. Attributes of the volume disk caching space may include any or all of the attributes described above with respect to FIGS. 2 and 3. In addition, I/O statistics may be collected by volume. I/O statistics include read hits per volume, read misses per volume and the like. In one embodiment of the invention, the volume-based attributes are input from console 52, received by the CMF 24, validated and formatted into commands sent to the volume disk cache driver 22.
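The validation-and-formatting step performed by the CMF 24 might look like the following sketch (the dictionary keys and the command tuples are assumptions; the actual command format passed to the volume disk cache driver 22 is not specified here):

    # Illustrative only: validate received attribute values, then format
    # them into commands for the volume disk cache driver.
    def validate_and_format(attrs, os_page=4096):
        errors = []
        if attrs["page_size"] % os_page:
            errors.append("page size must be a multiple of the OS page size")
        if attrs["cache_type"] not in ("write through", "write back"):
            errors.append("unknown cache type")
        if errors:
            raise ValueError("; ".join(errors))
        return [("SET", name, value) for name, value in attrs.items()]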
It should be noted that the attribute values for each volume are independent of the attribute values for all other volumes, so that, for example, 10 Gigabytes of memory may be reserved for volume 1 V1 42 caching and 100 Gigabytes of memory may be reserved for volume 2 V2 44 caching. Similarly, for example, the page size for V1 33 may be set at 4K while the page size for V2 34 may be set at 16K or 64K. In one embodiment of the invention, the page size is a multiple of 4K.
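Reusing the VolumeCacheAttrs sketch above, this independence can be shown directly (the sizes and policies here are purely illustrative):

    # Illustrative only: each volume's attributes are set independently.
    v1_attrs = VolumeCacheAttrs(cache_bytes=10 * 2**30, page_size=4 * 1024,
                                cache_type="write through", replacement="LRU",
                                max_read_bytes=1 * 2**20)
    v2_attrs = VolumeCacheAttrs(cache_bytes=100 * 2**30, page_size=64 * 1024,
                                cache_type="write back", replacement="LFU",
                                max_read_bytes=4 * 2**20)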
At step 407, caching is activated.
At step 408 an I/O request packet 54 is received (intercepted). In one embodiment of the invention, an I/O request packet 54 directed to one of storage devices 40 a, 40 b, 40 c, etc. is received at the resource manager 14 from client/server 10 a, 10 b, or 10 c, etc. via the network 11. Each I/O request packet 54 includes, in one embodiment, the following parameters: volume identifier, request size and start logical sector address. The volume identifier in one embodiment of the invention is a unique identifier used to distinguish between volumes. In the example, the volume identifier indicates the volume, volume 1 V1 42, volume 2 V2 44, volume 3 V3 46, volume 4 V4 48 or volume 5 V5 50, to which the request is directed. The request size refers to the size of the data requested, and the start logical sector address refers to the location on the storage device where the requested information can be found. The start logical sector address may also determine where data in the volume disk cache space is to be read. Based on the parameters in the I/O request packet 54, the volume disk cache driver 22 knows where to read, store and update the data both in the individual volume disk cache space V1 33, V2 34, V3 35, V4 36 or V5 37 and on the logical volume, volume 1 V1 42, volume 2 V2 44, volume 3 V3 46, volume 4 V4 48 or volume 5 V5 50.
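The three parameters carried by the I/O request packet 54 can be modeled as follows (the class and field names are assumptions; only the three parameters themselves come from the description above):

    # Illustrative only: the parameters of an intercepted I/O request packet.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class IORequestPacket:
        volume_id: str      # which volume (e.g. "V1".."V5") the request targets
        request_size: int   # size of the requested data, in bytes
        start_lsn: int      # start logical sector address on the volume

    request = IORequestPacket(volume_id="V3", request_size=8192, start_lsn=123456)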
As I/O requests are made, the volume disk cache space 38 is filled. With each read request, the volume disk cache driver 22 checks to see if the region being read has already been stored in the specified individual volume disk cache space. If all the data is in the specified individual volume disk cache space, the data is moved into a buffer associated with the input I/O request packet 54 and the user request is completed.
Hence, in one embodiment, the resource manager 14 directs the I/O request packet 54 to the volume disk cache driver 22. At step 410, it is determined whether the data requested in the I/O request packet 54 is available in the volume disk cache space 38. If it is determined that the data is available in the individual volume disk caching space V1 33, V2 34, V3 35, V4 36 or V5 37 assigned to the volume, at step 414 the data will be retrieved from the specified individual disk caching space and sent to the requestor. If it is determined that the data is not available from the individual volume disk caching space assigned to the volume specified in the I/O request packet 54, the data will be retrieved from the specified volume at step 412 and returned to the requestor at step 414.
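Steps 410 through 414 amount to the familiar hit/miss split, sketched below under the assumption that a volume's caching space is a page-indexed mapping and that reads from the underlying volume are available as a callable:

    # Illustrative only: step 410 (lookup), step 412 (miss path), step 414 (return).
    def cached_read(volume_cache, read_from_volume, page):
        data = volume_cache.get(page)
        if data is None:                    # step 412: not in the caching space
            data = read_from_volume(page)   # fetch from the logical volume
            volume_cache[page] = data       # keep it for subsequent requests
        return data                         # step 414: returned to the requestor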
If the region being read has not already been stored in the specified individual volume disk cache space V1 33, V2 34, V3 35, V4 36 or V5 37, a read command is generated and issued to the volume. The read preferably starts with the first page needed to fetch data into the specified individual volume disk caching space and proceeds to the last page needed to satisfy the read request. Once a read start point is established, any succeeding pages that are already in the cache preferably are treated as not being present (marked as invalid). This may be more efficient, as one large read is likely to require less time than the multiple smaller reads needed if all non-contiguous page hits for a given user request were honored. When the read completes, the user data, plus additional data if the user request was not on cache page boundaries, is in the specified individual volume disk caching space. The requested data is moved to the user buffer and the requestor's I/O request is completed. In one embodiment, the read request generated by the volume disk cache driver 22 is built on cache page boundaries so that a disk cache page is the smallest unit of accessible cached data for a particular volume. In one embodiment, the starting address will be rounded down to a modulo-page-size boundary and the length rounded up to a modulo-page-size boundary, such that at a minimum the entire requested region is read, preferably providing a baseline look-ahead capability. Alternatively, other predictive methods of read look-ahead may be employed, such as gathering statistics on patterns of volume access in order to derive heuristics from which to determine the size of the read region.
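The modulo-page-size rounding described above reduces to two lines of integer arithmetic, sketched here with an assumed 4K cache page:

    # Illustrative only: round the start down and the end up to page boundaries.
    PAGE = 4096

    def page_aligned_span(start_byte, length):
        lo = (start_byte // PAGE) * PAGE                 # round down to a boundary
        hi = -(-(start_byte + length) // PAGE) * PAGE    # round up to a boundary
        return lo, hi - lo

    # e.g. a 100-byte request at offset 4000 becomes one 8192-byte read at offset 0
    print(page_aligned_span(4000, 100))    # -> (0, 8192)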
In one embodiment, if the I/O request packet 54 includes a read or write request and the data is not in the individual volume disk caching space V1 33, V2 34, V3 35, V4 36 or V5 37, that data will be stored to the specified individual volume disk caching space. In one embodiment, if the I/O request packet 54 includes a write request, the write request will update the data previously stored in the specified individual volume disk cache space V1 33, V2 34, V3 35, V4 36 or V5 37 as well as on the storage media volume 1 V1 42, volume 2 V2 44, volume 3 V3 46, volume 4 V4 48 or volume 5 V5 50, allowing subsequent reads of the altered data to again benefit from having the data in volume disk caching space 38.
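For the write-through case described in this paragraph, the update of both copies can be sketched as follows (the callable for the backing volume is an assumption):

    # Illustrative only: a write updates the caching space and the volume.
    def write_through(volume_cache, write_to_volume, page, data):
        write_to_volume(page, data)    # persist to the storage media volume
        volume_cache[page] = data      # keep the cached copy current for later reads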
Periodically, old data must be removed from the volume disk caching space 38 so that new data can be stored. At step 416 it is determined if data needs to be purged. If data does not need to be purged, processing continues at step 408. If data needs to be purged, the data is purged and processing continues at step 408. In one embodiment, the volume disk cache driver 22 maintains a history of each time each cache page is accessed and, when pages must be purged, the oldest or least-recently-used pages are purged first. In another embodiment, the volume disk cache driver 22 may apply heuristics based on statistics gathered on volume access to decide which pages to purge.
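A minimal sketch of the least-recently-used purge described here, using recency order as the access history (an actual driver would track per-page access times and may apply other heuristics, as noted above):

    # Illustrative only: purge the least-recently-used pages first.
    from collections import OrderedDict

    class LRUVolumeCache:
        def __init__(self, max_pages):
            self.max_pages = max_pages
            self.pages = OrderedDict()

        def touch(self, page, data):
            self.pages[page] = data
            self.pages.move_to_end(page)          # mark as most recently used
            while len(self.pages) > self.max_pages:
                self.pages.popitem(last=False)    # evict the oldest page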
The methods and system described above may be embodied in the form of program code (i.e., instructions) stored on a computer-readable medium, such as a floppy diskette, CD-ROM, DVD-ROM, DVD-RAM, hard disk drive, or any other machine-readable storage medium, wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention. The present invention may also be embodied in the form of program code that is transmitted over some transmission medium, such as over electrical wiring or cabling, through fiber optics, over a network, including the Internet or an intranet, or via any other form of transmission, wherein, when the program code is received and loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention. When implemented on a general-purpose processor, the program code combines with the processor to provide a unique apparatus that operates analogously to specific logic circuits. The program code may be implemented in a high-level programming language, such as, for example, C, C++, or Java. Alternatively, the program code may be implemented in assembly or machine language. In any case, the language may be a compiled or an interpreted language.
It is noted that the foregoing examples have been provided merely for the purpose of explanation and are in no way to be construed as limiting of the present invention. While the invention has been described with reference to preferred embodiments, it is understood that the words used herein are words of description and illustration, rather than words of limitation. Further, although the invention has been described herein with reference to particular means, materials and embodiments, the invention is not intended to be limited to the particulars disclosed herein; rather, the invention extends to all functionally equivalent structures, methods and uses, such as are within the scope of the appended claims. Those skilled in the art, having the benefit of the teachings of this specification, may effect numerous modifications thereto and changes may be made without departing from the scope and spirit of the invention in its aspects.

Claims (22)

1. A system for accessing data on a computer storage medium, comprising:
at least one storage unit coupled to a resource manager, the at least one storage unit partitioned into at least one of a plurality of volumes;
the resource manager managing a pool of available memory, wherein the pool is allocated among the plurality of volumes according to a set of attributes associated with each volume, wherein the set of attributes comprises a size of caching space that can be changed dynamically, and wherein the pool of available memory may be dynamically reallocated among the plurality of volumes.
2. The system of claim 1 wherein the set of attributes associated with each volume further comprises at least one of: a caching page size, a cache type, a cache replacement policy and a maximum cache read size.
3. The system of claim 2 wherein the maximum cache read size specified by volume can be changed dynamically.
4. The system of claim 1 further comprising at least one of a plurality of clients coupled to the resource manager, wherein the at least one client is associated with the at least one volume.
5. The system of claim 1 wherein the resource manager comprises a volume disk cache driver.
6. The system of claim 5 wherein the volume disk cache driver receives an input/output request from a requestor, the input/output request requesting data from a first volume, the first volume associated with a first volume disk cache space.
7. The system of claim 6 wherein the volume disk cache driver returns the requested data to the requestor from the first volume disk cache space.
8. The system of claim 1 wherein the resource manager further comprises an interface for changing the set of attributes associated with at least one volume of the plurality of volumes.
9. The system of claim 1 wherein the volume disk cache driver is implemented as an upper level filter to a logical disk manager.
10. A method of volume disk caching comprising:
reserving a pool of memory for caching at least one of a plurality of logical disk volumes;
receiving an attribute value associated with a size of the pool of memory to be reserved;
allocating the pool of memory among the at least one of a plurality of logical disk volumes in accordance with a set of attributes associated with the at least one logical disk volume, wherein the set of attributes comprises a size of caching space that can be changed dynamically.
11. The method of claim 10 further comprising reserving the pool of memory from system memory.
12. The method of claim 10 further comprising dynamically reallocating the pool of memory in response to receiving an updated set of attributes associated with at least one of the volumes.
13. The method of claim 10 wherein the set of attributes includes an attribute for setting the amount of memory to be used for caching the at least one logical disk volume.
14. The method of claim 13, wherein the attribute can be changed without interrupting caching.
15. The method of claim 10 wherein the set of attributes includes an attribute for setting the size of each cache page for the at least one logical disk volume.
16. The method of claim 10 wherein the set of attributes includes an attribute for setting the type of caching to be used to write through caching.
17. The method of claim 10 wherein the set of attributes includes an attribute for setting the type of caching to be used to write back caching.
18. The method of claim 10 wherein the set of attributes includes an attribute for specifying a cache replacement policy.
19. The method of claim 18 wherein the attribute can be changed without stopping caching.
20. The method of claim 10 wherein the set of attributes includes an attribute for setting the size of the largest cache read operation.
21. The method of claim 20 wherein the attribute can be changed without stopping caching.
22. A computer-readable medium having stored thereon program code for performing volume-based caching, the program code when executed by a computer causing the computer to:
reserve a pool of memory for caching at least one of a plurality of logical disk volumes;
receive an attribute value associated with a size of the pool of memory to be reserved; and
allocate the pool of memory among the at least one of a plurality of logical disk volumes in accordance with a set of attributes associated with the at least one logical disk volume, wherein the set of attributes comprises a size of caching space that can be changed dynamically.
US10/295,161 2002-11-15 2002-11-15 Disk volume virtualization block-level caching Expired - Fee Related US6957294B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US10/295,161 US6957294B1 (en) 2002-11-15 2002-11-15 Disk volume virtualization block-level caching

Publications (1)

Publication Number Publication Date
US6957294B1 2005-10-18

Family

ID=35066295

Family Applications (1)

Application Number Title Priority Date Filing Date
US10/295,161 Expired - Fee Related US6957294B1 (en) 2002-11-15 2002-11-15 Disk volume virtualization block-level caching

Country Status (1)

Country Link
US (1) US6957294B1 (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5247660A (en) * 1989-07-13 1993-09-21 Filetek, Inc. Method of virtual memory storage allocation with dynamic adjustment
US6061763A (en) * 1994-07-12 2000-05-09 Sybase, Inc. Memory management system employing multiple buffer caches
US20020103889A1 (en) * 2000-02-11 2002-08-01 Thomas Markson Virtual storage layer approach for dynamically associating computer storage with processing hosts
US20020091901A1 (en) * 2000-05-25 2002-07-11 Amnon Romm Disk caching
US20030093647A1 (en) * 2001-11-14 2003-05-15 Hitachi, Ltd. Storage system having means for acquiring execution information of database management system
US20030204671A1 (en) * 2002-04-26 2003-10-30 Hitachi, Ltd. Storage system
US20040044827A1 (en) * 2002-08-29 2004-03-04 International Business Machines Corporation Method, system, and article of manufacture for managing storage pools

Cited By (82)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7693970B2 (en) 2001-06-14 2010-04-06 Savvis Communications Corporation Secured shared storage architecture
US20030009444A1 (en) * 2001-06-14 2003-01-09 Eidler Christopher William Secured shared storage architecture
US20030055972A1 (en) * 2001-07-09 2003-03-20 Fuller William Tracy Methods and systems for shared storage virtualization
US7734781B2 (en) * 2001-07-09 2010-06-08 Savvis Communications Corporation Methods and systems for shared storage virtualization
US7325097B1 (en) * 2003-06-26 2008-01-29 Emc Corporation Method and apparatus for distributing a logical volume of storage for shared access by multiple host computers
US9489150B2 (en) 2003-08-14 2016-11-08 Dell International L.L.C. System and method for transferring data between different raid data storage types for current data and replay data
US8321721B2 (en) 2003-08-14 2012-11-27 Compellent Technologies Virtual disk drive system and method
US7945810B2 (en) 2003-08-14 2011-05-17 Compellent Technologies Virtual disk drive system and method
US20070180306A1 (en) * 2003-08-14 2007-08-02 Soran Philip E Virtual Disk Drive System and Method
US20070234111A1 (en) * 2003-08-14 2007-10-04 Soran Philip E Virtual Disk Drive System and Method
US20070234110A1 (en) * 2003-08-14 2007-10-04 Soran Philip E Virtual Disk Drive System and Method
US20070234109A1 (en) * 2003-08-14 2007-10-04 Soran Philip E Virtual Disk Drive System and Method
US9047216B2 (en) 2003-08-14 2015-06-02 Compellent Technologies Virtual disk drive system and method
US8555108B2 (en) 2003-08-14 2013-10-08 Compellent Technologies Virtual disk drive system and method
US8473776B2 (en) 2003-08-14 2013-06-25 Compellent Technologies Virtual disk drive system and method
US7398418B2 (en) 2003-08-14 2008-07-08 Compellent Technologies Virtual disk drive system and method
US7404102B2 (en) 2003-08-14 2008-07-22 Compellent Technologies Virtual disk drive system and method
US9436390B2 (en) 2003-08-14 2016-09-06 Dell International L.L.C. Virtual disk drive system and method
US8560880B2 (en) 2003-08-14 2013-10-15 Compellent Technologies Virtual disk drive system and method
US7493514B2 (en) 2003-08-14 2009-02-17 Compellent Technologies Virtual disk drive system and method
US9021295B2 (en) 2003-08-14 2015-04-28 Compellent Technologies Virtual disk drive system and method
US7941695B2 (en) 2003-08-14 2011-05-10 Compellent Technolgoies Virtual disk drive system and method
US7574622B2 (en) 2003-08-14 2009-08-11 Compellent Technologies Virtual disk drive system and method
US10067712B2 (en) 2003-08-14 2018-09-04 Dell International L.L.C. Virtual disk drive system and method
US7613945B2 (en) 2003-08-14 2009-11-03 Compellent Technologies Virtual disk drive system and method
US8020036B2 (en) 2003-08-14 2011-09-13 Compellent Technologies Virtual disk drive system and method
US7962778B2 (en) 2003-08-14 2011-06-14 Compellent Technologies Virtual disk drive system and method
US7849352B2 (en) 2003-08-14 2010-12-07 Compellent Technologies Virtual disk drive system and method
US20050055603A1 (en) * 2003-08-14 2005-03-10 Soran Philip E. Virtual disk drive system and method
US20050091466A1 (en) * 2003-10-27 2005-04-28 Larson Douglas V. Method and program product for avoiding cache congestion by offsetting addresses while allocating memory
US7237084B2 (en) * 2003-10-27 2007-06-26 Hewlett-Packard Development Company, L.P. Method and program product for avoiding cache congestion by offsetting addresses while allocating memory
US8244974B2 (en) * 2003-12-10 2012-08-14 International Business Machines Corporation Method and system for equalizing usage of storage media
US20050132133A1 (en) * 2003-12-10 2005-06-16 International Business Machines Corporation Method and system for equalizing usage of storage media
US20060248307A1 (en) * 2003-12-24 2006-11-02 Masayuki Yamamoto Configuration management apparatus and method
US7865687B2 (en) 2003-12-24 2011-01-04 Hitachi, Ltd. Configuration management apparatus and method
US20090307391A1 (en) * 2003-12-24 2009-12-10 Masayuki Yamamoto Configuration management apparatus and method
US7600092B2 (en) * 2003-12-24 2009-10-06 Hitachi, Ltd. Configuration management apparatus and method
US20120047502A1 (en) * 2004-02-03 2012-02-23 Hitachi, Ltd. Computer system, control apparatus, storage system and computer device
US8495254B2 (en) * 2004-02-03 2013-07-23 Hitachi, Ltd. Computer system having virtual storage apparatuses accessible by virtual machines
US8099576B1 (en) 2004-04-30 2012-01-17 Netapp, Inc. Extension of write anywhere file system layout
US8990539B2 (en) 2004-04-30 2015-03-24 Netapp, Inc. Extension of write anywhere file system layout
US8903830B2 (en) 2004-04-30 2014-12-02 Netapp, Inc. Extension of write anywhere file layout write allocation
US20050246401A1 (en) * 2004-04-30 2005-11-03 Edwards John K Extension of write anywhere file system layout
US7409494B2 (en) * 2004-04-30 2008-08-05 Network Appliance, Inc. Extension of write anywhere file system layout
US9430493B2 (en) 2004-04-30 2016-08-30 Netapp, Inc. Extension of write anywhere file layout write allocation
US8583892B2 (en) 2004-04-30 2013-11-12 Netapp, Inc. Extension of write anywhere file system layout
US9251049B2 (en) 2004-08-13 2016-02-02 Compellent Technologies Data storage space recovery system and method
US8230193B2 (en) 2006-05-24 2012-07-24 Compellent Technologies System and method for raid management, reallocation, and restriping
US10296237B2 (en) 2006-05-24 2019-05-21 Dell International L.L.C. System and method for raid management, reallocation, and restripping
US7886111B2 (en) 2006-05-24 2011-02-08 Compellent Technologies System and method for raid management, reallocation, and restriping
US9244625B2 (en) 2006-05-24 2016-01-26 Compellent Technologies System and method for raid management, reallocation, and restriping
US20080065582A1 (en) * 2006-09-07 2008-03-13 Brian Gerard Goodman Data library background operations system apparatus and method
WO2008037585A1 (en) 2006-09-26 2008-04-03 International Business Machines Corporation Cache disk storage upgrade
JP2010504576A (en) * 2006-09-26 2010-02-12 インターナショナル・ビジネス・マシーンズ・コーポレーション Upgrading cache disk storage
US9286228B2 (en) * 2007-06-08 2016-03-15 Apple Inc. Facilitating caching in an image-processing system
US20080303839A1 (en) * 2007-06-08 2008-12-11 Kevin Quennesson Facilitating caching in an image-processing system
US8274520B2 (en) * 2007-06-08 2012-09-25 Apple Inc. Facilitating caching in an image-processing system
US8601035B2 (en) 2007-06-22 2013-12-03 Compellent Technologies Data storage space recovery system and method
US20090157852A1 (en) * 2007-12-14 2009-06-18 Michail Krupkin Flexible and scalable method and apparatus for dynamic subscriber services configuration and management
US9313108B2 (en) * 2007-12-14 2016-04-12 Ericsson Ab Flexible and scalable method and apparatus for dynamic subscriber services configuration and management
US8583854B2 (en) * 2007-12-24 2013-11-12 Skymedi Corporation Nonvolatile storage device with NCQ supported and writing method for a nonvolatile storage device
US20090164698A1 (en) * 2007-12-24 2009-06-25 Yung-Li Ji Nonvolatile storage device with NCQ supported and writing method for a nonvolatile storage device
CN101727293B (en) * 2008-10-23 2012-05-23 成都市华为赛门铁克科技有限公司 Method, device and system for setting solid state disk (SSD) memory
US8819334B2 (en) 2009-07-13 2014-08-26 Compellent Technologies Solid state drive data storage system and method
US8468292B2 (en) 2009-07-13 2013-06-18 Compellent Technologies Solid state drive data storage system and method
US20110302386A1 (en) * 2010-06-07 2011-12-08 Hitachi, Ltd. Method and apparatus to manage special rearrangement in automated tier management
US8463990B2 (en) * 2010-06-07 2013-06-11 Hitachi, Ltd. Method and apparatus to perform automated page-based tier management of storing data in tiered storage using pool groups
US20130238851A1 (en) * 2012-03-07 2013-09-12 Netapp, Inc. Hybrid storage aggregate block tracking
US9146851B2 (en) 2012-03-26 2015-09-29 Compellent Technologies Single-level cell and multi-level cell hybrid solid state drive
US20140115226A1 (en) * 2012-10-18 2014-04-24 International Business Machines Corporation Cache management based on physical memory device characteristics
US9235513B2 (en) * 2012-10-18 2016-01-12 International Business Machines Corporation Cache management based on physical memory device characteristics
US9229862B2 (en) * 2012-10-18 2016-01-05 International Business Machines Corporation Cache management based on physical memory device characteristics
US20140115225A1 (en) * 2012-10-18 2014-04-24 International Business Machines Corporation Cache management based on physical memory device characteristics
US20140359226A1 (en) * 2013-05-30 2014-12-04 Hewlett-Packard Development Company, L.P. Allocation of cache to storage volumes
US9223713B2 (en) * 2013-05-30 2015-12-29 Hewlett Packard Enterprise Development Lp Allocation of cache to storage volumes
US9104334B2 (en) * 2013-08-20 2015-08-11 Avago Technologies General Ip (Singapore) Pte. Ltd Performance improvements in input/output operations between a host system and an adapter-coupled cache
US9699254B2 (en) * 2013-10-07 2017-07-04 Hitachi, Ltd. Computer system, cache management method, and computer
JP2015075818A (en) * 2013-10-07 2015-04-20 株式会社日立製作所 Computer system, cache management method, and computer
US20150100663A1 (en) * 2013-10-07 2015-04-09 Hitachi, Ltd. Computer system, cache management method, and computer
US9894156B2 (en) 2015-09-22 2018-02-13 International Business Machines Corporation Distributed global data vaulting mechanism for grid based storage
US10171583B2 (en) 2015-09-22 2019-01-01 International Business Machines Corporation Distributed global data vaulting mechanism for grid based storage
US10866912B2 (en) 2017-03-10 2020-12-15 Toshiba Memory Corporation Integrated heterogeneous solid state storage drive

Similar Documents

Publication Publication Date Title
US6957294B1 (en) Disk volume virtualization block-level caching
US9405476B2 (en) Systems and methods for a file-level cache
US8996807B2 (en) Systems and methods for a multi-level cache
JP6326378B2 (en) Hybrid storage aggregate block tracking
US10338839B2 (en) Memory system and method for controlling nonvolatile memory
US8904146B1 (en) Techniques for data storage array virtualization
US7769952B2 (en) Storage system for controlling disk cache
US9710187B1 (en) Managing data relocation in storage systems
US8688932B2 (en) Virtual computer system and method of controlling the same
US8051243B2 (en) Free space utilization in tiered storage systems
US8639876B2 (en) Extent allocation in thinly provisioned storage environment
EP2778919A2 (en) System, method and computer-readable medium for dynamic cache sharing in a flash-based caching solution supporting virtual machines
US8423727B2 (en) I/O conversion method and apparatus for storage system
US7062608B2 (en) Storage device adapter equipped with integrated cache
JP2022539950A (en) Storage system, memory management method and management node
US8868877B2 (en) Creating encrypted storage volumes based on thin-provisioning mode information
US11803329B2 (en) Methods and systems for processing write requests in a storage system
US10831374B2 (en) Minimizing seek times in a hierarchical storage management (HSM) system
US20080109630A1 (en) Storage system, storage unit, and storage management system
WO2013023090A2 (en) Systems and methods for a file-level cache
US20070079064A1 (en) Disk cache control apparatus
US11853574B1 (en) Container flush ownership assignment
US20190227957A1 (en) Method for using deallocated memory for caching in an i/o filtering framework
US11144445B1 (en) Use of compression domains that are more granular than storage allocation units
US11775174B1 (en) Systems and methods of data migration in a tiered storage system based on volume priority category

Legal Events

Date Code Title Description
AS Assignment

Owner name: UNISYS CORPORATION, PENNSYLVANIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SAUNDERS, MICHAEL J.;YIP, VINCENT S.;NEILL, JOSEPH P.;AND OTHERS;REEL/FRAME:013366/0224

Effective date: 20030116

FPAY Fee payment

Year of fee payment: 4

AS Assignment

Owner name: UNISYS CORPORATION, PENNSYLVANIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CITIBANK, N.A.;REEL/FRAME:023312/0044

Effective date: 20090601

Owner name: UNISYS HOLDING CORPORATION, DELAWARE

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CITIBANK, N.A.;REEL/FRAME:023312/0044

Effective date: 20090601

AS Assignment

Owner name: UNISYS CORPORATION, PENNSYLVANIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CITIBANK, N.A.;REEL/FRAME:023263/0631

Effective date: 20090601

Owner name: UNISYS HOLDING CORPORATION, DELAWARE

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CITIBANK, N.A.;REEL/FRAME:023263/0631

Effective date: 20090601

AS Assignment

Owner name: DEUTSCHE BANK TRUST COMPANY AMERICAS, AS COLLATERAL TRUSTEE

Free format text: PATENT SECURITY AGREEMENT (PRIORITY LIEN);ASSIGNOR:UNISYS CORPORATION;REEL/FRAME:023355/0001

Effective date: 20090731

AS Assignment

Owner name: DEUTSCHE BANK TRUST COMPANY AMERICAS, AS COLLATERAL TRUSTEE

Free format text: PATENT SECURITY AGREEMENT (JUNIOR LIEN);ASSIGNOR:UNISYS CORPORATION;REEL/FRAME:023364/0098

Effective date: 20090731

AS Assignment

Owner name: GENERAL ELECTRIC CAPITAL CORPORATION, AS AGENT, ILLINOIS

Free format text: SECURITY AGREEMENT;ASSIGNOR:UNISYS CORPORATION;REEL/FRAME:026509/0001

Effective date: 20110623

AS Assignment

Owner name: UNISYS CORPORATION, PENNSYLVANIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:DEUTSCHE BANK TRUST COMPANY;REEL/FRAME:030004/0619

Effective date: 20121127

AS Assignment

Owner name: UNISYS CORPORATION, PENNSYLVANIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:DEUTSCHE BANK TRUST COMPANY AMERICAS, AS COLLATERAL TRUSTEE;REEL/FRAME:030082/0545

Effective date: 20121127

FPAY Fee payment

Year of fee payment: 8

AS Assignment

Owner name: WELLS FARGO BANK, NATIONAL ASSOCIATION, AS COLLATERAL TRUSTEE, NEW YORK

Free format text: PATENT SECURITY AGREEMENT;ASSIGNOR:UNISYS CORPORATION;REEL/FRAME:042354/0001

Effective date: 20170417

REMI Maintenance fee reminder mailed
AS Assignment

Owner name: UNISYS CORPORATION, PENNSYLVANIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:WELLS FARGO BANK, NATIONAL ASSOCIATION (SUCCESSOR TO GENERAL ELECTRIC CAPITAL CORPORATION);REEL/FRAME:044416/0358

Effective date: 20171005

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.)

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Expired due to failure to pay maintenance fee

Effective date: 20171018

AS Assignment

Owner name: UNISYS CORPORATION, PENNSYLVANIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:WELLS FARGO BANK, NATIONAL ASSOCIATION;REEL/FRAME:054231/0496

Effective date: 20200319