WO2004021190A2 - Moving data among storage units - Google Patents

Moving data among storage units

Info

Publication number
WO2004021190A2
WO2004021190A2 (PCT/GB2003/003551)
Authority
WO
WIPO (PCT)
Prior art keywords
storage
pool
storage pool
data
target
Application number
PCT/GB2003/003551
Other languages
French (fr)
Other versions
WO2004021190A3 (en)
Inventor
Kevin Lee Gibble
Gregory Tad Kishi
Jonathan Wayne Peake
Original Assignee
International Business Machines Corporation
IBM United Kingdom Limited
Application filed by International Business Machines Corporation and IBM United Kingdom Limited
Priority to EP03790999A (EP1540455B1)
Priority to JP2004532263A (JP4502807B2)
Priority to AU2003251066A (AU2003251066A1)
Priority to CA 2497326 (CA2497326C)
Priority to DE2003613783 (DE60313783T2)
Publication of WO2004021190A2
Publication of WO2004021190A3

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0646 Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
    • G06F 3/0647 Migration mechanisms
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/0608 Saving storage space on storage systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/0671 In-line storage system
    • G06F 3/0683 Plurality of storage devices
    • G06F 3/0686 Libraries, e.g. tape libraries, jukebox
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 15/00 Driving, starting or stopping record carriers of filamentary or web form; Driving both such record carriers and heads; Guiding such record carriers or containers therefor; Control thereof; Control of operating function
    • G11B 15/02 Control of operating function, e.g. switching from recording to reproducing
    • G11B 15/026 Control of operating function, e.g. switching from recording to reproducing by using processor, e.g. microcomputer

Definitions

  • the physical volumes subject to the storage pool management operations described herein, such as reclamation, were stored in tape cartridges.
  • the physical volumes subject to the storage pool management operations may alternatively be stored in any non-volatile storage medium known in the art, including optical disks, hard disk drives, non-volatile Random Access Memory (RAM) devices, etc.
  • the server would include the necessary drives or interfaces through which data in the alternative storage unit component is accessed.
  • each succeeding storage pool indicated in the target reclamation pool field 102 has a higher reclamation threshold than the preceding storage pool from which the data came.
  • a succeeding target storage pool to which data is reclaimed may have a lower or equal reclamation threshold.
  • succeeding target storage pools may have reclamation thresholds that are higher or lower than the threshold in any of the preceding target storage pools.
  • the reclamation threshold is satisfied if the amount of active data in the tape cartridge is less than the threshold amount.
  • alternative thresholds and threshold measurements may be used.
  • FIGs. 3a, 3b, and 3c show the records as having specific types of information.
  • the logical volume, physical volume, and storage pool records may have fewer, more or different fields than shown in the figures.
  • the sequence of tape selection in FIG. 5 may be based on the amount of active data on the tape instead of an index.
  • n and i are used to denote integer values indicating a certain number of elements. These variables may denote any number when used at different instances with the same or different elements.
  • the illustrated logic of FIGs. 4 and 5 shows certain events occurring in a certain order. In alternative implementations, certain operations may be performed in a different order, modified or removed. Moreover, steps may be added to the above described logic and still conform to the described implementations. Further, operations described herein may occur sequentially or certain operations may be processed in parallel. Yet further, operations may be performed by a single processing unit or by distributed processing units.
  • FIG. 6 illustrates one implementation of a computer architecture 600 that may be used in the hosts 4a, 4b...4n and tape server 2 (FIG. 1).
  • the architecture 600 may include a processor 602 (e.g., a microprocessor), a memory 604 (e.g., a volatile memory device), and storage 606 (e.g., a non-volatile storage, such as magnetic disk drives, optical disk drives, a tape drive, etc.).
  • the storage 606 may comprise an internal storage device or an attached or network accessible storage. Programs in the storage 606 are loaded into the memory 604 and executed by the processor 602 in a manner known in the art.
  • the architecture further includes a network card 608 to enable communication with a network.
  • An input device 610 is used to provide user input to the processor 602, and may include a keyboard, mouse, pen-stylus, microphone, touch sensitive display screen, or any other activation or input mechanism known in the art.
  • An output device 612, such as a display monitor, printer, storage, etc., is capable of rendering information transmitted from the processor 602 or other components.


Abstract

Storage pool information indicates an assignment of a plurality of storage units to a plurality of storage pools, wherein each pool is assigned zero or more storage units, wherein data associated with one storage pool is stored in a storage unit assigned to the storage pool, wherein the storage pool information for each pool indicates a threshold and target storage pool, and wherein the target storage pool is capable of being different from the storage pool. One storage unit associated with a source storage pool is selected and a determination is made of the threshold from the storage pool information for the source storage pool. A determination is made of whether the selected storage unit satisfies the determined threshold and if the selected storage unit satisfies the determined threshold, then a target storage unit in the target storage pool is selected if the storage pool information for the source storage pool indicates a target storage pool different from the source storage pool. Data from the selected storage unit is copied to the selected target storage unit.

Description

MOVING DATA AMONG STORAGE UNITS
The present invention relates to moving data among storage units.
In a tape library system, a tape controller will perform a reclamation process to improve the utilization of the tape storage units. The reclamation process involves copying active data from one or more tapes having both inactive and active data to fewer tapes that only have active data. The tapes from which the data is copied are then added to a scratch pool of available tapes from which they may be selected and used to store future data. Empty tapes may be returned to a scratch pool or retained for the exclusive use of the current pool. This process improves storage capacity utilization by aggregating active data from multiple tapes onto a single tape that stores a greater percentage of active data. Reclamation is necessary because, as data is modified, older versions of the data on various tapes become outdated or inactive. Tapes that have both inactive and active data are not fully utilized because data is written sequentially and inactive data cannot simply be replaced with active data.
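By way of illustration only, the reclamation process just described might be sketched in Python as follows; the tape and scratch pool representations, and all names here, are illustrative assumptions rather than the patented implementation.

    # Minimal sketch of tape reclamation: copy the active logical volumes from
    # partially inactive tapes onto one tape taken from the scratch pool, then
    # return the emptied source tapes to the scratch pool for future use.

    def reclaim(tapes, scratch_pool):
        """tapes: dicts like {"id": "T1", "volumes": [("v1", True), ...]},
        where the boolean marks a logical volume as active or inactive."""
        target = scratch_pool.pop()           # take an empty tape from the scratch pool
        target["volumes"] = []
        for tape in tapes:
            active = [(name, ok) for name, ok in tape["volumes"] if ok]
            target["volumes"].extend(active)  # sequentially append only the active data
            tape["volumes"] = []              # source tape now holds no active data
            scratch_pool.append(tape)         # emptied tape becomes available again
        return target

    scratch = [{"id": "T9"}, {"id": "T8"}]
    tapes = [
        {"id": "T1", "volumes": [("v1", True), ("v2", False), ("v3", False)]},
        {"id": "T2", "volumes": [("v4", False), ("v5", True)]},
    ]
    full_tape = reclaim(tapes, scratch)
    print(full_tape["volumes"])  # [('v1', True), ('v5', True)] -- only active data remains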
A tape is scheduled for reclamation when the amount of active data in a tape reaches a reclamation threshold. In order to optimize tape utilization, the reclamation threshold would be set to a higher level to more frequently consolidate data from tapes with a lower utilization onto a single tape with a higher utilization. However, the reclamation process consumes substantial tape library resources to move the data from tape to tape and can affect other tape library operations. For instance, the data movement that occurs during reclamation can interfere with the data movement to tape that occurs in a hierarchical storage management (HSM) system when data is migrated from a faster access storage device, such as an array of hard disk drives, to a slower access storage device, such as tape. Setting the reclamation threshold to a higher level to increase tape utilization will increase the frequency of the reclamation process, thereby consuming substantial tape library resources and perhaps interfering with other tape library operations, such as data migration when the tape library is used in a hierarchical storage management system.
On the other hand, setting the reclamation threshold lower will reduce the frequency of reclamation because the amount of active data must fall to a relatively low level before reclamation begins. Reducing the frequency of reclamation will consume fewer tape library resources and minimize interference with other tape library operations, such as data migration from disk to tape. However, reducing the frequency of reclamation allows tapes to remain at a lower storage capacity utilization because reclamation is not performed until the tape storage capacity utilization falls to the lower threshold level. If storage capacity utilization is lower, then the data is dispersed across more tapes at a lower capacity utilization.
Thus, there is always a tradeoff between tape library performance and storage capacity utilization that must be considered when determining how to set the reclamation threshold.
For these reasons, there is a need in the art for improved techniques for handling data reclamation in a storage system.
Provided are a method as claimed in claim 1, and a corresponding system and program, for managing data in storage units.
Preferably, at least two of the storage pools may have different thresholds.
Preferably, the storage units in the source storage pool may have a lower storage capacity than the storage units in the target storage pool.
Preferably, the source storage pool may comprise a first storage pool, the target storage pool may comprise a second storage pool, wherein a third storage pool is identified as a target storage pool in the storage pool information for the second storage pool, and whereby data from one selected storage unit in the second storage pool is moved to the third storage pool when the threshold for the second storage pool is reached.
Described implementations provide techniques for managing data in storage pools and reclaiming data in a storage unit in one source pool to a storage unit in a different target storage pool, where the source and target storage pool may have different attributes.
Referring now to the drawings, in which like reference numbers represent corresponding parts throughout:
FIG. 1 illustrates a computing environment in which aspects of the invention are implemented;
FIG. 2 illustrates an alternative computing environment in which aspects of the invention are implemented;
FIGs. 3a, 3b, and 3c illustrate data structures maintaining information on logical volumes, physical volumes and storage pools, respectively, in accordance with implementations of the invention;
FIGs. 4 and 5 illustrate logic to perform tape reclamation operations in accordance with implementations of the invention; and
FIG. 6 illustrates an architecture of computing components in the computing environment, such as the hosts and tape server, and any other computing devices.
FIG. 1 illustrates a computing environment in which aspects of the invention may be implemented. A tape server 2 provides host systems 4a, 4b...4n access to logical volumes stored on tape cartridges (also referred to as physical volumes) 6a, 6b, 6c, 6d, 6e, 6f, 6g. In certain implementations, the tape cartridges 6a, 6b...6g are organized into logical groups referred to as pools 8a, 8b. A tape controller 10 includes hardware and/or software to manage access to the tape cartridges 6a, 6b...6g in the pools 8a, 8b and perform reclamation in accordance with implementations described herein. A scratch pool 8c includes tape cartridges 6h, 6i, and 6j that are empty, free and available for use with another pool if additional tape storage is needed for logical volumes in a pool.
Although FIG. 1 shows a certain number of tape cartridges and storage pools, any number of tape cartridges and storage pools may be used, where the storage pools may include any number of tape cartridges. The tape server 2 may comprise an automated tape library and include a gripper assembly (not shown) to access and load the tape cartridges 6a, 6b...6j into one or more accessible tape drives (not shown) and include cartridge slots (not shown) to store the tape cartridges. In further implementations, the tape cartridges may be manually loaded into one or more tape drives accessible to the tape server 2.
The tape server 2 may comprise any tape library or tape controller system known in the art. The tape cartridges 6a, 6b...6j may comprise any type of sequential access magnetic storage media known in the art, including Digital Linear Tape (DLT), Linear Tape Open (LTO), etc. The hosts 4a, 4b...4n may comprise any computing device known in the art, such as a personal computer, laptop computer, workstation, mainframe, telephony device, handheld computer, server, network appliance, etc. The hosts 4a, 4b...4n may connect to the tape server 2 via a direct cable connection or over a network, such as a Local Area Network (LAN), Wide Area Network (WAN), Storage Area Network (SAN), the Internet, an Intranet, etc.
FIG. 2 illustrates an alternative implementation where the tape server 2 shown in FIG. 1 is included in a hierarchical storage management (HSM) system as tape server 32. The hosts 34a, 34b...34n perform Input/Output (I/O) operations with respect to a disk array 36 through a storage server 38. The disk array 36 may comprise a single hard disk drive, a Redundant Array of Independent Disks (RAID), Just a Bunch of Disks (JBOD), or any other storage medium that allows for faster access than the storage medium managed by the tape server 32. The storage server 38 may comprise any server class machine suitable for handling I/O requests from multiple sources, such as an enterprise class storage server. In certain implementations, the storage server 38 includes storage management software 40, which manages the migration of data from the disk array 36 to the tape server 32 for storage on tapes (physical volumes) in storage pools 42, such as the storage pools 8a, 8b shown in FIG. 1. In certain implementations, the storage management software 40 may migrate data from the disk array 36 to the tape server 32 using hierarchical storage management (HSM) algorithms and techniques known in the art, such as the HSM operations implemented in the Tivoli® Space Manager products (Tivoli is a registered trademark of International Business Machines Corporation).
In still further implementations, the storage management software 40 may implement virtual tape server functions so that the hosts 34a, 34b...34n use tape access operations to access data in the disk array 36, where the disk array 36 operates as a large high speed buffer for the tape storage, relative to the slower access tape cartridge medium. The hosts 34a, 34b...34n may use tape I/O commands to access data in the disk array 36 as tape logical volumes. The storage management software 40 would use HSM algorithms to migrate data from the disk array 36 to the tape server 32. The storage management software 40 may include virtual tape server software known in the art, such as the software used with the IBM TotalStorage™ Virtual Tape Server (TotalStorage is a trademark of IBM) to implement a virtual tape server environment.
Thus, the tape server 2, 32 that is performing reclamation operations may be directly connected to the hosts performing the tape operations or may receive data from a disk array as part of HSM migration, a virtual tape server system, backup or other data management operations performed at the disk array level. Additionally, the tape server 32 could be contained within the storage server 38.
In certain implementations, system administrators can assign physical volumes to pools to allow classification of tapes according to some predefined criteria. For instance, in an organization, there may be separate storage pools of tape cartridges for different units within the organization. In a corporate organization, there may be separate storage pools for different departments, e.g., accounting, marketing, finance, engineering, etc., so that data from a particular department is stored on tape cartridges that only store that particular class of data. Alternatively, storage pools may be defined for data having different rates of usage. For instance, one pool may be for data that has been modified or accessed recently and another pool may be used for archived or backup data. Still further, pools may be designated for different groups of users, such as those with a high level of access, those with limited access, etc. Thus, the storage pools may be used to assign tape cartridges to group data by class or type.
In certain implementations, the tape controller 10 maintains data structures in memory 12, including logical volume records 14, physical volume records 16, and pool records 18. The memory 12 may comprise a volatile memory device, e.g., a random access memory (RAM) or a non-volatile storage, e.g., a hard disk drive. These records may be maintained in a relational or object oriented database, a table or any other data structure known in the art.
FIG. 3a illustrates the information maintained in each logical volume record 50, where a logical volume record 50 is maintained for each logical volume stored in a tape cartridge 6a, 6b...6g, including:
ID 52: an identifier of the logical volume.
Current Physical Volume(s) 54: identifies one or more physical volumes (tape cartridges 6a, 6b...6j) including the logical volume. A logical volume may span multiple physical volumes or multiple logical volumes may be stored on a single physical volume. The pool in which the logical volume is assigned can be determined from the storage pool associated with the current physical volume including the logical volume.
Location on Physical Volume(s) 56: indicates the location of the logical volume on the one or more physical volumes including the logical volume.
FIG. 3b illustrates the information in each physical volume record 70, where a physical volume record 70 is maintained for each physical volume or tape cartridge 6a, 6b...6j that may be accessed by the tape server 2 through a tape drive, including:
ID 72: provides a unique identifier of a physical volume.
Home Pool 74: indicates the home pool to which the physical volume is assigned. If a physical volume (tape cartridge) is moved from one pool to another, then the home pool is reassigned to the target pool to which the physical volume is reassigned. A "borrow" changes only the current pool and the home pool remains the same. If a tape cartridge is borrowed two or more times, then the home pool will still specify the same pool from which the tape was initially borrowed, such as the scratch pool, but the current pool is changed (a sketch of this behavior follows the field list below).
Current Pool 76: indicates the current pool to which the physical volume is assigned, such that a physical volume stores data of the type associated with the current pool.
Media Type 78: indicates a media type of the physical volume, such as "J" or "K".
Target Pool 80: the default indicates no target pool. If the field indicates a known storage pool, then this field indicates that the physical volume is involved in a pending move operation and is to be moved to the specified target pool after the active data from the physical volume is copied to an empty tape.
Priority Reclamation 82: indicates that reclamation for the physical volume occurs during the scheduled reclamation period, but the physical volume is assigned a higher reclamation priority than other cartridges to be reclaimed, so that the physical volume is scheduled for reclamation before other tape cartridges to be reclaimed. The default may be that priority reclamation is off, indicating that reclamation will occur during a normally scheduled reclamation period at the normal assigned reclamation priority.
Inhibit Reclamation Schedule 84: If the priority reclamation 82 indicates a priority reclamation, then this field may indicate to schedule the reclamation immediately, even if reclamation would occur outside of the scheduled reclamation period during a critical use time. If this inhibit option is not selected, then the priority reclamation would occur during the normal scheduled reclamation period.
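To make the Home Pool 74 and Current Pool 76 distinction concrete, the following Python sketch (an illustrative model with assumed names, not the patent's code) shows how a borrow changes only the current pool while the home pool is preserved for a later return:

    # Illustrative model of the Home Pool 74 / Current Pool 76 fields:
    # a "borrow" changes only the current pool; the home pool is unchanged.

    class Cartridge:
        def __init__(self, cid, home_pool):
            self.id = cid
            self.home_pool = home_pool        # pool the cartridge belongs to, e.g. "scratch"
            self.current_pool = home_pool     # pool whose data it currently stores

        def borrow(self, pool):
            self.current_pool = pool          # home_pool is deliberately untouched

        def release(self, return_to_home=True):
            # Called when the tape no longer has active data; a return policy
            # (Return Policy 96, below) may send it back to its home pool.
            if return_to_home:
                self.current_pool = self.home_pool

    tape = Cartridge("T7", home_pool="scratch")
    tape.borrow("accounting")
    tape.borrow("engineering")                # borrowed twice: home pool is still "scratch"
    print(tape.home_pool, tape.current_pool)  # scratch engineering
    tape.release()
    print(tape.current_pool)                  # scratch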
FIG. 3c illustrates the information maintained with a pool record 90, where there is one pool record 90 for each defined pool, including:
ID 92: provides a unique identifier of a pool. This ID may have a descriptive name indicating the type or class of data stored in the pool, e.g., accounting data, marketing data, research and development, archival data, high security users, etc. If a pool record 90 is maintained for the scratch pool, then the scratch pool may have a unique scratch pool identifier.
Borrowing 94: indicates whether physical volumes (tape cartridges) may be borrowed by the pool from the scratch pool.
Return Policy 96: indicates whether a physical volume (tape cartridge) moved from one pool to another must be returned to the home pool when the tape is reclaimed or released, i.e., when the tape no longer has any active data.
Media Type 98: a field that indicates the media type(s) of physical volumes associated with the pool.
Reclamation Threshold 100: indicates the reclamation threshold for the pool, which is the capacity utilization that triggers the reclamation process for tapes in the pool, such that a tape (physical volume) in the pool is reclaimed if its active data is less than the reclamation threshold for that pool. Each pool may have a different reclamation threshold.
Target Reclamation Pool 102: indicates a storage pool to which data is copied from the tape cartridge in the current pool during reclamation. For instance, when reclamation is performed, the data on a cartridge in one storage pool is moved to a tape cartridge in the storage pool indicated in the reclamation pool field 102. This allows data to move to different storage pools to be reclaimed at different reclamation thresholds. If a different storage pool is not indicated in field 102 or if a default "undefined" value is indicated in field 102, then the data is reclaimed to the same storage pool.
Both the reclamation threshold 100 and target reclamation pool 102 values may be set by the system administrator for defined storage pools.
In certain implementations, the reclamation thresholds 100 indicated in the pool records 18 may be set at different levels for different pools. Thus, one pool may have a lower reclamation threshold than another pool. In one implementation, data may be initially stored in a storage pool having a low reclamation threshold 100 and a target reclamation pool 102 indicating a succeeding storage pool having a higher reclamation threshold 100. For instance, data may initially be stored on tapes in storage pool A, which has a low reclamation threshold of, say, 10%. The target reclamation pool 102 for storage pool A may indicate storage pool B, which has a high reclamation threshold, e.g., 90%. Thus, logical volumes reclaimed from tapes in storage pool A are stored on tapes in storage pool B, so that reclamation causes logical volumes to move from one storage pool to another.
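As a minimal sketch of how the pool records of FIG. 3c and the pool A to pool B chain just described might be represented, the following uses assumed field names that loosely follow fields 92, 98, 100, and 102; it is illustrative only.

    from dataclasses import dataclass
    from typing import Optional

    # Sketch of a pool record (FIG. 3c); thresholds are fractions of active data.
    @dataclass
    class PoolRecord:
        pool_id: str
        media_type: str                       # e.g. "J" or "K" cartridges (Media Type 98)
        reclamation_threshold: float          # reclaim a tape when active data <= this (field 100)
        target_reclamation_pool: Optional[str] = None  # None: reclaim into the same pool (field 102)

    pools = {
        # Pool A: low threshold, smaller "J" media; reclaimed data moves on to pool B.
        "A": PoolRecord("A", media_type="J", reclamation_threshold=0.10,
                        target_reclamation_pool="B"),
        # Pool B: high threshold, larger "K" media; reclaims back into itself.
        "B": PoolRecord("B", media_type="K", reclamation_threshold=0.90),
    }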
In implementations where data moves from tapes in a lower reclamation threshold storage pool to a higher reclamation threshold storage pool, storage capacity utilization is optimized while the impact of reclamation operations on tape server 2, 32 performance is minimized, for the following reasons. Data stored in the first storage pool A may include data that is frequently updated, and thus expires at a fast rate, as well as data that is infrequently updated, such as archival data. Setting the reclamation threshold low for the first storage pool A ensures that reclamation occurs with respect to data that is infrequently accessed, such as archival data, because most of the frequently accessed data would have expired (i.e., been modified) before the low reclamation threshold is reached. Thus, reclamation at storage pool A with the low reclamation threshold would likely involve the movement of mostly infrequently accessed (archival) data to storage pool B. Data in storage pool B is reclaimed at a higher reclamation threshold to improve storage capacity utilization for the relatively less frequently accessed data. However, even though storage pool B has a higher reclamation threshold, reclamation will not substantially degrade tape server 2, 32 performance, because the data in storage pool B is infrequently accessed and thus is unlikely to expire often enough to trigger reclamations at the higher threshold at a rate that degrades performance.
By using multiple storage pools with different reclamation thresholds, the initial storage pool effectively filters out frequently used data to move data that is relatively infrequently accessed to the next storage pool where a higher reclamation threshold can be used to improve storage capacity utilization with minimal effects on performance.
In further implementations, data can be reclaimed through more than two pools, where each pool through which the data is moved has an increasing reclamation threshold to provide an increased storage capacity utilization for data that is infrequently accessed. In this way, reclamation at each storage pool filters out the relatively more frequently accessed data so that the relatively infrequently used data in the storage pool is promoted to succeeding storage pools for storage on tapes at an increasing storage capacity utilization.
FIG. 4 illustrates logic implemented in the tape controller 10 to select tapes 6a, 6b...6g within one storage pool 8a, 8b for reclamation. Control begins at block 200 where the tape controller 10 selects one of the storage pools 8a, 8b in which to process tapes for reclamation. This process would be performed with respect to each storage pool 8a, 8b, other than the scratch pool 8c which includes empty tapes (physical volumes) 6h, 6i, 6j. A loop is performed at blocks 202 through 208 for each tape cartridge i in the selected storage pool 8a, 8b. If (at block 204) the percentage of active data is less than or equal to the reclamation threshold 100 indicated in the pool record 90 (FIG. 3c) for the selected storage pool 8a, 8b, then the tape controller 10 calls (at block 206) the reclamation process for tape i. In such case, the tape i would be subject to reclamation according to the logic of FIG. 5 during a predesignated reclamation period, which typically occurs during low use hours. After designating a tape to be reclaimed, or if the active data on tape i does not fall below the reclamation threshold 100, control proceeds to block 208 to consider the next tape in the selected pool for reclamation. As discussed, because the reclamation threshold 100 can be set at different values for different storage pools 8a, 8b, the tapes in different storage pools may be subject to reclamation at different rates depending on their reclamation threshold.
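The selection loop of FIG. 4 might be rendered in Python roughly as follows; the tape representation, the pool thresholds, and the reclamation queue are illustrative assumptions rather than the patented logic.

    # Sketch of FIG. 4: for each tape in a pool, queue the tape for reclamation
    # when its percentage of active data is at or below the pool's threshold.

    pools = {"A": {"reclamation_threshold": 0.10},
             "B": {"reclamation_threshold": 0.90}}

    tapes = [  # each tape knows its current pool and its fraction of active data
        {"id": "T1", "current_pool": "A", "active_fraction": 0.07},
        {"id": "T2", "current_pool": "A", "active_fraction": 0.55},
        {"id": "T3", "current_pool": "B", "active_fraction": 0.42},
    ]

    def select_for_reclamation(tapes, pools):
        """Return the tapes whose active data has fallen to the pool threshold
        (blocks 202-208); the actual reclamation runs later, during a scheduled
        reclamation period."""
        queue = []
        for tape in tapes:
            threshold = pools[tape["current_pool"]]["reclamation_threshold"]
            if tape["active_fraction"] <= threshold:   # block 204
                queue.append(tape)                     # block 206: call reclamation
        return queue

    print([t["id"] for t in select_for_reclamation(tapes, pools)])  # ['T1', 'T3']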
FIG. 5 illustrates logic implemented in the tape controller 10 to perform reclamation on tapes subject to reclamation according to the logic of FIG. 4. Control begins at block 250 with the initiation of the reclamation process, which may occur during regularly scheduled reclamation periods. Oftentimes reclamation is scheduled to occur during time periods at which the tape server 2, 32 is experiencing low usage so as not to interfere with normal tape drive operations. Alternatively, reclamation may occur soon after the tape controller 10 decides in FIG. 4 to subject a tape cartridge 6a, 6b...6g to reclamation. A loop is performed at blocks 252 through 266 for each tape (physical volume) i subject to reclamation. The storage pool for tape i is determined (at block 254) from the current pool 76 field of the physical volume record 70 (FIG. 3b) for tape i. If (at block 256) the pool record 90 for the determined storage pool including tape i has a target reclamation pool 102 that is different from the storage pool including tape i, then the tape controller 10 accesses (at block 258) a target tape from the storage pool indicated in the target reclamation pool field 102. Otherwise, if the target reclamation pool field 102 does not indicate to reclaim to a different storage pool, then the tape controller 10 accesses (at block 260) a target tape from the current storage pool of tape i. After accessing a free target tape, the tape controller 10 moves (at block 262), or sequentially writes, the data from tape i to the accessed target tape and releases (at block 264) tape i as a free tape. Control then proceeds (at block 266) back to block 252 to perform reclamation with respect to the next tape scheduled for reclamation.
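The reclamation loop of FIG. 5 could be sketched as follows, mirroring the target pool lookup at blocks 254 through 264; again, the data structures and helper names are assumptions, not the patented implementation.

    # Sketch of FIG. 5: move a queued tape's active data to a free tape in its
    # pool's target reclamation pool, or back into the same pool when no
    # target reclamation pool is set.

    pools = {
        "A": {"target_reclamation_pool": "B", "free_tapes": ["A-free-1"]},
        "B": {"target_reclamation_pool": None, "free_tapes": ["B-free-1"]},
    }

    def reclaim_tape(tape, pools):
        source_pool = tape["current_pool"]             # block 254: Current Pool 76
        target_pool = pools[source_pool]["target_reclamation_pool"] or source_pool
        target_tape = pools[target_pool]["free_tapes"].pop(0)   # blocks 258/260
        # Block 262: sequentially write only the active data to the target tape.
        moved = {"id": target_tape, "current_pool": target_pool,
                 "data": [v for v in tape["data"] if v["active"]]}
        tape["data"] = []                              # block 264: release tape i as free
        return moved

    tape = {"id": "T1", "current_pool": "A",
            "data": [{"name": "v1", "active": True}, {"name": "v2", "active": False}]}
    print(reclaim_tape(tape, pools))                   # active data lands on a pool B tape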
As discussed, the storage administrator may have data initially stored in a storage pool having a relatively lower reclamation threshold to flush out frequently accessed data, i.e., data that expires at a faster rate, and then have the data reclaimed from that initial storage pool to a succeeding storage pool having a higher reclamation threshold. Although the succeeding storage pool has a higher reclamation threshold, its data is not necessarily reclaimed more frequently, because data in the succeeding storage pool expires at a slower rate and therefore takes longer to reach the reclamation threshold. Further, as discussed, each succeeding storage pool may itself designate a further succeeding reclamation storage pool in field 102, causing data to be reclaimed through a series of different storage pools, where each succeeding pool may have a higher reclamation threshold than the previous pool.
In further implementations, the initial storage pool having the lower reclamation threshold and the next succeeding pool at the higher reclamation threshold may use tapes of different capacities. In one implementation, the initial storage pool may have "J" tapes and the succeeding storage pool "K" tapes, where "K" media tapes have a greater storage capacity than "J" tapes. In this way, the initial reclamation at the lower threshold would occur more frequently because the data is placed on smaller capacity tapes, which also provide for more efficient recall. Storing the less frequently accessed data, e.g., archival data, in the succeeding storage pool on a larger capacity tape packs data at a higher utilization on the larger capacity tape to improve volumetric efficiency.
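To make the utilization trade-off concrete under a deliberately simplified model (every number and the drift assumption below are illustrative, not figures from the described media types): if a cartridge's active data declines from roughly full down to the pool's reclamation threshold before it is consolidated, the pool's steady-state utilization sits near the midpoint of that range, so a higher threshold keeps tapes fuller.

```python
# Simplified, illustrative model only: assumes active data declines evenly
# from ~100% to the reclamation threshold before a cartridge is reclaimed.
def mean_utilization_pct(reclamation_threshold_pct):
    return (100 + reclamation_threshold_pct) / 2

print(mean_utilization_pct(10))   # 55.0 -> initial pool, smaller "J" tapes
print(mean_utilization_pct(60))   # 80.0 -> succeeding pool, larger "K" tapes
```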
The described implementations provide techniques for increasing storage capacity utilization by allowing the use of higher reclamation thresholds in a manner that avoids triggering thresholds at a rate that would harm system performance.
The described techniques for reclaiming physical volumes in storage pools may be implemented as a method, apparatus or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof. The term "article of manufacture" as used herein refers to code or logic implemented in hardware logic (e.g., an integrated circuit chip, Programmable Gate Array (PGA), Application Specific Integrated Circuit (ASIC), etc.) or a computer readable medium, such as a magnetic storage medium (e.g., hard disk drives, floppy disks, tape, etc.), optical storage (CD-ROMs, optical disks, etc.), or volatile and non-volatile memory devices (e.g., EEPROMs, ROMs, PROMs, RAMs, DRAMs, SRAMs, firmware, programmable logic, etc.). Code in the computer readable medium is accessed and executed by a processor. The code in which preferred embodiments are implemented may further be accessible through a transmission media or from a file server over a network. In such cases, the article of manufacture in which the code is implemented may comprise a transmission media, such as a network transmission line, wireless transmission media, signals propagating through space, radio waves, infrared signals, etc. Thus, the "article of manufacture" may comprise the medium in which the code is embodied. Additionally, the "article of manufacture" may comprise a combination of hardware and software components in which the code is embodied, processed, and executed. Of course, those skilled in the art will recognize that many modifications may be made to this configuration without departing from the scope of the present invention, and that the article of manufacture may comprise any information bearing medium known in the art.
In described implementations, the physical volumes subject to the storage pool management operations described herein, such as reclamation, were stored in tape cartridges. However, in alternative implementations, the physical volumes subject to the storage pool management operations may be stored in any non-volatile storage medium known in the art, including optical disks, hard disk drives, non-volatile Random Access Memory (RAM) devices, etc. For such alternative storage media, the server would include the necessary drives or interfaces through which data in the alternative storage units is accessed.
In the described implementations, each succeeding storage pool indicated in the target reclamation pool field 102 has a higher reclamation threshold than the preceding storage pool from which the data came. However, in alternative implementations, a succeeding target storage pool to which data is reclaimed may have a lower or equal reclamation threshold. Further, succeeding target storage pools may have reclamation thresholds that are higher or lower than the threshold in any of the preceding target storage pools.
In the described implementations, the reclamation threshold is satisfied if the amount of active data in the tape cartridge is less than the threshold amount. In alternative implementations, alternative thresholds and threshold measurements may be used.
The data structures shown in FIGs. 3a, 3b, and 3c show the records as having specific types of information. In alternative implementations, the logical volume, physical volume, and storage pool records may have fewer, more or different fields than shown in the figures.
In further implementations, the sequence of tape selection in FIG. 5 may be based on the amount of active data on the tape instead of an index.
In the described implementations, certain variables, such as n and i, are used to denote integer values indicating a certain number of elements. These variables may denote any number when used at different instances with the same or different elements. The illustrated logic of FIGs. 4 and 5 shows certain events occurring in a certain order. In alternative implementations, certain operations may be performed in a different order, modified, or removed. Moreover, steps may be added to the above-described logic and still conform to the described implementations. Further, operations described herein may occur sequentially, or certain operations may be processed in parallel. Yet further, operations may be performed by a single processing unit or by distributed processing units.
FIG. 6 illustrates one implementation of a computer architecture 600 that may be used in the hosts 4a, 4b...4n and tape server 2 (FIG. 1). The architecture 600 may include a processor 602 (e.g., a microprocessor), a memory 604 (e.g., a volatile memory device), and storage 606 (e.g., non-volatile storage, such as magnetic disk drives, optical disk drives, a tape drive, etc.). The storage 606 may comprise an internal storage device or an attached or network accessible storage. Programs in the storage 606 are loaded into the memory 604 and executed by the processor 602 in a manner known in the art. The architecture further includes a network card 608 to enable communication with a network. An input device 610, such as a keyboard, mouse, pen-stylus, microphone, touch sensitive display screen, or any other activation or input mechanism known in the art, is used to provide user input to the processor 602. An output device 612, such as a display monitor, printer, or storage device, is capable of rendering information transmitted from the processor 602 or other components.

Claims

1. A method for managing data in storage units, comprising: maintaining storage pool information indicating an assignment of a plurality of storage units to a plurality of storage pools, wherein each pool is assigned zero or more storage units, wherein data associated with one storage pool is stored in a storage unit assigned to the storage pool, wherein the storage pool information for each pool indicates a threshold and target storage pool, and wherein the target storage pool is capable of being different from the storage pool; selecting one storage unit associated with a source storage pool; determining the threshold from the storage pool information for the source storage pool; determining whether the selected storage unit satisfies the determined threshold; if the selected storage unit satisfies the determined threshold, then selecting a target storage unit in the target storage pool if the storage pool information for the source storage pool indicates a target storage pool different from the source storage pool; and copying data from the selected storage unit to the selected target storage unit.
2. The method of claim 1, wherein at least two of the storage pools have different thresholds.
3. The method of claim 1, wherein the selected storage unit satisfies the determined threshold if an amount of active data in the selected storage unit is less than the threshold.
4. The method of claim 1, further comprising: selecting the target storage unit from the source storage pool if a different target storage pool is not indicated in the storage pool information for the source storage pool.
5. The method of claim 1, wherein the threshold for the source storage pool is lower than the threshold for the target storage pool.
6. The method of claim 1, wherein the storage units in the source storage pool have a lower storage capacity than the storage units in the target storage pool.
7. The method of claim 1, wherein the source storage pool comprises a first storage pool, wherein the target storage pool comprises a second storage pool, wherein a third storage pool is identified as a target storage pool in the storage pool information for the second storage pool, and whereby data from one selected storage unit in the second storage pool is moved to the third storage pool when the threshold for the second storage pool is reached.
8. The method of claim 1, wherein the source storage pool stores data transferred from a storage device.
9. The method of claim 8, wherein the storage device has a higher data access rate than the storage units.
10. The method of claim 8, wherein the storage units comprise tape cartridges, and wherein the storage device operates as a tape buffer to which data is written using tape Input/Output commands.
11. The method of claim 8, wherein the storage units comprise tape cartridges in a virtual tape server and wherein the storage device comprises a virtual tape buffer in said virtual tape server.
12. The method of claim 1, wherein the storage units comprise sequential access tape cartridges.
13. A system for managing data, comprising: storage units; means for maintaining storage pool information indicating an assignment of a plurality of storage units to a plurality of storage pools, wherein each pool is assigned zero or more storage units, wherein data associated with one storage pool is stored in a storage unit assigned to the storage pool, wherein the storage pool information for each pool indicates a threshold and target storage pool, and wherein the target storage pool is capable of being different from the storage pool; means for selecting one storage unit associated with a source storage pool; means for determining the threshold from the storage pool information for the source storage pool; means for determining whether the selected storage unit satisfies the determined threshold; means for selecting, if the selected storage unit satisfies the determined threshold, a target storage unit in the target storage pool if the storage pool information for the source storage pool indicates a target storage pool different from the source storage pool; and means for copying data from the selected storage unit to the selected target storage unit.
14. The system of claim 13, wherein at least two of the storage pools have different thresholds.
15. The system of claim 13, wherein the selected storage unit satisfies the determined threshold if an amount of active data in the selected storage unit is less than the threshold.
16. The system of claim 13, further comprising: means for selecting the target storage unit from the source storage pool if a different target storage pool is not indicated in the storage pool information for the source storage pool.
17. The system of claim 13, wherein the threshold for the source storage pool is lower than the threshold for the target storage pool.
18. The system of claim 13, wherein the storage units in the source storage pool have a lower storage capacity than the storage units in the target storage pool.
19. The system of claim 13, wherein the source storage pool comprises a first storage pool, wherein the target storage pool comprises a second storage pool, wherein a third storage pool is identified as a target storage pool in the storage pool information for the second storage pool, and whereby data from one selected storage unit in the second storage pool is moved to the third storage pool when the threshold for the second storage pool is reached.
20. The system of claim 13, wherein the source storage pool stores data transferred from a storage device.
21. The system of claim 20, wherein the storage device has a higher data access rate than the storage units.
22. The system of claim 21, wherein the storage units comprise tape cartridges, and wherein the storage device operates as a tape buffer to which data is written using tape Input/Output commands.
23. The system of claim 20, wherein the storage units comprise tape cartridges in a virtual tape server and wherein the storage device comprises a virtual tape buffer in said virtual tape server.
24. The system of claim 13, wherein the storage units comprise sequential access tape cartridges.
25. A computer program product which, when executed on a computer system, instructs the computer system to carry out the method of any preceding method claim.
PCT/GB2003/003551 2002-08-29 2003-08-13 Moving data among storage units WO2004021190A2 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
EP03790999A EP1540455B1 (en) 2002-08-29 2003-08-13 Moving data among storage units
JP2004532263A JP4502807B2 (en) 2002-08-29 2003-08-13 Data movement between storage units
AU2003251066A AU2003251066A1 (en) 2002-08-29 2003-08-13 Moving data among storage units
CA 2497326 CA2497326C (en) 2002-08-29 2003-08-13 Moving data among storage units
DE2003613783 DE60313783T2 (en) 2002-08-29 2003-08-13 MOVING DATA BETWEEN MEMORY UNITS

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US10/231,815 2002-08-29
US10/231,815 US7103731B2 (en) 2002-08-29 2002-08-29 Method, system, and program for moving data among storage units

Publications (2)

Publication Number Publication Date
WO2004021190A2 true WO2004021190A2 (en) 2004-03-11
WO2004021190A3 WO2004021190A3 (en) 2004-09-23

Family

ID=31976824

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2003/003551 WO2004021190A2 (en) 2002-08-29 2003-08-13 Moving data among storage units

Country Status (10)

Country Link
US (2) US7103731B2 (en)
EP (1) EP1540455B1 (en)
JP (1) JP4502807B2 (en)
KR (1) KR100633982B1 (en)
CN (1) CN1295591C (en)
AT (1) ATE362132T1 (en)
AU (1) AU2003251066A1 (en)
CA (1) CA2497326C (en)
DE (1) DE60313783T2 (en)
WO (1) WO2004021190A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7599905B2 (en) * 2001-07-19 2009-10-06 Emc Corporation Method and system for allocating multiple attribute storage properties to selected data storage resources

Families Citing this family (140)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7581077B2 (en) 1997-10-30 2009-08-25 Commvault Systems, Inc. Method and system for transferring data in a storage operation
US6418478B1 (en) 1997-10-30 2002-07-09 Commvault Systems, Inc. Pipelined high speed data transfer mechanism
US7035880B1 (en) 1999-07-14 2006-04-25 Commvault Systems, Inc. Modular backup and retrieval system used in conjunction with a storage area network
US7389311B1 (en) 1999-07-15 2008-06-17 Commvault Systems, Inc. Modular backup and retrieval system
US7395282B1 (en) 1999-07-15 2008-07-01 Commvault Systems, Inc. Hierarchical backup and retrieval system
US7155481B2 (en) 2000-01-31 2006-12-26 Commvault Systems, Inc. Email attachment management in a computer system
US7003641B2 (en) 2000-01-31 2006-02-21 Commvault Systems, Inc. Logical view with granular access to exchange data managed by a modular data and storage management system
US6658436B2 (en) 2000-01-31 2003-12-02 Commvault Systems, Inc. Logical view and access to data managed by a modular data and storage management system
US8346733B2 (en) 2006-12-22 2013-01-01 Commvault Systems, Inc. Systems and methods of media management, such as management of media to and from a media storage library
US7603518B2 (en) 2005-12-19 2009-10-13 Commvault Systems, Inc. System and method for improved media identification in a storage device
JP3966459B2 (en) 2002-05-23 2007-08-29 株式会社日立製作所 Storage device management method, system, and program
US7103731B2 (en) * 2002-08-29 2006-09-05 International Business Machines Corporation Method, system, and program for moving data among storage units
US6985916B2 (en) * 2002-08-29 2006-01-10 International Business Machines Corporation Method, system, and article of manufacture for returning physical volumes
US6952757B2 (en) * 2002-08-29 2005-10-04 International Business Machines Corporation Method, system, and program for managing storage units in storage pools
US6954831B2 (en) * 2002-08-29 2005-10-11 International Business Machines Corporation Method, system, and article of manufacture for borrowing physical volumes
CA2498174C (en) 2002-09-09 2010-04-13 Commvault Systems, Inc. Dynamic storage device pooling in a computer system
GB2409553B (en) 2002-09-16 2007-04-04 Commvault Systems Inc System and method for optimizing storage operations
US20040153481A1 (en) * 2003-01-21 2004-08-05 Srikrishna Talluri Method and system for effective utilization of data storage capacity
JP4322031B2 (en) * 2003-03-27 2009-08-26 株式会社日立製作所 Storage device
US7174433B2 (en) * 2003-04-03 2007-02-06 Commvault Systems, Inc. System and method for dynamically sharing media in a computer network
WO2004090789A2 (en) 2003-04-03 2004-10-21 Commvault Systems, Inc. System and method for extended media retention
US7454569B2 (en) 2003-06-25 2008-11-18 Commvault Systems, Inc. Hierarchical system and method for performing storage operations in a computer network
US7529782B2 (en) 2003-11-13 2009-05-05 Commvault Systems, Inc. System and method for performing a snapshot and for restoring data
US7546324B2 (en) 2003-11-13 2009-06-09 Commvault Systems, Inc. Systems and methods for performing storage operations using network attached storage
WO2005065084A2 (en) 2003-11-13 2005-07-21 Commvault Systems, Inc. System and method for providing encryption in pipelined storage operations in a storage network
US7266655B1 (en) 2004-04-29 2007-09-04 Veritas Operating Corporation Synthesized backup set catalog
US8879197B2 (en) 2004-09-27 2014-11-04 Spectra Logic, Corporation Self-describing a predefined pool of tape cartridges
US20060080500A1 (en) * 2004-10-07 2006-04-13 Unisys Corporation Method and system for managing data transfer between different types of tape media
US7788299B2 (en) * 2004-11-03 2010-08-31 Spectra Logic Corporation File formatting on a non-tape media operable with a streaming protocol
US7500053B1 (en) 2004-11-05 2009-03-03 Commvvault Systems, Inc. Method and system for grouping storage system components
US7536291B1 (en) 2004-11-08 2009-05-19 Commvault Systems, Inc. System and method to support simulated storage operations
US7617262B2 (en) 2005-12-19 2009-11-10 Commvault Systems, Inc. Systems and methods for monitoring application data in a data replication system
US7962709B2 (en) 2005-12-19 2011-06-14 Commvault Systems, Inc. Network redirector systems and methods for performing data replication
US7636743B2 (en) 2005-12-19 2009-12-22 Commvault Systems, Inc. Pathname translation in a data replication system
EP1974296B8 (en) 2005-12-19 2016-09-21 Commvault Systems, Inc. Systems and methods for performing data replication
US8655850B2 (en) 2005-12-19 2014-02-18 Commvault Systems, Inc. Systems and methods for resynchronizing information
US7651593B2 (en) 2005-12-19 2010-01-26 Commvault Systems, Inc. Systems and methods for performing data replication
US7606844B2 (en) 2005-12-19 2009-10-20 Commvault Systems, Inc. System and method for performing replication copy storage operations
US7752206B2 (en) * 2006-01-02 2010-07-06 International Business Machines Corporation Method and data processing system for managing a mass storage system
US7895295B1 (en) 2006-01-19 2011-02-22 Sprint Communications Company L.P. Scoring data flow characteristics to assign data flows to storage systems in a data storage infrastructure for a communication network
US7752437B1 (en) 2006-01-19 2010-07-06 Sprint Communications Company L.P. Classification of data in data flows in a data storage infrastructure for a communication network
US7788302B1 (en) 2006-01-19 2010-08-31 Sprint Communications Company L.P. Interactive display of a data storage infrastructure for a communication network
US7797395B1 (en) 2006-01-19 2010-09-14 Sprint Communications Company L.P. Assignment of data flows to storage systems in a data storage infrastructure for a communication network
US7801973B1 (en) 2006-01-19 2010-09-21 Sprint Communications Company L.P. Classification of information in data flows in a data storage infrastructure for a communication network
US8510429B1 (en) 2006-01-19 2013-08-13 Sprint Communications Company L.P. Inventory modeling in a data storage infrastructure for a communication network
US20070208780A1 (en) * 2006-03-02 2007-09-06 Anglin Matthew J Apparatus, system, and method for maintaining metadata for offline repositories in online databases for efficient access
US8069191B2 (en) * 2006-07-13 2011-11-29 International Business Machines Corporation Method, an apparatus and a system for managing a snapshot storage pool
US9037828B2 (en) 2006-07-13 2015-05-19 International Business Machines Corporation Transferring storage resources between snapshot storage pools and volume storage pools in a data storage system
US8726242B2 (en) 2006-07-27 2014-05-13 Commvault Systems, Inc. Systems and methods for continuous data replication
US7539783B2 (en) 2006-09-22 2009-05-26 Commvault Systems, Inc. Systems and methods of media management, such as management of media to and from a media storage library, including removable media
CN101715575A (en) 2006-12-06 2010-05-26 弗森多系统公司(dba弗森-艾奥) Adopt device, the system and method for data pipe management data
US7831566B2 (en) 2006-12-22 2010-11-09 Commvault Systems, Inc. Systems and methods of hierarchical storage management, such as global management of storage operations
US8312323B2 (en) 2006-12-22 2012-11-13 Commvault Systems, Inc. Systems and methods for remote monitoring in a computer network and reporting a failed migration operation without accessing the data being moved
US8719809B2 (en) 2006-12-22 2014-05-06 Commvault Systems, Inc. Point in time rollback and un-installation of software
US8290808B2 (en) 2007-03-09 2012-10-16 Commvault Systems, Inc. System and method for automating customer-validated statement of work for a data storage environment
US8738588B2 (en) * 2007-03-26 2014-05-27 International Business Machines Corporation Sequential media reclamation and replication
US8001340B2 (en) * 2007-04-19 2011-08-16 International Business Machines Corporation Method for determining allocation of tape drive resources for a secure data erase process
US8006050B2 (en) 2007-04-19 2011-08-23 International Business Machines Corporation System for determining allocation of tape drive resources for a secure data erase process
US9141303B2 (en) * 2007-04-19 2015-09-22 International Business Machines Corporation Method for selectively performing a secure data erase to ensure timely erasure
US9098717B2 (en) 2007-04-19 2015-08-04 International Business Machines Corporation System for selectively performing a secure data erase to ensure timely erasure
US8706976B2 (en) 2007-08-30 2014-04-22 Commvault Systems, Inc. Parallel access virtual tape library and drives
JP4918940B2 (en) * 2007-09-28 2012-04-18 富士通株式会社 Primary center virtual tape device, secondary center virtual tape device, virtual library system, and virtual tape control method
US7836226B2 (en) 2007-12-06 2010-11-16 Fusion-Io, Inc. Apparatus, system, and method for coordinating storage requests in a multi-processor/multi-thread environment
US9519540B2 (en) 2007-12-06 2016-12-13 Sandisk Technologies Llc Apparatus, system, and method for destaging cached data
US8291245B2 (en) * 2008-04-17 2012-10-16 International Business Machines Corporation Method, apparatus and system for reducing power consumption based on storage device data migration
US20100070466A1 (en) 2008-09-15 2010-03-18 Anand Prahlad Data transfer techniques within data storage devices, such as network attached storage performing data migration
US9495382B2 (en) 2008-12-10 2016-11-15 Commvault Systems, Inc. Systems and methods for performing discrete data replication
US8204859B2 (en) 2008-12-10 2012-06-19 Commvault Systems, Inc. Systems and methods for managing replicated database data
US9104629B2 (en) * 2009-07-09 2015-08-11 International Business Machines Corporation Autonomic reclamation processing on sequential storage media
US8316055B2 (en) * 2009-09-10 2012-11-20 General Electric Company System and method to manage storage of data to multiple removable data storage mediums
WO2011036020A1 (en) * 2009-09-25 2011-03-31 International Business Machines Corporation Data storage
CN102667703B (en) 2009-11-27 2015-09-16 国际商业机器公司 For the system and method for the optimization recycling in Virtual Tape Library System
WO2011092738A1 (en) * 2010-01-28 2011-08-04 株式会社日立製作所 Management system and method for storage system that has pools constructed from real domain groups having different performances
US8504517B2 (en) 2010-03-29 2013-08-06 Commvault Systems, Inc. Systems and methods for selective data replication
US8504515B2 (en) 2010-03-30 2013-08-06 Commvault Systems, Inc. Stubbing systems and methods in a data replication environment
US8352422B2 (en) 2010-03-30 2013-01-08 Commvault Systems, Inc. Data restore systems and methods in a replication environment
US8725698B2 (en) 2010-03-30 2014-05-13 Commvault Systems, Inc. Stub file prioritization in a data replication system
US8255738B2 (en) * 2010-05-18 2012-08-28 International Business Machines Corporation Recovery from medium error on tape on which data and metadata are to be stored by using medium to medium data copy
WO2011150391A1 (en) 2010-05-28 2011-12-01 Commvault Systems, Inc. Systems and methods for performing data replication
US8341346B2 (en) * 2010-06-25 2012-12-25 International Business Machines Corporation Offloading volume space reclamation operations to virtual tape systems
US9244779B2 (en) 2010-09-30 2016-01-26 Commvault Systems, Inc. Data recovery operations, such as recovery from modified network data management protocol data
US9311147B2 (en) * 2010-10-20 2016-04-12 Quantum Corporation Method for media allocation in a partitioned removable media storage library
US9021198B1 (en) 2011-01-20 2015-04-28 Commvault Systems, Inc. System and method for sharing SAN storage
US9141527B2 (en) * 2011-02-25 2015-09-22 Intelligent Intellectual Property Holdings 2 Llc Managing cache pools
US8538926B2 (en) * 2011-03-08 2013-09-17 Rackspace Us, Inc. Massively scalable object storage system for storing object replicas
US20130013566A1 (en) * 2011-07-08 2013-01-10 International Business Machines Corporation Storage group synchronization in data replication environments
US9251086B2 (en) 2012-01-24 2016-02-02 SanDisk Technologies, Inc. Apparatus, system, and method for managing a cache
US9471578B2 (en) 2012-03-07 2016-10-18 Commvault Systems, Inc. Data storage system utilizing proxy device for storage operations
US9298715B2 (en) 2012-03-07 2016-03-29 Commvault Systems, Inc. Data storage system utilizing proxy device for storage operations
WO2013148096A1 (en) 2012-03-30 2013-10-03 Commvault Systems, Inc. Informaton management of mobile device data
US9342537B2 (en) 2012-04-23 2016-05-17 Commvault Systems, Inc. Integrated snapshot interface for a data storage system
US8959297B2 (en) 2012-06-04 2015-02-17 Spectra Logic Corporation Retrieving a user data set from multiple memories
US9037672B2 (en) 2012-06-15 2015-05-19 Hewlett-Packard Development Company, L.P. Non-volatile memory physical networks
US10078474B1 (en) * 2012-06-29 2018-09-18 Emc Corporation Method of maintaining list of scratch volumes in shared filesystems across multiple nodes
US10379988B2 (en) 2012-12-21 2019-08-13 Commvault Systems, Inc. Systems and methods for performance monitoring
US9069799B2 (en) 2012-12-27 2015-06-30 Commvault Systems, Inc. Restoration of centralized data storage manager, such as data storage manager in a hierarchical data storage system
US9886346B2 (en) 2013-01-11 2018-02-06 Commvault Systems, Inc. Single snapshot for multiple agents
US9336226B2 (en) 2013-01-11 2016-05-10 Commvault Systems, Inc. Criteria-based data synchronization management
US9052828B2 (en) * 2013-05-31 2015-06-09 International Business Machines Corporation Optimal volume placement across remote replication relationships
WO2015008375A1 (en) * 2013-07-19 2015-01-22 株式会社日立製作所 Storage device, and storage control method
CN103500072A (en) * 2013-09-27 2014-01-08 华为技术有限公司 Data migration method and data migration device
US9639426B2 (en) 2014-01-24 2017-05-02 Commvault Systems, Inc. Single snapshot for multiple applications
US9495251B2 (en) 2014-01-24 2016-11-15 Commvault Systems, Inc. Snapshot readiness checking and reporting
US9632874B2 (en) 2014-01-24 2017-04-25 Commvault Systems, Inc. Database application backup in single snapshot for multiple applications
US9753812B2 (en) 2014-01-24 2017-09-05 Commvault Systems, Inc. Generating mapping information for single snapshot for multiple applications
JP6464606B2 (en) * 2014-08-18 2019-02-06 富士通株式会社 Storage device, storage device control program, and storage device control method
US9774672B2 (en) 2014-09-03 2017-09-26 Commvault Systems, Inc. Consolidated processing of storage-array commands by a snapshot-control media agent
US10042716B2 (en) 2014-09-03 2018-08-07 Commvault Systems, Inc. Consolidated processing of storage-array commands using a forwarder media agent in conjunction with a snapshot-control media agent
US9648105B2 (en) 2014-11-14 2017-05-09 Commvault Systems, Inc. Unified snapshot storage management, using an enhanced storage manager and enhanced media agents
US9448731B2 (en) 2014-11-14 2016-09-20 Commvault Systems, Inc. Unified snapshot storage management
US9898213B2 (en) 2015-01-23 2018-02-20 Commvault Systems, Inc. Scalable auxiliary copy processing using media agent resources
US9904481B2 (en) 2015-01-23 2018-02-27 Commvault Systems, Inc. Scalable auxiliary copy processing in a storage management system using media agent resources
US9928144B2 (en) 2015-03-30 2018-03-27 Commvault Systems, Inc. Storage management of data using an open-archive architecture, including streamlined access to primary data originally stored on network-attached storage and archived to secondary storage
US10042556B2 (en) 2015-07-30 2018-08-07 International Business Machines Corporation Reclamation of storage medium
US10101913B2 (en) 2015-09-02 2018-10-16 Commvault Systems, Inc. Migrating data to disk without interrupting running backup operations
JP6531574B2 (en) * 2015-09-03 2019-06-19 富士通株式会社 Storage device, storage device control program and storage device control method
US9996459B2 (en) * 2015-09-21 2018-06-12 International Business Machines Corporation Reclaiming of sequential storage medium
US10503753B2 (en) 2016-03-10 2019-12-10 Commvault Systems, Inc. Snapshot replication operations based on incremental block change tracking
US11010261B2 (en) 2017-03-31 2021-05-18 Commvault Systems, Inc. Dynamically allocating streams during restoration of data
US10742735B2 (en) 2017-12-12 2020-08-11 Commvault Systems, Inc. Enhanced network attached storage (NAS) services interfacing to cloud storage
US10740022B2 (en) 2018-02-14 2020-08-11 Commvault Systems, Inc. Block-level live browsing and private writable backup copies using an ISCSI server
US10521132B1 (en) * 2018-06-17 2019-12-31 International Business Machines Corporation Dynamic scratch pool management on a virtual tape system
US10452305B1 (en) 2018-06-20 2019-10-22 International Business Machines Corporation Tape drive data reclamation
US10732843B2 (en) 2018-06-20 2020-08-04 International Business Machines Corporation Tape drive data reclamation
US10884646B2 (en) 2018-11-06 2021-01-05 International Business Machines Corporation Data management system for storage tiers
US11531705B2 (en) 2018-11-16 2022-12-20 International Business Machines Corporation Self-evolving knowledge graph
US11042318B2 (en) 2019-07-29 2021-06-22 Commvault Systems, Inc. Block-level data replication
US11221948B2 (en) 2019-10-25 2022-01-11 EMC IP Holding Company LLC Coordinated reclaiming of data storage space
US11137925B2 (en) * 2019-11-06 2021-10-05 EMC IP Holding Company, LLC System and method for dynamically determining and non-disruptively re-balancing memory reclamation memory pools
US11681525B2 (en) * 2019-11-25 2023-06-20 EMC IP Holding Company LLC Moving files between storage devices based on analysis of file operations
US11231866B1 (en) 2020-07-22 2022-01-25 International Business Machines Corporation Selecting a tape library for recall in hierarchical storage
JPWO2022038874A1 (en) * 2020-08-21 2022-02-24
WO2022049832A1 (en) * 2020-09-04 2022-03-10 富士フイルム株式会社 Information processing device, information processing method, and information processing program
US11334269B2 (en) * 2020-10-06 2022-05-17 International Business Machines Corporation Content driven storage and retrieval of files
JP2023026230A (en) * 2021-08-13 2023-02-24 富士フイルム株式会社 Device, method, and program for processing information
US11593223B1 (en) 2021-09-02 2023-02-28 Commvault Systems, Inc. Using resource pool administrative entities in a data storage management system to provide shared infrastructure to tenants
US11954353B2 (en) 2021-09-24 2024-04-09 International Business Machines Corporation Tape-to-tape copying between nodes of magnetic tape file systems
US11809731B2 (en) 2021-09-28 2023-11-07 International Business Machines Corporation Appending data to a tape cartridge during recall operations
US11809285B2 (en) 2022-02-09 2023-11-07 Commvault Systems, Inc. Protecting a management database of a data storage management system to meet a recovery point objective (RPO)
US12056018B2 (en) 2022-06-17 2024-08-06 Commvault Systems, Inc. Systems and methods for enforcing a recovery point objective (RPO) for a production database without generating secondary copies of the production database

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5584008A (en) * 1991-09-12 1996-12-10 Hitachi, Ltd. External storage unit comprising active and inactive storage wherein data is stored in an active storage if in use and archived to an inactive storage when not accessed in predetermined time by the host processor
US5875481A (en) * 1997-01-30 1999-02-23 International Business Machines Corporation Dynamic reconfiguration of data storage devices to balance recycle throughput

Family Cites Families (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4530055A (en) 1982-03-03 1985-07-16 Sperry Corporation Hierarchical memory system with variable regulation and priority of writeback from cache memory to bulk memory
US5253351A (en) 1988-08-11 1993-10-12 Hitachi, Ltd. Memory controller with a cache memory and control method of cache memory including steps of determining memory access threshold values
US5043885A (en) 1989-08-08 1991-08-27 International Business Machines Corporation Data cache using dynamic frequency based replacement and boundary criteria
EP0463874A2 (en) 1990-06-29 1992-01-02 Digital Equipment Corporation Cache arrangement for file system in digital data processing system
US5164909A (en) 1990-11-21 1992-11-17 Storage Technology Corporation Virtual robot for a multimedia automated cartridge library system
GB9111524D0 (en) 1991-05-29 1991-07-17 Hewlett Packard Co Data storage method and apparatus
CA2121852A1 (en) 1993-04-29 1994-10-30 Larry T. Jost Disk meshing and flexible storage mapping with enhanced flexible caching
US5546557A (en) 1993-06-14 1996-08-13 International Business Machines Corporation System for storing and managing plural logical volumes in each of several physical volumes including automatically creating logical volumes in peripheral data storage subsystem
US5636355A (en) 1993-06-30 1997-06-03 Digital Equipment Corporation Disk cache management techniques using non-volatile storage
JP2682811B2 (en) * 1994-03-22 1997-11-26 インターナショナル・ビジネス・マシーンズ・コーポレイション Data storage management system and method
US5829023A (en) 1995-07-17 1998-10-27 Cirrus Logic, Inc. Method and apparatus for encoding history of file access to support automatic file caching on portable and desktop computers
US5680640A (en) 1995-09-01 1997-10-21 Emc Corporation System for migrating data by selecting a first or second transfer means based on the status of a data element map initialized to a predetermined state
US5696929A (en) * 1995-10-03 1997-12-09 Intel Corporation Flash EEPROM main memory in a computer system
US5799324A (en) * 1996-05-10 1998-08-25 International Business Machines Corporation System and method for management of persistent data in a log-structured disk array
US5673382A (en) * 1996-05-30 1997-09-30 International Business Machines Corporation Automated management of off-site storage volumes for disaster recovery
US5933840A (en) * 1997-05-19 1999-08-03 International Business Machines Corporation Garbage collection in log-structured information storage systems using age threshold selection of segments
US6067599A (en) * 1997-05-29 2000-05-23 International Business Machines Corporation Time delayed auto-premigeration of files in a virtual data storage system
US5926834A (en) 1997-05-29 1999-07-20 International Business Machines Corporation Virtual data storage system with an overrun-resistant cache using an adaptive throttle based upon the amount of cache free space
US6105037A (en) 1997-12-12 2000-08-15 International Business Machines Corporation Apparatus for performing automated reconcile control in a virtual tape system
US6304880B1 (en) 1997-12-12 2001-10-16 International Business Machines Corporation Automated reclamation scheduling override in a virtual tape server
US6038490A (en) 1998-01-29 2000-03-14 International Business Machines Corporation Automated data storage library dual picker interference avoidance
US5956301A (en) 1998-03-25 1999-09-21 International Business Machines Corporation Automated data storage library media handling with a plurality of pickers having multiple grippers
US6163773A (en) 1998-05-05 2000-12-19 International Business Machines Corporation Data storage system with trained predictive cache management engine
US6151666A (en) * 1998-05-27 2000-11-21 Storage Technology Corporation Method for reclaiming fragmented space on a physical data storage cartridge
EP0992698B1 (en) * 1998-09-11 2006-08-16 JTEKT Corporation Bearing device
US6725241B1 (en) * 1999-03-31 2004-04-20 International Business Machines Corporation Method and apparatus for freeing memory in a data processing system
US6336163B1 (en) 1999-07-30 2002-01-01 International Business Machines Corporation Method and article of manufacture for inserting volumes for import into a virtual tape server
US6351685B1 (en) 1999-11-05 2002-02-26 International Business Machines Corporation Wireless communication between multiple intelligent pickers and with a central job queue in an automated data storage library
GB2366014B (en) * 2000-08-19 2004-10-13 Ibm Free space collection in information storage systems
US6832289B2 (en) * 2001-10-11 2004-12-14 International Business Machines Corporation System and method for migrating data
US6983351B2 (en) * 2002-04-11 2006-01-03 International Business Machines Corporation System and method to guarantee overwrite of expired data in a virtual tape server
US7103731B2 (en) * 2002-08-29 2006-09-05 International Business Machines Corporation Method, system, and program for moving data among storage units
US6954831B2 (en) * 2002-08-29 2005-10-11 International Business Machines Corporation Method, system, and article of manufacture for borrowing physical volumes
US6985916B2 (en) * 2002-08-29 2006-01-10 International Business Machines Corporation Method, system, and article of manufacture for returning physical volumes
US6954768B2 (en) * 2002-08-29 2005-10-11 International Business Machines Corporation Method, system, and article of manufacture for managing storage pools
US6978325B2 (en) * 2002-08-29 2005-12-20 International Business Machines Corporation Transferring data in virtual tape server, involves determining availability of small chain of data, if large chain is not available while transferring data to physical volumes in peak mode
US6952757B2 (en) * 2002-08-29 2005-10-04 International Business Machines Corporation Method, system, and program for managing storage units in storage pools
US7249218B2 (en) * 2002-08-29 2007-07-24 International Business Machines Corporation Method, system, and program for managing an out of available space condition


Also Published As

Publication number Publication date
KR100633982B1 (en) 2006-10-16
US20040044854A1 (en) 2004-03-04
AU2003251066A8 (en) 2004-03-19
JP4502807B2 (en) 2010-07-14
DE60313783T2 (en) 2008-06-05
US7103731B2 (en) 2006-09-05
CN1675614A (en) 2005-09-28
US9213496B2 (en) 2015-12-15
WO2004021190A3 (en) 2004-09-23
JP2005537554A (en) 2005-12-08
KR20050027263A (en) 2005-03-18
CN1295591C (en) 2007-01-17
AU2003251066A1 (en) 2004-03-19
EP1540455B1 (en) 2007-05-09
US20060294336A1 (en) 2006-12-28
DE60313783D1 (en) 2007-06-21
EP1540455A2 (en) 2005-06-15
ATE362132T1 (en) 2007-06-15
CA2497326A1 (en) 2004-03-11
CA2497326C (en) 2011-10-11

Similar Documents

Publication Publication Date Title
CA2497326C (en) Moving data among storage units
US6952757B2 (en) Method, system, and program for managing storage units in storage pools
US8301834B2 (en) System for determining allocation of tape drive resources for a secure data erase process
US9933959B2 (en) Method for selectively performing a secure data erase to ensure timely erasure
US7979664B2 (en) Method, system, and article of manufacture for returning empty physical volumes to a storage pool based on a threshold and an elapsed time period
US7577800B2 (en) Method for borrowing and returning physical volumes
US9141303B2 (en) Method for selectively performing a secure data erase to ensure timely erasure
JP4351729B2 (en) Maintaining information in one or more virtual volume aggregates containing multiple virtual volumes
US7249218B2 (en) Method, system, and program for managing an out of available space condition
US8332599B2 (en) Method for determining allocation of tape drive resources for a secure data erase process
US7818530B2 (en) Data management systems, articles of manufacture, and data storage methods
WO2007020121A1 (en) Maintaining an aggregate including active files in a storage pool in a random access medium
US6895466B2 (en) Apparatus and method to assign pseudotime attributes to one or more logical volumes
JP5203788B2 (en) Method, computer program and system for determining tape drive allocation for secure data erasure process

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NI NO NZ OM PH PL PT RO RU SC SD SE SG SK SL TJ TM TN TR TT TZ UA UG UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 1020057001480

Country of ref document: KR

WWE Wipo information: entry into national phase

Ref document number: 20038187949

Country of ref document: CN

ENP Entry into the national phase

Ref document number: 2497326

Country of ref document: CA

WWE Wipo information: entry into national phase

Ref document number: 2004532263

Country of ref document: JP

WWE Wipo information: entry into national phase

Ref document number: 2003790999

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 1020057001480

Country of ref document: KR

WWP Wipo information: published in national office

Ref document number: 2003790999

Country of ref document: EP

WWG Wipo information: grant in national office

Ref document number: 2003790999

Country of ref document: EP