US20190384507A1 - Dynamic scratch pool management on a virtual tape system - Google Patents
- Publication number: US20190384507A1
- Authority: US (United States)
- Prior art keywords: volumes, pool, scratch, scratch pool, external
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
All classifications fall under G06F3/06 (digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers):
- G06F3/061: Improving I/O performance
- G06F3/0617: Improving the reliability of storage systems in relation to availability
- G06F3/0608: Saving storage space on storage systems
- G06F3/0631: Configuration or reconfiguration of storage systems by allocating resources to storage systems
- G06F3/0644: Management of space entities, e.g. partitions, extents, pools
- G06F3/0664: Virtualisation aspects at device level, e.g. emulation of a storage device or system
- G06F3/0682: Tape device
- G06F3/0686: Libraries, e.g. tape libraries, jukebox
Definitions
- This invention relates to systems and methods for managing scratch pools in virtual tape systems.
- As data storage needs continue to increase at a rapid rate, magnetic tape continues to offer significant advantages over other data storage technologies. At an average cost on the order of $0.01 per gigabyte, tape storage is typically the most affordable option for storing massive quantities of data. Recent technological advances have also increased the speed at which data can be written to and/or retrieved from tape, with some tape drives having the ability to read and/or write data at speeds of over 1 terabyte per hour. Other advantages of magnetic tape include reduced energy costs associated with storing data, portability, greater reliability and longevity, and the ability to easily scale tape storage as storage needs increase. For these reasons, tape storage often plays a significant role in an organization's data storage infrastructure.
- a virtual tape system (VTS) is a storage solution that combines a high-speed disk cache with tape automation, tape drives, and intelligent storage management software running on a server.
- the disk cache associated with the VTS acts as a buffer to the tape drives, providing near-instantaneous performance for multiple, simultaneous scratch-mount requests and for specific mount requests for tape volumes that reside in the disk cache.
- a VTS breaks the one-to-one connection between a logical tape drive and a physical tape drive, enabling logical access to significantly more tape drives than are physically installed.
- a VTS breaks the one-to-one connection between a tape cartridge and a tape volume.
- In a VTS, a user typically must have at least one volume available in a scratch pool in order to satisfy a request to mount a volume to write new files to tape. It is common for a scratch pool to run out of volumes, which can cause disruption to batch and online processing. When this occurs, an administrator must typically intervene to free up additional volumes to be placed in the scratch pool. This can be a time-consuming process that may undesirably cause delays to production cycles. To avoid such delays, an administrator may need to decide how many scratch volumes are needed in the pool and monitor the number of volumes to ensure production cycles are not negatively impacted by running out of scratch volumes.
- a method for managing volumes in a scratch pool of a virtual tape system provides a scratch pool containing volumes for use in a virtual tape system.
- the method further enables a user to predefine an external pool of volumes residing outside of the scratch pool. This external pool may be hidden to a host system accessing the virtual tape system.
- the method monitors current and/or past usage of the volumes in the scratch pool and, based on the usage, predicts a future need for volumes in the scratch pool.
- the method automatically moves volumes between the external pool and the scratch pool in accordance with the future need.
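The monitor-predict-move cycle summarized above might be sketched as follows. All names here (`rebalance`, the list-based pool representation) are illustrative assumptions, not the patented implementation:

```python
# Hypothetical sketch of the dynamic scratch pool rebalancing step: move
# volumes between the external pool and the scratch pool so the scratch
# pool tracks the predicted need without overshooting it.

def rebalance(scratch, external, predicted_need):
    """Mutate the scratch and external pools (lists of volumes) in place."""
    if len(scratch) < predicted_need:
        # Pull just enough predefined volumes from the external pool.
        shortfall = predicted_need - len(scratch)
        for _ in range(min(shortfall, len(external))):
            scratch.append(external.pop())
    elif len(scratch) > predicted_need:
        # Return the surplus so hosts are not burdened with extra volumes.
        for _ in range(len(scratch) - predicted_need):
            external.append(scratch.pop())
    return scratch, external
```

The external pool can never go negative: the shortfall branch caps the transfer at the number of volumes actually predefined.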
- FIG. 1 is a high-level block diagram showing one example of a network environment in which a system and method in accordance with the invention may be implemented;
- FIG. 2 is a high-level block diagram showing a virtual tape system utilizing a scratch pool of volumes to accommodate volume mount requests;
- FIG. 3 is a high-level block diagram showing an external pool of volumes used to dynamically increase a number of volumes in the scratch pool;
- FIG. 4 is a high-level block diagram showing a scratch pool management module and various sub-modules
- FIG. 5 is a process flow diagram showing a method for predefining an external pool of volumes
- FIG. 6 is a process flow diagram showing a method for utilizing the external pool to increase the number of volumes in the scratch pool.
- FIG. 7 is a process flow diagram showing a method for substituting external pool volumes for those that are manually inserted.
- the present invention may be embodied as a system, method, and/or computer program product.
- the computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.
- the computer readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device.
- the computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
- a non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing.
- a computer readable storage medium is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
- Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network.
- the network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers.
- a network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
- Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk, C++, or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages.
- the computer readable program instructions may execute entirely on a user's computer, partly on a user's computer, as a stand-alone software package, partly on a user's computer and partly on a remote computer, or entirely on a remote computer or server.
- a remote computer may be connected to a user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
- electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
- These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
- These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
- the computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
- Referring to FIG. 1, one example of a network environment 100 is illustrated.
- the network environment 100 is presented to show one example of an environment where systems and methods in accordance with the invention may be implemented.
- the network environment 100 is presented by way of example and not limitation. Indeed, the systems and methods disclosed herein may be applicable to a wide variety of network environments, in addition to the network environment 100 shown.
- the network environment 100 includes one or more computers 102 , 106 interconnected by a network 104 .
- the network 104 may include, for example, a local-area-network (LAN) 104 , a wide-area-network (WAN) 104 , the Internet 104 , an intranet 104 , or the like.
- the computers 102 , 106 may include both client computers 102 and server computers 106 (also referred to herein as “host systems” 106 ). In general, the client computers 102 initiate communication sessions, whereas the server computers 106 wait for requests from the client computers 102 .
- the computers 102 and/or servers 106 may connect to one or more internal or external direct-attached storage systems 112 (e.g., arrays of hard-disk drives, solid-state drives, tape drives, tape libraries, virtual tape libraries etc.). These computers 102 , 106 and direct-attached storage systems 112 may communicate using protocols such as ATA, SATA, SCSI, SAS, Fibre Channel, or the like.
- the network environment 100 may, in certain embodiments, include a storage network 108 behind the servers 106 , such as a storage-area-network (SAN) 108 or a LAN 108 (e.g., when using network-attached storage).
- This network 108 may connect the servers 106 to one or more storage systems 110 , such as arrays 110 a of hard-disk drives or solid-state drives, tape libraries 110 b or virtual tape libraries 110 b , individual hard-disk drives 110 c or solid-state drives 110 c , tape drives 110 d or virtual tape drives 110 d , CD-ROM libraries, or the like.
- a host system 106 may communicate over physical connections from one or more ports on the host 106 to one or more ports on the storage system 110 .
- a connection may be through a switch, fabric, direct connection, or the like.
- the servers 106 and storage systems 110 may communicate using a networking standard such as Fibre Channel (FC).
- a virtual tape library 110 b may utilize a scratch pool 200 of virtual tape volumes 202 to accommodate requests to mount volumes to store files thereon.
- When a volume 202 is full of data, the volume 202 may be inventoried in an archive 204 until the data is no longer needed or has expired. At this point, the volume 202 may be returned to the scratch pool 200 so it can be reused for future mount requests.
- a virtual tape system 110 b must typically have at least one volume 202 available in its scratch pool 200 in order to satisfy a request to mount a volume 202 to write new files to tape. It is common for a scratch pool 200 to run out of volumes 202 , which can cause disruption to production activities such as batch and online processing. When this occurs, an administrator may need to intervene to free up additional volumes 202 for placement in the scratch pool 200 . This can be a time-consuming process that may undesirably cause delays to production cycles. To avoid such delays, an administrator may need to decide how many scratch volumes 202 are needed in the scratch pool 200 and monitor the level of available scratch volumes 202 to make sure production cycles are not adversely impacted by running out of scratch volumes 202 .
- an external pool 300 of volumes 202 may be established to increase a number of scratch volumes 202 in the scratch pool 200 .
- the volumes 202 in the external pool 300 may be predefined in advance.
- the volumes 202 in the external pool 300 may be assigned a range of volume serial numbers (i.e., volsers), default constructs, and media types.
- the default constructs may establish whether the volumes 202 support encryption and/or compression and, if so, what types of encryption/compression the volumes 202 support.
- the default constructs may also establish the storage capacities of the volumes 202 , the recording technologies used to record data on the volumes 202 , and the like.
- the designated media types may indicate the type of magnetic tape the volumes 202 are configured to emulate. Because defining the characteristics of the volumes 202 may take significant time, predefining the volumes 202 in the external pool 300 before they are actually needed may reduce delay and enable the volumes 202 to be dynamically added to the scratch pool 200 on an as-needed basis.
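A predefinition step like the one above might be sketched as follows. The volser format, the construct fields, and the `"3490E"` media type value are assumptions chosen for illustration:

```python
# Illustrative sketch of predefining logical volumes for an external pool,
# each with a volume serial number (volser), default constructs, and a
# media type indicating the tape technology the volume emulates.

from dataclasses import dataclass

@dataclass
class LogicalVolume:
    volser: str        # volume serial number, e.g. "VT0042"
    media_type: str    # type of magnetic tape the volume emulates
    encryption: bool   # default construct: encryption support
    compression: bool  # default construct: compression support

def predefine_external_pool(prefix, start, count, media_type="3490E",
                            encryption=True, compression=True):
    """Create `count` predefined logical volumes with sequential volsers."""
    return [LogicalVolume(f"{prefix}{n:04d}", media_type,
                          encryption, compression)
            for n in range(start, start + count)]
```

Because these objects are built before any mount request arrives, moving one into the scratch pool later is just a pointer move rather than a full definition step.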
- Although the virtual tape system 110 b in FIG. 3 is shown with a single scratch pool 200 and external pool 300 , in reality the virtual tape system 110 b may include multiple scratch pools 200 and external pools 300 .
- Each scratch pool 200 and external pool 300 may have its own volume serial numbers, default constructs, and media types.
- the volumes 202 in an external pool 300 may be predefined with volume serial numbers, default constructs, and media types that are consistent with the scratch pool 200 to which it is assigned. This enables volumes 202 to be dynamically added to the scratch pool 200 while assuring that the volumes 202 in the external pool 300 have the characteristics required of volumes 202 in the scratch pool 200 .
- the volumes 202 in the external pool 300 may be dynamically and automatically moved to the scratch pool 200 to accommodate temporary or unanticipated spikes in workload processing.
- a scratch pool management module 400 may be included in the virtual tape system 110 b .
- the scratch pool management module 400 may be implemented in software, hardware, firmware, or a combination thereof.
- the scratch pool management module 400 may be used to manage volumes 202 in the scratch pool 200 and dynamically add volumes 202 to the scratch pool 200 on an as-needed basis and/or in anticipation of a future need.
- the scratch pool management module 400 may include various sub-modules to provide different features and functions.
- the sub-modules may include one or more of a predefinition module 402 , monitoring module 404 , prediction module 406 , threshold module 408 , movement module 410 , grace period module 412 , reporting module 414 , broadcast module 416 , substitution module 418 , and notification module 420 .
- These sub-modules are presented by way of example and not limitation. More or fewer modules may be provided in different embodiments. For example, the functionality of some sub-modules may be combined into a single or smaller number of sub-modules, or the functionality of a single sub-module may, in certain embodiments, be distributed across several sub-modules.
- the predefinition module 402 may enable a user to predefine an external pool 300 for a scratch pool 200 , as well as volumes 202 within the external pool 300 .
- the predefinition module 402 may enable a user to define a range of volume serial numbers (i.e., volsers) for the external pool 300 , as well as default constructs, default media types, thresholds, and the like, for volumes 202 within the external pool 300 . This may be performed before the volumes 202 in the external pool 300 are actually needed within the scratch pool 200 .
- the monitoring module 404 may monitor usage of volumes 202 in the scratch pool 200 . This may include monitoring past and present usage as well as the number of volumes 202 that are available in the scratch pool 200 during these time periods. The monitoring module 404 may also monitor peak usage times or spikes in usage that may consume additional volumes 202 in the scratch pool 200 , or times or periods when the scratch pool 200 ran out of volumes 202 . In other cases, the monitoring module 404 may monitor the growth rate of volumes 202 in the scratch pool 200 and, in certain embodiments, whether this growth rate is outside of normal or an indicator of some type of problem or error.
- the monitoring module 404 may also monitor the return of volumes 202 from the archive 204 to the scratch pool 200 (i.e., volumes 202 changing from private to scratch).
- the monitoring module 404 may, in certain embodiments, record observed numbers in a log and keep a rolling average (e.g., a 30 day rolling average) in order to track trends in scratch volume 202 consumption and return to the scratch pool 200 .
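The logging-and-rolling-average behavior described above might be sketched as follows; the class name and fixed one-sample-per-day structure are assumptions:

```python
# Minimal sketch of the monitoring log: record daily scratch volume
# consumption and keep a rolling average (e.g., 30 days) to track trends.

from collections import deque

class ConsumptionMonitor:
    def __init__(self, window_days=30):
        # deque with maxlen silently discards samples older than the window
        self.samples = deque(maxlen=window_days)

    def record(self, volumes_consumed):
        """Log one day's scratch volume consumption."""
        self.samples.append(volumes_consumed)

    def rolling_average(self):
        """Average consumption over the retained window (0.0 if empty)."""
        return sum(self.samples) / len(self.samples) if self.samples else 0.0
```

The same structure could track volumes returned from the archive, so that consumption and replenishment trends can be compared.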
- the prediction module 406 may predict future need for volumes 202 in the scratch pool 200 .
- a threshold module 408 may detect whether a number of volumes 202 in the scratch pool 200 has fallen below a threshold associated with the scratch pool 200 . This threshold may, in certain embodiments, be based on future need determined by the prediction module 406 . For example, the threshold may be set at ninety percent of the designated need. If the number of volumes 202 in the scratch pool 200 falls below this threshold, the movement module 410 may move volumes 202 from the external pool 300 to the scratch pool 200 to more closely align the number of volumes 202 in the scratch pool 200 with the designated need for volumes 202 .
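The threshold check might be sketched as follows; the 0.9 default mirrors the ninety-percent example above, and the function name is an assumption:

```python
# Hedged sketch of the threshold module's decision: if the scratch pool
# falls below a threshold fraction of the designated need, compute how
# many volumes to move from the external pool to close the gap.

def volumes_to_move(scratch_count, external_count, designated_need,
                    threshold_ratio=0.9):
    """Return the number of volumes to move from the external pool."""
    if scratch_count < designated_need * threshold_ratio:
        # Top up to the designated need, but never more than the
        # external pool actually holds.
        return min(designated_need - scratch_count, external_count)
    return 0
```

Topping up to the designated need (rather than just back over the threshold) keeps the pool aligned with predicted demand while the `min` guard avoids promising volumes the external pool does not have.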
- the scratch pool management module 400 carefully manages the number of volumes 202 in the scratch pool 200 to avoid exceeding the designated need and thereby placing additional processing and book-keeping overhead on host systems 106 using the virtual tape system 110 b .
- the movement module 410 is configured to move volumes 202 from the scratch pool 200 to the external pool 300 when the scratch pool 200 contains more volumes 202 than are needed or anticipated to be needed.
- the grace period module 412 may, in certain embodiments, provide a host system 106 a certain amount of time (i.e., a “grace period”) to still be able to access data on the returned volumes 202 .
- the monitoring module 404 may take this grace period into account when determining how many volumes 202 are not just present in the scratch pool 200 , but are actually available for reuse. This, in turn, may affect how many volumes 202 are moved from the external pool 300 to the scratch pool 200 to accommodate the designated need.
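Counting only grace-period-expired volumes as available might look like the following sketch; representing each returned volume by its return timestamp is an assumption:

```python
# Illustrative count of volumes actually available for reuse: volumes
# still inside their grace period are present in the scratch pool but
# their data may still be accessed by a host, so they are not yet
# eligible for reuse.

from datetime import datetime, timedelta

def available_for_reuse(return_times, grace_period, now=None):
    """Count scratch volumes whose grace period has expired."""
    now = now or datetime.now()
    return sum(1 for returned_at in return_times
               if now - returned_at >= grace_period)
```

The difference between the raw pool size and this count is what would drive extra movement from the external pool.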
- the reporting module 414 may create reports showing which host systems 106 , programs, job names, and data set naming patterns are consuming volumes 202 in the scratch pool 200 . This may help an administrator determine if the behavior is expected or if configuration changes are needed in the workload. Thus, in certain embodiments, data that is collected by the monitoring module 404 and used by the prediction module 406 to forecast need may also be provided to an administrator in reports so that the administrator may use the data to make decisions or configuration changes in the virtual tape system 110 b . This data may help an administrator understand what is consuming volumes 202 in the scratch pool 200 and/or why the volumes 202 are being consumed faster or at a different rate than expected.
- when volumes 202 are moved between the external pool 300 and the scratch pool 200 , the broadcast module 416 may communicate these changes to connected host systems 106 .
- software (e.g., tape management system software) on a host system 106 may receive the broadcast and update internal records, such as a tape configuration database and/or tape management system database.
- the substitution module 418 may check whether volumes 202 already exist in the external pool 300 . If volumes 202 are present in the external pool 300 , the substitution module 418 may substitute the volumes 202 in the external pool 300 for those the user is attempting to insert/define. These volumes 202 may then be moved from the external pool 300 to the scratch pool 200 in lieu of the attempted insertion. If the volumes 202 the user is attempting to insert are not in the external pool 300 , the volumes 202 may be inserted into the scratch pool 200 in the conventional manner using conventional insert processing.
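The substitution check might be sketched as follows; representing each pool as a set of volsers is an assumption:

```python
# Sketch of the substitution module's logic: when a user attempts to
# insert volsers, prefer matching predefined volumes already in the
# external pool; anything else falls through to conventional insert
# processing.

def substitute_insert(requested_volsers, external_pool, scratch_pool):
    """Place each requested volser into the scratch pool, substituting
    predefined external pool volumes where they exist."""
    for volser in requested_volsers:
        if volser in external_pool:
            external_pool.remove(volser)  # substitute the predefined volume
            scratch_pool.add(volser)
        else:
            scratch_pool.add(volser)      # conventional insert processing
    return scratch_pool, external_pool
```

Either way the volser ends up in the scratch pool; the branch only decides whether an already-defined volume is reused, which avoids redefining characteristics that were set up in advance.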
- the notification module 420 may notify a user via, for example, a console message, that additional volumes 202 (i.e., an additional volser range) need to be added to the external pool 300 . This may performed well in advance of the external pool 300 actually running out of volumes 202 . This enables the user to define additional volumes 202 in the external pool 300 so that the volumes 202 are available to the scratch pool 200 but without actually placing additional volumes 202 in the scratch pool 200 until they are needed. This will prevent over-allocation of volumes 202 to the scratch pool 200 and the attendant negative effects to host performance.
- additional volumes 202 i.e., an additional volser range
- a method 500 for predefining an external pool 300 of volumes 202 is illustrated.
- This method 500 may, in certain embodiments, be executed by the predefinition module 402 previously discussed.
- the method 500 may enable a user to assign 502 a range of volume serial numbers (volsers) to an external pool 300 , as well as specify default constructs, default media types (e.g., what type of magnetic tape to emulate), and the like.
- the user may also specify thresholds for each scratch pool 200 of the virtual tape system 110 b . For example, the user may configure the virtual tape system 110 b to move volumes 202 from the external pool 300 to the scratch pool 200 if the number of volumes 202 in the scratch pool 200 falls below a certain specified number or percentage.
- the virtual tape system 110 b may automatically create 504 logical volumes 202 in the specified volser range with the specified constructs and media types. These logical volumes 202 may be placed in the external pool 300 so they are available for movement to the scratch pool 200 if and when they are needed.
- a method 600 for utilizing the external pool 300 to increase a number of volumes 202 in the scratch pool 200 is illustrated.
- This method 600 may, in certain embodiments, be executed by the threshold module 408 and movement module 410 previously discussed. This method 600 may be executed after the scratch pools 200 and associated external pools 300 have been established.
- the method 600 initially examines 602 a first scratch pool 200 of the set of scratch pools 200 .
- the method 600 determines 604 the number of logical volumes 202 in the scratch pool 200 . If, at step 606 , the number of logical volumes 202 is below the threshold established for the scratch pool 200 , the method 600 may move 608 logical volumes 202 from the associated external pool 300 to the scratch pool 200 .
- the number of logical volumes 202 that are moved may, in certain embodiments, depend on the amount that the threshold is exceeded.
- the method 600 may then notify 610 any connected host systems 106 of the logical volumes 202 that have been moved into the scratch pool 200 . This may allow the host systems 106 to update their internal catalogs and/or databases. For example, the notification may prompt tape management software on the host systems 106 to update their tape configuration databases and/or tape management system databases.
- the method 600 may then be repeated 612 for each scratch pool 200 and associated external pool 300 in the virtual tape system 110 b .
- the method 600 may be executed periodically to maintain a needed number of volumes 202 in the scratch pools 200 .
- FIG. 7 shows embodiment of a method 700 for substituting external pool volumes 202 for those that are manually inserted by a user.
- This method 700 may, in certain embodiments, be executed by the substitution module 418 previously discussed.
- the method 700 initially detects 702 when a user is attempting to insert volumes 202 into a scratch pool 200 . When this occurs, the method 700 determines 704 whether the volumes 202 that are being inserted are already defined in an external pool 300 associated with the scratch pool 200 . If the volumes 202 are already defined in the external pool 300 , the method 700 substitutes 706 the volumes 202 from the external pool 300 for those being inserted by the user (particularly if the volumes 202 in the external pool 300 have the same volsers as those being inserted by the user).
- the method 700 may continue 708 with insert processing in the conventional manner.
- the method 700 completes 710 the insert processing by inserting the volumes 202 into the scratch pool.
- the method 700 may also notify 712 any connected host systems 106 and/or administrator consoles that volumes 202 were added to the scratch pool 200 so that the host systems 106 and/or administrator consoles may updates their internal records/databases.
- the method 700 also notifies the host systems 106 and/or administrator consoles that volumes 202 from the external pool 300 were substituted for those that were attempted to be manually inserted.
Description
- This invention relates to systems and methods for managing scratch pools in virtual tape systems.
- As data storage needs continue to increase at a rapid rate, magnetic tape continues to offer significant advantages over other data storage technologies. At an average cost on the order of $0.01 per gigabyte, tape storage is typically the most affordable option for storing massive quantities of data. Recent technological advances have also increased the speed at which data can be written to and retrieved from tape, with some tape drives able to read and/or write data at speeds of over 1 terabyte per hour. Other advantages of magnetic tape include reduced energy costs associated with storing data, portability, greater reliability and longevity, and the ability to easily scale tape storage as storage needs increase. For these reasons, tape storage often plays a significant role in an organization's data storage infrastructure.
- A virtual tape system (VTS) is a storage solution that combines a high-speed disk cache with tape automation, tape drives, and intelligent storage management software running on a server. The disk cache associated with the VTS acts as a buffer to the tape drives, providing near-instantaneous performance for multiple, simultaneous scratch-mount requests and for specific mount requests for tape volumes that reside in the disk cache. A VTS breaks the one-to-one connection between a logical tape drive and a physical tape drive, enabling logical access to significantly more tape drives than are physically installed. In addition, a VTS breaks the one-to-one connection between a tape cartridge and a tape volume. One key reason tapes are significantly underutilized is that a single application may own a particular drive and the associated tapes. If that application does not fully utilize the associated tape capacity, it may be wasted.
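The decoupling described above can be pictured with a short illustrative sketch. This is a toy model only — the class and method names are hypothetical and are not part of any claimed implementation — but it shows how a VTS can expose many more logical drives than physically exist, with writes buffered in a disk cache rather than tying up a physical drive:

```python
class VirtualTapeSystem:
    """Toy illustration (hypothetical names): a VTS decouples logical
    from physical resources, so hosts can address far more logical tape
    drives than the number of physical drives installed."""

    def __init__(self, physical_drives: int, logical_drives: int):
        self.physical_drives = physical_drives  # e.g., a handful of real drives
        self.logical_drives = logical_drives    # e.g., hundreds of virtual drives
        self.disk_cache = {}                    # volser -> data buffered on disk

    def mount_and_write(self, volser: str, data: bytes) -> str:
        # Writes land in the disk cache immediately; migration to a
        # physical tape drive can happen later, so a scratch mount
        # completes without waiting for a physical drive.
        self.disk_cache[volser] = data
        return "mounted"

vts = VirtualTapeSystem(physical_drives=4, logical_drives=256)
# 64x more logical drives than physical drives are addressable.
ratio = vts.logical_drives // vts.physical_drives  # 64
```

The point of the sketch is the ratio: because a logical drive is no longer bound one-to-one to a physical drive or cartridge, capacity that a single application would otherwise leave idle can be shared.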
- In a VTS, a user typically must have at least one volume available in a scratch pool in order to satisfy a request to mount a volume to write new files to tape. It is common for a scratch pool to run out of volumes, which can cause disruption to batch and online processing. When this occurs, an administrator must typically intervene to free up additional volumes to be placed in the scratch pool. This can be a time-consuming process that may undesirably cause delays to production cycles. To avoid such delays, an administrator may need to decide how many scratch volumes are needed in the pool and monitor the number of volumes to ensure production cycles are not negatively impacted by running out of scratch volumes.
- In view of the foregoing, what are needed are systems and methods to monitor scratch pools and automatically add scratch volumes to the scratch pools on an as-need basis. Ideally, such systems and methods will minimize impacts to production activities such as batch and online processing.
- The invention has been developed in response to the present state of the art and, in particular, in response to the problems and needs in the art that have not yet been fully solved by currently available systems and methods. Accordingly, systems and methods are disclosed for managing scratch pool volumes in a virtual tape system. The features and advantages of the invention will become more fully apparent from the following description and appended claims, or may be learned by practice of the invention as set forth hereinafter.
- Consistent with the foregoing, a method for managing volumes in a scratch pool of a virtual tape system is disclosed. In one embodiment, such a method provides a scratch pool containing volumes for use in a virtual tape system. The method further enables a user to predefine an external pool of volumes residing outside of the scratch pool. This external pool may be hidden to a host system accessing the virtual tape system. The method monitors current and/or past usage of the volumes in the scratch pool and, based on the usage, predicts a future need for volumes in the scratch pool. The method automatically moves volumes between the external pool and the scratch pool in accordance with the future need.
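The interplay of these steps — monitor usage, predict need, move volumes between the hidden external pool and the scratch pool — can be sketched in a few lines of illustrative code. All names here are hypothetical; this is a simplification under the assumption that predicted need is a rolling average of past consumption, not the actual claimed implementation:

```python
from collections import deque

class ScratchPoolManager:
    """Illustrative sketch (hypothetical names): monitor scratch-pool
    usage, forecast future need, and move volumes between a predefined
    external pool and the scratch pool to track that need."""

    def __init__(self, scratch_pool, external_pool, window=30):
        self.scratch_pool = scratch_pool            # volumes visible to hosts
        self.external_pool = external_pool          # predefined, hidden from hosts
        self.usage_history = deque(maxlen=window)   # recent consumption counts

    def record_usage(self, volumes_consumed: int):
        self.usage_history.append(volumes_consumed)

    def predict_need(self) -> int:
        # Forecast need as a rolling average of observed consumption.
        if not self.usage_history:
            return 0
        return round(sum(self.usage_history) / len(self.usage_history))

    def rebalance(self):
        need = self.predict_need()
        on_hand = len(self.scratch_pool)
        if on_hand < need:    # under-provisioned: pull volumes in
            for _ in range(min(need - on_hand, len(self.external_pool))):
                self.scratch_pool.append(self.external_pool.pop())
        elif on_hand > need:  # over-provisioned: push volumes back
            for _ in range(on_hand - need):
                self.external_pool.append(self.scratch_pool.pop())
```

For example, with two volumes on hand and a recorded consumption of four, `rebalance()` would draw two predefined volumes from the external pool, leaving the scratch pool sized to the forecast rather than over-allocated.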
- A corresponding system and computer program product are also disclosed and claimed herein.
- In order that the advantages of the invention will be readily understood, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered limiting of its scope, the invention will be described and explained with additional specificity and detail through use of the accompanying drawings, in which:
- FIG. 1 is a high-level block diagram showing one example of a network environment in which a system and method in accordance with the invention may be implemented;
- FIG. 2 is a high-level block diagram showing a virtual tape system utilizing a scratch pool of volumes to accommodate volume mount requests;
- FIG. 3 is a high-level block diagram showing an external pool of volumes used to dynamically increase a number of volumes in the scratch pool;
- FIG. 4 is a high-level block diagram showing a scratch pool management module and various sub-modules;
- FIG. 5 is a process flow diagram showing a method for predefining an external pool of volumes;
- FIG. 6 is a process flow diagram showing a method for utilizing the external pool to increase the number of volumes in the scratch pool; and
- FIG. 7 is a process flow diagram showing a method for substituting external pool volumes for those that are manually inserted.
- It will be readily understood that the components of the present invention, as generally described and illustrated in the Figures herein, could be arranged and designed in a wide variety of different configurations. Thus, the following more detailed description of the embodiments of the invention, as represented in the Figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of certain examples of presently contemplated embodiments in accordance with the invention. The presently described embodiments will be best understood by reference to the drawings, wherein like parts are designated by like numerals throughout.
- The present invention may be embodied as a system, method, and/or computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.
- The computer readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
- Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
- Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages.
- The computer readable program instructions may execute entirely on a user's computer, partly on a user's computer, as a stand-alone software package, partly on a user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter scenario, a remote computer may be connected to a user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
- Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, may be implemented by computer readable program instructions.
- These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
- The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
- Referring to FIG. 1, one example of a network environment 100 is illustrated. The network environment 100 is presented to show one example of an environment where systems and methods in accordance with the invention may be implemented. The network environment 100 is presented by way of example and not limitation. Indeed, the systems and methods disclosed herein may be applicable to a wide variety of network environments, in addition to the network environment 100 shown.
- As shown, the network environment 100 includes one or more computers 102, 106 interconnected by a network 104. The network 104 may include, for example, a local-area-network (LAN) 104, a wide-area-network (WAN) 104, the Internet 104, an intranet 104, or the like. In certain embodiments, the computers 102, 106 may include both client computers 102 and server computers 106 (also referred to herein as "host systems" 106). In general, the client computers 102 initiate communication sessions, whereas the server computers 106 wait for requests from the client computers 102. In certain embodiments, the computers 102 and/or servers 106 may connect to one or more internal or external direct-attached storage systems 112 (e.g., arrays of hard-disk drives, solid-state drives, tape drives, tape libraries, virtual tape libraries, etc.). These computers 102, 106 and direct-attached storage systems 112 may communicate using protocols such as ATA, SATA, SCSI, SAS, Fibre Channel, or the like.
- The network environment 100 may, in certain embodiments, include a storage network 108 behind the servers 106, such as a storage-area-network (SAN) 108 or a LAN 108 (e.g., when using network-attached storage). This network 108 may connect the servers 106 to one or more storage systems 110, such as arrays 110a of hard-disk drives or solid-state drives, tape libraries 110b or virtual tape libraries 110b, individual hard-disk drives 110c or solid-state drives 110c, tape drives 110d or virtual tape drives 110d, CD-ROM libraries, or the like. To access a storage system 110, a host system 106 may communicate over physical connections from one or more ports on the host 106 to one or more ports on the storage system 110. A connection may be through a switch, fabric, direct connection, or the like. In certain embodiments, the servers 106 and storage systems 110 may communicate using a networking standard such as Fibre Channel (FC).
- Referring to
FIG. 2, in certain embodiments, a virtual tape library 110b (also referred to herein as a virtual tape system 110b) may utilize a scratch pool 200 of virtual tape volumes 202 to accommodate requests to mount volumes to store files thereon. When a volume 202 is full of data, the volume 202 may be inventoried in an archive 204 until the data is no longer needed or has expired. At this point, the volume 202 may be returned to the scratch pool 200 so it can be reused for future mount requests.
- As previously mentioned, a virtual tape system 110b must typically have at least one volume 202 available in its scratch pool 200 in order to satisfy a request to mount a volume 202 to write new files to tape. It is common for a scratch pool 200 to run out of volumes 202, which can cause disruption to production activities such as batch and online processing. When this occurs, an administrator may need to intervene to free up additional volumes 202 for placement in the scratch pool 200. This can be a time-consuming process that may undesirably cause delays to production cycles. To avoid such delays, an administrator may need to decide how many scratch volumes 202 are needed in the scratch pool 200 and monitor the level of available scratch volumes 202 to make sure production cycles are not adversely impacted by running out of scratch volumes 202.
- Referring to
FIG. 3, in certain embodiments in accordance with the invention, an external pool 300 of volumes 202 may be established to increase the number of scratch volumes 202 in the scratch pool 200. In certain embodiments, the volumes 202 in the external pool 300 may be predefined in advance. For example, the volumes 202 in the external pool 300 may be assigned a range of volume serial numbers (i.e., volsers), default constructs, and media types. The default constructs may establish whether the volumes 202 support encryption and/or compression and, if so, what types of encryption/compression the volumes 202 support. The default constructs may also establish the storage capacities of the volumes 202, the recording technologies used to record data on the volumes 202, and the like. The designated media types may indicate the type of magnetic tape the volumes 202 are configured to emulate. Because defining the characteristics of the volumes 202 may take significant time, predefining the volumes 202 in the external pool 300 before they are actually needed may reduce delay and enable the volumes 202 to be dynamically added to the scratch pool 200 on an as-need basis.
- Although the virtual tape system 110b in FIG. 3 is shown with a single scratch pool 200 and external pool 300, in reality the virtual tape system 110b may include multiple scratch pools 200 and external pools 300. Each scratch pool 200 and external pool 300 may have its own volume serial numbers, default constructs, and media types. The volumes 202 in an external pool 300 may be predefined with volume serial numbers, default constructs, and media types that are consistent with the scratch pool 200 to which it is assigned. This enables volumes 202 to be dynamically added to the scratch pool 200 while assuring that the volumes 202 in the external pool 300 have the characteristics required of volumes 202 in the scratch pool 200. In certain cases, the volumes 202 in the external pool 300 may be dynamically and automatically moved to the scratch pool 200 to accommodate temporary or unanticipated spikes in workload processing.
- Referring to
FIG. 4, in order to provide various features and functions in association with the external pool 300, a scratch pool management module 400 may be included in the virtual tape system 110b. The scratch pool management module 400 may be implemented in software, hardware, firmware, or a combination thereof. In general, the scratch pool management module 400 may be used to manage volumes 202 in the scratch pool 200 and dynamically add volumes 202 to the scratch pool 200 on an as-need basis and/or to anticipate a future need.
- As shown, the scratch pool management module 400 may include various sub-modules to provide different features and functions. The sub-modules may include one or more of a predefinition module 402, monitoring module 404, prediction module 406, threshold module 408, movement module 410, grace period module 412, reporting module 414, broadcast module 416, substitution module 418, and notification module 420. These sub-modules are presented by way of example and not limitation. More or fewer modules may be provided in different embodiments. For example, the functionality of some sub-modules may be combined into a single or smaller number of sub-modules, or the functionality of a single sub-module may, in certain embodiments, be distributed across several sub-modules.
- The
predefinition module 402 may enable a user to predefine an external pool 300 for a scratch pool 200, as well as volumes 202 within the external pool 300. For example, the predefinition module 402 may enable a user to define a range of volume serial numbers (i.e., volsers) for the external pool 300, as well as default constructs, default media types, thresholds, and the like, for volumes 202 within the external pool 300. This may be performed before the volumes 202 in the external pool 300 are actually needed within the scratch pool 200.
- The monitoring module 404 may monitor usage of volumes 202 in the scratch pool 200. This may include monitoring past and present usage as well as the number of volumes 202 that are available in the scratch pool 200 during these time periods. The monitoring module 404 may also monitor peak usage times or spikes in usage that may consume additional volumes 202 in the scratch pool 200, or times or periods when the scratch pool 200 ran out of volumes 202. In other cases, the monitoring module 404 may monitor the growth rate of volumes 202 in the scratch pool 200 and, in certain embodiments, whether this growth rate is outside of normal or an indicator of some type of problem or error. In addition to monitoring usage of volumes 202 in the scratch pool 200 (i.e., volumes 202 changing from scratch to private), the monitoring module 404 may also monitor the return of volumes 202 from the archive 204 to the scratch pool 200 (i.e., volumes 202 changing from private to scratch). The monitoring module 404 may, in certain embodiments, record observed numbers in a log and keep a rolling average (e.g., a 30-day rolling average) in order to track trends in scratch volume 202 consumption and return to the scratch pool 200.
- Based on the usage, growth rates, trends, etc. monitored by the
monitoring module 404, the prediction module 406 may predict future need for volumes 202 in the scratch pool 200. Alternatively, or in addition, a threshold module 408 may detect whether the number of volumes 202 in the scratch pool 200 has fallen below a threshold associated with the scratch pool 200. This threshold may, in certain embodiments, be based on the future need determined by the prediction module 406. For example, the threshold may be set at ninety percent of the designated need. If the number of volumes 202 in the scratch pool 200 falls below this threshold, the movement module 410 may move volumes 202 from the external pool 300 to the scratch pool 200 to more closely align the number of volumes 202 in the scratch pool 200 with the designated need for volumes 202.
- In certain embodiments, the scratch pool management module 400 carefully manages the number of volumes 202 in the scratch pool 200 to avoid exceeding the designated need and thereby placing additional processing and book-keeping overhead on host systems 106 using the virtual tape system 110b. In certain embodiments, the movement module 410 is configured to move volumes 202 from the scratch pool 200 to the external pool 300 when the scratch pool 200 contains more volumes 202 than are needed or anticipated to be needed.
- When
volumes 202 are returned to the scratch pool 200 from the archive 204, such as when data has expired or is no longer needed, the grace period module 412 may, in certain embodiments, provide a host system 106 a certain amount of time (i.e., a "grace period") to still be able to access data on the returned volumes 202. Thus, in certain cases, volumes 202 that have been returned to the scratch pool 200 may not be available for reuse by the virtual tape system 110b until the grace period has expired. In certain embodiments, the monitoring module 404 may take this grace period into account when determining how many volumes 202 are not just present in the scratch pool 200, but are actually available for reuse. This, in turn, may affect how many volumes 202 are moved from the external pool 300 to the scratch pool 200 to accommodate the designated need.
- Using data gathered by the monitoring module 404, the reporting module 414 may create reports showing which host systems 106, programs, job names, and data set naming patterns are consuming volumes 202 in the scratch pool 200. This may help an administrator determine whether the behavior is expected or whether configuration changes are needed in the workload. Thus, in certain embodiments, data that is collected by the monitoring module 404 and used by the prediction module 406 to forecast need may also be provided to an administrator in reports so that the administrator may use the data to make decisions or configuration changes in the virtual tape system 110b. This data may help an administrator understand what is consuming volumes 202 in the scratch pool 200 and/or why the volumes 202 are being consumed faster or at a different rate than expected.
- When
volumes 202 are moved from the external pool 300 to the scratch pool 200, or vice versa, the broadcast module 416 may communicate these changes to connected host systems 106. In certain embodiments, software (e.g., tape management system software) on a host system 106 may receive the broadcast and update its internal records, such as a tape configuration database and/or tape management system database.
- When a user attempts to insert/define new volumes 202 in the scratch pool 200, the substitution module 418 may check whether the volumes 202 already exist in the external pool 300. If the volumes 202 are present in the external pool 300, the substitution module 418 may substitute the volumes 202 in the external pool 300 for those the user is attempting to insert/define. These volumes 202 may then be moved from the external pool 300 to the scratch pool 200 in lieu of the attempted insertion. If the volumes 202 the user is attempting to insert are not in the external pool 300, the volumes 202 may be inserted into the scratch pool 200 in the conventional manner using conventional insert processing.
- In the event the external pool 300 is anticipated to run out of volumes 202, the notification module 420 may notify a user via, for example, a console message, that additional volumes 202 (i.e., an additional volser range) need to be added to the external pool 300. This may be performed well in advance of the external pool 300 actually running out of volumes 202. This enables the user to define additional volumes 202 in the external pool 300 so that the volumes 202 are available to the scratch pool 200 without actually placing additional volumes 202 in the scratch pool 200 until they are needed. This will prevent over-allocation of volumes 202 to the scratch pool 200 and the attendant negative effects on host performance.
- Referring to
FIG. 5 , one embodiment of amethod 500 for predefining anexternal pool 300 ofvolumes 202 is illustrated. Thismethod 500 may, in certain embodiments, be executed by thepredefinition module 402 previously discussed. As shown, themethod 500 may enable a user to assign 502 a range of volume serial numbers (volsers) to anexternal pool 300, as well as specify default constructs, default media types (e.g., what type of magnetic tape to emulate), and the like. The user may also specify thresholds for eachscratch pool 200 of thevirtual tape system 110 b. For example, the user may configure thevirtual tape system 110 b to movevolumes 202 from theexternal pool 300 to thescratch pool 200 if the number ofvolumes 202 in thescratch pool 200 falls below a certain specified number or percentage. - Using the configuration settings established at step 502, the
virtual tape system 110 b may automatically create 504 logical volumes 202 in the specified volser range with the specified constructs and media types. These logical volumes 202 may be placed in the external pool 300 so they are available for movement to the scratch pool 200 if and when they are needed. - Referring to
FIG. 6, a method 600 for utilizing the external pool 300 to increase the number of volumes 202 in the scratch pool 200 is illustrated. This method 600 may, in certain embodiments, be executed by the threshold module 408 and movement module 410 previously discussed. This method 600 may be executed after the scratch pools 200 and associated external pools 300 have been established. As shown, the method 600 initially examines 602 a first scratch pool 200 of the set of scratch pools 200. The method 600 then determines 604 the number of logical volumes 202 in the scratch pool 200. If, at step 606, the number of logical volumes 202 is below the threshold established for the scratch pool 200, the method 600 may move 608 logical volumes 202 from the associated external pool 300 to the scratch pool 200. The number of logical volumes 202 that are moved may, in certain embodiments, depend on the amount by which the number falls below the threshold. - The
method 600 may then notify 610 any connected host systems 106 of the logical volumes 202 that have been moved into the scratch pool 200. This may allow the host systems 106 to update their internal catalogs and/or databases. For example, the notification may prompt tape management software on the host systems 106 to update their tape configuration databases and/or tape management system databases. The method 600 may then be repeated 612 for each scratch pool 200 and associated external pool 300 in the virtual tape system 110 b. The method 600 may be executed periodically to maintain a needed number of volumes 202 in the scratch pools 200. -
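The replenishment loop of the method 600 can be sketched in Python as follows. This is an illustrative sketch only: the pool structures, the threshold map, and the `notify_hosts` callback are assumptions made for the example, not interfaces defined by the patent.

```python
def replenish(scratch_pools, external_pools, thresholds, notify_hosts):
    """For each scratch pool below its threshold, top it up from its
    associated external pool and notify hosts of the moved volumes.

    scratch_pools / external_pools -- dicts mapping pool name to a list
    of volsers; thresholds maps pool name to the minimum desired count.
    (Hypothetical structures for illustration.)
    """
    for name, scratch in scratch_pools.items():
        shortfall = thresholds[name] - len(scratch)   # amount below threshold
        if shortfall <= 0:
            continue                                  # pool is healthy
        external = external_pools[name]
        # Move as many volumes as the shortfall requires (or as many as exist).
        moved = [external.pop() for _ in range(min(shortfall, len(external)))]
        scratch.extend(moved)
        if moved:
            notify_hosts(name, moved)  # hosts update catalogs/databases
```

The number of volumes moved depends on the shortfall, mirroring step 608, and the callback at the end corresponds to the host notification of step 610.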
FIG. 7 shows one embodiment of a method 700 for substituting external pool volumes 202 for those that are manually inserted by a user. This method 700 may, in certain embodiments, be executed by the substitution module 418 previously discussed. As shown, the method 700 initially detects 702 when a user is attempting to insert volumes 202 into a scratch pool 200. When this occurs, the method 700 determines 704 whether the volumes 202 that are being inserted are already defined in an external pool 300 associated with the scratch pool 200. If the volumes 202 are already defined in the external pool 300, the method 700 substitutes 706 the volumes 202 from the external pool 300 for those being inserted by the user (particularly if the volumes 202 in the external pool 300 have the same volsers as those being inserted by the user). If the volumes 202 are not already defined in the external pool 300, the method 700 may continue 708 with insert processing in the conventional manner. The method 700 completes 710 the insert processing by inserting the volumes 202 into the scratch pool 200. The method 700 may also notify 712 any connected host systems 106 and/or administrator consoles that volumes 202 were added to the scratch pool 200 so that the host systems 106 and/or administrator consoles may update their internal records/databases. In certain embodiments, the method 700 also notifies the host systems 106 and/or administrator consoles that volumes 202 from the external pool 300 were substituted for those that the user attempted to insert manually. - The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention.
In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
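The insert-with-substitution flow of the method 700 (steps 702 through 712) can be sketched as follows. The data structures and the `notify` callback are hypothetical, chosen only to make the control flow concrete; they are not the patent's actual API.

```python
def insert_volumes(volsers, scratch_pool, external_pool, notify):
    """Insert volsers into the scratch pool, substituting any volume that
    is already defined in the associated external pool (steps 704-712).

    scratch_pool / external_pool are sets of volsers (illustrative only).
    """
    substituted, inserted = [], []
    for volser in volsers:
        if volser in external_pool:          # step 704: already defined?
            external_pool.remove(volser)     # step 706: substitute from external pool
            substituted.append(volser)
        else:
            inserted.append(volser)          # step 708: conventional insert processing
    scratch_pool.update(substituted + inserted)  # step 710: complete the insert
    # Step 712: tell hosts/consoles what was added and what was substituted.
    notify(added=substituted + inserted, substituted=substituted)
```

A host or console receiving the `substituted` list can then reconcile its records with the fact that pre-existing external-pool volumes were used in place of the manually inserted ones.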
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/010,461 US10521132B1 (en) | 2018-06-17 | 2018-06-17 | Dynamic scratch pool management on a virtual tape system |
Publications (2)
Publication Number | Publication Date |
---|---|
US20190384507A1 true US20190384507A1 (en) | 2019-12-19 |
US10521132B1 US10521132B1 (en) | 2019-12-31 |
Family
ID=68840710
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/010,461 Expired - Fee Related US10521132B1 (en) | 2018-06-17 | 2018-06-17 | Dynamic scratch pool management on a virtual tape system |
Country Status (1)
Country | Link |
---|---|
US (1) | US10521132B1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11461193B2 (en) * | 2020-09-24 | 2022-10-04 | International Business Machines Corporation | Data storage volume recovery management |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6513101B1 (en) | 2000-01-04 | 2003-01-28 | International Business Machines Corporation | Expiring host selected scratch logical volumes in an automated data storage library |
US7103731B2 (en) * | 2002-08-29 | 2006-09-05 | International Business Machines Corporation | Method, system, and program for moving data among storage units |
US6954831B2 (en) * | 2002-08-29 | 2005-10-11 | International Business Machines Corporation | Method, system, and article of manufacture for borrowing physical volumes |
US6985916B2 (en) * | 2002-08-29 | 2006-01-10 | International Business Machines Corporation | Method, system, and article of manufacture for returning physical volumes |
US8856450B2 (en) | 2010-01-25 | 2014-10-07 | International Business Machines Corporation | Systems for managing a cache in a multi-node virtual tape controller |
US8595430B2 (en) | 2010-09-30 | 2013-11-26 | International Business Machines Corporation | Managing a virtual tape library domain and providing ownership of scratch erased volumes to VTL nodes |
US9552370B1 (en) | 2013-06-27 | 2017-01-24 | EMC IP Holding Company LLC | Signaling impending out of storage condition from a virtual tape drive |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SCOTT, MICHAEL R.;REED, DAVID C.;MATSUI, SOSUKE;AND OTHERS;SIGNING DATES FROM 20180611 TO 20180613;REEL/FRAME:046110/0782 |
|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
FEPP | Fee payment procedure |
Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
LAPS | Lapse for failure to pay maintenance fees |
Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STCH | Information on status: patent discontinuation |
Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
|
FP | Lapsed due to failure to pay maintenance fee |
Effective date: 20231231 |