WO2017154127A1 - Storage apparatus performance management method and apparatus - Google Patents
Storage apparatus performance management method and apparatus
- Publication number
- WO2017154127A1 (PCT application PCT/JP2016/057302)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- performance
- storage
- volume
- storage device
- pool
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/0604—Improving or facilitating administration, e.g. storage management
- G06F3/0605—Improving or facilitating administration, e.g. storage management by facilitating the interaction with a user or administrator
- G06F3/061—Improving I/O performance
- G06F3/0611—Improving I/O performance in relation to response time
- G06F3/0635—Configuration or reconfiguration of storage systems by changing the path, e.g. traffic rerouting, path reconfiguration
- G06F3/0644—Management of space entities, e.g. partitions, extents, pools
- G06F3/0647—Migration mechanisms
- G06F3/0665—Virtualisation aspects at area level, e.g. provisioning of virtual or logical volumes
- G06F3/067—Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
- G06F3/0685—Hybrid storage combining heterogeneous device types, e.g. hierarchical storage, hybrid arrays
Definitions
- The present invention relates to a performance management method and apparatus for a storage apparatus in which two or more types of storage media having different rewrite lifetimes and response performances are mounted, for example flash drives having different memory cell structures, or flash drives and hard disk drives. It is suitably applied to a performance management method and apparatus for a storage apparatus that can stratify such combinations into tiers and move data between the tiers according to response performance.
- The present invention has been made in view of the above points, and its object is to propose a storage apparatus performance management method and apparatus that, in a storage apparatus using a combination of different types of drive devices with different lifetimes as storage media, make it possible to systematically cope with problems that may occur in the future when the drive devices reach the end of their lives.
- To achieve this object, the present invention provides a performance management apparatus for a storage apparatus comprising two or more types of storage media having different rewrite lifetimes and response performances. The storage apparatus provides a virtual volume, which constitutes a virtual logical volume, to a host computer connected to the storage apparatus using the two or more types of storage media. When a write request is issued from the host computer to an unwritten address of the virtual volume, the storage apparatus writes the data corresponding to the write request to a storage area that is unused in any one of the two or more types of storage media and has not reached its write life. Depending on the IO load imposed by the host computer on each storage area of the two or more types of storage media, the storage apparatus moves the storage location of the data to a storage area, among those of a storage medium with higher or lower response performance, that has not reached its write life. The performance management apparatus comprises a performance impact estimator that estimates, for each of the two or more types of storage media, the decrease in storage capacity accompanying the arrival of the rewrite lifetime, and estimates the performance impact on the virtual volume and its time of occurrence, and a performance impact display that displays information related to the estimated performance impact and its time of occurrence.
- The present invention also provides a performance management method performed by a performance management device that manages the performance of a storage device comprising two or more types of storage media having different rewrite lifetimes and response performances. The storage device provides a virtual volume, which constitutes a virtual logical volume, to a host computer connected to the storage device using the two or more types of storage media. When a write request is issued from the host computer to an unwritten address of the virtual volume, the storage device writes the data corresponding to the write request to a storage area that is unused in any of the two or more types of storage media and has not reached its write life. Depending on the IO load imposed by the host computer on each storage area of the two or more types of storage media, the storage device moves the storage location of the data either from a storage area of a storage medium with lower response performance to a storage area, not yet at its write life, of a storage medium with higher response performance, or from a storage area of a storage medium with higher response performance to a storage area, not yet at its write life, of a storage medium with lower response performance. The performance management method includes a performance impact estimation step in which the performance management device estimates, for each of the two or more types of storage media, the decrease in storage capacity due to reaching the rewrite lifetime and estimates the performance impact on the virtual volume and its time of occurrence, and a performance impact display step in which the performance management device displays information related to the estimated performance impact and its time of occurrence.
- According to the present invention, for a storage apparatus that uses a combination of different types of drive devices with different lifetimes as storage media, a performance management method and apparatus can be realized that make it possible to systematically cope with problems that may occur in the future when the drive devices reach the end of their lives.
- In the following description, various kinds of information may be described using expressions such as "aaa table", "aaa list", "aaaDB", and "aaa queue", but these pieces of information may also be expressed with data structures other than tables, lists, DBs, and queues. Therefore, "aaa table", "aaa list", "aaaDB", "aaa queue", and the like may be referred to as "aaa information" to indicate that they do not depend on a particular data structure.
- In the following description, processing may be explained with a program as the subject; however, since a program performs its predetermined processing by being executed by a processor while using a memory, a disk device, and a communication device, such processing may equally be explained with the processor as the subject. Processing disclosed with a program as the subject may be processing performed by a computer such as a management server or an information processing apparatus. Part or all of a program may be realized by dedicated hardware.
- Various programs may be installed in each computer from a program distribution server or via a computer-readable storage medium. In that case, the program distribution server includes a CPU and storage resources, and the storage resources store a distribution program and the programs to be distributed. When the distribution program is executed by the CPU, the CPU of the program distribution server distributes the distribution target programs to the other computers.
- FIG. 1 shows a configuration example of a computer system 1 according to this embodiment.
- In the computer system 1, one or more host computers 2, one or more storage apparatuses 3, a management computer 4, and a storage management client 5 are connected to one another via a management LAN (Local Area Network) 6.
- the host computer 2, the storage device 3, and the management computer 4 are connected to each other via one or a plurality of SAN (Storage Area Network) switches 7.
- the host computer 2 is a computer device that executes various processes according to user operations, and includes a processor, a memory, a disk device, and a communication device (not shown).
- the processor is hardware that controls the operation of the entire host computer 2.
- the memory is mainly used for storing various programs and also used as a work memory for the processor.
- Business software which is application software for executing user business, and an OS (Operating System) are also stored and held in this memory.
- the disk device is a large-capacity storage device that is used to hold various programs and various data for a long period of time, and includes, for example, a hard disk device.
- the communication device performs protocol control during communication with the storage apparatus 3 and the management computer 4 performed via the management LAN 6 and the SAN switch 7.
- the storage device 3 is a large-capacity storage device that provides a virtual logical volume (hereinafter referred to as a virtual volume or simply a volume) to the host computer 2.
- the storage apparatus 3 includes one or a plurality of storage devices 20 and a controller 21 that controls data input / output with respect to the storage devices 20.
- The storage device 20 is composed of two or more types of storage media having different rewrite lifetimes and response performances.
- Examples of the storage device 20 include flash drives (SSD (Solid State Drive)) using flash memories having different memory cell structures, or a combination of a flash drive and a hard disk drive.
- the controller 21 has one or more processors, a memory, and one or more cache memories.
- the processor is hardware that controls the operation of the entire storage apparatus 3.
- the memory is used mainly for storing various programs and also used as a work memory for the processor. Further, the memory stores a virtual environment control program for performing control according to a read request or a write request issued from the host computer 2 to the virtual volume.
- the cache memory is used to temporarily hold data input / output to / from the storage device 20.
- the management computer 4 is a computer device that manages the entire computer system 1 and includes a processor 30, a memory 31, a disk device 32, and a communication device 33.
- the processor 30 is hardware having a function of controlling the operation of the entire management computer 4.
- the memory 31 is used mainly for storing various programs and also used as a work memory for the processor 30.
- In the memory 31, storage management software 34 is stored.
- the storage management software 34 has a function of collecting configuration information and performance information of the SAN environment and analyzing performance problems based on the collected information.
- the disk device 32 is a large-capacity storage device used to hold various programs and various data for a long period of time, and includes, for example, a hard disk device.
- the communication device 33 performs protocol control during communication with the host computer 2 and the storage apparatus 3 performed via the management LAN 6 and the SAN switch 7.
- The management computer 4 may have input/output devices. As the input/output devices, at least one of a display, a keyboard, and a pointing device can be considered, but other devices may be used. As an alternative to the input/output devices, a serial interface (or an Ethernet interface) may be employed. A set of one or more computers that manage the computer system 1 and display the display information may be referred to as a management system. When the management computer 4 displays the display information, the management computer 4 is the management system, and a combination of the management computer 4 and a display computer is also a management system. Processing equivalent to that of the management computer 4 may be realized by a plurality of computers; in that case, the plurality of computers (including the display computer when the display computer performs the display) constitute the management system.
- the storage management client 5 is a communication terminal device that provides the user interface of the storage management software 34 stored in the memory 31 of the management computer 4 as described above.
- the storage management client 5 has at least an input device and an output device not shown.
- the input device is hardware for a user to perform various inputs, and is, for example, at least one of a keyboard, a mouse, and a touch panel.
- the output device is a display device that displays various GUIs (Graphical User Interface) and the like, and includes, for example, a liquid crystal panel.
- the storage management client 5 communicates with the storage management software 34 of the management computer 4 via the management LAN 6.
- FIG. 2 is a conceptual diagram showing an example of a hierarchical storage system using Tier.
- the hierarchical storage method is a management method for moving data between a high-performance (and therefore relatively high-cost) storage medium and a low-performance (relatively low-cost) storage medium according to a predetermined policy.
- In the hierarchical storage method, pools created using storage devices 20 such as flash drives of different types, or flash drives and hard disk drives (hereinafter also referred to as "hierarchical pools"), are used. For the virtual volumes belonging to a hierarchical pool, which performance tier of storage device 20 is allocated is determined based on the access load in units of pages. Specifically, for example, a high-performance drive is assigned to a page with a high access load, while a low-performance drive is assigned to a page with a low access load.
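As a hedged sketch of this page-placement rule (the function name, capacities, and load values are invented for illustration), pages can simply be sorted by access load and poured into the tiers from fastest to slowest until each tier's capacity is used up:

```python
def assign_pages_to_tiers(page_loads: dict[str, float],
                          tier_capacities: list[int]) -> dict[str, int]:
    """page_loads: page ID -> access load (e.g. IOPS).
    tier_capacities: capacity in pages of Tier 1, Tier 2, ... (fastest first).
    Returns page ID -> tier number (1-based)."""
    placement = {}
    # Hottest pages first, so they land in the fastest tier.
    ordered = sorted(page_loads, key=page_loads.get, reverse=True)
    tier, used = 0, 0
    for page in ordered:
        # Spill into the next tier once the current one is full.
        while tier < len(tier_capacities) and used >= tier_capacities[tier]:
            tier, used = tier + 1, 0
        if tier >= len(tier_capacities):
            raise RuntimeError("pool is out of capacity")
        placement[page] = tier + 1
        used += 1
    return placement
```

For example, with two tiers of two pages each, the two hottest pages land in Tier 1 and the remaining pages in Tier 2.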
- In the example of FIG. 2, the two host computers 2 named "Host A" and "Host B" are configured so that each of them can access the volumes with volume IDs "VOL1" to "VOL7" in the storage apparatus 3 via the SAN switch 7.
- Volumes with volume IDs “VOL1” to “VOL3” are assigned to a pool with pool ID “PoolA”
- volumes with volume IDs “VOL4” and “VOL5” are assigned to a pool with pool ID “PoolB”
- Volumes with volume IDs “VOL6” and “VOL7” are assigned to a pool with pool ID “PoolC”.
- Of these pools, the two pools with pool IDs "Pool A" and "Pool B" are stratified into, for example, three tiers, Tier 1 to Tier 3, while the pool with pool ID "Pool C" is stratified into, for example, two tiers, Tier 1 and Tier 2. Here, the identifiers Tier 1 to Tier 3 used to distinguish these tiers from one another are referred to as tier IDs.
- In the pool "Pool A", storage devices 20 such as flash drives A11 and A12 (not shown in the drawing; the same applies hereinafter) are assigned to the tier with tier ID "Tier1", storage devices 20 such as flash drive A21 are assigned to the tier with tier ID "Tier2", and storage devices 20 such as hard disk drive A31 are assigned to the tier with tier ID "Tier3". Similarly, in the pool "Pool B", storage devices 20 such as flash drive B11 are assigned to the tier with tier ID "Tier1", storage devices 20 such as flash drive B21 to the tier with tier ID "Tier2", and storage devices 20 such as hard disk drive B31 to the tier with tier ID "Tier3". In the pool "Pool C", storage devices 20 such as flash drives C11 and C12 are assigned to the tier with tier ID "Tier1", and storage devices 20 such as hard disk drive C21 to the tier with tier ID "Tier2".
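The pool, tier, and drive layout of this Fig. 2 example can be written out as plain data (the nesting shown here is an illustrative structure, not a format defined by the patent; drive names follow the text):

```python
# Pools from Fig. 2: each pool maps to its volumes and its tiered drives.
pools = {
    "PoolA": {
        "volumes": ["VOL1", "VOL2", "VOL3"],
        "tiers": {
            "Tier1": ["FlashDriveA11", "FlashDriveA12"],
            "Tier2": ["FlashDriveA21"],
            "Tier3": ["HardDiskDriveA31"],
        },
    },
    "PoolB": {
        "volumes": ["VOL4", "VOL5"],
        "tiers": {
            "Tier1": ["FlashDriveB11"],
            "Tier2": ["FlashDriveB21"],
            "Tier3": ["HardDiskDriveB31"],
        },
    },
    "PoolC": {
        "volumes": ["VOL6", "VOL7"],
        "tiers": {
            "Tier1": ["FlashDriveC11", "FlashDriveC12"],
            "Tier2": ["HardDiskDriveC21"],
        },
    },
}
```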
- FIG. 3 is a block diagram mainly showing a configuration example of the management computer 4 according to the present embodiment.
- In FIG. 3, an arrow indicates a flow of information: the information stored in the table at the rear end of an arrow is used by the processing unit at its tip, and a processing unit at the rear end of an arrow stores the information obtained as its processing result in the table at the arrow's tip.
- The management computer 4 includes, for example, an analysis condition setting unit 102, a drive flush count upper limit arrival time estimation unit 104, a page placement destination tier specifying unit 105, a volume response performance deterioration time estimation unit 106, a volume response performance estimation unit 107, a volume target response performance unachieved time estimation unit 108, an analysis mode I processing unit 109, an analysis mode II processing unit 110, an analysis result display unit 111, and an analysis result table 112.
- the storage device information collection unit 100 constantly collects storage information to be described later from the storage device 3 and stores it in the storage device information storage unit 101.
- The drive flush count upper limit arrival time estimation unit 104 and the other units use the storage device information in the storage device information storage unit 101 to perform the analysis processing, as will be described in detail later.
- the analysis result display unit 111 displays the analysis result based on the analysis result table 112 including such an analysis result.
- The management computer 4 includes, as part of its table group, an analysis condition table 200, a flush count upper limit arrival time table 201, a tier capacity estimation table 202, a page placement destination tier table 203, a page load estimation table 204, a response performance degradation volume table 205, a volume response performance table 206, a target unachieved volume table 207, a required additional capacity table 208, and a volume migration plan table 209. These are only part of the table group; the remaining tables will be described later.
- Based on the combination of tables described above, the management computer 4 displays, for example, the time at which volume response performance deteriorates, or the time at which the target response performance can no longer be achieved (hereinafter also referred to as the "unachieved time"), caused by a decrease in the total capacity of storage devices 20 such as flash drives. As a countermeasure for extending the period during which the target response performance of a volume can be achieved (hereinafter also referred to as the "target achievement period"), it displays information on the capacity of the drives that need to be added and information on the migration destination pool.
- FIG. 4 shows an example of a data input screen for setting analysis conditions.
- FIG. 5 shows an example of a result output screen showing the result of analysis based on the input analysis condition.
- the analysis condition setting screen 300 shown in FIG. 4 is displayed by the analysis condition setting unit 102 (see FIG. 3).
- The analysis condition setting screen 300 is provided with a common input field 310, an analysis mode I input field 320, an analysis mode II input field 330, an analysis mode III input field 340, and an analysis execution button 350.
- The analysis condition setting unit 102 displays the analysis condition setting screen 300 based on the information already stored in the storage device information storage unit 101 described above, and registers the analysis conditions input on the analysis condition setting screen in the analysis condition table 200.
- several analysis modes are prepared as analysis methods for future volume performance degradation. Details of each analysis mode will be described later.
- the common input field 310 is an item field for setting common input items in each analysis mode.
- the common input field 310 includes a storage device input field 311, a pool input field 312, and a target response performance input field 313.
- The storage device input field 311 is an input field in pull-down menu format (not shown); by selecting the storage device ID of a desired storage apparatus 3 from the storage device IDs listed in the pull-down menu, the storage apparatus 3 to be analyzed is selected, and the storage device ID corresponding to the selected storage apparatus 3 is input. The storage device IDs listed in the pull-down menu are those of all the storage apparatuses 3 managed by the management computer 4. In the illustrated example, "Storage A" is selected as the storage device ID.
- The pool input field 312 is an input field in pull-down menu format (not shown); the pool ID of a desired pool is selected from the pool IDs listed in the pull-down menu, and the pool ID corresponding to the selected pool is input.
- the pool IDs listed in the pull-down menu are the pool IDs of all the pools constructed in the storage device 3 of the storage device ID selected in the storage device input field 311. In the illustrated example, for example, “Pool A” is selected as the pool ID.
- the target response performance input column 313 is a column in which response performance to be targeted (hereinafter referred to as “target response performance”) is input.
- In the target response performance input field 313, the volume IDs of all the volumes assigned to the pool selected in the pool input field 312 are listed, together with the host IDs of the host computers 2 that access each volume.
- a desired target response performance can be input to each volume.
- The target response performance of a volume for which no target is set is represented by the value "not set". In the illustrated example, "0.5 [ms]" is set as the target response performance for the volume with volume ID "VOL1", accessed by the host computer 2 with host ID "Host A", at the top of the list.
- The analysis condition setting screen 300 includes a radio button 321 that enables input to the analysis mode I input field 320, a radio button 331 that enables input to the analysis mode II input field 330, and a radio button 341 that enables input to the analysis mode III input field 340.
- In the analysis mode I input field 320, the last day of the target achievement maintenance period can be set for analysis mode I, which displays the capacity that needs to be added.
- In the analysis mode II input field 330, the last day of the target achievement maintenance period can be set for analysis mode II, which displays the volume migration plan.
- In the analysis mode III input field 340, a planned expansion capacity can be set for each tier ID for analysis mode III, which displays the target unachieved volumes.
- the analysis execution button 350 is an operation button for causing the later-described analysis to be performed based on the analysis conditions input as described above.
- the analysis result screen 400 shown in FIG. 5 has an analysis result main display column 410, an expansion capacity display column 420, and a volume migration plan display column 430.
- the analysis result display unit 111 described above displays the analysis result based on the analysis result table 112.
- The analysis result main display column 410 displays, for each period, the volume IDs of the volumes whose response performance has deteriorated due to the decrease in the total capacity of storage devices 20 such as flash drives, and the volume IDs of the volumes for which the target response performance is not achieved.
- the expansion capacity display field 420 displays the analysis result only when the analysis mode I is selected on the data input screen.
- In the expansion capacity display field 420, the additional capacity and the extended deadline are displayed for each tier ID of the pool, identified by its pool ID, constructed in the storage apparatus identified by its storage device ID.
- The volume migration plan display field 430 displays the analysis result only when analysis mode II is selected on the data input screen; for each volume to be migrated, the migration destination pool ID is displayed.
- FIG. 6 shows an example of the analysis condition table 200.
- Analysis conditions are managed in the analysis condition table 200; as described above, new analysis conditions are registered, and registered analysis conditions are updated, by the analysis condition setting unit 102. In the illustrated example, the storage apparatus with storage device ID "Storage A" and the pool with pool ID "Pool A" are the analysis target, and analysis mode I is applied to the analysis.
- FIGS. 7 to 9 show examples of the volume target response performance table 220, the target maintenance deadline table 221, and the expansion capacity table 222, respectively.
- In the volume target response performance table 220, the target maintenance deadline table 221, and the expansion capacity table 222, the analysis conditions input in the target response performance input field 313, the analysis mode I input field 320, the analysis mode II input field 330, and the analysis mode III input field 340 of the analysis condition setting screen 300 are managed; new analysis conditions are registered, and registered analysis conditions are updated, by the analysis condition setting unit 102.
- the volume target response performance table 220 manages the target response performance 220B for the volume ID 220A assigned to each volume.
- In the target maintenance deadline table 221, the deadline 221A (hereinafter also referred to as the "target maintenance deadline") by which achievement of the performance target must be maintained is managed.
- a capacity 222B that can be expanded to a tier corresponding to each tier ID 222A (for example, Tier 1 and Tier 2) is managed in a pool that is tiered to each tier in order to extend the time limit for maintaining achievement of the performance target. Yes.
- FIGS. 10 to 18 show examples of the group of tables constituting the information acquired from the plurality of storage devices 3 by the storage device information collection unit 100 and stored in the storage device information storage unit 101: the storage device table 230, the pool table 231, the pool configuration tier table 232, the volume table 233, the volume capacity information table 234, the flash drive table 235, the drive flush count table 236, the page table 237, and the page load table 238, respectively.
- In the storage device table 230, the storage device IDs of all storage devices 3 to be managed by the management computer 4 are registered.
- In the pool table 231, the storage device ID of the storage device 3 in which the pool of each pool ID 231A is constructed is registered.
- In the pool configuration tier table 232, for each pool ID 232B of the pool to which the tier of each tier ID 232A belongs and the storage device ID 232C of that storage device, the response time 232D and the allocated capacity 232E of the storage devices 20 assigned to that tier are registered.
- In the volume table 233, for each volume ID 233A, the storage device ID 233B of the storage device to which the volume is assigned, the pool ID 233C of its pool, and the connection destination host ID 233D of the host computer 2 that accesses the volume are registered.
- In the volume capacity information table 234, the year/month 234A, volume ID 234B, storage device ID 234C, pool ID 234D, allocated capacity 234E, and used capacity 234F are registered.
- In the flash drive table 235, a drive ID 235A, storage device ID 235B, allocation pool ID 235C, capacity 235D, allocation tier ID 235E, and flush limit count 235F are registered.
- In the drive flush count table 236, a drive ID 236A, storage device ID 236B, time 236C, and flush count 236D are registered.
- In the page table 237, a page ID 237A, tier ID 237B, pool ID 237C, storage device ID 237D, volume ID 237E, and page size 237F are registered.
- In the page load table 238, the year/month 238A, page ID 238B, pool ID 238C, storage device ID 238D, and IOPS 238E are registered.
- FIGS. 19 to 24 show examples of the flush count upper limit arrival time table 201, the tier capacity estimation table 202, the page load estimation table 204, the page placement destination tier table 203, the response performance degradation volume table 205, and the volume response performance table 206, respectively.
- The flush count upper limit arrival time table 201 manages the flush count upper limit arrival time 201B for each drive ID 201A; the above-described drive flush count upper limit arrival time estimation unit 104 updates the arrival time 201B for each drive ID 201A.
- the tier capacity estimation table 202 manages the capacity of each tier for each year / month 202A, for example, the Tier 1 capacity 202B and the Tier 2 capacity 202C.
- the page load estimation table 204 manages the year / month 204A, the page ID 204B, and the IOPS 204C, and manages the IOPS (number of I / Os per second) of each page in a certain year / month.
- the page arrangement destination hierarchy table 203 manages the year / month 203A, the page ID 203B, and the hierarchy ID 203C, and the page arrangement destination hierarchy specifying unit 105 updates or adds the year / month 203A, the page ID 203B, and the hierarchy ID 203C.
- The response performance degradation volume table 205 manages the year/month 205A and the volume ID 205B.
- It manages the volumes whose response performance is estimated to start degrading in a certain year/month because the total capacity of the storage devices 20 (such as flash drives) in each tier decreases and the tiers to which the pages constituting the volume are assigned shift to lower-performance tiers.
- The volume response performance deterioration time estimation unit 106 updates or adds the year/month 205A and the volume ID 205B.
- the volume response performance table 206 manages the year / month 206A, the volume ID 206B, and the response performance 206C, and manages the volume response performance in a certain year / month by these.
- the volume response performance estimation unit 107 updates or adds the year / month 206A, the volume ID 206B, and the response performance 206C.
- FIGS. 25 to 28 show examples of the target unachieved volume table 207, the analysis result table 240, the required additional capacity table 208, and the volume migration plan table 209, respectively.
- The target unachieved volume table 207 manages the year/month 207A and the volume ID 207B; it manages the volumes whose response performance is estimated not to satisfy the target response performance in a certain year/month because the total capacity of the storage devices 20 (such as flash drives) in each tier decreases and the tiers to which the pages constituting the volume are assigned shift to lower-performance tiers.
- The year/month 207A and the volume ID 207B are updated or added by the volume target response performance unachieved time estimation unit.
- The analysis result table 240 manages the year/month 240A, the performance degradation volume 240B, and the target unachieved volume 240C; it manages, for each year/month, the volumes whose response performance is estimated to start degrading and the volumes whose response performance is estimated not to achieve the target.
- The required additional capacity table 208 manages the tier ID 208A, the required additional capacity 208B, and the extension time limit 208C, that is, for each tier of the pool, the capacity that must be added and the time limit by which it must be added.
- the analysis mode I processing unit 109 updates or adds the layer ID 208A, the required additional capacity 208B, and the extension time limit 208C.
- the volume migration plan table 209 manages the volume ID 209A, the migration destination storage apparatus ID 209B, and the migration destination pool ID 209C, and manages the volume migration plan.
- the volume ID 209A, migration destination storage device ID 209B, and migration destination pool ID 209C are updated or added by the analysis mode II processing unit 110.
- FIG. 30 shows an example of storage device information acquisition processing that is constantly executed.
- the storage device information collection unit 100 collects the latest configuration information, capacity information, and performance information from each storage device 3 (step S1).
- The storage device information collection unit 100 registers the collected storage device information in the storage device information storage unit 101 (step S2).
- the storage device information collection unit 100 repeatedly executes steps S1 and S2 as described above for each storage device 3.
- FIG. 29 is a flowchart showing an example of information collection / estimation / result display processing.
- FIG. 31 is a flowchart showing an example of the analysis condition setting process S1000 shown in FIG.
- FIG. 32 is a flowchart showing an example of the volume response performance analysis process S2000 shown in FIG. 33 to 39 are flowcharts specifically showing the respective processes (S2100, S2200, S2220, S2230, S2300, S2400, and S2500) shown in FIG. 32 and FIG.
- the processor 30 executes analysis condition setting processing (S1000).
- The processor 30 sets the storage apparatus IDs acquired from the storage apparatus ID 230A of the storage apparatus table 230 in the pull-down menu list of the storage apparatus input field 311 of the common input field 310 on the analysis condition setting screen (step S1101 in FIG. 31).
- "Storage A" is displayed in the storage device input field 311 as the state after the user has selected a storage device ID from the pull-down menu list.
- The processor 30 acquires the pool IDs related to the storage device ID designated in the storage device input field 311 from the pool table 231 (see FIG. 11) and sets them in the pull-down menu list of the pool input field 312 (step S1102).
- The processor 30 acquires the volume IDs and connection destination host IDs of the volumes related to the pool ID specified in the pool input field 312 from the volume table 233 (see FIG. 13) and displays them as a list (step S1103).
- When detecting that the analysis execution button 350 has been pressed (step S1104), the processor 30 registers the analysis target storage device ID, pool ID, and analysis mode entered on the analysis condition setting screen in the analysis condition table 200 (see FIG. 6) (step S1105).
- the processor 30 registers the volume target response performance input to the analysis condition setting screen in the volume target response performance table 220 (see FIG. 7) in association with the volume ID (step S1106).
- the processor 30 divides and executes processing as follows according to the analysis mode. That is, when the analysis mode is I or II, the processor 30 registers the target maintenance deadline input on the analysis condition setting screen in the target maintenance deadline table 221 (see FIG. 8) (step S1108) and ends. When the analysis mode is III, the processor 30 registers the additional capacity input on the analysis condition setting screen in the additional capacity table 222 (see FIG. 9) (step S1109), and ends. On the other hand, when the analysis mode is not selected, the processor 30 ends the process.
- the processor 30 performs a drive flush number upper limit arrival time estimation process S2100 and a page placement destination hierarchy specifying process S2200 as shown in FIG. Then, a volume response performance deterioration time estimation process S2300, a volume response performance analysis process S2400, and a volume target response performance unachieved time estimation process S2500 are executed.
- The processor 30 first extracts, from among the drives whose drive IDs are registered in the flash drive table 235 (see FIG. 15), the drive IDs of the drives belonging to the pool with the pool ID specified as the analysis condition (step S2101).
- Next, for the flash drive of a selected drive ID, the processor 30 estimates the future time at which the flush count will reach its upper limit (hereinafter also referred to as the "flush count upper limit arrival time"), based on the flush limit count registered in the flash drive table 235 (see FIG. 15) and the flush counts at two or more times registered in the drive flush count table 236 (see FIG. 16) (step S2102). Specifically, assuming that the flush counts at two times T1 and T2 are F1 and F2, respectively, and that the flush limit count is Fx, the estimated flush count upper limit arrival time Tx is obtained by the following equation (1): Tx = T2 + (Fx - F2) × (T2 - T1) / (F2 - F1).
- a least square method or another estimation method may be used.
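The two-point estimate of equation (1) amounts to linear extrapolation of the flush count history. A minimal sketch under that reading (the function and variable names are illustrative, not taken from the patent):

```python
def estimate_limit_arrival(t1, f1, t2, f2, f_limit):
    """Linearly extrapolate the cumulative flush count through the two
    samples (t1, f1) and (t2, f2) and return the time Tx at which it is
    expected to reach the drive's flush limit f_limit."""
    rate = (f2 - f1) / (t2 - t1)      # flushes per unit of time
    return t2 + (f_limit - f2) / rate
```

With samples of 100 flushes at time 0 and 200 flushes at time 10, and a limit of 500, the estimated arrival time is 40. As the text notes, a least squares fit over more than two samples can be substituted for the two-point extrapolation.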
- the processor 30 registers the estimated flash count upper limit reaching time in the flash count upper limit reaching time table 201 (see FIG. 19) (step S2103).
- the processor 30 repeatedly executes step S2102 and step S2103 as described above for all the drives extracted in step S2101 (step S2104).
- The processor 30 ends the processing when the analysis mode is not III; when the analysis mode is III, the processor 30 creates data for additional registration in the flash drive table 235 (see FIG. 15) based on the information registered in the additional capacity table 222 (see FIG. 9) (step S2105).
- The drive ID is set arbitrarily within a unique range, while the flush limit count is set to the same value as that of the drives belonging to the same tier ID.
- the processor 30 additionally registers the created data in the flash drive table 235 (step S2106).
- the processor 30 executes the above-described page arrangement destination layer specifying process.
- The processor 30 selects the flash drives registered in the flush count upper limit arrival time table 201 one by one, in order of earliest flush count upper limit arrival time.
- It subtracts the capacity of the selected drive from the allocated capacity of the tier to which the drive belongs among the pool configuration tiers in the pool configuration tier table 232 (see FIG. 12), and registers the result in the tier capacity estimation table 202 (see FIG. 20) in association with the flush count upper limit arrival time (step S2210).
- The allocated capacity of the tiers to which the drive does not belong is registered as it stands, as registered in the pool configuration tier table 232 (see FIG. 12).
- the processor 30 repeats the above-described step S2210 for all the flash drives registered in the flash count upper limit arrival time table 201 in order from the earliest flash count upper limit reach timing.
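Step S2210 can be pictured as walking the drives in arrival-time order and shrinking only the tier each expiring drive belongs to. A simplified sketch (the data shapes and names are assumptions, not the patent's table layouts):

```python
def tier_capacity_timeline(drives, tier_capacity):
    """drives: iterable of (arrival_time, tier_id, capacity) rows, i.e. the
    flush count upper limit arrival time table joined with the flash drive
    table.  tier_capacity: dict tier_id -> allocated capacity from the pool
    configuration tier table.  Returns a list of (arrival_time, snapshot)
    pairs, one per drive, in order of earliest arrival time."""
    remaining = dict(tier_capacity)
    timeline = []
    for when, tier_id, capacity in sorted(drives):
        # Only the tier the expiring drive belongs to loses capacity;
        # the other tiers keep their allocated capacity unchanged.
        remaining[tier_id] -= capacity
        timeline.append((when, dict(remaining)))
    return timeline
```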
- The processor 30 executes the increase page estimation process shown in FIG. 34 (step S2220). Specifically, as shown in FIG. 35, the processor 30 calculates, from the history of the volume usage rate (the ratio of the used capacity to the allocated capacity) in the volume capacity information table 234 (see FIG. 14), the average value of the capacity usage rate for each elapsed time from the time of volume creation (step S2221).
- The volumes in the target range for calculating this average value may be all the volumes registered in the volume capacity information table 234, or may be narrowed down to volumes that are similar from the following viewpoints (A) to (C).
- (A) Pool: the processor 30 refers to the volume table 233 shown in FIG. 13 and narrows down to the volumes belonging to the pool with the pool ID to be analyzed.
- (B) Volume performance information (read count, write count, etc.): the processor 30 periodically collects volume performance information and stores it in the storage device information storage unit 101, and narrows down the volumes based on the most recent value or the average value over a predetermined period of this performance information.
- (C) Business that uses the volume (mail server, online transaction processing business server, batch processing business server, etc.): the processor 30 stores information indicating the business type of the connection destination host in the storage device information storage unit 101, and narrows down the volumes based on this business type information.
- The processor 30 estimates the future capacity of each volume based on the average capacity usage rate for each elapsed time from the time of volume creation, divides it by the capacity of a single page to estimate how many pages will be newly allocated to the volume at which future times, and sets page IDs arbitrarily, within a unique range, for these pages to be added in the future (step S2222).
- the processor 30 calculates an average value of loads (IOPS and the like) for each elapsed time from the time when the page is generated based on the history of page loads (IOPS and the like) registered in the page load table 238.
- a load (IOPS or the like) after the addition of each page estimated to be added in the future in step S2222 is estimated (step S2223).
- Among the loads (IOPS, etc.) estimated in step S2223 for each page estimated in step S2222 to be added in the future, the processor 30 registers the value for each year/month registered in the tier capacity estimation table 202 (see FIG. 20) in the page load estimation table 204 (see FIG. 21) (step S2224).
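Steps S2221 and S2222 reduce to projecting a volume's used capacity from the averaged usage-rate curve and converting each month's increase into whole pages. A sketch under stated assumptions (the monthly granularity and the function name are illustrative):

```python
import math

def estimate_new_pages(avg_usage_rate, allocated_capacity, page_size):
    """avg_usage_rate: average used/allocated ratio for each elapsed month
    since volume creation (step S2221).  Returns, for each future month,
    how many pages are newly allocated to a volume with the given
    allocated capacity (step S2222)."""
    totals = [math.ceil(rate * allocated_capacity / page_size)
              for rate in avg_usage_rate]
    new_pages, prev = [], 0
    for total in totals:
        new_pages.append(max(0, total - prev))   # pages added this month
        prev = max(prev, total)
    return new_pages
```

For a volume with 100 units allocated and a page size of 10, usage rates of 10%, 25%, and 50% over three months yield one new page in the first month and two in each of the next two.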
- The processor 30 executes the page load increase estimation process shown in FIG. 34 (step S2230). Specifically, as shown in FIG. 36, the processor 30 first selects one of the year/months registered in the tier capacity estimation table 202 (see FIG. 20), in order from the oldest. Next, it selects any one of the page IDs registered in the page table 237 (see FIG. 17) and estimates the load (IOPS, etc.) of that page in the selected year/month based on the page's loads (IOPS, etc.) at two or more times registered in the page load table 238 (see FIG. 18) (step S2231). Specifically, the estimation is performed using the least squares method or another estimation method. Next, the processor 30 registers the estimated load (IOPS, etc.) in the page load estimation table 204 (see FIG. 21) in association with the year/month and the page ID (step S2232).
- The processor 30 repeatedly executes steps S2231 and S2232 as described above for all pages registered in the page table 237 (see FIG. 17), and further repeats this for all year/months in the year/month column 202A registered in the tier capacity estimation table 202, in order from the oldest.
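The least squares fit mentioned for step S2231 is, in the simplest case, an ordinary straight-line regression over the page's load history. A self-contained sketch (the names are illustrative):

```python
def forecast_iops(samples, month):
    """Fit a straight line to (month_index, iops) samples by ordinary
    least squares and return the predicted IOPS for a future month."""
    n = len(samples)
    sx = sum(t for t, _ in samples)
    sy = sum(v for _, v in samples)
    sxx = sum(t * t for t, _ in samples)
    sxy = sum(t * v for t, v in samples)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope * month + intercept
```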
- The processor 30 selects one of the year/months registered in the tier capacity estimation table 202 (see FIG. 20), in order from the oldest, and acquires the allocated capacity of each pool configuration tier in that year/month (step S2240).
- the processor 30 selects one of the pool configuration layers in order from the upper layer, and calculates the number of pages that can be arranged in the pool configuration layer by dividing the allocated capacity of the pool configuration layer by the page size. (Step S2250).
- The processor 30 sorts the rows corresponding to that year/month in the page load estimation table 204 (see FIG. 21) in descending order of IOPS, and acquires, from the first page, as many pages as the number of pages that can be placed in the tier calculated above (step S2260). Pages that have already been acquired in this repetitive process are excluded.
- the processor 30 associates the acquired page with the pool configuration hierarchy in the year and month, and registers them in the page arrangement destination hierarchy table 203 (see FIG. 22) (step S2270).
- the processor 30 repeatedly executes the above steps S2250 to S2270 in order from the upper layer for all the layers of the pool configuration layer.
- The processor 30 repeatedly executes the above step S2240, together with the repetition of steps S2250 to S2270 over all tiers of the pool configuration, for all year/months registered in the tier capacity estimation table 202 (see FIG. 20), in order from the oldest.
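The placement loop of steps S2250 to S2270 is essentially a greedy fill: sort the month's pages by estimated IOPS and pour them into tiers from the top down, each tier holding allocated capacity divided by page size pages. A condensed sketch (the function and argument names are assumptions):

```python
def place_pages(page_iops, tier_order, tier_capacity, page_size):
    """page_iops: dict page_id -> estimated IOPS for one year/month.
    tier_order: tier IDs from highest to lowest performance.
    Returns dict page_id -> tier_id, the page placement destination."""
    pages = sorted(page_iops, key=page_iops.get, reverse=True)
    placement, taken = {}, 0
    for tier in tier_order:
        slots = tier_capacity[tier] // page_size  # pages this tier can hold
        for page in pages[taken:taken + slots]:
            placement[page] = tier
        taken += slots
    return placement
```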
- the processor 30 executes the volume response performance deterioration time estimation process S2300 shown in FIG. 32 as shown in FIG.
- the processor 30 selects one of the years and months registered in the page arrangement destination hierarchy table 203 (see FIG. 22) in order from the oldest, and sets the page ID of each page in that year and month to the corresponding hierarchy ID. Obtained from the placement destination hierarchy table 203 (see FIG. 22) (step S2301).
- Next, the processor 30 acquires from the page placement destination tier table 203 (see FIG. 22) the tier ID corresponding to the page ID of each page in the year/month following the selected year/month (step S2302).
- The processor 30 compares the information acquired in steps S2301 and S2302 as described above, and selects and extracts the pages whose page ID is the same but whose tier ID has changed (step S2303).
- the processor 30 refers to the page table 237 (see FIG. 17), and specifies the volume ID of the volume to which the page selected and extracted in step S2303 belongs (step S2304).
- the processor 30 associates the volume ID identified in step S2304 with the year and month and registers it in the response performance degradation volume table 205 (see FIG. 23) (step S2305).
- the processor 30 repeatedly executes steps S2301 to S2305 as described above in order from the oldest for all years registered in the page arrangement destination hierarchy table 203 (see FIG. 22).
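Steps S2301 to S2305 compare each page's tier between one year/month and the next and collect the owning volumes of pages that moved down. A sketch of that comparison (a lower-performance tier is represented here by a larger rank number; all names are illustrative):

```python
def degraded_volumes(placement_now, placement_next, page_volume, tier_rank):
    """placement_now / placement_next: dict page_id -> tier_id for two
    consecutive year/months.  page_volume: dict page_id -> volume_id from
    the page table.  tier_rank: dict tier_id -> rank (1 = fastest).
    Returns the set of volumes owning pages that moved to a slower tier."""
    volumes = set()
    for page, tier in placement_now.items():
        nxt = placement_next.get(page)
        if nxt is not None and tier_rank[nxt] > tier_rank[tier]:
            volumes.add(page_volume[page])
    return volumes
```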
- the processor 30 executes the volume response performance analysis process S2400 shown in FIG. 32 as shown in FIG.
- The processor 30 selects one of the volumes registered in the response performance degradation volume table 205 (see FIG. 23), refers to the page table 237 (see FIG. 17) to acquire the page IDs of all pages belonging to the volume, and refers to the page load estimation table 204 (see FIG. 21) to acquire the IOPS for each year/month of all those pages (step S2401).
- the processor 30 refers to the page arrangement destination hierarchy table 203 (see FIG. 22) and specifies the hierarchy in which each page acquired in step S2401 is arranged (step S2402).
- the processor 30 refers to the pool configuration tier table 232 (see FIG. 12) and acquires the response performance of the tier identified in step S2402 (step S2403).
- The processor 30 estimates the response performance of the volume for each year/month by calculating, over all pages belonging to the volume, the average of the response times of the tiers to which the pages belong, weighted by the IOPS of each page, using the following equation (2): response performance = Σ(response time of the page's tier × IOPS of the page) / Σ(IOPS of the page) (step S2404).
- the processor 30 associates the estimated response performance of the volume with the volume ID and the year and month, and registers them in the volume response performance table 206 (see FIG. 24) (step S2405).
- the processor 30 repeatedly executes steps S2401 to S2405 as described above for all the volumes registered in the response performance deterioration volume table 205 (see FIG. 23).
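Equation (2) weights each tier's response time by the IOPS of the pages placed there, so a volume's estimated response performance is an IOPS-weighted average. A minimal sketch of step S2404 (the names are illustrative):

```python
def volume_response(pages):
    """pages: list of (iops, tier_response_time) for every page belonging
    to the volume.  Returns the IOPS-weighted average response time, the
    volume's estimated response performance for that year/month."""
    total_iops = sum(iops for iops, _ in pages)
    weighted = sum(iops * rt for iops, rt in pages)
    return weighted / total_iops
```

Two equally loaded pages on tiers with 1.0 ms and 3.0 ms response times, for example, average out to 2.0 ms.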
- the processor 30 executes a volume target response performance unachieved time estimation process S2500 shown in FIG. 32 as shown in FIG.
- The processor 30 selects one of the rows registered in the volume response performance table 206 (see FIG. 24), in order from the oldest, and determines whether or not the response performance of the volume in that row fails to achieve the target response performance registered in the volume target response performance table 220 (see FIG. 7) (step S2501).
- If the target is not achieved, the processor 30 determines whether or not the volume is registered in the target unachieved volume table 207 (see FIG. 25) (step S2502); if it is not registered, the processor 30 registers the volume ID of the volume and the year/month of the row in the target unachieved volume table 207 (see FIG. 25) (step S2503).
- When the processor 30 has registered the volume ID and year/month of the row in the target unachieved volume table 207 (see FIG. 25) in step S2503, when the response performance of the volume satisfies the target response performance in step S2501, or when the volume of the row is already registered in the target unachieved volume table 207, the processor 30 repeats steps S2501 to S2503 as described above for all rows registered in the volume response performance table 206, in order from the oldest year/month.
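The scan of steps S2501 to S2503 can be condensed to finding, per volume, the first year/month whose estimated response performance misses the target. A sketch under the assumption that a larger response time means worse performance (the names are illustrative):

```python
def first_unachieved(history, target):
    """history: list of (year_month, estimated_response_time) rows for one
    volume, in any order.  Returns the oldest year/month in which the
    target response performance is not achieved, or None if it always is."""
    for month, response in sorted(history):
        if response > target:        # target response performance missed
            return month
    return None
```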
- When each process included in the volume response performance analysis process (step S2000) shown in FIG. 29 is completed as described above, the type of analysis mode is next determined (step S3000). In the case of analysis mode III, or when no analysis mode is selected, the analysis result display process is executed (step S4000).
- The processor 30 scans the rows registered in the response performance degradation volume table 205 (see FIG. 23) and the target unachieved volume table 207 (see FIG. 25) in order of earliest year/month, and registers the corresponding year/month and volume ID in the analysis result table 240 (see FIG. 26) (step S4001).
- the processor 30 displays a screen for the data registered in the analysis result table 240 (step S4002), and ends the process.
- the analysis mode I processing unit 109 executes the analysis mode I process under the control of the processor 30 (step S5000 in FIG. 29).
- The analysis mode I processing unit 109 first determines whether or not there is a volume whose target performance is not achieved in the year/months before the target maintenance deadline (specifically, whether or not a year/month before the target maintenance deadline registered in the target maintenance deadline table 221 is registered in the target unachieved volume table 207) (step S5001).
- If no target unachieved volume exists, the analysis mode I process ends; if a target unachieved volume exists, the following processing is repeatedly executed for the pool configuration tiers belonging to the pool with the pool ID in the analysis condition table 200.
- The analysis mode I processing unit 109 calculates the capacity of each pool configuration tier when a capacity corresponding to the pool minimum allocated capacity (for example, the capacity of an arbitrary number of storage devices allocated to the tier) is added to the pool configuration tier (step S5002). Next, based on the calculated information, the analysis mode I processing unit 109 creates data to be additionally registered in the flash drive table 235 (see FIG. 15) for the number of storage devices corresponding to the pool minimum allocated capacity (step S5003). The drive ID is set arbitrarily within a unique range, while the flush limit count is set to the same value as that of drives belonging to the same tier ID. Thereafter, the analysis mode I processing unit 109 registers the created data in the flash drive table 235 (step S5004).
- step S2400 the above-described volume response performance analysis process is executed (step S2400).
- The analysis mode I processing unit 109 manages the data of each table storing the analysis result for the configuration to which the pool configuration tier capacity has been added, in association with the capacity of each pool configuration tier, as the analysis result (step S5005).
- the analysis mode I processing unit 109 repeatedly executes steps S5002 to S5005 as described above for the pool configuration hierarchy belonging to the pool with the pool ID in the analysis condition table 200 as described above.
- The analysis mode I processing unit 109 determines whether there is a configuration among the analysis results after the expansion in which all the volumes can achieve the target (specifically, whether there is a configuration for which no year/month before the target maintenance deadline registered in the target maintenance deadline table 221 is registered in the target unachieved volume table 207) (step S5006). If the analysis mode I processing unit 109 determines that no such configuration exists, the process returns to repeating steps S5002 to S5005 for the pool configuration tiers belonging to the pool with the pool ID in the analysis condition table 200; if such a configuration exists, the following processing is executed.
- The analysis mode I processing unit 109 calculates the difference between the capacity of each pool configuration tier in that configuration and the capacity of each pool configuration tier in the initial configuration (before the capacity corresponding to the pool minimum allocated capacity was added in step S5002).
- This difference capacity is registered in the required additional capacity table 208 (see FIG. 27) in association with the year/month one month before the oldest of the year/months registered, for the initial configuration, in the target unachieved volume table 207 (see FIG. 25) (step S5007), and the process ends.
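Analysis mode I's outer loop (steps S5002 to S5007) can be sketched as repeatedly growing the tier capacities by the pool minimum allocated capacity and re-running the analysis until every volume meets its target, then reporting the difference from the initial configuration. The sketch below simplifies by expanding every tier in lockstep (the patent iterates over tiers individually), and `analysis_ok` is a stand-in for the volume response performance analysis:

```python
def required_additions(initial, analysis_ok, min_add, max_rounds=100):
    """initial: dict tier_id -> allocated capacity.  analysis_ok(config)
    returns True when no volume misses its target before the deadline.
    Returns dict tier_id -> required additional capacity, or None if no
    feasible expansion is found within max_rounds."""
    config = dict(initial)
    for _ in range(max_rounds):
        if analysis_ok(config):
            return {t: config[t] - initial[t] for t in initial}
        for tier in config:      # add the pool minimum allocated capacity
            config[tier] += min_add
    return None
```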
- the analysis result display unit 111 executes the above-described analysis result display process (step S4000), and then executes an additional capacity display process (step S7000).
- The analysis result display unit 111 first determines whether or not the number of registrations in the required additional capacity table 208 (see FIG. 27) is one or more (step S7001). The analysis result display unit 111 outputs a message indicating that expansion is not necessary when the number of registrations is less than one (step S7003); when the number of registrations is one or more, it displays on the screen the tier ID, the required additional capacity, and the extension time limit registered in the required additional capacity table 208 (see FIG. 27), in association with the storage apparatus ID and pool ID of the analysis condition table 200 (step S7002).
- The analysis mode II processing unit 110 executes the analysis mode II processing.
- The analysis mode II processing unit 110 repeatedly executes the following processing for each target unachieved volume (specifically, each volume registered in the target unachieved volume table 207 whose year/month is before the target maintenance deadline registered in the target maintenance deadline table 221).
- The analysis mode II processing unit 110 selects, one at a time, a pool ID registered in the pool table 231 (see FIG. 11) other than the pool ID registered in the analysis condition table 200, and repeatedly executes the following steps S6001, S6002, S2400, and S6003.
- The analysis mode II processing unit 110 reflects, in each table constituting the storage device information storage unit 101, the configuration in which the allocation destination pool of the target unachieved volume is changed to the pool with the selected pool ID (step S6001).
- the analysis mode II processing unit 110 sets the pool as an analysis target pool (step S6002) and executes the above-described volume response performance analysis process (see FIG. 32) (step S2400).
- The analysis mode II processing unit 110 determines whether or not a target unachieved volume exists in the year/months before the target maintenance deadline (step S6003); if no target unachieved volume exists, it executes the processing from step S6004 described later, and otherwise repeats steps S6001, S6002, S2400, and S6003.
- In step S6004, the analysis mode II processing unit 110 registers the volume ID of the target unachieved volume, together with the migration destination storage device ID and the migration destination pool ID, in the volume migration plan table 209 (see FIG. 28).
- In step S6005, the analysis mode II processing unit 110 sets the pool with the pool ID registered in the analysis condition table 200 (see FIG. 6) as the analysis target pool.
- The volume response performance analysis process (see FIG. 32) is then executed (step S2400).
- In step S6006, the analysis mode II processing unit 110 determines whether or not a target unachieved volume exists in the year/months before the target maintenance deadline, and terminates the processing when no target unachieved volume exists.
- Otherwise, the above processing is repeatedly executed from step S6001 for the remaining target unachieved volumes.
- the analysis result display unit 111 executes the above-described analysis result display process (step S4000), and then executes a volume migration plan display process (step S8000).
- The analysis result display unit 111 first determines whether or not the number of registrations in the volume migration plan table 209 (see FIG. 28) is one or more (step S8001).
- the analysis result display unit 111 displays the volume ID, migration destination storage ID, and migration destination pool ID registered in the volume migration plan table 209 as the storage device ID and pool ID of the analysis condition table 200. Are displayed in association with each other (step S8002), and if the number of registrations is not one or more, a message indicating that migration is unnecessary is output (step S8003).
- the management computer 4 is an example of a performance management apparatus for the storage apparatus 3 including two or more types of storage devices 20 having mutually different rewrite lifetimes and response performances.
- when the host computer 2 issues a write request to an unwritten address of a virtual volume, the storage apparatus 3 writes the data corresponding to the write request to a storage area that is unused in one of the two or more types of storage devices 20 and that has not reached its write life; and, according to the IO load imposed by the host computer 2 on each storage area of the two or more types of storage devices 20, the storage apparatus 3 migrates the storage location of data either from a storage area of a storage device 20 with lower response performance to a storage area of a storage device 20 with higher response performance that has not reached its write life, or from a storage area of a storage device 20 with higher response performance to a storage area of a storage device 20 with lower response performance that has not reached its write life.
- on this premise, the management computer 4 has the following characteristic configuration.
- the management computer 4 estimates, for each of the two or more types of storage devices 20, the decrease in storage capacity accompanying the reaching of the rewrite life, and estimates the performance impact on the virtual volume and the time at which it will occur.
- it includes the analysis mode I processing unit 109, the analysis mode II processing unit 110, and the analysis result display unit 111, the last serving as an example of a performance impact display unit that displays information related to the estimated performance impact and the time at which it will occur.
- the analysis result display unit 111 displays the future time at which degradation of a volume's response performance will begin. In this way, based on the estimated time at which a volume's response performance will begin to degrade due to the decrease in storage capacity caused by the storage media of the two or more types of storage devices 20, which have different rewrite lifetimes and response performances, reaching the end of their lives, the formulation of an expansion plan for the storage devices 20 can be supported.
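As one hypothetical illustration of such an estimate (the document does not prescribe a concrete model), a simple linear wear model would find the month at which the shrinking usable flash capacity falls below what the workload needs, i.e. when pages start spilling to slower media and response degradation begins. All parameters here are assumptions for illustration.

```python
def degradation_start_month(flash_capacity_gb, monthly_wear_gb, required_flash_gb):
    """Hypothetical linear model: usable flash capacity shrinks by a
    fixed amount per month as media reach their rewrite life; return
    the month at which it first falls below the workload's requirement."""
    month = 0
    while flash_capacity_gb >= required_flash_gb:
        flash_capacity_gb -= monthly_wear_gb  # capacity lost to expired media
        month += 1
    return month
```

A real estimator would use per-device wear statistics collected from the storage apparatus rather than a constant monthly decrement.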
- the analysis result display unit 111 displays the time at which a volume will no longer satisfy its target response performance. In this case, based on the estimated time at which the volume will no longer satisfy the target response performance due to the decrease in storage capacity caused by the storage media of the two or more types of storage devices 20, which have different rewrite lifetimes and response performances, reaching the end of their lives, the formulation of an expansion plan for the storage devices 20 can be supported.
- the analysis result display unit 111 displays the capacity of the additional flash drive required to extend the target achievement period to a preset time. In this way, based on the additional flash drive capacity required to extend the target achievement period to the preset time, estimated from the decrease in storage capacity caused by the storage media of the two or more types of storage devices 20, which have different rewrite lifetimes and response performances, reaching the end of their lives, the formulation of an expansion plan for the storage devices 20 can be supported.
- the analysis result display unit 111 displays a volume's target achievement period based on the capacity of a designated additional flash drive.
- the analysis result display unit 111 displays a volume pool migration plan necessary for extending the target achievement period to a preset time.
- based on the volume pool migration plan estimated from the decrease in storage capacity caused by the storage media of the two or more types of storage devices 20, which have different rewrite lifetimes and response performances, reaching the end of their lives, the formulation of an expansion plan for the storage devices 20 can be supported.
- a configuration in which the management computer 4 having the storage performance analysis function and the storage management client 5 operated by the user are separate has been described.
- however, the management computer 4 and the storage management client 5 may be implemented on a single computer (for example, one computer device).
- likewise, the management computer 4 with the storage performance analysis function and the storage apparatuses 3 are configured separately, but the present invention is not limited to this; the management computer 4 may be integrated with any one of the storage apparatuses 3.
- the present invention can be widely applied to a performance management method and apparatus for a storage apparatus in which two or more types of storage media having different rewrite lifetimes and response performances are tiered and data can be moved between tiers according to the response performance.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Networks & Wireless Communication (AREA)
- Debugging And Monitoring (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
Description
FIG. 1 shows a configuration example of a computer system 1 according to this embodiment. In the computer system 1, one or more host computers 2, one or more storage apparatuses 3, a management computer 4, and a storage management client 5 are interconnected via a management LAN (Local Area Network) 6, while the host computers 2, the storage apparatuses 3, and the management computer 4 are interconnected via one or more SAN (Storage Area Network) switches 7.
FIG. 2 is a conceptual diagram showing an example of a tiered storage scheme using tiers. The tiered storage scheme is a management technique that moves data between high-performance (and therefore relatively expensive) storage media and low-performance (relatively inexpensive) storage media according to a predetermined policy. Here, a pool (hereinafter also called a "tiered pool") created from storage devices 20 such as flash drives of different types, or a flash drive and a hard disk drive, is used. In a tiered pool, which performance tier of storage device 20 to allocate is determined based on the per-page access load on the virtual volumes belonging to the tiered pool. Specifically, for example, a high-performance drive is allocated to pages with a high access load, while a low-performance drive is allocated to pages with a low access load.
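A minimal sketch of the load-based page placement just described. The IOPS threshold and tier names are illustrative assumptions; the document only states that higher-load pages go to higher-performance drives.

```python
def assign_tier(page_iops, high_threshold=100):
    """Place a page on the high-performance tier when its access load
    (IOPS) exceeds a hypothetical threshold, otherwise on the low tier."""
    return "flash" if page_iops > high_threshold else "hdd"

def rebalance_pool(page_loads):
    """Map each page ID in a tiered pool to a tier by its measured load."""
    return {page_id: assign_tier(iops) for page_id, iops in page_loads.items()}

placement = rebalance_pool({"page-1": 250, "page-2": 12})
```

A production scheme would also respect per-tier capacity limits and, as described later, skip media that have reached their write life.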
FIG. 3 is a block diagram mainly showing a configuration example of the management computer 4 according to this embodiment. In the illustrated example, the arrows represent the flow of information: information stored in a table placed at the tail of an arrow is used by the processing unit placed at its head, and information obtained as a processing result by a processing unit placed at the tail of an arrow is stored in the table placed at its head.
FIG. 4 shows an example of a data input screen for setting analysis conditions. FIG. 5 shows an example of a result output screen representing the result of analysis based on the entered analysis conditions. The analysis condition setting screen 300 shown in FIG. 4 is displayed by the analysis condition setting unit 102 (see FIG. 3) and has, for example, a common input field 310, an input field 320 for analysis mode I, an input field 330 for analysis mode II, an input field 340 for analysis mode III, and an analysis execution button 350.
(5-1) Storage apparatus information acquisition process
FIG. 30 shows an example of the storage apparatus information acquisition process, which is executed continuously. In this process, the storage apparatus information collection unit 100 first collects the latest configuration, capacity, and performance information from each storage apparatus 3 (step S1). Next, the storage apparatus information collection unit 100 registers the collected storage apparatus information in the storage apparatus information storage unit 101 (step S2). The storage apparatus information collection unit 100 repeatedly executes steps S1 and S2 for each storage apparatus 3.
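The collect-and-register loop of steps S1 and S2 can be sketched as follows. The `StorageDevice` class, `fetch_latest` method, and polling interval are hypothetical stand-ins for the actual apparatus interface, which the document does not specify.

```python
import time

class StorageDevice:
    """Hypothetical stand-in for a storage apparatus 3 exposing its info."""
    def __init__(self, name, info):
        self.name, self._info = name, info
    def fetch_latest(self):
        return dict(self._info)

def collect_storage_info(storage_devices, repository, interval_sec=60, cycles=1):
    """Collect the latest configuration/capacity/performance information
    from each apparatus (step S1) and register it (step S2), repeatedly."""
    for cycle in range(cycles):
        for device in storage_devices:
            info = device.fetch_latest()                           # step S1
            repository.setdefault(device.name, []).append(info)    # step S2
        if cycle < cycles - 1:
            time.sleep(interval_sec)  # wait before the next collection round
    return repository
```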
FIG. 29 is a flowchart showing an example of the information collection, estimation, and result display process. FIG. 31 is a flowchart showing an example of the analysis condition setting process S1000 shown in FIG. 29. FIG. 32 is a flowchart showing an example of the volume response performance analysis process S2000 shown in FIG. 29. FIGS. 33 to 39 are flowcharts showing in detail the respective processes (S2100, S2200, S2220, S2230, S2300, S2400, and S2500) shown in FIGS. 32 and 34.
In the management computer 4, the processor 30 refers to the volume table 233 shown in FIG. 13 and narrows the volumes down to those belonging to the pool with the pool ID being analyzed.
The processor 30 periodically collects volume performance information and stores it in the storage apparatus information storage unit 101. It then narrows the volumes down to those whose most recent performance value, or average value over a predetermined period, is similar.
In addition to the connected-host ID in the volume table 233, the processor 30 stores information representing the workload type of the connected host in the storage apparatus information storage unit 101. The volumes are narrowed down based on this workload type information.
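The volume narrowing by pool membership, by performance similarity, and by workload type described above can be sketched as follows; the record field names and the tolerance-based similarity test are illustrative assumptions, not the actual schema of the volume table 233.

```python
def narrow_volumes(volumes, target_pool_id, target_perf_ms, tolerance_ms, target_workload):
    """Narrow a list of volume records in three stages: pool membership,
    similar recent response performance, and connected-host workload type."""
    in_pool = [v for v in volumes if v["pool_id"] == target_pool_id]
    similar = [v for v in in_pool
               if abs(v["avg_response_ms"] - target_perf_ms) <= tolerance_ms]
    return [v for v in similar if v["workload"] == target_workload]
```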
As described above, in the computer system 1 of this embodiment, the management computer 4, as an example of a performance management apparatus for a storage apparatus 3 including two or more types of storage devices 20 having mutually different rewrite lifetimes and response performances, operates on the following premise: when the host computer 2 issues a write request to an unwritten address of a virtual volume, the storage apparatus 3 writes data corresponding to the write request to a storage area that is unused in one of the two or more types of storage devices 20 and that has not reached its write life; and, according to the IO load imposed by the host computer 2 on each storage area of the two or more types of storage devices 20, the storage apparatus 3 migrates the storage location of data either from a storage area of a storage device 20 with lower response performance to a storage area of a storage device 20 with higher response performance that has not reached its write life, or from a storage area of a storage device 20 with higher response performance to a storage area of a storage device 20 with lower response performance that has not reached its write life. On this premise, the management computer 4 has the following characteristic configuration.
In the embodiment described above, the present invention is applied to the computer system 1 configured as shown in FIG. 1; however, the present invention is not limited to this and can be widely applied to computer systems of various other configurations. For example, although the embodiment described a case with two or three storage apparatuses 3, the present invention is also applicable when there is one storage apparatus 3 or four or more.
Claims (12)
- A performance management apparatus for a storage apparatus including two or more types of storage media having mutually different rewrite lifetimes and response performances, wherein
the storage apparatus provides a host computer connected to the storage apparatus with a virtual volume, which constitutes a virtual logical volume, using the two or more types of storage media,
when the host computer issues a write request to an unwritten address of the virtual volume, the storage apparatus writes data corresponding to the write request to a storage area that is unused in one of the two or more types of storage media and that has not reached its write life,
according to the IO load imposed by the host computer on each storage area of the two or more types of storage media, the storage apparatus migrates the storage location of data either from a storage area of a storage medium with lower response performance to a storage area of a storage medium with higher response performance that has not reached its write life, or from a storage area of a storage medium with higher response performance to a storage area of a storage medium with lower response performance that has not reached its write life, and
the performance management apparatus comprises:
a performance impact estimation unit that estimates, for each of the two or more types of storage media, the decrease in storage capacity accompanying the reaching of the rewrite life, and estimates the performance impact on the virtual volume and the time at which it will occur; and
a performance impact display unit that displays information related to the estimated performance impact and the time at which it will occur. - The performance management apparatus for a storage apparatus according to claim 1, wherein the performance impact display unit displays the time at which degradation of a volume's response performance will begin.
- The performance management apparatus for a storage apparatus according to claim 1, wherein the performance impact display unit displays the time at which a volume will no longer satisfy its target response performance.
- The performance management apparatus for a storage apparatus according to claim 1, wherein the performance impact display unit displays the capacity of the additional flash drive required to extend the target achievement period to a preset time.
- The performance management apparatus for a storage apparatus according to claim 1, wherein the performance impact display unit displays a volume's target achievement period based on the capacity of a designated additional flash drive.
- The performance management apparatus for a storage apparatus according to claim 1, wherein the performance impact display unit displays a volume pool migration plan required to extend the target achievement period to a preset time.
- A performance management method performed by a performance management apparatus that manages the performance of a storage apparatus including two or more types of storage media having mutually different rewrite lifetimes and response performances, wherein
the storage apparatus provides a host computer connected to the storage apparatus with a virtual volume, which constitutes a virtual logical volume, using the two or more types of storage media,
when the host computer issues a write request to an unwritten address of the virtual volume, the storage apparatus writes data corresponding to the write request to a storage area that is unused in one of the two or more types of storage media and that has not reached its write life,
according to the IO load imposed by the host computer on each storage area of the two or more types of storage media, the storage apparatus migrates the storage location of data either from a storage area of a storage medium with lower response performance to a storage area of a storage medium with higher response performance that has not reached its write life, or from a storage area of a storage medium with higher response performance to a storage area of a storage medium with lower response performance that has not reached its write life, and
the performance management method comprises:
a performance impact estimation step in which the performance management apparatus estimates, for each of the two or more types of storage media, the decrease in storage capacity accompanying the reaching of the rewrite life, and estimates the performance impact on the virtual volume and the time at which it will occur; and
a performance impact display step in which the performance management apparatus displays information related to the estimated performance impact and the time at which it will occur. - The performance management method for a storage apparatus according to claim 7, wherein, in the performance impact display step, the performance management apparatus displays the time at which degradation of a volume's response performance will begin.
- The performance management method for a storage apparatus according to claim 7, wherein, in the performance impact display step, the performance management apparatus displays the time at which a volume will no longer satisfy its target response performance.
- The performance management method for a storage apparatus according to claim 7, wherein, in the performance impact display step, the performance management apparatus displays the capacity of the additional flash drive required to extend the target achievement period to a preset time.
- The performance management method for a storage apparatus according to claim 7, wherein, in the performance impact display step, the performance management apparatus displays a volume's target achievement period based on the capacity of a designated additional flash drive.
- The performance management method for a storage apparatus according to claim 7, wherein, in the performance impact display step, the performance management apparatus displays a volume pool migration plan required to extend the target achievement period to a preset time.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2016/057302 WO2017154127A1 (ja) | 2016-03-09 | 2016-03-09 | ストレージ装置の性能管理方法及び装置 |
JP2018503909A JPWO2017154127A1 (ja) | 2016-03-09 | 2016-03-09 | ストレージ装置の性能管理方法及び装置 |
US15/744,613 US10282095B2 (en) | 2016-03-09 | 2016-03-09 | Method and device for managing performance of storage apparatus |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2016/057302 WO2017154127A1 (ja) | 2016-03-09 | 2016-03-09 | ストレージ装置の性能管理方法及び装置 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017154127A1 true WO2017154127A1 (ja) | 2017-09-14 |
Family
ID=59789129
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2016/057302 WO2017154127A1 (ja) | 2016-03-09 | 2016-03-09 | ストレージ装置の性能管理方法及び装置 |
Country Status (3)
Country | Link |
---|---|
US (1) | US10282095B2 (ja) |
JP (1) | JPWO2017154127A1 (ja) |
WO (1) | WO2017154127A1 (ja) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10474366B2 (en) * | 2016-12-28 | 2019-11-12 | Sandisk Technologies Llc | Non-volatile storage system with in-drive data analytics |
US10795583B2 (en) * | 2017-07-19 | 2020-10-06 | Samsung Electronics Co., Ltd. | Automatic data placement manager in multi-tier all-flash datacenter |
US10496541B2 (en) | 2017-11-29 | 2019-12-03 | Samsung Electronics Co., Ltd. | Dynamic cache partition manager in heterogeneous virtualization cloud cache environment |
US10733108B2 (en) * | 2018-05-15 | 2020-08-04 | Intel Corporation | Physical page tracking for handling overcommitted memory |
US11221782B1 (en) * | 2019-03-27 | 2022-01-11 | Amazon Technologies, Inc. | Customizable progressive data-tiering service |
US10810054B1 (en) * | 2019-07-29 | 2020-10-20 | Hitachi, Ltd. | Capacity balancing for data storage system |
EP4085594A1 (en) * | 2020-01-02 | 2022-11-09 | Level 3 Communications, LLC | Systems and methods for storing content items in secondary storage |
US11928516B2 (en) * | 2021-04-27 | 2024-03-12 | EMC IP Holding Company LLC | Greener software defined storage stack |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007115232A (ja) * | 2005-09-22 | 2007-05-10 | Hitachi Ltd | 低消費電力記憶装置とその制御方法 |
WO2015029102A1 (ja) * | 2013-08-26 | 2015-03-05 | 株式会社日立製作所 | ストレージ装置及び階層制御方法 |
WO2015145532A1 (ja) * | 2014-03-24 | 2015-10-01 | 株式会社日立製作所 | ストレージシステム及びデータ処理方法 |
-
2016
- 2016-03-09 WO PCT/JP2016/057302 patent/WO2017154127A1/ja active Application Filing
- 2016-03-09 JP JP2018503909A patent/JPWO2017154127A1/ja active Pending
- 2016-03-09 US US15/744,613 patent/US10282095B2/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007115232A (ja) * | 2005-09-22 | 2007-05-10 | Hitachi Ltd | 低消費電力記憶装置とその制御方法 |
WO2015029102A1 (ja) * | 2013-08-26 | 2015-03-05 | 株式会社日立製作所 | ストレージ装置及び階層制御方法 |
WO2015145532A1 (ja) * | 2014-03-24 | 2015-10-01 | 株式会社日立製作所 | ストレージシステム及びデータ処理方法 |
Also Published As
Publication number | Publication date |
---|---|
US10282095B2 (en) | 2019-05-07 |
US20180210656A1 (en) | 2018-07-26 |
JPWO2017154127A1 (ja) | 2018-06-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2017154127A1 (ja) | ストレージ装置の性能管理方法及び装置 | |
AU2019213340B2 (en) | Dynamic configuration of data volumes | |
US20090150639A1 (en) | Management apparatus and management method | |
JP4686305B2 (ja) | ストレージ管理システムおよびその方法 | |
US8458424B2 (en) | Storage system for reallocating data in virtual volumes and methods of the same | |
US8069326B2 (en) | Relocation system and a relocation method | |
JP6084685B2 (ja) | ストレージシステム | |
US8572319B2 (en) | Method for calculating tier relocation cost and storage system using the same | |
US8402214B2 (en) | Dynamic page reallocation storage system management | |
US7895161B2 (en) | Storage system and method of managing data using same | |
US9086947B2 (en) | Management method and management system for computer system | |
US9477407B1 (en) | Intelligent migration of a virtual storage unit to another data storage system | |
JP6510635B2 (ja) | ストレージシステム及びデータ制御方法 | |
CN102099794B (zh) | 控制备份操作中的资源分配 | |
JP2007042034A (ja) | 計算機システム、管理計算機及び論理記憶領域の管理方法 | |
CN102483684A (zh) | 提供虚拟卷的存储系统 | |
US20170359221A1 (en) | Method and management system for calculating billing amount in relation to data volume reduction function | |
CN103827970A (zh) | 对固态驱动器再配置数据的存储装置、存储控制器以及方法 | |
US8156281B1 (en) | Data storage system and method using storage profiles to define and modify storage pools | |
JP6842447B2 (ja) | リソース割当ての最適化を支援するシステム及び方法 | |
US8261038B2 (en) | Method and system for allocating storage space | |
JP5183363B2 (ja) | 論理ボリュームのデータ移動方法とストレージシステムおよび管理計算機 | |
US11650731B2 (en) | Embedded dynamic user interface item segments | |
US9317224B1 (en) | Quantifying utilization of a data storage system by a virtual storage unit | |
CN111158595A (zh) | 企业级异构存储资源调度方法及系统 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 2018503909 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15744613 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16893459 Country of ref document: EP Kind code of ref document: A1 |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 16893459 Country of ref document: EP Kind code of ref document: A1 |