WO2018042608A1 - Storage device and control method therefor - Google Patents

Storage device and control method therefor

Info

Publication number
WO2018042608A1
WO2018042608A1 (PCT/JP2016/075734, JP2016075734W)
Authority
WO
WIPO (PCT)
Prior art keywords
pool
storage device
performance
target
read
Prior art date
Application number
PCT/JP2016/075734
Other languages
English (en)
Japanese (ja)
Inventor
英美子 藤松
開道 相浦
純二 岩崎
Original Assignee
株式会社日立製作所
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社日立製作所
Priority to PCT/JP2016/075734
Publication of WO2018042608A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 12/00 - Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 - Addressing or allocation; Relocation
    • G06F 12/08 - Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems

Definitions

  • the present invention relates to a storage apparatus and a control method thereof, and is particularly suitable when applied to a storage apparatus equipped with a virtualization function.
  • The virtualization function provides a virtual logical volume (hereinafter referred to as a virtual volume) to the host device and, when a write request is given from the host device to an unused storage area in the virtual volume, dynamically allocates a physical storage area to that storage area.
  • In such a storage apparatus, a predetermined number of hard disk devices are managed as a RAID (Redundant Arrays of Inexpensive Disks) group, and one or a plurality of logical volumes defined on the storage areas provided by the RAID group (hereinafter referred to as pool volumes) are managed as one pool.
  • the physical storage area is assigned to the virtual volume by cutting out a storage area of a predetermined size from this pool.
  • Such a virtualization function has the advantage that it is not necessary to prepare in advance hard disk devices equal to the full capacity of the virtual volume provided to the host device, so the introduction cost of the storage apparatus can be reduced.
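  • As a rough illustration of this dynamic allocation, the following Python sketch allocates a real storage area from the pool only on the first write to a virtual area. All class and function names here are hypothetical, not taken from the patent:

```python
# Minimal sketch of thin-provisioned allocation; all names are illustrative
# assumptions, not taken from the patent.
def store(real_area, data):
    pass  # stand-in for the actual write to the storage devices

class Pool:
    def __init__(self, n_pages):
        # Each page stands for a real storage area of a predetermined size.
        self.free_pages = list(range(n_pages))

    def cut_out(self):
        # Cut one real storage area out of the pool.
        return self.free_pages.pop(0)

class VirtualVolume:
    def __init__(self, pool):
        self.pool = pool
        self.page_map = {}  # virtual area -> real storage area

    def write(self, virtual_page, data):
        # A real storage area is allocated only on the first write to this
        # area, so unwritten virtual capacity consumes no physical capacity.
        if virtual_page not in self.page_map:
            self.page_map[virtual_page] = self.pool.cut_out()
        store(self.page_map[virtual_page], data)
```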
  • As a method for solving such a problem, Patent Document 1 discloses periodically measuring the performance of a plurality of disk devices while the apparatus is operating and, when performance degradation of any disk device is detected based on the measured performance values, adding a disk device and rearranging the data of the existing disk devices onto the added disk device.
  • Patent Document 2 discloses a method of monitoring the access load on logical disks belonging to a storage pool and, when a logical disk with a high access load is detected, securing a storage pool having the necessary capacity and moving the data of that logical disk to it.
  • the present invention has been made in consideration of the above points, and an object of the present invention is to propose a highly reliable storage apparatus and its control method capable of easily performing the optimum performance improvement according to the situation of the storage apparatus.
  • In order to solve such a problem, the present invention provides a storage apparatus in which one or a plurality of logical volumes are defined on a storage area provided by a storage device group composed of one or a plurality of storage devices, a pool is formed by one or a plurality of the logical volumes provided by one or a plurality of the storage device groups, a virtual volume is provided to a host device, and a physical storage area is allocated from the pool associated with the virtual volume in response to a data write request from the host device to the virtual volume.
  • The storage apparatus includes a cache memory whose storage area is logically divided into a plurality of areas, each area being allocated to a pool as a cache area for temporarily storing data read from and written to the logical volumes, and a performance monitoring unit that acquires, for each pool, the cache hit rate (the ratio at which read target data is stored in the cache area allocated to the pool at the time of read access in the storage device groups) and the frequency of the read accesses and write accesses in each of the storage device groups.
  • For each pool, when the cache hit rate at the time of read access in a storage device group that provides a logical volume to the pool is equal to or less than a predetermined first threshold, the capacity of the cache area allocated to the pool is increased, and when the frequency of the read access or the write access exceeds its threshold, a new logical volume is added to the pool.
  • The present invention likewise provides a control method for such a storage apparatus, in which one or more logical volumes are defined on a storage area provided by a storage device group composed of one or more storage devices, a pool is formed by one or more logical volumes provided by one or more storage device groups, the virtual volume is provided to the host device, and a physical storage area is allocated from the pool associated with the virtual volume in response to a data write request from the host device to the virtual volume, the storage apparatus including a cache memory whose storage area is logically divided into a plurality of areas, each area temporarily storing data read from and written to the logical volumes.
  • According to the present invention, write performance is improved by adding a new logical volume, while read performance is improved, depending on the pool status, by increasing the capacity of the cache area of the pool and/or adding a new logical volume to the pool.
  • (A) to (C) are charts explaining the pool performance monitoring process. A flowchart shows the processing procedure of the access type determination process.
  • reference numeral 1 denotes a computer system according to this embodiment as a whole.
  • the computer system 1 includes a host device 2 that executes processing according to a user's job, and a storage device 3 that is equipped with a virtualization function.
  • The host device 2 and the storage device 3 are connected through a communication path 4 and communicate with each other through it.
  • The host device 2 is a general-purpose server device provided with information processing resources such as a CPU 10, a memory 11, and an interface 12; the CPU 10 executes application software 14 stored in the memory 11, whereby predetermined business processing, including reading and writing of data to the storage device 3, is executed.
  • the interface 12 is a hardware device having a function of performing protocol control when communicating with the storage apparatus 3 via the communication path 4.
  • the storage device 3 includes a data storage unit 21 including a plurality of storage devices 20 and a controller 22 that controls data input / output with respect to the data storage unit 21.
  • the storage device 20 of the data storage unit 21 is composed of a large-capacity nonvolatile storage device such as a hard disk device or an SSD (Solid State Drive). In the following description, it is assumed that the storage device 20 is composed of a hard disk device.
  • a RAID group RG is formed by one or a plurality of storage devices 20, and one or a plurality of pool volumes PLVOL are created in a storage area provided by the storage devices 20 constituting one RAID group RG.
  • a pool PL is formed by one or a plurality of pool volumes PLVOL provided by one or a plurality of RAID groups RG.
  • the virtual volume VVOL provided by the storage device 3 to the host device 2 is associated with one of the pools PL.
  • When a write request for an unused storage area in the virtual volume VVOL is given from the host device 2, a storage area of a predetermined size (hereinafter referred to as a real storage area) is cut out from one of the pool volumes PLVOL in the pool PL associated with the virtual volume VVOL and assigned to the write destination storage area in the virtual volume VVOL, and the write data is stored in that real storage area.
  • the controller 22 includes information processing resources such as a host interface 23, a disk interface 24, a processor 25, a local memory 26, and a cache memory 27.
  • The host interface 23 is a hardware device having a function of performing protocol control when communicating with the host device 2 via the communication path 4, and the disk interface 24 is a hardware device having a function of performing protocol control when communicating with the storage devices 20 in the data storage unit 21.
  • the processor 25 is a hardware device having a function for controlling the operation of the entire storage apparatus 3, and the local memory 26 is mainly used to hold various programs and various data. Various processes of the entire storage apparatus 3 are executed by the processor 25 executing the program stored in the local memory 26.
  • the cache memory 27 is used to temporarily store data read / written from / to the data storage unit 21.
  • The storage area of the cache memory 27 is logically divided and managed as a plurality of areas called CLPRs (Cache Logical Partitions), and each CLPR is allocated to a pool PL as a cache area for temporarily storing read/write target data for the virtual volumes VVOL.
  • Data written from the host device 2 to the virtual volume VVOL is temporarily stored in the cache area (CLPR) assigned to the pool PL associated with the virtual volume VVOL, and then, when the load on the storage device 3 is low, distributed and stored in the storage devices 20 constituting the corresponding RAID group RG.
  • At the time of read processing, if the read target data exists in the cache area (a cache hit), the data is read from the cache area; otherwise, the data is first read from the corresponding storage device 20 into the cache area (hereinafter referred to as staging) and then transferred from the cache area to the host device 2.
  • When a pool volume PLVOL from a RAID group RG that does not yet provide a pool volume PLVOL to the pool PL is added to the pool PL, accesses are spread across the added volume as well as each pool volume that constituted the pool PL until then, so the load on each RAID group RG that provides a pool volume PLVOL to the pool PL can be reduced. As a result, both the write performance and the read performance of the pool PL can be improved.
  • The storage apparatus 3 of the present embodiment is equipped with a performance improvement function: for each pool PL defined in the apparatus, and for each RAID group RG that provides a pool volume PLVOL to that pool PL, it acquires at least the number of accesses per unit time (one second) during read processing and write processing (hereinafter referred to as the access frequency) and the cache hit rate during read processing, and improves the performance of the pool PL based on this information.
  • Specifically, the storage apparatus 3 evaluates the write performance and read performance of each pool PL based on the information acquired as described above. If it determines that the write performance has decreased, it improves the performance of the pool PL by adding a new pool volume PLVOL to the pool PL; if it determines that the read performance has deteriorated, it improves the performance of the pool PL, according to the status of the pool PL, by increasing the capacity of the cache area allocated to the pool PL and/or adding a new pool volume PLVOL to the pool PL.
  • The storage apparatus 3 is also equipped with a batch improvement function: when batch processing scheduled in the host device 2 does not end within the scheduled time, it increases the capacity of the cache area assigned to the pool PL used in that batch processing, or adds a new pool volume PLVOL to the pool PL, so that the batch processing ends within the scheduled time.
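  • The overall policy just described can be condensed into a short sketch. The helper names and thresholds below are hypothetical stand-ins for the programs described next; this only summarizes the stated decision logic:

```python
from dataclasses import dataclass

@dataclass
class PoolStats:
    read_iops: float             # read access frequency for the pool
    write_iops: float            # write access frequency for the pool
    read_cache_hit_rate: float   # cache hit rate during read processing (%)

def add_pool_volume(pool):       # stand-in for the pool volume addition process
    print(f"add a pool volume PLVOL to {pool}")

def add_cache_area(pool):        # stand-in for the cache area addition process
    print(f"add cache area (CLPR) capacity to {pool}")

def improve_pool(pool, s, read_thr, write_thr, hit_thr):
    # Degraded write performance is recovered by adding a pool volume;
    # degraded read performance by adding cache capacity and/or a pool
    # volume, depending on the cache hit rate.
    if s.write_iops > write_thr:
        add_pool_volume(pool)
    if s.read_iops > read_thr:
        if s.read_cache_hit_rate <= hit_thr:
            add_cache_area(pool)
        else:
            add_pool_volume(pool)
```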
  • To implement these functions, the local memory 26 (FIG. 1) of the controller 22 (FIG. 1) of the storage apparatus 3 stores, as programs, a performance monitoring program 30, an access type determination program 31, a performance improvement processing program 32, and a batch improvement program 33, as shown in FIG. 2.
  • In addition, a RAID group performance information table 35, a RAID group information management table 36, a pool information management table 37, a CLPR management table 38, and a batch schedule management table 39 are stored as a management database 34.
  • The performance monitoring program 30 is a program having a function of acquiring performance information for each RAID group RG defined in the storage apparatus 3, monitoring the performance of each pool PL in the storage apparatus 3 based on the acquired performance information, and, if necessary, executing processing for improving the performance of the pool PL.
  • In practice, the performance monitoring program 30 acquires, for each RAID group RG and per unit time (one second), the access frequency for each of the four access types (random read, sequential read, random write, and sequential write) for that RAID group RG, the cache hit rate for each access type, and the usage rate of the storage devices 20 in the RAID group RG (the ratio of the used capacity to the total capacity of the RAID group RG, hereinafter referred to as the storage device usage rate). The acquired information is stored and managed in the RAID group performance information table 35 corresponding to that RAID group RG.
  • The performance monitoring program 30 monitors the performance of each pool PL based on the information stored in the RAID group performance information tables 35 and, when it determines that the performance of a pool PL has deteriorated, calls the access type determination program 31 and the performance improvement processing program 32 and/or the batch improvement program 33 to execute processing for improving the performance of that pool PL.
  • The access type determination program 31 is a program having a function of comparing the access frequency for each of the above four access types stored in each RAID group performance information table 35 against a threshold value preset for each access type (hereinafter referred to as the first IOPS (Input Output Per Second) threshold), identifying the access types whose access frequency exceeds the corresponding first IOPS threshold, and, based on the result, instructing the performance improvement processing program 32 which performance improvement processing to execute at that time.
  • The performance improvement processing program 32 is a program having a function of executing various performance improvement processes for improving the performance of a pool PL in response to instructions from the access type determination program 31. Further, the batch improvement program 33 is a program having a function of improving the performance of the pool PL used in batch processing so that, when batch processing executed by the host device 2 has not been completed within the scheduled time, the batch processing completes within that time.
  • The RAID group performance information table 35 is a table used for holding and managing the performance information of the RAID groups RG acquired by the performance monitoring program 30, and is created for each RAID group RG. As shown in FIG. 3, the RAID group performance information table 35 includes a serial number column 35A, an information acquisition time column 35B, a read related information column 35C, a write related information column 35D, and a storage device usage rate column 35E.
  • In the serial number column 35A, serial numbers assigned to the registration information (hereinafter referred to as record information) of each row (hereinafter referred to as a record) of the RAID group performance information table 35 are stored.
  • In the information acquisition time column 35B, the date and time when the record information was acquired is stored.
  • the storage device usage rate column 35E stores the storage device usage rate at the time when the record information in the corresponding RAID group RG is acquired.
  • the read related information column 35C includes a random read IOPS column 35CA, a random read cache hit rate column 35CB, a sequential read IOPS column 35CC, and a sequential read cache hit rate column 35CD.
  • In the random read IOPS column 35CA, the number of random read accesses made to the corresponding RAID group RG in the time zone from when the immediately preceding record was acquired to when this record was acquired (that is, the random read access frequency) is stored, and the random read cache hit rate column 35CB stores the cache hit rate at the time of random read access in that time zone.
  • The sequential read IOPS column 35CC stores the number of sequential read accesses made to the corresponding RAID group RG in that time zone (that is, the sequential read access frequency), and the sequential read cache hit rate column 35CD stores the cache hit rate at the time of sequential read access in that time zone.
  • the write related information column 35D includes a random write IOPS column 35DA, a random write cache hit rate column 35DB, a sequential write IOPS column 35DC, and a sequential write cache hit rate column 35DD.
  • In the random write IOPS column 35DA, the number of random write accesses made to the corresponding RAID group RG in the time zone from when the immediately preceding record was acquired to when this record was acquired (that is, the random write access frequency) is stored, and the random write cache hit rate column 35DB stores the cache hit rate at the time of random write access in that time zone.
  • The sequential write IOPS column 35DC stores the number of sequential write accesses made to the corresponding RAID group RG in that time zone (that is, the sequential write access frequency), and the sequential write cache hit rate column 35DD stores the cache hit rate at the time of sequential write access in that time zone.
  • Accordingly, the example of FIG. 3 shows that, in the time zone from “2009/11/14 0:00” to “2009/11/14 0:01”, the corresponding RAID group RG had a random read cache hit rate of “33”%, a sequential read access frequency of “780” with a cache hit rate of “9”%, a random write access frequency of “90” with a cache hit rate of “93”%, and a sequential write access frequency of “199” with a cache hit rate of “55”%, and that the average usage rate of the storage devices 20 constituting the RAID group RG was 30%.
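  • One record of the RAID group performance information table 35 could be modeled as below; the field names are illustrative, and the example row reproduces the FIG. 3 values quoted above (the random read access frequency itself is not given in the text):

```python
from dataclasses import dataclass

@dataclass
class RaidGroupPerfRecord:
    serial_no: int                # serial number column 35A
    acquired_at: str              # information acquisition time column 35B
    random_read_iops: int | None  # column 35CA
    random_read_hit_pct: int      # column 35CB
    seq_read_iops: int            # column 35CC
    seq_read_hit_pct: int         # column 35CD
    random_write_iops: int        # column 35DA
    random_write_hit_pct: int     # column 35DB
    seq_write_iops: int           # column 35DC
    seq_write_hit_pct: int        # column 35DD
    device_usage_pct: int         # storage device usage rate column 35E

row = RaidGroupPerfRecord(
    serial_no=1, acquired_at="2009/11/14 0:01",
    random_read_iops=None,        # value elided in the text above
    random_read_hit_pct=33, seq_read_iops=780, seq_read_hit_pct=9,
    random_write_iops=90, random_write_hit_pct=93,
    seq_write_iops=199, seq_write_hit_pct=55, device_usage_pct=30)
```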
  • The RAID group information management table 36 is a table used for managing the configuration information of each RAID group RG created in the storage apparatus 3 and, as shown in FIG. 4, includes a RAID group number column 36A, a created logical volume number column 36B, a configuration pool number column 36C, a RAID level column 36D, a storage device rotation speed column 36E, a storage device number column 36F, and a logical volume capacity column 36G.
  • In the drawing, the RAID group RG is abbreviated as “RG” and the logical volume as “LU”; the same applies to the other drawings.
  • The created logical volume number column 36B stores the identification numbers (logical volume numbers) of all the logical volumes created in the storage apparatus 3; the RAID group number column 36A stores the identification number (RAID group number) of the RAID group RG in which the corresponding logical volume is created; and when the corresponding logical volume constitutes some pool PL as a pool volume, the configuration pool number column 36C stores the identification number (pool number) of that pool PL.
  • The RAID level column 36D stores the RAID level of the corresponding RAID group RG, and the storage device rotation speed column 36E stores the number of rotations per unit time of each storage device 20 (in the present embodiment, as described above, a hard disk device) constituting the RAID group RG. Further, the number of storage devices 20 constituting the RAID group RG is stored in the storage device number column 36F, and the capacity of the corresponding logical volume is stored in the logical volume capacity column 36G.
  • Accordingly, in the example of FIG. 4, the RAID group RG “1-1” is a “RAID5” level group composed of “5” storage devices 20 whose rotation speed per unit time is “1500”; a logical volume with logical volume number “00:01:01” and a capacity of “1200” GB and a logical volume with logical volume number “00:01:02” and a capacity of “800” GB are created on it, and both of these logical volumes are assigned to (constitute) the pool PL having pool number “1”.
  • The pool information management table 37 is a table used for managing the configuration information of each pool PL defined in the storage apparatus 3 and, as shown in FIG. 5, includes a pool number column 37A, a RAID group number column 37B, an allocated capacity column 37C, an unallocated capacity column 37D, a threshold excess flag column 37E, and a CLPR number column 37F.
  • The RAID group number column 37B stores the RAID group number of each RAID group RG defined in the storage apparatus 3, and the pool number column 37A stores the identifier (pool number) of the pool PL that includes, as a pool volume, the logical volume provided by the corresponding RAID group RG.
  • The allocated capacity column 37C stores, out of the total capacity of the corresponding RAID group RG, the total capacity of the storage areas already allocated to the virtual volumes VVOL (FIG. 1) associated with the corresponding pool PL, and the unallocated capacity column 37D stores the total capacity of the storage areas not yet allocated to any virtual volume VVOL.
  • The threshold excess flag column 37E stores a threshold excess flag indicating whether or not the storage device usage rate of the corresponding RAID group RG has exceeded a preset threshold value (hereinafter referred to as the storage device usage rate threshold) used in the performance improvement processing described later.
  • the threshold excess flag is initially set to “0” and is updated to “1” when the storage device usage rate of the corresponding RAID group RG exceeds the storage device usage rate threshold.
  • The CLPR number column 37F stores the unique identifier (CLPR number) of the CLPR in the cache memory 27 (FIG. 1) that is allocated as a cache area to the corresponding pool PL.
  • Accordingly, the example of FIG. 5 shows that the pool PL assigned pool number “1” is made up of pool volumes provided by the RAID groups RG assigned RAID group numbers “1-1” and “1-2”; that for the RAID group RG “1-1”, “1600” GB has been allocated to the corresponding virtual volumes VVOL and “0” GB remains unallocated; that its storage device usage rate has already exceeded the storage device usage rate threshold (the value of the threshold excess flag is “1”); and that the CLPR assigned CLPR number “1” in the cache memory 27 is allocated to this pool PL as a cache area.
  • The CLPR management table 38 is a table used for managing the CLPR assigned to each pool PL and, as shown in FIG. 6, includes a CLPR number column 38A, a cache management number column 38B, a cache capacity column 38C, a cache write wait rate column 38D, a temporary allocation flag column 38E, and an allocation destination pool column 38F.
  • The CLPR number column 38A stores the CLPR number of the corresponding CLPR, and the cache management number column 38B stores the management number (cache management number) assigned to each divided area when the corresponding CLPR is divided and managed.
  • the cache capacity column 38C stores the capacity of the corresponding divided area in the corresponding CLPR.
  • The cache write wait rate column 38D stores the ratio of data waiting to be written to the storage devices 20 out of the data temporarily stored in the corresponding CLPR (hereinafter referred to as the cache write wait rate).
  • The temporary allocation flag column 38E stores a temporary lending flag indicating whether or not the corresponding CLPR or CLPR divided area (hereinafter referred to as the CLPR or the like) has been temporarily lent or assigned, in the performance improvement processing described later, to a pool PL other than the pool PL to which it was originally allocated. The temporary lending flag is set to “0” when the corresponding CLPR or the like is not lent or assigned to another pool PL, and to “1” when it is temporarily lent or assigned to another pool PL.
  • The allocation destination pool column 38F stores the pool number of the destination pool PL when the corresponding CLPR or the like has been temporarily lent or assigned to another pool PL.
  • Accordingly, the example of FIG. 6 shows that part of the storage area of the cache memory 27 is divided into three CLPRs assigned the CLPR numbers “CLPR1”, “CLPR2”, and “CLPR3”; that the CLPR called “CLPR2” is divided into divided areas “2-1”, “2-2”, and “2-3”, each with a capacity of “8” GB, with a cache write wait rate of “20%” for the CLPR as a whole; and that the divided area “2-1” is lent to the pool PL with pool number “1” (the value of the temporary allocation flag is “1”) while the other divided areas are not lent to any other pool PL (the value of the temporary allocation flag is “0”).
  • Note that a CLPR for which the character string “CLPR5 (reserved)” is stored in the CLPR number column 38A is a CLPR held in reserve (hereinafter referred to as a spare CLPR).
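  • The lending of a CLPR or divided area recorded by the temporary allocation flag and allocation destination pool columns might be modeled as follows (a sketch with illustrative field names mirroring FIG. 6):

```python
# Sketch of CLPR management table 38 entries and temporary lending.
clpr_table = [
    {"clpr_no": "CLPR2", "mgmt_no": "2-1", "capacity_gb": 8,
     "write_wait_pct": 20, "temp_flag": 1, "dest_pool": 1},
    {"clpr_no": "CLPR2", "mgmt_no": "2-2", "capacity_gb": 8,
     "write_wait_pct": 20, "temp_flag": 0, "dest_pool": None},
]

def lend(entry, pool_no):
    # Temporarily lend this CLPR (or divided area) to another pool PL:
    # set the temporary lending flag and record the destination pool number.
    entry["temp_flag"] = 1
    entry["dest_pool"] = pool_no
```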
  • The batch schedule management table 39 is a table used for managing the schedule of batch processing periodically executed by the host device 2 using the storage device 3 and, as shown in FIG. 7, includes one or a plurality of batch processing columns 39A provided in association with the respective scheduled batch processes.
  • Each batch processing column 39A is divided into a time column 39B, a day-of-week column 39C, a used pool column 39D, a used RAID group column 39E, and an in-time processing completion flag column 39F.
  • The time column 39B and the day-of-week column 39C store the time zone and the days of the week when the corresponding batch processing is scheduled to be executed, and the used pool column 39D and the used RAID group column 39E store, respectively, the pool number of the pool PL and the RAID group numbers of the RAID groups RG used when the batch processing is executed.
  • The in-time processing completion flag column 39F stores a flag indicating whether or not the last executed batch process was completed within the scheduled time (hereinafter referred to as the in-time processing completion flag).
  • The in-time processing completion flag is set to “1” if the batch processing was completed within the scheduled time, and to “0” if it could not be completed within the scheduled time.
  • Accordingly, the example of FIG. 7 shows that the batch process “batch process 1” is executed on “weekdays” in the “0:00 to 3:00” time zone using the pool PL assigned pool number “1” and the RAID groups RG assigned RAID group numbers “1-1” and “1-2” that provide pool volumes PLVOL to that pool, and that the batch process executed last completed within the scheduled time (the value of the in-time processing completion flag is “1”).
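  • A batch schedule entry and the check that triggers the batch improvement function might look like this sketch (field names are illustrative):

```python
# Sketch of a batch schedule management table 39 entry (cf. FIG. 7).
batch_schedule = {
    "batch process 1": {
        "time": "0:00-3:00", "days": "weekday",
        "pool": 1, "raid_groups": ["1-1", "1-2"],
        "in_time_flag": 1,  # 1: last run finished in time, 0: it overran
    },
}

def batches_needing_improvement(schedule):
    # The batch improvement function targets the pools of batch processes
    # whose last execution did not complete within the scheduled time.
    return [name for name, e in schedule.items() if e["in_time_flag"] == 0]
```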
  • FIG. 8 shows a processing procedure for pool performance monitoring processing that is executed regularly or irregularly by the performance monitoring program 30 (FIG. 2).
  • In practice, the performance monitoring program 30 monitors the current performance of each pool PL defined in the storage apparatus 3 according to the processing procedure shown in FIG. 8.
  • When the performance monitoring program 30 starts this pool performance monitoring process, it first refers to the pool information management table 37 (FIG. 5) and selects, from the pools PL defined in the storage apparatus 3, one pool PL not yet processed in step SP2 and subsequent steps (SP1).
  • Next, the performance monitoring program 30 obtains from the pool information management table 37 the RAID group numbers of all RAID groups RG that provide a pool volume PLVOL (FIG. 1) to the pool selected in step SP1 (hereinafter referred to as the target pool PL) (SP2).
  • The performance monitoring program 30 then acquires the performance information of each RAID group RG whose RAID group number was obtained in step SP2 from the RAID group performance information table 35 (FIG. 3) corresponding to that RAID group RG (SP3).
  • The performance information acquired at this time is the record information of each record stored in the RAID group performance information table 35 between the start of the previous pool performance monitoring process and the start of the current one.
  • The performance monitoring program 30 then determines, based on the performance information acquired in step SP3, whether any of the RAID groups RG whose RAID group numbers were obtained in step SP2 has had a storage device usage rate continuously exceeding the above-mentioned storage device usage rate threshold for a certain period (for example, 5 minutes) (SP4). If the performance monitoring program 30 obtains a negative result in this determination, it proceeds to step SP17.
  • If, on the other hand, it obtains a positive result, the performance monitoring program 30 sets to ON (“1”) the threshold excess flag stored in the threshold excess flag column 37E of the pool information management table 37 for each RAID group RG whose storage device usage rate exceeded the storage device usage rate threshold for the certain period (SP5).
  • Next, the performance monitoring program 30 selects, from the RAID groups RG whose threshold excess flag is set to ON in the pool information management table 37, one RAID group RG not yet processed in step SP7 and subsequent steps (SP6).
  • The performance monitoring program 30 then extracts the performance information corresponding to the RAID group RG selected in step SP6 (hereinafter referred to as the target RAID group RG) from the performance information of each RAID group RG acquired in step SP3, and acquires from it the record information of all times at which the storage device usage rate exceeded the storage device usage rate threshold (SP7). For example, in the example of FIG. 3, if the storage device usage rate threshold is “40”%, the performance monitoring program 30 acquires in this step SP7 the record information of the records with serial numbers “4” to “7” and “10”, as shown in FIG. 9.
  • Next, on the premise that batch processing is sequential access (sequential read or sequential write), the performance monitoring program 30 refers to the record information (FIG. 9) of each record acquired in step SP7, calculates the average values of the random read access frequency and the random write access frequency for the entire target pool PL at the times when the storage device usage rate exceeded the storage device usage rate threshold, and determines whether at least one of these calculated average values exceeds the total value, over the target pool PL, of the first IOPS thresholds (the thresholds of random read or random write access frequency defined individually according to the performance of each RAID group RG) (SP8).
  • Assume now that the target pool PL includes a first pool volume PLVOL1 and a second pool volume PLVOL2 provided by a first RAID group RG1 and a third pool volume PLVOL3 provided by a second RAID group RG2; that the performance limit values (the maximum processable numbers of accesses) of the first RAID group RG1 for random read and random write, calculated from the specifications of the storage devices 20 constituting it, are “2110” and “379” as shown in FIG. 11A; and that the performance limit values of the second RAID group RG2 for random read and random write, calculated from the specifications of the storage devices 20 constituting it, are likewise “2110” and “379” as shown in FIG. 11B. Assume also that the first IOPS thresholds for random read and random write are each defined as 80% of the corresponding performance limit value.
  • It is further assumed that the numerical values in the “actual average value” rows of FIGS. 11A and 11B are the average values of the access frequencies of the corresponding access types across the entire target pool PL at the times when the storage device usage rate of the target RAID group RG (here assumed to be the first RAID group RG1) exceeded the storage device usage rate threshold.
  • Here, the average value of the random read access frequency for the entire target pool PL can be taken as the sum of the average random read access frequencies of the RAID groups RG that provide the pool volumes PLVOL constituting the target pool PL: adding “2526”, the average random read access frequency in the first RAID group RG1, to “1264”, the average random read access frequency in the second RAID group RG2, gives “3790”. Likewise, the average random write access frequency for the entire target pool PL is obtained by adding “322”, the average random write access frequency in the first RAID group RG1, to “157”, the average random write access frequency in the second RAID group RG2, giving “480”.
  • Similarly, the first IOPS threshold for random reads as seen across the entire target pool PL can be taken as the sum of the first IOPS thresholds of the RAID groups RG that provide the pool volumes PLVOL constituting the target pool PL: adding “1688”, the first IOPS threshold for random reads in the first RAID group RG1, to “1688”, the first IOPS threshold for random reads in the second RAID group RG2, gives “3376”. The first IOPS threshold for random writes across the entire target pool PL is likewise “606”, obtained by adding “303”, the first IOPS threshold for random writes in the first RAID group RG1, to “303”, the first IOPS threshold for random writes in the second RAID group RG2.
  • In this example, therefore, the performance monitoring program 30 determines whether “3790”, the average random read access frequency for the entire target pool PL, exceeds “3376”, the first IOPS threshold for random reads for the entire target pool PL, or whether “480”, the average random write access frequency for the entire target pool PL, exceeds “606”, the first IOPS threshold for random writes for the entire target pool PL. Since the average random read access frequency for the entire target pool PL exceeds the first IOPS threshold for random reads, the performance monitoring program 30 obtains an affirmative result in this determination and accordingly proceeds to step SP13.
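  • Since pool-wide averages and thresholds are simple sums over the contributing RAID groups, the SP8 check in this example reduces to the following arithmetic (values from FIGS. 11A and 11B as quoted above):

```python
# Worked example of the SP8 check using the quoted figures.
rg = {
    "RG1": {"rr_avg": 2526, "rw_avg": 322, "rr_limit": 2110, "rw_limit": 379},
    "RG2": {"rr_avg": 1264, "rw_avg": 157, "rr_limit": 2110, "rw_limit": 379},
}
FIRST_IOPS_RATIO = 0.8  # first IOPS threshold = 80% of the performance limit

pool_rr_avg = sum(g["rr_avg"] for g in rg.values())  # 3790
pool_rw_avg = sum(g["rw_avg"] for g in rg.values())  # 479 (quoted as "480")
pool_rr_thr = sum(g["rr_limit"] * FIRST_IOPS_RATIO for g in rg.values())  # 3376.0
pool_rw_thr = sum(g["rw_limit"] * FIRST_IOPS_RATIO for g in rg.values())  # 606.4 (quoted as "606")

# Random read exceeds its pool-wide first IOPS threshold, so SP8 is affirmative.
exceeded = pool_rr_avg > pool_rr_thr or pool_rw_avg > pool_rw_thr  # True
```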
  • If, on the other hand, the performance monitoring program 30 obtains a negative result in the determination at step SP8, it acquires all information related to each batch process scheduled in the host device 2 from the batch schedule management table 39 (FIG. 7) (SP9) and, referring to the in-time processing completion flag of each batch process, determines whether all scheduled batch processes were completed within the scheduled time when last executed (SP10).
  • If the performance monitoring program 30 obtains a positive result in this determination, it proceeds to step SP12.
  • If it obtains a negative result, the performance monitoring program 30 calls the batch improvement program 33 (FIG. 2) to execute a batch improvement process that adds a pool volume PLVOL to the target pool PL or increases the capacity of the cache area allocated to the target pool PL so that all batch processing completes within the scheduled time (SP11).
  • Next, the performance monitoring program 30 calculates the average value of the storage device usage rate, for each access type, in the time zones in which no batch processing is scheduled, and determines whether at least one of these calculated averages exceeds the above-mentioned storage device usage rate threshold (SP12). If the performance monitoring program 30 obtains a negative result in this determination, it proceeds to step SP14.
  • When the performance monitoring program 30 obtains a positive result in the determination at step SP12, it causes the access type determination program 31 (FIG. 2) and the performance improvement processing program 32 (FIG. 2) to execute performance improvement processing for improving the performance of the target RAID group RG (SP13).
  • Next, the performance monitoring program 30 determines whether the processing of steps SP7 to SP13 has been executed for all RAID groups RG whose threshold excess flag was set to ON in step SP5 (SP14).
  • If the performance monitoring program 30 obtains a negative result in this determination, it returns to step SP6 and repeats the processing of steps SP6 to SP14 while sequentially switching the RAID group RG selected in step SP6 to another unprocessed RAID group RG.
  • When the performance monitoring program 30 eventually obtains a positive result in step SP14 by completing the processing of steps SP7 to SP13 for all RAID groups RG whose threshold excess flag is set to ON, it calculates, based on the performance information acquired in step SP3, the average cache hit rate for the entire target pool PL and determines whether this average is equal to or greater than a preset threshold value (hereinafter referred to as the cache hit rate threshold) (SP15). If the performance monitoring program 30 obtains a negative result in this determination, it proceeds to step SP17.
  • the performance monitoring program 30 determines whether or not the processing of step SP2 to step SP16 has been executed for all the pools PL in the storage apparatus 3 (SP17).
  • When the performance monitoring program 30 obtains a negative result in this determination, it returns to step SP1 and thereafter repeats the processing of steps SP1 to SP17 while sequentially switching the pool PL selected in step SP1 to another unprocessed pool PL.
  • When the performance monitoring program 30 has finished executing the processing of steps SP2 to SP16 for all pools PL in the storage apparatus 3 and obtains a positive result in step SP17, it ends this pool performance monitoring process.
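  • Taken together, steps SP1 to SP17 form a nested loop over pools and their flagged RAID groups. The skeleton below summarizes that control flow; every helper is a hypothetical stand-in for the corresponding step, with trivial bodies so the sketch is self-contained:

```python
# Skeleton of the pool performance monitoring process (FIG. 8).
def raid_groups_of(pool): return pool["raid_groups"]                  # SP2
def acquire_performance_info(rgs): return {r: [] for r in rgs}        # SP3
def usage_exceeded_continuously(recs): return False                   # SP4/SP5
def over_threshold_records(recs): return recs                         # SP7
def random_load_exceeds_pool_threshold(recs): return False            # SP8
def all_batches_in_time(): return True                                # SP9/SP10
def non_batch_usage_exceeds_threshold(recs): return False             # SP12
def run_batch_improvement(pool): print("SP11: batch improvement")
def run_performance_improvement(pool, rg): print("SP13: improve", rg)
def check_pool_cache_hit_rate(pool, perf): pass                       # SP15/SP16

def pool_performance_monitoring(pools):
    for pool in pools:                                    # SP1 ... SP17
        rgs = raid_groups_of(pool)
        perf = acquire_performance_info(rgs)
        flagged = [r for r in rgs if usage_exceeded_continuously(perf[r])]
        for rg in flagged:                                # SP6 ... SP14
            recs = over_threshold_records(perf[rg])
            if random_load_exceeds_pool_threshold(recs):  # SP8
                run_performance_improvement(pool, rg)     # SP13
            else:
                if not all_batches_in_time():             # SP9/SP10
                    run_batch_improvement(pool)           # SP11
                if non_batch_usage_exceeds_threshold(recs):  # SP12
                    run_performance_improvement(pool, rg)    # SP13
        check_pool_cache_hit_rate(pool, perf)
```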
  • FIG. 12 shows the processing procedure of the access type determination process executed by the access type determination program 31 (FIG. 2) when called in step SP13 of the pool performance monitoring process described above with reference to FIG. 8.
  • When called by the performance monitoring program 30, the access type determination program 31 (FIG. 2) executes, according to the processing procedure shown in FIG. 12, an access type determination process that uses the performance information of the target RAID group RG acquired between the start of the previous pool performance monitoring process and the start of the current one (the performance information acquired in step SP3 of FIG. 8) to determine which of random read, sequential read, random write, and sequential write for the target RAID group RG exceed the first IOPS thresholds preset for each of these access types.
  • In practice, when called by the performance monitoring program 30 in step SP13 of the pool performance monitoring process, the access type determination program 31 starts this access type determination process and first determines, based on the performance information of the target RAID group RG acquired in step SP3 of the pool performance monitoring process (FIG. 8), whether there is only one access type that exceeds the first IOPS threshold (SP20).
  • Specifically, the access type determination program 31 calculates the average access frequency for each access type based on the performance information of the target RAID group RG acquired in step SP3 of the pool performance monitoring process (FIG. 8), compares each calculated average with the first IOPS threshold for that access type, and judges whether only one of random read, sequential read, random write, and sequential write has an average access frequency exceeding its corresponding first IOPS threshold.
  • If only one access type exceeds its threshold, the access type determination program 31 determines whether that access type is a read-related access type (random read or sequential read) (SP21). When the access type determination program 31 obtains a positive result in this determination, it calls the performance improvement processing program 32 (FIG. 2) to execute the first read performance improvement process for improving read performance (SP22), after which it ends the access type determination process and returns to the pool performance monitoring process (FIG. 8).
  • If the access type determination program 31 obtains a negative result in the determination at step SP21, it calls the performance improvement processing program 32 to execute the first write performance improvement process for improving write performance (SP23), then ends the access type determination process and returns to the pool performance monitoring process.
  • If the access type determination program 31 obtains a negative result in the determination at step SP20, it determines whether the access types exceeding the first IOPS threshold are only write-related access types (random write and sequential write) (SP24). If the access type determination program 31 obtains a positive result in this determination, it calls the performance improvement processing program 32 to execute the second write performance improvement process for improving write performance (SP25), then ends this access type determination process and returns to the pool performance monitoring process.
  • If it obtains a negative result in the determination at step SP24, the access type determination program 31 determines whether the access types exceeding the first IOPS threshold are only read-related access types (random read and sequential read) (SP26). When it obtains a positive result in this determination, it calls the performance improvement processing program 32 to execute the second read performance improvement process for improving read performance (SP27), then ends this access type determination process and returns to the pool performance monitoring process.
  • If it obtains a negative result in the determination at step SP26, the access type determination program 31 calls the performance improvement processing program 32 to execute the write/read performance improvement process for improving both write performance and read performance (SP28), then ends this access type determination process and returns to the pool performance monitoring process.
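  • The branching of steps SP20 to SP28 amounts to classifying which access types exceeded their first IOPS thresholds. A compact sketch of that decision logic (assuming at least one type exceeded; names are illustrative):

```python
# Sketch of the access type determination process (FIG. 12, SP20-SP28).
READ_TYPES = {"random_read", "sequential_read"}
WRITE_TYPES = {"random_write", "sequential_write"}

def choose_improvement(avg_iops, first_iops_thr):
    # avg_iops and first_iops_thr are dicts keyed by the four access types.
    over = {t for t in avg_iops if avg_iops[t] > first_iops_thr[t]}
    if len(over) == 1:                     # SP20: exactly one type exceeds
        if over <= READ_TYPES:             # SP21: read-related?
            return "first read performance improvement"    # SP22
        return "first write performance improvement"       # SP23
    if over <= WRITE_TYPES:                # SP24: write-related only
        return "second write performance improvement"      # SP25
    if over <= READ_TYPES:                 # SP26: read-related only
        return "second read performance improvement"       # SP27
    return "write/read performance improvement"            # SP28: both
```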
  • FIG. 13 shows the specific processing content of the first read performance improvement process executed by the performance improvement processing program 32 when called by the access type determination program 31 in step SP22 of the access type determination process described above with reference to FIG. 12.
  • When called in step SP22 of the access type determination process, the performance improvement processing program 32 starts the first read performance improvement process shown in FIG. 13 and first calculates, based on the performance information of the target RAID group RG acquired in step SP3 of the pool performance monitoring process (FIG. 8), the average cache hit rate at the time of random read access and the average cache hit rate at the time of sequential read access, and determines whether at least one of these averages is below the cache hit rate threshold (SP30).
  • When the performance improvement processing program 32 obtains a positive result in this determination, it executes a cache area addition process for allocating an additional cache area (a CLPR or a CLPR divided area) to the target pool PL (SP31), after which it ends the first read performance improvement process and returns to the access type determination process.
  • When it obtains a negative result, the performance improvement processing program 32 executes a pool volume addition process for adding a new pool volume PLVOL to the target pool PL (SP32), after which it likewise ends the first read performance improvement process and returns to the access type determination process.
  • FIG. 14 shows the specific processing procedure of the cache area addition process executed by the performance improvement processing program 32 in step SP31 of the first read performance improvement process described above with reference to FIG. 13.
  • When it proceeds to step SP31 of the first read performance improvement process, the performance improvement processing program 32 starts this cache area addition process and first determines whether a cache area has already been added to the target pool PL (SP40). When the performance improvement processing program 32 obtains a positive result in this determination, it ends this cache area addition process and returns to the read performance improvement process. This prevents cache areas from being added to the target pool PL cumulatively while the processing from step SP6 onward of the pool performance monitoring process (FIG. 8) is executed for each RAID group RG whose threshold excess flag was set to ON in step SP5.
  • If the performance improvement processing program 32 obtains a negative result in the determination at step SP40, it searches the CLPR management table 38 (FIG. 6) for a CLPR or the like (a CLPR or a CLPR divided area) that can be additionally allocated to the target pool PL (SP41). Specifically, the performance improvement processing program 32 searches for a CLPR or the like, other than the CLPR originally assigned to the target pool PL, whose cache write wait rate stored in the cache write wait rate column 38D of the CLPR management table 38 is below a preset threshold value (hereinafter referred to as the cache write wait rate threshold).
  • Next, the performance improvement processing program 32 determines whether a CLPR or the like that can be allocated to the target pool PL was detected by the search in step SP41 (SP42). When it obtains an affirmative result in this determination, it additionally allocates the detected CLPR or the like (or one of the plurality detected in step SP41) to the target pool PL (SP43) and then proceeds to step SP47.
  • If, on the other hand, it obtains a negative result, the performance improvement processing program 32 searches the CLPR management table 38 (FIG. 6) for a spare CLPR that can be additionally allocated to the target pool PL (SP44). Specifically, the performance improvement processing program 32 searches for a CLPR for which the character string “reserved” is stored in the CLPR number column 38A (FIG. 6) of the CLPR management table 38 and whose cache write wait rate is less than the above-described cache write wait rate threshold.
  • the performance improvement processing program 32 determines whether or not a spare CLPR that can be additionally allocated to the target pool PL has been detected by the search in step SP44 (SP45). When the performance improvement processing program 32 obtains a negative result in this determination, it ends this cache area addition processing and returns to the read performance improvement processing.
  • When it obtains a positive result, the performance improvement processing program 32 divides the spare CLPR detected in step SP44 (or, if a plurality of spare CLPRs were detected, one of them) as necessary, and allocates the spare CLPR or one of its divided areas to the target pool PL (SP46).
  • The performance improvement processing program 32 then sets to “1” the temporary allocation flag stored in the temporary allocation flag column 38E (FIG. 6) of the CLPR management table 38 corresponding to the CLPR or the like, or the spare CLPR or its divided area, allocated to the target pool PL in step SP43 or step SP46, and stores the pool number of the target pool PL in the corresponding allocation destination pool column 38F (FIG. 6) (SP47). Thereafter, it ends this cache area addition process and returns to the read performance improvement process.
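  • The search order of this cache area addition process (an existing CLPR of another pool below the cache write wait rate threshold first, then a spare “reserved” CLPR) can be sketched as follows; the table model and field names are illustrative assumptions:

```python
# Sketch of the cache area addition process (FIG. 14, SP40-SP47).
def cache_area_addition(pool, clpr_table, wait_thr):
    if pool.get("cache_added"):                            # SP40
        return None
    # SP41/SP42: a CLPR or the like of another pool whose cache write
    # wait rate is below the threshold.
    found = [e for e in clpr_table
             if not e["reserved"] and e["home_pool"] != pool["no"]
             and e["write_wait_pct"] < wait_thr]
    if not found:                                          # SP44/SP45
        found = [e for e in clpr_table
                 if e["reserved"] and e["write_wait_pct"] < wait_thr]
    if not found:
        return None                                        # nothing to lend
    chosen = found[0]                                      # SP43/SP46
    chosen["temp_flag"], chosen["dest_pool"] = 1, pool["no"]   # SP47
    pool["cache_added"] = True
    return chosen
```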
  • FIG. 15 shows the specific processing procedure of the pool volume addition process executed by the performance improvement processing program 32 in step SP32 of the first read performance improvement process described above with reference to FIG. 13.
  • When it proceeds to step SP32 of the first read performance improvement process, the performance improvement processing program 32 starts this pool volume addition process and first determines whether a pool volume PLVOL has already been added to the target pool PL (SP50). When the performance improvement processing program 32 obtains a positive result in this determination, it ends this pool volume addition process. This prevents pool volumes PLVOL from being added to the target pool PL cumulatively while the processing from step SP6 onward of the pool performance monitoring process (FIG. 8) is executed for each RAID group RG whose threshold excess flag was set to ON in step SP5.
  • If it obtains a negative result, the performance improvement processing program 32 calculates, for the access type targeted at that time, the value obtained by subtracting the second IOPS threshold from the actual average value of the access frequency for the entire target pool PL (“actual average value” minus “second IOPS threshold”) (SP51).
  • The second IOPS threshold is a numerical value preset for each access type as the target value to which the access frequency of that access type should be reduced by the performance improvement processing executed by the performance improvement processing program 32. The second IOPS threshold of each RAID group RG is defined based on the specifications of the storage devices 20 constituting that RAID group RG (for example, 60% of the performance limit value of the RAID group RG), and the second IOPS threshold of a pool PL is calculated, as shown in FIG. 11, as the sum of the second IOPS thresholds of the RAID groups RG that provide the pool volumes PLVOL in that pool PL.
  • Accordingly, in this step SP51, the performance improvement processing program 32 calculates the number of accesses to be reduced for random reads by subtracting “2532”, the second IOPS threshold for random reads, from “3790”, the actual average value of the random read access frequency for the target pool PL, and for sequential reads by subtracting “2348”, the second IOPS threshold for sequential reads, from “1570”, the actual average value of the sequential read access frequency for the target pool PL.
  • Next, the performance improvement processing program 32 searches the pool information management table 37 for an unused or spare RAID group RG (SP52). For example, in the example of FIG. 5, the RAID group RG assigned RAID group number “5-1” is a RAID group RG not assigned to any pool PL (the value in its pool number column 37A is “−”), so it is detected as an unused or spare RAID group RG in this step SP52.
  • The performance improvement processing program 32 then refers to the RAID group information management table 36 (FIG. 4) corresponding to the RAID group RG detected in step SP52 and determines whether a logical volume that can be allocated from that RAID group RG to the target pool PL exists (SP53).
  • Here, a “logical volume that can be allocated from the RAID group RG to the target pool PL” means a pool volume PLVOL provided by a RAID group RG other than the RAID groups RG that provide the pool volumes PLVOL of the target pool PL at that time. The pool volume PLVOL may already have been created on a storage area provided by that RAID group RG, or it may be newly created on that storage area. A further condition is that the actual average values of all access types (see FIGS. 11A and 11B) of the RAID group RG providing the pool volume PLVOL are less than the second IOPS thresholds for all access types.
  • If the performance improvement processing program 32 obtains a negative result in this determination, it proceeds to step SP56. In contrast, if it obtains a positive result in the determination at step SP53, it allocates the logical volume detected in step SP53 to the target pool PL as an additional pool volume PLVOL (SP54).
  • Next, the performance improvement processing program 32 determines whether the actual average value of the target access types (in this case, random read and sequential read) in the target pool PL after the pool volume PLVOL was added in step SP54 is equal to or lower than the second IOPS threshold of the target pool PL after the addition (SP55).
  • the pool volume PLVOL related to the target pool PL is added to the actual average value of the access type targeted by the target pool PL, and the actual average value of the target access type in the pool volume PLVOL added at that time is added.
  • Is the actual average value of the target access type in the target pool PL after this (hereinafter referred to as the actual average value after addition)
  • the second IOPS threshold value of the target access type of the target pool PL is A value obtained by adding the second IOPS threshold value of the target access type in the added pool volume PLVOL at that time is the second of the target access type in the target pool PL after adding the pool volume PLVOL related to the target pool PL.
  • IOPS threshold (hereinafter referred to as the second IOPS threshold after the addition) Since the called), improved performance processing program 32, in step SP55, extra after actual average value, will determine whether it is less than extra after the second IOPS threshold.
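A minimal sketch, in Python, of the SP55 determination under these definitions; all identifiers are illustrative assumptions rather than names from the actual program.

    def within_threshold_after_addition(pool_avg, pool_threshold,
                                        added_avg, added_threshold):
        post_avg = pool_avg + added_avg              # post-addition measured average
        post_thr = pool_threshold + added_threshold  # post-addition second IOPS threshold
        return post_avg <= post_thr

    # Random-read figures from the running example, with an added pool volume
    # assumed to carry no load of its own and a threshold of 1500.
    print(within_threshold_after_addition(3790, 2532, 0, 1500))  # True: 3790 <= 4032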
If the performance improvement processing program 32 obtains a negative result in this determination, it returns to step SP53, and thereafter repeats the processing of steps SP53 to SP55 until it obtains a negative result at step SP53 or a positive result at step SP55. Through this repeated processing, pool volumes PLVOL are sequentially added to the target pool PL.
When the post-addition measured average value of the target access types (in this case, random read and sequential read) eventually becomes equal to or less than the post-addition second IOPS threshold and the performance improvement processing program 32 obtains a positive result in step SP55, it ends this pool volume addition process and returns to the first read performance improvement process (FIG. 13).
On the other hand, if no logical volume that can be allocated to the target pool PL as a pool volume PLVOL remains before the measured average value of the target access types falls to or below the second IOPS threshold, and the performance improvement processing program 32 therefore obtains a negative result in step SP53, it refers to the pool information management table 37 (FIG. 5) and determines whether there is a RAID group RG that does not provide a pool volume PLVOL to the target pool PL, that provides a pool volume PLVOL to a pool PL other than the target pool PL, and whose storage device usage rate is equal to or less than the storage device usage rate threshold (SP56).
If the performance improvement processing program 32 obtains a negative result in this determination, it ends this pool volume addition process and returns to the first read performance improvement process (FIG. 13). In this case, therefore, read performance is not improved by adding a new pool volume PLVOL to the target pool PL.
If the performance improvement processing program 32 obtains a positive result in the determination at step SP56, it refers to the unallocated capacity column 37D of the pool information management table 37 (FIG. 5) corresponding to the RAID group RG detected at step SP56, and determines, from the viewpoint of the unallocated storage capacity of that RAID group RG, whether a new logical volume can be created in the storage area provided by that RAID group RG (SP57).
When the performance improvement processing program 32 obtains a negative result in this determination, it executes the allocated pool volume reassignment process described later with reference to FIG. 16, in which a pool volume PLVOL that the RAID group RG provides to a pool PL other than the target pool PL is deleted from that pool PL and reassigned to the target pool PL (SP58), and thereafter proceeds to step SP61.
If the performance improvement processing program 32 obtains a positive result in the determination at step SP57, it determines whether, when a logical volume to be provided to the target pool PL as a pool volume PLVOL is created on the storage area provided by the RAID group RG detected at step SP56, the access frequency of each access type of that RAID group RG would remain equal to or less than the corresponding second IOPS threshold (SP59).
If the performance improvement processing program 32 obtains a negative result in this determination, it ends this pool volume addition process and returns to the first read performance improvement process (FIG. 13). In this case as well, read performance is not improved by adding a new pool volume PLVOL to the target pool PL.
If the performance improvement processing program 32 obtains a positive result in the determination at step SP59, it creates a new logical volume in the storage area provided by the RAID group RG detected at step SP56, and adds the created logical volume to the target pool PL as a new pool volume PLVOL (SP60).
Next, the performance improvement processing program 32 determines whether the measured average value of the entire target pool PL after the new pool volume PLVOL was added in step SP60 is equal to or less than the second IOPS threshold of the entire target pool PL (SP61). When it obtains a negative result in this determination, it returns to step SP56, and thereafter repeats the processing of steps SP56 to SP61 until it obtains a negative result at step SP56, step SP57, or step SP59, or a positive result at step SP61. Through this repeated processing, new pool volumes PLVOL are sequentially added to the target pool PL.
When the measured average value of the entire target pool PL eventually becomes equal to or less than the second IOPS threshold of the entire target pool PL and a positive result is obtained in step SP61, the performance improvement processing program 32 ends this pool volume addition process and returns to the first read performance improvement process (FIG. 13).
Note that, when a new pool volume PLVOL is added to the target pool PL, a part of the data stored in each pool volume PLVOL that previously constituted the target pool PL may be moved to the newly added pool volume PLVOL. In this way, the load of each RAID group RG that previously provided a pool volume PLVOL to the target pool PL can be distributed to the RAID group RG that newly provides a pool volume PLVOL to the target pool PL, so the response performance of the target pool PL can be improved immediately. The SP53 to SP55 loop described above is sketched below.
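A minimal sketch, in Python, of the FIG. 15 outer loop (steps SP53 to SP55); the `Pool` class and the `find_allocatable_volume` hook are illustrative stand-ins for the management tables described above, not the actual implementation.

    class Pool:
        """Illustrative stand-in for the target pool PL's aggregate figures."""
        def __init__(self, avg, threshold):
            self.avg, self.threshold = avg, threshold
        def add_volume(self, vol):
            self.avg += vol["avg"]              # post-addition measured average
            self.threshold += vol["threshold"]  # post-addition second IOPS threshold

    def add_pool_volumes(pool, find_allocatable_volume):
        while pool.avg > pool.threshold:        # SP55: still above the threshold?
            vol = find_allocatable_volume()     # SP53: eligible logical volume?
            if vol is None:
                return False                    # fall through to SP56
            pool.add_volume(vol)                # SP54: assign as a pool volume
        return True

    pool = Pool(avg=3790, threshold=2532)
    volumes = iter([{"avg": 0, "threshold": 800}, {"avg": 0, "threshold": 700}])
    print(add_pool_volumes(pool, lambda: next(volumes, None)))  # True after two additions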
FIG. 16 shows the specific processing contents of the allocated pool volume reassignment process executed by the performance improvement processing program 32 in step SP58 of the pool volume addition process (FIG. 15) described above. Upon proceeding to step SP58 of the pool volume addition process, the performance improvement processing program 32 starts the allocated pool volume reassignment process shown in FIG. 16. First, referring to the pool information management table 37 (FIG. 5) and the corresponding RAID group performance information tables 35 (FIG. 3), it searches for pools PL other than the target pool PL whose pool volumes PLVOL are all provided by RAID groups RG that do not provide a pool volume PLVOL to the target pool PL and whose storage device usage rate is equal to or less than the storage device usage rate threshold, and acquires the pool information of each such pool PL from the pool information management table 37 (SP70).
Next, the performance improvement processing program 32 determines whether any of the pools PL detected in step SP70 is a pool PL from which a pool volume PLVOL can be deleted from the viewpoint of capacity (SP71). This determination is made by checking whether such a pool PL was detected by the search in step SP70. If such a pool PL was detected, a positive result is obtained in this step SP71; if not, a negative result is obtained.
When the performance improvement processing program 32 obtains a negative result in the determination at step SP71, it ends this allocated pool volume reassignment process and returns to the pool volume addition process (FIG. 15). Accordingly, in this case, no pool volume PLVOL assigned to a pool PL other than the target pool PL can be reassigned to the target pool PL.
If the performance improvement processing program 32 obtains a positive result in the determination at step SP71, it refers to the corresponding RAID group performance information table 35 (FIG. 3) and determines whether, among the pools PL detected at step SP70 from which a pool volume PLVOL can be deleted from the viewpoint of capacity, there is a pool PL from which a pool volume PLVOL can also be deleted from the viewpoint of performance (SP72). This determination is made by checking, for each pool PL other than the target pool PL, whether there is a pool volume PLVOL such that, even if it were deleted from that pool PL, the measured average value of the pool PL after the deletion would remain equal to or less than the second IOPS threshold of the pool PL as a whole. If such a pool PL can be detected, a positive result is obtained in this step SP72; if not, a negative result is obtained.
When the performance improvement processing program 32 obtains a negative result in the determination at step SP72, it ends this allocated pool volume reassignment process and returns to the pool volume addition process (FIG. 15). In this case as well, no pool volume PLVOL constituting a pool PL other than the target pool PL can be reassigned to the target pool PL.
If the performance improvement processing program 32 obtains a positive result in the determination at step SP72, it selects, from the pool volumes PLVOL assigned to the pool PL detected at step SP72, a pool volume that can be reassigned to the target pool PL (hereinafter referred to as the reassignment target pool volume). After the data stored in the reassignment target pool volume PLVOL has been migrated to the other pool volumes PLVOL constituting that pool PL, the program deletes the reassignment target pool volume PLVOL from the pool PL (that is, it releases the allocation of the reassignment target pool volume PLVOL to the pool PL) (SP73).
Next, the performance improvement processing program 32 adds the reassignment target pool volume PLVOL to the target pool PL (SP74), and thereafter, in the same way as step SP55 of the pool volume addition process described above with reference to FIG. 15, determines whether the measured average value of the target access types (random read and sequential read here) in the entire target pool PL after the reassignment target pool volume PLVOL was added in step SP74 is equal to or less than the second IOPS threshold of the target pool PL (SP75).
If the performance improvement processing program 32 obtains a negative result in this determination, it returns to step SP71, and thereafter repeats the processing of steps SP71 to SP75 until it obtains a negative result at step SP72 or a positive result at step SP75.
When the measured average value of the target access types in the entire target pool PL eventually becomes equal to or less than the second IOPS threshold of the target pool PL and the performance improvement processing program 32 obtains a positive result in step SP75, it ends this allocated pool volume reassignment process and returns to the pool volume addition process (FIG. 15). The donor-side eligibility test of steps SP71 and SP72 is sketched below.
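A minimal sketch, in Python, of the SP71/SP72 test; the field names and the way the deleted volume's contribution is modeled are assumptions for illustration, not the actual table layout.

    def can_donate(pool, vol):
        # SP71: enough capacity remains after giving up the volume.
        capacity_ok = pool["used"] <= pool["capacity"] - vol["capacity"]
        # SP72: the measured average stays within the shrunken overall threshold.
        performance_ok = pool["avg"] <= pool["threshold"] - vol["threshold"]
        return capacity_ok and performance_ok

    donor = {"used": 600, "capacity": 1000, "avg": 900, "threshold": 2000}
    vol = {"capacity": 200, "threshold": 800}
    print(can_donate(donor, vol))  # True: capacity and performance headroom remain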
FIG. 17 shows the specific processing contents of the second write performance improvement process executed by the performance improvement processing program 32 when called by the access type determination program 31 in step SP25 of the access type determination process described above with reference to FIG. 12.
When called in step SP25 of the access type determination process, the performance improvement processing program 32 starts the second write performance improvement process shown in FIG. 17, and first determines whether, in the target RAID group RG, the random write access load is higher than the sequential write access load (SP80). This determination is made by judging whether the value obtained by subtracting the second IOPS threshold from the average value of the random write access frequency is greater than the value obtained by subtracting the second IOPS threshold from the average value of the sequential write access frequency.
For example, if the average value of the random write access frequency is "90", the average value of the sequential write access frequency is "200", and the second IOPS threshold for random write is "227", the value obtained by subtracting the second IOPS threshold for random write from the average value of the random write access frequency is "-137", and the value obtained by subtracting the second IOPS threshold for sequential write from the average value of the sequential write access frequency is "-636" (see the "(measured average value) - (second IOPS threshold)" row in FIG. 10). Since "-137" is greater than "-636", in this example the load due to random write is determined to be greater than the load due to sequential write, as sketched below.
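A minimal sketch, in Python, of this SP80 comparison; the sequential write threshold of "836" is an assumption inferred so that the subtraction yields the "-636" quoted above, since the text does not state that threshold explicitly.

    random_write_excess = 90 - 227       # measured average - second IOPS threshold = -137
    sequential_write_excess = 200 - 836  # assumed threshold 836, giving -636
    heavier = ("random write" if random_write_excess > sequential_write_excess
               else "sequential write")
    print(heavier)  # random write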
If the performance improvement processing program 32 obtains a positive result in this determination, it acquires the random write access frequency of the target RAID group RG from the performance information of the target RAID group RG acquired in step SP3 of the pool performance monitoring process (FIG. 8) (SP81). Further, it executes the pool volume addition process described above with reference to FIG. 15 to improve write performance by adding a pool volume PLVOL to the target pool PL (SP83), then ends this second write performance improvement process and returns to the access type determination process (FIG. 12).
If the performance improvement processing program 32 obtains a negative result in the determination at step SP80, it acquires the sequential write access frequency of the target RAID group RG from the performance information of the target RAID group RG acquired in step SP3 of the pool performance monitoring process (FIG. 8) (SP82). Further, it executes the pool volume addition process described above with reference to FIG. 15 to improve write performance by adding a pool volume PLVOL to the target pool PL (SP83), then ends this second write performance improvement process and returns to the access type determination process.
FIG. 18 shows the specific processing contents of the second read performance improvement process executed by the performance improvement processing program 32 called in step SP27 of the access type determination process described above with reference to FIG. 12.
When called by the access type determination program 31 (FIG. 2) in step SP27 of the access type determination process, the performance improvement processing program 32 starts the second read performance improvement process shown in FIG. 18, and first calculates the average values of the cache hit rate at the time of random read access and at the time of sequential read access, based on the performance information of the target RAID group RG acquired in step SP3 of the pool performance monitoring process (FIG. 8) (SP80).
Next, the performance improvement processing program 32 determines whether both the average value of the cache hit rate at the time of random read access and the average value of the cache hit rate at the time of sequential read access calculated at step SP80 are equal to or less than the above-described cache hit rate threshold (SP81).
When the performance improvement processing program 32 obtains a positive result in this determination, it executes the cache area addition process described above with reference to FIG. 14 to improve the cache hit rate at the time of random read access and sequential read access by adding to the cache area allocated to the target pool PL (SP82), then ends this second read performance improvement process and returns to the access type determination process (FIG. 12).
If the performance improvement processing program 32 obtains a negative result in the determination at step SP81, it determines whether both the average value of the cache hit rate at the time of random read access and the average value of the cache hit rate at the time of sequential read access calculated at step SP80 are equal to or greater than the above-described cache hit rate threshold (SP83).
Obtaining a negative result in this determination means that only one of the average value of the cache hit rate during random read access and the average value of the cache hit rate during sequential read access is equal to or greater than the cache hit rate threshold.
In this case, the performance improvement processing program 32 identifies whether the access type whose average cache hit rate calculated in step SP80 is equal to or less than the cache hit rate threshold is random read or sequential read (SP84).
Next, the performance improvement processing program 32 executes the cache area addition process described above with reference to FIG. 14 to improve the cache hit rates of random read and sequential read by adding to the cache area allocated to the target pool PL (SP85).
The performance improvement processing program 32 thereafter calculates the average access frequency of the access type (random read or sequential read) identified in step SP84, based on the performance information of the target RAID group RG acquired in step SP3 of the pool performance monitoring process (FIG. 8) (SP86).
Then the performance improvement processing program 32 executes the pool volume addition process described above with reference to FIG. 15 to add a pool volume PLVOL to the target pool PL using the average value calculated in step SP86 (SP90), and thereafter ends this second read performance improvement process.
On the other hand, obtaining a positive result in the determination at step SP83 means that both the average value of the cache hit rate at the time of random read access and the average value of the cache hit rate at the time of sequential read access are equal to or greater than the cache hit rate threshold.
In this case, the performance improvement processing program 32 determines whether the load due to random read is higher than the load due to sequential read in the target RAID group RG, in the same way as step SP80 of the second write performance improvement process described above with reference to FIG. 17 (SP87).
If the performance improvement processing program 32 obtains a positive result in this determination, it calculates the average value of the random read access frequency based on the performance information of the target RAID group RG acquired in step SP3 of the pool performance monitoring process (FIG. 8) (SP88). If it obtains a negative result in the determination at step SP87, it calculates the average value of the sequential read access frequency based on the performance information of the target RAID group RG acquired in step SP3 of the pool performance monitoring process (FIG. 8) (SP89).
Then the performance improvement processing program 32 executes the pool volume addition process described above with reference to FIG. 15 to add a pool volume PLVOL to the target pool PL using the average value calculated in step SP88 or step SP89 (SP90), and thereafter ends this second read performance improvement process. The overall branching of this process is sketched below.
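A minimal sketch, in Python, of the FIG. 18 branching (steps SP81 to SP90); the hit rates, threshold, and returned action labels are illustrative assumptions that merely name the paths described above.

    def second_read_plan(random_hit, sequential_hit, hit_threshold=0.5):
        low = [name for name, rate in (("random_read", random_hit),
                                       ("sequential_read", sequential_hit))
               if rate <= hit_threshold]
        if len(low) == 2:
            return ("add_cache",)                            # SP81 positive -> SP82
        if len(low) == 1:
            return ("add_cache", "add_pool_volume", low[0])  # SP84-SP86, SP90
        return ("compare_read_loads",)                       # SP83 positive -> SP87-SP90

    print(second_read_plan(0.3, 0.4))  # ('add_cache',)
    print(second_read_plan(0.3, 0.8))  # ('add_cache', 'add_pool_volume', 'random_read')
    print(second_read_plan(0.7, 0.8))  # ('compare_read_loads',)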
FIGS. 19A and 19B show the specific processing contents of the read/write performance improvement process executed by the performance improvement processing program 32 called in step SP28 of the access type determination process described above with reference to FIG. 12.
When called by the access type determination program 31 (FIG. 2) in step SP28 of the access type determination process, the performance improvement processing program 32 starts the read/write performance improvement process shown in FIGS. 19A and 19B, and first calculates the average value of the random read access frequency and the average value of the sequential read access frequency per unit time, based on the performance information of the target RAID group RG acquired in step SP3 of the pool performance monitoring process (FIG. 8). Then, based on the calculation results, it determines whether only one of random read and sequential read has an average value exceeding the corresponding first IOPS threshold (SP100).
If the performance improvement processing program 32 obtains a positive result in this determination, it identifies which of random read and sequential read has the average value, calculated in step SP100, that exceeds the corresponding first IOPS threshold, and calculates the average value of the cache hit rate of the identified random read or sequential read (SP101).
Next, the performance improvement processing program 32 determines whether the cache hit rate calculated in step SP101 is equal to or less than the cache hit rate threshold (SP102).
If the cache hit rate is equal to or less than the cache hit rate threshold, the performance improvement processing program 32 searches the CLPR management table 38 (FIG. 6) for a CLPR or the like that can be additionally assigned to the target pool PL, in the same manner as in step SP41 of the cache area addition process described above with reference to FIG. 14 (SP103).
Next, the performance improvement processing program 32 determines whether a CLPR or the like that can be additionally assigned to the target pool PL was detected by the search in step SP103 (SP104). When it obtains a positive result in this determination, it assigns the CLPR or the like detected at step SP103 (when a plurality of CLPRs or the like were detected at step SP103, one of them) to the target pool PL (SP105), and then proceeds to step SP109.
When the performance improvement processing program 32 obtains a negative result in the determination at step SP104, it searches the CLPR management table 38 (FIG. 6) for a spare CLPR that can be additionally allocated to the target pool PL, in the same manner as at step SP44 of the cache area addition process (FIG. 14) (SP106).
Next, the performance improvement processing program 32 determines whether a spare CLPR that can be additionally allocated to the target pool PL was detected by the search in step SP106 (SP107). If it obtains a negative result in this determination, it proceeds to step SP110.
If the performance improvement processing program 32 obtains a positive result in the determination at step SP107, it assigns the spare CLPR detected at step SP106 (when a plurality of spare CLPRs were detected at step SP106, one of them) to the target pool PL (SP108).
Then the performance improvement processing program 32 sets the temporary assignment flag stored in the temporary assignment flag column 38E (FIG. 6) of the CLPR management table 38 (FIG. 6) corresponding to the CLPR or the like or the spare CLPR assigned to the target pool PL in step SP105 or step SP108 to ON ("1") (SP109).
Next, the performance improvement processing program 32 identifies which of random write and sequential write has the higher load, based on the performance information of the target RAID group RG acquired in step SP3 of the pool performance monitoring process (FIG. 8), the second IOPS threshold for random write, and the second IOPS threshold for sequential write (SP110). Specifically, this is done by comparing the value obtained by subtracting the second IOPS threshold from the average value of the random write access frequency with the value obtained by subtracting the second IOPS threshold from the average value of the sequential write access frequency, and identifying the larger one.
Then the performance improvement processing program 32 acquires only the performance information related to the random write or sequential write identified in step SP110 from the performance information of the target RAID group RG acquired in step SP3 of the pool performance monitoring process (FIG. 8), and executes the pool volume addition process described above with reference to FIG. 15 using the acquired performance information (SP124). The performance improvement processing program 32 thereafter ends this read/write performance improvement process and returns to the access type determination process (FIG. 12).
On the other hand, if the performance improvement processing program 32 obtains a negative result in the determination at step SP102, it detects, based on the performance information of the target RAID group RG acquired at step SP3 of the pool performance monitoring process (FIG. 8), all access types whose average access frequency exceeds the first IOPS threshold among the access types (random read, sequential read, random write, and sequential write) (SP111).
Next, the performance improvement processing program 32 identifies the access type with the highest load among the access types detected in step SP111 (SP112). This is done by identifying the access type with the largest value obtained by subtracting the second IOPS threshold from the average access frequency of each access type detected in step SP111.
Then the performance improvement processing program 32 acquires the performance information related to the access type identified in step SP112 from the performance information of the target RAID group RG acquired in step SP3 of the pool performance monitoring process (FIG. 8) (SP113), and executes the pool volume addition process described above with reference to FIG. 15 using the acquired performance information (SP124). The performance improvement processing program 32 thereafter ends this read/write performance improvement process and returns to the access type determination process (FIG. 12). The selection in steps SP111 and SP112 is sketched below.
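A minimal sketch, in Python, of steps SP111 and SP112: collect every access type whose average frequency exceeds its first IOPS threshold, then pick the one with the largest excess over the second IOPS threshold. All numbers are illustrative assumptions.

    avg = {"random_read": 3000, "sequential_read": 900,
           "random_write": 2600, "sequential_write": 400}
    first_thr = {"random_read": 2500, "sequential_read": 2000,
                 "random_write": 2200, "sequential_write": 1800}
    second_thr = {"random_read": 2532, "sequential_read": 2348,
                  "random_write": 2300, "sequential_write": 2100}

    over_first = [t for t in avg if avg[t] > first_thr[t]]           # SP111
    busiest = max(over_first, key=lambda t: avg[t] - second_thr[t])  # SP112
    print(over_first, busiest)  # ['random_read', 'random_write'] random_read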
If the performance improvement processing program 32 obtains a negative result in the determination at step SP100, it calculates the average values of the cache hit rate at the time of random read access and at the time of sequential read access, based on the performance information of the target RAID group RG acquired at step SP3 of the pool performance monitoring process (FIG. 8) (SP114).
Next, the performance improvement processing program 32 determines whether both the average value of the cache hit rate at the time of sequential read access and the average value of the cache hit rate at the time of random read access calculated in step SP114 are equal to or greater than the cache hit rate threshold (SP115).
Obtaining a positive result in this determination means that the capacity of the cache area allocated to the target pool PL is not insufficient. In this case, the performance improvement processing program 32 proceeds to step SP111, and thereafter executes steps SP111 to SP113 and step SP124 as described above. It then ends this read/write performance improvement process and returns to the access type determination process (FIG. 12).
On the other hand, obtaining a negative result in the determination at step SP115 means that the capacity of the cache area allocated to the target pool PL is insufficient (SP116).
In this case, the performance improvement processing program 32 determines whether both the average values of the cache hit rate at the time of random read access and at the time of sequential read access calculated at step SP114 are equal to or less than the cache hit rate threshold (SP119).
When the performance improvement processing program 32 obtains a positive result in this determination, it additionally allocates a cache area to the target pool PL by executing the cache area addition process described above with reference to FIG. 14 (SP120).
Next, the performance improvement processing program 32 identifies which of random read and sequential read has the higher load, based on the performance information of the target RAID group RG acquired in step SP3 of the pool performance monitoring process (FIG. 8), and acquires its performance information (SP121). Specifically, it compares the value obtained by subtracting the second IOPS threshold from the average value of the random read access frequency with the value obtained by subtracting the second IOPS threshold from the average value of the sequential read access frequency, and then acquires, from the performance information of the target RAID group RG acquired at step SP3 of the pool performance monitoring process, only the performance information related to the identified random read or sequential read.
Then the performance improvement processing program 32 executes the pool volume addition process described above with reference to FIG. 15 using the performance information acquired in step SP121 (SP124), and thereafter ends this read/write performance improvement process and returns to the access type determination process.
When the performance improvement processing program 32 obtains a negative result in the determination at step SP119, it additionally allocates a cache area to the target pool PL by executing the cache area addition process described above with reference to FIG. 14 (SP122).
Next, the performance improvement processing program 32 identifies which of random read and random write has the higher load, and acquires the performance information regarding that random read or random write in the target RAID group RG (SP123).
Then the performance improvement processing program 32 executes the pool volume addition process described above with reference to FIG. 15 using the performance information acquired in step SP123 (SP124), and thereafter ends this read/write performance improvement process and returns to the access type determination process.
FIG. 20 shows the specific processing contents of the batch improvement process executed by the batch improvement program 33 (FIG. 2) called by the performance monitoring program 30 in step SP11 of the pool performance monitoring process described above with reference to FIG. 8. In the following description, it is assumed that read/write access during batch processing is sequential access, as described above.
When called by the performance monitoring program 30 (FIG. 2) in step SP11 of the pool performance monitoring process, the batch improvement program 33 starts the batch improvement process shown in FIG. 20, and first selects one batch process from the batch processes registered in the batch schedule management table 39 (FIG. 7) (SP130).
Next, the batch improvement program 33 refers to the batch schedule management table 39 and determines whether the target RAID group RG is used when the batch process selected in step SP130 (hereinafter referred to as the target batch process) is executed (SP131).
If the batch improvement program 33 obtains a negative result in this determination, it returns to step SP130. On the other hand, if it obtains a positive result in the determination at step SP131, it determines whether the sequential read access frequency in the target RAID group RG is equal to or higher than the first IOPS threshold during the execution time zone of the target batch process (SP132).
Specifically, the batch improvement program 33 first acquires the execution time zone (the time zone from the start time to the end time) of the target batch process from the batch schedule management table 39, calculates the average value of the sequential read access frequency in that execution time zone based on the performance information of the target RAID group RG acquired in step SP3 of the pool performance monitoring process, and determines whether the calculated average value is equal to or higher than the first IOPS threshold for sequential read, as sketched below.
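A minimal sketch, in Python, of this SP132 check; the sample layout, time stamps, and threshold are assumptions for illustration.

    from datetime import time

    samples = [(time(1, 0), 1200), (time(2, 0), 2600), (time(3, 0), 2900),
               (time(9, 0), 400)]          # (timestamp, sequential-read IOPS)
    start, end = time(2, 0), time(4, 0)    # execution time zone from table 39

    in_window = [iops for t, iops in samples if start <= t <= end]
    avg_in_window = sum(in_window) / len(in_window)  # (2600 + 2900) / 2 = 2750.0
    first_iops_threshold = 2500
    print(avg_in_window >= first_iops_threshold)     # True -> proceed to SP134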
When the batch improvement program 33 obtains a negative result in the determination at step SP132, it sets the performance improvement process described above with reference to FIGS. 12 to 19B to be executed as a batch pre-process before the next execution of the target batch process (SP133), and thereafter proceeds to step SP137. As a result, the performance improvement process is executed before the start of the target batch process, whereby an additional cache area or pool volume is allocated to the target pool PL.
On the other hand, if the batch improvement program 33 obtains a positive result in the determination at step SP132, it determines whether the cache hit rate at the time of sequential read access and sequential write access in the execution time zone of the target batch process is equal to or less than the cache hit rate threshold (SP134). Specifically, at step SP134, the batch improvement program 33 calculates the average value of the cache hit rate at the time of sequential read access in the execution time zone of the target batch process, using the performance information of the target RAID group RG acquired in step SP3 of the pool performance monitoring process, and determines whether this average value is equal to or less than the cache hit rate threshold during the target batch process.
If the batch improvement program 33 obtains a negative result in this determination, it sets the pool volume addition process described above with reference to FIG. 15 to be executed by the performance improvement processing program 32 as a batch pre-process before the next execution of the target batch process (SP135), and thereafter proceeds to step SP137. As a result, the pool volume addition process is executed by the performance improvement processing program 32 before the start of the target batch process, whereby a pool volume PLVOL is additionally allocated to the target pool PL.
On the other hand, if the batch improvement program 33 obtains a positive result in the determination at step SP134, it sets the cache area addition process described above with reference to FIG. 14 to be executed by the performance improvement processing program 32 as a batch pre-process before the next execution of the target batch process (SP136). As a result, the cache area addition process is executed by the performance improvement processing program 32 before the start of the target batch process, whereby cache capacity (a CLPR divided area or a spare cache area) is added to the target pool PL.
Next, the batch improvement program 33 determines whether the processing of steps SP131 to SP136 has been executed for all the batch processes registered in the batch schedule management table 39 (FIG. 7) (SP137).
If the batch improvement program 33 obtains a negative result in this determination, it returns to step SP130, and thereafter repeats the processing of steps SP130 to SP137 while sequentially switching the batch process selected in step SP130 to another unprocessed batch process.
When the batch improvement program 33 finally obtains a positive result in step SP137 by completing the processing of steps SP131 to SP136 for all the batch processes registered in the batch schedule management table 39, it ends this batch improvement process and returns to the pool performance monitoring process (FIG. 8).
FIG. 21 shows the processing procedure of the batch post-process that is executed every time one batch process is completed in relation to the batch improvement process. This batch post-process releases the CLPR or the like additionally assigned to the pool PL for the batch process and, based on whether the current batch process was completed within the scheduled time, sets the capacity of the cache area to be added the next time a CLPR or the like is added.
In practice, when one batch process is completed, the batch improvement program 33 starts the batch post-process, and first determines whether a CLPR or the like was additionally assigned to the pool PL used in the batch process (SP140). If the batch improvement program 33 obtains a negative result in this determination, it proceeds to step SP142.
When the batch improvement program 33 obtains a positive result in the determination at step SP140, it releases the CLPR or the like added to the pool PL used in the batch process (SP141). Thereafter, referring to the in-time processing completion flag stored in the in-time processing completion flag column 39F (FIG. 7) of the batch schedule management table 39 corresponding to the batch process, it determines whether the batch process was completed within the scheduled time (SP142).
If the batch process was completed within the scheduled time, the batch improvement program 33 sets the capacity of the cache area to be added when a cache area is additionally allocated to the corresponding pool PL at the next execution of the batch process to the same capacity as this time (or to the prescribed capacity if no cache area was added in the current process) (SP143), and then ends the batch post-process.
On the other hand, if the batch process was not completed within the scheduled time, the batch improvement program 33 sets the capacity to be added when a cache area is additionally allocated to the corresponding pool PL at the next execution of the batch process to a capacity larger by a predetermined amount than this time (or than the prescribed capacity if no cache area was added in the current process) (SP144), and then ends the batch post-process. This sizing rule is sketched below.
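A minimal sketch, in Python, of the SP143/SP144 sizing rule; `DEFAULT_CAPACITY` and `INCREMENT` are illustrative assumptions standing in for the prescribed capacity and the predetermined amount.

    DEFAULT_CAPACITY = 4   # GiB, prescribed capacity when nothing was added
    INCREMENT = 2          # GiB, the "predetermined amount"

    def next_cache_capacity(added_this_run, finished_in_time):
        base = added_this_run if added_this_run else DEFAULT_CAPACITY
        return base if finished_in_time else base + INCREMENT  # SP143 / SP144

    print(next_cache_capacity(8, True))    # 8: same capacity next time
    print(next_cache_capacity(0, False))   # 6: prescribed capacity + increment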
FIG. 22 shows the specific processing contents of the cache return process executed by the performance monitoring program 30 in step SP15 of the pool performance monitoring process described above with reference to FIG. 8.
Upon proceeding to step SP15 of the pool performance monitoring process, the performance monitoring program 30 starts the cache return process shown in FIG. 22, and first determines, with reference to the CLPR management table 38 (FIG. 6), whether a CLPR or the like has been additionally assigned to the target pool PL (SP150). When the performance monitoring program 30 obtains a negative result in this determination, it ends this cache return process and returns to the pool performance monitoring process.
If the performance monitoring program 30 obtains a positive result in the determination at step SP150, it refers to the CLPR management table 38 (FIG. 6) and determines whether the write wait rate of the cache is equal to or less than a predetermined threshold (hereinafter referred to as the cache write wait rate threshold) (SP151).
If the write wait rate is equal to or less than the cache write wait rate threshold, the performance monitoring program 30 sequentially erases, from the CLPR or the like originally assigned to the target pool PL, data of the same capacity as the CLPR or the like additionally assigned to the target pool PL, in order from the oldest time stamp, and sequentially moves the data that is stored in the CLPR or the like additionally assigned to the target pool PL and that has not yet been written to the storage devices 20 to the CLPR or the like originally assigned to the target pool PL (SP152).
Next, the performance monitoring program 30 determines whether all of the data that was stored in the CLPR or the like additionally assigned to the target pool PL and had not been written to the storage devices 20 could be moved to the CLPR or the like originally assigned to the target pool PL by the processing of step SP152 (SP153). When the performance monitoring program 30 obtains a negative result in this determination, it ends this cache return process and returns to the pool performance monitoring process.
If the performance monitoring program 30 obtains a positive result in the determination at step SP153, it releases the CLPR or the like temporarily and additionally assigned to the target pool PL; when the CLPR or the like is a CLPR divided area borrowed from another pool PL, it returns the divided area to the lending source pool PL (SP154).
Then the performance monitoring program 30 updates the temporary assignment flag of the CLPR or the like released in step SP154 in the CLPR management table 38 (FIG. 6) to OFF ("0"), clears the value stored in the corresponding assignment destination pool column 38F (SP155), and then ends the cache return process.
If the performance monitoring program 30 obtains a negative result in the determination at step SP151, it increments the number of repetitions by "1" (SP156) and determines whether this number of repetitions has reached a predetermined threshold (hereinafter referred to as the repetition count threshold) (SP157).
If the performance monitoring program 30 obtains a negative result in this determination, it waits for a certain period (for example, several minutes) (SP158) and then returns to step SP151. The performance monitoring program 30 thereafter repeats the processing of steps SP151 to SP158 until a positive result is obtained at step SP151 or step SP157. In this way, when the write load of the target pool PL has temporarily increased, it is possible to wait for the write load to decrease.
When the performance monitoring program 30 eventually obtains a positive result at step SP151, it executes steps SP152 to SP155 as described above. In contrast, if it obtains a negative result in the determination at step SP157, it ends this cache return process. This retry loop is sketched below.
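A minimal sketch, in Python, of the SP151/SP156 to SP158 retry loop; the probe `cache_write_wait_rate`, the retry count, and the wait interval are hypothetical placeholders, not values from the actual program.

    import time

    def try_cache_return(cache_write_wait_rate, wait_rate_threshold,
                         repeat_threshold=5, interval_sec=180):
        # SP156/SP157: bound the retries by the repetition count threshold.
        for _ in range(repeat_threshold):
            if cache_write_wait_rate() <= wait_rate_threshold:  # SP151
                return True   # proceed to SP152-SP155: migrate dirty data, release CLPR
            time.sleep(interval_sec)                            # SP158: wait several minutes
        return False          # SP157 reached: give up, write load stayed high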
As described above, in the present storage apparatus 3, write performance is improved by adding a new pool volume PLVOL, while read performance is improved by adding capacity to the cache area allocated to the pool PL and/or by adding a new pool volume PLVOL to the pool PL. Therefore, according to the present storage apparatus 3, optimum performance improvement according to the status of the pool PL can be performed easily.
In the embodiment described above, the access types are divided into the four types of random read, sequential read, random write, and sequential write, and the access frequency and cache hit rate are acquired for each of these four types. However, the present invention is not limited to this; the access types may instead be divided into the two types of read and write for each RAID group RG, the access frequency and cache hit rate may be acquired for each of these two types, and performance improvement may be performed using the acquired information.
Further, in the embodiment described above, read performance is improved by adding capacity to the cache area allocated to the pool PL and/or by adding a new pool volume PLVOL to the pool PL, and write performance is improved by adding a new pool volume PLVOL to the pool PL. However, the present invention is not limited to this, and more optimal performance improvement may be performed by methods other than adding cache area capacity or adding a new pool volume PLVOL to the pool PL.
The present invention can be applied to a storage apparatus equipped with a virtualization function.

Abstract

The problem addressed by the present invention is to provide a highly reliable storage apparatus, and a control method therefor, capable of easily and optimally improving performance according to the conditions of the storage apparatus. The solution according to the invention is a storage apparatus in which pools are formed from one or more logical volumes provided by one or more storage device groups and which, in response to a request from a host device to write to a virtual volume, allocates a storage area to the virtual volume from a pool associated with the virtual volume. The cache hit rate for read operations for each storage device group and the read and write access frequencies for each storage device group are acquired; if the cache hit rate for read operations for a storage device group falls to or below a first threshold, the capacity of the cache area allocated to the pool to which that storage device group provides logical volumes is increased, and if the read or write access frequency for a storage device group exceeds a second threshold, a new logical volume is added to the pool.
PCT/JP2016/075734 2016-09-01 2016-09-01 Unité de stockage et son procédé de commande WO2018042608A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2016/075734 WO2018042608A1 (fr) 2016-09-01 2016-09-01 Unité de stockage et son procédé de commande

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2016/075734 WO2018042608A1 (fr) 2016-09-01 2016-09-01 Unité de stockage et son procédé de commande

Publications (1)

Publication Number Publication Date
WO2018042608A1 true WO2018042608A1 (fr) 2018-03-08

Family

ID=61300449

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/075734 WO2018042608A1 (fr) 2016-09-01 2016-09-01 Unité de stockage et son procédé de commande

Country Status (1)

Country Link
WO (1) WO2018042608A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008217575A (ja) * 2007-03-06 2008-09-18 Nec Corp ストレージ装置及びその構成最適化方法
WO2012081089A1 (fr) * 2010-12-15 2012-06-21 株式会社日立製作所 Dispositif de gestion et procédé de gestion d'un système informatique
JP2015517697A (ja) * 2012-05-23 2015-06-22 株式会社日立製作所 二次記憶装置に基づく記憶領域をキャッシュ領域として用いるストレージシステム及び記憶制御方法
WO2016075779A1 (fr) * 2014-11-12 2016-05-19 株式会社日立製作所 Système informatique et dispositif de stockage


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16915168

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16915168

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP