JP2001067187A - Storage sub-system and its control method - Google Patents

Storage sub-system and its control method

Info

Publication number
JP2001067187A
JP2001067187A (application JP24271399A)
Authority
JP
Japan
Prior art keywords
class
storage
storage area
logical
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP24271399A
Other languages
Japanese (ja)
Other versions
JP3541744B2 (en)
Inventor
Hiroharu Arai
Takashi Arakawa
Kazuhiko Mogi
Kenji Yamakami
Original Assignee
Hitachi Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi Ltd
Priority to JP24271399A
Publication of JP2001067187A
Application granted
Publication of JP3541744B2
Anticipated expiration
Legal status: Expired - Fee Related

Abstract

(57) [Summary] [PROBLEM] To provide a storage subsystem, and a control method therefor, that simplify the work required for a user or maintenance person of the storage subsystem to optimize the physical arrangement by relocating storage areas. [SOLUTION] The storage subsystem (200) groups its storage devices (500) into a plurality of sets (classes 600), each having attributes, and determines a suitable relocation destination class based on the class attributes.

Description

DETAILED DESCRIPTION OF THE INVENTION

[0001]

1. Field of the Invention: The present invention relates to a storage subsystem having a plurality of storage devices, and to a control method therefor.

[0002]

2. Description of the Related Art: In a computer system, a disk array system is one of the secondary storage systems that realize high performance. A disk array system arranges a plurality of disk devices in an array; data to be read or written is divided and stored across the disk devices, and is read and written at high speed by operating the disk devices in parallel. A well-known paper on disk array systems is D. A. Patterson, G. Gibson, and R. H. Katz, "A Case for Redundant Arrays of Inexpensive Disks (RAID)" (Proc. ACM SIGMOD, pp. 109-116, June 1988). In this paper, levels 1 to 5 are assigned to disk array configurations to which redundancy has been added. In addition to these, a disk array without redundancy is sometimes referred to as level 0. Since these levels have different cost and performance characteristics owing to their redundancy schemes and the like, arrays of a plurality of levels (sets of disk devices) are often mixed when a disk array system is constructed. Here, such a set is called a parity group.

The cost of a disk device varies with its performance, capacity, and the like. To realize optimum cost performance when constructing a disk array system, a plurality of types of disk devices having different performance and capacity may therefore be used.

Since the data stored in a disk array system is distributed across the disk devices as described above, the disk array system associates the logical storage areas accessed by the connected host computer with the physical storage areas on the disk devices (address conversion). Japanese Patent Application Laid-Open No. 9-274544 discloses a disk array system that realizes an optimal arrangement of stored data, comprising means for acquiring information on I/O accesses from the host computer to logical storage areas, and means for changing the association of a logical storage area with a physical storage area and thereby performing physical relocation.

[0005]

Problems to be Solved by the Invention
The method of performing arrangement optimization in the prior art described above has the following problems.

In selecting the logical storage area to be relocated and the physical storage area to serve as its relocation destination, a user or maintenance person of the disk array system must check the configuration of the disk array system and information on the characteristics and performance of the individual disk devices before making the selection; this work is complicated.

[0007] Further, even when the disk array system makes the selection automatically, the user or maintenance person must check information on the individual disk devices and define selection reference values, so the work remains complicated. In particular, as described above, the complexity of this information management increases in a disk array system in which different levels and different types of disk devices are mixed.

Further, the I/O access information that the disk array system refers to for the selection does not take into account the schedule of the processing performed in the system comprising the host computer and the disk array system. Generally, the processing performed in a computer system and the I/O accompanying it follow a schedule created by the user; the tendencies of the processing and I/O show a periodicity such as daily, monthly, or yearly; and the user is generally interested in the processing and I/O of a specific period.

Further, the above-mentioned prior art has the following problem in its performance tuning method based on relocation. Performance tuning by physical relocation changes the usage status of the disk devices, that is, of the physical storage areas. The conventional technique, however, refers only to information on I/O accesses from the host computer to the logical storage areas; consequently, a correct selection may not be made when choosing the logical storage area to be relocated and the physical storage area to serve as the relocation destination.

Further, even when heavy sequential accesses and random accesses from the host computer are directed at different physical storage areas within the same disk device, the prior art provides no way to arbitrarily specify a relocation destination disk device and have the relocation performed automatically so that the sequential and random accesses are separated onto different disk devices. Generally, a host computer requires a short response time (high response performance) for random accesses with a small data length. If sequential accesses with a large data length are served by the same disk device, the response time of the random accesses grows because they are hindered by the sequential access processing, and response performance deteriorates.

A first object of the present invention is to simplify the work required for a user or maintenance person of a disk array system to optimize the arrangement by relocation.

A second object of the present invention is to make it possible to optimize the arrangement by relocation in consideration of the processing schedule of the system comprising the host computer and the disk array system.

[0013] A third object of the present invention is to provide a disk array system, and a control method therefor, in which the logical storage area to be relocated and the physical storage area of the relocation destination are selected based on the usage status of the disk devices that actually store the data.

[0014] A fourth object of the present invention is to cope with heavy sequential and random access within the same disk device of a disk array system, by making it possible to arbitrarily specify the relocation destination disk device and to automatically separate sequential and random accesses onto different disk devices by relocation.

[0015]

[Means for Solving the Problems] In order to achieve the first object, a disk array system connected to one or more host computers comprises: means for acquiring usage status information of its plurality of subordinate disk devices; means for associating a logical storage area read/written by the host computer with a first physical storage area of a disk device; means for grouping the plurality of disk devices into a plurality of sets (classes), each of which has attributes; means for determining a relocation destination class suitable for a logical storage area based on the usage status information and the class attributes; means for selecting, from within that class, a second physical storage area usable as the relocation destination of the logical storage area; and means for performing the relocation by copying the contents of the first physical storage area to the second physical storage area and changing the association of the logical storage area from the first physical storage area to the second physical storage area.
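As an illustration only (the claim recites means, not code), the relationship among the logical/physical mapping, the classes, and the relocation step can be sketched in Python. All names below are hypothetical and not part of the patent:

```python
# Illustrative sketch only; every name here is hypothetical.
# A logical area maps to a physical area (device, address); devices are
# grouped into classes, and relocation copies the data, then remaps.

class StorageSubsystem:
    def __init__(self):
        self.logical_to_physical = {}   # logical addr -> (device, addr)
        self.classes = {}               # class id -> set of device ids
        self.contents = {}              # (device, addr) -> stored data

    def relocate(self, logical, dest):
        """Copy the first physical area to the second, then remap."""
        src = self.logical_to_physical[logical]
        self.contents[dest] = self.contents[src]   # copy the contents
        self.logical_to_physical[logical] = dest   # change the association
        del self.contents[src]                     # source becomes unused

sub = StorageSubsystem()
sub.logical_to_physical["L0"] = ("dev0", 0)
sub.contents[("dev0", 0)] = b"data"
sub.relocate("L0", ("dev1", 8))
print(sub.logical_to_physical["L0"])  # ('dev1', 8)
```

The essential property is that the host-visible logical address never changes; only its association to a physical area moves.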

Further, in order to achieve the second object, the disk array system can comprise means for accumulating the usage status information and determining the relocation destination of a logical storage area based on the usage status information for a set period, and means for performing the relocation at a set time.

Further, in order to achieve the third object, the disk array system comprises means for using, as the usage status information, the usage time of each disk device per unit time (the usage rate).

Further, in order to achieve the fourth object, the disk array system comprises means for using, as attributes set for each class, a target access type (sequential or random) and a usage rate upper limit; for selecting the logical storage area to be relocated from a storage device exceeding the usage rate upper limit of its class; and for determining the relocation destination class of that logical storage area, based on an analysis of the access types to it, from among the classes of the suitable access type so that the usage rate upper limit of each class is not exceeded.

[0019]

Embodiments of the present invention will be described below with reference to the drawings.

<First Embodiment> This embodiment describes relocation determination based on classes 600, and the scheduling of relocation determination and execution.

FIG. 1 is a configuration diagram of a computer system according to the first embodiment of this invention.

The computer system according to the present embodiment comprises a host 100, a storage subsystem 200, and a control terminal 700.

The host 100 is connected to the storage subsystem 200 via an I/O bus 800 and performs read and write I/O against the storage subsystem 200. At the time of an I/O, the host 100 specifies a logical area within the storage space of the storage subsystem 200. Examples of the I/O bus 800 include ESCON, SCSI, and Fiber Channel.

The storage subsystem 200 has a control unit 300 and a plurality of storage devices 500. The control unit 300 performs read/write processing 310, usage status information acquisition processing 311, relocation determination processing 312, and relocation execution processing 313. The storage subsystem 200 holds logical/physical correspondence information 400, class configuration information 401, class attribute information 402, logical area usage status information 403, physical area usage status information 404, relocation determination target period information 405, relocation execution time information 406, unused area information 407, and relocation information 408.

The host 100, the control unit 300, and the control terminal 700 are connected by a network 900. Examples of the network 900 include FDDI and Fiber Channel.

The host 100, the control unit 300, and the control terminal 700 also contain components generally used in a computer, such as a memory and a CPU for performing each process; since these are not important to the description of this embodiment, they are omitted here.

The read/write processing 310 and the usage status information acquisition processing 311 performed when the host 100 reads from or writes to the storage subsystem 200 will be described with reference to FIG. 2.

In the read/write processing 310, the host 100 issues a read or write request to the control unit 300 of the storage subsystem 200, designating a logical area (step 1000). The control unit 300 that received the request determines the physical area corresponding to the logical area using the logical/physical correspondence information 400, that is, converts the address of the logical area (logical address) into the address of the physical area (physical address) (step 1010). Subsequently, in the case of a read, the control unit 300 reads the data from the storage device 500 at this physical address and transfers it to the host 100; in the case of a write, it stores the data transferred from the host 100 into the storage device 500 at the physical address (step 1020). It then performs the usage status information acquisition processing 311 described later. The read/write requests and data transfers are performed via the I/O bus 800.

An example of the logical/physical correspondence information 400 is shown in FIG. 3. The logical address is an address indicating a logical area used by the host 100 in the read/write processing 310. The physical address is an address indicating the area on a storage device 500 where the data is actually stored, and consists of a storage device number and an in-device address. The storage device number identifies an individual storage device 500, and the in-device address indicates a storage area within that storage device 500.
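In effect, the logical/physical correspondence information 400 is a lookup table from logical address to (storage device number, in-device address). A minimal sketch with hypothetical values:

```python
# Hypothetical logical/physical correspondence table (cf. information 400):
# logical address -> (storage device number, address in the storage device).
logical_physical = {
    0: (1, 0),      # logical address 0 lives on device 1 at offset 0
    1: (1, 1024),
    2: (2, 0),
}

def to_physical(logical_addr):
    """The address conversion performed in step 1010."""
    return logical_physical[logical_addr]

print(to_physical(2))  # (2, 0)
```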

Next, in the usage status information acquisition processing 311, the control unit 300 updates the logical area usage status information 403 for the logical area read/written in the read/write processing 310, and the physical area usage status information 404 for the physical area used in the read/write processing 310 (steps 1030 and 1040). The logical area usage status information 403 and the physical area usage status information 404 record the usage status of each logical area and physical area at each date and time, for example the usage frequency, usage rate, and read/write attributes. Specific examples of the logical area usage status information 403 and the physical area usage status information 404 are given in the embodiments below.

Next, the relocation determination processing 312 performed by the control unit 300 will be described with reference to FIG. 4.

The storage devices 500 are grouped into a plurality of sets (classes 600) by the user or as an initial state, and this grouping into classes 600 is recorded in the class configuration information 401. Further, each class 600 has attributes set by the user or as an initial condition, and these attributes are recorded in the class attribute information 402. The class attribute information 402 holds attributes such as the allowable usage status, the preferred usage status, and the inter-class priority. Specific examples of the class configuration information 401 and the class attribute information 402 are given in the embodiments below. In the relocation determination target period information 405, the period of the usage status information to be covered by the relocation determination processing 312 and the period update information are set by the user or as an initial condition.

FIG. 5 shows an example of the relocation determination target period information 405. The period from the start date and time to the end date and time is the target period. The period update information is the setting condition for the next target period, for example weekly, daily, or after X hours. The control unit 300 refers to the logical area usage status information 403 and the physical area usage status information 404 for the target period (step 1100), compares them with the allowable usage status of each class 600 in the class attribute information 402 (step 1110), and selects the logical areas to be physically relocated (step 1120).
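Steps 1100-1120 amount to filtering the usage records of the target period against each class's allowable usage status. A sketch under assumed record and attribute shapes (all values hypothetical):

```python
# Hypothetical sketch of steps 1100-1120: scan the usage records for the
# target period and pick logical areas whose usage exceeds the allowable
# usage status of their class.
from datetime import datetime

records = [  # (date_time, logical_area, class_id, usage_rate)
    (datetime(1999, 8, 2), "L0", 1, 0.85),
    (datetime(1999, 8, 2), "L1", 1, 0.10),
    (datetime(1999, 8, 9), "L2", 2, 0.40),   # outside the target period
]
allowable = {1: 0.60, 2: 0.60}               # per-class allowable usage
start, end = datetime(1999, 8, 1), datetime(1999, 8, 7)  # target period

to_relocate = [
    area for (ts, area, cls, rate) in records
    if start <= ts <= end and rate > allowable[cls]
]
print(to_relocate)  # ['L0']
```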

Further, the control unit 300 refers to the allowable usage status, the preferred usage status, and the inter-class priority in the class attribute information 402 (step 1130), selects the class 600 to which each logical area is to be relocated (step 1140), selects an unused physical area within the storage devices 500 belonging to that class 600 as the relocation destination of the logical area (step 1150), and outputs the selection result to the relocation information 408 (step 1160).

FIG. 6 shows an example of the relocation information 408. The logical area identifies the logical area to be relocated; the relocation source physical area is the storage device number and in-device address of the current physical area corresponding to the logical area; and the relocation destination physical area is the storage device number and in-device address of the destination physical area. As shown in FIG. 6, one or more relocation plans can be held. Further, the control unit 300 refers to the period update information of the relocation determination target period information 405 and updates the target period of the relocation determination target period information 405 to the next period (step 1170). In the above processing, the control unit 300 uses the logical/physical correspondence information 400 for the address conversion and the unused area information 407 to search for unused physical areas.

An example of the unused area information 407 is shown in FIG. 7. The storage device number identifies an individual storage device 500, and the in-device address indicates an area within that storage device 500; together they identify a physical area, and for each physical area the information records whether it is in use or unused. The control unit 300 normally performs the relocation determination processing 312 automatically after the target period has elapsed.

Next, the relocation execution processing 313 performed by the control unit 300 will be described with reference to FIG. 8.

In the relocation execution time information 406, the date and time at which the relocation execution processing 313 is to be performed and the date-and-time update information are set by the user or as an initial condition.

FIG. 9 shows an example of the relocation execution time information 406. The control unit 300 automatically executes the relocation execution processing 313 described below at the set date and time. The date-and-time update information is the setting condition for the next execution date and time, for example weekly, daily, or after X hours. The control unit 300 copies the contents stored in the relocation source physical area to the relocation destination physical area based on the relocation information 408 (step 1200). When the copy is complete and the contents of the relocation source physical area are fully reflected in the relocation destination physical area, the control unit 300 changes the physical area corresponding to the relocated logical area in the logical/physical correspondence information 400 from the relocation source physical area to the relocation destination physical area (step 1210).
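The schedule bookkeeping — execute at the set date and time, then advance it per the update rule (step 1230) — can be sketched as follows; the field names and rule set are hypothetical:

```python
# Hypothetical sketch of the relocation execution time information 406:
# run at the set date/time, then advance it per the update rule.
from datetime import datetime, timedelta

exec_info = {"when": datetime(1999, 9, 1, 2, 0), "update": "weekly"}

def advance(info):
    """Step 1230: set the next execution date/time from the update rule."""
    step = {"daily": timedelta(days=1), "weekly": timedelta(weeks=1)}
    info["when"] += step[info["update"]]

advance(exec_info)
print(exec_info["when"])  # 1999-09-08 02:00:00
```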

Further, the control unit 300 marks the relocation destination physical area as in use in the unused area information 407 and changes the relocation source physical area to unused (step 1220). Further, the control unit 300 refers to the date-and-time update information of the relocation execution time information 406 and updates the date and time of the relocation execution time information 406 to the next date and time (step 1230).

The user or maintenance person sets each item of information used by the control unit 300 in the above processing, either from the control terminal 700 via the network 900 or from the host 100 via the network 900 or the I/O bus 800. In particular, the relocation information 408 can be checked and edited to correct, add, or delete relocation plans.

By performing the above processing, the physical relocation of logical areas is carried out automatically within the storage subsystem 200 based on the acquired usage status information and the set class attributes, and the arrangement in the storage subsystem 200 can be optimized. Furthermore, by repeating the above relocation determination and execution processing to correct the arrangement, fluctuations in usage and other sources of optimization error can be absorbed.

In particular, the above processing allows the user or maintenance person to perform optimization by relocation easily. Because the user or maintenance person assigns the storage devices 500 to classes 600, there is no need to manage attributes such as performance, reliability, and characteristics for each individual storage device 500. Further, the user or maintenance person can, as necessary, define a class 600 with a single set of attributes for a group of storage devices 500 whose individual attributes are not equal, and treat that class as one management unit. It is also possible to perform the above relocation processing with one storage device 500 as the management unit, by regarding each storage device 500 as constituting its own class 600.

Further, the user or maintenance person can have the above relocation performed automatically in consideration of the characteristics and schedule of the processing (jobs) performed in the computer system. Generally, the processing performed in a computer system and the I/O accompanying it follow a schedule created by the user. When the user has a particular process to be optimized, the user can specify the period of that process; with the relocation processing described in this embodiment, the user can designate the period of interest and have the storage subsystem 200 perform the relocation determination processing, thereby realizing the above optimization based on the usage status information of that period. In addition, the processing performed in a computer system and its I/O tendencies often show a periodicity, such as daily, monthly, or yearly; when the processing derives from routine work, this periodicity is pronounced. As above, the user can perform optimization by relocation by designating a period of interest within such a cycle as the optimization target. Further, since the relocation execution processing 313 copies the stored contents within the storage subsystem 200, the user can set the execution time of the relocation execution processing 313 to a time when the storage subsystem 200 is lightly used, or to a period during which the processing running on the host 100 has low performance requirements, and can thereby prevent the I/O of performance-critical processing to the storage subsystem 200 from being hindered by the copy.

The storage devices 500 may differ in performance, reliability, characteristics, and other attributes; more specifically, they may be different storage media, such as magnetic disk devices, magnetic tape devices, and semiconductor memory (cache). In the above example, the unused area information 407 is described in terms of physical areas, but it may instead be described in terms of the logical areas (logical addresses) corresponding to the unused physical areas.

<Second Embodiment> This embodiment describes the use of the disk device usage rate as the usage status information, and relocation determination based on the usage rate upper limit of each class 600 and the performance ranking between classes 600.

FIG. 10 is a configuration diagram of a computer system according to the second embodiment of the present invention.

The computer system according to the present embodiment comprises a host 100, a disk array system 201, and a control terminal 700. It differs from the first embodiment in that the storage subsystem 200 corresponds to the disk array system 201 and the storage devices 500 correspond to parity groups 501.

The disk array system 201 has a control unit 300 and disk devices 502. The control unit 300 corresponds to the control unit 300 of the first embodiment. The disk devices 502, n at a time (n being an integer of 2 or more), constitute a RAID (disk array), and such a set of n disk devices 502 is called a parity group 501. As a property of RAID, the n disk devices 502 in one parity group 501 hold a redundancy relationship: redundant data generated from the stored contents of n-1 of the disk devices 502 is stored on the remaining one. Further, the n disk devices 502 hold a data placement relationship in which the stored contents, including the redundant data, are distributed across the n disk devices 502 to improve parallel operability. Because of these relationships, each parity group 501 can be regarded as one operational unit. However, since the cost and performance characteristics realized differ with the redundancy scheme, the number n, and so on, arrays (parity groups 501) of different levels and different n are often mixed when the disk array system 201 is configured; the cost of the disk devices 502 constituting a parity group 501 also differs with performance and capacity, so a plurality of types of disk devices 502 with different performance and capacity may be used to realize optimal cost performance. In this embodiment, therefore, it is assumed that the parity groups 501 constituting the disk array system 201 do not necessarily have the same attributes of performance, reliability, and characteristics, and that they differ particularly in performance.

FIG. 11 shows an example of the logical / physical correspondence information 400 according to the present embodiment.

The logical address is an address indicating a logical area used by the host 100 in the read/write processing 310. The physical address is an address indicating the area on the disk devices 502 where the data and its redundant data are actually stored, and consists of a parity group number, the disk device numbers, and in-device addresses. The parity group number identifies an individual parity group 501, the disk device number identifies an individual disk device 502, and the in-device address indicates an area within a disk device 502. The control unit 300 also processes redundant data in the read/write processing 310 and elsewhere as part of RAID operation, but since this embodiment treats each parity group 501 as one operational unit, that processing is not discussed here.

Further, as in the first embodiment, the parity groups 501 are grouped into a plurality of sets (classes 600) by the user or as an initial state, and this grouping into classes 600 is recorded in the class configuration information 401. An example of the class configuration information 401 is shown in FIG. 12.

The class number identifies each class 600. The number of parity groups indicates how many parity groups belong to each class 600. The parity group numbers identify the parity groups 501 belonging to each class 600. Similarly, the attributes of each class 600 are set in the class attribute information 402. FIG. 13 shows an example of the class attribute information 402 in the present embodiment.

The class number identifies each class 600. The usage rate upper limit value is an upper limit indicating the allowable range of the disk usage rate, described later, and is applied to the parity groups 501 belonging to the class 600. The inter-class performance order ranks the classes 600 by performance (a smaller number indicates higher performance) and is based on the performance differences of the parity groups 501 constituting each class 600, described above. The relocation execution upper limit value and the fixed value will be described later.

The usage status information acquisition processing 311 in the present embodiment will be described with reference to FIG. 14.

As in the first embodiment, the control unit 300 obtains the usage time of each disk device 502 used in the read/write processing 310 to derive its usage time per unit time (usage rate), averages this usage rate over the parity group 501 to which the disk devices belong (step 1300), and records the average in the logical area usage status information 403 as the disk device usage rate of the logical area that was read/written (step 1310). The control unit 300 also calculates the sum of the disk device usage rates of all logical areas corresponding to a parity group 501 (step 1320) and records it in the physical area usage status information 404 as the usage rate of that parity group 501 (step 1330).

FIGS. 15 and 16 show examples of the logical area usage status information 403 and the physical area usage status information 404 in the present embodiment.

The date and time indicate each sampling interval (fixed period); the logical address identifies the logical area; the parity group number identifies the individual parity group; and the disk device usage rate of the logical area and the parity group usage rate each show the average usage rate over the sampling interval. The usage rate of a disk device 502, as described above, indicates the load applied to that disk device 502; if the usage rate is high, the disk device 502 may become a performance bottleneck, so the performance of the disk array system 201 can be improved by reducing the usage rate.

Next, the relocation determination processing 312 will be described with reference to FIG. 17.

For each class 600, the control unit 300 obtains from the class configuration information 401 the parity groups 501 belonging to the class (step 1300). Subsequently, the control unit 300 obtains the target period by referring to relocation determination target period information 405 similar to that of the first embodiment, and further obtains and totals the parity group usage rate of each parity group 501 from the physical area usage status information 404 for the target period (step 1320). Subsequently, the control unit 300 refers to the class attribute information 402 and obtains the usage rate upper limit of the class 600 (step 1330). The control unit 300 compares the parity group usage rate with the class upper limit; if the parity group usage rate is greater than the class upper limit, it determines that logical areas corresponding to that parity group 501 must be relocated in order to reduce the usage rate of the parity group 501 (step 1340).

Subsequently, the control unit 300 refers to the logical area usage information 403 for the target period, acquires and totals the disk device usage rate of the logical area corresponding to each physical area of the parity group 501 determined to require relocation (step 1350), and selects logical areas to be relocated starting from those with the highest disk device usage rates (step 1360). Logical areas are selected, while subtracting the disk device usage rate of each selected logical area from the usage rate of the parity group 501, until the usage rate becomes equal to or less than the usage rate upper limit of the class 600 (step 1370). Since a logical area with a high disk device usage rate can be considered to contribute strongly to the usage rate of the parity group 501 and to be accessed frequently from the host 100, relocating logical areas with high disk device usage rates first promises an effective performance improvement of the disk array system 201.
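The selection loop of steps 1340 to 1370 can be sketched as follows; the data shape (a mapping from logical area to disk device usage rate) and the names are illustrative assumptions:

```python
def select_relocation_areas(group_usage, upper_limit, area_rates):
    """Pick logical areas, busiest first, until the projected parity group
    usage drops to the class upper limit (cf. steps 1340-1370)."""
    selected = []
    remaining = group_usage
    # Sort candidate areas by disk device usage rate, highest first (step 1360).
    for area, rate in sorted(area_rates.items(), key=lambda kv: -kv[1]):
        if remaining <= upper_limit:
            break
        selected.append(area)
        remaining -= rate  # step 1370: subtract the load being moved out
    return selected, remaining

# Group at 90% utilization, class limit 50%: the two busiest areas are picked.
chosen, projected = select_relocation_areas(0.9, 0.5, {"a": 0.3, "b": 0.2, "c": 0.1})
```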

The control unit 300 then searches for a physical area to which the selected logical area is to be relocated. The control unit 300 refers to the class attribute information 402, focuses on a class 600 with a higher performance rank than the class 600 to which the parity group 501 belongs (the high-performance class), and acquires, from the class configuration information 401 and unused area information 407 similar to that of the first embodiment, an unused physical area of a parity group 501 belonging to the high-performance class (step 1380).

Further, the control unit 300 obtains a predicted value of the parity group usage rate for the case where each unused physical area is used as the relocation destination (step 1390). An unused physical area for which the predicted value does not exceed the upper limit set for the high-performance class is selected as the relocation destination physical area (step 1400), and the selection result is output to the relocation information 408 as in the first embodiment (step 1410). When a relocation destination physical area has been selected for every selected logical area, the processing is terminated (step 1420).

In the present embodiment, the control unit 300 holds parity group information 409 in addition to the information of the first embodiment, and the predicted usage rate value is calculated from the parity group information 409, the logical area usage information 403, and the physical area usage status information 404.

An example of the parity group information 409 is shown in FIG. 18. The parity group number identifies each parity group 501. The RAID configuration indicates the RAID level, the number of disks, and the redundancy configuration of the parity group 501. The disk device performance indicates the performance characteristics of the disk devices 502 constituting the parity group 501. The fixed attribute will be described later. In the above processing, by relocating a logical area with a high disk device usage rate to a parity group 501 of the high-performance class, the disk device usage time for the same load can be reduced, so the disk device usage rate after the relocation of the logical area can be suppressed.

The relocation execution processing 313 is performed in the same manner as in the first embodiment, but as shown in FIG. 19, the control unit 300 refers to the class attribute information 402 before copying for relocation and, for the relocation source and relocation destination classes 600, obtains a relocation execution upper limit value set by the user or as an initial condition (step 1500). Further, referring to the physical area usage status information 404, the control unit 300 obtains the latest parity group usage rate of the relocation source and relocation destination parity groups 501 (step 1510); if, as a result of the comparison, the parity group usage rate exceeds the relocation execution upper limit value in at least one of the classes 600 (steps 1520 and 1530), the relocation execution processing 313 is stopped or postponed (step 1540).

By the above-described processing, the user can avoid adding the further load caused by the copy when the usage rate of a parity group 501 is high, that is, when the load is high, and the upper limit value used for this avoidance can be set arbitrarily for each class 600.
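The gating of steps 1500 to 1540 reduces to a pair of comparisons; a sketch under the assumption that usage rates and limits are fractions:

```python
def may_execute_relocation(src_rate, dst_rate, src_limit, dst_limit):
    """True when neither the relocation source nor the relocation destination
    parity group exceeds its class's relocation execution upper limit
    (cf. steps 1500-1540); otherwise the copy is stopped or postponed."""
    return src_rate <= src_limit and dst_rate <= dst_limit

# Source at 40%, destination at 30%, both classes limited to 70%: copy proceeds.
ok = may_execute_relocation(0.4, 0.3, 0.7, 0.7)
```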

By performing the above-described processing, the selection of the logical area to be physically relocated based on the usage status of the disk devices 502, and the selection of the relocation destination physical area, can be performed based on the class configuration and attributes. The relocation distributes the load of the disk devices 502, and an arrangement can be realized in which the usage rate of the parity groups 501 belonging to each class 600 does not exceed the usage rate upper limit value set for that class 600. Further, by repeating the relocation determination and execution processing to correct the arrangement, fluctuations in the usage status and prediction errors can be absorbed.

In the relocation determination processing 312, the control unit 300 refers to the parity group usage rate in the physical area usage status information 404 and the disk device usage rate of each logical area in the logical area usage information 403 over the target period, and totals them for use in the determination. Instead of using the average of all values in the target period, a method using the average of the top m values in the target period, or a method using the m-th largest value, is also conceivable (m being an integer of 1 or more). By allowing the user to select one of these methods, the relocation determination processing 312 can be performed using only a characteristic part of the usage status.

In the above-described relocation determination processing 312, the control unit 300 detects parity groups 501 requiring relocation of a logical area across all the classes 600 of the disk array system 201. However, the control unit 300 may refer to the class attribute information 402 and exclude from the detection target any class 600 for which the fixed attribute is set. Similarly, the control unit 300 may refer to the parity group information 409 and exclude from the detection target any parity group 501 for which the fixed attribute is set. Also, in the relocation determination processing 312, the control unit 300 selects the relocation destination physical area from the unused physical areas of parity groups 501 belonging to the high-performance class; here, a class 600 for which the fixed attribute is set may be excluded when treating a class 600 of higher performance rank as the high-performance class, and a parity group 501 for which the fixed attribute is set may likewise be excluded from the target. By handling classes 600 and parity groups 501 with the fixed attribute in this way, the user can exclude from automatic relocation any class 600 or parity group 501 that should not be affected by physical relocation.

<Third Embodiment> In the present embodiment, relocation determination within the same class 600 will be described.
The computer system in the present embodiment is the same as in the second embodiment; here, however, a plurality of parity groups 501 belong to one class 600. The processing in this embodiment is the same as in the second embodiment except for the relocation determination processing 312, and even within that processing the selection of the logical area to be relocated (step 1600) is the same as in the second embodiment.

The selection of a physical area as a relocation destination in the relocation determination processing 312 in this embodiment will be described with reference to FIG. 20.

In the second embodiment, the relocation destination physical area is selected from a class 600 of higher performance rank than the class 600 to which the relocation source physical area belongs; in the present embodiment, a parity group 501 other than the relocation source within the same class 600 is selected. The control unit 300 refers to the class configuration information 401 and the unused area information 407 and acquires an unused physical area of a parity group 501, other than the relocation source, belonging to the same class 600 (step 1610).
For each unused physical area, the control unit 300 calculates a predicted value of the parity group usage rate for the case where that area is used as the relocation destination (step 1620). An unused physical area whose predicted value does not exceed the upper limit set for the class 600 is selected as the relocation destination physical area (step 1630), and the selection result is output to the relocation information 408 as in the second embodiment (step 1640). When a relocation destination physical area has been selected for all the logical areas to be relocated, the process ends (step 1650).

By the above processing, the load of the disk devices 502 can be distributed within the same class 600. This processing method can be applied, for example, to a configuration in which all the parity groups 501 of the disk array system 201 belong to one class 600 (a single class). It can also be combined with the processing method described in the second embodiment: in selecting an unused physical area as the relocation destination, the present method can be applied when no suitable unused physical area is found in a class 600 of higher performance rank than the relocation source class 600, or for processing within the class 600 of the highest performance rank. When combined with the processing method of the second embodiment, the two methods may use different usage rate upper limit values for each class 600; that is, the class attribute information 402 may hold two types of usage rate upper limit values, or their difference, for each class 600 for that purpose.

<Fourth Embodiment> In the present embodiment, a process will be described in which, when no unused physical area for the relocation destination is found in a class 600 whose performance rank is higher than the relocation source class 600 (the high-performance class) in the relocation determination processing 312 of the second embodiment, relocation from the high-performance class to a class 600 of lower performance rank (the low-performance class) is performed prior to obtaining the relocation destination.

The computer system according to the present embodiment is the same as in the second embodiment. The relocation determination processing 312 according to the present embodiment will be described with reference to FIG. 21.

The control unit 300 acquires the parity groups 501 belonging to the high-performance class from the class configuration information 401 (step 1700). Subsequently, the control unit 300 refers to relocation determination target period information 405 similar to that of the first embodiment to obtain the target period (step 1710), refers to the logical area usage information 403 for the target period to obtain the disk device usage rate of the logical area corresponding to each physical area of those parity groups 501 (step 1720), and selects logical areas to be relocated to the low-performance class starting from those with the lowest disk device usage rates (step 1730). At this time, as many logical areas as necessary are selected (step 1740).
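Steps 1720 to 1740 pick the quietest logical areas for demotion to the low-performance class; a sketch in which the `needed` count and the data shape are assumptions:

```python
def select_demotion_areas(area_rates, needed):
    """Free space in the high-performance class by picking the *least* busy
    logical areas for relocation to the low-performance class
    (cf. steps 1720-1740). `needed` is how many areas must be vacated."""
    quiet_first = sorted(area_rates.items(), key=lambda kv: kv[1])
    return [area for area, _ in quiet_first[:needed]]

# Vacating two areas: the 5%- and 10%-load areas go, the 30%-load area stays.
to_demote = select_demotion_areas({"a": 0.3, "b": 0.05, "c": 0.1}, 2)
```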

Subsequently, the control unit 300 selects a physical area to be the relocation destination for each selected logical area from the parity groups 501 belonging to the low-performance class. This processing is the same as in the second embodiment, with the high-performance class of the relocation destination in that description read as the low-performance class (step 1750). The other processes in the present embodiment are also the same as in the second embodiment.

By performing the above processing, when no unused physical area for the relocation destination is found in the high-performance class in the relocation determination processing 312 of the second embodiment, relocation of logical areas from the high-performance class to the low-performance class is performed prior to relocation into the high-performance class, so that unused physical areas to serve as relocation destinations can be prepared in the high-performance class. The control unit 300 can repeat the above processing as needed to prepare a sufficient number of unused physical areas.

Since logical areas are relocated to parity groups 501 of the low-performance class, the disk device usage time for the same load increases with this relocation, and the disk device usage rate after the relocation may rise; however, by relocating logical areas with low disk device usage rates first, the effect of this increase can be minimized.

<Fifth Embodiment> In the present embodiment, an access type attribute is provided as one of the attributes of the class 600, and a relocation determination will be described that uses the access type attribute to automatically relocate, into separate parity groups 501 by physical relocation, logical areas accessed predominantly sequentially and logical areas accessed predominantly randomly.

The computer system according to the present embodiment is as shown in FIG. 10. In the present embodiment, the following information held by the control unit 300 is used in addition to that described in the second embodiment.

FIG. 22 shows an example of the class attribute information 402 in the present embodiment. In this example, an access type is added to the example of the second embodiment; if the access type of a class 600 is set to, for example, sequential, this indicates that the class 600 is configured to be suitable for sequential access.

FIG. 23 shows an example of the logical area usage information 403 in this embodiment. In this example, a sequential access rate and a random access rate are added to the example of the second embodiment.

Further, in this embodiment, the control unit 300 holds access type reference value information 410 and logical area attribute information 411 in addition to the information of the second embodiment.

FIG. 24 shows an example of the access type reference value information 410, in which a reference value used for the access type determination described later is set by the user or as an initial condition. FIG. 25 shows an example of the logical area attribute information 411. The access type hint is the access type that can be expected to predominate for each logical area, and is set by the user. The fixed attribute will be described later.

The processing in this embodiment is the same as that in the second embodiment, except for the usage information acquisition processing 311 and the relocation determination processing 312.

The usage status information acquisition processing 311 according to the present embodiment will be described with reference to FIG. 26.

The control unit 300 calculates the disk device usage rate for each logical area, as in the usage status information acquisition processing 311 of the second embodiment (steps 1800, 1810), analyzes the content of that usage within the read/write processing 310 to calculate the ratio of sequential access to random access (step 1820), and records the usage rate and the access type ratio in the logical area usage information 403 (step 1830). The control unit 300 also calculates the parity group usage rate and records it in the physical area usage status information 404 as in the second embodiment (steps 1840 and 1850).
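Step 1820's split of the observed I/O into sequential and random fractions might look like this; treating a request as sequential when it begins where the previous one ended is a simplifying assumption, as are the (start_block, length) tuples:

```python
def access_type_ratio(requests):
    """Split observed I/O into sequential vs. random fractions (cf. step 1820).
    A request counts as sequential here when it starts at the block where the
    previous request ended; a simplistic stand-in for the controller's real
    analysis. `requests` is a list of (start_block, length) tuples."""
    if len(requests) < 2:
        return 0.0, 1.0  # too little data: treat as fully random
    sequential = 0
    for prev, cur in zip(requests, requests[1:]):
        if cur[0] == prev[0] + prev[1]:
            sequential += 1
    total = len(requests) - 1
    return sequential / total, 1 - sequential / total

# One sequential continuation, one random jump: a 50/50 split.
seq_rate, rnd_rate = access_type_ratio([(0, 8), (8, 8), (100, 8)])
```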

In the relocation determination processing 312 of the present embodiment, the selection of the logical area to be relocated is the same as in the second embodiment (step 1900). The selection of the relocation destination physical area in the relocation determination processing 312 will be described with reference to FIG. 27.

The control unit 300 obtains, from the logical area usage information 403, the sequential access rate of the logical area to be relocated (step 1910) and compares it with the reference value set in the access type reference value information 410 (step 1920). If the sequential access rate is larger than the reference value, the control unit 300 refers to the class attribute information 402 and checks whether there is a class 600 whose access type is set to sequential (a sequential class) (step 1950). If a sequential class exists, the control unit 300 refers to the class configuration information 401 and the unused area information 407 to acquire an unused physical area of a parity group 501, other than the relocation source, belonging to the sequential class (step 1960). Further, the control unit 300 obtains a predicted value of the parity group usage rate for the case where each unused physical area is used as the relocation destination (step 1970), selects as the relocation destination an unused physical area whose predicted value does not exceed the upper limit set for the sequential class (step 1980), and outputs the selection result to the relocation information 408 as in the second embodiment (step 1990). The control unit 300 calculates the predicted usage rate value from parity group information 409 similar to that of the second embodiment, and from the logical area usage information 403 and physical area usage status information 404 of the present embodiment.

If, in the above comparison, the sequential access rate is equal to or less than the reference value, the control unit 300 refers to the logical area attribute information 411 and checks whether the access type hint of the logical area is set to sequential (step 1940). If the access type hint is set to sequential, the control unit 300 checks whether a sequential class exists (step 1950); if one exists, a relocation destination physical area is selected from the sequential class (steps 1960 to 1990).

If the sequential access rate is equal to or less than the reference value and the access type hint is not sequential, or if no sequential class exists, the control unit 300 selects a relocation destination physical area from a class 600 other than the sequential class, as in the second embodiment (step 2000).
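The decision order of steps 1910 to 2000 (measured sequential rate first, then the user-set hint, then the fallback) can be condensed as follows; the dictionary of classes keyed by access type is an illustrative assumption:

```python
def destination_class(seq_rate, reference, hint, classes):
    """Pick the destination class: prefer the sequential class when the
    measured sequential access rate exceeds the reference value (step 1920)
    or the access type hint says sequential (step 1940), provided such a
    class exists (step 1950); otherwise fall back to a non-sequential
    class (step 2000). `classes` maps access type -> class name."""
    wants_sequential = seq_rate > reference or hint == "sequential"
    if wants_sequential and "sequential" in classes:
        return classes["sequential"]
    return classes.get("other")

classes = {"sequential": "class-S", "other": "class-O"}
dest = destination_class(0.8, 0.5, None, classes)  # measured rate wins
```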

By the above-described processing, for a parity group 501 in which sequential access and random access are conspicuously mixed, logical areas accessed predominantly sequentially and logical areas accessed predominantly randomly can be automatically relocated into different parity groups 501, that is, separated onto different disk devices 502, using the access type and usage rate upper limit value set as attributes of each class 600. In particular, the response performance for random access can be improved.

In the above processing, the control unit 300 performs automatic separation by relocation focusing on sequential access, but the separation can also be performed focusing on random access.

In the above-described relocation determination processing 312, when selecting the logical area to be relocated, the control unit 300 refers to the logical area attribute information 411 and does not relocate a logical area for which the fixed attribute is specified. If there is a logical area that the user particularly does not want relocated, it can thus be excluded from relocation by setting the fixed attribute. This handling of the fixed attribute can also be applied to the above-described embodiments by using the logical area attribute information 411.

[0097]

According to the present invention, the work required for a user or maintenance person of a storage subsystem to optimize the layout by physically relocating storage areas can be simplified.

[Brief description of the drawings]

FIG. 1 is a configuration diagram of a computer system according to a first embodiment of this invention.

FIG. 2 is a flowchart of a read / write process 310 and a use status information acquisition process 311 according to the first embodiment of the present invention.

FIG. 3 is a diagram showing an example of logical / physical correspondence information 400 according to the first embodiment of the present invention.

FIG. 4 is a flowchart of a rearrangement determination process 312 according to the first embodiment of this invention.

FIG. 5 is a diagram illustrating an example of relocation determination target period information 405 according to the first embodiment of this invention.

FIG. 6 is a diagram illustrating an example of relocation information 408 according to the first embodiment of the present invention.

FIG. 7 is a diagram illustrating an example of unused area information 407 according to the first embodiment of the present invention.

FIG. 8 is a flowchart of a relocation execution process 313 according to the first embodiment of this invention.

FIG. 9 is a diagram illustrating an example of relocation execution time information 406 according to the first embodiment of this invention.

FIG. 10 is a configuration diagram of a computer system according to a second embodiment and a fifth embodiment of the present invention.

FIG. 11 is a diagram showing an example of logical / physical correspondence information 400 according to the second embodiment of the present invention.

FIG. 12 is a diagram illustrating an example of class configuration information 401 according to the second embodiment of the present invention.

FIG. 13 is a diagram illustrating an example of class attribute information 402 according to the second embodiment of the present invention.

FIG. 14 is a flowchart of a usage status information acquisition process 311 according to the second embodiment of this invention.

FIG. 15 is a diagram illustrating an example of logical area usage information 403 according to the second embodiment of this invention.

FIG. 16 is a diagram illustrating an example of physical area usage status information 404 according to the second embodiment of this invention.

FIG. 17 is a flowchart of a relocation determination process 312 according to the second embodiment of this invention.

FIG. 18 is a diagram illustrating an example of parity group information 409 according to the second embodiment of the present invention.

FIG. 19 is a flowchart of a relocation execution process 313 according to the second embodiment of this invention.

FIG. 20 is a flowchart of a rearrangement determination process 312 according to the third embodiment of this invention.

FIG. 21 is a flowchart of a rearrangement determination process 312 according to the fourth embodiment of the present invention.

FIG. 22 is a diagram illustrating an example of class attribute information 402 according to the fifth embodiment of the present invention.

FIG. 23 is a diagram illustrating an example of the logical area usage information 403 according to the fifth embodiment of the present invention.

FIG. 24 is a diagram illustrating an example of access type reference value information 410 according to the fifth embodiment of the present invention.

FIG. 25 is a diagram illustrating an example of logical area attribute information 411 according to the fifth embodiment of the present invention.

FIG. 26 is a flowchart of a usage status information acquisition process 311 according to the fifth embodiment of the present invention.

FIG. 27 is a flowchart of a relocation determination process 312 according to the fifth embodiment of the present invention.

[Explanation of symbols]

 REFERENCE SIGNS LIST 100 host 200 storage subsystem 201 disk array system 300 control unit 310 read / write process 311 use status information acquisition process 312 relocation determination process 313 relocation execution process 400 logical / physical correspondence information 401 class configuration information 402 class attribute information 403 logic Area usage information 404 Physical area usage information 405 Relocation determination period information 406 Relocation execution time information 407 Unused area information 408 Relocation information 409 Parity group information 410 Access type reference value information 411 Logical area attribute information 500 Storage device 501 Parity group 502 Disk device 600 Class 700 Control terminal 800 I / O bus 900 Network

(Continuation of the front page) (72) Inventor: Kenji Yamagami, System Development Laboratory, Hitachi, Ltd., 1099 Ozenji, Aso-ku, Kawasaki-shi, Kanagawa. F-terms (reference): 5B065 BA01 CA30 CC01 CC03 EK01 5B082 CA11

Claims (10)

    [Claims]
1. A method of controlling a storage subsystem connected to one or more computers, the storage subsystem comprising a plurality of storage devices, a unit for obtaining usage status information of the storage devices, and means for associating a logical storage area read and written by the computers with a first physical storage area of the storage devices, wherein the storage devices are classified into a plurality of sets (classes) each having set attributes, and wherein the storage subsystem determines a relocation destination class suitable for the logical storage area based on the usage status information and the class attributes, selects from within that class a second physical storage area usable as the relocation destination of the logical storage area, copies the contents of the first physical storage area to the second physical storage area, and changes the association of the logical storage area from the first physical storage area to the second physical storage area, thereby relocating the storage area.
2. The storage subsystem control method according to claim 1, wherein the storage subsystem accumulates the usage status information, determines the relocation destination of a logical storage area based on the usage status information for a set period, and performs the relocation at a set time.
3. The storage subsystem control method according to claim 1, wherein the storage subsystem uses the storage device usage time per unit time (the usage rate) as the usage status information, each class has a performance rank among classes and a usage rate upper limit value set as attributes, and the storage subsystem selects the logical storage area to be relocated from a storage device exceeding the usage rate upper limit value of its class and determines the relocation destination class of the logical storage area, from among classes of higher performance rank, so as not to exceed the usage rate upper limit value of each class.
4. The storage subsystem control method according to claim 1, wherein the storage subsystem uses the storage device usage time per unit time (the usage rate) as the usage status information, each class has a performance rank among classes and a usage rate upper limit value set as attributes, and the storage subsystem selects the logical storage area to be relocated from a storage device exceeding the usage rate upper limit value of its class and determines a physical storage area usable as the relocation destination of the logical storage area from a storage device in the same class so as not to exceed the usage rate upper limit value of the class.
5. The storage subsystem control method according to claim 1, wherein the storage subsystem uses the storage device usage time per unit time (the usage rate) as the usage status information, each class has a target access type and a usage rate upper limit value set as attributes, and the storage subsystem selects the logical storage area to be relocated from a storage device exceeding the usage rate upper limit value of its class and determines the relocation destination class of the logical storage area, from among classes of the target access type based on an analysis result of the access type to the logical storage area, so as not to exceed the usage rate upper limit value of each class.
6. A storage subsystem connected to one or more computers, comprising a plurality of storage devices, means for acquiring usage status information of the storage devices, and means for associating a logical storage area read and written by the computers with a first physical storage area of the storage devices, the storage subsystem further comprising: means for managing the plurality of storage devices as a plurality of sets (classes) each having attributes; means for determining a relocation destination class suitable for the logical storage area based on the usage status information and the class attributes; means for selecting, from within that class, a second physical storage area usable as the relocation destination of the logical storage area; and means for copying the contents of the first physical storage area to the second physical storage area and changing the association of the logical storage area from the first physical storage area to the second physical storage area, thereby relocating the storage area.
7. The storage subsystem according to claim 6, comprising means for accumulating the usage status information and automatically determining the relocation destination of a logical storage area based on the usage status information for a set period, and means for performing the relocation at a set time.
8. The storage subsystem according to claim 6, comprising means for using the storage device usage time per unit time (the usage rate) as the usage status information, means for selecting the logical storage area to be relocated from a storage device exceeding the usage rate upper limit value set as an attribute for each class, and means for determining the relocation destination class of the logical storage area, from the performance ranks among classes set as class attributes, so as not to exceed the usage rate upper limit value of each class.
9. The storage subsystem according to claim 6, comprising means for using the storage device usage time per unit time (the usage rate) as the usage status information, means for selecting the logical storage area to be relocated from a storage device exceeding the usage rate upper limit value of its class set as an attribute, means for analyzing the access type to the logical storage area, and means for determining the relocation destination class of the logical storage area, from among classes having an access type set as an attribute, based on the analysis result so as not to exceed the usage rate upper limit value of each class.
10. The storage subsystem according to claim 6, 7, 8, or 9, wherein the storage subsystem is a disk array having a plurality of disk devices and comprises means for using the usage rate of the disk devices as the usage status information.
JP24271399A 1999-08-30 1999-08-30 Storage subsystem and control method thereof Expired - Fee Related JP3541744B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP24271399A JP3541744B2 (en) 1999-08-30 1999-08-30 Storage subsystem and control method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP24271399A JP3541744B2 (en) 1999-08-30 1999-08-30 Storage subsystem and control method thereof

Publications (2)

Publication Number Publication Date
JP2001067187A true JP2001067187A (en) 2001-03-16
JP3541744B2 JP3541744B2 (en) 2004-07-14

Family

ID=17093144

Family Applications (1)

Application Number Title Priority Date Filing Date
JP24271399A Expired - Fee Related JP3541744B2 (en) 1999-08-30 1999-08-30 Storage subsystem and control method thereof

Country Status (1)

Country Link
JP (1) JP3541744B2 (en)

Cited By (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004246749A (en) * 2003-02-17 2004-09-02 Hitachi Ltd Storage device system
US6895483B2 (en) 2002-05-27 2005-05-17 Hitachi, Ltd. Method and apparatus for data relocation between storage subsystems
JP2005196625A (en) * 2004-01-09 2005-07-21 Hitachi Ltd Information processing system and management device
US6928450B2 (en) 2001-11-12 2005-08-09 Hitachi, Ltd. Storage apparatus acquiring static information related to database management system
JP2006012156A (en) * 2004-06-29 2006-01-12 Hitachi Ltd Method for controlling storage policy according to volume activity
JP2006099763A (en) * 2004-09-29 2006-04-13 Hitachi Ltd Method for managing volume group considering storage tier
US20060101203A1 (en) * 2004-11-09 2006-05-11 Fujitsu Computer Technologies Limited Storage virtualization apparatus
US7047360B2 (en) 2002-12-20 2006-05-16 Hitachi, Ltd. Method and apparatus for adjusting performance of logical volume copy destination
US7054893B2 (en) 2001-11-12 2006-05-30 Hitachi, Ltd. Method and apparatus for relocating data related to database management system
US7089347B2 (en) 2003-03-31 2006-08-08 Hitachi, Ltd. Computer system for managing performances of storage apparatus and performance management method of the computer system
US7096338B2 (en) 2004-08-30 2006-08-22 Hitachi, Ltd. Storage system and data relocation control device
JP2006309318A (en) * 2005-04-26 2006-11-09 Hitachi Ltd Storage management system, storage management server, data rearrangement control method, and data rearrangement control program
JP2007048323A (en) * 2002-11-25 2007-02-22 Hitachi Ltd Virtualization controller and data migration control method
US7213105B2 (en) 2001-10-15 2007-05-01 Hitachi, Ltd. Volume management method and apparatus
US7254620B2 (en) 2002-06-03 2007-08-07 Hitachi, Ltd. Storage system
US7308481B2 (en) 2003-01-20 2007-12-11 Hitachi, Ltd. Network storage system
JP2008047156A (en) * 2004-08-30 2008-02-28 Hitachi Ltd Storage system, and data rearrangement controller
JP2008084254A (en) * 2006-09-29 2008-04-10 Hitachi Ltd Data migration method and information processing system
US7360051B2 (en) 2004-09-10 2008-04-15 Hitachi, Ltd. Storage apparatus and method for relocating volumes thereof
JP2008112276A (en) * 2006-10-30 2008-05-15 Hitachi Ltd Relocation system and relocation method
US7395396B2 (en) 2004-08-30 2008-07-01 Hitachi, Ltd. Storage system and data relocation control device
US7426619B2 (en) 2005-04-22 2008-09-16 Hitachi, Ltd. Inter-volume migration system, inter-volume relocation method, and program therefor
US7434017B2 (en) 2006-04-03 2008-10-07 Hitachi, Ltd. Storage system with virtual allocation and virtual relocation of volumes
EP2026187A2 (en) 2007-08-08 2009-02-18 Hitachi, Ltd. Storage system and access count equalization method therefor
US7533230B2 (en) 2004-10-13 2009-05-12 Hewlett-Packard Development Company, L.P. Transparent migration of files among various types of storage volumes based on file access properties
US7536505B2 (en) 2004-03-29 2009-05-19 Kabushiki Kaisha Toshiba Storage system and method for controlling block rearrangement
JP2009545245A (en) * 2006-07-21 2009-12-17 Qualcomm Incorporated Efficient assignment of priority values to new and existing QOS filters
US7694104B2 (en) 2002-11-25 2010-04-06 Hitachi, Ltd. Virtualization controller and data transfer control method
US7761677B2 (en) 2002-04-02 2010-07-20 Hitachi, Ltd. Clustered storage system and its control method
US7774572B2 (en) 2003-07-14 2010-08-10 Fujitsu Limited Migrating data in a distributed storage system based on storage capacity utilization
JP2010198056A (en) * 2009-02-23 2010-09-09 Fujitsu Ltd Allocation control program and allocation control device
JP2011054180A (en) * 2003-08-14 2011-03-17 Compellent Technologies Virtual disk drive system and method
US8122214B2 (en) 2004-08-30 2012-02-21 Hitachi, Ltd. System managing a plurality of virtual volumes and a virtual volume management method for the system
US8156561B2 (en) 2003-11-26 2012-04-10 Hitachi, Ltd. Method and apparatus for setting access restriction information
WO2012066671A1 (en) * 2010-11-18 2012-05-24 Hitachi, Ltd. Management device for computing system and method of management
JP2012108931A (en) * 2012-01-16 2012-06-07 Hitachi Ltd Data migration method and information processing system
JP2013171305A (en) * 2012-02-17 2013-09-02 Fujitsu Ltd Storage device, storage system, storage management method and storage management program
WO2014009999A1 (en) 2012-07-11 2014-01-16 Hitachi, Ltd. Database system and database management method
JP2014010709A (en) * 2012-06-29 2014-01-20 Fujitsu Ltd Storage control device, program thereof and method thereof
JP2016119020A (en) * 2014-12-24 2016-06-30 富士通株式会社 Storage apparatus, control method of storage apparatus and storage apparatus control program
US9489150B2 (en) 2003-08-14 2016-11-08 Dell International L.L.C. System and method for transferring data between different raid data storage types for current data and replay data

Cited By (74)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7624232B2 (en) 2001-10-15 2009-11-24 Hitachi, Ltd. Volume management method and apparatus
US7213105B2 (en) 2001-10-15 2007-05-01 Hitachi, Ltd. Volume management method and apparatus
US7054893B2 (en) 2001-11-12 2006-05-30 Hitachi, Ltd. Method and apparatus for relocating data related to database management system
US6928450B2 (en) 2001-11-12 2005-08-09 Hitachi, Ltd. Storage apparatus acquiring static information related to database management system
US7761677B2 (en) 2002-04-02 2010-07-20 Hitachi, Ltd. Clustered storage system and its control method
US7007147B2 (en) 2002-05-27 2006-02-28 Hitachi, Ltd. Method and apparatus for data relocation between storage subsystems
US6895483B2 (en) 2002-05-27 2005-05-17 Hitachi, Ltd. Method and apparatus for data relocation between storage subsystems
US7162603B2 (en) 2002-05-27 2007-01-09 Hitachi, Ltd. Method and apparatus for data relocation between storage subsystems
US7337292B2 (en) 2002-05-27 2008-02-26 Hitachi, Ltd. Method and apparatus for data relocation between storage subsystems
US7254620B2 (en) 2002-06-03 2007-08-07 Hitachi, Ltd. Storage system
US8560631B2 (en) 2002-06-03 2013-10-15 Hitachi, Ltd. Storage system
US8572352B2 (en) 2002-11-25 2013-10-29 Hitachi, Ltd. Virtualization controller and data transfer control method
US7877568B2 (en) 2002-11-25 2011-01-25 Hitachi, Ltd. Virtualization controller and data transfer control method
JP4509089B2 (en) * 2002-11-25 2010-07-21 Hitachi, Ltd. Virtualization control device and data migration control method
JP2007048323A (en) * 2002-11-25 2007-02-22 Hitachi Ltd Virtualization controller and data migration control method
US8190852B2 (en) 2002-11-25 2012-05-29 Hitachi, Ltd. Virtualization controller and data transfer control method
US7694104B2 (en) 2002-11-25 2010-04-06 Hitachi, Ltd. Virtualization controller and data transfer control method
US7047360B2 (en) 2002-12-20 2006-05-16 Hitachi, Ltd. Method and apparatus for adjusting performance of logical volume copy destination
US7415587B2 (en) 2002-12-20 2008-08-19 Hitachi, Ltd. Method and apparatus for adjusting performance of logical volume copy destination
US7308481B2 (en) 2003-01-20 2007-12-11 Hitachi, Ltd. Network storage system
JP2004246749A (en) * 2003-02-17 2004-09-02 Hitachi Ltd Storage device system
US7272686B2 (en) 2003-02-17 2007-09-18 Hitachi, Ltd. Storage system
US7366839B2 (en) 2003-02-17 2008-04-29 Hitachi, Ltd. Storage system
JP4651913B2 (en) * 2003-02-17 2011-03-16 Hitachi, Ltd. Storage system
US7925830B2 (en) 2003-02-17 2011-04-12 Hitachi, Ltd. Storage system for holding a remaining available lifetime of a logical storage region
US7089347B2 (en) 2003-03-31 2006-08-08 Hitachi, Ltd. Computer system for managing performances of storage apparatus and performance management method of the computer system
US7694070B2 (en) 2003-03-31 2010-04-06 Hitachi, Ltd. Computer system for managing performances of storage apparatus and performance management method of the computer system
US7774572B2 (en) 2003-07-14 2010-08-10 Fujitsu Limited Migrating data in a distributed storage system based on storage capacity utilization
JP2011054180A (en) * 2003-08-14 2011-03-17 Compellent Technologies Virtual disk drive system and method
US9489150B2 (en) 2003-08-14 2016-11-08 Dell International L.L.C. System and method for transferring data between different raid data storage types for current data and replay data
US9436390B2 (en) 2003-08-14 2016-09-06 Dell International L.L.C. Virtual disk drive system and method
US9047216B2 (en) 2003-08-14 2015-06-02 Compellent Technologies Virtual disk drive system and method
US9021295B2 (en) 2003-08-14 2015-04-28 Compellent Technologies Virtual disk drive system and method
US8560880B2 (en) 2003-08-14 2013-10-15 Compellent Technologies Virtual disk drive system and method
US8555108B2 (en) 2003-08-14 2013-10-08 Compellent Technologies Virtual disk drive system and method
US10067712B2 (en) 2003-08-14 2018-09-04 Dell International L.L.C. Virtual disk drive system and method
US8156561B2 (en) 2003-11-26 2012-04-10 Hitachi, Ltd. Method and apparatus for setting access restriction information
US8806657B2 (en) 2003-11-26 2014-08-12 Hitachi, Ltd. Method and apparatus for setting access restriction information
JP2005196625A (en) * 2004-01-09 2005-07-21 Hitachi Ltd Information processing system and management device
JP4568502B2 (en) * 2004-01-09 2010-10-27 Hitachi, Ltd. Information processing system and management apparatus
US7536505B2 (en) 2004-03-29 2009-05-19 Kabushiki Kaisha Toshiba Storage system and method for controlling block rearrangement
JP4723925B2 (en) * 2004-06-29 2011-07-13 Hitachi, Ltd. Method for controlling storage policy according to volume activity
JP2006012156A (en) * 2004-06-29 2006-01-12 Hitachi Ltd Method for controlling storage policy according to volume activity
JP2008047156A (en) * 2004-08-30 2008-02-28 Hitachi Ltd Storage system, and data rearrangement controller
US7096338B2 (en) 2004-08-30 2006-08-22 Hitachi, Ltd. Storage system and data relocation control device
US8843715B2 (en) 2004-08-30 2014-09-23 Hitachi, Ltd. System managing a plurality of virtual volumes and a virtual volume management method for the system
US7424585B2 (en) 2004-08-30 2008-09-09 Hitachi, Ltd. Storage system and data relocation control device
US8230038B2 (en) 2004-08-30 2012-07-24 Hitachi, Ltd. Storage system and data relocation control device
US7395396B2 (en) 2004-08-30 2008-07-01 Hitachi, Ltd. Storage system and data relocation control device
US8122214B2 (en) 2004-08-30 2012-02-21 Hitachi, Ltd. System managing a plurality of virtual volumes and a virtual volume management method for the system
US8799600B2 (en) 2004-08-30 2014-08-05 Hitachi, Ltd. Storage system and data relocation control device
US7360051B2 (en) 2004-09-10 2008-04-15 Hitachi, Ltd. Storage apparatus and method for relocating volumes thereof
JP2006099763A (en) * 2004-09-29 2006-04-13 Hitachi Ltd Method for managing volume group considering storage tier
US7533230B2 (en) 2004-10-13 2009-05-12 Hewlett-Packard Development Company, L.P. Transparent migration of files among various types of storage volumes based on file access properties
US20060101203A1 (en) * 2004-11-09 2006-05-11 Fujitsu Computer Technologies Limited Storage virtualization apparatus
US7426619B2 (en) 2005-04-22 2008-09-16 Hitachi, Ltd. Inter-volume migration system, inter-volume relocation method, and program therefor
JP4690765B2 (en) * 2005-04-26 2011-06-01 Hitachi, Ltd. Storage management system, storage management server, data relocation control method, and data relocation control program
JP2006309318A (en) * 2005-04-26 2006-11-09 Hitachi Ltd Storage management system, storage management server, data rearrangement control method, and data rearrangement control program
US7409496B2 (en) 2005-04-26 2008-08-05 Hitachi, Ltd. Storage management system, storage management server, and method and program for controlling data reallocation
US7434017B2 (en) 2006-04-03 2008-10-07 Hitachi, Ltd. Storage system with virtual allocation and virtual relocation of volumes
JP2009545245A (en) * 2006-07-21 2009-12-17 Qualcomm Incorporated Efficient assignment of priority values to new and existing QOS filters
US8549248B2 (en) 2006-09-29 2013-10-01 Hitachi, Ltd. Data migration method and information processing system
JP2008084254A (en) * 2006-09-29 2008-04-10 Hitachi Ltd Data migration method and information processing system
JP2008112276A (en) * 2006-10-30 2008-05-15 Hitachi Ltd Relocation system and relocation method
EP2026187A2 (en) 2007-08-08 2009-02-18 Hitachi, Ltd. Storage system and access count equalization method therefor
US7949827B2 (en) 2007-08-08 2011-05-24 Hitachi, Ltd. Storage system and access count equalization method therefor
JP2010198056A (en) * 2009-02-23 2010-09-09 Fujitsu Ltd Allocation control program and allocation control device
WO2012066671A1 (en) * 2010-11-18 2012-05-24 Hitachi, Ltd. Management device for computing system and method of management
JP2012108931A (en) * 2012-01-16 2012-06-07 Hitachi Ltd Data migration method and information processing system
US9218276B2 (en) 2012-02-17 2015-12-22 Fujitsu Limited Storage pool-type storage system, method, and computer-readable storage medium for peak load storage management
JP2013171305A (en) * 2012-02-17 2013-09-02 Fujitsu Ltd Storage device, storage system, storage management method and storage management program
JP2014010709A (en) * 2012-06-29 2014-01-20 Fujitsu Ltd Storage control device, program thereof and method thereof
WO2014009999A1 (en) 2012-07-11 2014-01-16 Hitachi, Ltd. Database system and database management method
JP2016119020A (en) * 2014-12-24 2016-06-30 富士通株式会社 Storage apparatus, control method of storage apparatus and storage apparatus control program

Also Published As

Publication number Publication date
JP3541744B2 (en) 2004-07-14

Similar Documents

Publication Publication Date Title
US10162677B2 (en) Data storage resource allocation list updating for data storage operations
US10613942B2 (en) Data storage resource allocation using blacklisting of data storage requests classified in the same category as a data storage request that is determined to fail if attempted
US9448733B2 (en) Data management method in storage pool and virtual volume in DKC
US8898383B2 (en) Apparatus for reallocating logical to physical disk devices using a storage controller and method of the same
US8769227B2 (en) Storage controller and data management method
US8819364B2 (en) Information processing apparatus, tape device, and computer-readable medium storing program
US9361034B2 (en) Transferring storage resources between snapshot storage pools and volume storage pools in a distributed network
US9619472B2 (en) Updating class assignments for data sets during a recall operation
US7337292B2 (en) Method and apparatus for data relocation between storage subsystems
US7620698B2 (en) File distribution system in which partial files are arranged according to various allocation rules associated with a plurality of file types
US7096336B2 (en) Information processing system and management device
CN1559041B (en) Sharing objects between computer systems
US8423739B2 (en) Apparatus, system, and method for relocating logical array hot spots
EP1234237B1 (en) Storage management system having common volume manager
US6928450B2 (en) Storage apparatus acquiring static information related to database management system
US8239621B2 (en) Distributed data storage system, data distribution method, and apparatus and program to be used for the same
US6895485B1 (en) Configuring and monitoring data volumes in a consolidated storage array using one storage array to configure the other storage arrays
US5987566A (en) Redundant storage with mirroring by logical volume with diverse reading process
US7574435B2 (en) Hierarchical storage management of metadata
EP0375188B1 (en) File system
JP4749140B2 (en) Data migration method and system
JP4416821B2 (en) A distributed file system that maintains a fileset namespace accessible to clients over the network
US5802301A (en) System for load balancing by replicating portion of file while being read by first stream onto second device and reading portion with stream capable of accessing
US6954768B2 (en) Method, system, and article of manufacture for managing storage pools
US7076622B2 (en) System and method for detecting and sharing common blocks in an object storage system

Legal Events

Date Code Title Description
A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20040210

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20040309

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20040322

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20090409

Year of fee payment: 5

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20100409

Year of fee payment: 6

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20110409

Year of fee payment: 7

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20120409

Year of fee payment: 8

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20130409

Year of fee payment: 9

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20140409

Year of fee payment: 10

LAPS Cancellation because of no payment of annual fees