JP4842909B2 - Storage system and data relocation control device

Info

Publication number
JP4842909B2
Authority
JP
Japan
Prior art keywords
storage
volume
migration
migration destination
virtual volume
Prior art date
Legal status
Expired - Fee Related
Application number
JP2007280358A
Other languages
Japanese (ja)
Other versions
JP2008047156A (en)
JP2008047156A5 (en)
Inventor
哲也 丸山
朋之 加地
伸夫 紅山
達人 青島
亨 高橋
沢希 黒田
Original Assignee
株式会社日立製作所
Priority date
Filing date
Publication date
Priority to JP2004250327
Application filed by 株式会社日立製作所
Priority to JP2007280358A
Publication of JP2008047156A5
Publication of JP2008047156A
Application granted
Publication of JP4842909B2

Description

  The present invention relates to a storage system and a data relocation control device.

  The storage system includes, for example, at least one storage device called a disk array subsystem or the like. In this storage device, for example, disk drives such as hard disk drives and semiconductor memory drives are arranged in an array, and a storage area based on RAID (Redundant Array of Independent Disks) is provided. A host computer (hereinafter referred to as "host") accesses a logical storage area provided by the storage device, and reads and writes data.

By the way, the amount of data managed by organizations such as corporations, local governments, educational institutions, financial institutions, and public offices increases year by year, and storage devices are added or replaced as the amount of data grows. As the amount of data increases and the storage system configuration becomes more complicated, it has been proposed to place the data of various application programs, such as mail management software and database management software, at appropriate locations according to the value of the data, thereby improving the utilization efficiency of the storage system (Patent Documents 1 to 5).
JP 2003-345522 A
JP 2001-337790 A
JP 2001-67187 A
JP 2001-249853 A
JP 2004-70403 A

  Each of the above patent documents discloses a technique for copying data stored in one volume to another volume and rearranging the data based on disk performance information and usage information.

  However, with the techniques described in these documents, data must be relocated individually for each volume, and volumes cannot be moved between tiers freely defined by the user, so usability is low. In addition, because these techniques relocate data in units of single volumes, it is difficult to relocate a group of related volumes all at once. Furthermore, they focus only on the relocation of data and give no consideration to processing after the relocation, which also lowers usability.

Accordingly, an object of the present invention is to provide a storage system and a data relocation control device that can relocate data distributed to a plurality of storage devices more easily.
One object of the present invention is to provide a storage system and a data relocation control device that are improved in usability by collectively relocating data respectively stored in a plurality of mutually related volumes.
One object of the present invention is to provide a storage system and a data relocation control device that are improved in usability by automatically executing necessary processing after data relocation. Further objects of the present invention will become apparent from the description of the embodiments given later.

  In order to solve the above problems, a storage system of the present invention includes a plurality of storage devices each having at least one volume, a virtualization unit that virtually unifies the volumes of the storage devices, a storage unit that stores volume attribute information for managing the attribute information of each volume, and a relocation unit that relocates a designated migration source volume to a designated storage tier among a plurality of storage tiers each generated based on a plurality of preset policies and the volume attribute information.

  Here, the policy can be arbitrarily set by the user, and the same volume may belong to different storage tiers. The storage tier can be set for each storage device or across a plurality of storage devices.

  For example, storage tier management information can be provided that includes policy identification information for identifying each policy and a tier configuration condition associated with each piece of policy identification information, and a storage tier corresponding to each policy can be generated from the group of volumes satisfying each tier configuration condition. The tier configuration condition can include at least one of RAID level, drive type, storage capacity, storage device type, and usage status.

  That is, for example, a storage tier is generated according to a policy defined by the user, such as a high-performance tier composed of high-performance disks and a low-cost tier composed of low-cost disks. One or more volumes belonging to each storage tier may exist in the same storage device, or may exist in different storage devices.

  The policy that defines each hierarchy is determined by the hierarchy configuration conditions that can be set by the user. For example, when defining a high-performance tier, the user may set a condition for selecting a high-performance disk or a high-performance storage device as the tier configuration condition. For example, when defining a low-cost tier, the user may set a condition for selecting an inexpensive disk as a tier configuration condition. The storage tier is constituted by volumes satisfying these tier configuration conditions.

  When relocating data of a volume belonging to a certain storage tier, the migration source volume and the migration destination storage tier may be designated respectively. As a result, the data of the designated migration source volume is moved to the volume belonging to the designated migration destination storage tier.

  The rearrangement unit can relocate the designated migration source volumes in units of a group including a plurality of volumes, for example, a group of mutually related volumes. Examples of mutually related volumes include a volume group that stores data used by the same application program and a volume group that stores data constituting the same file system. Volumes can also be grouped even when the data they store are not closely related to each other.

  The relocation unit can also move the designated migration source volume to the migration destination storage tier and then execute, on the migrated volume, a predetermined process associated in advance with the migration destination storage tier. Examples of the predetermined process include setting an access attribute such as read-only and duplicating the volume.

  The relocation unit can include a migration destination candidate selection unit that selects a migration destination candidate volume capable of copying the storage contents of the designated migration source volume, and can present the selected migration destination candidate volume to the user. By referring to the volume attribute information, the migration destination candidate selection unit can select, as a migration destination candidate volume, a volume that matches predetermined essential attributes among the attributes of the migration source volume.

  The volume attribute information can be configured to include a static attribute and a dynamic attribute, and a predetermined attribute included in the static attribute can be set as the required attribute. The required attribute can include at least the storage capacity of the volume.

  Thus, the rearrangement unit selects a volume that matches the essential attributes of the migration source volume as the migration destination candidate volume. When there are a plurality of migration destination candidate volumes whose essential attributes match, the migration destination candidate selection unit can narrow the selection down to one volume based on the degree of matching of attributes other than the essential attributes. Examples of attributes other than the essential attributes include RAID level, disk type, and storage device model. That is, when a plurality of volumes having matching essential attributes are extracted, the volume whose configuration is closest to that of the migration source volume is selected as the migration destination candidate volume.

  The rearrangement unit can further include a changing unit for changing the migration destination candidate volume selected by the migration destination candidate selection unit. This changing unit can reselect the migration destination candidate volume from among the volumes whose essential attributes match but which were not initially selected.

  For example, the changing unit can change a migration destination candidate volume that has once been selected, such as when the migration destination candidate volumes are concentrated in a specific RAID group, or when data groups with different usage modes, such as randomly accessed data and sequentially accessed data, would otherwise be placed in the same RAID group. The changing unit can also change the migration destination candidate volume based on an instruction from the user.

  A data relocation control device according to another aspect of the present invention controls data relocation of a plurality of volumes distributed among a plurality of storage devices, each volume being virtually integrated and managed by a virtualization unit, and includes a storage unit and a control unit. (A) The storage unit can store (a1) volume attribute information including at least identification information of each volume, RAID level, drive type, storage capacity, and usage status, and (a2) storage tier management information including policy identification information for identifying each of a plurality of policies definable by the user and tier configuration conditions associated with each piece of policy identification information. (B) The control unit can include a relocation unit that relocates the designated migration source volume to a designated storage tier among a plurality of storage tiers generated based on the storage tier management information and the volume attribute information.

  Another data relocation control device according to the present invention centrally manages the volumes of a plurality of storage devices and controls the relocation of the data stored in those volumes. It includes a virtualization unit that virtually manages the volumes of each storage device, a storage unit that stores volume attribute information for managing the attribute information of each volume, and a relocation unit that relocates a designated migration source volume to a designated storage tier among a plurality of storage tiers generated based on a plurality of preset policies and the volume attribute information.

  The present invention can also be understood as, for example, the following data relocation method: a method of relocating data stored in a plurality of volumes distributed among a plurality of storage devices to volumes in the same or different storage devices, in which the volumes of each storage device are virtually and centrally managed, the attribute information of each volume is managed as volume attribute information, a plurality of policies are set in advance, a plurality of storage tiers are generated based on the plurality of policies and the volume attribute information, a migration source volume and a migration destination storage tier are each designated, and the designated migration source volume is relocated to the designated storage tier.

  At least some of the means, functions, and steps of the present invention may be configured as a computer program that is read and executed by a microcomputer. Such a computer program can be distributed by being fixed to a storage medium such as a hard disk or an optical disk. Alternatively, the computer program can be supplied via a communication network such as the Internet.

  Hereinafter, embodiments of the present invention will be described with reference to the drawings. FIG. 1 is an explanatory diagram schematically showing the overall concept of the present embodiment. As will be described below, the storage system of this embodiment includes a plurality of storage apparatuses A to D.

  Volumes of the storage apparatuses A to D are virtually integrated and the host (see FIG. 2) recognizes the plurality of storage apparatuses A to D as a single virtual storage apparatus as a whole.

  Each of the storage apparatuses A to D includes volumes A1 to A4, B1 to B4, C1 to C4, and D1 to D4, respectively. Each of these volumes is a logical storage area set on a physical storage area provided by a physical storage drive such as a hard disk drive, a semiconductor memory drive, or an optical disk drive.

  Here, each of the storage devices A to D can be provided with the same type of drive, or different types of drives can be mixed together. Therefore, volume attributes such as performance and price may be different even for volumes located in the same storage apparatus.

  The user can arbitrarily group each volume of the storage system as a plurality of storage tiers 1 to 3. For example, a certain storage tier 1 can be defined as a high reliability tier. The high-reliability hierarchy is configured by a volume group formed by configuring a high-reliability drive such as a fiber channel disk (FC disk) with RAID1. Another storage tier 2 can be defined as a low cost tier, for example. The low-cost layer is configured by a volume group formed by configuring an inexpensive drive such as a SATA (Serial AT Attachment) disk with RAID5. Furthermore, another storage tier 3 can be defined as an archive tier, for example. The archive hierarchy can be composed of a volume group set on an inexpensive disk having a capacity less than a predetermined capacity, for example.

  As shown in FIG. 1, the storage tiers 1 to 3 are virtually configured across the storage apparatuses A to D. In other words, the user can freely group the volumes constituting the storage system into desired storage tiers based on business usage standards (policies) such as a high-reliability tier, a low-cost tier, a fast-response tier, and an archive tier. Because policies can be set independently of the storage system configuration, depending on the conditions that make up each storage tier, some volumes may belong to multiple storage tiers while other volumes may not belong to any storage tier.

  Each of the storage tiers 1 to 3 can be provided with a volume group as a data relocation target. As an example, the storage tier 1 shows a migration group composed of two volumes V1 and V2. Each of these volumes V1, V2 is a volume related to each other such as storing a data group used by the same application program or a data group constituting the same file system.

  The value of data decreases with the passage of time, for example. High value data is placed in a high reliability hierarchy and is frequently used by application programs. It is preferable to move data whose value has been reduced over time to another storage tier. This is because the storage resources of the high reliability hierarchy are limited.

  Therefore, the user considers the rearrangement of data stored in a plurality of volumes V1 and V2 related to each other, and determines, for example, the movement from the storage tier 1 to the storage tier 3 (here, the archive tier). The user designates the relocation of the migration source volumes V1 and V2 in a lump and instructs the migration to the storage tier 3.

  As a result, in the migration destination storage tier 3, volumes that can store the data of the respective volumes V1 and V2 constituting the migration group are selected, and the data is copied to these selected volumes. After the copy is completed, the data of the migration source volumes V1 and V2 can be deleted so that these volumes can be reused as empty volumes.

  Here, predetermined actions can be associated in advance with the storage tiers 1 to 3 according to the policies of the respective storage tiers 1 to 3. Here, an action means predetermined information processing or a data operation executed in the storage tier. For example, the storage tier 1 defined as the high reliability tier is associated in advance with a process for generating a copy of the rearranged data. Further, for example, no action need be associated with the storage tier 2 defined as the low cost tier. Further, for example, the storage tier 3 defined as the archive tier can be associated, as its action, with a plurality of processes including a process for generating a copy of the rearranged data and a process for setting a read-only access attribute.

  When the data of each volume V1, V2 constituting the migration group is copied to a volume belonging to the storage tier 3, a predetermined action associated with the storage tier 3 is automatically executed. The data rearranged in the storage tier 3 is duplicated and set to read-only, and modification is prohibited.

  Here, when the data of each of the volumes V1 and V2 is rearranged, a volume capable of copying the data is selected in the migration destination storage tier. Each volume has attribute information. Examples of the volume attribute information include identification information for identifying each volume, RAID level, disk type, storage capacity, usage status indicating whether or not the volume is being used, and the model of the storage device to which the volume belongs.

  Not all of these volume attributes need to match between the migration source volumes (V1, V2) and the migration destination candidate volume; only certain essential attributes need to match. An example of an essential attribute is the storage capacity. That is, a volume having a storage capacity equal to or greater than that of the migration source volume can be selected as the migration destination volume.

  If more volumes with matching essential attributes are detected than required, the volume whose attributes are closer to those of the migration source volume can be selected as the migration destination volume by considering the degree of coincidence of attributes other than the mandatory attributes. In this example, the volume attributes are roughly divided into two types, mandatory attributes and other attributes, but the present invention is not limited to this. For example, a configuration may be used in which the attributes are classified into three or more types, such as mandatory attributes, semi-essential attributes, and other attributes, and the degree of coincidence between volumes is obtained by assigning weights to the attributes of each type. As an example, the volume capacity and emulation type may be mandatory attributes, and the other attributes (RAID level, disk type, etc.) may be non-essential attributes. Alternatively, the emulation type may be a mandatory attribute, the volume capacity a semi-essential attribute, and the remaining attributes non-essential attributes. When the volume capacity is a semi-essential attribute, the capacities of the migration source volume and the migration destination volume do not necessarily have to match, but the migration destination volume must have a capacity equal to or greater than that of the migration source volume. In addition, the usage state is an exceptional attribute not included in these classifications, and the usage state of the migration destination volume must always be "free".
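
  As an illustration of the attribute classification just described, the following sketch scores a migration destination candidate against a migration source volume. It is a minimal sketch assuming dictionary-shaped volume records; the attribute names, the weights, and the particular split into mandatory, semi-essential, and non-essential attributes are assumptions chosen for the example, not values prescribed by this embodiment.

```python
# Illustrative sketch only: attribute names and weights are assumptions.
MANDATORY = ("emulation_type",)            # must match exactly
SEMI_ESSENTIAL = ("capacity_gb",)          # destination must be equal or larger
NON_ESSENTIAL_WEIGHTS = {"raid_level": 2, "disk_type": 2, "model": 1}

def match_score(source, candidate):
    """Return None if the candidate cannot be used, otherwise a weighted score."""
    if candidate.get("usage") != "free":   # usage state is the exceptional attribute
        return None
    for attr in MANDATORY:
        if source[attr] != candidate[attr]:
            return None
    for attr in SEMI_ESSENTIAL:
        if candidate[attr] < source[attr]: # equal or greater capacity is required
            return None
    return sum(weight for attr, weight in NON_ESSENTIAL_WEIGHTS.items()
               if source.get(attr) == candidate.get(attr))
```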

  FIG. 2 is a block diagram schematically showing the overall configuration of the storage system. As will be described later, this storage system includes, for example, a plurality of hosts 10A and 10B ("host 10" unless otherwise distinguished), a volume virtualization apparatus 20, a plurality of storage apparatuses 30 and 40, a management client 50, and a storage management server 60, which are connected to each other via a communication network CN1 such as a LAN (Local Area Network).

  Each of the hosts 10A and 10B can be realized as a computer system such as a server, a personal computer, a workstation, a mainframe, a portable information terminal, or the like. For example, a plurality of open hosts and a plurality of mainframe hosts can be mixed in the same storage system.

  Each of the hosts 10A and 10B includes, for example, application programs (abbreviated as "App" in the drawing) 11A and 11B ("application program 11" unless otherwise distinguished) and HBAs (Host Bus Adapters) 12A and 12B ("HBA 12" unless otherwise distinguished). A plurality of application programs 11 and HBAs 12 can be provided for each host 10.

  Examples of the application programs 11A and 11B include an e-mail management program, a database management program, and a file system. The application programs 11A and 11B can be connected to a plurality of client terminals (not shown) via another communication network, and can provide an information processing service in response to a request from each client terminal.

  The HBA 12 is in charge of data transmission and reception with the storage system, and is connected to the volume virtualization apparatus 20 via the communication network CN2. Examples of the communication network CN2 include a LAN, a SAN (Storage Area Network), the Internet, and a dedicated line. In the case of an open host, data transfer is performed based on protocols such as TCP/IP (Transmission Control Protocol/Internet Protocol), FCP (Fibre Channel Protocol), and iSCSI (Internet Small Computer System Interface). In the case of a mainframe host, data transfer is performed according to a communication protocol such as FICON (Fibre Connection: registered trademark), ESCON (Enterprise System Connection: registered trademark), ACONARC (Advanced Connection Architecture: registered trademark), or FIBARC (Fibre Connection Architecture: registered trademark).

  In addition, a management program (not shown) such as a path control program may be installed in each of the hosts 10A and 10B. Such a management program performs, for example, processing such as distributing the load among the plurality of HBAs 12 and switching paths when a failure occurs.

  The volume virtualization apparatus (hereinafter also referred to as "virtualization apparatus") 20 virtualizes the volumes 330, 430, and the like existing in the storage system and makes them appear as one virtual storage apparatus. As shown in FIG. 3A, for example, the virtualization apparatus 20 provides each host 10 with a plurality of logical volumes (LDEVs) indicated by identification information LID001 to LID004. Each of these logical volumes is associated with another logical volume indicated by identification information PID01 to PID04. A logical volume having the identification information PID is a real volume that actually stores data, and the logical volumes that can be directly recognized by the host 10 are virtual volumes.

  By controlling the mapping between the virtual volumes and the real volumes, data can be moved transparently to the host 10. For example, as shown in FIG. 3B, when moving data stored in a real volume (PID01) to another real volume (PID02), the data is copied between these real volumes and the mapping between the real volume and the virtual volume is simply reset. Alternatively, by exchanging the identification information between the virtual volume (LID001) and the virtual volume (LID002), the data storage destination device can be changed without the host 10 being aware of it.
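
  The two remapping styles described above can be pictured with the following minimal sketch; the dictionary-based mapping table and the function names are assumptions made only for illustration.

```python
# Hypothetical mapping table: virtual volume ID -> real volume ID.
mapping = {"LID001": "PID01", "LID002": "PID02"}

def remap_after_copy(virtual_id, new_real_id):
    """After the data has been copied to another real volume, point the virtual
    volume at it; the host keeps accessing the same virtual ID."""
    mapping[virtual_id] = new_real_id

def swap_virtual_ids(vid_a, vid_b):
    """Alternative style: exchange the real volumes behind two virtual IDs."""
    mapping[vid_a], mapping[vid_b] = mapping[vid_b], mapping[vid_a]

remap_after_copy("LID001", "PID02")   # e.g. data moved from PID01 to PID02
```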

  In this way, the virtualization apparatus 20 virtualizes and manages a variety of real volumes existing on the storage system and provides them to the host 10. As will be described later, the virtualization apparatus 20 can be provided in the storage apparatus, or can be provided in a highly functional intelligent switch. Further, as will be described later, the virtualization apparatus 20 and the storage management server 60 can be provided on the same computer.

  Returning to FIG. Each of the storage devices 30 and 40 includes logical volumes (real volumes) 330 and 430, and is connected to the virtualization device 20 via a communication network CN3 such as a SAN. Each storage device 30, 40 reads and writes data to and from the volume in response to a request from the host 10. A configuration example of the storage apparatus will be described later.

  The management client 50 is configured as a computer system such as a personal computer, a workstation, or a portable information terminal, and includes a web browser 51. By operating the web browser 51 to log in to the storage management server 60, the user can, for example, give various instructions to the storage system or acquire various information about the storage system.

  The storage management server 60 is a computer system that manages volume relocation and the like of the storage system. A configuration example of the storage management server 60 will be described later. For example, the storage management server 60 can include a data relocation management unit 632 and a volume database (“DB” in the figure) 640.

  FIG. 4 is a block diagram schematically showing the hardware configuration of the storage system, and shows a case where the virtualization apparatus 20 is configured as a storage apparatus.

  Hereinafter, in this embodiment, the virtualization device 20 is also referred to as the third storage device 20. As will be described later, the third storage device 20 can be configured to include, for example, a plurality of channel adapters (hereinafter "CHA") 210, a plurality of disk adapters (hereinafter "DKA") 220, a cache memory 230, a shared memory 240, connection control units 250 and 260, a storage unit 270, and an SVP 280.

  The CHA 210 controls data exchange between the host 10 and the external first storage device 30 and second storage device 40, and can be configured, for example, as a microcomputer system including a CPU, a memory, an input/output circuit, and the like. Each CHA 210 can include a plurality of communication ports 211 and can exchange data individually for each communication port 211. Each CHA 210 corresponds to one type of communication protocol and is prepared according to the type of the host 10. However, each CHA 210 may also be configured to support a plurality of types of communication protocols.

  The DKA 220 controls data exchange with the storage unit 270. Similar to the CHA 210, the DKA 220 can be configured as, for example, a microcomputer system including a CPU and a memory. Each DKA 220 accesses each disk drive 271 by, for example, converting a logical block address (LBA) designated by the host 10 into a physical disk address, and reads / writes data. Note that the CHA 210 function and the DKA 220 function may be integrated into one or a plurality of controllers.

  The cache memory 230 stores write data written from the host 10 and read data read by the host 10. The cache memory 230 can be configured from, for example, a volatile or nonvolatile memory. When the cache memory 230 includes a volatile memory, it is preferable to back up the memory with a battery power source (not shown). Although not illustrated, the cache memory 230 can be composed of two areas, a read cache area and a write cache area, and the data stored in the write cache area can be stored in a multiplexed manner. That is, since the same read data also exists on the disk drive 271, even if it is lost from the cache it only needs to be read from the disk drive 271 again, so there is no need for multiplexing. On the other hand, since write data exists only in the cache memory 230 within the storage device 20, it is preferable, in terms of reliability, to store it in a multiplexed manner. Whether or not the cache data is multiplexed and stored, however, depends on the specification.

  The shared memory (also called the control memory) 240 can be composed of, for example, a nonvolatile memory, but may also be composed of a volatile memory. In the shared memory 240, control information, management information, and the like, such as the mapping table T1, are stored. Such control information can be multiplexed and managed by a plurality of memories 240. A configuration example of the mapping table T1 will be described later.

  Here, the shared memory 240 and the cache memory 230 can be configured as separate memory packages, respectively, or the cache memory 230 and the shared memory 240 may be provided in the same memory package. It is also possible to use a part of the memory as a cache area and the other part as a control area. That is, the shared memory and the cache memory can be configured as the same memory.

  The first connection control unit (switch unit) 250 connects each CHA 210, each DKA 220, the cache memory 230, and the shared memory 240 to each other. As a result, all the CHAs 210 and DKAs 220 can individually access the cache memory 230 and the shared memory 240, respectively. The connection control unit 250 can be configured as, for example, an ultra-high speed crossbar switch. The second connection control unit 260 is for connecting each DKA 220 and the storage unit 270.

  The storage unit 270 includes a large number of disk drives 271. The storage unit 270 can be provided in the same casing together with the controller parts such as each CHA 210 and each DKA 220, or can be provided in a different casing from the controller part.

  The storage unit 270 can be provided with a plurality of disk drives 271. As the disk drive 271, for example, an FC disk (Fibre Channel disk), a SCSI (Small Computer System Interface) disk, a SATA (Serial AT Attachment) disk, or the like can be used. Further, the storage unit 270 need not be composed of the same type of disk drives, and a plurality of types of disk drives can be mixed.

  Here, in general, performance decreases in the order of FC disk, SCSI disk, and SATA disk. For example, frequently accessed data (high-value data, etc.) is stored on a high-performance FC disk, and low-access data (low-value data, etc.) is stored on a low-performance SATA disk. In addition, the type of the disk drive can be properly used according to the data utilization mode. On the physical storage area provided by each disk drive 271, a plurality of logical storage areas can be provided in a hierarchy. The configuration of the storage area will be described later.

  The SVP (Service Processor) 280 is connected to each CHA 210 and each DKA 220 via an internal network CN11 such as a LAN. In the figure, only SVP 280 and CHA 210 are connected, but SVP 280 can also be connected to each DKA 220. The SVP 280 collects various states in the storage apparatus 20 and provides them to the storage management server 60 as they are or after processing them.

  The third storage device 20 that realizes volume virtualization serves as a window for processing data input/output requests from the host 10, and is connected to the first storage device 30 and the second storage device 40 via the communication network CN3. In the figure, two storage devices 30 and 40 are shown connected to the storage device 20, but the present invention is not limited to this; one storage device may be connected to the storage device 20, or three or more storage devices may be connected to the storage device 20.

  For example, the first storage device 30 can include a controller 310, a communication port 311 for connecting to the third storage device 20, and a disk drive 320. The controller 310 realizes the functions of the CHA 210 and DKA 220 described above, and controls data exchange with the third storage device 20 and the disk drive 320, respectively.

  The first storage device 30 may have the same or substantially the same configuration as the third storage device 20, or may have a different configuration. The first storage device 30 can perform data communication with the third storage device 20 in accordance with a predetermined communication protocol (for example, FC, iSCSI, etc.) and includes a storage drive (storage device) such as the disk drive 320. As will be described later, the logical volume of the first storage device 30 is mapped to a predetermined tier of the third storage device 20 and used as if it were an internal volume of the third storage device 20.

  In this embodiment, a hard disk is exemplified as a physical storage drive, but the present invention is not limited to this. In addition to the hard disk, for example, a semiconductor memory drive, a magnetic tape drive, an optical disk drive, a magneto-optical disk drive, or the like can be used as the storage drive.

  Similar to the first storage device 30, the second storage device 40 can be configured to include, for example, a controller 410, a disk drive 420, and a port 411. The second storage device 40 may have the same configuration as the first storage device 30 or may have a different configuration.

  FIG. 5 is a configuration explanatory diagram focusing on the logical storage structure of the storage system. The configuration of the third storage device 20 will be described first. The storage structure of the third storage device 20 can be broadly divided into, for example, a physical storage hierarchy and a logical storage hierarchy. The physical storage hierarchy is configured by a PDEV (Physical Device) 271 that is a physical disk. PDEV corresponds to a disk drive.

  The logical storage hierarchy can be composed of a plurality of (for example, two types) hierarchies. One logical hierarchy can be configured from a VDEV (Virtual Device) 272. Another logical hierarchy can be composed of an LDEV (Logical Device) 273.

  The VDEV 272 can be configured by grouping a predetermined number of PDEVs 271 such as one set of four (3D + 1P) and one set of eight (7D + 1P). The storage areas provided by each PDEV 271 belonging to the group are gathered to form one RAID storage area. This RAID storage area becomes the VDEV 272.

  Here, not all VDEVs 272 are directly provided on the PDEV 271, and some VDEVs 272 can be generated as virtual intermediate devices. Such a virtual VDEV 272 serves as a tray for mapping LUs (Logical Units) included in the external storage apparatuses 30 and 40.

  At least one LDEV 273 can be provided on the VDEV 272. The LDEV 273 can be configured by dividing the VDEV 272 with a fixed length. When the host 10 is an open host, the LDEV 273 is mapped to the LU 274, so that the host 10 recognizes the LDEV 273 as one physical disk volume. The open host 10 accesses a desired LDEV 273 by designating a LUN (Logical Unit Number) and a logical block address.

  The LU 274 is a device that can be recognized as a SCSI logical unit. Each LU 274 is connected to the host 10 via the port 211A. Each LU 274 can be associated with at least one LDEV 273. By associating a plurality of LDEVs 273 with one LU 274, the LU size can be virtually expanded.

  A CMD (Command Device) 275 is a dedicated LU used to pass commands and status between a program running on the host 10 and the storage device controllers (CHA 210, DKA 220). Commands from the host 10 are written in the CMD 275. The controller of the storage apparatus executes processing according to the command written in CMD 275 and writes the execution result in CMD 275 as a status. The host 10 reads and confirms the status written in the CMD 275, and writes the processing content to be executed next in the CMD 275. In this way, the host 10 can give various instructions to the storage apparatus via the CMD 275.
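
  The command/status exchange through the CMD 275 can be sketched from the host side as a simple write-then-poll loop. This is only a schematic under assumed interfaces (the cmd_device object and its write/read_status methods are hypothetical), not the actual in-band protocol.

```python
import time

def issue_command_via_cmd(cmd_device, command, poll_interval=0.1, timeout=30.0):
    """Schematic host-side flow: write a command into the CMD LU, then poll
    until the storage controller has written back a status."""
    cmd_device.write(command)                 # host writes the command into CMD
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = cmd_device.read_status()     # controller writes its result here
        if status is not None:
            return status                     # host reads and confirms the status
        time.sleep(poll_interval)
    raise TimeoutError("no status written to the CMD within the timeout")
```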

  The first storage device 30 and the second storage device 40 are connected to the external connection initiator port 211B of the third storage device 20 via the communication network CN3. The first storage device 30 includes a plurality of PDEVs 320 and an LDEV 330 set on a storage area provided by the PDEVs 320. Each LDEV 330 is associated with the LU 340. Similarly, the second storage device 40 includes a plurality of PDEVs 420 and an LDEV 430, and the LDEV 430 is associated with the LU 440.

  The LDEV 330 included in the first storage device 30 is mapped to the VDEV 272 (“VDEV2”) of the third storage device 20 via the LU 340. The LDEV 430 included in the second storage device 40 is mapped to the VDEV 272 (“VDEV3”) of the third storage device via the LU 440.

  In this way, by mapping the real volumes (LDEVs) of the first and second storage devices 30 and 40 to a predetermined logical tier of the third storage device 20, the third storage device 20 can make the externally existing volumes 330 and 430 appear to the host 10 as if they were its own volumes. Note that the method of taking a volume existing outside the third storage device 20 into the third storage device 20 is not limited to the above example.

  Next, FIG. 6 is a block diagram showing an outline of the hardware configuration of the storage management server 60. The storage management server 60 can be configured to include, for example, a communication unit 610, a control unit 620, a memory 630, and a volume database 640.

  The communication unit 610 performs data communication via the communication network CN1. The control unit 620 performs overall control of the storage management server 60. In the memory 630, for example, a web server program 631, a data rearrangement management program 632, and a database management system 633 are stored.

  In the volume database 640, for example, a volume attribute management table T2, a storage tier management table T3, a corresponding host management table T4, a migration group management table T5, and an action management table T6 are stored. A configuration example of each table will be described later.

  The web server program 631 is read and executed by the control unit 620, thereby realizing a web server function on the storage management server 60. The data relocation management program 632 implements a data relocation management unit on the storage management server 60 by being read by the control unit 620. The database management system 633 manages the volume database 640. The web server function, data relocation management function, and database management function can be executed in parallel.

  FIG. 7 is an explanatory diagram showing the configuration of the mapping table T1. The mapping table T1 is used to map the volumes of the first storage device 30 and the second storage device 40 to the third storage device 20, respectively. The mapping table T1 can be stored in the shared memory 240 of the third storage device 20.

  The mapping table T1, for example, associates LUN, LDEV number, maximum number of slots (capacity) of LDEV, VDEV number, maximum number of slots (capacity) of VDEV, device type, and path information. Can be configured. The path information includes internal path information indicating a path to a storage area (PDEV 271) in the third storage device 20 and external path information indicating a path to a volume of the first storage device 30 or the second storage device 40. It can be divided roughly. The external path information can include, for example, a WWN (World Wide Name) and a LUN.
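
  One row of the mapping table T1 might be modeled as follows; the field names and the optional internal/external path split are assumptions based on the description above.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MappingEntry:
    """One assumed row of mapping table T1."""
    lun: int
    ldev_number: int
    ldev_max_slots: int                    # capacity of the LDEV
    vdev_number: int
    vdev_max_slots: int                    # capacity of the VDEV
    device_type: str
    internal_path: Optional[str] = None    # path to a PDEV inside the third storage device
    external_wwn: Optional[str] = None     # external path: WWN of the external port
    external_lun: Optional[int] = None     # external path: LUN on the external device
```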

  FIG. 8 shows an example of the volume attribute management table T2. The volume attribute management table T2 is for managing attribute information of each volume distributed to the storage system.

  The volume attribute management table T2 associates, for each virtual volume, for example, a logical ID that identifies the virtual volume, the physical ID of the real volume associated with the virtual volume, the RAID level, the emulation type, the disk type, the storage capacity, the usage state, and the model of the storage device.

Here, the RAID level is information indicating a RAID configuration such as RAID 0, RAID 1, or RAID 5, for example. The emulation type is information indicating the structure of the volume. For example, the volume provided to the open host and the volume provided to the mainframe host have different emulation types. The usage state is information indicating whether or not the volume is being used. The model is information indicating the model of the storage apparatus in which the volume exists.
The logical ID is an ID of a logical volume provided to the host 10 by the volume virtualization apparatus 20, and the physical ID is an ID indicating the location of an actual volume corresponding to each logical volume. The physical ID is composed of the device number of the storage device storing the real volume and the volume number in the device.
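
  A corresponding sketch of one volume-attribute record, and of composing the physical ID from the device number and the in-device volume number, follows; the record fields mirror the description above, while the "device:volume" ID format is an assumption.

```python
from dataclasses import dataclass

@dataclass
class VolumeAttributes:
    """One assumed row of the volume attribute management table T2."""
    logical_id: str       # virtual volume ID presented to the host
    physical_id: str      # location of the real volume behind it
    raid_level: str       # e.g. "RAID1", "RAID5"
    emulation_type: str
    disk_type: str        # e.g. "FC", "SATA"
    capacity_gb: int
    usage: str            # "in use" or "free"
    model: str            # model of the storage device holding the volume

def make_physical_id(device_number: int, volume_number: int) -> str:
    # The physical ID combines the storage device number and the volume number
    # inside that device; this concrete format is an assumption.
    return f"{device_number:02d}:{volume_number:04d}"
```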

  FIG. 9 shows an example of the storage tier management table T3. The storage tier management table T3 can be configured, for example, by associating a storage tier number, a storage tier name, a conditional expression that defines the storage tier, and an action that is automatically executed. Actions are not essential setting items, and storage tiers can be defined without associating actions.

  A user (system administrator or the like) can set a desired name for the storage tier name. For example, names such as a highly reliable tier, a low cost tier, a fast response tier, and an archive tier can be used as the storage tier name. In the conditional expression, a search condition for extracting a volume that should belong to the storage tier is set. The search condition is set by the user according to the policy of the storage tier.

  Depending on the search conditions, for example, a volume configured from a predetermined type of disk with a predetermined RAID level may be detected ("RAID level = RAID1 and disk type = FC"), or a volume existing in a predetermined storage device may be detected ("model = SS1"). For example, in the high-reliability hierarchy (#1), volumes made redundant by RAID1 with highly reliable FC disks are selected, so the high-reliability hierarchy is composed only of highly reliable volumes. In the low-cost hierarchy (#2), volumes in which inexpensive SATA disks are made redundant with RAID5 are selected, so the low-cost hierarchy is composed only of inexpensive volumes of relatively small capacity. In the high-speed response hierarchy (#3), volumes obtained by striping (RAID0) disks in a model capable of high-speed response are selected, so the high-speed response hierarchy is composed only of volumes with fast I/O processing that require no processing such as parity calculation. In the archive hierarchy (#4), volumes composed of inexpensive SATA disks and having a capacity less than a predetermined capacity are selected, so the archive hierarchy is composed of low-cost volumes.

  As shown in FIG. 9, a volume group that should belong to each storage tier is detected by searching the volume attribute management table T2 based on the conditional expression set in the storage tier management table T3. Note that the storage tier and the volume group are not explicitly and directly associated, but are indirectly associated via the conditional expression. Thus, even when the physical configuration of the storage system changes in various ways, the change can be accommodated easily.
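
  The indirect association between a storage tier and its volumes can be pictured as evaluating each tier's conditional expression against every row of the volume attribute table. In the sketch below each volume is a plain dictionary of attribute values; the predicate form and the concrete thresholds are assumptions modeled on the examples in the text.

```python
# Tiers are defined only by conditions, never by an explicit volume list, so
# re-running the search automatically reflects configuration changes.
TIERS = {
    "high reliability": lambda v: v["raid_level"] == "RAID1" and v["disk_type"] == "FC",
    "low cost":         lambda v: v["raid_level"] == "RAID5" and v["disk_type"] == "SATA",
    "archive":          lambda v: v["disk_type"] == "SATA" and v["capacity_gb"] < 100,  # threshold assumed
}

def volumes_in_tier(tier_name, volume_table):
    """Search the volume attribute table for volumes satisfying the tier condition."""
    condition = TIERS[tier_name]
    return [v for v in volume_table if condition(v)]
```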

  FIG. 10 is an explanatory diagram showing an example of the corresponding host management table T4. The corresponding host management table T4 includes, for example, a logical ID for identifying a virtual volume, information for specifying a host that accesses the virtual volume (for example, a domain name), and a name of an application program that uses the virtual volume. Can be configured by associating.

  FIG. 11 is an explanatory diagram showing an example of the migration group management table T5. A migration group is a unit for relocating data. In this embodiment, a migration group is composed of a plurality of mutually related volumes, and their data can be relocated in a batch. Mutually related volume groups can be extracted by searching the corresponding host management table T4 shown in FIG. 10.

  The migration group management table T5 can associate, for example, a group number, a group name, the logical IDs of the volumes belonging to the group, and the name of the storage tier to which the group currently belongs. The name of the migration group can be freely set by the user. In this way, each migration group can be configured, for example, by grouping volumes that store data groups used by the same application program or volumes that store data groups making up the same file system. Note that the storage tier name to which the group belongs may be left unset when data relocation has not yet been performed, for example, immediately after a new migration group is set.

  FIG. 12 is an explanatory diagram showing an example of the action management table T6. The action management table T6 defines specific contents of predetermined information processing and data operation preset in the storage tier. The action management table T6 can be configured, for example, by associating an ID for identifying an action, the name of the action, and a script (program) to be executed by the action. Therefore, when an action ID is set in advance in the storage tier management table T3, a necessary action can be executed by searching the action management table T6 using the action ID as a search key.

  For example, the action ID "A1" is set in the high-reliability hierarchy. The action ID "A1" performs mirroring, and is associated with a script for generating a volume copy. Therefore, when a certain migration group is rearranged in the high-reliability hierarchy, a copy of the volume group is generated. In the archive hierarchy, the action ID "A3" is set. The action ID "A3" performs data archive processing, and is associated with a plurality of scripts necessary for the archive processing. One script sets the access attribute to read-only, and the other script duplicates the volume group. The ID "A2" in the action management table T6 permits writing only once and is known as so-called WORM (Write Once Read Many). A script for setting the access attribute to read-only is associated with this action ID.
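
  The action table and its automatic execution after migration might be sketched as follows; the script bodies are placeholders (simple functions printing what a real script would do), while the action IDs follow the examples above.

```python
def set_read_only(volume_id):
    print(f"setting access attribute of {volume_id} to read-only")   # placeholder script

def replicate_volume(volume_id):
    print(f"creating a copy of {volume_id}")                         # placeholder script

# Action management table T6 (sketch): action ID -> scripts executed in order.
ACTIONS = {
    "A1": [replicate_volume],                  # mirroring
    "A2": [set_read_only],                     # WORM-style write protection
    "A3": [set_read_only, replicate_volume],   # archive processing
}

def run_tier_action(action_id, migrated_volume_ids):
    """Execute every script associated with the destination tier's action."""
    for script in ACTIONS.get(action_id, []):
        for volume_id in migrated_volume_ids:
            script(volume_id)
```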

  FIG. 13 is an explanatory diagram showing the overall data relocation operation in a simplified manner. When performing data relocation, the user logs in to the storage management server 60 via the management client 50, and designates the migration group to be relocated and the storage tier of the placement destination (S1).

  The storage management server 60 selects a migration destination candidate volume for each volume constituting the designated migration group (S2). As will be described in detail later, in the process of selecting the migration destination candidate volume, one of the volumes that can copy the data of the migration source volume is selected from all the volumes belonging to the storage tier designated as the migration destination.

  The selection result of the migration destination candidate volume by the storage management server 60 is presented to the user in the form of, for example, the volume correspondence table T7 (S3). The volume correspondence table T7 can be configured, for example, by associating the logical ID of the migration source volume with the logical ID of the migration destination volume.

  The user confirms the relocation plan (volume correspondence table T7) presented from the storage management server 60 by the web browser 51. When approving the proposal from the storage management server 60 as it is, rearrangement is executed at a predetermined timing (S5). When modifying the proposal from the storage management server 60, the logical ID of the migration destination volume is changed via the web browser 51 (S4).

  FIG. 14 is a flowchart showing processing for selecting a migration destination candidate volume. This process is started, for example, when the user explicitly specifies the migration group to be relocated and the storage tier of the relocation destination (migration destination).

  The storage management server 60 (data relocation management program 632) determines whether or not a migration destination candidate volume has been selected for all migration source volumes (S11). Here, it is determined as "NO", and the process proceeds to S12. The storage management server 60 then refers to the volume attribute management table T2 and, from the volume group belonging to the storage tier designated as the migration destination, extracts volumes whose usage status is "free" and whose essential attributes match those of the migration source volume (S12).

  The essential attribute means an attribute necessary for performing data copy between volumes. If even one of the essential attributes does not match, data copy is impossible. In this embodiment, examples of the essential attributes include a storage capacity and an emulation type. That is, in this embodiment, the migration source volume and the migration destination volume must at least have the same storage capacity and emulation type.

  Next, the storage management server 60 determines the number of volumes detected as free volumes with matching essential attributes (S13). If only one free volume with the required attribute is detected, that volume is selected as a migration destination candidate volume (S14). If no free volume matching the required attributes is found, data relocation cannot be performed, so error processing is performed and the user is notified (S16).

  When a plurality of free volumes with matching essential attributes are detected, the storage management server 60 selects, as the migration destination candidate volume, the volume with the highest degree of matching of the other attributes (non-essential attributes) (S15). For example, a volume that matches on more of the other attribute items, such as RAID level, disk type, and storage device model, is selected as the migration destination candidate volume. Note that the degree of coincidence may also be calculated by weighting the non-essential attribute items relative to one another. Further, when there are a plurality of volumes with the same degree of coincidence of non-essential attributes, for example, the volume with the smallest logical ID can be selected.

  The above processing is executed for each of all the volumes constituting the migration target migration group. When the migration destination candidate volume corresponding to each migration source volume is selected (S11: YES), the storage management server 60 generates and presents the volume correspondence table T7 to the user (S17).

  The user confirms the volume correspondence table T7 presented from the storage management server 60 and decides whether to approve or modify it. If the user's approval is obtained (S18: YES), this process ends. When the user desires to change (S18: NO), the user manually resets the migration destination candidate volume via the web browser 51 (S19). Then, when the correction is completed by the user, this process ends.
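
  Putting steps S11 to S19 together, a simplified selection loop might look like the sketch below. The attribute names, the scoring rule, and the rule that a candidate is not reused for two source volumes are assumptions consistent with the flow described above; volumes are again plain dictionaries.

```python
def select_destination_candidates(migration_group, tier_volumes):
    """For each migration source volume, pick one free candidate in the designated
    tier whose essential attributes match, preferring the closest non-essential match."""
    non_essential = ("raid_level", "disk_type", "model")
    correspondence = {}          # source logical ID -> destination logical ID (table T7)
    used = set()
    for src in migration_group:
        candidates = [v for v in tier_volumes
                      if v["usage"] == "free"
                      and v["logical_id"] not in used
                      and v["emulation_type"] == src["emulation_type"]   # essential
                      and v["capacity_gb"] >= src["capacity_gb"]]        # essential
        if not candidates:
            raise RuntimeError(f"no free matching volume for {src['logical_id']}")  # error case (S16)
        def score(v):
            return sum(v[attr] == src[attr] for attr in non_essential)   # S15
        best = min(candidates, key=lambda v: (-score(v), v["logical_id"]))  # tie-break: smaller logical ID
        correspondence[src["logical_id"]] = best["logical_id"]
        used.add(best["logical_id"])
    return correspondence        # presented to the user for approval or correction (S17-S19)
```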

  FIG. 15 is a flowchart illustrating an overview of the rearrangement execution process. The storage management server 60 (data relocation management program 632) detects an action associated with the storage tier designated as the migration destination by referring to the storage tier management table T3 (S21).

  Next, the storage management server 60 determines whether or not relocation has been completed for all migration source volumes (S22). In the first pass, "NO" is determined, and the process proceeds to the next step S23. The data stored in the migration source volume is then copied to the migration destination volume corresponding to it (S23), and the access path is switched from the migration source volume to the migration destination volume (S24). This path switching is performed in the same manner as the path switching described above. As a result, the host 10 can access the desired data without being aware of the data relocation.

  The storage management server 60 determines whether or not the data migration from the migration source volume to the migration destination volume has been completed normally (S25). If the data migration has not been completed normally (S25: NO), error processing is performed and this processing ends.

  When the data migration is normally completed (S25: YES), it is checked whether there is an action associated with the migration destination storage tier (S27). If an action is set in the destination storage tier (S27: YES), the storage management server 60 refers to the action management table T6, executes a predetermined script (S28), and returns to S22. If no action is set in the destination storage tier (S27: NO), the process returns to S22 without doing anything.

  As described above, for all the volumes belonging to the migration target migration group, the data stored therein is copied to the migration destination volume, and the access path is switched. When data migration is completed for all migration source volumes (S22: YES), this process ends.
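
  The execution flow of FIG. 15 can be summarized in the following sketch. The copy, path-switch, and action-execution calls are passed in as placeholder functions, since the actual operations are performed by the storage apparatuses; this is an illustration under assumed interfaces, not the implementation.

```python
def execute_relocation(correspondence, tier_action_id,
                       copy_volume, switch_access_path, run_action):
    """Copy each source volume to its destination, switch the host-visible path,
    and run any action bound to the destination tier (cf. S21 to S28)."""
    for src_id, dst_id in correspondence.items():
        if not copy_volume(src_id, dst_id):        # S23: copy the data
            raise RuntimeError(f"migration of {src_id} did not complete normally")
        switch_access_path(src_id, dst_id)         # S24: host keeps the same virtual ID
        if tier_action_id is not None:             # S27: action set on destination tier?
            run_action(tier_action_id, dst_id)     # S28: execute the associated scripts
```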

  FIG. 16 is an explanatory diagram of a specific example of the volume correspondence table T7. The selection result of the migration destination candidate volumes by the storage management server 60 can be displayed, for example, by arranging the migration source volumes and the migration destination volumes in two rows, upper and lower, as shown in the figure. For each migration source volume and migration destination volume, attributes such as the logical ID, the number of the RAID group to which the volume belongs, the RAID level, the emulation type, and the storage capacity can also be displayed.

  The user can decide whether or not to execute the data rearrangement by checking this screen. When changing a destination volume individually, the user operates the modify button B1. When this button B1 is operated, a transition is made to the individual correction screen shown in FIG. 17.

  In the correction screen shown in FIG. 17, the logical ID of the migration source volume (source volume), the emulation type of the migration source volume, and the storage capacity can be displayed at the top.

  In the center of the screen, the logical ID of the volume, the RAID group, the RAID level, the number of the storage device to which the volume currently belongs, the physical ID of the real volume, and the like can be displayed.

  At the bottom of the screen, a list of all candidate volumes in the designated tier whose essential attributes match those of the migration source volume can be displayed. The user can select any one of the volumes displayed in this list. For example, if the initial selection by the storage management server 60 is concentrated on volumes belonging to a specific RAID group, or if sequentially accessed data and randomly accessed data are placed in the same RAID group, the responsiveness of that RAID group decreases. The user can therefore individually correct the migration destination candidate volumes so that data is not concentrated on a specific RAID group.

  Since this embodiment is configured as described above, the following effects can be obtained. In this embodiment, a designated migration source volume is relocated to a designated storage tier among a plurality of storage tiers, each generated based on a preset policy and volume attribute information. Therefore, the user can freely define storage tiers according to a desired policy and rearrange volumes between the storage tiers, which improves the usability of the storage system. In particular, in a complex storage system in which a plurality of storage devices are mixed, the user can rearrange data by intuitive operation according to the policy set by the user, without having to consider the characteristics of each volume in detail.

  In this embodiment, data can be rearranged in groups of a plurality of volumes. Therefore, in combination with the above-described configuration capable of data rearrangement between storage tiers, it is possible to further improve user convenience.

  In this embodiment, a predetermined action can be associated in advance with the storage tier of the migration destination, and the predetermined action can be executed after the data rearrangement is completed. This makes it possible to automatically execute an additional service associated with data relocation, prevent a user from forgetting to operate, and improve usability.

  In this embodiment, matching of required attributes is a precondition, and a volume with a high matching degree of attributes other than the required attributes is selected as a migration destination candidate volume. Therefore, it is possible to select an appropriate volume for data relocation.
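
As a rough illustration of this selection rule (required attributes must match exactly; remaining attributes are ranked by degree of match), the following Python sketch uses invented attribute names and a simple count-based score; the patent does not prescribe this particular scoring.

# Hypothetical sketch: pick a migration destination candidate for one source volume.
REQUIRED = ("capacity", "emulation_type")                 # must match exactly
OPTIONAL = ("raid_level", "disk_type", "model")           # ranked by degree of match

def select_candidate(source, tier_volumes):
    candidates = [v for v in tier_volumes
                  if v["state"] == "free"
                  and all(v[a] == source[a] for a in REQUIRED)]
    if not candidates:
        return None                                       # triggers error handling (or volume creation)
    # score = number of optional attributes that also match the source volume
    return max(candidates, key=lambda v: sum(v[a] == source[a] for a in OPTIONAL))

source = {"capacity": 10, "emulation_type": "OPEN-V", "raid_level": "RAID5",
          "disk_type": "FC", "model": "SS1", "state": "in use"}
pool = [{"capacity": 10, "emulation_type": "OPEN-V", "raid_level": "RAID1",
         "disk_type": "FC", "model": "SS2", "state": "free"},
        {"capacity": 10, "emulation_type": "OPEN-V", "raid_level": "RAID5",
         "disk_type": "FC", "model": "SS1", "state": "free"}]
print(select_candidate(source, pool)["model"])            # -> SS1 (highest degree of match)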

  A second embodiment of the present invention will be described based on FIG. The following embodiment including this embodiment corresponds to a modification of the first embodiment. The feature of this embodiment is that the volume virtualization apparatus 20 and the storage management server 60 described in the first embodiment are integrated into one volume virtualization apparatus 70.

  The volume virtualization apparatus 70 of this embodiment includes, for example, a volume virtualization unit 71, a data relocation unit 72, and a volume database 73. The volume virtualization unit 71 realizes the same function as the volume virtualization apparatus 20 of the first embodiment. The data relocation unit 72 realizes the same function as the data relocation management program 632 of the storage management server 60 in the first embodiment. The volume database 73 stores various tables similar to the volume database 640 of the first embodiment.

  A third embodiment of the present invention will be described with reference to FIG. The feature of this embodiment is that a dynamic attribute is added to the volume attribute management table T2 and a storage tier can be defined in consideration of the dynamic attribute.

  In the volume attribute management table T2 of this embodiment, as shown at the right end, the response time of data input / output (I / O response time) is also managed. The I / O response time can be collected and updated by the storage management server 60 from each of the storage apparatuses 20, 30, and 40, for example, regularly or irregularly. In FIG. 19, the I / O response time is displayed instead of the storage device model for the sake of space, but the storage device model can also be managed as one of the volume attributes. Here, the I / O response time is obtained by issuing a test I / O and measuring the time from the issuance of this I / O to the response. The I / O response time can be included in the conditional expression shown in FIG.

  As described above, in this embodiment, dynamic attributes are managed together with static attributes such as the RAID level and the storage capacity, so storage tiers can be defined in consideration of both static and dynamic attributes. For example, a storage tier that requires faster response can be composed of volumes that are set on high-speed disks (FC disks) and that reside in the storage devices with the shortest I/O response times.
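
A minimal sketch of such a tier-defining conditional expression, assuming invented attribute names and an arbitrary 10 ms threshold, might look like this in Python:

# Illustrative only: a tier condition mixing static and dynamic attributes.
def high_response_tier(volume):
    """Hypothetical conditional expression for a 'fast response' storage tier:
    FC disk (static attribute) and I/O response time under 10 ms (dynamic attribute)."""
    return volume["disk_type"] == "FC" and volume["io_response_ms"] < 10.0

volumes = [
    {"id": "001", "disk_type": "FC",   "io_response_ms": 4.2},
    {"id": "002", "disk_type": "SATA", "io_response_ms": 18.5},
]
print([v["id"] for v in volumes if high_response_tier(v)])   # -> ['001']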

  A fourth embodiment of the present invention will be described with reference to FIGS. A feature of the present embodiment is that, in accordance with a storage system configuration change (discarding of a storage device), the data stored in the storage device related to the configuration change is automatically relocated to appropriate free volumes.

  FIG. 20 is an explanatory diagram of an overall overview of the storage system according to the present embodiment. In this storage system, a fourth storage device 80 is newly added to the configuration of the first embodiment. The fourth storage device 80 can be configured in the same manner as the storage devices 30 and 40, for example. The fourth storage device 80 is added for convenience of explanation and is not an essential element of the present invention; the present invention only needs to include a plurality of storage devices.

  In this embodiment, as an example, a case where the first storage device 30 is discarded will be described. For example, when the service life of the first storage device 30 has expired, the first storage device 30 is scheduled to be discarded. The case where the data group stored in the storage device 30 to be discarded is migrated to the other storage devices 40 and 80 (or to the volume virtualization device 20 as the third storage device; the same applies hereinafter) will be described with reference to FIG. 21.

As shown in FIG. 21, the user first gives a conditional expression and searches the volume attribute management table T2 to detect the volume group in the "in use" state on the storage device 30 to be discarded, and defines a migration group consisting of these volumes. In FIG. 21, it is assumed that "device number 1" is set for the storage device 30 to be discarded. In the drawing, the migration group name "evacuation data volume" is given to the "in use" volumes existing in the storage device 30 to be discarded.

  Next, the storage tier is defined based on the condition that “the device number is different from the device number of the storage device to be discarded (device number ≠ 1)”. In the figure, a storage tier name “migration destination tier” is given to a storage tier constituted by storage devices 40 and 80 other than the storage device 30 to be discarded. Then, the user instructs relocation of the migration group “evacuation data volume” using the storage tier of “migration destination tier” as the migration destination.
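
The two conditional expressions used in this scenario can be illustrated with the following Python sketch; the volume records and field names are invented for the example, and device number 1 is the device to be discarded, as assumed above.

# Hypothetical sketch of the two conditional expressions used when retiring device number 1.
volumes = [
    {"id": "101", "device_no": 1, "state": "in use"},
    {"id": "102", "device_no": 1, "state": "free"},
    {"id": "201", "device_no": 2, "state": "free"},
    {"id": "301", "device_no": 3, "state": "free"},
]

# Migration group "evacuation data volume": in-use volumes on the device to be discarded.
migration_group = [v for v in volumes if v["device_no"] == 1 and v["state"] == "in use"]

# Storage tier "migration destination tier": any device other than the one being discarded.
destination_tier = [v for v in volumes if v["device_no"] != 1]

print([v["id"] for v in migration_group])     # -> ['101']
print([v["id"] for v in destination_tier])    # -> ['201', '301']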

  Hereinafter, as described in the first embodiment, the data relocation management program 632 uses the storage tier specified as the migration destination for each volume included in the migration group “evacuation data volume” designated as the migration source. Appropriate volumes are selected from the “migration destination hierarchy” and presented to the user.

  As described above, a volume whose usage state is "free", whose required attributes match those of the migration source volume, and whose attributes other than the required attributes have a high degree of matching is selected as an appropriate migration destination candidate volume and presented to the user. If the user approves the volume association proposed by the data relocation management program 632, the relocation process is executed, and all the data on the discard target storage device 30 is relocated to appropriate volumes in the other storage devices 40 and 80.

  In addition, when replacing a storage device, or when migrating a part of existing data to a newly installed storage device in consideration of performance balance, a storage tier "newly added tier" is defined under the condition "device number = device number of the newly installed storage device", and this storage tier is designated as the migration destination to perform data relocation. Thus, as is clear from this embodiment, according to the present invention, storage tiers can be freely defined by the user according to the configuration and policy of the storage system, and related volumes can be collectively designated as a migration group so that data relocation is performed in units of groups.

  The present invention is not limited to the above-described embodiments. A person skilled in the art can make various additions and changes within the scope of the present invention. For example, a migration group does not necessarily have to be composed of mutually related volumes, and arbitrary volumes can be grouped as a movement target. Further, a storage tier can be set for each storage device, such as a tier of the first storage device, a tier of the second storage device, and a tier of the first and second storage devices.

  A fifth embodiment of the present invention will be described with reference to FIGS. The feature of this embodiment is that, if there is no appropriate free volume in the storage tier specified as the placement destination to serve as the migration destination of a volume, a volume that meets the migration destination conditions is automatically created, and data is relocated using the created volume as the migration destination candidate.

  FIG. 22 is a flowchart showing processing for selecting a migration destination candidate volume in the present embodiment. Many steps in FIG. 22 are the same as the steps in the migration destination candidate selection process shown in FIG. 14 of the first embodiment. However, the processing of the storage management server 60 differs when the step (S13) of determining the number of free volumes detected with matching essential attributes finds no such free volume. In this case, in the present embodiment, volume creation processing (S31) is performed, and an attempt is made to create a volume that belongs to the relocation destination storage tier and whose essential attributes match those of the migration source volume.

  Next, the storage management server 60 determines whether or not creation of a volume having the same essential attribute is successful (S32), and if successful, selects the created volume as a migration destination candidate volume (S33). The created volume is registered in the volume attribute management table T2 (S34). If it could not be created, data relocation cannot be performed, so error processing is performed and the user is notified (S16).
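
The way steps S13 and S31 to S34 fit together can be sketched as follows; create_volume stands in for the volume creation processing detailed next, and all other names are hypothetical.

# Hypothetical sketch of the fallback path (S13 -> S31-S34); names are invented.
def choose_or_create_candidate(source, matching_free_volumes, create_volume, attribute_table):
    if matching_free_volumes:                      # S13: at least one suitable free volume exists
        return matching_free_volumes[0]            # ordinary candidate selection continues
    new_volume = create_volume(source)             # S31: create a volume in the destination tier
    if new_volume is None:                         # S32: creation failed
        raise RuntimeError("relocation impossible: no destination volume")  # error processing (S16)
    attribute_table.append(new_volume)             # S34: register it in the volume attribute table
    return new_volume                              # S33: use the created volume as the candidate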

FIG. 23 is a flowchart showing details of the volume creation processing (S31) in FIG. 22.
In the volume creation process shown in FIG. 23, the storage management server 60 first selects one storage device from the volume virtualization device 20 and the storage device group connected to the volume virtualization device 20 (S41). Next, it is determined whether or not the selected storage device satisfies the storage device requirements regarding the volume to be created (S42).

  Here, the storage device requirement refers to the conditions (if any) related to the attributes of the storage device, such as the model name and device number of the storage device, among the various conditions that define the storage tier specified as the placement destination. When the selected storage device satisfies the storage device requirement, the storage management server 60 obtains a set S of all VDEVs in the selected storage device that satisfy the VDEV requirement (S43). Next, one VDEV is selected from the set S (S44).

  Here, the VDEV requirement refers to the conditions (if any) related to the attributes of the VDEV, such as the disk type and RAID level, among the conditions that define the storage tier specified as the placement destination, together with the emulation type of the migration source volume.

  Next, the storage management server 60 determines whether or not the unallocated free capacity in the selected VDEV is equal to or larger than the capacity (Q1) of the migration source volume (S45). If the free capacity is equal to or larger than the capacity of the migration source volume, a new volume (LDEV) having the same capacity as the migration source volume is created inside the VDEV (S46), and the successful volume creation is reported to the caller.

  If it is determined in step S45 that there is no free capacity equal to or larger than the capacity of the migration source volume in the selected VDEV, a migration destination candidate volume cannot be created in that VDEV, so the storage management server 60 determines whether there are any other unexamined VDEVs in the set S (S47).

  If there are other VDEVs, the next VDEV is selected (S48), and the process returns to step S44. If there are no more unexamined VDEVs in the set S, or if it is determined in step S42 that the selected storage device does not satisfy the storage device requirements, a migration destination candidate volume cannot be created in that storage device, so the storage management server 60 determines whether there are other unexamined storage devices (S49). If there are other storage devices, the next storage device is selected (S50), and the process returns to step S42. If there is no storage device yet to be examined, the volume creation failure is reported to the caller.
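
A condensed Python sketch of this single-VDEV creation loop (S41 to S50) follows; the device and VDEV records, and the requirement predicates passed in, are invented placeholders rather than the patent's actual data structures.

# Minimal sketch of the single-VDEV creation loop (S41-S50); data structures are invented.
def create_volume_single_vdev(capacity_q1, storage_devices,
                              device_requirement, vdev_requirement):
    """capacity_q1: capacity of the migration source volume.
    storage_devices: list of dicts, each with a 'vdevs' list of {'free': ...} entries."""
    for device in storage_devices:                        # S41 / S49-S50: examine each device
        if not device_requirement(device):                # S42: device-level conditions
            continue
        vdev_set = [v for v in device["vdevs"] if vdev_requirement(v)]   # S43
        for vdev in vdev_set:                             # S44 / S47-S48: examine each VDEV
            if vdev["free"] >= capacity_q1:               # S45: enough unallocated capacity?
                vdev["free"] -= capacity_q1               # S46: carve out a new LDEV
                return {"vdev": vdev, "capacity": capacity_q1}
    return None                                           # creation failure reported to the caller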

  As described above, in this embodiment, in addition to selecting an appropriate migration destination volume from the group of free volumes that existed at the time of the data relocation instruction, an appropriate migration destination volume can be created from unallocated storage capacity when no appropriate free volume exists, so more flexible data relocation processing can be performed.

  After the relocation process is completed, the migration source volume that holds the pre-migration data may be returned to a reusable state by changing its usage status in the volume management table to "free", or it may be deleted and returned to part of the free capacity of the storage device.

  FIG. 24 is a flowchart showing another example of the volume creation process (S31) in FIG. 22. This makes it possible to create a migration destination volume (LDEV) from the free areas of a plurality of VDEVs.

  In the volume creation process shown in FIG. 24, the storage management server 60 first selects one storage device from the volume virtualization device 20 and the storage device group connected to the volume virtualization device 20 (S41). Next, it is determined whether or not the selected storage device satisfies the storage device requirements regarding the volume to be created (S42).

  Here, the storage device requirement refers to the conditions (if any) related to the attributes of the storage device, such as the model name and device number of the storage device, among the various conditions that define the storage tier specified as the placement destination. When the selected storage device satisfies the storage device requirement, the storage management server 60 obtains a set S of all VDEVs in the selected storage device that satisfy the VDEV requirement (S43). Next, one VDEV is selected from the set S (S44).

  Here, the VDEV requirement refers to the conditions (if any) related to the attributes of the VDEV, such as the disk type and RAID level, among the conditions that define the storage tier specified as the placement destination, together with the emulation type of the migration source volume. Next, the storage management server 60 determines whether or not the unallocated free capacity in the selected VDEV is equal to or larger than the capacity (Q1) of the migration source volume (S45). If the free capacity is equal to or larger than the capacity of the migration source volume, a new volume (LDEV) having the same capacity as the migration source volume is created inside the VDEV (S46), and the successful volume creation is reported to the caller.

  If there is no free capacity equal to or larger than the capacity of the migration source volume in step S45, the free capacity (Q2) in that VDEV is secured (S81), and the difference (Q3) between the capacity (Q1) of the migration source volume and the secured free capacity (Q2) is obtained (S82). At this time, a capacity equal to or larger than that of the migration source volume has not yet been secured, so the storage management server 60 determines whether there is another VDEV in the set S (S83). If there is, the next VDEV is selected, and it is determined whether or not the unallocated free capacity in that VDEV is equal to or larger than the difference capacity (Q3) (S84). When the free capacity is equal to or larger than the difference capacity (Q3), a volume (LDEV) having the same capacity as the migration source volume is newly created using the free capacity in that VDEV together with the free capacity secured from the previous VDEVs (S85), and the successful volume creation is reported to the caller.

  On the other hand, if the free capacity of the VDEV is not equal to or larger than the difference capacity (Q3) in step S84, the free capacity (Q2) of that VDEV is secured (S86), the remaining difference capacity (Q3) is obtained (S87), and the process returns to step S83.

  If there is no other VDEV in the set S in step S83, the necessary free capacity cannot be secured in the storage device, so the storage management server 60 releases the free capacity that has been secured so far (S88) and determines whether there is any other unexamined storage device (S49). If it is determined in step S42 that the selected storage device does not satisfy the storage device requirements, the process likewise proceeds to step S49. If there are other storage devices in step S49, the next storage device is selected (S50), and the process returns to step S42. If there is no storage device yet to be examined, the volume creation failure is reported to the caller.

As described above, when the capacity of the migration source volume cannot be secured from one VDEV, a volume (LDEV) serving as a migration destination candidate is generated from a plurality of VDEVs having free capacity.
In FIGS. 23 and 24, a storage device is selected first. However, since the volume virtualization device collectively manages all VDEVs in the connected storage devices, the configuration may be such that the selection of a storage device in step S41 is omitted and the VDEVs managed by the volume virtualization device are selected one after another.
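
The multi-VDEV variant of FIG. 24 (steps S81 to S88) can be sketched as follows, again with invented data structures; here reservations are only recorded in a local list, so a failed attempt needs no explicit release step.

# Hypothetical sketch of the multi-VDEV variant (S81-S88): the destination volume's
# capacity may be gathered from the free areas of several VDEVs of one storage device.
def create_volume_multi_vdev(capacity_q1, vdev_set):
    """vdev_set: VDEVs of one storage device that already satisfy the VDEV requirement."""
    reserved, remaining = [], capacity_q1
    for vdev in vdev_set:
        take = min(vdev["free"], remaining)        # S81/S86: secure free capacity Q2 from this VDEV
        if take:
            reserved.append((vdev, take))
            remaining -= take                      # S82/S87: remaining difference capacity Q3
        if remaining == 0:                         # enough capacity has been gathered
            for v, amount in reserved:
                v["free"] -= amount                # S85: create the LDEV across the reserved areas
            return {"extents": reserved, "capacity": capacity_q1}
    return None   # S83/S88: not enough capacity in this device; recorded reservations are dropped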

  A sixth embodiment of the present invention will be described with reference to FIGS. 25 to 30. The feature of this embodiment is that, when replica creation is included in the action associated with the storage tier specified as the relocation destination, any storage tier, including the relocation destination itself, can be specified as the replica creation destination.

  FIG. 25 is a diagram showing the association between storage tiers and actions in the present embodiment. In this embodiment, when the creation of a replica is instructed in a script, the storage tier that is the creation destination of the replica is explicitly specified by its name. As shown in FIG. 25, the storage tier that is the creation destination of the replica may be the same as or different from the storage tier itself designated as the relocation destination.

  FIG. 26 is a diagram showing the configuration of the volume attribute management table T2 in this embodiment. In the volume attribute management table of this embodiment, the volume usage state takes not the two states "in use" and "free" but the four states "source", "replica", "reserve", and "free". In addition, "pair" is added as a new attribute.

  The usage state “source” indicates that the volume holds user data referenced from the host. The usage state “replica” indicates that the volume holds a replica which is a replica of user data. The usage status “reserve” indicates that the volume is reserved for future use as a replica. The use state “free” indicates that the volume is not in any of the above three states and can be newly assigned as a data transfer destination or copy destination.

  When the volume usage state is any one of "source", "replica", and "reserve", the attribute "pair" of the volume can have a value. When the usage state of the volume is "source" and the volume has a replica, the value of the attribute "pair" indicates the logical ID of the replica volume to be paired (if the volume does not have a replica, the value of the attribute "pair" is empty). When the usage state of the volume is "replica", the value of the attribute "pair" indicates the logical ID of the paired source volume. When the usage state of the volume is "reserve", the logical ID of the source volume that will be paired with the volume in the future is set in the attribute "pair".
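
A possible data model for these four usage states and the "pair" attribute is sketched below in Python; the class and field names are invented for illustration and do not reflect the actual table layout.

# Illustrative data model for the four usage states and the "pair" attribute (names invented).
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class UsageState(Enum):
    SOURCE = "source"     # holds user data referenced from the host
    REPLICA = "replica"   # holds a copy of user data
    RESERVE = "reserve"   # reserved for future use as a replica
    FREE = "free"         # available as a migration or copy destination

@dataclass
class VolumeAttributes:
    logical_id: str
    state: UsageState
    pair: Optional[str] = None   # logical ID of the paired volume, if any

vols = {
    "001": VolumeAttributes("001", UsageState.SOURCE, pair="007"),
    "007": VolumeAttributes("007", UsageState.REPLICA, pair="001"),
}
print(vols["001"].pair)   # -> 007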

  FIG. 27 is a flowchart showing an outline of the data relocation process in this embodiment. In the data relocation process of the present embodiment, after the user instructs relocation of a migration group to a predetermined tier (S1), the migration destination candidate selection process (S2), the presentation of the volume correspondence table (S3), and the correction of migration destination candidates (S4), the replica volume reserve process (S6) is executed before the relocation execution process (S5).

  FIG. 28 is a flowchart showing the processing contents of the replica volume reserve processing (S6). In the replica volume reserve processing of the present embodiment, the storage management server 60 first refers to the storage tier management table T3 and the action management table T6, and obtains an action associated with the storage tier designated as the relocation destination (S61).

  Next, it is determined whether replica creation is instructed in the acquired action (S62). If replica creation is not instructed, or if no action is associated with the relocation destination storage tier, reservation of the replica volume is unnecessary, and the process ends.

  When replica creation is instructed in the acquired action, the storage management server 60 determines whether or not a replica volume has already been reserved for all the migration source volumes (S63).

  Since no volume is reserved at first, the process proceeds to the next step, and one replica volume is selected corresponding to one migration source volume (S64). Here, a selectable replica volume is a volume that belongs to the storage tier specified as the replica creation destination, whose usage state is "free", that has not been selected as a migration destination candidate, and whose required attributes match those of the source volume.

  Next, the storage management server 60 determines whether such a volume has been selected (S65). If it cannot be selected, it means that a volume necessary for replica creation could not be secured, so error processing is performed (S67), and the user is notified. Here, the processing from S31 to S34 in FIG. 22 may be executed to create a volume necessary for replica creation.

  If the volume can be selected, the volume is reserved (S66), the process returns to step S63, and the same processing is repeated until a replica volume has been reserved for every migration source volume. Here, "reserving a volume" means changing the usage state of the volume to "reserve" in the volume attribute management table T2 and setting the logical ID of the corresponding migration source volume in the attribute "pair" of the volume.
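
Putting steps S61 to S67 together, the reserve processing for one migration group can be sketched as follows; the volume records, the required_match predicate, and the error handling are assumptions made for this example.

# Hypothetical sketch of the replica volume reserve process (S63-S67).
def reserve_replica_volumes(source_volumes, replica_tier_volumes,
                            migration_candidates, required_match):
    """Reserve one replica volume per migration source volume; raises if any
    source volume cannot be matched (error processing, S67)."""
    for source in source_volumes:                                    # S63: all sources reserved?
        candidate = next(
            (v for v in replica_tier_volumes
             if v["state"] == "free"                                 # usage state is "free"
             and v not in migration_candidates                       # not already a migration destination
             and required_match(source, v)),                         # required attributes match
            None)                                                    # S64-S65
        if candidate is None:
            raise RuntimeError(f"no replica volume available for {source['id']}")  # S67
        candidate["state"] = "reserve"                               # S66: mark as reserved
        candidate["pair"] = source["id"]                             #      and point at the source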

  FIG. 29 is a flowchart showing details of the action script execution process in the present embodiment. This process corresponds to step S28 of the relocation execution process shown in FIG. 15; after the storage contents of one migration source volume have been copied to the corresponding migration destination volume and the access path (logical ID) has been switched, the action associated with the relocation destination storage tier is executed for the migration destination volume.

  In the action script execution process shown in FIG. 29, the storage management server 60 first determines whether there is an unexecuted action for the target migration destination volume (S71). If there is an unexecuted action, one unexecuted action is extracted (S72), and its type is determined (S73).

  When the action type is replica creation, the storage management server 60 refers to the volume management table T2 and obtains a reserved volume reserved corresponding to the migration destination volume (S74). Here, the reserved volume that is reserved corresponding to the migration destination volume is a volume that holds the logical ID of the migration destination volume as the value of the attribute “pair”.

  Next, the storage management server 60 instructs the volume virtualization apparatus 20 to set a pair relationship in which the migration destination volume is primary and the reserved volume is secondary (S75). At the same time, the logical ID of the reserved volume is set in the attribute "pair" of the migration destination volume in the volume attribute management table T2, and the usage state of the reserved volume is changed to "replica".

  Next, the storage management server 60 instructs the volume virtualization apparatus 20 to synchronize between the volumes for which the pair relationship has been set in step S75 (S76). As a result, data is copied from the migration destination volume (primary) to the reserve volume (secondary), and the contents of both are the same. Thereafter, when the contents of the primary volume are rewritten, the rewritten data is copied to the secondary volume, and both contents are always kept the same.

After performing the above processing, the storage management server 60 returns to step S71 and determines again whether there is an unexecuted action. If it is determined in step S73 that the action type is other than replica creation, a predetermined process corresponding to the action type is executed (S77), and the process returns to step S71.
If it is determined in step S71 that there is no action not yet executed, the storage management server 60 ends the action script execution process.
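
A condensed sketch of the replica-creation branch of this action execution (S71 to S77) follows; set_pair and synchronize stand in for the requests sent to the volume virtualization apparatus 20 and are not its real interface.

# Hypothetical sketch of the action execution for one migration destination volume (S71-S77).
def run_actions(destination, actions, attribute_table, set_pair, synchronize):
    for action in actions:                                          # S71/S72: unexecuted actions
        if action["type"] == "create_replica":                      # S73: action type?
            reserved = next(v for v in attribute_table.values()     # S74: the reserved volume whose
                            if v.get("pair") == destination["id"])  #      "pair" names this volume
            set_pair(primary=destination, secondary=reserved)       # S75: set the pair relationship
            destination["pair"] = reserved["id"]                    #      record the pairing
            reserved["state"] = "replica"                           #      reserve -> replica
            synchronize(destination, reserved)                      # S76: initial copy, then kept in sync
        else:
            pass                                                    # S77: other action types omitted here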

  FIG. 30 is a diagram illustrating an example of the state of the volume attribute management table T2 at each stage of the data rearrangement process according to the present embodiment. FIG. 30A shows an initial state, in which three volumes having the same essential attributes (capacity and emulation type) are registered. Among these, the volume with the logical ID “001” holds user data, and the volumes with the logical IDs “006” and “007” are in an empty state. In this state, assume that the volume with logical ID “001” is the migration source, the volume with logical ID “006” is the migration destination, and data relocation with replica creation is instructed.

  FIG. 30B shows a state in which the volume with the logical ID “007” is selected as the reserve volume for replica in the replica volume reserve process (S6). The usage state of the volume “007” is “reserve”, and the attribute “pair” is set to “001”, which is the logical ID of the migration source volume.

  FIG. 30 (c) shows the state after the data relocation process is completed. After the data is copied from the migration source volume to the migration destination volume, the access paths (logical IDs) of the two are exchanged. At this point, the logical ID of the migration destination volume is "001". The volume with the logical ID "007" holds the replica, and the source and replica volumes refer to each other via the attribute "pair".

  As described above, in this embodiment, when an arbitrary storage tier is designated as the replica creation destination in an action associated with a storage tier, a replica of the moved data can be automatically created in another storage tier (or in the same storage tier as the relocation destination) as part of the data relocation processing. This makes it possible to define flexible and highly useful storage tiers, for example by describing the creation of a replica in a low-cost tier as part of the definition of a high-reliability tier.

Brief Description of the Drawings
FIG. 1 is an explanatory diagram showing the concept of an embodiment of the present invention.
FIG. 2 is a block diagram showing an overall outline of the storage system.
FIG. 3 is an explanatory diagram schematically showing how volumes distributed in the storage system are virtualized and managed.
FIG. 4 is a block diagram showing the hardware configuration of the storage system.
FIG. 5 is an explanatory diagram showing the storage structure of the storage system.
FIG. 6 is a block diagram showing the configuration of the storage management server.
FIG. 7 is an explanatory diagram showing the configuration of the mapping table.
FIG. 8 is an explanatory diagram showing the configuration of the volume attribute management table.
FIG. 9 is an explanatory diagram showing the configuration of the storage tier management table.
FIG. 10 is an explanatory diagram showing the configuration of the corresponding host management table.
FIG. 11 is an explanatory diagram showing the configuration of the migration group management table.
FIG. 12 is an explanatory diagram showing the configuration of the action management table.
FIG. 13 is an explanatory diagram showing an outline of the overall operation of data relocation.
FIG. 14 is a flowchart showing the migration destination candidate selection process.
FIG. 15 is a flowchart showing the relocation execution process.
FIG. 16 is an explanatory diagram showing an example of a screen presenting a data relocation plan.
FIG. 17 is an explanatory diagram showing an example of a screen for correcting the presented plan.
FIG. 18 is a simplified block diagram showing the overall configuration of a storage system according to the second embodiment of the present invention.
FIG. 19 is an explanatory diagram showing the configuration of the volume attribute management table used in a storage system according to the third embodiment of the present invention.
FIG. 20 is a block diagram showing an overall outline of a storage system according to the fourth embodiment of the present invention.
FIG. 21 is an explanatory diagram schematically showing the relationship between the migration group management table and the storage tier management table.
FIG. 22 is a flowchart showing the migration destination candidate selection process according to the fifth embodiment of the present invention.
FIG. 23 is a flowchart showing the volume creation process according to the fifth embodiment of the present invention.
FIG. 24 is another flowchart showing the volume creation process according to the fifth embodiment of the present invention.
FIG. 25 is an explanatory diagram schematically showing the relationship between the storage tier management table and the action management table according to the sixth embodiment of the present invention.
FIG. 26 is an explanatory diagram showing the configuration of the volume attribute management table according to the sixth embodiment of the present invention.
FIG. 27 is a flowchart showing the data relocation process according to the sixth embodiment of the present invention.
FIG. 28 is a flowchart showing the replica volume reserve process according to the sixth embodiment of the present invention.
FIG. 29 is a flowchart showing the action script execution process according to the sixth embodiment of the present invention.
FIG. 30 is a diagram schematically showing an example of state transitions of the volume attribute management table according to the sixth embodiment of the present invention.

Explanation of symbols

  1-3 ... Storage tier, 10 ... Host, 11 ... Application program, 12 ... Host bus adapter, 20 ... Storage apparatus (volume virtualization apparatus), 30, 40, 80 ... Storage apparatus, 50 ... Management client, 51 ... Web browser, 60 ... Storage management server, 70 ... Volume virtualization apparatus, 71 ... Volume virtualization unit, 72 ... Data relocation unit, 73 ... Volume database, 210 ... Channel adapter, 211 ... Communication port, 220 ... Disk adapter, 230 ... Cache memory, 240 ... Shared memory, 250, 260 ... Connection control unit, 270 ... Storage unit, 271 ... Disk drive (PDEV), 272 ... Virtual device (VDEV), 273 ... Logical volume (LDEV), 274 ... LU, 275 ... Command device (CMD), 310, 410 ... Controller, 311, 411 ... Communication port, 320, 420 ... Disk drive, 330, 430 ... Logical volume, 610 ... Communication unit, 620 ... Control unit, 630 ... Memory, 631 ... Web server program, 632 ... Data relocation management program (data relocation management unit), 633 ... Database management system, 640 ... Volume database, CN ... Communication network, T1 ... Mapping table, T2 ... Volume attribute management table, T3 ... Storage tier management table, T4 ... Corresponding host management table, T5 ... Migration group management table, T6 ... Action management table, T7 ... Volume correspondence table

Claims (14)

  1. A plurality of storage devices each having a controller and a physical volume composed of a plurality of disk drives connected to the controller;
    a virtualization unit that is connected to the plurality of storage devices and provides, as access destinations from a computer, a plurality of virtual volumes each composed of respective physical volumes in two or more different storage devices of the plurality of storage devices; and
    For each of the plurality of virtual volumes, a management unit that manages the correspondence between the virtual volume and the plurality of storage tiers to which the virtual volume belongs,
    When the management unit receives information for specifying a virtual volume as a migration source and a storage tier as a migration destination,
    Under the control of the management unit, a virtual volume as a migration destination is selected from virtual volumes associated with the designated storage tier as the migration destination,
    The data of the migration source virtual volume is stored in the physical volume associated with the migration destination virtual volume from the storage area in the physical volume associated with the migration source virtual volume. Copied to the storage area in the volume,
    The virtualization unit assigns the identification information of the migration source virtual volume to the migration destination virtual volume instead of the migration source virtual volume,
    When the migration destination virtual volume belongs to a plurality of storage tiers, the storage system executes, under the control of the management unit, predetermined processing corresponding to one storage tier designated as the migration destination among the plurality of storage tiers.
  2. The storage system according to claim 1, wherein
    When the migration destination virtual volume belongs to a plurality of storage tiers, the management unit selects the predetermined process corresponding to the one storage tier designated as the migration destination among the plurality of storage tiers, and executes the selected process.
  3. The storage system according to claim 1, wherein
    When the one storage tier designated as the migration destination is the first tier, the process corresponding to the migration destination storage tier is a process of copying the data of the migration destination virtual volume.
  4. The storage system according to claim 1, wherein
    When the one storage tier designated as the migration destination is the second tier, the process corresponding to the migration destination storage tier is a process of changing the attribute of the migration destination virtual volume to the Read Only attribute.
  5. The storage system according to claim 1, wherein
    The management unit manages a migration group composed of a plurality of virtual volumes, and when the management unit receives information designating the migration group as the information for designating the migration source virtual volume,
    Under the control of the management unit, for each of a plurality of virtual volumes belonging to the designated migration group, the virtual volume of the migration destination is selected from the virtual volumes associated with the designated storage tier that is the migration destination. Volume is selected,
    The data of each of the plurality of virtual volumes belonging to the migration group is copied from the storage area in the physical volume associated with that virtual volume to the storage area in the physical volume associated with the migration destination virtual volume selected as the migration destination of that virtual volume,
    The virtualization unit assigns identification information of each of a plurality of virtual volumes belonging to the migration group to the virtual volume of the migration destination selected as the migration destination of each virtual volume,
    When the migration destination virtual volume belongs to a plurality of storage tiers, the storage system executes, under the control of the management unit, predetermined processing corresponding to one storage tier designated as the migration destination among the plurality of storage tiers.
  6. The storage system according to claim 5, wherein
    The management unit identifies, from among a plurality of virtual volumes, a plurality of virtual volumes having a predetermined relationship with each other, and creates the migration group from the identified plurality of virtual volumes.
  7. The storage system according to claim 5, wherein
    the migration group includes a plurality of virtual volumes storing data used by the same application.
  8. A data movement method in a storage system including: a plurality of storage devices each having a controller and a physical volume composed of a plurality of disk drives connected to the controller; a virtualization unit that is connected to the plurality of storage devices and provides, as access destinations from a computer, a plurality of virtual volumes each composed of respective physical volumes in two or more different storage devices of the plurality of storage devices; and a management unit that manages, for each of the plurality of virtual volumes, the correspondence between the virtual volume and the plurality of storage tiers to which the virtual volume belongs, wherein
    When the management unit receives information for designating a migration source virtual volume and a migration destination storage tier,
    Under the control of the management unit, a virtual volume as a migration destination is selected from virtual volumes associated with the designated storage tier as the migration destination,
    The data of the migration source virtual volume is stored in the physical volume associated with the migration destination virtual volume from the storage area in the physical volume associated with the migration source virtual volume. Copied to the storage area in the volume,
    The virtualization unit assigns the identification information of the migration source virtual volume to the migration destination virtual volume instead of the migration source virtual volume,
    When the migration destination virtual volume belongs to a plurality of storage tiers, predetermined processing corresponding to one storage tier designated as the migration destination among the plurality of storage tiers is executed under the control of the management unit.
  9. The data movement method according to claim 8, wherein
    When the migration destination virtual volume belongs to a plurality of storage tiers, the management unit selects the predetermined process corresponding to the one storage tier designated as the migration destination among the plurality of storage tiers, and executes the selected process.
  10. The data movement method according to claim 8, wherein
    When the one storage tier designated as the migration destination is the first tier, the process corresponding to the migration destination storage tier is a process of copying the data of the migration destination virtual volume.
  11. The data movement method according to claim 8, wherein
    When the one storage tier designated as the migration destination is the second tier, the process corresponding to the migration destination storage tier is a process of changing the attribute of the migration destination virtual volume to the Read Only attribute.
  12. The data movement method according to claim 8, wherein
    The management unit manages a migration group including a plurality of virtual volumes, and when the management unit receives information designating the migration group as the information for designating the migration source virtual volume,
    Under the control of the management unit, for each of a plurality of virtual volumes belonging to the designated migration group, the virtual volume of the migration destination is selected from the virtual volumes associated with the designated storage tier that is the migration destination. Volume is selected,
    The data of each of the plurality of virtual volumes belonging to the migration group is copied from the storage area in the physical volume associated with that virtual volume to the storage area in the physical volume associated with the migration destination virtual volume selected as the migration destination of that virtual volume,
    The virtualization unit assigns identification information of each of a plurality of virtual volumes belonging to the migration group to the virtual volume of the migration destination selected as the migration destination of each virtual volume,
    When the migration destination virtual volume belongs to a plurality of storage tiers, predetermined processing corresponding to one storage tier designated as the migration destination among the plurality of storage tiers is executed under the control of the management unit.
  13. The data movement method according to claim 12, wherein
    The management unit identifies, from among a plurality of virtual volumes, a plurality of virtual volumes having a predetermined relationship with each other, and creates the migration group from the identified plurality of virtual volumes.
  14. The data movement method according to claim 12, wherein
    the migration group includes a plurality of virtual volumes storing data used by the same application.
JP2007280358A 2004-08-30 2007-10-29 Storage system and data relocation control device Expired - Fee Related JP4842909B2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2004250327 2004-08-30
JP2004250327 2004-08-30
JP2007280358A JP4842909B2 (en) 2004-08-30 2007-10-29 Storage system and data relocation control device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2007280358A JP4842909B2 (en) 2004-08-30 2007-10-29 Storage system and data relocation control device

Related Child Applications (1)

Application Number Title Priority Date Filing Date
JP2005245386 Division 2005-08-26

Publications (3)

Publication Number Publication Date
JP2008047156A5 JP2008047156A5 (en) 2008-02-28
JP2008047156A JP2008047156A (en) 2008-02-28
JP4842909B2 true JP4842909B2 (en) 2011-12-21

Family

ID=39180759

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2007280358A Expired - Fee Related JP4842909B2 (en) 2004-08-30 2007-10-29 Storage system and data relocation control device

Country Status (1)

Country Link
JP (1) JP4842909B2 (en)


Also Published As

Publication number Publication date
JP2008047156A (en) 2008-02-28


Legal Events

Date Code Title
2008-08-06 A621 Written request for application examination
2008-10-23 A521 Written amendment
2010-12-07 A131 Notification of reasons for refusal
2011-01-27 A521 Written amendment
TRDD Decision of grant or rejection written
2011-10-04 A01 Written decision to grant a patent or to grant a registration (utility model)
2011-10-06 A61 First payment of annual fees (during grant procedure)
R150 Certificate of patent or registration of utility model
FPAY Renewal fee payment (payment until 2014-10-14; year of fee payment: 3)
LAPS Cancellation because of no payment of annual fees