WO2015087442A1 - Storage system migration scheme and migration method - Google Patents
- Publication number
- WO2015087442A1 (PCT/JP2013/083471)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- storage
- storage node
- physical storage
- storage system
- nodes
- Prior art date
Classifications
- H04L67/1095—Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
- H04L67/1097—Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
- G06F3/061—Improving I/O performance
- G06F3/0605—Improving or facilitating administration, e.g. storage management, by facilitating the interaction with a user or administrator
- G06F3/0607—Improving or facilitating administration, e.g. storage management, by facilitating the process of upgrading existing storage systems, e.g. for improving compatibility between host and storage device
- G06F3/0613—Improving I/O performance in relation to throughput
- G06F3/0647—Migration mechanisms
- G06F3/0659—Command handling arrangements, e.g. command buffers, queues, command scheduling
- G06F3/067—Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
- G06F3/0671—In-line storage system
- G06F3/0683—Plurality of storage devices
- G06F3/0689—Disk arrays, e.g. RAID, JBOD
Definitions
- The present invention relates to a migration scheme and a migration method for a storage system, in a computer system comprising a storage system that stores data accessed by a host computer and a management computer that manages the storage system.
- a storage system connected to a host computer via a network includes, for example, a plurality of magnetic disks as storage devices for storing data.
- This storage system makes a storage area of a plurality of storage devices redundant by a RAID (Redundant Array of Independent Disks) technique, and forms a RAID group (also called a parity group). Then, the storage system provides the host computer with a storage area having a capacity required by the host computer from a part of the RAID group in the form of a logical volume.
- There is a technology that bundles a plurality of physical storage systems and provides them to a host computer as a single virtual storage system (for example, Patent Document 1). This technology makes it possible to manage a plurality of physical storage systems as a single storage system.
- When migrating from one physical storage system (the migration source storage system) to a virtual storage system (the migration destination storage system) consisting of multiple physical storage systems, the arrangement of resources used after migration depends on the availability of resources (ports, caches, volumes, and so on) in the physical storage systems that make up the migration destination storage system. In some cases only the resources of one physical storage system are used; in other cases resources distributed across different physical storage systems are used.
- For example, when resources such as the port and the volume allocated to one host computer exist in different physical storage systems within the virtual storage system, communication between the physical storage systems occurs whenever the host computer accesses the resource (for example, a volume), because the access must be relayed between the physical storage systems. If this traffic exceeds the bandwidth of the connection path between the physical storage systems, the host I/O performance in the virtual storage system may degrade below the performance before the storage system migration.
- Hereinafter, a physical storage system is referred to as a "storage node", and a virtualized storage system composed of a plurality of storage nodes is referred to as a "virtual storage system".
- A computer system comprises a host computer, a first physical storage node, a plurality of second physical storage nodes connected to each other, and a management computer that manages the first and the plurality of second physical storage nodes.
- The plurality of second physical storage nodes present a virtual storage system to the host computer under the same identifier; when one second physical storage node receives an I/O command from the host computer that is destined for another node, the I/O command is transferred to that other second physical storage node.
- The management computer collects storage configuration information and performance information from the first physical storage node and the plurality of second physical storage nodes.
- In particular, the volumes provided by the first physical storage node and the load information related to them are collected.
- Based on the configuration information and performance information of the plurality of second physical storage nodes and on the volumes provided by the first physical storage node, resources of the second physical storage nodes are allocated within the range of the bandwidth of the transfer path between the second physical storage nodes.
- In Example 1, a diagram showing a configuration example of the computer system.
- In Example 1, a diagram showing an example of the storage node information table.
- In Example 1, a diagram showing an example of the inter-node transfer bandwidth information table.
- In Example 1, a diagram showing an example of the volume information table.
- In Example 1, a diagram showing an example of the threshold information table.
- In Example 1, a flowchart showing an example of the processing of the migration plan creation program.
- In Example 1, a diagram showing an example of the processing details of the migration plan creation program.
- In Example 2, a diagram showing an example of the local copy information table.
- In Example 2, a diagram showing an example of the snapshot information table.
- In Example 2, a diagram showing an example of the pool information table.
- In Example 2, a diagram showing an example of the related volume group information table.
- In Example 2, a diagram showing an example of the processing details of the migration plan creation program.
- In Example 3, a diagram showing a configuration example of the computer system.
- In Example 3, a diagram showing an example of the processing details of the migration plan creation program.
- In Example 4, a diagram showing an example of the storage node information table.
- In Example 4, a diagram showing an example of the volume information table.
- In Example 4, a diagram showing an example of the migration plan table.
- In Example 4, a diagram showing an example of the processing details of the migration plan creation program.
- In Example 4, a diagram showing an example of the processing details of creating a port migration plan in the migration plan creation program.
- In Example 4, a diagram showing an example of the processing details of creating a BE I/F migration plan in the migration plan creation program.
- A schematic diagram showing the flow of migration from a single physical storage node (old configuration) to a virtual storage system (new configuration) consisting of a plurality of physical storage nodes.
- FIG. 21 is a schematic diagram showing a flow of transition from an old configuration having a single physical storage node to a new configuration having a virtual storage system including a plurality of physical storage nodes.
- In the old configuration, the physical storage node provides three volumes.
- The CPU performance (MIPS) and port performance (Mbps) required by each volume are, in order, 30 MIPS and 80 Mbps for volume A, 60 MIPS and 60 Mbps for volume B, and 60 MIPS and 30 Mbps for volume C.
- The plurality of physical storage nodes constituting the virtual storage system have the same specifications: a CPU performance of 100 MIPS and a port performance of 100 Mbps.
- The physical storage nodes are internally interconnected by an ASIC; there is almost no degradation in response performance, and I/O command transfer and data exchange can be relayed. This transfer performance is 50 Mbps.
- The management server collects configuration information and performance information from the old-configuration physical storage node and from the new-configuration physical storage nodes, respectively ((1) Information collection in FIG. 21). It then creates a migration plan that allocates each volume and its port as resources within the same physical storage node, in descending order of required port performance, so that data transfer between the new-configuration physical storage nodes occurs as little as possible ((2) Plan creation in FIG. 21).
- Volume A can be allocated a CPU and a port from a single storage node.
- The same is possible for volume B.
- For the last volume C, no single physical storage node can provide both the storage area and the CPU, so they must be allocated from different physical storage nodes.
- However, since the port performance required by volume C is less than the transfer bandwidth between the physical storage nodes, migration is possible with little performance degradation ((3) Replace in FIG. 21).
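The arithmetic of this worked example can be checked with a short script. This is an illustrative sketch, not the patent's implementation; it assumes the FIG. 21 example uses two identical new-configuration nodes, which is consistent with the allocation described above (the node count for this example is not stated in this excerpt).

```python
# Arithmetic check of the FIG. 21 example. Assumption (not stated in the
# excerpt): the new configuration here has two identical storage nodes.

NODE_CPU, NODE_PORT = 100, 100   # MIPS, Mbps per new-configuration node
LINK_BW = 50                     # Mbps between the nodes

# required (CPU MIPS, port Mbps), processed in descending port order
vol_a, vol_b, vol_c = (30, 80), (60, 60), (60, 30)

# Volume A fills most of node 1's port; volume B then goes to node 2
node1_cpu, node1_port = NODE_CPU - vol_a[0], NODE_PORT - vol_a[1]  # 70, 20
node2_cpu, node2_port = NODE_CPU - vol_b[0], NODE_PORT - vol_b[1]  # 40, 40

# Volume C fits in neither node alone...
assert not (node1_cpu >= vol_c[0] and node1_port >= vol_c[1])  # port short
assert not (node2_cpu >= vol_c[0] and node2_port >= vol_c[1])  # CPU short
# ...but its port can come from node 2 and its CPU/storage from node 1,
# since its 30 Mbps port load fits within the 50 Mbps inter-node link.
assert node2_port >= vol_c[1] and node1_cpu >= vol_c[0]
assert vol_c[1] <= LINK_BW
print("volume C is split across nodes; link bandwidth suffices")
```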
- According to the present invention, it is possible to reduce the degradation of I/O performance caused by migration to a virtual storage system environment.
- Storage resources may be insufficient with only one storage node.
- In such a case, migrating to a virtual storage system composed of a plurality of storage nodes can be considered.
- An improvement in performance due to the migration is then expected. It is therefore particularly useful that the present invention can reduce I/O performance degradation and improve I/O performance by using the resources of the virtual storage system.
- FIG. 1 is a diagram illustrating a configuration example according to a first embodiment of a computer system to which the present invention is applied.
- the computer system includes a management computer 1000, a virtual storage system 1100, a host computer 1300, and a migration source storage node 1600.
- This virtual storage system 1100 is composed of a plurality of storage nodes 1200.
- the management computer 1000, the storage node 1200 and the migration source storage node 1600 are connected to each other via a management network 1400 (for example, LAN: Local Area Network).
- the management network 1400 is a network for mainly exchanging management data.
- the management network 1400 may be a network other than the IP network, such as a SAN, as long as it is a network for management data communication.
- the storage node 1200, the host computer 1300, and the migration source storage node 1600 are connected to each other via a data network 1500 (for example, SAN: Storage Area Network).
- the data network 1500 is a network for exchanging data stored in the virtual storage system 1100 by the host computer 1300.
- The data network 1500 may be a network other than a SAN, such as an IP network, as long as it is a network for data communication.
- In this embodiment, the host computer 1300 is described as a single computer, but there may be two or more.
- The number of migration source storage nodes 1600 may also be two or more.
- Further, a virtual storage system in which a plurality of migration source storage nodes 1600 are bundled can be configured; in that case, the migration source storage node 1600 may be read as a virtual storage system.
- In this embodiment, the migration destination virtual storage system 1100 is composed of three storage nodes 1200, but it may be composed of two, or of more than three.
- the data network 1500 and the management network 1400 may be the same network.
- the management computer 1000 includes a CPU 1010, a display device 1020, an input device 1030, a NIC 1040, and a memory 1050.
- the input device 1030 is a device for receiving an instruction from the administrator.
- the display device 1020 is a device that displays a processing result corresponding to an instruction from the administrator, a system status, and the like.
- the NIC 1040 is an I / F for connecting to the management network 1400.
- CPU 1010 operates in accordance with a program stored in memory 1050.
- the memory 1050 stores a storage node information table 1051, an inter-storage node transfer bandwidth information table 1052, a volume information table 1053, a threshold information table 1054, and a migration plan creation program 1055. Details of these tables will be described later.
- The migration plan creation program 1055 is a program that creates a plan for migrating from the migration source storage node 1600 to the virtual storage system 1100 and then executes the migration. Details of the operation of this program are also described later.
- the storage node 1200 includes a controller 1210 and a storage medium unit 1220, which are connected via a high-speed internal network.
- the storage medium unit 1220 includes a storage medium 1221 such as a plurality of hard disk drives and an SSD (Solid State Drive).
- the controller 1210 includes an FE I / F 1211, a data communication unit 1212, a CPU 1213, a NIC 1214, a memory 1215, and a BE I / F 1217.
- the NIC 1214 is an I / F for connecting to the management network 1400.
- the FE I / F 1211 is an I / F for connecting to the data network 1500.
- the BE I / F 1217 is an I / F for connecting to the storage medium unit 1220.
- the CPU 1213 operates according to a program stored in the memory 1215.
- the memory 1215 stores a control program 1216.
- the control program 1216 is a program for controlling the storage node, and forms a RAID group with the storage medium 1221 mounted in the storage medium unit 1220 and creates a logical volume 1222 to be provided to the host computer 1300.
- Based on the configured RAID group information, the control program 1216 reads and writes data to the appropriate storage medium 1221 in accordance with I/O commands issued by the host computer 1300 to the logical volume 1222.
- The control program 1216 also provides an API (Application Programming Interface) for referring to and updating control information that determines the operation of the storage node 1200, as well as its operation status.
- a management program (not shown) and a migration plan creation program 1055 operating on the management computer 1000 use this API to control and monitor storage nodes.
- When the data communication unit 1212 receives an I/O command that is not directed to a logical volume provided by the storage node 1200 on which it is mounted, it transfers the I/O command to the appropriate storage node 1200 and relays the exchange of data. When the I/O command is directed to a logical volume provided by its own storage node 1200, it passes the I/O command to the CPU 1213 executing the control program 1216.
- the internal connection 1230 is extended between the data communication units 1212 of each storage node 1200 constituting the virtual storage system 1100 in order to transfer I / O commands between the storage nodes and relay data exchange.
- This connection is a connection for Fiber Channel, SCSI (Small Computer System Interface), SAS (Serial Attached SCSI), or other data communication.
- The information indicating which storage node 1200 stores each logical volume is managed by the control program 1216, which responds to inquiries from the data communication unit 1212. This information can be referred to and changed by instructing the control program 1216 from the management computer 1000 or the like.
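The routing decision performed by the data communication unit can be sketched as follows. The table and names here (`volume_owner`, `route_io`) are hypothetical illustrations of the described behavior, not identifiers from the patent.

```python
# Hedged sketch of the I/O routing performed by the data communication
# unit 1212. Structures and names are illustrative, not from the patent.

# Hypothetical ownership table: which storage node provides which
# logical volume (the control program answers such inquiries, see text).
volume_owner = {"vol-1": "node-A", "vol-2": "node-B"}

def route_io(local_node, command):
    """Hand an I/O command to the local control program or forward it."""
    owner = volume_owner[command["volume"]]
    if owner == local_node:
        return ("local_cpu", command)      # handled by CPU 1213 locally
    return ("forward", owner, command)     # relayed over connection 1230

print(route_io("node-A", {"volume": "vol-2", "op": "read"}))
```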
- the migration source storage node 1600 basically has the same configuration as the storage node 1200. However, mounting of the data communication unit 1212 is optional.
- FIG. 2 is an example of the storage node information table 1051 stored in the memory 1050.
- This table 1051 stores performance information of each storage node 1200 and the migration source storage node 1600.
- the storage node ID 2001 is an ID for uniquely identifying a storage node in the computer system.
- the port performance 2002 indicates the maximum data transfer performance (throughput) of the FE I / F that the storage node has.
- the CPU performance 2003 indicates the CPU performance of the storage node.
- a cache capacity 2004 indicates a cache capacity (GB) included in the storage node.
- a storage capacity 2005 indicates a storage capacity (GB) of the storage node. This storage capacity is the capacity of the RAID group.
- In this embodiment, the storage nodes 1200 constituting the virtual storage system 1100 are assumed to be unused.
- If a storage node 1200 is already used for other purposes, the surplus performance and capacity obtained by subtracting the amounts in use are used instead.
- FIG. 3 is an example of the inter-storage node transfer bandwidth information table 1052 stored in the memory 1050.
- This table 1052 stores information on the connection destination storage node name and transfer bandwidth (Mbps) for the connection 1230 for communication between the storage nodes 1200 constituting the virtual storage system 1100.
- a storage node ID1 indicated by a column 3001 indicates a transfer source
- a storage node ID2 indicated by a column 3002 indicates a transfer destination
- The transfer bandwidth 3003 indicates the transfer bandwidth of the connection between the storage nodes indicated by storage node ID1 and storage node ID2. Since the storage nodes communicate with each other in full duplex, the transfer bandwidth is not always the same in both directions; it may differ depending on the direction. Therefore, as shown in the first and second records of FIG. 3, even between the same pair of nodes, there are separate records for each transfer source and transfer destination. These values are collected in advance by the migration plan creation program 1055 from each storage node 1200 via the management network 1400. Alternatively, the user may input them directly from the input device 1030.
- When the connection 1230 is already used for other purposes, the transfer bandwidth 3003 is the surplus transfer bandwidth obtained by subtracting the amount in use.
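Because the bandwidth is directional, a natural representation of table 1052 keys each record by a (source, destination) pair, so the two directions between the same pair of nodes can hold different values. A minimal sketch, with illustrative node names and numbers:

```python
# Hedged sketch of the inter-storage-node transfer bandwidth table 1052.
# The key is (transfer source, transfer destination); full-duplex links
# mean (A, B) may differ from (B, A). Values are illustrative.

transfer_bw = {
    ("node-A", "node-B"): 50,  # Mbps in the A -> B direction
    ("node-B", "node-A"): 40,  # Mbps in the B -> A direction
}

# Each direction is a separate record, as in FIG. 3.
for (src, dst), bw in transfer_bw.items():
    print(src, "->", dst, ":", bw, "Mbps")
```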
- FIG. 4 is an example of the volume information table 1053 stored in the memory 1050.
- This table 1053 stores information on the logical volume 1222 provided to the host computer 1300 by the migration source storage node 1600.
- the storage node ID 4001 is an ID for uniquely identifying the migration source storage node 1600.
- the VOLID 4002 is an ID for uniquely identifying a logical volume in the migration source storage node 1600 indicated by the storage node ID 4001.
- the port usage amount 4003 indicates the maximum value of the usage amount (Mbps) of the port performance used by the logical volume specified by the storage node ID 4001 and the VOLID 4002.
- the CPU usage rate 4004 indicates the maximum value of the usage rate (%) of the CPU used by the migration source storage node 1600 indicated by the storage node ID 4001 for the logical volume.
- the cache usage amount 4005 indicates the maximum value of the cache capacity (MB) used by the migration source storage node 1600 indicated by the storage node ID 4001 for the logical volume.
- the storage capacity 4006 indicates the maximum value of the storage capacity (GB) used by the migration source storage node 1600 indicated by the storage node ID 4001 for the logical volume.
- The allocation destination host 4007 indicates to which host computer 1300 the logical volume is allocated.
- In this embodiment, the port usage amount 4003, the CPU usage rate 4004, the cache usage amount 4005, and the storage capacity 4006 use maximum values, but as another embodiment they may be values averaged over time, or values obtained by statistical processing, such as a maximum value that excludes outliers. These values are collected in advance by the migration plan creation program 1055 from the migration source storage node 1600 via the management network 1400.
- FIG. 5 is an example of the threshold information table 1054 stored in the memory 1050.
- This table 1054 stores information used when the migration plan creation program 1055 creates a migration plan.
- the port usage rate threshold value 5001 indicates the upper limit of port performance to be used in each storage node 1200 as a result of migration, as a percentage (%) of the usage rate.
- the CPU usage rate threshold 5002 indicates the upper limit of the CPU performance to be used by each storage node 1200 as a result of migration, as a percentage (%) of the usage rate.
- the cache usage rate threshold 5003 indicates the upper limit of the cache capacity to be used in each storage node 1200 as a result of migration, as a percentage (%) of the usage rate.
- the storage capacity usage rate threshold value 5004 indicates the upper limit of the storage capacity that will be used in each storage node 1200 as a result of migration, as a percentage (%) of the usage rate.
- The inter-storage node transfer bandwidth usage threshold 5005 indicates the upper limit of the transfer bandwidth between storage nodes 1200 to be used as a result of migration, as a percentage (%) of the usage rate.
- Each threshold value is in principle set by the user, but it is not fixed and can be changed.
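A minimal sketch of how these thresholds bound usable resources, consistent with the surplus calculation described later for step 7030 (surplus = capacity multiplied by the threshold, minus the amount already allocated). All names and numbers here are illustrative, not from the patent.

```python
# Hedged illustration of applying the thresholds in table 1054.
# Capacities and percentages are illustrative stand-ins.

node = {"port_mbps": 100, "cpu_mips": 100}      # table 1051 values
thresholds = {"port_mbps": 80, "cpu_mips": 70}  # table 1054, in %

def surplus(resource, used):
    """Usable amount left after applying the threshold and subtracting use."""
    return node[resource] * thresholds[resource] / 100 - used

print(surplus("port_mbps", 60))  # 20.0 Mbps still assignable
print(surplus("cpu_mips", 0))    # 70.0 MIPS still assignable
```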
- FIG. 6 is a flowchart showing an operation example of the migration plan creation program 1055. The processing contents of each step performed by the migration plan creation program 1055 after it is started are described below in order.
- In step 6001, performance information of each storage node is collected from the control program 1216 controlling the migration source storage node 1600 and from the control programs 1216 controlling the storage nodes 1200 constituting the virtual storage system 1100, and the storage node information table 1051 (FIG. 2) and the inter-storage node transfer bandwidth information table 1052 (FIG. 3) are created.
- Alternatively, the tables in this step 6001 may be created by user input.
- In step 6002, the usage statuses of the logical volumes are collected from the control program 1216 that controls the migration source storage node 1600, and the volume information table 1053 (FIG. 4) is created.
- In step 6003, a migration plan is created based on the collected information.
- The migration plan indicates, for each logical volume 1222 of the migration source storage node 1600, which storage node 1200 provides its storage area and which storage node's port it uses. Details of the migration plan created in step 6003 are described later with reference to FIG. 7.
- In step 6004, if a migration plan could be created in the preceding step 6003 (yes), the process proceeds to step 6005; if it could not be created (no), the process ends. In step 6005, migration is performed according to the created migration plan.
- Specifically, for each logical volume 1222 of the migration source storage node 1600, the migration plan creation program 1055 instructs the control program 1216 to create a logical volume from the RAID group of the storage node 1200 that provides the storage area and to allocate the necessary cache capacity. The migration plan creation program 1055 then instructs the control program 1216 to assign the port of the storage node 1200 that provides the port to the host computer 1300. The assignment destination host computer is determined by referring to the allocation destination host 4007 in the volume information table 1053. Finally, the migration plan creation program 1055 notifies the control programs 1216 of both the migration source and migration destination storage nodes of the transfer control information used by the data communication unit 1212 to relay I/O command transfer and data exchange between the storage nodes.
- In this embodiment, the transfer control information is notified even when a migrated logical volume uses a storage area and a port within the same storage node.
- If the data communication unit 1212 is implemented so that it skips the transfer of I/O commands and data exchange when no transfer control information has been notified, then the control information is not notified for logical volumes that use the storage area and port of the same storage node after migration.
- FIG. 7 is a flowchart showing a detailed operation example of the migration plan creation (step 6003 of FIG. 6) performed by the migration plan creation program 1055.
- the processing contents of each step by the migration plan creation program 1055 will be described in order.
- In step 7010, the logical volumes are sorted in descending order of the port usage amount 4003 in the volume information table 1053.
- In step 7020, the subsequent processing is repeated for each of the sorted logical volumes.
- In step 7030, it is determined whether the port performance and the storage capacity used by the logical volume can both be allocated, at the migration destination, from the same storage node.
- Specifically, it is determined whether there is a migration destination storage node 1200 with sufficient surplus for the port usage amount 4003 of the volume information table 1053, the CPU performance calculated from the CPU usage rate 4004 of the volume information table 1053 and the CPU performance 2003 of the storage node information table 1051, and the cache usage amount 4005 and storage capacity 4006 of the volume information table 1053.
- The usable performance and capacity (hereinafter "surplus performance" and "surplus capacity") are calculated by applying the corresponding threshold to each value. If the surplus performance and surplus capacity are larger than the values used by the logical volume, it is determined that there is sufficient surplus and allocation is possible; if they are smaller, it is determined that there is not sufficient surplus and allocation is impossible.
- if it is determined in step 7030 that allocation is possible from a single storage node 1200 (yes), then in step 7040 one of the allocatable storage nodes 1200 is selected, and the allocation of a port and storage capacity from that storage node 1200 is added to the migration plan. At that time, the amounts used by the logical volume are subtracted from the node's surplus port performance, surplus CPU performance, surplus cache capacity, and surplus storage capacity. Thereafter, the process returns to step 7030 to process the next logical volume.
- if allocation from a single storage node is not possible (no), then in step 7050 the storage nodes 1200 having a sufficient margin in port performance are listed, that is, the storage nodes 1200 whose surplus port performance exceeds the port performance used by the logical volume.
- in step 7060, the storage nodes 1200 having a sufficient surplus in CPU performance, cache capacity, and storage capacity are listed, that is, the storage nodes 1200 whose surplus CPU performance, surplus cache capacity, and surplus storage capacity exceed the CPU performance, cache capacity, and storage capacity used by the logical volume.
- in step 7070, it is determined whether there is a combination of a storage node listed in step 7050 and a storage node listed in step 7060 for which the data transfer bandwidth between the two storage nodes has a sufficient surplus.
- specifically, the transfer bandwidth 3003 of the inter-storage-node transfer bandwidth information table 1052 is multiplied by the inter-storage-node transfer bandwidth usage threshold 5005 of the threshold information table 1054 to calculate the available bandwidth (hereinafter referred to as "surplus bandwidth").
- if this surplus bandwidth is equal to or greater than the port performance used by the logical volume, it is determined that there is a sufficient surplus; if it is smaller, it is determined that there is not.
- if it is determined in step 7070 that a combination of storage nodes with sufficient surplus bandwidth exists (yes), then in step 7080 one such combination is selected. A port is allocated from the node of the combination that has surplus port performance, a storage area is allocated from the other node, and both allocations are added to the migration plan. At that time, the amounts used by the logical volume are subtracted from the surplus port performance of the node providing the port, and from the surplus CPU performance, surplus cache capacity, and surplus storage capacity of the node providing the storage capacity. The port performance used by the logical volume is also subtracted from the surplus bandwidth between the selected storage nodes. Thereafter, the process returns to step 7030 to process the next logical volume.
- otherwise, in step 7090, the administrator is notified that the migration cannot be performed while maintaining performance, and the process ends.
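The overall flow of FIG. 7 can be sketched as the following greedy placement loop. This is a simplified model under stated assumptions: the node and volume dictionaries are invented, and CPU and cache bookkeeping is omitted for brevity (only port performance, storage capacity, and inter-node bandwidth are tracked).

```python
# Simplified sketch of the FIG. 7 migration-plan loop: volumes are placed
# largest-port-user first, preferring a single node, falling back to a
# (port node, storage node) split with an inter-node bandwidth check.

def plan_migration(volumes, nodes, bandwidth):
    """volumes: list of dicts with 'id', 'port', 'capacity'.
    nodes: node_id -> {'port_surplus', 'cap_surplus'} (mutated as we allocate).
    bandwidth: (a, b) -> surplus transfer bandwidth between nodes a and b."""
    plan = []
    # Step 7010: sort logical volumes in descending order of port usage.
    for vol in sorted(volumes, key=lambda v: v["port"], reverse=True):
        # Steps 7030/7040: try to allocate port and capacity from one node.
        same = [n for n, s in nodes.items()
                if s["port_surplus"] >= vol["port"]
                and s["cap_surplus"] >= vol["capacity"]]
        if same:
            n = same[0]
            nodes[n]["port_surplus"] -= vol["port"]
            nodes[n]["cap_surplus"] -= vol["capacity"]
            plan.append((vol["id"], n, n))          # (volume, port node, storage node)
            continue
        # Steps 7050-7080: split port and capacity across two nodes whose
        # inter-node link has surplus bandwidth >= the volume's port usage.
        pairs = [(p, c) for p in nodes for c in nodes if p != c
                 and nodes[p]["port_surplus"] >= vol["port"]
                 and nodes[c]["cap_surplus"] >= vol["capacity"]
                 and bandwidth.get((p, c), 0) >= vol["port"]]
        if not pairs:
            # Step 7090: no placement preserves performance.
            raise RuntimeError(f"cannot migrate {vol['id']} while maintaining performance")
        p, c = pairs[0]
        nodes[p]["port_surplus"] -= vol["port"]
        nodes[c]["cap_surplus"] -= vol["capacity"]
        bandwidth[(p, c)] -= vol["port"]
        plan.append((vol["id"], p, c))
    return plan
```

Sorting in descending order of port usage places the heaviest consumers while single-node headroom is still available, which is why the split (and its bandwidth cost) tends to fall on lighter volumes.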
- the data communication unit 1212 does not make an inquiry to the CPU 1213 that operates the control program 1216 when transferring I / O commands or relaying data exchange.
- as described above, a logical volume that uses a large amount of port performance is migrated so that its I/O processing does not straddle storage nodes. Therefore, migration from the migration source storage node 1600 to the virtual storage system 1100 is possible without the transfer bandwidth of the inter-storage-node connections degrading I/O performance.
- Example 2 is an embodiment in the case where advanced processing such as replication (local copy) or snapshot of the logical volume 1222 can be performed in the migration source storage node 1600 and the migration destination storage node 1200.
- to perform such processing, the resources it requires must be arranged in the same storage node as the storage node to which the storage area is assigned.
- since Example 2 is substantially the same as Example 1 in configuration and processing, only the differences are described below.
- control program 1216 shown in FIG. 1 has a function for performing logical volume replication and snapshot.
- the control program 1216 also provides the management computer 1000 with a reference and setting API for managing these functions.
- the functions of replication (local copy) and snapshot are described here as examples; however, other functions held by the storage nodes may be handled in the same manner.
- the memory 1050 included in the management computer 1000 stores a local copy information table, a snapshot information table, a pool information table, and a related volume group information table (not shown). Details of these tables will be described later.
- FIG. 8 is an example of a local copy information table stored in the memory 1050. This table stores information related to the local copy.
- the storage node ID 8001 is an ID for uniquely identifying a storage node in the computer system.
- the copy group ID 8002 is an ID of a copy group that is uniquely identified by the storage node 1600 indicated by the storage node ID 8001.
- the primary volume 8003 is an ID of a logical volume uniquely identified by the storage node 1600 indicated by the storage node ID 8001, and the logical volume indicated by this ID is a replication source logical volume.
- the secondary volume 8004 is an ID of a logical volume uniquely identified by the storage node 1600 indicated by the storage node ID 8001, and the logical volume indicated by this ID is a replication destination logical volume.
- the CPU usage rate 8005 is the maximum value of the CPU performance (%) required for this copy.
- a cache capacity 8006 is the maximum cache capacity (MB) required for this copy.
- in Example 2, the CPU usage rate 8005 and the cache capacity 8006 are set to maximum values. However, as another embodiment, a value obtained by statistical processing, such as the maximum value excluding outliers or a time-averaged value, may be used. These values are collected in advance by the migration plan creation program 1055 from each storage node 1200 via the management network 1400.
- FIG. 9 is an example of a snapshot information table stored in the memory 1050.
- This table stores information related to snapshots.
- the storage node ID 9001 is an ID for uniquely identifying a storage node in the computer system.
- the SS group ID 9002 is an ID of the snapshot group uniquely identified by the storage node 1600 indicated by the storage node ID 9001.
- the volume ID 9003 is an ID of a logical volume uniquely identified by the storage node 1600 indicated by the storage node ID 9001.
- the logical volume indicated by this ID is a snapshot source logical volume.
- the SS volume ID 9004 is an ID of a logical volume that is uniquely identified by the storage node 1600 indicated by the storage node ID 9001, and the logical volume indicated by this ID is a snapshot volume.
- the CPU usage rate 9005 is the maximum value of CPU performance (%) required for this snapshot.
- the cache capacity 9006 is the maximum value of the cache capacity (MB) required for this snapshot.
- the pool ID 9007 is an ID of a pool that is uniquely identified by the storage node 1600 indicated by the storage node ID 9001, and indicates a pool used in the snapshot.
- in Example 2, the CPU usage rate 9005 and the cache capacity 9006 are set to maximum values. However, as another embodiment, a value obtained by statistical processing, such as the maximum value excluding outliers or a time-averaged value, may be used. These values are collected in advance by the migration plan creation program 1055 from each storage node 1200 via the management network 1400.
- FIG. 10 is an example of a pool information table stored in the memory 1050. This table stores information on pools used in snapshots.
- the storage node ID 10001 is an ID for uniquely identifying a storage node in the computer system.
- the pool ID 10002 is an ID of a pool uniquely identified by the storage node 1600 indicated by the storage node ID 10001.
- the pool VOLID 10003 indicates the ID of the logical volume constituting the pool indicated by the storage node ID 10001 and the pool ID 10002. These values are collected in advance by the migration plan creation program 1055 from each storage node 1200 via the management network 1400.
- FIG. 11 is an example of a related volume group information table stored in the memory 1050.
- This table is a table that summarizes logical volumes that need to be allocated storage areas from the same storage node in order to operate the storage function as described above.
- the storage node ID 11001 is an ID for uniquely identifying a storage node in the computer system.
- the group ID 11002 is an ID indicating a logical volume group.
- the related volume ID 11003 is a list of logical volume IDs belonging to the group indicated by the group ID 11002.
- the total port usage amount 11004 indicates the total port performance (Mbps) used by the volume group indicated by the storage node ID 11001 and the related volume ID 11003.
- the total CPU usage rate 11005 indicates the total (%) of CPU performance used by the volume group indicated by the storage node ID 11001 and the related volume ID 11003.
- the total cache capacity 11006 indicates the total (MB) of the cache capacity used by the volume group indicated by the storage node ID 11001 and the related volume ID 11003.
- the total storage capacity 11007 indicates the total storage capacity (GB) used by the volume group indicated by the storage node ID 11001 and the related volume ID 11003.
- FIG. 12 is a flowchart showing a detailed operation example of the migration plan creation 6003 of FIG. 6 in the second embodiment.
- the processing contents of each step by the migration plan creation program 1055 will be described in order.
- in step 12010, performance information is collected from the control program 1216 of the migration source storage node 1600, and the related volume group information table of FIG. 11 is created. Specifically, the local copy information table (FIG. 8), the snapshot information table (FIG. 9), and the pool information table (FIG. 10) are collected, and related information is registered in the related volume group information table (FIG. 11) according to the functions scheduled to be executed.
- for example, in the case of a local copy, the replication source and replication destination logical volumes are registered in the same related volume group. Copy source and copy destination logical volumes belonging to the same copy group ID are also registered in the same related volume group. At that time, in addition to the port performance, CPU performance, cache capacity, and storage capacity used by each volume, the CPU usage rate 8005 and cache capacity 8006 used for the local copy are added, and the resulting totals are set in the total port usage amount 11004, total CPU usage rate 11005, total cache capacity 11006, and total storage capacity 11007.
- the snapshot source logical volume and the snapshot destination logical volume are registered in the same related volume group.
- logical volumes belonging to the same snapshot group are also registered in the same related volume group.
- the logical volume constituting the pool used by the snapshot is also specified from the pool information table (FIG. 10) and registered in the same related volume group.
- at that time, in addition to the port performance, CPU performance, cache capacity, and storage capacity used by each volume, the CPU usage rate 9005 and cache capacity 9006 used for the snapshot are added, and the resulting totals are set in the total port usage amount 11004, total CPU usage rate 11005, total cache capacity 11006, and total storage capacity 11007.
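The grouping described above amounts to taking the transitive closure of volumes connected through copy pairs, snapshot pairs, and pool membership. A minimal sketch using a union-find structure follows; the table contents (volume and pool names) are invented for illustration and do not come from the patent's figures.

```python
# Sketch of building related volume groups (FIG. 11) from copy pairs,
# snapshot pairs, and pool membership, using a union-find structure.
from collections import defaultdict

parent = {}

def find(v):
    parent.setdefault(v, v)
    while parent[v] != v:
        parent[v] = parent[parent[v]]   # path halving
        v = parent[v]
    return v

def union(a, b):
    parent[find(a)] = find(b)

# Illustrative table contents (invented, not the figures' actual data):
local_copies = [("vol1", "vol2")]           # (primary, secondary) pairs, FIG. 8
snapshots = [("vol3", "vol4", "pool1")]     # (source, snapshot volume, pool), FIG. 9
pools = {"pool1": ["vol5", "vol6"]}         # pool -> constituent pool volumes, FIG. 10

for pvol, svol in local_copies:
    union(pvol, svol)
for src, ss, pool in snapshots:
    union(src, ss)
    for pool_vol in pools.get(pool, []):    # the pool's volumes join the group too
        union(src, pool_vol)

# Collect groups: volumes sharing a root must get storage from one node.
groups = defaultdict(set)
for v in list(parent):
    groups[find(v)].add(v)
```

Each resulting group then gets the summed resource requirements (total port usage, total CPU usage rate, total cache capacity, total storage capacity) as in the related volume group information table.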
- in the steps after step 12010, the processing performed per logical volume in FIG. 7 of Example 1 is instead performed per related volume group.
- in step 12020, the related volume groups are sorted in descending order of the total port usage amount 11004.
- in step 12030, the subsequent processing is repeated for each of the sorted related volume groups.
- in step 12040, it is determined whether the port performance and storage capacity used by the related volume group can be allocated from a single storage node.
- if it is determined in step 12040 that allocation is possible from a single storage node (yes), then in step 12050 one of the allocatable storage nodes 1200 is selected, and the allocation of a port and storage capacity from that storage node 1200 is added to the migration plan. At that time, the total performance and capacity used by the related volume group are subtracted from the corresponding surplus performance and surplus capacity.
- if it is determined in step 12040 that allocation from a single storage node 1200 is not possible (no), the process proceeds to step 12060.
- in step 12060, the storage nodes having a sufficient margin in port performance are listed, that is, the storage nodes 1200 whose surplus port performance exceeds the total port performance used by the related volume group.
- in step 12070, the storage nodes 1200 having a sufficient surplus in CPU performance, cache capacity, and storage capacity are listed.
- the storage nodes 1200 listed here are judged in the same way as in Example 1, using the total performance and total capacity of the related volume group.
- in step 12080, it is determined whether there is a combination of the storage nodes 1200 listed in steps 12060 and 12070 having a sufficient margin in the inter-node data transfer bandwidth.
- if it is determined in step 12080 that a combination of storage nodes with sufficient surplus bandwidth exists (yes), then in step 12090 one such combination is selected. A port is allocated from the node of the combination that has surplus port performance, a storage area is allocated from the other node, and both allocations are added to the migration plan. At that time, the total performance and capacity used by the related volume group are subtracted from the corresponding surplus performance and surplus capacity, and the port performance used by the related volume group is subtracted from the surplus bandwidth between the selected storage nodes. Thereafter, the process returns to step 12040 to process the next related volume group.
- if it is determined in step 12080 that no combination of storage nodes has sufficient surplus bandwidth (no), then in step 12100 the administrator is notified that the migration cannot be performed while maintaining performance, and the process ends.
- in addition, the migration instruction in step 6005 of FIG. 6 must also make the function settings corresponding to processes such as replication (local copy) and snapshot.
- as described above, according to Example 2, migration from the migration source storage node 1600 to the virtual storage system 1100 is possible without the transfer bandwidth of the inter-storage-node connections becoming a bottleneck and degrading I/O performance.
- in Example 1, the storage nodes 1200 constituting the virtual storage system 1100 are all directly connected to one another in a mesh by internal connections.
- the storage nodes related to the transfer may be connected via the third storage node 1200 without being directly connected to each other.
- the I / O command transfer and data exchange relay by the data communication unit may be performed across three or more storage nodes.
- in that case, the transfer bandwidth between multiple pairs of storage nodes is consumed, and efficiency deteriorates.
- the third embodiment corresponds to such a transfer form.
- since Example 3 is substantially the same as Example 1 in configuration and processing, only the differences are described below.
- FIG. 13 is a configuration example of a computer system according to the third embodiment.
- the first embodiment has a configuration in which the data communication units 1212 of the storage nodes 1200 constituting the virtual storage system 1100 are directly connected to each other by an internal connection 1230 (FIG. 1).
- in Example 3, the data communication units are not directly connected to each other, but are connected via another storage node 1200, as with the internal connections 1230 shown in FIG. 13.
- FIG. 14 is a flowchart showing a detailed operation example of creating a migration plan in step 6003 of FIG. 6 in the third embodiment. Since the processing contents from step 7010 to step 7060 are the same as those in the first embodiment, the processing by the migration plan creation program 1055 after that will be described below.
- in step 14070, it is determined whether there is a combination of the storage nodes 1200 listed in steps 7050 and 7060 having a sufficient surplus in the inter-node data transfer bandwidth.
- specifically, the shortest path of internal connections between the storage nodes 1200 is calculated by referring to the inter-storage-node transfer bandwidth information table 1052 (FIG. 3). If the surplus bandwidth of every internal connection traversed exceeds the port performance used by the logical volume, it is determined that there is a sufficient surplus; otherwise, it is determined that there is not.
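The shortest-path check above can be sketched as a breadth-first search over the bandwidth table, followed by a per-link surplus test. This is an illustrative sketch: the node names and bandwidth figures are invented, and the table is assumed to hold symmetric entries for each link.

```python
# Sketch of the Example 3 check: find the shortest internal-connection path
# between two storage nodes (BFS over the bandwidth table) and verify every
# link on the path has surplus bandwidth >= the volume's port usage.
from collections import deque

def shortest_path(links, src, dst):
    """links: dict (a, b) -> surplus bandwidth (symmetric entries assumed)."""
    adj = {}
    for a, b in links:
        adj.setdefault(a, []).append(b)
    prev, seen = {}, {src}
    q = deque([src])
    while q:
        n = q.popleft()
        if n == dst:                      # reconstruct the path back to src
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return list(reversed(path))
        for m in adj.get(n, []):
            if m not in seen:
                seen.add(m)
                prev[m] = n
                q.append(m)
    return None                           # no route between the nodes

def has_surplus(links, path, port_usage):
    # Every traversed link needs surplus bandwidth >= the volume's port usage.
    return all(links[(a, b)] >= port_usage for a, b in zip(path, path[1:]))

# Daisy-chain example: N1 - N2 - N3, with hypothetical surplus bandwidths.
links = {("N1", "N2"): 500, ("N2", "N1"): 500,
         ("N2", "N3"): 300, ("N3", "N2"): 300}
path = shortest_path(links, "N1", "N3")
ok = has_surplus(links, path, 200)
```

BFS finds the minimum-hop path, which matches step 14080's preference for combinations that traverse the fewest storage nodes.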
- if it is determined in step 14070 that combinations of storage nodes 1200 with sufficient surplus bandwidth exist (yes), then in step 14080 those combinations are sorted by the number of storage nodes traversed (hop count).
- in step 14090, the combination with the fewest traversed storage nodes (smallest hop count) is selected. A port is allocated from the node of the combination that has surplus port performance, a storage area is allocated from the other node, and both allocations are added to the migration plan. At that time, the amounts used by the logical volume are subtracted from the surplus port performance of the node providing the port, and from the surplus CPU performance, surplus cache capacity, and surplus storage capacity of the node providing the storage capacity. The port performance used by the logical volume is also subtracted from the surplus bandwidth of every internal connection traversed. Thereafter, the process returns to step 7030 to process the next logical volume.
- otherwise, in step 14100, the administrator is notified that the migration cannot be performed while maintaining performance, and the process ends.
- as described above, according to Example 3, even in a configuration in which the storage nodes 1200 are not directly connected to each other but are connected via a third storage node 1200, as in a daisy chain, migration from the migration source storage node 1600 to the virtual storage system 1100 is possible without the transfer bandwidth becoming a bottleneck and degrading I/O performance.
- in the examples described so far, the data communication unit 1212 transfers I/O commands from the host computer 1300 and relays the accompanying data exchange.
- the fourth embodiment corresponds to a configuration in which the data communication unit 1212 transfers an I / O command from the BE I / F to the storage medium unit and relays data exchange.
- since Example 4 is substantially the same as Example 1 in configuration and processing, only the differences are described below.
- FIG. 15 is an example of a storage node information table used in the fourth embodiment. This is basically the same as the storage node information table 1051 in FIG. 2, but a BE port performance 15001 is added. This column shows the performance of the BE I / F.
- FIG. 16 is an example of a volume information table used in the fourth embodiment. Although it is basically the same as the volume information table 1053 of FIG. 4, a BE port usage amount 16001 is added. This column indicates the performance of the BE I / F used by the logical volume indicated by the storage node ID 4001 and the VOLID 4002.
- FIG. 17 is an example of a migration plan table used in Example 4. A migration plan table is not illustrated for Examples 1 to 3; it is shown here because the processing in Example 4 is more complex.
- Storage node ID 17001 is an ID for uniquely identifying a storage node in the computer system.
- the VOLID 17002 is an ID for uniquely identifying the logical volume 1222 in the storage node 1600 indicated by the storage node ID 17001.
- the port use storage node ID 17003 indicates a storage node having a port used by the logical volume 1222 indicated by the storage node ID 17001 and the VOLID 17002 after migration.
- the cache use storage node ID 17004 indicates a storage node having a cache used by the logical volume 1222 indicated by the storage node ID 17001 and the VOLID 17002 after migration.
- the storage area use storage node ID 17005 indicates a storage node having a storage area used by the logical volume 1222 indicated by the storage node ID 17001 and the VOLID 17002 after migration.
- FIG. 18 is a flowchart showing a detailed operation example of creating a migration plan in step 6003 of FIG. 6 in the fourth embodiment. Hereinafter, processing of each step by the migration plan creation program 1055 will be described in order.
- in step 18010, referring to the volume information table of FIG. 16, a list of (logical volume ID, port usage amount) pairs and a list of (logical volume ID, BE port usage amount) pairs are created for the logical volumes of the migration source storage node.
- in step 18020, the two lists of pairs are merged and sorted in descending order, using the port usage amount or BE port usage amount of each pair as the key.
- in step 18030, the subsequent processing is repeated for the sorted pairs in order.
- in step 18040, it is determined whether the current pair is a (logical volume ID, port usage amount) pair or a (logical volume ID, BE port usage amount) pair.
- if it is the former, a port migration plan is created in step 18050. Details of the creation of this port migration plan are described later with reference to FIG. 19. Thereafter, the process returns to step 18030 to process the next pair.
- if it is the latter, a BE I/F migration plan is created in step 18060. Details of the creation of the BE I/F migration plan are described later with reference to FIG. 20. Thereafter, the process returns to step 18030 to process the next pair. When all pairs have been processed, the process ends.
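Steps 18010 to 18040 above can be sketched as follows. The volume records and usage numbers are invented for illustration; the two plan-creation steps are left as placeholders since they are detailed in FIGS. 19 and 20.

```python
# Sketch of steps 18010-18040: build (volume, usage) pairs for both the
# front-end port and the BE I/F, merge them, and process in descending order.

volumes = [
    {"id": "vol1", "port_usage": 300, "be_port_usage": 250},
    {"id": "vol2", "port_usage": 100, "be_port_usage": 400},
]

pairs = []
for v in volumes:                             # step 18010: two lists of pairs
    pairs.append(("port", v["id"], v["port_usage"]))
    pairs.append(("be_port", v["id"], v["be_port_usage"]))

pairs.sort(key=lambda p: p[2], reverse=True)  # step 18020: sort by usage, descending

for kind, vol_id, usage in pairs:             # steps 18030-18040: dispatch per pair
    if kind == "port":
        pass  # step 18050: create a port migration plan for vol_id (FIG. 19)
    else:
        pass  # step 18060: create a BE I/F migration plan for vol_id (FIG. 20)
```

Merging both lists into one descending order means the heaviest consumer, whether front-end or back-end, is placed first, regardless of which interface it loads.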
- FIG. 19 is a flowchart showing a detailed operation example of creating a port migration plan in step 18050 of FIG.
- the processing contents of each step by the migration plan creation program 1055 will be described in order.
- in step 19000, it is determined, with reference to the migration plan table of FIG. 17, whether the storage node that will hold the cache for the logical volume has already been determined. If it has (yes), it is determined in step 19010 whether that storage node has a margin in port performance. The details of this determination are the same as in Example 1 (step 7030 in FIG. 7).
- if there is a margin (yes), in step 19020 the port of that storage node is added to the migration plan. Specifically, the value of the port use storage node ID 17003 of the record having the storage node ID 17001 and VOLID 17002 corresponding to the logical volume is updated, and the port usage amount is subtracted from the surplus port performance of the storage node.
- if there is no margin (no), in step 19030 the storage nodes with a margin in port performance are listed. The specific method of enumeration is the same as in Example 1 (step 7050 in FIG. 7).
- step 19040 it is determined whether there is a storage node in which the surplus of the transfer bandwidth of the internal connection between the listed storage node and the storage node using the cache is equal to or greater than the port usage. If there is a surplus storage node (yes), in step 19050, one of the storage nodes is selected and added to the migration plan to use the port of the storage node. At that time, the surplus performance of the port of the storage node and the surplus of the transfer band between the storage nodes are subtracted. If there is no surplus storage node (no), in step 19060, the user is notified that the migration cannot be performed while maintaining the performance, and the process ends.
- if it is determined in step 19000 that the storage node holding the cache for the logical volume has not been determined (no), it is determined in step 19070 whether the port and the cache can be provided by a single storage node. Specifically, it is determined whether there is a storage node having sufficient surplus performance and surplus capacity for the port usage amount 4003 of the volume information table (FIG. 16), the CPU usage (calculated from the CPU performance 2003 of the storage node information table (FIG. 15) and the CPU usage rate 4004 of the volume information table (FIG. 16)), and the cache capacity 4005 of the volume information table (FIG. 16).
- if not (no), in step 19080, the storage nodes having sufficient surplus performance for the port usage amount 4003 are listed. In addition, the storage nodes having sufficient surplus performance and capacity for the CPU usage (calculated from the CPU performance 2003 and the CPU usage rate 4004) and the cache capacity 4005 are listed.
- in step 19090, it is determined whether any pair of the storage nodes listed in step 19080 has a surplus inter-node transfer bandwidth equal to or greater than the port usage amount 4003 of the volume information table (FIG. 16). If there is none (no), in step 19060 the user is notified that the migration cannot be performed while maintaining performance, and the process ends. If there is (yes), in step 19100 one such combination of storage nodes is selected, and the port of the node with surplus port performance and the cache of the node with surplus cache and CPU performance are added to the migration plan.
- the value of the port use storage node ID 17003 and the value of the cache use storage node ID 17004 of the record having the storage node ID 17001 and the VOL ID 17002 corresponding to the logical volume are updated.
- the surplus of each of port performance, CPU performance, cache capacity and transfer bandwidth between storage nodes is subtracted.
- if it is determined in step 19070 that the port and cache can be provided by a single storage node (yes), then in step 19110 one such storage node is selected, and the use of that node's port and cache is added to the migration plan. At that time, the corresponding surpluses of port performance, CPU performance, and cache capacity are subtracted.
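The branching of FIG. 19 can be condensed into the following sketch. It models only port, CPU, and cache surpluses with invented data structures, returns the chosen (port node, cache node) pair, and omits the surplus subtraction that the actual steps perform after each allocation.

```python
# Condensed sketch of the FIG. 19 port-placement decision.

def plan_port(vol, nodes, bandwidth, cache_node=None):
    """vol: {'port', 'cpu', 'cache'} requirements.
    nodes: node_id -> {'port', 'cpu', 'cache'} surpluses.
    bandwidth: (a, b) -> surplus inter-node transfer bandwidth.
    cache_node: node already chosen for the cache, if any (migration plan table).
    Returns (port_node, cache_node)."""
    if cache_node is not None:
        # Step 19010/19020: prefer the port on the node already holding the cache.
        if nodes[cache_node]["port"] >= vol["port"]:
            return cache_node, cache_node
        # Steps 19030-19050: another node with port surplus and enough
        # inter-node bandwidth back to the cache node.
        for n, s in nodes.items():
            if n != cache_node and s["port"] >= vol["port"] \
                    and bandwidth.get((n, cache_node), 0) >= vol["port"]:
                return n, cache_node
        raise RuntimeError("cannot migrate while maintaining performance")  # step 19060
    # Step 19070/19110: try to place port and cache on a single node.
    for n, s in nodes.items():
        if s["port"] >= vol["port"] and s["cpu"] >= vol["cpu"] \
                and s["cache"] >= vol["cache"]:
            return n, n
    # Steps 19080-19100: split port node and cache node with a bandwidth check.
    for p, s in nodes.items():
        if s["port"] < vol["port"]:
            continue
        for c, t in nodes.items():
            if c != p and t["cpu"] >= vol["cpu"] and t["cache"] >= vol["cache"] \
                    and bandwidth.get((p, c), 0) >= vol["port"]:
                return p, c
    raise RuntimeError("cannot migrate while maintaining performance")      # step 19060
```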
- FIG. 20 is a flowchart showing a detailed operation example of creating a BE I / F migration plan in step 18060 of FIG.
- the processing contents of each step by the migration plan creation program 1055 will be described in order.
- in step 20000, it is determined, with reference to the migration plan table of FIG. 17, whether the storage node that will hold the cache for the logical volume has been determined. If it has (yes), it is determined in step 20010 whether that storage node has a margin in storage capacity. The details of this determination are the same as in Example 1 (step 7030 in FIG. 7). If there is a margin (yes), in step 20020 the storage area of that storage node is added to the migration plan.
- the value of the storage area use storage node ID 17005 of the record having the storage node ID 17001 and the VOL ID 17002 corresponding to the logical volume is updated. At that time, the surplus capacity of the storage area of the storage node is subtracted.
- if it is determined in step 20010 that the storage capacity of the storage node has no margin (no), in step 20030 the storage nodes with a margin in storage capacity are listed; the specific method of enumeration is the same as in Example 1 (step 7050 in FIG. 7). In step 20040, it is determined whether, among the listed storage nodes, there is one whose surplus internal-connection transfer bandwidth to the storage node holding the cache is equal to or greater than the BE port usage amount 16001 of the volume information table of FIG. 16.
- if such a storage node exists (yes), in step 20050 one is selected, and the use of that node's storage area is added to the migration plan. At that time, the surplus storage capacity of the storage node and the surplus transfer bandwidth between the storage nodes are subtracted. If none exists (no), in step 20060 the user is notified that the migration cannot be performed while maintaining performance, and the process ends.
- if it is determined in step 20000 that the storage node holding the cache for the logical volume has not been determined (no), in step 20070 it is determined whether the storage area and the cache can be provided by a single storage node.
- specifically, it is determined whether there is a storage node having sufficient surplus performance and surplus capacity for the storage capacity 4006 of the volume information table (FIG. 16), the CPU usage (calculated from the CPU performance 2003 of the storage node information table (FIG. 15) and the CPU usage rate 4004 of the volume information table (FIG. 16)), and the cache capacity 4005 of the volume information table (FIG. 16).
- if not (no), in step 20080, the storage nodes having sufficient surplus capacity for the storage capacity 4006 are listed. In addition, the storage nodes having sufficient surplus performance for the CPU usage (calculated from the CPU performance 2003 of the storage node information table (FIG. 15) and the CPU usage rate 4004 of the volume information table (FIG. 16)) and the cache capacity 4005 of the volume information table (FIG. 16) are listed.
- in step 20090, it is determined whether any pair of the storage nodes listed in step 20080 has a surplus inter-node transfer bandwidth equal to or greater than the BE port usage amount 16001 of the volume information table of FIG. 16. If there is none (no), in step 20060 the user is notified that the migration cannot be performed while maintaining performance, and the process ends. If there is (yes), in step 20100 one such combination is selected, and the storage area of the node with surplus storage capacity and the cache of the node with surplus cache and CPU performance are added to the migration plan.
- the value of the storage area use storage node ID 17005 and the value of the cache use storage node ID 17004 of the record having the storage node ID 17001 and VOL ID 17002 corresponding to the logical volume are updated.
- the respective surpluses of the storage area capacity, CPU performance, cache capacity and transfer bandwidth between storage nodes are subtracted.
- if it is determined in step 20070 that the storage area and cache can be provided by a single storage node (yes), then in step 20110 one such storage node is selected, and the use of that node's storage area and cache is added to the migration plan. At that time, the corresponding surpluses of storage capacity, CPU performance, and cache capacity are subtracted.
- Note that the present invention is not limited to the embodiments described above.
- the components of one embodiment can be added to or replaced with the components of another embodiment without departing from the technical idea of the present invention.
- The embodiments of the present invention can be implemented by software running on a general-purpose computer, by dedicated hardware, or by a combination of software and hardware.
- The information used in the embodiments of the present invention is mainly described in a “table” format. However, this information is not necessarily limited to data represented by a table structure; it may also be represented by other data structures such as lists, databases (DB), or queues.
- Processing described with a program as the subject may be processing performed by a computer such as a management computer, or by a storage system.
- Part or all of the program may be realized by dedicated hardware, or may be modularized.
- The program can be stored on any non-transitory data storage medium readable by a computer or computing system, such as a non-volatile semiconductor memory, a storage device such as a hard disk drive or an SSD (Solid State Drive), or an IC card, SD card, or DVD. It can also be installed into a computer or computing system from a program distribution server or from a non-transitory storage medium.
- management computer; 1100 ... virtual storage system; 1200 ... storage node; 1300 ... host computer; 1400 ... management network; 1500 ... data network; 1600 ... migration source storage node; 1010, 1213 ... CPU; 1020 ... display device; 1030 ... input device; 1040 ... NIC; 1050, 1215 ... memory; 1210 ... controller; 1220 ... storage medium unit; 1230 ... internal connection
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
Description
FIG. 21 is a schematic diagram showing the flow of migration from an old configuration with a single physical storage node to a new configuration with a virtual storage system composed of a plurality of physical storage nodes.
In the new configuration, the plurality of physical storage nodes constituting the virtual storage system have identical specifications: a CPU performance of 100 MIPS and a port performance of 100 Mbps. The physical storage nodes are internally interconnected by an ASIC, which can relay I/O commands and data transfers with almost no degradation in response performance. This transfer performance is 50 Mbps.
The input device 1030 is a device for receiving instructions from the administrator. The display device 1020 is a device that displays the results of processing corresponding to the administrator's instructions, the system status, and the like. The NIC 1040 is an I/F for connecting to the management network 1400.
The CPU 1010 operates according to programs stored in the memory 1050.
The migration plan creation program creates a plan for migrating from the migration source storage node 1600 to the virtual storage system 1100 and carries out the migration. The operation of this program is also described in detail later.
The storage medium unit 1220 is equipped with a plurality of storage media 1221 such as hard disk drives and SSDs (Solid State Drives).
The NIC 1214 is an I/F for connecting to the management network 1400. The FE I/F 1211 is an I/F for connecting to the data network 1500. The BE I/F 1217 is an I/F for connecting to the storage medium unit 1220. The CPU 1213 operates according to programs stored in the memory 1215.
The memory 1215 stores a control program 1216.
This table 1051 stores performance information of each storage node 1200 and of the migration source storage node 1600.
These values are collected in advance by the migration plan creation program 1055 from each of the storage nodes 1200 and 1600 via the management network 1400. Alternatively, the user may enter them directly from the input device 1030.
This table 1052 stores, for each communication connection 1230 between the storage nodes 1200 constituting the virtual storage system 1100, the name of the storage node at the other end of the connection and its transfer bandwidth (Mbps).
These values are collected in advance by the migration plan creation program 1055 from each storage node 1200 via the management network 1400. Alternatively, the user may enter them directly from the input device 1030.
This table 1053 stores information on the logical volumes 1222 that the migration source storage node 1600 provides to the host computer 1300.
These values are collected in advance by the migration plan creation program 1055 from the migration source storage node 1600 via the management network 1400.
This table 1054 stores information that the migration plan creation program 1055 uses when creating a migration plan.
As a rule, each threshold is set by the user; however, the thresholds are not limited to fixed values and can be changed.
After the migration plan creation program 1055 is started, the processing performed by this program in each step is described below in order.
In step 6003, a migration plan is created based on the collected information. The migration plan indicates, for each logical volume 1222 of the migration source storage node 1600, which storage node 1200's storage area is used and which storage node's port is used. The details of the migration plan created in step 6003 are described later with reference to FIG. 7.
In step 6005, as a migration instruction, the migration is carried out using the created migration plan.
In step 7020, the subsequent processing is repeated for each of the sorted logical volumes, once per volume.
At that time, the amounts used by the logical volume are subtracted from the surplus port performance, the surplus CPU performance, the surplus cache capacity, and the surplus storage capacity.
Thereafter, the process proceeds to step 7030 to process the next logical volume.
At that time, the amount used by the logical volume is subtracted from the surplus port performance of the storage node to which the port is allocated, and from the surplus CPU performance, surplus cache capacity, and surplus storage capacity of the storage node to which the storage capacity is allocated. Furthermore, the port performance used by the logical volume is subtracted from the surplus bandwidth between the selected storage nodes.
Thereafter, the process proceeds to step 7030 to process the next logical volume.
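The repeated subtraction described above amounts to keeping a running surplus ledger per node, plus a link ledger that is only charged when the port and the storage area land on different nodes. A minimal sketch, with field names assumed for illustration rather than taken from the patent:

```python
# Illustrative sketch of the surplus bookkeeping in steps 7020-7030: when a
# logical volume is assigned, its usage is deducted from the chosen nodes'
# surplus figures, and from the inter-node link surplus when the port and
# the storage area are on different nodes.
def deduct(surplus, port_node, storage_node, link_surplus, vol):
    surplus[port_node]["port_mbps"] -= vol["port_mbps"]
    surplus[storage_node]["cpu_mips"] -= vol["cpu_mips"]
    surplus[storage_node]["cache_mb"] -= vol["cache_mb"]
    surplus[storage_node]["storage_gb"] -= vol["storage_gb"]
    if port_node != storage_node:
        # I/O entering at the port node must traverse the internal
        # connection to reach the node holding the storage area.
        link_surplus[frozenset((port_node, storage_node))] -= vol["port_mbps"]
```

Because later volumes are placed against the already-reduced surpluses, the plan never over-commits a node or a link.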
This table stores information related to local copy.
The storage node ID 8001 is an ID that uniquely identifies a storage node within the computer system. The copy group ID 8002 is the ID of a copy group uniquely identified within the storage node 1600 indicated by the storage node ID 8001. The primary volume 8003 is the ID of a logical volume uniquely identified within the storage node 1600 indicated by the storage node ID 8001; the logical volume indicated by this ID is the copy source logical volume. The secondary volume 8004 is the ID of a logical volume uniquely identified within the storage node 1600 indicated by the storage node ID 8001; the logical volume indicated by this ID is the copy destination logical volume. The CPU usage rate 8005 is the maximum CPU performance (%) required for this copy. The cache capacity 8006 is the maximum cache capacity (MB) required for this copy.
These values are collected in advance by the migration plan creation program 1055 from each storage node 1200 via the management network 1400.
This table stores information related to snapshots.
The storage node ID 9001 is an ID that uniquely identifies a storage node within the computer system. The SS group ID 9002 is the ID of a snapshot group uniquely identified within the storage node 1600 indicated by the storage node ID 9001. The volume ID 9003 is the ID of a logical volume uniquely identified within the storage node 1600 indicated by the storage node ID 9001; the logical volume indicated by this ID is the snapshot source logical volume. The SS volume ID 9004 is the ID of a logical volume uniquely identified within the storage node 1600 indicated by the storage node ID 9001; the logical volume indicated by this ID is the snapshot volume. The CPU usage rate 9005 is the maximum CPU performance (%) required for this snapshot. The cache capacity 9006 is the maximum cache capacity (MB) required for this snapshot. The pool ID 9007 is the ID of a pool uniquely identified within the storage node 1600 indicated by the storage node ID 9001, and indicates the pool used for the snapshot.
These values are collected in advance by the migration plan creation program 1055 from each storage node 1200 via the management network 1400.
This table stores information on the pools used for snapshots.
The storage node ID 10001 is an ID that uniquely identifies a storage node within the computer system. The pool ID 10002 is the ID of a pool uniquely identified within the storage node 1600 indicated by the storage node ID 10001. The pool VOL ID 10003 indicates the IDs of the logical volumes that constitute the pool indicated by the storage node ID 10001 and the pool ID 10002.
These values are collected in advance by the migration plan creation program 1055 from each storage node 1200 via the management network 1400.
This table groups together the logical volumes that must be allocated storage areas from the same storage node in order for the storage functions described above to operate.
The storage node ID 11001 is an ID that uniquely identifies a storage node within the computer system. The group ID 11002 is an ID indicating a group of logical volumes. The related volume ID 11003 is a list of the IDs of the logical volumes belonging to the group indicated by the group ID 11002. The total port usage 11004 indicates the total port performance (Mbps) used by the volume group indicated by the storage node ID 11001 and the related volume IDs 11003. The total CPU usage rate 11005 indicates the total CPU performance (%) used by that volume group. The total cache capacity 11006 indicates the total cache capacity (MB) used by that volume group. The total storage capacity 11007 indicates the total storage capacity (GB) used by that volume group.
Specifically, the information related to each of the local copy information table (FIG. 8), the snapshot information table (FIG. 9), and the pool information table (FIG. 10) is collected. Thereafter, the related information is registered in the related volume group information table (FIG. 11) according to the functions scheduled to be executed.
In step 12020, the related volume groups are sorted in descending order of the value of the total port usage 11004.
In step 12040, it is determined whether the port performance and the storage capacity used by the related volume group can be allocated from a single storage node.
At that time, the total performance and capacity used by the related volume group are subtracted from the respective surplus performances and surplus capacities. In addition, the port performance used by the related volume group is subtracted from the surplus bandwidth between the selected storage nodes.
Thereafter, the process proceeds to step 12040 to process the next related volume group.
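The ordering and single-node test in steps 12020 and 12040 can be sketched as two small helpers. This is an illustrative reading of the steps, not the patented code; the field names are assumptions:

```python
# Sketch of steps 12020-12040: related volume groups are sorted by total
# port usage, descending, so the heaviest groups are placed first; a group
# fits on one node only if that node can supply both the group's total
# port performance and its total storage capacity.
def order_groups(groups):
    return sorted(groups, key=lambda g: g["total_port_mbps"], reverse=True)

def single_node_fits(node, group):
    return (node["port_mbps"] >= group["total_port_mbps"]
            and node["storage_gb"] >= group["total_storage_gb"])
```

Placing the largest groups first is the usual greedy heuristic for this kind of bin packing: small groups are more likely to fit in whatever surplus remains.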
Since parts of the third embodiment are substantially the same in form and processing as the first embodiment, only the differences are described below.
In the first embodiment, the data communication units 1212 of the storage nodes 1200 constituting the virtual storage system 1100 are directly interconnected by the internal connections 1230 (FIG. 1). In the third embodiment, however, the storage nodes are not directly interconnected but are connected via other storage nodes 1200, as shown by the internal connections 1230 in FIG. 13.
At that time, the amount used by the logical volume is subtracted from the surplus port performance of the storage node to which the port is allocated, and from the surplus CPU performance, surplus cache capacity, and surplus storage capacity of the storage node to which the storage capacity is allocated. Furthermore, the port performance used by the logical volume is subtracted from the surplus bandwidth of the internal connections between the storage nodes along the route.
Thereafter, the process proceeds to step 7030 to process the next logical volume.
Since parts of the fourth embodiment are substantially the same in form and processing as the first embodiment, only the differences are described below.
This table is basically the same as the storage node information table 1051 of FIG. 2, but a BE port performance column 15001 has been added. This column indicates the performance of the BE I/F.
This table is basically the same as the volume information table 1053 of FIG. 4, but a BE port usage column 16001 has been added. This column indicates the BE I/F performance used by the logical volume indicated by the storage node ID 4001 and the VOL ID 4002.
Although this was not illustrated for the first to third embodiments, it is illustrated for the fourth embodiment because the processing is more complex.
In step 18020, the pairs described above are sorted in descending order of their key values, using the port usage and the BE port usage as the sort keys.
In step 18040, it is determined whether the pair at hand is a pair of a logical volume ID and a port usage, or a pair of a logical volume ID and a BE port usage.
When all pairs have been processed, the processing ends.
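The merged sort in step 18020 can be pictured as flattening front-end and back-end port demands into one descending list. The sketch below is illustrative only; the tuple layout and names are assumptions, not from the patent:

```python
# Sketch of step 18020: FE port usages and BE port usages are merged into
# one list of (volume ID, kind, usage) tuples and sorted in descending
# order of usage, so the heaviest consumer, front-end or back-end, is
# handled first.
def sort_port_demands(fe_usage, be_usage):
    pairs = [(vol, "FE", u) for vol, u in fe_usage.items()]
    pairs += [(vol, "BE", u) for vol, u in be_usage.items()]
    return sorted(pairs, key=lambda p: p[2], reverse=True)
```

Tagging each pair with its kind is what lets the later branch (step 18040) decide whether a given entry charges an FE port surplus or a BE port surplus.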
If there is no applicable storage node (no), in step 19060 the user is notified that the migration cannot be performed while maintaining performance, and the processing ends.
If there are applicable storage nodes (yes), in step 19100 one combination of storage nodes is selected from among them, and an entry is added to the migration plan so that the port of a storage node with surplus port performance is used and the cache of a storage node with surplus cache and CPU performance is used.
1100 ... Virtual storage system
1200 ... Storage node
1300 ... Host computer
1400 ... Management network
1500 ... Data network
1600 ... Migration source storage node
1010, 1213 ... CPU
1020 ... Display device
1030 ... Input device
1040 ... NIC
1050, 1215 ... Memory
1210 ... Controller
1220 ... Storage medium unit
1230 ... Internal connection
Claims (13)
- A storage system migration scheme comprising: a host computer; a first physical storage node; a plurality of second physical storage nodes connected to one another; and a management computer that manages the first physical storage node and the plurality of second physical storage nodes, wherein the plurality of second physical storage nodes provide a virtual storage system by responding to the host computer with an identical identifier, and, when migrating from the first physical storage node to the virtual storage system, the management computer selects resources from the second physical storage nodes as migration destinations, based on configuration information and performance information of the first and the plurality of second physical storage nodes and on load information of the volumes provided by the first physical storage node, on the condition that the load falls within the bandwidth of the transfer paths between the second physical storage nodes.
- The storage system migration scheme according to claim 1, wherein the configuration information and the performance information are information on the ports, CPUs, caches, and storage media of the first and the plurality of second physical storage nodes, and the load information is the usage of the port, the CPU, the cache capacity, and the storage medium capacity of the physical storage node constituting the volumes provided by the first physical storage node.
- The storage system migration scheme according to claim 2, wherein the respective usages resulting from the migration are subtracted from the performance information on the ports, CPUs, caches, and storage media.
- The storage system migration scheme according to claim 1, wherein the bandwidth range of the transfer paths between the second physical storage nodes is calculated using a utilization rate of the transfer path bandwidth corresponding to a threshold that is set in advance or arbitrarily.
- The storage system migration scheme according to claim 1, wherein, if migration from the first physical storage node to a single one of the second physical storage nodes within the virtual storage system is possible, the management computer preferentially executes that migration.
- The storage system migration scheme according to any one of claims 1 to 5, wherein the computer system provides at least one of a logical volume replication function and a snapshot function, and load information on the volumes used when executing at least one of the logical volume replication function and the snapshot function is added to the load information of the volumes provided by the first physical storage node that is used when migrating from the first physical storage node to the virtual storage system.
- The storage system migration scheme according to any one of claims 1 to 5, wherein, when the second physical storage nodes are interconnected in a daisy chain, the resources are further allocated so as to reduce the number of second physical storage nodes traversed.
- The storage system migration scheme according to any one of claims 1 to 5, wherein the transfer performance for I/O commands issued to the storage medium units constituting the first physical storage node and the plurality of second physical storage nodes is added to the information on which the resource allocation decision is based.
- A storage system migration method for a system having a management computer that manages a first physical storage node and a plurality of second physical storage nodes, and a virtual storage system provided by the plurality of second physical storage nodes responding to a host computer with an identical identifier, wherein, when migrating from the first physical storage node to the virtual storage system, the management computer performs: a first step of acquiring configuration information and performance information of the first and the plurality of second physical storage nodes; a second step of acquiring load information of the volumes provided by the first physical storage node; and a third step of selecting resources from the second physical storage nodes as migration destinations, based on the configuration information, the performance information, and the load information, on the condition that the load falls within the bandwidth of the transfer paths between the second physical storage nodes.
- The storage system migration method according to claim 9, wherein the configuration information and the performance information are information on the ports, CPUs, caches, and storage media of the first and the plurality of second physical storage nodes, and the load information is the usage of the port, the CPU, the cache capacity, and the storage medium capacity of the physical storage node constituting the volumes provided by the first physical storage node.
- The storage system migration method according to claim 10, wherein the management computer further performs a fourth step of subtracting the respective usages resulting from the migration from the performance information on the ports, CPUs, caches, and storage media.
- The storage system migration method according to claim 9, wherein the third step includes calculating the bandwidth range of the transfer paths between the second physical storage nodes using a utilization rate of the transfer path bandwidth corresponding to a threshold that is set in advance or arbitrarily.
- The storage system migration method according to claim 9, wherein the management computer further performs, prior to the third step, a step of preferentially executing migration from the first physical storage node to a single one of the second physical storage nodes within the virtual storage system if such migration is possible.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2013/083471 WO2015087442A1 (ja) | 2013-12-13 | 2013-12-13 | ストレージシステムの移行方式および移行方法 |
JP2015552268A JP5973089B2 (ja) | 2013-12-13 | 2013-12-13 | ストレージシステムの移行方式および移行方法 |
US14/767,137 US10182110B2 (en) | 2013-12-13 | 2013-12-13 | Transfer format for storage system, and transfer method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2013/083471 WO2015087442A1 (ja) | 2013-12-13 | 2013-12-13 | ストレージシステムの移行方式および移行方法 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015087442A1 true WO2015087442A1 (ja) | 2015-06-18 |
Family
ID=53370783
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2013/083471 WO2015087442A1 (ja) | 2013-12-13 | 2013-12-13 | ストレージシステムの移行方式および移行方法 |
Country Status (3)
Country | Link |
---|---|
US (1) | US10182110B2 (ja) |
JP (1) | JP5973089B2 (ja) |
WO (1) | WO2015087442A1 (ja) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017037800A1 (ja) * | 2015-08-28 | 2017-03-09 | 株式会社日立製作所 | ストレージシステムおよびその制御方法 |
WO2017141408A1 (ja) * | 2016-02-18 | 2017-08-24 | 株式会社日立製作所 | 方法、媒体及び計算機システム |
WO2018131133A1 (ja) * | 2017-01-13 | 2018-07-19 | 株式会社日立製作所 | データ移行システム及びデータ移行制御方法 |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160041996A1 (en) | 2014-08-11 | 2016-02-11 | Netapp, Inc. | System and method for developing and implementing a migration plan for migrating a file system |
US10860529B2 (en) * | 2014-08-11 | 2020-12-08 | Netapp Inc. | System and method for planning and configuring a file system migration |
US10684781B1 (en) * | 2015-12-23 | 2020-06-16 | The Mathworks, Inc. | Big data read-write reduction |
US10156999B2 (en) * | 2016-03-28 | 2018-12-18 | Seagate Technology Llc | Dynamic bandwidth reporting for solid-state drives |
US10956212B1 (en) | 2019-03-08 | 2021-03-23 | The Mathworks, Inc. | Scheduler for tall-gathering algorithms that include control flow statements |
US10936220B2 (en) * | 2019-05-02 | 2021-03-02 | EMC IP Holding Company LLC | Locality aware load balancing of IO paths in multipathing software |
JP7191003B2 (ja) * | 2019-12-17 | 2022-12-16 | 株式会社日立製作所 | ストレージシステムおよびストレージ管理方法 |
JP7380363B2 (ja) * | 2020-03-19 | 2023-11-15 | 富士通株式会社 | 構築管理装置、情報処理システム及び構築管理プログラム |
JP2022175427A (ja) * | 2021-05-13 | 2022-11-25 | 株式会社日立製作所 | ストレージシステム及びストレージ管理方法 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007286709A (ja) * | 2006-04-13 | 2007-11-01 | Hitachi Ltd | ストレージシステム及びストレージシステムのデータ移行方法 |
JP2007299161A (ja) * | 2006-04-28 | 2007-11-15 | Hitachi Ltd | San管理方法およびsan管理システム |
JP2008108050A (ja) * | 2006-10-25 | 2008-05-08 | Hitachi Ltd | I/oの割り振り比率に基づいて性能を管理する計算機システム、計算機及び方法 |
JP2008293233A (ja) * | 2007-05-24 | 2008-12-04 | Hitachi Ltd | 計算機システム、その制御方法およびシステム管理装置 |
US20120216005A1 (en) * | 2011-02-23 | 2012-08-23 | Hitachi, Ltd. | Storage system and management method of the storage system |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005228278A (ja) * | 2004-01-14 | 2005-08-25 | Hitachi Ltd | 記憶領域の管理方法、管理装置及び管理プログラム |
JP4963892B2 (ja) | 2006-08-02 | 2012-06-27 | 株式会社日立製作所 | 仮想ストレージシステムの構成要素となることが可能なストレージシステムの制御装置 |
JP4235220B2 (ja) * | 2006-10-25 | 2009-03-11 | 株式会社日立製作所 | 計算機システムおよびデータ移行方法 |
JP4814119B2 (ja) * | 2007-02-16 | 2011-11-16 | 株式会社日立製作所 | 計算機システム、ストレージ管理サーバ、及びデータ移行方法 |
WO2011021909A2 (en) * | 2009-08-21 | 2011-02-24 | Samsung Electronics Co., Ltd. | Method and apparatus for providing contents via network, method and apparatus for receiving contents via network, and method and apparatus for backing up data via network, backup data providing device, and backup system |
JP5241671B2 (ja) * | 2009-10-05 | 2013-07-17 | 株式会社日立製作所 | 記憶装置のデータ移行制御方法 |
EP2419817A1 (en) * | 2009-10-09 | 2012-02-22 | Hitachi, Ltd. | Storage system and control method thereof, implementing data reallocation in case of load bias |
US8793463B2 (en) * | 2011-09-12 | 2014-07-29 | Microsoft Corporation | Allocation strategies for storage device sets |
WO2013190562A1 (en) * | 2012-06-22 | 2013-12-27 | Hewlett-Packard Development Company, L.P. | Optimal assignment of virtual machines and virtual disks using multiary tree |
US20150363422A1 (en) * | 2013-01-10 | 2015-12-17 | Hitachi, Ltd. | Resource management system and resource management method |
US9680933B2 (en) * | 2013-03-15 | 2017-06-13 | Hitachi, Ltd. | Computer system |
-
2013
- 2013-12-13 WO PCT/JP2013/083471 patent/WO2015087442A1/ja active Application Filing
- 2013-12-13 US US14/767,137 patent/US10182110B2/en active Active
- 2013-12-13 JP JP2015552268A patent/JP5973089B2/ja not_active Expired - Fee Related
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007286709A (ja) * | 2006-04-13 | 2007-11-01 | Hitachi Ltd | ストレージシステム及びストレージシステムのデータ移行方法 |
JP2007299161A (ja) * | 2006-04-28 | 2007-11-15 | Hitachi Ltd | San管理方法およびsan管理システム |
JP2008108050A (ja) * | 2006-10-25 | 2008-05-08 | Hitachi Ltd | I/oの割り振り比率に基づいて性能を管理する計算機システム、計算機及び方法 |
JP2008293233A (ja) * | 2007-05-24 | 2008-12-04 | Hitachi Ltd | 計算機システム、その制御方法およびシステム管理装置 |
US20120216005A1 (en) * | 2011-02-23 | 2012-08-23 | Hitachi, Ltd. | Storage system and management method of the storage system |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017037800A1 (ja) * | 2015-08-28 | 2017-03-09 | 株式会社日立製作所 | ストレージシステムおよびその制御方法 |
WO2017141408A1 (ja) * | 2016-02-18 | 2017-08-24 | 株式会社日立製作所 | 方法、媒体及び計算機システム |
WO2018131133A1 (ja) * | 2017-01-13 | 2018-07-19 | 株式会社日立製作所 | データ移行システム及びデータ移行制御方法 |
Also Published As
Publication number | Publication date |
---|---|
US20150373105A1 (en) | 2015-12-24 |
US10182110B2 (en) | 2019-01-15 |
JPWO2015087442A1 (ja) | 2017-03-16 |
JP5973089B2 (ja) | 2016-08-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5973089B2 (ja) | ストレージシステムの移行方式および移行方法 | |
JP5512833B2 (ja) | ストレージの仮想化機能と容量の仮想化機能との両方を有する複数のストレージ装置を含んだストレージシステム | |
US8984221B2 (en) | Method for assigning storage area and computer system using the same | |
US8307171B2 (en) | Storage controller and storage control method for dynamically assigning partial areas of pool area as data storage areas | |
JP5439581B2 (ja) | ストレージシステム、ストレージ装置、ストレージシステムの記憶領域の最適化方法 | |
JP5981563B2 (ja) | 情報記憶システム及び情報記憶システムの制御方法 | |
US8402239B2 (en) | Volume management for network-type storage devices | |
JP4684864B2 (ja) | 記憶装置システム及び記憶制御方法 | |
US8650381B2 (en) | Storage system using real data storage area dynamic allocation method | |
JP6340439B2 (ja) | ストレージシステム | |
US20130036266A1 (en) | First storage control apparatus and storage system management method | |
US20120297156A1 (en) | Storage system and controlling method of the same | |
JP2006285808A (ja) | ストレージシステム | |
WO2014162586A1 (ja) | ストレージシステムおよびストレージシステム制御方法 | |
KR20210022121A (ko) | 구성 가능한 인프라스트럭처에서 스토리지 디바이스 고장 허용을 유지하기 위한 방법 및 시스템 | |
JP2022541261A (ja) | リソース割振り方法、記憶デバイス、および記憶システム | |
WO2015198441A1 (ja) | 計算機システム、管理計算機、および管理方法 | |
JP2015532734A (ja) | 物理ストレージシステムを管理する管理システム、物理ストレージシステムのリソース移行先を決定する方法及び記憶媒体 | |
WO2015121998A1 (ja) | ストレージシステム | |
US20220382602A1 (en) | Storage system | |
WO2016194096A1 (ja) | 計算機システム及び計算機システムの管理方法 | |
JP4871758B2 (ja) | ボリューム割当方式 | |
WO2012070090A1 (en) | Computer system and its control method | |
WO2016016949A1 (ja) | 計算機システムおよび管理計算機の制御方法 | |
JP5362751B2 (ja) | 計算機システム、管理計算機およびストレージ管理方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 13899083 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14767137 Country of ref document: US |
|
ENP | Entry into the national phase |
Ref document number: 2015552268 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 13899083 Country of ref document: EP Kind code of ref document: A1 |