WO2012117534A1 - Computer system and control method thereof - Google Patents
- Publication number
- WO2012117534A1 (PCT/JP2011/054727; application JP2011054727W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- volume
- virtual
- computer system
- storage device
- pool
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/0604—Improving or facilitating administration, e.g. storage management
- G06F3/0605—Improving or facilitating administration, e.g. storage management by facilitating the interaction with a user or administrator
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0638—Organizing or formatting or addressing of data
- G06F3/0644—Management of space entities, e.g. partitions, extents, pools
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/067—Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
Definitions
- the present invention relates to a storage system, and more particularly to capacity management of a storage volume pool.
- Thin provisioning is a technique for assigning a virtual volume to a host computer.
- a virtual volume is a volume to which a data storage area is allocated from a pooled physical disk only when data is written from the host computer to the virtual volume.
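The allocate-on-write behavior described above can be sketched in a few lines of Python. This is an illustrative model only (the page size, class, and method names are assumptions, not the patent's implementation): real storage is consumed from the pool only when a region of the virtual volume is written for the first time.

```python
# Illustrative sketch of thin provisioning (names and page size are
# assumptions): a real storage page is taken from the pool only when a
# virtual-volume page is written for the first time.

PAGE_SIZE = 4096  # hypothetical allocation unit

class ThinVirtualVolume:
    def __init__(self, virtual_capacity, pool):
        self.virtual_capacity = virtual_capacity  # size seen by the host
        self.pool = pool          # shared list of free real pages
        self.page_map = {}        # virtual page index -> real page

    def write(self, offset, data):
        page = offset // PAGE_SIZE
        if page not in self.page_map:     # allocate only on first write
            if not self.pool:
                raise RuntimeError("volume pool exhausted")
            self.page_map[page] = self.pool.pop()
        # (actual data placement on the real page is omitted)

    def allocated_bytes(self):
        return len(self.page_map) * PAGE_SIZE

pool = list(range(1024))                   # 1024 free real pages
vol = ThinVirtualVolume(10 * 2**30, pool)  # 10 GiB virtual capacity
vol.write(0, b"x")
vol.write(2 * PAGE_SIZE, b"y")
print(vol.allocated_bytes())               # -> 8192: only two pages are real
```

Note that the pool is shared: several virtual volumes built over the same pool draw from the same free pages, which is why pool free capacity (rather than virtual capacity) matters when choosing a write destination.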
- VMs virtual machines
- a hypervisor, which is a server virtualization management program, manages a plurality of volumes at a time and provides the administrator with a single data storage area.
- the hypervisor creates the image file that is the entity of a VM without considering volume pool resource usage (used capacity, I/O load, etc.), so the file is not necessarily placed in a virtual volume whose pool has ample free space. As a result, an imbalance in resource usage between volume pools may arise or be aggravated.
- the present application includes a plurality of means for solving the above-described problems.
- a computer system comprising one or more storage devices and one or more host computers connected to the storage devices.
- the host computer includes a first interface connected to the storage device, a first processor connected to the first interface, and a first storage device connected to the first processor,
- the first processor controls one or more virtual machines each executing one or more applications
- the storage device includes a controller connected to the host computer, one or more physical storage devices connected to the controller, a plurality of virtual volumes, and a plurality of pools each including a real storage area of the physical storage devices.
- when a data write request specifying a virtual volume as the write destination is received from the host computer, a real storage area included in the pool corresponding to the write-destination virtual volume is allocated to the write-destination virtual volume, and the data is stored in the allocated real storage area.
- based on information held by the storage device, the computer system determines the priority order of the volumes to be used as write destinations by the host computer, and the determined priority order is used when selecting a write destination.
- FIG. 3 is a block diagram showing the internal configuration of the storage apparatus according to the first embodiment of the present invention. FIG. 4 is a block diagram showing an outline of the software configuration of the IT system.
- FIG. 5 is a sequence diagram showing processing from when the management software according to the first embodiment of the present invention acquires configuration information of a hypervisor and each storage device until an additional module holds a write destination candidate volume management table.
- FIG. 6 is a sequence diagram illustrating processing from when an administrator performs an operation involving file creation on a hypervisor until an additional module writes a file to a specified volume of each storage device in the first embodiment of the present invention.
- FIG. 10 is a sequence diagram illustrating a process from when the management software according to the second embodiment of the present invention acquires configuration information of a hypervisor and each storage device until the management software holds a write destination candidate volume management table.
- FIG. 10 is a sequence diagram showing processing from when an administrator performs a file creation operation on management software until the management software writes a file to a specified volume of each storage device through an interface in the second embodiment of the present invention.
- FIG. 1 is a block diagram showing an overall outline of an IT system according to the first embodiment of the present invention.
- This system includes a host computer 100, a management server 200, and one or more storage apparatuses 400, which are connected to each other via a LAN 301.
- the storage apparatus 400 and the host computer 100 are connected to each other via a SAN (Storage Area Network) 300.
- the system may include a plurality of host computers. In this case, in addition to the management LAN 301, a plurality of host computers may be connected to each other via a data transfer LAN.
- FIG. 2A is a block diagram illustrating an internal configuration of the host computer 100 according to the first embodiment of this invention.
- the host computer 100 includes one or more CPUs 110, one or more memories 111, one or more SAN adapters 112, one or more LAN adapters 113, and one or more storage devices 114, which are connected to each other via an internal bus 115.
- the host computer 100 is connected to the storage apparatus 400 via the SAN adapter 112.
- the host computer 100 is connected to the management server 200 via the LAN adapter 113.
- the host computer 100 does not necessarily include the storage device 114. When the host computer 100 does not include the storage device 114, the host computer 100 uses a volume in the storage apparatus 400 as a software storage area.
- FIG. 2B is a block diagram illustrating an internal configuration of the management server 200 according to the first embodiment of this invention.
- the internal configuration of the management server 200 is the same as that of the host computer 100, but does not necessarily include a SAN adapter.
- the management server 200 shown in FIG. 2B includes one or more CPUs 210, one or more memories 211, one or more LAN adapters 212, and one or more storage devices 213, which are connected to each other via an internal bus 214.
- the management server 200 is connected to the storage apparatus 400 and the host computer 100 via the LAN adapter 212.
- FIG. 3 is a block diagram illustrating an internal configuration of the storage apparatus 400 according to the first embodiment of this invention.
- the storage apparatus 400 includes one or more controllers 410 and one or more physical disks 411.
- the controller 410 includes one or more CPUs 412, one or more memories 413, one or more NVRAMs (Non-Volatile Random Access Memory) 414, one or more cache memories 415, one or more back-end interfaces 416 and 417, one or more LAN adapters 418, and one or more SAN adapters 419, which are connected to each other via an internal bus 420.
- the controller 410 is connected to the physical disk 411 via the back-end interfaces 416 and 417. Further, the storage apparatus 400 is connected to the management server 200 via the LAN adapter 418. The storage apparatus 400 is connected to the host computer 100 via a SAN adapter 419.
- FIG. 4 is a block diagram showing an outline of the software configuration of the IT system in the first embodiment of the present invention.
- one or more virtual machines 500, one or more hypervisors 501, and an additional module 502 operate on the host computer 100.
- the additional module is a program module added to the hypervisor 501 in the present embodiment.
- these pieces of software, that is, the programs corresponding to the virtual machine 500, the hypervisor 501, and the additional module 502, are stored in the storage device 114 or the storage apparatus 400, loaded into the memory 111, and executed by the CPU 110.
- in the following description, the virtual machine 500, the hypervisor 501, or the additional module 502 may be described as the subject of a sentence.
- in practice, the program described above is executed by the processor (CPU 110), which performs the predetermined processing using the memory 111 and the communication ports (that is, communication control devices such as the SAN adapter 112 and the LAN adapter 113).
- accordingly, the subject in the description may be replaced with the processor.
- the processing executed by the virtual machine 500, the hypervisor 501, or the additional module 502 can be described as processing executed by the CPU 110 according to the above program.
- information held by the virtual machine 500, the hypervisor 501, or the additional module 502 is actually held in the memory 111 or the storage device 114.
- the processing disclosed with the processor as the subject may also be described as processing executed by a computer or information processing apparatus including the processor (the host computer 100 in the case of the virtual machine 500, the hypervisor 501, and the additional module 502 described above), or by the computer system including it. Part or all of the program may be realized by dedicated hardware. The various programs may be installed in each computer by a program distribution server or from a non-transitory computer-readable storage medium. The same applies to the management server 200 and the storage apparatus 400 described below.
- Management software 600 operates on the management server 200.
- the management software 600 is stored in the storage device 213, loaded into the memory 211, and executed using the CPU 210.
- the processing executed by the management software 600 may also be described as processing executed by the CPU 210 according to the management software 600, processing executed by the management server 200, or processing executed by the computer system. In the following description, information held by the management software 600 is actually held in the memory 211 or the storage device 213.
- the management server 200 further has an input / output device (not shown). Examples of input / output devices include a display, a keyboard, and a pointing device, but other devices may be used.
- alternatively, the management server 200 may have a serial interface or an Ethernet interface as the input/output device; a display computer (not shown) having a display, a keyboard, or a pointing device is connected to that interface, display information is transmitted to the display computer, and the display computer displays the information and accepts input, thereby taking the place of input and display by the input/output device.
- a set of one or more computers that manage the computer system and display the display information of the present invention may be referred to as a management system.
- when the management server 200 displays the display information, the management server 200 is the management system.
- a combination of the management server 200 and a display computer (not shown) is also a management system.
- a plurality of computers may realize processing equivalent to that of the management server 200.
- in that case, the plurality of computers (including the display computer, when the display computer performs the display) constitute the management system.
- there are one or more volume pools 700 on the storage apparatus 400, and one or more virtual volumes 701 (described later) created from them.
- the virtual volume 701 is assigned to the host computer 100 via the SAN 300. Note that not only the virtual volume 701 but also logical volumes described later may be mixed on the storage apparatus 400. Further, the hypervisor 501 may straddle a plurality of host computers.
- the hypervisor 501 provides an environment for executing a plurality of virtual machines.
- the hypervisor 501 recognizes and manages volumes (that is, virtual volumes 701 and logical volumes 702 (see FIG. 5)). Further, the hypervisor 501 manages in which volume the image file 503 of the virtual machine 500 is stored, and executes a writing process to the image file 503 and the like. However, when a new image file is created, the additional module 502 that captures the new file creation process of the hypervisor 501 determines the image file storage destination volume.
- FIG. 19 is an explanatory diagram illustrating an example of the host volume management table 2100 held by the hypervisor 501 according to the first embodiment of this invention.
- in the following, information held in the computer system is described using the expression "table", but the information may be expressed in a data structure other than a table, for example, a list, a database (DB), or a queue.
- the “host volume management table” is sometimes called “host volume management information”.
- when describing the contents of each piece of information, expressions such as "identification information", "identifier", "name", or "ID" may be used, and these are interchangeable. The same applies to tables other than the host volume management table 2100 described below.
- the host volume management table 2100 includes a host volume ID column 2101, a storage ID column 2102, a storage volume ID column 2103, and a free capacity column 2104.
- in the host volume ID column 2101, an identifier that the hypervisor 501 recognizes and assigns to each volume (virtual volume or logical volume) is registered.
- Registered in the storage ID column 2102 is an identifier of a storage apparatus in which a storage volume (that is, a volume recognized by the storage apparatus 400) corresponding to the host volume (that is, a volume recognized by the host computer 100) exists.
- in the storage volume ID column 2103, the identifier of the storage volume corresponding to the host volume is registered.
- in the free capacity column 2104, the free capacity of the host volume is registered.
- the host volume management table 2100 may further include a column for registering information other than the above, for example, information indicating the I / O amount for each host volume.
- the host computer 100 may periodically measure the amount of I / O to each host volume and register the result in the host volume management table 2100.
- the storage ID and the storage volume ID are information that is assigned and held by the storage apparatus 400, but the hypervisor 501 can acquire such information from the storage apparatus 400 by using, for example, SCSI Inquiry.
- the management software 600 acquires configuration information from the host computer 100 and the storage apparatus 400 and stores it in the management table. In addition, the management software 600 transmits the priority information of the write destination volume to the additional module 502. Details of the management software 600 will be described later.
- FIG. 5 is a block diagram showing an outline of the software configuration in the storage apparatus 400 according to the first embodiment of this invention.
- on the storage apparatus 400, a storage agent 703, a virtual volume manager 704, and a logical volume manager 705 operate. These are stored on the physical disk 411 or in the NVRAM 414, loaded into the memory 413, and executed by the CPU 412.
- processing executed by the storage agent 703, virtual volume manager 704, or logical volume manager 705 is processing executed by the CPU 412 according to a program, processing executed by the storage device 400, or It can also be described as processing executed by the computer system.
- information held by the storage agent 703, the virtual volume manager 704, or the logical volume manager 705 is actually held in the memory 413, the NVRAM 414, or on the physical disk 411.
- the virtual volume 701 is allocated to the host computer 100, whereas a logical volume 702 belonging to the volume pool 700 is used for allocation to the virtual volume 701 and is not assigned directly to the host computer 100 (that is, without going through the virtual volume 701).
- the storage apparatus 400 may include a logical volume 702 that does not belong to the volume pool 700.
- a logical volume 702 that does not belong to the volume pool 700 cannot be assigned to the virtual volume 701, but can be assigned directly to the host computer 100 without going through the virtual volume 701.
- virtual volumes 701 and logical volumes 702 can thus be mixed on the storage apparatus 400; both a logical volume 702 that does not belong to the volume pool 700 and a virtual volume 701 can be assigned to the host computer 100.
- the logical volume manager 705 creates one or more logical volumes 702 from the physical disk 411 and manages the mapping between the logical volume 702 and the physical disk 411.
- FIG. 6 is a conceptual diagram showing the relationship between the logical volume 702 and the physical disk 411 in the first embodiment of the present invention.
- the logical volume 702 is composed of four physical disks 800, 801, 802 and 803.
- the areas in the physical disk labeled 1-1, 1-2, 1-3,... Are areas divided into predetermined sizes and are called stripes.
- the areas labeled P1, P2,... are areas that store parity information of corresponding stripes, and are called parity stripes.
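The stripe and parity-stripe layout above can be illustrated with a small sketch. A left-symmetric RAID-5-style rotation over four disks is assumed here for concreteness; the patent text itself does not fix a particular rotation scheme, and all function names are illustrative.

```python
# Illustrative RAID-5-style layout over four disks, as in the stripes
# 1-1, 1-2, 1-3 and parity stripes P1, P2 described above. The rotation
# scheme and names are assumptions, not taken from the patent.

NUM_DISKS = 4

def parity_disk(row):
    """Disk holding the parity stripe for a stripe row (left-symmetric)."""
    return (NUM_DISKS - 1 - row) % NUM_DISKS

def stripe_location(stripe_index):
    """Map a logical stripe index to (disk, row), skipping the parity disk."""
    data_per_row = NUM_DISKS - 1
    row = stripe_index // data_per_row
    col = stripe_index % data_per_row
    disk = col if col < parity_disk(row) else col + 1  # skip parity column
    return disk, row

def parity(blocks):
    """Parity of the data stripes in one row is their bytewise XOR."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

# Row 0: parity on disk 3, data stripes 1-1, 1-2, 1-3 on disks 0, 1, 2.
print([stripe_location(i) for i in range(3)])  # -> [(0, 0), (1, 0), (2, 0)]
```

With this layout, losing any single disk leaves every row reconstructable by XOR-ing the surviving stripes of that row.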
- the logical volume manager 705 holds a volume management table in order to manage the mapping relationship between the logical volume 702 and the physical disk 411.
- FIG. 7 is an explanatory diagram illustrating an example of a volume management table held by the logical volume manager 705 according to the first embodiment of this invention.
- the volume management table includes a logical volume ID column 900, a disk column 901, a RAID level column 902, and a stripe size column 903.
- in the logical volume ID column 900, an identifier assigned to each logical volume by the logical volume manager 705 is registered.
- in the disk column 901, the identifiers of the physical disks that constitute the logical volume are registered.
- in the RAID level column 902, the RAID (Redundant Array of Inexpensive Disks) level used for the configuration of the logical volume is registered.
- in the stripe size column 903, the size of the stripes used for the configuration of the logical volume is registered.
- the virtual volume manager 704 creates one or more virtual volumes 701 from the logical volumes 702 registered in the volume pool 700, and manages the mapping between the virtual volumes 701 and the logical volumes 702.
- FIG. 8 is a conceptual diagram showing the relationship between the virtual volume 701 and the logical volume 702 in the first embodiment of the present invention.
- a storage area of a physical disk (hereinafter referred to as a real storage area) is not directly allocated to the virtual volume 701; instead, a partial area of a logical volume 702, configured as shown in FIG. 6, is allocated to it.
- the virtual volume manager 704 allocates an unused area of the logical volume 702 (that is, an area not yet assigned to any virtual volume 701) to the location in the virtual volume 701 where a write has occurred.
- the area 1003 of the logical volume 702 is assigned to the area 1000 of the virtual volume 701, and the area 1005 of the logical volume 702 is assigned to the area 1001 of the virtual volume 701.
- when the storage apparatus 400 receives a data write request to the area 1000 from the host computer 100, the data is stored in the area 1003 of the logical volume 702 assigned to the area 1000 (more precisely, in the real storage area of the physical disk corresponding to the area 1003).
- as shown in FIG. 7, real storage areas of the physical disks are allocated in advance to all areas in the logical volume 702.
- the area of the logical volume 702 is not allocated to the remaining area 1002 excluding the areas 1000 and 1001 in the virtual volume 701.
- the remaining area 1004 of the logical volume 702 excluding the areas 1003 and 1005 has not yet been assigned to the virtual volume 701 (that is, unused).
- when the storage apparatus 400 newly receives a data write request to the area 1002, at least a part of the area 1004 is allocated to at least the part of the area 1002 where the data is written, and the data is stored in the allocated area.
- in this embodiment, a partial area of the logical volume 702 is allocated to the virtual volume 701, but a real storage area of the physical disk may instead be allocated directly to the virtual volume. Even when a partial area of the logical volume 702 is allocated as described above, a real storage area of the physical disk is ultimately associated with the areas 1000 and 1001 of the virtual volume 701 as a result of the allocation.
- in order to manage the relationship between the virtual volume 701 and the logical volume 702, the allocation status of the virtual volume 701, and the usage status of the logical volume 702, the virtual volume manager 704 holds a virtual volume management table 1100 and an unused area management table 1200.
- FIG. 9 is an explanatory diagram illustrating an example of the virtual volume management table 1100 held by the virtual volume manager 704 according to the first embodiment of this invention.
- the virtual volume management table 1100 is roughly divided into a virtual volume column indicating a position in the virtual volume and a logical volume column indicating a corresponding area in the logical volume.
- the virtual volume column includes a volume ID column 1101, a start LBA column 1102, and an end LBA column 1103. In the volume ID column 1101, the identifier assigned to the virtual volume is registered.
- in the start LBA column 1102, the start LBA (Logical Block Address) of the area in the virtual volume is registered.
- in the end LBA column 1103, the end LBA of the area in the virtual volume is registered.
- the logical volume column includes a volume ID column 1104, a start LBA column 1105, and an end LBA column 1106.
- Registered in the volume ID column 1104 is an identifier of a logical volume having a data storage area allocated to the corresponding virtual volume area.
- in the start LBA column 1105, the start LBA of the area in the logical volume is registered.
- in the end LBA column 1106, the end LBA of the area in the logical volume is registered.
- for example, "3", "0x00000000", and "0x0001af0f" are registered in the volume ID column 1101, the start LBA column 1102, and the end LBA column 1103 of the first row of the virtual volume management table 1100 in FIG. 9.
- in the volume ID column 1104, the start LBA column 1105, and the end LBA column 1106 of the same row, "0", "0x00000000", and "0x0001af0f" are registered, respectively.
- that is, the area from LBA "0x00000000" to LBA "0x0001af0f" of the logical volume 702 identified by volume ID "0" is allocated to the area from LBA "0x00000000" to LBA "0x0001af0f" of the virtual volume 701 identified by volume ID "3".
- since real storage areas of the physical disk 411 are allocated in advance to the areas in the logical volume (see FIGS. 6 and 7), the real storage area of the physical disk 411 that is ultimately allocated to an area in the virtual volume 701 can be identified based on the virtual volume management table 1100.
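The address translation just described can be sketched as a lookup over rows shaped like the virtual volume management table 1100. The row values follow the example above; the function name is illustrative.

```python
# Hedged sketch: resolving a virtual-volume LBA to the corresponding
# logical-volume LBA using rows shaped like table 1100. Names are
# illustrative; row values follow the FIG. 9 example in the text.

ROWS = [
    # (virt vol ID, virt start LBA, virt end LBA,
    #  logical vol ID, logical start LBA, logical end LBA)
    (3, 0x00000000, 0x0001af0f, 0, 0x00000000, 0x0001af0f),
]

def resolve(virt_vol, lba):
    for vv, vs, ve, lv, ls, le in ROWS:
        if vv == virt_vol and vs <= lba <= ve:
            return lv, ls + (lba - vs)  # same offset within the mapped range
    return None                         # unallocated: no real area yet

print(resolve(3, 0x100))    # maps into logical volume 0 at the same offset
print(resolve(3, 0x20000))  # None: like area 1002, not yet allocated
```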
- FIG. 10 is an explanatory diagram illustrating an example of the unused area management table 1200 held by the virtual volume manager 704 according to the first embodiment of this invention.
- the unused area management table 1200 includes a logical volume ID column 1201, a start LBA column 1202, and an end LBA column 1203.
- in the logical volume ID column 1201, the identifier of a logical volume 702 registered in the volume pool 700 for allocating data storage areas to virtual volumes 701 is registered.
- in the start LBA column 1202 and the end LBA column 1203, the start LBA and the end LBA of an unused area 1004 in the logical volume 702 are registered.
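The following sketch shows how the virtual volume manager might consume entries of the unused area management table 1200 when allocating a new area to a virtual volume. The splitting of free extents is an assumption for illustration; the row shapes follow tables 1100 and 1200.

```python
# Illustrative sketch (not the patent's code): on a write to an
# unallocated region, take space from the first sufficiently large
# unused extent, record the mapping, and shrink the free extent.

unused = [  # (logical volume ID, start LBA, end LBA), as in table 1200
    (0, 0x0001af10, 0x000fffff),
    (1, 0x00000000, 0x000fffff),
]
mappings = []  # rows shaped like the virtual volume management table 1100

def allocate(virt_vol, virt_start, length):
    for i, (lv, s, e) in enumerate(unused):
        if e - s + 1 >= length:
            mappings.append((virt_vol, virt_start, virt_start + length - 1,
                             lv, s, s + length - 1))
            if s + length <= e:
                unused[i] = (lv, s + length, e)  # shrink the free extent
            else:
                del unused[i]                    # extent fully consumed
            return True
    return False  # pool exhausted: no extent large enough

allocate(3, 0x00020000, 0x100)
print(mappings[-1])
print(unused[0])
```

A first-fit search is used here for simplicity; any placement policy that keeps tables 1100 and 1200 consistent would do.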
- the storage agent 703 manages the logical volume 702 and the virtual volume 701.
- FIG. 20 is an explanatory diagram illustrating an example of the storage volume management table 2200 held by the storage agent 703 according to the first embodiment of this invention.
- the storage volume management table 2200 includes a storage volume ID column 2201, a pool ID column 2202, and a pool free capacity column 2203.
- in the storage volume ID column 2201, the identifier of a storage volume (that is, a virtual volume 701 or a logical volume 702) is registered.
- Registered in the pool ID column 2202 is an identifier of a volume pool corresponding to a volume identified by the identifier registered in the storage volume ID column 2201 (hereinafter referred to as the volume in the description of FIG. 20).
- in the pool free capacity column 2203, the free capacity of the volume pool corresponding to the volume is registered.
- the free capacity of the volume pool is the capacity of an area of the volume pool 700 that has not yet been allocated to the virtual volume 701.
- when the volume is a logical volume 702 that does not belong to any volume pool 700, no volume pool is associated with the volume. In that case, the pool ID column 2202 is left blank, and the maximum value that can be registered in the pool free capacity column 2203 is registered there, regardless of the volume's actual capacity and free capacity. The reason such a value is registered will be described later (see FIG. 15).
- information regarding the logical volume 702 may be registered in the storage volume management table 2200 in a format other than the above.
- for example, a column indicating whether the volume is a logical volume 702 or a virtual volume 701 may be added to the storage volume management table 2200, and the pool ID column 2202 and the pool free capacity column 2203 may be left blank when the volume is a logical volume 702.
- the storage volume management table 2200 may further include a column other than the above.
- the storage volume management table 2200 may further include a column for registering an I / O amount for each storage volume or an I / O amount for each volume pool.
- the storage apparatus 400 may periodically measure the amount of I / O to each storage volume or volume pool, and register the result in the storage volume management table 2200.
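The convention described above, where a pool-less logical volume gets a blank pool ID and the maximum registrable pool free capacity, can be sketched as follows. The row shape and the MAX_FREE value are illustrative assumptions:

```python
# Sketch of rows shaped like the storage volume management table 2200.
# A volume with no associated pool gets a blank pool ID and the maximum
# registrable free-capacity value, so it always sorts as "most free".
# MAX_FREE and the row layout are assumptions for illustration.

MAX_FREE = 2**64 - 1  # hypothetical "maximum value that can be registered"

def table_row(volume_id, pool_id=None, pool_free=None):
    if pool_id is None:            # logical volume outside any pool
        return (volume_id, "", MAX_FREE)
    return (volume_id, pool_id, pool_free)

rows = [table_row(0, pool_id=1, pool_free=500), table_row(7)]
# Ranking write-destination candidates by descending pool free capacity
# then naturally prefers pool-less logical volumes (cf. the FIG. 15
# rationale mentioned above).
rows.sort(key=lambda r: r[2], reverse=True)
print(rows[0][0])  # -> 7
```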
- FIG. 11 is an explanatory diagram showing an example of the host / storage volume management table 1600 held by the management software 600 according to the first embodiment of this invention.
- the host storage volume management table 1600 is roughly divided into a host side information column and a storage side information column.
- the host side information column includes a host volume ID column 1601 and a free capacity column 1602.
- a volume identifier recognized by the hypervisor 501 is registered in the host volume ID column 1601.
- a volume identified by an identifier registered in the host volume ID column 1601 is referred to as the volume.
- the volume is either a virtual volume 701 or a logical volume 702 assigned to the host computer.
- in the free capacity column 1602, the free capacity of the volume is registered.
- the storage side information column includes a storage ID column 1603, a storage volume ID column 1604, a pool ID column 1605, and a pool free capacity column 1606.
- in the storage ID column 1603 and the storage volume ID column 1604, the identifiers of the storage apparatus and the storage volume corresponding to the host volume (in other words, the identifier of the storage apparatus 400 that stores the volume, and the identifier the storage apparatus 400 has assigned to the volume) are respectively registered.
- in the pool ID column 1605, the identifier of the volume pool corresponding to the volume is registered.
- in the pool free capacity column 1606, the free capacity of the volume pool 700 corresponding to the volume is registered.
- the identifier of the host volume ID column 1601 and the identifier of the storage volume ID column 1604 are generally different.
- when the volume is a logical volume 702 that does not belong to a volume pool, the pool ID column 1605 is blank, as in the case of FIG. 20, and the value of the pool free capacity column 1606 is the maximum value that can be registered.
- the host side information column and the storage side information column may include columns other than the above.
- when the host volume management table 2100 includes a column for registering the I/O amount for each host volume, the host side information column of the host/storage volume management table 1600 may also include the same column.
- likewise, when the storage volume management table 2200 includes a column for registering the I/O amount for each storage volume or each volume pool, the storage side information column of the host/storage volume management table 1600 may also include the same column.
- the host/storage volume management table 1600 can be created from the host volume management table 2100 and a table in which the storage volume management table 2200 is augmented with the identifier (storage ID) of each storage apparatus 400. Specifically, the rows of the host volume management table 2100 are compared with the rows of the augmented storage volume management table 2200, and rows whose storage ID and storage volume ID match are joined to create each row of the host/storage volume management table 1600. However, volumes that are not registered in the host volume management table 2100 (that is, volumes that are not recognized by the host computer 100) are not registered in the host/storage volume management table 1600.
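The join described above can be sketched directly. Row shapes and sample values here are illustrative; the key point is that rows are matched on (storage ID, storage volume ID) and volumes not recognized by the host are dropped.

```python
# Hedged sketch of building the host/storage volume management table
# 1600: join host volume management table 2100 rows with (storage ID +
# storage volume management table 2200) rows on matching IDs. Sample
# values are invented for illustration.

host_rows = [     # (host vol ID, storage ID, storage vol ID, free capacity)
    ("hv1", "st1", 3, 100),
    ("hv2", "st1", 5, 200),
]
storage_rows = [  # (storage ID, storage vol ID, pool ID, pool free capacity)
    ("st1", 3, 1, 5000),
    ("st1", 9, 2, 7000),  # not recognized by the host: never joined
]

def build_host_storage_table(host_rows, storage_rows):
    index = {(sid, svid): (pid, free)
             for sid, svid, pid, free in storage_rows}
    joined = []
    for hvid, sid, svid, hfree in host_rows:
        if (sid, svid) in index:  # join on (storage ID, storage volume ID)
            pid, pfree = index[(sid, svid)]
            joined.append((hvid, hfree, sid, svid, pid, pfree))
    return joined

print(build_host_storage_table(host_rows, storage_rows))
```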
- FIG. 12 is an explanatory diagram illustrating an example of the write destination candidate volume management table 1700 held by the additional module 502 according to the first embodiment of this invention.
- the write destination candidate volume management table 1700 includes a priority column 1701 and a host volume ID column 1702.
- in the priority column 1701, the order in which volumes are searched as write destinations during the file creation processing of the additional module 502 is registered.
- Registered in the host volume ID column 1702 is a volume identifier recognized by the hypervisor 501.
- the order registered in the write destination candidate volume management table 1700 is an index of how desirable each volume is as the storage destination of the image file 503: a volume assigned a higher rank is more preferable as the storage destination.
- however, the storage destination is not necessarily selected according to the priority order; the priority order may be referred to only for issuing a warning.
- FIG. 13 is a sequence diagram showing the process from when the management software 600 according to the first embodiment of the present invention acquires the configuration information of the hypervisor 501 and each storage device 400 until the additional module 502 holds the write destination candidate volume management table 1700.
- This process is periodically executed to update the write destination candidate volume management table 1700.
- the processing of FIG. 13 need not be executed only periodically; it may be executed at an arbitrary timing triggered by a predetermined event, such as an instruction from a user or information pushed from the hypervisor 501 or a storage device 400.
- the management software 600 requests the hypervisor 501 to transmit host side configuration information (step 1300).
- the management software 600 receives the host side configuration information from the hypervisor 501 (step 1301).
- the configuration information to be acquired is an information item included in the host volume management table 2100. Note that information other than the above may be included in the acquired configuration information. Further, the host-side configuration information may be transmitted from the hypervisor 501 without a request from the management software 600.
- the management software 600 requests each storage apparatus 400 to transmit storage side configuration information (step 1302).
- the management software 600 receives storage side configuration information from each storage device 400 (step 1303).
- the configuration information to be acquired is an identifier (storage ID) of each storage apparatus 400 and information items included in the storage volume management table 2200. Note that information other than the above may be included in the acquired configuration information. Further, the storage-side configuration information may be transmitted from each storage device 400 without a request from the management software 600.
- the management software 600 creates a host / storage volume management table 1600 based on the information acquired in Steps 1301 and 1303 (Step 1304).
- the creation method is as described with reference to FIG.
- the management software 600 determines the priority order of the write destination candidate volume by the process described later (step 1305).
- the management software 600 transmits the priority order information of the write destination candidate volume to the additional module 502 (step 1306).
- the additional module 502 holds the write destination candidate volume management table 1700 based on the priority order information of the write destination candidate volume received from the management software 600 (step 1307).
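The sequence of steps 1300 to 1307 can be summarized as one refresh cycle. In the sketch below, the five callables stand in for the actual interfaces of the hypervisor 501, the storage devices 400, and the additional module 502, and are assumptions for illustration:

```python
def refresh_candidate_table(get_host_config, get_storage_configs,
                            build_table, rank, publish):
    """One refresh cycle of Fig. 13 (steps 1300-1307)."""
    host_rows = get_host_config()                 # steps 1300-1301
    storage_rows = get_storage_configs()          # steps 1302-1303
    table = build_table(host_rows, storage_rows)  # step 1304 (table join)
    priorities = rank(table)                      # step 1305 (Fig. 15 ordering)
    publish(priorities)                           # steps 1306-1307
    return priorities
```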
- FIG. 14 is a sequence diagram showing the process, in the first embodiment of the present invention, from when the administrator 1400 performs an operation involving file creation on the hypervisor 501 until the additional module 502 writes the file to the designated volume of each storage apparatus 400.
- the operation accompanied by file creation refers to an operation that includes an instruction to create a new image file 503, such as "creating a new virtual machine 500" or "adding a new disk image to a virtual machine 500"; any other operation that includes an instruction to newly create an image file 503 may also be used. Further, before the process of FIG. 14, the process shown in FIG. 13 needs to have been executed to create the write destination candidate volume management table 1700.
- the administrator 1400 performs an operation involving file creation on the hypervisor 501 (step 1401).
- the hypervisor 501 executes a file creation process in accordance with the operation in step 1401 (step 1402).
- the additional module 502 captures the file creation process of the hypervisor 501 (step 1403).
- as one way to implement this capture, the additional module 502 may receive a notification of a file creation processing event from the hypervisor 501.
- the additional module 502 determines a write destination volume using the write destination candidate volume management table 1700 (step 1404). Basically, the additional module 502 selects the volume with the highest priority as the write destination volume; when the write in the next step 1405 fails because of an I/O error or insufficient capacity, it selects volumes of successively lower priority.
- the additional module 502 performs a write process on the volume determined in step 1404 (step 1405).
- the additional module 502 may write to the volume designated by the hypervisor 501 (regardless of the priority order) instead of the volume determined in step 1404.
- in that case, if the priority of the volume to which the file was written is low, the hypervisor 501 or the management software 600 presents a warning to the administrator 1400.
- whether a priority is low may be determined based on a predetermined threshold. For example, a priority outside the top 10 percent, a priority lower than 10th place, or a priority lower than a rank defined by the administrator 1400 may be judged to be low.
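The selection-with-fallback behavior of steps 1404 and 1405, together with the low-priority warning, can be sketched as follows; `try_write`, `warn`, and the rank threshold are illustrative assumptions, not part of the original disclosure:

```python
def write_with_fallback(candidates, try_write, warn, low_rank_threshold=10):
    """Try write destination volumes in priority order (step 1404), fall back
    to the next candidate on an I/O error or capacity shortage (step 1405),
    and warn when the volume actually used ranks low."""
    for rank, volume in enumerate(candidates, start=1):
        try:
            try_write(volume)
        except OSError:
            continue  # write failed: try the volume with the next priority
        if rank > low_rank_threshold:
            warn(volume, rank)  # e.g. priority lower than 10th place
        return volume
    raise OSError("no write destination volume available")
```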
- FIG. 15 is a flowchart showing an example of priority order determination processing (step 1305) of the write destination candidate volume management table 1700, which is executed in the first embodiment of the present invention.
- the management software 600 sorts the host/storage volume management table 1600 in descending order of the pool free capacity column 1606 (step 1501). The reason is as follows: after the image file 503 is stored, data continues to be written to it, and when the capacity of the volume pool 700 allocated to the destination virtual volume 701 becomes insufficient, data must be copied between volume pools 700 to resolve the shortage, which degrades I/O performance. To reduce the number of such data copies, it is desirable to store the image file 503 in a virtual volume 701 associated with a volume pool 700 that has a large free capacity.
- alternatively, the host/storage volume management table 1600 may be sorted in ascending order of the I/O amount. This is because, to prevent performance degradation due to I/O concentration on a specific physical disk, it is desirable to store the image file 503 in a virtual volume 701 associated with a volume pool 700 with a small I/O amount.
- n is a variable used in the iterative processing from step 1503 to step 1507 and represents the number of iterations.
- i is a variable used in the same iterative processing and indicates which row of the host/storage volume management table 1600 is to be processed.
- while the variable n is smaller than the number of rows registered in the host/storage volume management table 1600, the management software 600 executes the repeated processing from step 1503 to step 1507 (step 1503). Otherwise, processing for all rows has been completed, and the process proceeds to step 1508.
- if it is determined in step 1503 that the variable n is smaller than the number of rows registered in the host/storage volume management table 1600, the management software 600 determines whether the i-th row of the host/storage volume management table 1600 satisfies a predetermined condition (step 1504).
- the predetermined condition is a condition of a volume that is not desirable as a storage destination of the image file 503.
- the condition of step 1504 may be, for example, "the free capacity of the volume registered in the i-th row is less than a threshold", "the I/O amount for the volume pool corresponding to the volume registered in the i-th row exceeds a threshold", or the like, or a combination thereof.
- the management software 600 executes Step 1505 when the above condition is satisfied, and executes Step 1506 when it is not satisfied.
- in step 1505, the management software 600 moves the i-th row of the host/storage volume management table 1600 to the last row. After this move, the row that was at position i+1 shifts to position i, so i need not be incremented in step 1506; the process can proceed directly to step 1507.
- in step 1506, the management software 600 increments i in order to move the processing target to the next row of the host/storage volume management table 1600.
- after step 1505 or step 1506, the management software 600 increments n for the loop (step 1507) and returns to step 1503.
- if it is determined in step 1503 that the variable n is equal to or greater than the number of rows in the host/storage volume management table 1600, the management software 600 ends the priority determination process (step 1508).
- at this point, the rows of the host/storage volume management table 1600 are arranged in descending order of priority. That is, the priority of the volume registered in the first row is the highest, and the priority of the volume registered in the last row is the lowest.
- in the following description, the host volume identified by the value "1" in the host volume ID column 1601 is called "host volume 1", the storage device 400 identified by the value "storage 1" in the storage ID column 1603 is called "storage device 1", the storage volume identified by the value "Vol 1" in the storage volume ID column 1604 is called "storage volume 1", and the volume pool 700 identified by the value "pool 1" in the pool ID column 1605 is called "volume pool 1". The same applies to other ID values.
- in step 1501, the rows of the host/storage volume management table 1600 are sorted in the order of the values in the pool free capacity column 1606. Since the values of the pool free capacity column 1606 of the host volumes 1, 2, 3, 4, and 5 are 300 GB, 175 GB, 1024 GB, 105 GB, and MAX (that is, the maximum value that can be registered), the rows are arranged in the order of the host volumes 5, 3, 1, 2, 4.
- the determination in step 1504 is then performed for each rearranged row. For example, if "20 GB" is used as the threshold for the free capacity of the volume in step 1504, the row of host volume 1 is moved to the end of the host/storage volume management table 1600 (step 1505), because the free capacity of host volume 1 is "10 GB", which is smaller than the threshold. As a result, when the processing of FIG. 15 ends, the rows of the host/storage volume management table 1600 are arranged in the order of the host volumes 5, 3, 2, 4, 1.
- the priorities of the host volumes 5, 3, 2, 4, 1 are therefore 1, 2, 3, 4, 5 respectively, and those values are registered in the write destination candidate volume management table 1700. That is, the host volume 5 is selected first as the write destination of the image file 503, and if the write fails, the host volume 3 with the next highest priority is selected.
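The priority determination of FIG. 15, together with the worked example above, can be sketched as follows; the row field names (`pool_free_gb`, `free_gb`) and the 20 GB threshold are assumptions mirroring the example, with MAX modeled as infinity:

```python
def determine_priority(rows, free_threshold_gb=20):
    """Order write destination candidates: sort by pool free capacity in
    descending order (step 1501), then move undesirable rows, here those
    whose volume free capacity is below the threshold, to the end
    (steps 1503-1507)."""
    rows = sorted(rows, key=lambda r: r["pool_free_gb"], reverse=True)
    n = 0  # loop counter (variable n in the flowchart)
    i = 0  # index of the row under examination (variable i)
    while n < len(rows):                           # step 1503
        if rows[i]["free_gb"] < free_threshold_gb:  # step 1504
            rows.append(rows.pop(i))               # step 1505: demote to last
        else:
            i += 1                                 # step 1506
        n += 1                                     # step 1507
    return rows                                    # first row = top priority
```

Running this on the example data (pool free capacities of 300, 175, 1024, 105 GB and MAX for the host volumes 1 to 5, with host volume 1 having only 10 GB of volume free capacity) reproduces the order 5, 3, 2, 4, 1.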
- the virtual volume corresponding to the volume pool 700 having a larger free capacity is preferentially selected as the storage destination of the image file 503.
- for a logical volume 702 that can be allocated to the host computer 100, the maximum value that the item can hold (MAX) is registered in the pool free capacity column 1606.
- the host volume 5 (that is, the storage volume 3 of the storage apparatus 2) corresponds to such a logical volume 702.
- as a result of the sorting in step 1501, the priority of a logical volume 702 is therefore always higher than that of any virtual volume 701. This design reflects the fact that, in general, I/O to a logical volume 702 is processed faster than I/O to a virtual volume 701.
- in other words, the logical volume 702 is preferentially selected over the virtual volume 701 in order to secure the I/O performance of the virtual machine 500 and the storage device 400.
- by steps 1504 and 1505, the priority of a volume that is not desirable as a storage destination of the image file 503 (for example, a volume with little free capacity or a volume with I/O congestion) is lowered, so such a volume is unlikely to be selected as the storage destination of the image file 503.
- A second embodiment according to the present invention will be described based on FIGS. 1 to 12, 15 to 17, and 19 to 21.
- The difference between the present embodiment and the first embodiment is, from the viewpoint of the administrator 1400, the software module on which the administrator 1400 executes an operation involving file creation.
- in the first embodiment, the administrator 1400 executes an operation involving file creation on the hypervisor 501.
- in the present embodiment, the administrator 1400 executes an operation involving file creation on the management software 600.
- the operations of the management software 600, the hypervisor 501, and the additional module 502 of the present embodiment are different from those of the first embodiment.
- the system configuration in this embodiment is the same as that described with reference to FIGS. 1 to 5 in the first embodiment.
- the roles of the software modules are the same as those in the first embodiment except for the management software 600, the hypervisor 501, and the additional module 502.
- the conceptual diagram and management table of the first embodiment shown in FIGS. 6 to 12 are also applied to this embodiment.
- the management software 600 of this embodiment executes the priority determination process of the write destination candidate volume management table 1700 shown in FIG. 15 as in the first embodiment. Differences of the roles of the management software 600, the hypervisor 501, and the additional module 502 of the present embodiment from the first embodiment will be described later.
- FIG. 16 is a sequence diagram showing the process from when the management software 600 according to the second embodiment of the present invention acquires the configuration information of the hypervisor 501 and each storage device 400 until the management software 600 holds the write destination candidate volume management table 1700.
- Step 1800 to Step 1805 are the same as Step 1300 to Step 1305 of the first embodiment, and thus description thereof is omitted.
- in step 1806, the management software 600 holds the write destination candidate volume management table 1700. This process is repeatedly executed as necessary to update the write destination candidate volume management table 1700.
- the additional module 502 operates as an interface on the hypervisor 501 and does not execute independent processing.
- FIG. 17 is a sequence diagram showing the process, in the second embodiment of the present invention, from when the administrator 1400 performs a file creation operation on the management software 600 until the management software 600 writes the file to the designated volume of each storage device 400 through the interface (the additional module 502).
- before the process of FIG. 17, the management software 600 or the like needs to execute the processing shown in FIG. 16 to create the write destination candidate volume management table 1700.
- the administrator 1400 performs a file creation operation on the management software 600 (step 1900).
- the file creation operation refers to an operation involving the creation of a new image file 503, such as "creating a new virtual machine 500" or "adding a new disk image to a virtual machine 500".
- the file creation operation of this embodiment is performed on the management software 600 instead of on the hypervisor 501.
- the management software 600 executes a file creation process, and holds the image file 503 on the memory 211 or the storage device 213 on the management server 200 (step 1901).
- the management software 600 uses the write destination candidate volume management table 1700 to determine a write destination volume (step 1902).
- the management software 600 basically selects the volume with the highest priority, and selects volumes of successively lower priority when a write fails due to an I/O error or insufficient capacity.
- the management software 600 designates the volume determined in step 1902 and transmits the file to the hypervisor 501 through the additional module 502, which is an interface on the hypervisor 501 (step 1903). Note that the management software 600 may transmit a file without specifying a volume; in this case, the additional processing described after step 1904 is performed.
- the hypervisor 501 writes the file received in step 1903 to the designated volume (step 1904).
- the additional module 502 is an interface for writing a file to the designated volume when receiving the file to be written and the designation of the volume to which the file is to be written.
- when a file is transmitted in step 1903 without specifying a volume, the hypervisor 501 writes the file to an arbitrary volume regardless of the priority order.
- the additional processing performed when a file is transmitted in step 1903 without specifying a volume will now be described.
- the management software 600 requests host-side configuration information from the hypervisor 501 and obtains a host volume management table 2106 that includes an image file column 2105, as shown in FIG. 21.
- FIG. 21 is an explanatory diagram illustrating an example of the host volume management table 2106 held by the hypervisor 501 according to the second embodiment of this invention.
- since the host volume ID column 2101 to the free capacity column 2104 of the host volume management table 2106 are the same as the corresponding columns of the host volume management table 2100 (FIG. 19), their description is omitted.
- in the image file column 2105 of the host volume management table 2106, the file name of the image file 503 stored in each host volume is registered.
- the management software 600 searches the image file column 2105 and identifies the volume in which the file was created in step 1904. When the identified volume is a volume with a low priority in the write destination candidate volume management table 1700, the management software 600 presents a warning to the administrator 1400. The determination of low priority may be performed as in the first embodiment: for example, a priority outside the top 10 percent, a priority lower than 10th place, or a priority lower than a rank defined by the administrator 1400 may be judged to be low. The above concludes the description of the case where the file is transmitted in step 1903 without designating a volume.
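The post-hoc check described above (searching the image file column 2105 and then warning on a low-priority destination) might be sketched as follows; the field names and the "outside the top 10 percent" rule are illustrative assumptions:

```python
def check_created_file(file_name, host_volumes, priority_order, warn,
                       top_fraction=0.10):
    """Identify from the image file column which host volume received the
    newly created file, then warn if that volume's priority falls outside
    the top fraction of the priority order."""
    volume_id = next(v["host_volume_id"] for v in host_volumes
                     if file_name in v["image_files"])
    rank = priority_order.index(volume_id) + 1  # 1 = highest priority
    # Warn when the rank is below the top `top_fraction` of all candidates
    # (at least the single top-ranked volume never triggers a warning).
    if rank > max(1, int(len(priority_order) * top_fraction)):
        warn(volume_id, rank)
    return volume_id, rank
```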
- A third embodiment according to the present invention will be described based on FIGS. 1 to 12, 14, 15, and 18 to 20.
- the difference between the present embodiment and the first embodiment is that the processing executed by the management software 600 in the first embodiment is executed by the additional module 502 in the present embodiment. Accordingly, the operation of the additional module 502 of this embodiment is different from that of the first embodiment.
- the system configuration in this embodiment is the same as that described in the first embodiment with reference to FIGS. 1 to 5 except that the management server 200 and the LAN 301 are not required.
- the role of each software module is the same as that of the first embodiment except for the additional module 502.
- the conceptual diagram and management table of the first embodiment shown in FIGS. 6 to 12 are also applied to this embodiment. However, this embodiment does not require the management software 600 on the management server 200.
- before the process of FIG. 14, the processing shown in FIG. 18 needs to have been executed in advance so that the additional module 502 holds the write destination candidate volume management table 1700. The processing shown in FIG. 15 is also executed in the same manner as in the first embodiment. However, when the additional module 502 does not control the write destination, the warning presented to the administrator 1400 in step 1405 is presented by the hypervisor 501.
- FIG. 18 is a sequence diagram showing the process until the additional module 502 according to the third embodiment of the present invention acquires the configuration information of the hypervisor 501 and each storage device 400 and holds the write destination candidate volume management table 1700.
- This process is repeatedly executed as necessary to update the write destination candidate volume management table 1700.
- the additional module 502 requests the hypervisor 501 to transmit host side configuration information (step 2000).
- the additional module 502 receives host-side configuration information from the hypervisor 501 (step 2001).
- the configuration information to be acquired is an information item included in the host volume management table 2100. Note that information other than the above may be included in the acquired configuration information. Further, the hypervisor 501 may transmit the configuration information without a request from the additional module 502.
- the additional module 502 requests each storage device 400 to transmit storage-side configuration information (step 2002).
- the additional module 502 receives the storage side configuration information from each storage device 400 (step 2003).
- the configuration information to be acquired is an identifier (storage ID) of each storage apparatus 400 and information items included in the storage volume management table 2200. Information other than the above may be included in the acquired configuration information.
- SCSI Inquiry may be used for issuing the request and returning the reply.
- for example, the additional module 502 issues a SCSI Inquiry to request configuration information, and each storage device 400 stores the configuration information in the vendor-specific area of the Inquiry data and returns it.
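As one purely illustrative encoding, the configuration information could be carried as JSON in the vendor-specific region of standard INQUIRY data (which begins after the standard-defined bytes); a real storage device would define its own binary layout, so both the offset and the JSON encoding below are assumptions:

```python
import json

VENDOR_AREA_OFFSET = 96  # assumed start of the vendor-specific region

def pack_config(base_inquiry: bytes, config: dict) -> bytes:
    """Append JSON-encoded configuration info after the standard INQUIRY
    bytes, i.e. in the vendor-specific region (illustrative layout only)."""
    padded = base_inquiry.ljust(VENDOR_AREA_OFFSET, b"\x00")
    return padded + json.dumps(config).encode("ascii")

def unpack_config(inquiry_data: bytes) -> dict:
    """Recover the configuration info from the vendor-specific region."""
    return json.loads(inquiry_data[VENDOR_AREA_OFFSET:].decode("ascii"))
```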
- the additional module 502 creates a host / storage volume management table 1600 based on the host volume management table 2100 and the storage volume management table 2200 (step 2004).
- the creation method is the same as in the first embodiment.
- the row of the host volume management table 2100 and the row of the storage volume management table 2200 are compared, and the rows in which the storage ID column 2102 and the storage volume ID column 2103 match are combined. As a result, each row of the host storage volume management table 1600 is created.
- the additional module 502 determines the priority order of the write destination candidate volume by the processing shown in FIG. 15 (step 2005).
- the additional module 502 holds the write destination candidate volume management table 1700 (step 2006).
Abstract
Description
200 Management server
300 SAN
301 LAN
400 Storage device
500 Virtual machine
501 Hypervisor
502 Additional module
503 Image file
600 Management software
700 Volume pool
701 Virtual volume
703 Logical volume
Claims (15)
- A computer system comprising one or more storage apparatuses and one or more host computers connected to the storage apparatuses, wherein
the host computer includes a first interface connected to the storage apparatus, a first processor connected to the first interface, and a first storage device connected to the first processor,
the first processor controls one or more virtual machines each of which executes one or more applications,
the storage apparatus:
includes a controller connected to the host computer and one or more physical storage devices connected to the controller,
holds information associating a plurality of virtual volumes with a plurality of pools each of which includes real storage areas of the physical storage devices, and,
upon receiving from the host computer a data write request in which one of the virtual volumes is designated as the write destination volume, allocates a real storage area included in the pool corresponding to the write destination virtual volume to the write destination virtual volume and stores the data in the allocated real storage area, and
the computer system:
determines, based on the information held by the storage apparatus, an order of priority of the volumes as write destinations used by the host computer, and
holds the determined order of priority. - The computer system according to claim 1, wherein
the storage apparatus holds information indicating the amount of real storage area that is included in each pool and has not yet been allocated to the virtual volumes, and
the computer system determines the order of priority based on the amount of real storage area not yet allocated to the virtual volumes. - The computer system according to claim 2, wherein
the computer system determines the order of priority such that the larger the amount of real storage area included in a pool and not yet allocated to the virtual volumes, the higher the rank of the virtual volume corresponding to that pool. - The computer system according to claim 3, wherein
the computer system:
when a file creation instruction is input, designates the volume with the highest priority as a write destination and writes the file to the volume designated as the write destination, and,
when writing the file to the volume designated as the write destination fails, designates the volume with the next highest priority after the designated volume as a new write destination. - The computer system according to claim 4, wherein
the host computer and the storage apparatus are connected to each other via a first network,
the computer system further comprises a management computer connected to the host computer and the storage apparatus via a second network,
the management computer:
includes a second interface connected to the second network, a second processor connected to the second interface, and a second storage device connected to the second processor,
acquires from the storage apparatus, via the second network, information indicating the amount of real storage area that is included in each pool and has not yet been allocated to the virtual volumes,
determines the order of priority such that the larger the amount of real storage area included in a pool and not yet allocated to the virtual volumes, the higher the rank of the virtual volume corresponding to that pool,
holds the determined order of priority, and,
when the file creation instruction is input, creates a file and transmits to the host computer, in addition to the created file, information identifying the virtual volume with the highest priority as information designating the write destination of the created file,
the first processor controls the one or more virtual machines by executing a virtual machine control program stored in the first storage device,
the virtual machine control program, when given a file and information designating a write destination of the file, causes the first processor to execute processing of writing the file to the designated write destination, and
the host computer writes the file received from the management computer to the volume designated by the management computer, in accordance with the virtual machine control program. - The computer system according to claim 5, wherein
the storage apparatus further manages one or more logical volumes to which the real storage areas are allocated in advance and which are not associated with the pools,
the host computer holds information indicating the free capacity of each of the virtual volumes, and
the management computer:
acquires from the host computer, via the second network, information indicating the free capacity of each of the virtual volumes, and
determines the order of priority such that, among the one or more logical volumes, those whose free capacity is larger than a predetermined threshold rank higher than any of the virtual volumes, such that virtual volumes whose free capacity is larger than the predetermined threshold rank higher than virtual volumes whose free capacity is smaller than the predetermined threshold, and such that the larger the amount of real storage area included in a pool and not yet allocated to the virtual volumes, the higher the rank of the virtual volume corresponding to that pool. - The computer system according to claim 4, wherein
the host computer and the storage apparatus are connected to each other via a first network,
the computer system further comprises a management computer connected to the host computer and the storage apparatus via a second network,
the management computer:
includes a second interface connected to the second network, a second processor connected to the second interface, and a second storage device connected to the second processor,
acquires from the storage apparatus, via the second network, information indicating the amount of real storage area that is included in each pool and has not yet been allocated to the virtual volumes,
determines the order of priority such that the larger the amount of real storage area included in a pool and not yet allocated to the virtual volumes, the higher the rank of the virtual volume corresponding to that pool, and
transmits information indicating the determined order of priority to the host computer,
the first processor controls the one or more virtual machines by executing a virtual machine control program stored in the first storage device, and
the host computer:
further includes an additional program stored in the first storage device and executed by the first processor,
holds the information indicating the order of priority received from the management computer,
when the file creation instruction is input, creates a file in accordance with the virtual machine control program, and
writes the created file, in accordance with the additional program, to the volume with the highest priority. - The computer system according to claim 4, wherein
the host computer and the storage apparatus are connected to each other via a first network,
the first processor controls the one or more virtual machines by executing a virtual machine control program stored in the first storage device, and
the host computer:
further includes an additional program stored in the first storage device and executed by the first processor,
acquires from the storage apparatus, in accordance with the additional program and via the first network, information indicating the amount of real storage area that is included in each pool and has not yet been allocated to the virtual volumes,
determines the order of priority such that the larger the amount of real storage area included in a pool and not yet allocated to the virtual volumes, the higher the rank of the virtual volume corresponding to that pool,
holds the determined order of priority,
when the file creation instruction is input, creates a file in accordance with the virtual machine control program, and
writes the created file, in accordance with the additional program, to the volume with the highest priority. - The computer system according to claim 8, wherein the host computer uses SCSI Inquiry for communication with the storage apparatus via the first network in accordance with the additional program.
- The computer system according to claim 3, wherein,
when a file creation instruction is input, the computer system writes a file to one of the volumes in accordance with the file creation instruction, and outputs a warning when the priority of the volume to which the file was written is lower than a predetermined rank. - The computer system according to claim 1, wherein
the storage apparatus holds information indicating the amount of I/O to each pool, and
the computer system determines the order of priority such that the smaller the amount of I/O to a pool, the higher the rank of the virtual volume corresponding to that pool. - A control method for a computer system comprising one or more storage apparatuses and one or more host computers connected to the storage apparatuses, wherein
the host computer includes a first interface connected to the storage apparatus, a first processor connected to the first interface, and a first storage device connected to the first processor,
the first processor controls one or more virtual machines each of which executes one or more applications, and
the storage apparatus:
includes a controller connected to the host computer and one or more physical storage devices connected to the controller,
holds information associating a plurality of virtual volumes with a plurality of pools each of which includes real storage areas of the physical storage devices, and,
upon receiving from the host computer a data write request in which one of the virtual volumes is designated as the write destination volume, allocates a real storage area included in the pool corresponding to the write destination virtual volume to the write destination virtual volume and stores the data in the allocated real storage area,
the control method comprising:
a first step of determining, based on the information held by the storage apparatus, an order of priority of the volumes as write destinations used by the host computer; and
a second step of holding the determined order of priority. - The control method for a computer system according to claim 12, wherein
the storage apparatus holds information indicating the amount of real storage area that is included in each pool and has not yet been allocated to the virtual volumes, and
the first step includes a step of determining the order of priority based on the amount of real storage area not yet allocated to the virtual volumes. - The control method for a computer system according to claim 13, wherein
the first step includes a step of determining the order of priority such that the larger the amount of real storage area included in a pool and not yet allocated to the virtual volumes, the higher the rank of the virtual volume corresponding to that pool. - The control method for a computer system according to claim 14, further comprising
a third step of, when a file creation instruction is input, designating the volume with the highest priority as a write destination and writing a file to the volume designated as the write destination,
wherein the third step further includes a step of, when writing the file to the volume designated as the write destination fails, designating the volume with the next highest priority after the designated volume as a new write destination.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2013502103A JP5416860B2 (ja) | 2011-03-02 | 2011-03-02 | 計算機システムおよびその制御方法 |
PCT/JP2011/054727 WO2012117534A1 (ja) | 2011-03-02 | 2011-03-02 | 計算機システムおよびその制御方法 |
US13/131,357 US8745354B2 (en) | 2011-03-02 | 2011-03-02 | Computer system for resource allocation based on orders of proirity, and control method therefor |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2011/054727 WO2012117534A1 (ja) | 2011-03-02 | 2011-03-02 | 計算機システムおよびその制御方法 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2012117534A1 true WO2012117534A1 (ja) | 2012-09-07 |
Family
ID=46754038
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2011/054727 WO2012117534A1 (ja) | 2011-03-02 | 2011-03-02 | 計算機システムおよびその制御方法 |
Country Status (3)
Country | Link |
---|---|
US (1) | US8745354B2 (ja) |
JP (1) | JP5416860B2 (ja) |
WO (1) | WO2012117534A1 (ja) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9047313B2 (en) * | 2011-04-21 | 2015-06-02 | Red Hat Israel, Ltd. | Storing virtual machines on a file system in a distributed environment |
US9158561B2 (en) * | 2011-08-18 | 2015-10-13 | Vmware, Inc. | Systems and methods for modifying an operating system for a virtual machine |
JP6142599B2 (ja) * | 2013-03-18 | 2017-06-07 | 富士通株式会社 | ストレージシステム、ストレージ装置および制御プログラム |
CN109976662B (zh) * | 2017-12-27 | 2022-06-14 | 浙江宇视科技有限公司 | 数据存储方法、装置及分布式存储系统 |
US11249852B2 (en) | 2018-07-31 | 2022-02-15 | Portwonx, Inc. | Efficient transfer of copy-on-write snapshots |
US11354060B2 (en) | 2018-09-11 | 2022-06-07 | Portworx, Inc. | Application snapshot for highly available and distributed volumes |
US11494128B1 (en) | 2020-01-28 | 2022-11-08 | Pure Storage, Inc. | Access control of resources in a cloud-native storage system |
US11531467B1 (en) | 2021-01-29 | 2022-12-20 | Pure Storage, Inc. | Controlling public access of resources in a secure distributed storage system |
US11733897B1 (en) | 2021-02-25 | 2023-08-22 | Pure Storage, Inc. | Dynamic volume storage adjustment |
US11520516B1 (en) | 2021-02-25 | 2022-12-06 | Pure Storage, Inc. | Optimizing performance for synchronous workloads |
US11726684B1 (en) | 2021-02-26 | 2023-08-15 | Pure Storage, Inc. | Cluster rebalance using user defined rules |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004070403A (ja) * | 2002-08-01 | 2004-03-04 | Hitachi Ltd | ファイル格納先ボリューム制御方法 |
JP2009116436A (ja) * | 2007-11-02 | 2009-05-28 | Hitachi Ltd | 記憶領域の構成最適化方法、計算機システム及び管理計算機 |
JP2009140356A (ja) * | 2007-12-07 | 2009-06-25 | Hitachi Ltd | 管理装置及び管理方法 |
WO2010122679A1 (ja) * | 2009-04-23 | 2010-10-28 | 株式会社日立製作所 | 計算機システム及びその制御方法 |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4175788B2 (ja) | 2001-07-05 | 2008-11-05 | Hitachi, Ltd. | Volume control device |
JP4402565B2 (ja) * | 2004-10-28 | 2010-01-20 | Fujitsu Ltd. | Virtual storage management program, method, and apparatus |
JP4699837B2 (ja) * | 2005-08-25 | 2011-06-15 | Hitachi, Ltd. | Storage system, management computer, and data migration method |
JP4920979B2 (ja) * | 2006-01-25 | 2012-04-18 | Hitachi, Ltd. | Storage apparatus and control method thereof |
JP4958883B2 (ja) * | 2008-10-29 | 2012-06-20 | Hitachi, Ltd. | Method of controlling a storage apparatus and an air-conditioning apparatus by a management server, and storage system |
- 2011-03-02 WO PCT/JP2011/054727 patent/WO2012117534A1/ja active Application Filing
- 2011-03-02 US US13/131,357 patent/US8745354B2/en not_active Expired - Fee Related
- 2011-03-02 JP JP2013502103A patent/JP5416860B2/ja not_active Expired - Fee Related
Also Published As
Publication number | Publication date |
---|---|
JPWO2012117534A1 (ja) | 2014-07-07 |
US8745354B2 (en) | 2014-06-03 |
JP5416860B2 (ja) | 2014-02-12 |
US20120226885A1 (en) | 2012-09-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5416860B2 (ja) | Computer system and control method therefor | |
CN110955487B (zh) | VM/container and volume allocation determination method in an HCI environment, and storage system |
JP4235220B2 (ja) | Computer system and data migration method |
US9524107B2 (en) | Host-based device drivers for enhancing operations in redundant array of independent disks systems | |
JP4884198B2 (ja) | Storage network performance management method, and computer system and management computer using the method |
US10437642B2 (en) | Management system for computer system | |
US9928004B2 (en) | Assigning device adaptors to use to copy source extents to target extents in a copy relationship | |
JP2009238114A (ja) | Storage management method, storage management program, storage management device, and storage management system |
JP5762146B2 (ja) | Method and apparatus for allocating resources in a computer-based system |
JP4748950B2 (ja) | Storage area management method and system |
US10069906B2 (en) | Method and apparatus to deploy applications in cloud environments | |
US20130185531A1 (en) | Method and apparatus to improve efficiency in the use of high performance storage resources in data center | |
US20220066786A1 (en) | Pre-scanned data for optimized boot | |
JP2020173727A (ja) | Storage management device, information system, and storage management method |
US9971785B1 (en) | System and methods for performing distributed data replication in a networked virtualization environment | |
JP2015532734A (ja) | Management system for managing a physical storage system, method for determining a resource migration destination of a physical storage system, and storage medium |
CN107430527B (zh) | Computer system with server storage system |
US9239681B2 (en) | Storage subsystem and method for controlling the storage subsystem | |
US9547450B2 (en) | Method and apparatus to change tiers | |
US10824640B1 (en) | Framework for scheduling concurrent replication cycles | |
US20130318102A1 (en) | Data Handling in a Cloud Computing Environment | |
JP7113698B2 (ja) | Information system |
WO2018173300A1 (ja) | I/O control method and I/O control system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 13131357 Country of ref document: US |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 11859945 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2013502103 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 11859945 Country of ref document: EP Kind code of ref document: A1 |