JP3843713B2 - Computer system and device allocation method - Google Patents

Computer system and device allocation method

Info

Publication number
JP3843713B2
JP3843713B2 (granted; application JP2000238865A)
Authority
JP
Japan
Prior art keywords
storage device
computer
device
storage
request
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
JP2000238865A
Other languages
Japanese (ja)
Other versions
JP2001142648A (en)
Inventor
Manabu Kitamura (学 北村)
Kenji Yamagami (憲司 山神)
Tatsuya Murakami (達也 村上)
Original Assignee
Hitachi, Ltd. (株式会社日立製作所)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to JP24102499
Priority to JP11-241024
Application filed by Hitachi, Ltd.
Priority to JP2000238865A
Publication of JP2001142648A
Application granted
Publication of JP3843713B2
Application status: Expired - Fee Related
Anticipated expiration


Description

[0001]
BACKGROUND OF THE INVENTION
The present invention relates to a computer system and a method for assigning a storage device to a computer in the computer system, and more particularly to a method of assigning a storage device to a computer in a computer system having a storage subsystem shared by a plurality of computers.
[0002]
[Prior art]
In recent years, the amount of information handled by computer systems used by companies and other organizations has increased dramatically, and the capacity of the disk devices that store this data has grown accordingly. For example, magnetic disk devices with capacities of several TB (terabytes) are no longer rare. For such disk devices, Japanese Patent Laid-Open No. 9-274544 discloses a technique in which one storage device subsystem is composed of a plurality of types of logical disk devices (hereinafter also referred to as devices). Specifically, it discloses a disk subsystem in which devices with different RAID levels, such as RAID 5 and RAID 1 in RAID (Redundant Arrays of Inexpensive Disks), are mixed as the devices (logical disks) accessed from the host computer, or in which the actual magnetic disk devices (physical disk devices) constituting a logical disk have different access speeds. The user can use different devices according to the access frequency of the data.
[0003]
On the other hand, with the emergence of Fiber Channel technology as an interface between host computers and peripheral devices such as disk devices, computer systems have come to be configured by connecting multiple host computers and multiple storage devices with a single Fiber Channel cable. In such a computer system, each host computer can directly access any storage device on the Fiber Channel. Compared with the conventional arrangement in which each host computer has its own storage devices, this allows data to be shared between host computers and reduces network load.
[0004]
[Problems to be solved by the invention]
The prior art described above dramatically increases the number and types of devices that each host computer can access. However, as they increase, managing the devices on each host computer becomes difficult: although many devices can be accessed from one host computer, it is hard for the user to select which device should be used for a given job. In particular, in a computer system connected via Fiber Channel, a host computer can access devices that it was never meant to use. This makes it possible for unauthorized access to destroy data on a device used by another host computer.
[0005]
To address this problem, Japanese Patent Laid-Open No. 10-333839 discloses a method for permitting access to a storage device connected via Fiber Channel only from specific host computers. However, when there are multiple storage subsystems and devices, or when different types of devices are mixed, the burden of setting them up remains, and each host computer must always be aware of the device types.
[0006]
An object of the present invention is to make it easy to set up devices and assign them to each host computer, so that each host computer can use a device suited to its application whenever needed.
[0007]
[Means for Solving the Problems]
In a preferred embodiment, the computer system according to the present invention includes a plurality of computers and a storage subsystem connected to them. The storage subsystem has a plurality of storage devices and a plurality of interfaces through which it is connected to each computer. One of the plurality of computers has management means that holds information about the storage devices in the storage subsystem and about the connection relationship between each computer and the storage subsystem. When a computer needs a new device, it notifies the management means of the required capacity and type. The management means receives the notification, selects a storage device that meets the request, and instructs the storage subsystem to set predetermined information so that the selected device can be accessed from that computer. The management means also returns predetermined information to the computer that requested the device assignment, and the requesting computer changes its own settings based on this information so that it can use the assigned device.
[0008]
In another aspect of the present invention, a plurality of computers and a plurality of storage subsystems are connected via a network. An arbitrary computer is provided with management means that holds information about the storage devices in each storage subsystem and about the connection relationship between each computer and the storage subsystems. Each storage subsystem has control means for permitting access from the computers designated by the management means. When a computer needs a new storage device, it notifies the management means of the required capacity and type. In response, the management means selects a device that meets the request and instructs the storage subsystem to permit access from the requesting computer so that the selected device becomes accessible to it. The management means also returns predetermined information to the computer that requested the device assignment, and that computer changes its own settings based on the returned information so that it can use the assigned device.
[0009]
DETAILED DESCRIPTION OF THE INVENTION
FIG. 1 is a simplified block diagram showing a configuration example in an embodiment of a computer system to which the present invention is applied.
[0010]
The computer system includes a plurality of host computers (host computer 1a, host computer 1b, ..., host computer 1n, collectively referred to as hosts 1), a storage subsystem 2 connected to the hosts 1, a management host computer 3, a network 4, and a secondary storage device 5, which is a storage device located at a remote site.
[0011]
The host computers 1a, 1b,... Are computers having a CPU, a memory, and the like, and achieve predetermined functions by the CPU reading and executing an operating system and application programs stored in the memory.
[0012]
The storage subsystem 2 has a plurality of disk units 21, a disk controller 22, a plurality of ports 23 connected to the host computers 1, an interface 24 for connecting to the secondary storage device 5, and a network interface 25 connected to the network 4. The storage subsystem 2 in the present embodiment presents combinations of disk units 21 to the host computers 1 as one or more logical devices. Of course, each disk unit 21 may also be shown to the host computers 1 as one logical device.
[0013]
As the port 23, an interface such as SCSI (Small Computer System Interface) is used if the host computer 1 to be connected is a so-called open-system computer, while a channel interface such as ESCON (Enterprise System CONnection) is used if the host computer 1 is a so-called mainframe. The ports 23 may all use the same interface or different ones. In the present embodiment, it is assumed that SCSI is used for all ports 23.
[0014]
The disk controller 22 includes a processor 221, a cache memory 222, and a control memory 223. The processor 221 processes accesses from the host computers 1 and controls the disk units 21. In particular, when the storage subsystem 2 presents the host computers 1 not with the individual disk units 21 but with one or more logical devices built over them, as in a disk array, the processor 221 performs the associated processing and management. The disk controller 22 also communicates with the management host computer 3 via the network interface 25.
[0015]
The cache memory 222 stores frequently read data or temporarily stores write data from the host computer 1 in order to increase the access processing speed from the host computer 1. A part of the cache memory 222 can be made to appear as one or more logical disks and used as a device that does not require access to the magnetic disk unit.
[0016]
The control memory 223 stores the program executed by the processor 221, and is also used to store information for managing the disk units 21 and the logical devices configured by combining a plurality of disk units 21.
[0017]
Each host computer 1a, 1b, ... has software (a program) called a volume manager 11. The volume manager 11 operates in communication with the management manager 31 arranged in the management host computer 3. Each host computer 1 also has an interface (I/F) 12, through which it is connected to a port 23 of the storage subsystem 2.
[0018]
Next, the management form of the logical devices in the storage subsystem 2 will be described.
[0019]
As described above, the storage subsystem 2 presents a plurality of disk units 21 to the host computers 1 as one or more logical devices, or each disk unit 21 as one logical device. The storage subsystem 2 may also present a part of the cache memory 222 to the host computers 1 as one or more logical devices. There is thus no fixed correspondence between the number of disk units 21 in the storage subsystem 2 and the number of logical devices.
[0020]
FIG. 2 is a table configuration diagram showing an example of a logical device management table that holds information used by the storage subsystem 2 to manage logical devices.
[0021]
The logical device management table holds, for each logical device number 61, a set of items: size 62, configuration 63, status 64, path 65, target ID 66, and LUN 67. In the size 62, information indicating the capacity of the logical device identified by the logical device number 61 is set.
[0022]
In the configuration 63, information indicating the configuration of the logical device is set: for example, when a RAID (Redundant Array of Inexpensive Disks) is configured from disk units 21, the RAID level assigned to the logical device, such as RAID 1 or RAID 5. Further, "cache" is set in the configuration 63 when a part of the cache memory 222 is allocated as the logical device, and "single disk unit" when a single disk unit is allocated.
[0023]
In the status 64, information indicating the state of the logical device is set. The states are "online", "offline", "unmounted", and "fault offline". "Online" indicates that the logical device operates normally and is accessible from the host computers 1. "Offline" indicates that the logical device is defined and operating normally but cannot be accessed from the host computers 1; this state corresponds, for example, to a device that a host computer 1 used before but no longer needs. "Unmounted" indicates that the logical device is not defined and cannot be accessed from a host. "Fault offline" indicates that the logical device has failed and cannot be accessed from a host.
[0024]
In the path 65, information indicating to which of the plurality of ports 23 the logical device is connected is set. Each port 23 is assigned a number unique within the storage subsystem 2, and the number of the port 23 to which the logical device is connected is recorded in the path column. The target ID 66 and LUN 67 are identifiers for identifying the logical device; here, the SCSI ID and LUN used when the host computer 1 accesses the device over SCSI are used.
[0025]
One logical device can be connected to a plurality of ports, so that the same logical device can be accessed from a plurality of host computers 1. In this case, a plurality of entries for the logical device are created in the logical device management table. For example, in the logical device management table shown in FIG. 2, the device with logical device number 2 is connected to the two ports 23 with port numbers 0 and 1, so there are two entries for logical device number 2. When one logical device can be accessed from a plurality of ports 23 in this way, the target ID and LUN on each path 65 need not be the same and may differ, as shown in FIG. 2.
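As an illustration, the logical device management table of FIG. 2 can be sketched as a list of per-path entries. The following Python fragment is a minimal sketch (the class name and all sample values are assumptions for illustration, not taken from the patent); it shows how one device connected to two ports yields two entries whose target ID and LUN differ per path:

```python
from dataclasses import dataclass

@dataclass
class LogicalDeviceEntry:
    device_no: int   # logical device number 61
    size_gb: int     # size 62
    config: str      # configuration 63: e.g. "RAID5", "RAID1", "cache", "single disk unit"
    status: str      # status 64: "online", "offline", "unmounted", "fault offline"
    path: int        # path 65: port number within the storage subsystem
    target_id: int   # target ID 66
    lun: int         # LUN 67

# One device reachable through two ports gets one entry per path; the
# target ID and LUN need not agree across paths (illustrative values).
table = [
    LogicalDeviceEntry(2, 8, "RAID5", "online", path=0, target_id=0, lun=1),
    LogicalDeviceEntry(2, 8, "RAID5", "online", path=1, target_id=2, lun=0),
]

entries_for_device_2 = [e for e in table if e.device_no == 2]
```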
[0026]
The information held in the logical device management table is sent to the management host computer 3 through the network interface 25 at appropriate times, or when the configuration changes due to a failure in the storage subsystem 2. The management host computer 3 therefore also holds a logical device management table similar to the table shown in FIG. 2.
[0027]
FIG. 3 is a table configuration diagram showing an example of a host management table held by the management manager 31 of the management host computer 3.
[0028]
The host management table holds management information including a host name 71, a port number 72, an interface number 73, and a logical device number 74, which the management host computer 3 uses to manage the device assignment to each host computer 1.
[0029]
The port number 72 and the logical device number 74 are numbers defined inside the storage subsystem 2 and identify each port 23 and logical device of the storage subsystem 2. In them, the number of the port to which the host computer 1 identified by the host name 71 is connected and the device number of the logical device assigned to that host computer are set.
[0030]
The interface number 73 is a number assigned to manage the interfaces 12 of each host computer 1; it is needed particularly when one host computer 1 has a plurality of interfaces 12. The pair of port number 72 and interface number 73 is an important element for describing the connection relationship between a host computer 1 and a logical device. For example, the host computer 1b shown in FIG. 1 has two interfaces 12, each connected to a different port 23. In such a case, even if one interface, or the line connecting it to the storage subsystem 2, becomes unusable, processing can continue as long as the other interface is connected to the logical device, which improves reliability.
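The host management table of FIG. 3 can be sketched the same way. In this hypothetical Python fragment (names and values are illustrative assumptions), a host with two interfaces reaches the same logical device over two (interface, port) pairs, which is the dual-path arrangement described above:

```python
from dataclasses import dataclass

@dataclass
class HostEntry:
    host_name: str   # host name 71
    port_no: int     # port number 72 (port of the storage subsystem)
    if_no: int       # interface number 73 (interface of the host computer)
    device_no: int   # logical device number 74

# Hypothetical entries: host "1b" has two interfaces, each wired to a
# different port, both leading to logical device 2.
host_table = [
    HostEntry("1b", port_no=0, if_no=0, device_no=2),
    HostEntry("1b", port_no=1, if_no=1, device_no=2),
]

def routes(table, host, device):
    """All (interface, port) pairs over which `host` can reach `device`."""
    return [(e.if_no, e.port_no) for e in table
            if e.host_name == host and e.device_no == device]
```

If either route fails, the other (interface, port) pair still connects the host to the device.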
[0031]
The management host computer 3 assigns a logical device to each host computer 1 with reference to the host management table and the logical device management table sent from the storage subsystem 2. The device allocation process will be described below.
[0032]
FIG. 4 is a flowchart showing the flow of processing performed by the volume manager 11 of each host computer 1. This process is performed when a user who uses the host computer 1 or an application program running on the host computer 1 newly needs a device.
[0033]
In step 1001, the volume manager 11 obtains from the user or application program information on the number and type of devices required. The user or application program specifies information such as the capacity, performance conditions, and reliability level of each device. The capacity is the device size described above. As the performance condition, information related to performance is designated, for example an access-speed class such as low-speed disk drive, high-speed disk drive, or cache-resident disk drive. As the reliability level, information related to device reliability is designated, such as RAID 0, RAID 1, RAID 5, double path, or remote mirror. With a double path, when the host computer 1 has a plurality of interfaces, a plurality of paths are provided so that the same device can be accessed through any of them; even if one path becomes unavailable, the device can still be accessed through another. With a remote mirror, the secondary storage device 5 holds a copy of the device in the storage subsystem 2; even if the storage subsystem 2 itself is disabled by an earthquake, fire, or the like, the data remains in the secondary storage device 5, improving reliability.
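A device request carrying these three attributes might be modeled as follows. This is a minimal sketch; the class name and the exact option strings are assumptions, not values defined by the patent:

```python
from dataclasses import dataclass

# Hypothetical option strings for the two attribute sets described above.
PERFORMANCE_CONDITIONS = {"low-speed disk", "high-speed disk", "cache-resident disk"}
RELIABILITY_LEVELS = {"RAID0", "RAID1", "RAID5", "double path", "remote mirror"}

@dataclass
class DeviceRequest:
    capacity_gb: int     # requested device size
    performance: str     # access-speed class
    reliability: str     # reliability level

    def validate(self):
        """Reject malformed requests before they reach the management manager."""
        if self.capacity_gb <= 0:
            raise ValueError("capacity must be positive")
        if self.performance not in PERFORMANCE_CONDITIONS:
            raise ValueError("unknown performance condition")
        if self.reliability not in RELIABILITY_LEVELS:
            raise ValueError("unknown reliability level")
        return True

req = DeviceRequest(capacity_gb=8, performance="high-speed disk", reliability="RAID5")
```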
[0034]
In step 1002, the volume manager 11 searches for a set of target IDs and LUNs that are not used on the interface 12 of the host computer 1.
[0035]
In step 1003, the volume manager 11 sends the capacity, performance condition, and reliability level specified in step 1001, together with the unused target ID and LUN pairs found in step 1002, to the management manager 31 of the management host computer 3, and requests a new device assignment. The management manager 31 searches for a device to allocate based on the received information and returns information specifying the host interface number, target ID, and LUN to be used to access the device. The processing of the management manager 31 performed here will be described later.
[0036]
In step 1004, the volume manager 11 receives this information from the management manager 31. In step 1005, based on the information received from the management manager 31, it changes the settings of the host computer 1 so that the new device can be used.
[0037]
In the case of a so-called open system, a device file is prepared for each device, and the host computer 1 accesses each device through its device file. Normally, device files are prepared when the device configuration processing of the host computer 1 is performed, and no device file is created for a device that did not exist at that time. Therefore, in step 1005, a device file for the newly assigned device is created. For example, in the Solaris operating system of Sun Microsystems, a new device is recognized and its device file is created with the "drvconfig" and "disks" commands, after which the host computer 1 can access the newly assigned device.
[0038]
Finally, in step 1006, the volume manager 11 notifies the user or application program of the assigned device file name, target ID, and LUN, and ends the process.
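The host-side sequence of FIG. 4 can be condensed into a short sketch. The `manager` object, its `request_device` method, and the Solaris-style device-file naming below are assumptions standing in for the management manager 31 and the operating system, not interfaces defined by the patent:

```python
def allocate_device(request, used_pairs, manager):
    """Sketch of the volume manager flow of FIG. 4 (steps 1002-1006).

    `used_pairs` is the set of (target ID, LUN) pairs already in use on
    the host interface; `manager.request_device` is a hypothetical call
    to the management manager 31.
    """
    # Step 1002: find an unused (target ID, LUN) pair on this interface.
    free = [(t, l) for t in range(16) for l in range(8)
            if (t, l) not in used_pairs]
    target_id, lun = free[0]

    # Step 1003: ask the management manager for a new device assignment.
    reply = manager.request_device(request, target_id, lun)

    # Steps 1004-1005: reconfigure the host so the device is usable,
    # e.g. by creating a device file (illustrative naming scheme).
    device_file = f"/dev/dsk/c{reply['if_no']}t{reply['target_id']}d{reply['lun']}"

    # Step 1006: report the assignment back to the caller.
    return device_file

class StubManager:
    """Stand-in for the management manager 31, for illustration only."""
    def request_device(self, request, target_id, lun):
        return {"if_no": 0, "target_id": target_id, "lun": lun}
```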
[0039]
FIG. 5 is a flowchart showing the flow of processing by the management manager 31 of the management host computer 3 when a new device is allocated.
[0040]
When the management manager 31 receives the device size, performance condition, reliability level, and other information sent from the host computer 1 in step 1101, it refers to the information set in the logical device management table and host management table it holds and searches for a device that meets the request. The devices searched here are those whose status 64 in the logical device management table is "offline" (step 1102). As a result of the search, the management manager 31 determines whether an "offline" device matching the request has been found (step 1103).
[0041]
When an "offline" device matching the request is found, the management manager 31 determines, from the target ID and LUN information received from the host computer 1 and the information set in the logical device management table and the host management table, the port number, target ID, and LUN to be used to connect the device to the host computer 1 (step 1104).
[0042]
Next, the management manager 31 instructs the storage subsystem 2 to set up the device with the logical device number found in step 1103 so that it can be accessed with the port number, target ID, and LUN determined in step 1104, and to bring it online. The storage subsystem 2 performs the settings according to the instruction from the management manager 31 and returns the result to the management manager 31 (step 1105).
[0043]
When the management manager 31 receives the result from the storage subsystem 2 (step 1106), it returns the interface number, target ID, and LUN to the volume manager 11 of the requesting host computer 1 (step 1107).
[0044]
On the other hand, if no "offline" device meets the request in step 1103, the management manager 31 searches for a logical device number whose status 64 in the logical device management table is "unmounted" (step 1108). If such a logical device number exists, the management manager 31 sends the storage subsystem 2 the device size, performance condition, reliability level, and other information requested by the host computer 1, and requests that a device be constructed. The storage subsystem 2 constructs a device with that device number in response to the request and returns the result to the management manager 31 (step 1109). When the management manager 31 receives the result, it performs the processing from step 1104 described above (step 1110).
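The two-stage search of FIG. 5, first for an existing "offline" device and then for an "unmounted" slot in which to construct one, might look like this in outline. The storage subsystem interaction is reduced to in-place updates, and all names and values are illustrative assumptions:

```python
def assign_device(devices, req_size_gb, req_config):
    """Sketch of the management manager flow of FIG. 5."""
    # Steps 1102-1103: search for an "offline" device matching the request.
    for d in devices:
        if (d["status"] == "offline" and d["size"] >= req_size_gb
                and d["config"] == req_config):
            d["status"] = "online"       # steps 1104-1107, simplified
            return d["no"]

    # Steps 1108-1110: otherwise construct a device in an "unmounted" slot.
    for d in devices:
        if d["status"] == "unmounted":
            d.update(status="online", size=req_size_gb, config=req_config)
            return d["no"]

    raise RuntimeError("no device satisfies the request")

devices = [
    {"no": 1, "status": "online",    "size": 4, "config": "RAID1"},
    {"no": 2, "status": "offline",   "size": 8, "config": "RAID5"},
    {"no": 3, "status": "unmounted", "size": 0, "config": ""},
]
```

Called twice with the same request, the sketch first reuses the offline device and then falls back to constructing one in the unmounted slot.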
[0045]
FIG. 6 is a flowchart showing the flow of processing executed by the volume manager 11 when the host computer 1 returns a device that is no longer needed.
[0046]
In the device return process, the volume manager 11 first receives information on the unneeded device, such as its device file name, from the user or application program (step 1201). Based on the received information, the volume manager 11 acquires the interface number, target ID, and LUN of the device to be returned (step 1202).
[0047]
Next, the volume manager 11 changes the setting of the host computer 1 as necessary so that the host computer 1 does not use the device. Specifically, processing such as device file deletion is performed (step 1203). Subsequently, the volume manager 11 notifies the management manager 31 of the interface number, target ID, and LUN acquired in step 1202, and ends the processing (step 1204).
[0048]
FIG. 7 is a flowchart showing the flow of processing performed by the management manager 31 when the host computer 1 returns a device that is no longer needed.
[0049]
The management manager 31 receives the interface number, target ID, and LUN from the host computer 1 (step 1301). Based on them, it instructs the storage subsystem 2 to take the device being returned offline. In response, the storage subsystem 2 takes the specified device offline and returns a logical device management table reflecting the result to the management manager 31 (step 1302). When the management manager 31 receives the logical device management table from the storage subsystem 2, it stores the table and completes the process (step 1303).
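The return path of FIGs. 6 and 7 is the mirror image of allocation. A minimal sketch of the manager-side step, with the storage subsystem reduced to an in-memory table update (all names are assumptions):

```python
def return_device(logical_table, device_no):
    """Take the returned device offline (step 1302) and hand the updated
    logical device management table back to the manager (step 1303)."""
    for entry in logical_table:
        if entry["no"] == device_no:
            entry["status"] = "offline"
    return logical_table

table = [{"no": 2, "status": "online"}, {"no": 3, "status": "online"}]
table = return_device(table, 2)
```

The returned device goes "offline" rather than "unmounted", so it stays defined and can be reassigned to a later request without being rebuilt.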
[0050]
In the first embodiment described above, the management host computer is provided and the management manager is arranged on it, but the management manager function does not necessarily have to reside in the management host computer. For example, it may reside in any of the host computers 1a, 1b, ..., 1n. The management manager function can also be provided in the storage subsystem; in this case, each host computer 1a, 1b, ... may send requests to and receive information from the storage subsystem directly via the interface.
[0051]
FIG. 8 is a simplified block diagram showing the configuration of a computer system according to the second embodiment of the present invention.
[0052]
The computer system in this embodiment includes a plurality of host computers 1 (host computer 1a, host computer 1b, ..., host computer 1n), a plurality of storage subsystems 2a, 2b, ..., 2m, a management host computer 3, a network 4, and a fiber channel switch 8.
[0053]
The host computer 1 has a volume manager 11 as in the first embodiment. The volume manager 11 communicates with the management manager 31 placed in the management host computer 3 and operates. Further, the host computer 1 has an interface (I / F) 12 and is connected to the fiber channel switch 8 through the interface 12.
[0054]
Like the storage subsystem 2 in the first embodiment, each of the storage subsystems 2a, 2b, ..., 2m has a disk unit 21, a disk controller 22, a port 23, and a network interface (network I/F) 25 connected to the network. As in the first embodiment, there may be a plurality of disk units 21 and ports 23, but for simplicity of explanation it is assumed here that there is one of each.
[0055]
The fiber channel switch 8 has a plurality of ports 81. Each port 81 is connected to one of the interfaces 12 of the host computers 1a, 1b, ... or one of the ports 23 of the storage subsystems 2a, 2b, ..., 2m. The fiber channel switch 8 also has a network interface 82 connected to the network 4. The fiber channel switch 8 allows the host computers 1a, 1b, ... to freely access the storage subsystems 2a, 2b, ..., so in this configuration basically every host computer 1 can access every storage subsystem 2.
The management host computer 3 has a management manager 31 as in the first embodiment. The management manager 31 operates in communication with the volume manager 11 of each host computer 1a, 1b,.
[0056]
FIG. 9 is a table configuration diagram showing an example of a logical device management table held by the management host computer 3. The logical device management table in this embodiment is used for managing the same information as the logical device management table held by the storage subsystem 2 in the first embodiment. Hereinafter, differences from the logical device management table in the first embodiment will be mainly described.
[0057]
In this embodiment, the management host computer 3 manages all devices included in all storage subsystems 2 with unique numbers. For the purpose of this management, the logical device management table has size 103, configuration 104, status 105, LUN 106, WWN (World Wide Name) 102, and connected host name 107 as information for each device.
[0058]
The size 103, configuration 104, status 105, and LUN 106 are the same as the corresponding information held in the logical device management table of the first embodiment. The WWN 102 is information set in the port 23 of the storage subsystem 2: an identifier uniquely assigned to each Fiber Channel interface to distinguish the ports. The WWN is also called N_PORT_NAME. The connected host name 107 identifies the host computer that is permitted to connect to the device.
[0059]
If the plurality of host computers 1 connected to the fiber channel switch 8 could freely access any storage subsystem 2, system security would be a problem. To address this kind of security problem, Japanese Patent Laid-Open No. 10-333839, for example, discloses a technique that permits access to a storage device connected via Fiber Channel only from specific host computers. In this embodiment, too, the storage subsystems 2 are assumed to have security means such as those disclosed in JP-A-10-333839 in order to maintain system security. Since this is not directly related to the essence of the present invention, a detailed description is omitted here.
[0060]
In this embodiment, a WWN is also given to the interface 12 of each host computer 1. The management host computer 3 manages the pairs of host name 108 and WWN 109 using the table shown in FIG.
[0061]
Hereinafter, operations of the volume manager 11 and the management manager 31 will be described.
[0062]
In this embodiment, the processing executed by the volume manager 11 when a new device is assigned to a host computer is basically the same as the processing in the first embodiment shown in FIG. 4. That is, when the volume manager 11 receives information on the number and type of devices required from the user or application program, it requests the management manager 31 to allocate a new device based on that information. When the management manager 31 finishes assigning the new device, the volume manager 11 changes the device settings so that the new device can be used from the host computer 1.
[0063]
FIG. 11 shows a flowchart of processing executed by the management manager 31 when a new device is allocated in this embodiment.
[0064]
Similarly, the processing performed by the management manager 31 is performed in substantially the same manner as the processing of the management manager in the first embodiment shown in FIG. In FIG. 11, the same reference numerals as those in FIG. 5 are used for portions where the same processing as that shown in FIG. 5 is performed. In the following, a part where processing different from that in FIG. 5 is performed will be mainly described, and description of a part where processing identical to that in FIG. 5 is performed will be omitted.
[0065]
In this embodiment, the storage subsystems 2 prohibit access from all the host computers 1 in the initial state, so that a device is not inadvertently accessed from a host computer to which it has not been assigned. For this reason, when the management manager 31 instructs a storage subsystem 2 to bring a device online in step 1105, it also instructs the storage subsystem 2 to permit access to the newly allocated device from the host computer 1. In this instruction, the management manager 31 notifies the storage subsystem 2 of the WWN of the host computer 1 that is to be allowed to access the device. When a host computer 1 accesses the device, the storage subsystem 2 determines whether the access is permitted based on the WWNs received from the management manager 31 (step 2105).
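The per-device WWN check of step 2105 amounts to a small access-control map on the subsystem side. The following is a hypothetical sketch; the class, method names, and sample WWN are assumptions, not interfaces from the patent:

```python
class DeviceAccessControl:
    """Each device starts with no permitted hosts; the management manager
    registers a host's WWN at allocation time, and every later access is
    checked against the registered set."""

    def __init__(self):
        self._permitted = {}             # device number -> set of host WWNs

    def permit(self, device_no, wwn):
        """Called when the manager instructs the subsystem (step 2105)."""
        self._permitted.setdefault(device_no, set()).add(wwn)

    def may_access(self, device_no, wwn):
        """Checked by the subsystem on every host access."""
        return wwn in self._permitted.get(device_no, set())

acl = DeviceAccessControl()
acl.permit(2, "50:06:0e:80:00:00:00:01")   # illustrative WWN
```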
[0066]
Following the processing of step 2105, the management manager 31 changes the settings of the fiber channel switch 8. Suppose, as shown in FIG. 12, that the host computers A and B access the disk units (devices) a and b, while the host computer C accesses only the disk unit (device) c. In this case, the management manager 31 sets up paths in the fiber channel switch 8 so that the ports (port d, port e) connected to the disk units a and b cannot be reached from the port c connected to the host computer C. This has the same effect as dividing the switch in two. Performing such path setting is called zoning. Zoning prevents a device from being accessed by a host computer that was never granted access, and because the data flows are separated, it can also improve performance (step 2106).
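The zoning decision for the FIG. 12 scenario can be derived mechanically from a host-to-device-port map. In this sketch the switch-port names are hypothetical (hosts A, B, C assumed on ports "a", "b", "c"; disk units a and b on ports "d" and "e"; disk unit c on an assumed port "f"):

```python
def build_zones(access_map):
    """Derive the permitted (host port, device port) pairs; any pair not
    in the result is blocked by the switch, which is the zoning effect."""
    zones = set()
    for host_port, device_ports in access_map.items():
        for device_port in device_ports:
            zones.add((host_port, device_port))
    return zones

# Hosts A and B (ports "a", "b") share disk units a, b (ports "d", "e");
# host C (port "c") sees only disk unit c (port "f").
zones = build_zones({"a": ["d", "e"], "b": ["d", "e"], "c": ["f"]})
```

Host C's port never appears paired with ports "d" or "e", so the switch carries no traffic between them, just as if the fabric had been split in two.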
[0067]
After the above processing, the management manager 31 performs the processing of steps 1106 and 1107.
[0068]
FIG. 13 is a simplified block diagram showing a configuration example in the third embodiment of the computer system to which the present invention is applied.
[0069]
In the computer system of this embodiment, a plurality of host computers 1a′, 1b′, ..., 1n′ (collectively referred to as host computers 1′) are connected to the file server 9 via network interfaces (I/F) 12′ and a network 4. The file server 9 is connected to the storage device subsystem 2 via an interface (I/F) 92. The storage device subsystem 2 and the secondary storage device 5, a storage device disposed at a remote location, are the same as those in the first embodiment.
[0070]
The file server 9 includes a network interface 91 connected to each host computer 1 ′, a plurality of interfaces 32 connected to the storage subsystem 2, a management manager 93, and a server program 94.
[0071]
Similar to the management manager 31 in the first embodiment, the management manager 93 performs device allocation in response to requests. The server program 94 is a file server program, such as NFS (Network File System), that provides file access via a network. The server program 94 provides the means by which the host computers 1′ access the file systems that the file server 9 has created in the storage subsystem 2.
[0072]
The storage subsystem 2 and the file server 9 may be configured as a so-called NAS (Network Attached Storage), in which case each host computer 1′ sees them as a single storage device.
[0073]
The client program 11′ of the host computer 1′ is a program that communicates with the server program 94 on the file server 9 and makes the file systems created by the file server 9 in the storage device subsystem 2 available to application programs running on the host computer 1′. Depending on the system configuration, the client program 11′ may be incorporated in the operating system (not shown) on the host computer 1′. The client program 11′ requests the management manager 93 to create a new file system or to change the size of an existing file system.
[0074]
To make it possible to change the size of an existing file system while the host computer 1′ is in operation, the storage subsystem of this embodiment has a function of moving the data in a given logical device to a physical disk unit different from the one on which the logical device currently resides. As specific technical means for realizing such a function, the known technique disclosed in, for example, Japanese Patent Laid-Open No. 9-274544 can be applied, so a detailed description is omitted in this specification.
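The migration function just described can be sketched as follows: the data moves to a larger area on a different physical disk while the logical device's identity, as seen by the host, stays fixed. This is an assumption-laden toy model (the class, attribute names, and block representation are illustrative), not the patented mechanism.

```python
class LogicalDevice:
    """Sketch: a logical device whose backing data can move between
    physical disks without changing the identity the host addresses."""

    def __init__(self, name, physical_disk, blocks):
        self.name = name                  # identity seen by the host (fixed)
        self.physical_disk = physical_disk
        self.blocks = list(blocks)

    def migrate(self, new_disk, new_capacity):
        # Copy the existing blocks to a (possibly larger) area on a
        # different physical disk; the host keeps using the same name.
        assert new_capacity >= len(self.blocks)
        self.blocks = self.blocks + [None] * (new_capacity - len(self.blocks))
        self.physical_disk = new_disk


dev = LogicalDevice("lv0", "disk-1", ["d0", "d1"])
dev.migrate("disk-2", 4)
print(dev.physical_disk, len(dev.blocks), dev.blocks[:2])
```

The invariant worth noting is that `dev.name` never changes across `migrate`, which is what lets the resize happen while the host computer keeps running.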
[0075]
FIG. 14 is a flowchart showing the flow of processing performed when the client program 11 'of the host computer 1' constructs a new file system.
[0076]
This processing is performed when a user who uses the host computer 1 'or an application program running on the host computer 1' newly needs a file area.
[0077]
The client program 11′ accepts the designation of information about the required device in response to a request from a user or an application program. The information acquired here includes the required device capacity, performance conditions, reliability level, and the like, as in step 1001 of the first embodiment shown in FIG. 4 (step 2001).
[0078]
Next, the client program 11 ′ transmits information such as the capacity, performance condition, reliability level, etc. specified in step 2001 to the management manager 93 and requests a new file system area. Based on the information received from the client program 11 ', the management manager 93 searches for and prepares a device area that can be allocated, and returns the result to the client program 11'. The processing of the management manager 93 performed at this time will be described later (step 2002).
[0079]
The client program 11′ receives a response from the management manager 93 to the request for a new area. The response received at this time includes a mount point; in the case of NFS, for example, it includes the host name (or IP address) of the file server and a directory name (step 2003). The client program 11′ mounts the file system based on the information received from the management manager 93 (step 2004). Finally, the client program 11′ notifies the user or application program of the assigned mount point and ends the processing (step 2005).
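The client-side flow of steps 2001-2005 can be sketched as a request/response exchange ending in an NFS-style mount point. All names here (`allocate_area`, `FakeManager`, the host and directory strings) are illustrative assumptions standing in for the management manager 93 and its protocol, which the patent does not specify at this level.

```python
def request_new_file_system(manager, capacity_gb, reliability, performance):
    """Sketch of the client program's steps 2001-2005: send the device
    requirements, receive a mount point, and return it for mounting."""
    reply = manager.allocate_area(capacity_gb, reliability, performance)
    # NFS-style mount point: "server:/exported/directory" (step 2003).
    return "{host}:{directory}".format(**reply)


class FakeManager:
    """Stand-in for the management manager 93 (step 2002)."""

    def allocate_area(self, capacity_gb, reliability, performance):
        # A real manager would search for an allocatable device area here.
        return {"host": "fileserver9", "directory": "/export/fs1"}


mount_point = request_new_file_system(FakeManager(), 10, "RAID5", "high")
print(mount_point)  # fileserver9:/export/fs1
```

The actual mount (step 2004) would then hand `mount_point` to the operating system's NFS mount facility, which is outside this sketch.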
[0080]
FIG. 15 is a flowchart showing the flow of processing performed by the management manager 93 in response to a request for a new area from the client program 11 ′.
[0081]
This processing is basically the same as that of the management manager 31 in the first embodiment shown in FIG. 5, except that the processing of step 1107 in FIG. 5 is replaced by the processing of steps 2107, 2111, and 2112.
[0082]
In step 1107 of FIG. 5, information such as the target ID is passed to the requesting host computer. In this embodiment, this information is processed further: the management manager 93 passes the device information, such as the target ID, to the server program 94 (step 2107) and receives mount point information from the server program 94 (step 2111). It then passes the mount point information to the requesting client program 11′ and ends the processing (step 2112).
[0083]
FIG. 16 is a flowchart showing the flow of processing executed by the server program that has received information about the device from the management manager.
[0084]
When the device information is passed from the management manager 93 (step 2201), the server program 94 reconfigures the devices of the file server 9. Specifically, this processing is the same as that of step 1005 in the first embodiment shown in FIG. 4 (step 2202).
[0085]
Subsequently, the server program 94 creates a file system in the newly created device (step 2203), and returns information indicating the mount point of the file system to the management manager 93 (step 2204).
[0086]
Through the above processing, a new file system that can be used from the host computer 1 'can be added.
[0087]
FIG. 17 is a flowchart showing the flow of processing performed by the management manager 93 when changing the size of an existing file system. This processing differs from the processing for a new file system request shown in FIG. 15 in the following points.
[0088]
To change the size of an existing file system, the user or application program issues a request to the client program 11′ with information such as the mount point of the file system whose size is to be changed and the size by which it is to be expanded or reduced. The client program 11′ then requests the management manager 93 to change the size of the file system, using the information specified by the user or application program. The management manager 93 receives the information sent from the client program 11′, such as the mount point of the target file system and the size to be expanded (step 2301).
[0089]
Based on the mount point received from the client program 11′, the management manager 93 obtains information such as the target ID and LUN of the logical device storing the target file system, and thereby identifies the logical device. The management manager 93 then obtains information such as the type of the logical device, that is, its reliability and performance (step 2302). Subsequently, based on the information obtained in steps 2301 and 2302, the management manager 93 secures, in the same way as when adding a new file system, a logical device of the same type that has a free area of the size of the file system after the change (steps 1102 to 1110).
[0090]
Thereafter, in step 2304, the management manager 93 instructs the storage subsystem 2 to move the data from the logical device on which the file system has been recorded to the newly secured logical device. The data movement is transparent to the file server program 94, and since the host computer 1′ accesses the storage subsystem 2 via the file server program 94, it is also transparent to the host computer 1′. The host computer 1′ therefore does not need to stop its processing during the data movement.
[0091]
When the data movement is completed, the management manager 93 instructs the server program 94 to extend the file system. Even if the capacity of the underlying device increases, the added capacity cannot be used by the file system until the file system is rebuilt. After instructing the server program 94 to extend the file system, the management manager 93 notifies the client program 11′ of the completion of the processing and ends the processing (step 2305).
[0092]
Through the above processing, the size of an existing file system can be changed while the host computer 1′ is operating. When the size of an existing file system is changed, the client program 11′ can use the expanded file system as it is after receiving the notification from the management manager, so the processing of steps 2004 and 2005 in FIG. 14 is unnecessary in this case.
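The resize flow of FIG. 17 can be sketched end to end: secure a free logical device of the same type at the new size, move the data, then extend the file system so the added capacity becomes usable. The dictionary-based device pool and field names below are illustrative assumptions, not the patent's data structures.

```python
def resize_file_system(devices, current_dev, new_size):
    """Sketch of FIG. 17: steps 1102-1110 (secure a same-type device of the
    new size), 2304 (move data), and 2305 (extend the file system)."""
    # Secure a free logical device of the same type with the new size.
    target = next(d for d in devices
                  if d["free"] and d["type"] == current_dev["type"]
                  and d["size"] >= new_size)
    # Move the data; transparent to the host, which accesses the storage
    # only through the file server.
    target["data"] = current_dev["data"]
    target["free"] = False
    current_dev["free"] = True
    # Extend the file system: until this, the extra capacity is unusable.
    target["fs_size"] = new_size
    return target


pool = [{"free": True, "type": "RAID5", "size": 20, "data": None, "fs_size": 0}]
old = {"free": False, "type": "RAID5", "size": 10, "data": "files", "fs_size": 10}
new = resize_file_system(pool, old, 20)
print(new["data"], new["fs_size"], old["free"])  # files 20 True
```

Note the ordering: the file-system extension is the last step, matching the paragraph's remark that added device capacity is invisible to the file system until it is rebuilt.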
[0093]
FIG. 18 is a simplified block diagram showing a configuration example in the fourth embodiment of a computer system to which the present invention is applied.
[0094]
The computer system of this embodiment includes a plurality of host computers 1″ (host computers 1a″, 1b″, ..., 1n″), a management host computer 3, a storage device subsystem 2′, and a secondary storage device 5. Each host computer 1″ and the storage device subsystem 2′ are connected via a fiber channel switch 8. The host computers 1″, the storage device subsystem 2′, and the fiber channel switch 8 are also connected to one another via a network 4.
[0095]
The fiber channel switch 8 includes a plurality of ports 81 and realizes data transfer between the devices connected to the ports 81 by switching the connections between these ports. The fiber channel switch 8 also includes a network interface 82 for communication via the network 4. Each host computer 1″ includes a volume manager 11″ and one or more interfaces 12. Each interface 12 of a host computer 1″ is connected to one of the ports 81 of the fiber channel switch 8.
[0096]
The storage subsystem 2′ has a plurality of clusters 26 and an inter-controller connection mechanism 27 that connects the clusters 26 to one another. Each cluster 26 includes a channel processor 23′, a drive processor 22′, and a plurality of disk units 21. The channel processor 23′ and the drive processor 22′ in the same cluster are coupled by a bus 28 that is faster than the inter-controller connection mechanism 27. Each channel processor 23′ includes one or more ports 231 and is connected to the secondary storage device 5 or, via the fiber channel switch 8, to the host computers 1″. A plurality of disk units 21 are connected to each drive processor 22′. In this embodiment, one or more logical devices are configured by combining a plurality of disk units 21, or by a single disk unit 21; it is assumed that disk units 21 belonging to different clusters 26 cannot be combined to configure a single logical device.
[0097]
The channel processor 23′ presents one or more logical devices to each host computer 1″ and accepts accesses from each host computer 1″. In principle, a channel processor 23′ manages the logical devices configured from the disk units 21 connected to the drive processor 22′ in its own cluster 26, because communication between the channel processor 23′ and the drive processor 22′ in the same cluster 26 is faster than communication across clusters. However, when the channel processor 23′ of a certain cluster 26 stops operating due to a failure or the like, the channel processor 23′ of another cluster 26 takes over its processing. The channel processor 23′ determines which drive processor 22′ is connected to the disk units 21 holding the logical device designated by the host computer 1″, and passes the processing request to that drive processor 22′. The drive processor 22′ interprets the request from the channel processor 23′, generates a disk access request for each disk unit 21 on which the logical device is placed, and sends it to the corresponding disk unit 21.
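The two-stage routing just described can be sketched as follows: the channel processor maps the named logical device to a drive processor, which then fans the request out to each disk unit holding the device. The class names and device names are illustrative assumptions.

```python
class DriveProcessor:
    """Sketch of drive processor 22': fans a request out per disk unit."""

    def __init__(self, placement):
        self.placement = placement  # logical device -> list of disk units

    def access(self, logical_device, request):
        # One disk access request per disk unit on which the device is placed.
        return [(unit, request) for unit in self.placement[logical_device]]


class ChannelProcessor:
    """Sketch of channel processor 23': routes the host's request to the
    drive processor that owns the named logical device (same-cluster map)."""

    def __init__(self, device_map):
        self.device_map = device_map  # logical device -> drive processor

    def handle(self, logical_device, request):
        drive = self.device_map[logical_device]
        return drive.access(logical_device, request)


dp = DriveProcessor({"ld0": ["disk21a", "disk21b"]})
cp = ChannelProcessor({"ld0": dp})
print(cp.handle("ld0", "read"))  # [('disk21a', 'read'), ('disk21b', 'read')]
```

Failover (another cluster's channel processor taking over the `device_map`) would amount to handing the same mapping to a different `ChannelProcessor`, which this sketch omits.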
[0098]
The host computer 1″ has substantially the same configuration as the host computer 1 in the first embodiment, but the function of the volume manager 11″ operating on it differs slightly. In addition to the logical device allocation and return processing performed by the volume manager 11 in the first embodiment, the volume manager 11″ has a function of presenting a plurality of logical devices to higher-level application programs as a single separate logical device. Hereinafter, a logical device created by the volume manager 11″ is referred to as an LVOL, to distinguish it from the logical devices managed by the storage subsystem 2′. The volume manager 11″ can combine a plurality of logical devices into one apparently larger LVOL, or divide one logical device into a plurality of areas and provide those areas as LVOLs to application programs on the host computer 1″. It is also possible to expand the capacity of an LVOL by combining a new logical device with an existing LVOL.
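The LVOL composition described above can be sketched as a mapping from LVOL names to lists of constituent logical devices, with extension appending a further device. This is a toy model; the class name and the device-file names (which mimic the style of FIG. 20) are assumptions.

```python
class VolumeManager:
    """Sketch of the volume manager 11'': combine several logical devices
    into one larger LVOL, and extend an LVOL with a further device."""

    def __init__(self):
        self.lvols = {}  # LVOL name -> list of (device_file, size)

    def create_lvol(self, name, devices):
        self.lvols[name] = list(devices)

    def extend_lvol(self, name, device):
        # Capacity expansion: combine a new logical device with the LVOL.
        self.lvols[name].append(device)

    def lvol_size(self, name):
        # The LVOL's apparent capacity is the sum of its devices' sizes.
        return sum(size for _, size in self.lvols[name])


vm = VolumeManager()
vm.create_lvol("lvol0", [("c2t1d0", 1000), ("c3t2d0", 2000)])
vm.extend_lvol("lvol0", ("c4t0d0", 1000))
print(vm.lvol_size("lvol0"))  # 4000
```

Dividing one logical device into several LVOL areas, the other direction the paragraph mentions, would map multiple LVOL names onto slices of a single device and is omitted here for brevity.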
[0099]
FIG. 19 is a flowchart showing a flow of processing performed by the volume manager 11 ″ when a volume is newly allocated in the present embodiment.
[0100]
In the processing described here, step 1002 of the device allocation processing in the first embodiment shown in FIG. 4 is replaced with step 1002′, and step 1006 is replaced with steps 1005′ and 1006′. The other steps are the same as the corresponding steps in FIG. 4. The processing performed in steps 1002′, 1005′, and 1006′ is described below.
[0101]
In step 1002′, an unused WWN/LUN pair is retrieved from the LVOL management table managed by the volume manager 11″. An example of the LVOL management table is shown in FIG. 20. In the LVOL management table, an LVOL name 151, a device file name 152, a size 153, and the WWN 154 and LUN 155 of each device are registered. The LVOL name 151 is an identifier assigned to identify the LVOL that the volume manager 11″ provides to application programs. The device file name 152 is the name of a logical device constituting the LVOL; the volume manager 11″ manages the logical devices belonging to each LVOL by their device file names. The size 153 indicates the capacity of each logical device constituting the LVOL. Since one LVOL may be composed of a plurality of logical devices, a plurality of device files may belong to one LVOL name.
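The search of step 1002′ can be sketched as scanning the LVOL management table for a WWN/LUN combination not yet registered. The table layout (a list of row dictionaries) and the WWN labels are illustrative assumptions about FIG. 20's structure.

```python
def find_unused_pair(lvol_table, known_wwns, luns_per_wwn):
    """Sketch of step 1002': return the first WWN/LUN combination that does
    not appear in the LVOL management table, or None if all are in use."""
    used = {(row["wwn"], row["lun"]) for row in lvol_table}
    for wwn in known_wwns:
        for lun in range(luns_per_wwn):
            if (wwn, lun) not in used:
                return wwn, lun
    return None  # no free pair available


table = [
    {"lvol": "lvol0", "device_file": "c2t1d0", "wwn": "wwn-1", "lun": 0},
    {"lvol": "lvol0", "device_file": "c2t1d1", "wwn": "wwn-1", "lun": 1},
]
print(find_unused_pair(table, ["wwn-1"], 4))  # ('wwn-1', 2)
```

Building the `used` set once keeps the scan linear in the table size rather than re-scanning the table for every candidate pair.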
[0102]
In step 1005′, the volume manager 11″ creates a new LVOL using the logical device assigned by the management manager 31 and registers its contents in the LVOL management table. In step 1006′, the assigned LVOL name is notified to the user and the processing ends.
[0103]
FIG. 21 is a flowchart showing the processing of the volume manager when the LVOL capacity is expanded in response to a request from the user or an application program.
[0104]
When expanding the capacity of an LVOL, a new logical device is prepared and a new LVOL is constructed by combining it with the logical devices that constitute the LVOL to be expanded. The newly prepared logical device is usually of the same type as the logical devices constituting the LVOL to be expanded. In this embodiment, the volume manager 11″ determines the type of the logical devices constituting the LVOL to be expanded, and a logical device of the same type is secured.
[0105]
In this processing, the volume manager 11″ first receives the LVOL name of the expansion target LVOL and the capacity to be added from the user or application program (step 2501). Next, the volume manager 11″ inquires of the management manager 31 about the types of the logical devices constituting the expansion target LVOL (step 2502). The volume manager 11″ then searches the LVOL management table for an unused WWN/LUN pair (step 2503), and transmits to the management manager 31 information including the logical device type obtained in step 2502 and the unused WWN/LUN combination found in step 2503 (step 2504). When information about the newly assigned logical device is received from the management manager 31 (step 2505), the volume manager 11″ reconfigures the host computer 1″ so that the newly assigned logical device can be used from the host computer 1″ (step 2506). Finally, the volume manager 11″ adds the newly allocated logical device to the expansion target LVOL to expand its capacity, and ends the processing (step 2507).
[0106]
When the volume manager 11″ requests allocation of a new logical volume in step 1003 of FIG. 19 or step 2504 of FIG. 21, the management manager 31 in either case searches for and assigns a device matching the device type and capacity requested by the volume manager 11″. For this processing, the management manager 31 holds a logical device management table as shown in FIG. 9 and a cluster information table in which information about the clusters 26 in the storage subsystem 2′ is set.
[0107]
FIG. 22 is a table configuration diagram illustrating an example of a cluster information management table.
[0108]
The cluster information management table has an entry corresponding to each cluster 26. For each cluster 26, a cluster number 161 identifying the cluster, the port numbers 162 of the ports included in the cluster, and the WWNs 163 assigned to those ports are set. As shown in the figure, when a plurality of ports exist in one cluster 26, each port number and WWN is set in the entry corresponding to that cluster. As described above, when a logical device is constructed on the disk units 21 connected to a certain drive processor 22′, it is desirable from a performance viewpoint that the logical device be accessed from a port 231 in the same cluster. Based on the cluster information table, the management manager 31 therefore sets up the device so that the port 231 used for access from the host computer 1″ and the drive processor 22′ to which the disk units 21 holding the newly assigned logical device are connected belong to the same cluster.
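The same-cluster port selection described above can be sketched as a lookup in the cluster information table: given the cluster in which the new logical device was built, pick a port belonging to that cluster. The table rows and port/WWN labels are illustrative assumptions about FIG. 22.

```python
def port_for_device(cluster_table, device_cluster):
    """Sketch: choose a port in the same cluster as the newly built logical
    device, so host access stays on the fast intra-cluster bus 28."""
    for row in cluster_table:
        if row["cluster"] == device_cluster:
            return row["ports"][0]  # any port of that cluster qualifies
    raise LookupError("no port registered for cluster %r" % device_cluster)


cluster_table = [
    {"cluster": 0, "ports": [("port0", "wwn-a"), ("port1", "wwn-b")]},
    {"cluster": 1, "ports": [("port2", "wwn-c")]},
]
print(port_for_device(cluster_table, 1))  # ('port2', 'wwn-c')
```

A fuller version might balance load across a cluster's ports instead of always taking the first one; the constraint that matters here is only cluster membership.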
[0109]
FIG. 23 is a flowchart showing the flow of device allocation processing by the management manager 31.
[0110]
The device assignment processing in this embodiment is performed in substantially the same manner as the processing in the second embodiment shown in FIG. 11, but some of the processing differs because of the difference in the configuration of the storage subsystem. Specifically, in step 1109, when the management manager 31 requests the storage subsystem 2′ to construct a new device, the storage subsystem 2′ constructs a device in accordance with the request. When the logical device has been constructed in the storage subsystem 2′, the management manager 31 receives information indicating in which cluster 26 the newly constructed logical device was built (step 2610).
[0111]
The management manager 31 refers to the information about the logical device received from the storage subsystem 2′ and to the cluster information management table, and determines from which port the device can be accessed. The management manager 31 further determines the LUN of the device to be newly allocated based on the unused LUN information (step 2604). Finally, the management manager 31 sends the information needed to access the newly allocated logical volume, such as the WWN and LUN, to the volume manager 11″, and ends the processing (step 2607).
[0112]
The processes other than those described here are the same as those of the management manager in the second embodiment shown in FIG. 11, and the same reference numerals as those in FIG. 11 are used for them.
[0113]
According to the embodiments described above, device allocation can be performed in accordance with a request even while the host computer is operating. Furthermore, even in an environment in which a plurality of devices are connected through fiber channel switches, devices that meet the requirements can easily be assigned to a host computer.
[0114]
Needless to say, the present invention is not limited to the above-described embodiments, and can take various forms within the scope of the gist of the present invention.
[0115]
【The invention's effect】
According to the present invention, storage devices can be dynamically allocated to a host computer as necessary.
[Brief description of the drawings]
FIG. 1 is a block diagram illustrating a configuration example of a computer system according to a first embodiment of this invention.
FIG. 2 is a table configuration diagram showing an example of a logical device management table held by a storage device subsystem.
FIG. 3 is a table configuration diagram showing an example of a host management table held by a management manager.
FIG. 4 is a flowchart showing the flow of processing executed by the volume manager of the host computer.
FIG. 5 is a flowchart showing a flow of processing executed by a management manager.
FIG. 6 is a flowchart showing a flow of processing by a volume manager in device return processing.
FIG. 7 is a flowchart showing a flow of processing by a management manager in device return processing.
FIG. 8 is a block diagram illustrating a configuration example of a computer system according to a second embodiment of this invention.
FIG. 9 is a table configuration diagram showing an example of a logical device management table held by a management manager.
FIG. 10 is a table configuration diagram showing an example of a table for managing a correspondence relationship between a host computer and a WWN held by a management manager.
FIG. 11 is a flowchart showing a flow of processing by a management manager.
FIG. 12 is an explanatory diagram showing a zoning function of the fiber channel switch.
FIG. 13 is a block diagram illustrating a configuration example of a computer system according to a third embodiment of this invention.
FIG. 14 is a flowchart showing a flow of processing by a client program.
FIG. 15 is a flowchart showing a flow of processing by a file server management manager.
FIG. 16 is a flowchart showing a flow of processing by a server program of a file server.
FIG. 17 is a flowchart showing a flow of processing performed by the management manager when expanding a file system.
FIG. 18 is a block diagram illustrating a configuration example of a computer system according to a fourth embodiment of this invention.
FIG. 19 is a flowchart showing a flow of processing by the volume manager.
FIG. 20 is a table configuration diagram showing an example of an LVOL management table.
FIG. 21 is a flowchart showing a flow of processing performed by the volume manager when expanding an LVOL.
FIG. 22 is a table configuration diagram showing an example of a cluster information table.
FIG. 23 is a flowchart showing a flow of processing performed by the management manager when extending an LVOL.
[Brief description of symbols]
1 ... Host computer,
2 ... Storage subsystem
3 ... Host computer for management
4 ... Network
5 ... Secondary storage device
8 ... Fiber Channel switch
11 ... Volume manager
21 ... Disk unit
22 ... Disk controller
23 ... Port
31 ... Management manager
81 ... Port

Claims (15)

  1. A computer system comprising: a first computer; a storage device subsystem having a storage device that holds data accessed from the first computer; and a second computer having device management information relating to the storage device included in the storage device subsystem and host management information indicating a state of assignment of the storage device to the first computer, wherein
    The first computer has request means that accepts a request for a new storage device from a user or an application program and requests the second computer to allocate a new storage device; and the second computer comprises: determining means for determining, in response to a request from the request means and by referring to the device management information and the host management information, a storage device that can be allocated to the first computer; and changing means for changing the setting of the storage device subsystem so that the storage device determined by the determining means can be accessed from the first computer.
  2.   The computer system according to claim 1, wherein the storage device is at least a part of a storage area formed in a physical storage device included in the storage device subsystem.
  3.   The computer system according to claim 1, wherein the request means sends, to the second computer together with the allocation request, information specifying conditions of the storage device to be allocated.
  4.   The computer system according to claim 3, wherein the determining means refers to the device management information and selects, as the assignable storage device, a storage device that satisfies the conditions specified by the request means and is in an offline state.
  5.   The computer system according to claim 4, wherein the conditions include information designating at least one of required performance and reliability of the storage device.
  6.   The computer system according to claim 1, wherein the request means includes means for receiving a request for changing the capacity of an existing storage device and transmitting the request to the second computer; the second computer has selecting means for selecting, according to the change request, a storage device whose capacity matches the storage device after the change, and means for instructing the storage subsystem to move the data held in the existing storage device to the storage device selected by the selecting means; and the storage subsystem includes means for moving the data in response to the instruction.
  7.   The computer system according to claim 1, wherein the first computer has means for providing a plurality of storage devices to the application program as one logical device.
  8.   The computer system according to claim 7, wherein the request means includes means for requesting the second computer, in response to a request for expansion of the capacity of a device already provided to the application program, to allocate a new storage device having the capacity required for the requested expansion, and the providing means makes the newly allocated storage device a part of the device whose expansion was requested.
  9. A storage device allocation method in a computer system having a first computer, a storage device subsystem having a storage device that holds data accessed from the first computer, and a second computer that manages a state of assignment of the storage device of the storage device subsystem to the first computer, the method comprising:
    Requesting, from the first computer, the second computer to allocate a new storage device;
    In the second computer, determining a storage device that can be allocated to the first computer based on the request;
    Instructing, from the second computer, the storage subsystem to change its setting so that the determined storage device becomes accessible;
    Transferring information necessary for accessing the determined storage device from the second computer to the first computer; and
    And a step of changing a setting of the first computer so that the storage device determined based on the information necessary for the access can be used in the first computer.
  10.   The storage device allocation method according to claim 9, wherein the step of requesting the allocation includes a step of transmitting, to the second computer, a request including information indicating conditions required of the new storage device.
  11.   The storage device allocation method according to claim 10, wherein the required conditions include information designating at least one of performance and reliability of the new storage device.
  12.   The storage device allocation method according to claim 11, wherein the determining step includes a step of referring to device management information for managing the storage devices in the storage subsystem and determining, as the assignable storage device, a storage device that satisfies the conditions and is in an offline state.
  13.   The storage device allocation method according to claim 9, further comprising the step of, in the first computer, combining the determined storage device with an existing storage device constituting a logical storage used in the first computer, thereby expanding the capacity of the logical storage.
  14. In a computer system having a plurality of computers, a file server to which the plurality of computers are connected via a network, and a storage device subsystem connected to the file server and including a plurality of storage devices,
    The plurality of computers have request means for receiving a request for a new file area from a user or an application program, and requesting the file server to allocate a new file area,
    The file server comprises: means for determining, in response to a request from the request means and by referring to device management information relating to the storage devices and host management information indicating a state of allocation of the storage devices to the computers, a storage device having a storage area in which the file area can be configured; changing means for changing the setting of the storage subsystem so that the storage device determined by the determining means can be accessed from the computer; setting means for changing the settings of the computer and the file server so that the determined storage device can be used; and file management means for creating the file area on the determined storage device.
  15. The computer system according to claim 14, wherein the file management means includes means for transmitting, to the computer that made the request, information used to access the created file area, and the computer that made the request has an application program that accesses the created file area using the information.
JP2000238865A 1999-08-27 2000-08-02 Computer system and device allocation method Expired - Fee Related JP3843713B2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP24102499 1999-08-27
JP11-241024 1999-08-27
JP2000238865A JP3843713B2 (en) 1999-08-27 2000-08-02 Computer system and device allocation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2000238865A JP3843713B2 (en) 1999-08-27 2000-08-02 Computer system and device allocation method

Publications (2)

Publication Number Publication Date
JP2001142648A JP2001142648A (en) 2001-05-25
JP3843713B2 true JP3843713B2 (en) 2006-11-08

Family

ID=26535038

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2000238865A Expired - Fee Related JP3843713B2 (en) 1999-08-27 2000-08-02 Computer system and device allocation method

Country Status (1)

Country Link
JP (1) JP3843713B2 (en)

Families Citing this family (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4501548B2 (en) * 1999-08-27 2010-07-14 株式会社日立製作所 Computer system and device allocation method
US7062591B2 (en) 2001-09-28 2006-06-13 Dot Hill Systems Corp. Controller data sharing using a modular DMA architecture
US7146448B2 (en) 2001-09-28 2006-12-05 Dot Hill Systems Corporation Apparatus and method for adopting an orphan I/O port in a redundant storage controller
US7437493B2 (en) * 2001-09-28 2008-10-14 Dot Hill Systems Corp. Modular architecture for a network storage controller
US7340555B2 (en) 2001-09-28 2008-03-04 Dot Hill Systems Corporation RAID system for performing efficient mirrored posted-write operations
US7536495B2 (en) 2001-09-28 2009-05-19 Dot Hill Systems Corporation Certified memory-to-memory data transfer between active-active raid controllers
US7380115B2 (en) 2001-11-09 2008-05-27 Dot Hill Systems Corp. Transferring data using direct memory access
US8788611B2 (en) 2001-12-28 2014-07-22 Hewlett-Packard Development Company, L.P. Method for using partitioning to provide capacity on demand in data libraries
US7281044B2 (en) 2002-01-10 2007-10-09 Hitachi, Ltd. SAN infrastructure on demand service system
JP4061960B2 (en) 2002-04-26 2008-03-19 株式会社日立製作所 Computer system
JP2003316713A (en) 2002-04-26 2003-11-07 Hitachi Ltd Storage device system
US6925541B2 (en) * 2002-06-12 2005-08-02 Hitachi, Ltd. Method and apparatus for managing replication volumes
JP4175083B2 (en) 2002-10-29 2008-11-05 株式会社日立製作所 Storage device management computer and program
US8095704B2 (en) * 2003-01-13 2012-01-10 Sierra Logic Integrated-circuit implementation of a storage-shelf router and a path controller card for combined use in high-availability mass-storage-device shelves that may be incorporated within disk arrays
JP4345313B2 (en) 2003-01-24 2009-10-14 株式会社日立製作所 Operation management method of storage system based on policy
US7143227B2 (en) 2003-02-18 2006-11-28 Dot Hill Systems Corporation Broadcast bridge apparatus for transferring data to redundant memory subsystems in a storage controller
JP2004280690A (en) 2003-03-18 2004-10-07 Hitachi Ltd Information processing system, and system setting method
JP4294353B2 (en) 2003-03-28 2009-07-08 株式会社日立製作所 Storage system failure management method and apparatus having job management function
JP2004302751A (en) 2003-03-31 2004-10-28 Hitachi Ltd Method for managing performance of computer system and computer system managing performance of storage device
JP4462852B2 (en) 2003-06-23 2010-05-12 株式会社日立製作所 Storage system and storage system connection method
JP4463042B2 (en) 2003-12-26 2010-05-12 株式会社日立製作所 Storage system having volume dynamic allocation function
JP2005215943A (en) * 2004-01-29 2005-08-11 Hitachi Ltd Connection control system for disk device
US7137031B2 (en) * 2004-02-25 2006-11-14 Hitachi, Ltd. Logical unit security for clustered storage area networks
JP4653965B2 (en) * 2004-04-08 2011-03-16 株式会社日立製作所 I / O interface module management method
JP4566668B2 (en) 2004-09-21 2010-10-20 株式会社日立製作所 Encryption / decryption management method in computer system having storage hierarchy
JP2006134021A (en) * 2004-11-05 2006-05-25 Hitachi Ltd Storage system and configuration management method therefor
JP4451293B2 (en) 2004-12-10 2010-04-14 株式会社日立製作所 Network storage system of cluster configuration sharing name space and control method thereof
JP4324088B2 (en) 2004-12-17 2009-09-02 富士通株式会社 Data replication control device
US7315911B2 (en) 2005-01-20 2008-01-01 Dot Hill Systems Corporation Method for efficient inter-processor communication in an active-active RAID system using PCI-express links
US7543096B2 (en) 2005-01-20 2009-06-02 Dot Hill Systems Corporation Safe message transfers on PCI-Express link from RAID controller to receiver-programmable window of partner RAID controller CPU memory
JP5031195B2 (en) * 2005-03-17 2012-09-19 株式会社日立製作所 Storage management software and grouping method
JP4987307B2 (en) * 2005-03-25 2012-07-25 株式会社日立製作所 Storage system
JP4671738B2 (en) 2005-04-01 2011-04-20 株式会社日立製作所 Storage system and storage area allocation method
JP2006285808A (en) * 2005-04-04 2006-10-19 Hitachi Ltd Storage system
JP4681337B2 (en) * 2005-04-06 2011-05-11 株式会社日立製作所 Fiber channel switch device, information processing system, and login processing method
JP4716838B2 (en) * 2005-10-06 2011-07-06 株式会社日立製作所 Computer system, management computer, and volume allocation change method for management computer
JP4885575B2 (en) * 2006-03-08 2012-02-29 株式会社日立製作所 Storage area allocation optimization method and management computer for realizing the method
US7536508B2 (en) 2006-06-30 2009-05-19 Dot Hill Systems Corporation System and method for sharing SATA drives in active-active RAID controller system
US7681089B2 (en) 2007-02-20 2010-03-16 Dot Hill Systems Corporation Redundant storage controller system with enhanced failure analysis capability
WO2014147658A1 (en) * 2013-03-18 2014-09-25 Hitachi, Ltd. Compound storage system and storage control method

Also Published As

Publication number Publication date
JP2001142648A (en) 2001-05-25

Similar Documents

Publication Publication Date Title
EP0869438B1 (en) Heterogeneous computer system, heterogeneous input/output system and data back-up method for the systems
US8510508B2 (en) Storage subsystem and storage system architecture performing storage virtualization and method thereof
US6775702B2 (en) Computer system including a device with a plurality of identifiers
US5819310A (en) Method and apparatus for reading data from mirrored logical volumes on physical disk drives
US6314503B1 (en) Method and apparatus for managing the placement of data in a storage system to achieve increased system performance
US8195865B2 (en) Computer system having an expansion device for virtualizing a migration source logical unit
US6883073B2 (en) Virtualized volume snapshot formation method
US6732104B1 (en) Uniform routing of storage access requests through redundant array controllers
US6810462B2 (en) Storage system and method using interface control devices of different types
EP1776639B1 (en) Disk mirror architecture for database appliance with locally balanced regeneration
US7007147B2 (en) Method and apparatus for data relocation between storage subsystems
US8516191B2 (en) Storage system and method of managing a storage system using a management apparatus
US6889309B1 (en) Method and apparatus for implementing an enterprise virtual storage system
JP3944449B2 (en) Computer system, magnetic disk device, and disk cache control method
US7216148B2 (en) Storage system having a plurality of controllers
US7536491B2 (en) System, method and apparatus for multiple-protocol-accessible OSD storage subsystem
CA2405405C (en) Storage virtualization in a storage area network
JP4307964B2 (en) Access restriction information setting method and apparatus
US7536527B2 (en) Data-migration method
US6832289B2 (en) System and method for migrating data
US6457139B1 (en) Method and apparatus for providing a host computer with information relating to the mapping of logical volumes within an intelligent storage system
US7681002B2 (en) Storage controller and storage control method
US7082497B2 (en) System and method for managing a moveable media library with library partitions
JP2005267327A (en) Storage system
US7089448B2 (en) Disk mirror architecture for database appliance

Legal Events

Date Code Title Description
A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20060202

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20060207

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20060410

RD01 Notification of change of attorney

Free format text: JAPANESE INTERMEDIATE CODE: A7421

Effective date: 20060410

RD01 Notification of change of attorney

Free format text: JAPANESE INTERMEDIATE CODE: A7421

Effective date: 20060418

A02 Decision of refusal

Free format text: JAPANESE INTERMEDIATE CODE: A02

Effective date: 20060509

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20060607

A911 Transfer of reconsideration by examiner before appeal (zenchi)

Free format text: JAPANESE INTERMEDIATE CODE: A911

Effective date: 20060713

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20060725

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20060807

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20100825

Year of fee payment: 4

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20110825

Year of fee payment: 5

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20120825

Year of fee payment: 6

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20130825

Year of fee payment: 7

LAPS Cancellation because of no payment of annual fees