US20040153728A1 - Storage system, management server, and method of managing application thereof - Google Patents
- Publication number
- US20040153728A1 (application US10/444,650)
- Authority
- US
- United States
- Prior art keywords
- application
- error
- management server
- physical disk
- disk
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/0703—Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
- G06F11/0706—Error or fault processing not based on redundancy, the processing taking place on a specific hardware platform or in a specific software environment
- G06F11/0727—Error or fault processing not based on redundancy, the processing taking place in a storage system, e.g. in a DASD or network based storage system
- G06F11/0766—Error or fault reporting or storing
Definitions
- the present invention relates to a storage system, a management server, and a method of managing an application thereof.
- a host uses a plurality of disk array devices as external storage devices upon operation of an application.
- a disk array device includes a plurality of disks.
- the host is connected to the disk array devices through a storage area network (SAN), whereby data are distributed to a logical volume (LU) composed of the plurality of disks to be stored.
- Japanese Patent Application Laid-open Publication No. 2000-305720 discloses a technology for predicting a failure of a disk.
- a client on a network is monitored by using a WWW browser in order to enhance fault tolerance of the disk array device.
- Japanese Patent Application Laid-open Publication No. Hei11-24850 discloses a technology for recovering data contained in a failed drive such that the data belonging to the volume with the highest failure frequency are recovered first, with the remaining recovery proceeding in order of failure frequency.
- Japanese Patent Application Laid-open Publication No. 2000-20245 discloses a technology for automatically configuring a disk drive connected to a controller.
- a disk array device utilizes a “Redundant Array of Inexpensive Disks” (RAID) technology to prevent data loss or system down of a host. Parity and error correction data are added to data to be written, and the data are distributed to a plurality of disks for storage. In this way, it is possible to restore correct data using the parity even if one of the disks is out of order.
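The parity mechanism described above can be sketched in a few lines. The following is a minimal illustration of XOR parity as used in RAID-5-style striping, not the patent's implementation; all names and values are hypothetical.

```python
# Illustrative sketch of XOR parity for RAID-5-style striping.
# A lost stripe is regenerated by XOR-ing the surviving stripes
# with the parity block.

def xor_blocks(*blocks: bytes) -> bytes:
    """XOR equal-length byte blocks together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

# Data striped across three drives, parity written to a fourth.
d1, d2, d3 = b"\x01\x02", b"\x10\x20", b"\xAA\x55"
p1 = xor_blocks(d1, d2, d3)

# If the drive holding d2 fails, its stripe is regenerated
# from the surviving stripes and the parity.
recovered = xor_blocks(d1, d3, p1)
assert recovered == d2
```

This regeneration is exactly the "data restoring processing" whose execution, as noted below, degrades application performance relative to normal operation.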
- execution of data restoring processing when a disk is out of order incurs performance degradation of operation of an application as compared to a normal condition. Moreover, if a spare disk exists when the drive is blocked, the blocked disk will be restored by using the spare disk. Execution of such restoring processing also incurs performance degradation of an application as compared to a normal condition.
- the disk array device includes a physical disk error detecting unit for detecting an error in a physical disk.
- the management server stores a corresponding relationship among the application, a logical volume used by the application, and the physical disk corresponding to the logical volume.
- the management server includes an application detecting unit for detecting the application using the logical volume corresponding to the physical disk with the error according to the corresponding relationship when the physical disk error detecting unit of the disk array device detects the error in the physical disk.
- management server may be disposed inside the disk array device. Moreover, it is also possible to provide a configuration where a management server unit having functions of the management server is incorporated into the disk array device. In other words, the term “management server” also includes the concept of the “management server unit”.
- the present invention can suppress performance degradation of an application.
- FIG. 1 is a block diagram showing an entire configuration of a storage system, which is one of the embodiments of the present invention
- FIG. 2 is a conceptual diagram showing an aspect of distributed data storage according to one of the embodiments of the present invention.
- FIG. 3 is a schematic diagram showing a screen for monitoring performance of a disk array device according to one of the embodiments of the present invention
- FIG. 4 is a block diagram showing functions of a management server according to the embodiments of the present invention.
- FIG. 5 is a schematic diagram showing a screen for monitoring performances of the disk array device according to the embodiments of the present invention.
- FIG. 6 is a flowchart showing processing upon occurrence of a disk failure according to the embodiments of the present invention.
- FIG. 7 is a schematic diagram showing an aspect of switching a drive to which an application gains access, from a main drive to a sub drive.
- a management server may include application notifying unit that notifies a user terminal of information concerning the application detected by the application detecting unit.
- the corresponding relationship stored by the management server contains settings on priority of execution of a job of an application.
- the management server may include application processing capability allocating unit for allocating a processing capability of the application according to the priority of execution of the job of the application which uses the physical disk with the error when the physical disk error detecting unit of the disk array device detects the error in the physical disk.
- the management server may give higher priority to execution of the job of the application using the physical disk with the error when the physical disk error detecting unit of the disk array device detects the error in the physical disk.
- the application processing capability allocating unit of the management server may allocate the processing capability of the application according to operation input of a user.
- the management server may include application information displaying unit for displaying, on a screen of a display unit, application information concerning at least the application detected by the application detecting unit, the disk array device used by the application, and the processing capability allocated to the application.
- the disk array device may be capable of executing mirroring, and the management server may include mirror disk switching unit for setting a mirror disk for the application to use, when the physical disk error detecting unit of the disk array device detects an error in the physical disk.
- the expression “capable of executing mirroring” refers to a state where the disk array device supports “RAID-1” (“mirror” or “mirroring”) as a RAID level.
- the disk array device may include the physical disk error detecting unit for detecting an error in the physical disk
- the management server may include internal copy switching unit for executing internal copy processing by allocating a new logical volume to an unused disk area (an empty disk) in the disk array device, when the physical disk error detecting unit of the disk array device detects an error in the physical disk, and thereby setting the new logical volume for the application to use.
- FIG. 1 shows an entire configuration of a storage system according to the present embodiment.
- a plurality of hosts 10 are connected to a plurality of disk array devices (also referred to as “disk subsystems”) 20 through a storage area network (SAN) 40 .
- An application is installed in each of the hosts (computers) 10 by a user, and the hosts 10 share the plurality of disk array devices 20 as external storage devices of data required for running this application.
- the application itself may be installed in the disk array devices 20; the scope of the present invention is not limited by the directory in which the application is installed.
- a management server 30 can be connected to the plurality of hosts 10 and to the plurality of disk array devices 20 through a local area network (LAN) 50 separate from the SAN 40. Alternatively, the management server 30 may be directly connected to the SAN 40.
- a host agent 11 is installed in each host 10 .
- the host agent 11 is activated when a request for acquiring system configuration information to the respective disk array devices 20 is made due to operation input to the management server 30 by a system administrator or due to an event such as a failure of the disk array device 20 .
- the host agent 11 issues a command to a logical volume of the disk array device 20 , which is accessible by the host 10 of its own, to receive an access path.
- the host agent 11 utilizes application interfaces of the operating system (OS), a database, and upper middleware, thereby acquiring the name of a file stored in the logical volume, the capacity of the file, and the location in the file system to which the file belongs.
- Each of the disk array devices 20 includes an external connection interface 21 .
- Each of the disk array devices 20 notifies the management server 30 of the configuration information, the performance and the containing data of its own through the external connection interface 21 .
- the disk array device 20 detects an error such as a failure of a physical disk of its own and notifies the management server 30 .
- the management server 30 may gain access to the disk array device 20 and collect the configuration information, the performance, the data, and information concerning the failure thereof.
- similar information to that described above may be sent to or collected by the management server 30 directly, by using a SAN interface of the management server 30 instead of the external connection interface 21.
- the disk array device 20 incorporates a control board (a controller) therein.
- the disk array device 20 is controlled by a microprogram (physical disk error detecting unit) which runs on a processor (CPU, physical disk error detecting unit) implemented in this controller.
- with this microprogram in operation, it is possible to detect errors including failures of the physical disk of the disk array device 20, such as an I/O error. In other words, it is possible to predict performance degradation of the application by detecting failures and errors in the physical disk.
- the method of detecting such errors will be described specifically as follows.
- data control of the disk array device 20 is performed by the control board (controller) of the disk array device.
- the microprogram runs on the CPU implemented on this controller and controls the device. When there is a request from an upper layer (such as a host) for reading the data, the microprogram performs the control so as to read the data.
- data reading is performed via a cache, and in the case where the data do not exist in the cache, the data are read from the drive and stored in the cache or directly sent to the upper layer.
- the data to be read are normally stored in the drives in a distributed manner.
- the RAID adopts the mode of regenerating data from parity even if part of the data cannot be read.
- the control of reading the distributed data is performed by the microprogram. If a part of the data cannot be read, the control of reading the parity for restoring the data is performed by the microprogram. Therefore, the processing for performing data restoration from the parity is executed by the microprogram. Accordingly, the frequency of the restoration can be perceived by the microprogram.
- the microprogram can also perceive the frequency of failures when accessing each drive.
- an error is determined by comparing these frequencies against a given threshold, such as a number of simple cumulative failures or a number of cumulative failures.
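The per-drive counting described above can be sketched as follows. This is a hypothetical illustration of threshold-based error detection, not the microprogram itself; the class name and threshold value are invented for the example.

```python
# Hypothetical sketch of per-drive failure counting with a cumulative
# threshold, as a microprogram-style error detector might keep it.

FAILURE_THRESHOLD = 3  # illustrative threshold value

class DriveErrorDetector:
    def __init__(self, threshold: int = FAILURE_THRESHOLD):
        self.threshold = threshold
        self.failure_counts: dict[str, int] = {}

    def record_failure(self, drive: str) -> bool:
        """Count one read/restore failure for a drive; return True once
        the cumulative count reaches the threshold (error detected)."""
        self.failure_counts[drive] = self.failure_counts.get(drive, 0) + 1
        return self.failure_counts[drive] >= self.threshold

detector = DriveErrorDetector()
detector.record_failure("HDD-A")
detector.record_failure("HDD-A")
assert detector.record_failure("HDD-A") is True   # threshold reached
assert detector.record_failure("HDD-B") is False  # below threshold
```

Once the threshold is crossed, the microprogram would notify the management server through the external connection interface, as described earlier.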
- FIG. 2 is a conceptual diagram showing an aspect of distributed data storage.
- the disk array device 20 includes a plurality of drives.
- five disks marked by A to E are installed.
- a logical unit (LU) is defined in these five disk drives.
- This logical unit is regarded as one logical volume from the outside.
- a logical volume (LU0) includes four drives marked by A, B, C and D.
- in RAID-5, the data are distributed and written across the drives (physical disks) HDD-A to HDD-E as D1, D2 and D3, and parity P1 is also written. The drives HDD-A to HDD-E, which store the distributed data D1, D2 and D3 and the parity P1, collectively construct the logical volume (logical unit) LU0 or a logical volume LU1.
- the management server 30 stores corresponding relationships among applications AP-A to AP-E run on the host 10 , the logical volumes LU0 and LU1 used by the applications, and the physical disks HDD-A to HDD-E corresponding to the logical volumes LU0 and LU1.
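The corresponding relationship stored by the management server, and the lookup performed by the application detecting unit, can be sketched as simple mappings. The dictionary contents below are illustrative stand-ins, not the actual configuration of FIG. 2.

```python
# Hypothetical sketch of the corresponding relationship the management
# server stores (application -> logical volume -> physical disks), and
# of looking up which applications a failed physical disk affects.

app_to_lu = {"AP-A": "LU0", "AP-B": "LU0", "AP-C": "LU1"}
lu_to_disks = {"LU0": ["HDD-A", "HDD-B", "HDD-C", "HDD-D"],
               "LU1": ["HDD-A", "HDD-B", "HDD-C", "HDD-D", "HDD-E"]}

def affected_applications(failed_disk: str) -> list[str]:
    """Return applications whose logical volume spans the failed disk."""
    affected_lus = {lu for lu, disks in lu_to_disks.items()
                    if failed_disk in disks}
    return sorted(app for app, lu in app_to_lu.items()
                  if lu in affected_lus)

assert affected_applications("HDD-E") == ["AP-C"]
assert affected_applications("HDD-A") == ["AP-A", "AP-B", "AP-C"]
```

This is the essence of the application detecting unit: a physical-disk error is traced back through the logical volume to every application at risk of performance degradation.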
- the corresponding relationships as shown in FIG. 3 may be referenced not only by the system administrator who uses the management server 30 , but also by a user who uses the applications through the hosts, with a web screen or the like.
- the management server 30 has a computer system (application detecting unit, mirror disk switching unit, internal copy switching unit, application notifying unit, application processing capability allocating unit, and application information displaying unit).
- FIG. 4 is a block diagram showing the functions of this management server 30 .
- the management server 30 includes a user management layer 31 , an object management layer 32 , an agent management layer 33 , and an interface layer 34 .
- the object management layer 32 includes a database for accumulating configuration information concerning the respective disk array devices 20 . As described above, the corresponding relationships among the applications, the logical volumes and the physical disks are stored in this database.
- the interface layer 34 includes a subsystem interface 341 and a host interface 342 .
- a plurality of user terminals A to C are connected to the management server 30 through the user management layer 31 .
- the plurality of disk array devices 20 are connected to the management server 30 through the subsystem interface 341 .
- the hosts 10 are connected to the management server 30 through the host interface 342 .
- the user management layer 31 manages the user terminals A to C.
- the system administrator is also included as one of the users.
- the object management layer 32 acquires information concerning the configurations, performances and errors such as failures of the respective disk array devices 20 as well as information concerning other events, and stores the information in the database.
- the information to be stored in this database includes: settings concerning internal access paths and the logical volumes of the respective disk array devices 20; capacities, access authorization, and data transfer of the respective logical volumes; settings concerning data copy among the respective disk array devices 20; settings concerning the performances and control of the respective disk array devices 20; and settings of methods of acquiring and maintaining, by user operations, the performance data, abnormalities such as failures, and the configuration information of events in the respective disk array devices 20.
- Although only one management server 30 is shown in FIG. 1, it is possible to use a plurality of management servers 30. Moreover, the management server 30 may be installed inside the disk array device 20. Furthermore, it is also possible to adopt a configuration where a management server unit having the functions of the management server 30 is incorporated into the disk array device 20. In other words, the term “management server” may also be interpreted as the “management server unit”. Meanwhile, the management server unit may be located in a position physically distant from the disk array device 20.
- the management server 30 makes periodic inquiries to the respective disk array devices 20 and acquires the information concerning the events of abnormality such as failures. Alternatively, the information concerning the events such as failures or maintenance detected by the respective disk array devices 20 is notified to the agent management layer 33 of the management server 30 through the subsystem interface 341 .
- Upon notification of the events such as failures, the agent management layer 33 notifies the object management layer 32 by using an interrupt function.
- the management server 30 recognizes a status change of the disk array device 20 by the object management layer 32 that has received such notification. After recognizing this event, the management server 30 acquires the configuration information of the disk array device 20 and updates the information in the database.
- a screen for monitoring the performances of the disk array devices is displayed on a display unit of the management server 30 by a graphical user interface (GUI).
- the information displayed on this screen is based on the corresponding relationships in FIG. 3 as previously described.
- description will be firstly given regarding an A system as the “application”.
- the A system uses a disk array device “D-B”.
- a “drive status” of this disk array device D-B is indicated as “OK”; in other words, there is no abnormality such as a breakdown or a failure therein.
- Priority of execution of a job (“job priority”) of this A system is set to “B” which is lower than “A”, and allocation of a processing capability is set to a level “10”.
- a field for “other status” is blank since the “drive status” is “OK” and indicates that no special measures are taken.
- the B system uses a disk array device “D-B”.
- the “drive status” of the disk array device D-B is indicated as “failure”.
- Priority of execution of a job (“job priority”) of this B system is set to “A” which is higher than “B”, thus the priority is set higher than the above-described A system.
- allocation of a processing capability regarding this B system is set to a level “10”.
- a field for “other status” is indicated as “mirror in use” since “drive status” is “failure”, and shows that data on a mirror disk are used as a primary (main) I/O.
- the occurrence of the failure and the state of switching the access drive are notified to the system administrator (S 40 ).
- For such notification, as shown in FIG. 5, an appropriate method such as displaying on the display unit of the management server is used.
- the access drive is set back to the original drive, which is the normal drive (S 50 to S 60 ).
- countermeasure processing for load distribution of the application having the risk of the performance degradation is executed according to the job priority. For example, as shown in FIG. 5, if there is a risk of performance degradation of the B system as the application having the higher priority “A” and the mirror drive is not used under that status, the allocation of the processing capability is increased from “10” to “15”, so that the performance of the application can be maintained without being influenced by the drive failure.
- an increase in the allocation of the processing capability is equivalent to an increase in an allocation rate of the CPU resources.
- Such allocation of the processing capabilities and settings of the job priority can be performed through the screen displayed on the display unit in FIG. 5 by the GUI. Thereafter, when the failed drive is recovered or restored by a replacement, the allocation of the processing capabilities and the job priority of the applications are set back to the original states (S 170 to S 180 ).
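The priority-based countermeasure above, boosting the allocation of a high-priority application from "10" to "15" and restoring it after the drive is replaced, can be sketched as follows. The function and variable names, and the boost amount, are illustrative assumptions, not the patent's actual control logic.

```python
# Hypothetical sketch of priority-based processing-capability
# allocation: when a drive failure puts a high-priority ("A")
# application at risk, its allocation level is raised, then the saved
# levels are restored after the failed drive is recovered.

allocations = {"A-system": 10, "B-system": 10}   # level per application
priorities  = {"A-system": "B", "B-system": "A"} # "A" is highest
BOOST = 5  # illustrative increase, e.g. 10 -> 15 as in FIG. 5

def on_drive_failure(affected_apps, allocations, priorities):
    """Boost allocation for affected applications with priority 'A';
    return the pre-failure levels for later restoration."""
    saved = dict(allocations)
    for app in affected_apps:
        if priorities.get(app) == "A":
            allocations[app] += BOOST
    return saved

saved = on_drive_failure(["B-system"], allocations, priorities)
assert allocations["B-system"] == 15  # high priority: boosted
assert allocations["A-system"] == 10  # lower priority: unchanged

allocations.update(saved)  # failed drive replaced: restore originals
assert allocations["B-system"] == 10
```

An increased allocation level here stands in for a larger share of CPU resources, matching the description above.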
- the management server stores corresponding relationships among the applications, the logical volumes used by the applications, and the physical disks corresponding to the logical volumes. Thus, it is possible to detect the application which may be influenced by the error in the physical disk and to predict the performance degradation thereof. Based on this prediction, it is possible to suppress the performance degradation or an abnormal termination.
- the management server notifies the user terminal of information concerning the application which may be influenced by the error in the physical disk. In this way, it is possible to advise the user on the risk of the performance degradation of the application.
- the priority of execution of the job of the application is set in the corresponding relationship stored by the management server.
- the management server allocates the processing capability of the application according to the priority of execution of the job of the application which uses the physical disk with the error. In this way, it is possible to control the load of the application.
- the management server gives higher priority to execution of the job of the application which uses the physical disk with the error. In this way, it is possible to suppress the performance degradation of the application.
- the management server allocates the processing capability of the application according to operation input of the user.
- the user can control the load of the application which may be influenced by the error in the physical disk.
- the management server displays the information concerning the applications, the disk array devices used by the applications, and the processing capabilities allocated to the applications on the screen of the display unit. In this way, the user can monitor the logical volume and the application which may be influenced by the error in the physical disk, and the processing capability allocated to the application on the display unit.
- Upon detection of the error in the physical disk, the management server sets the corresponding mirror disk for the application to use. In this way, it is possible to prevent the performance degradation of the application beforehand.
- Upon detection of the error in the physical disk, the management server allocates a new logical volume to an unused disk area in the disk array device and executes internal copy processing, whereby the management server sets the new logical volume for the application to use. In this way, it is possible to prevent the performance degradation of the application beforehand.
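The two switching strategies above, using a mirror disk when one exists and otherwise allocating a new logical volume on an empty area and copying internally, can be sketched together. This is an illustrative decision sketch under assumed names; the actual internal copy mechanics are elided as a comment.

```python
# Hypothetical sketch of switching an application's access path when an
# error is detected: prefer an existing mirror (RAID-1) volume; failing
# that, allocate a new logical volume on an empty disk area and
# internally copy the data to it.

def switch_volume(app_volumes, app, mirror_of, empty_volumes):
    """Point `app` at a healthy volume; report which path was taken."""
    current = app_volumes[app]
    if current in mirror_of:                 # mirror disk available
        app_volumes[app] = mirror_of[current]
        return "mirror"
    if empty_volumes:                        # internal copy to empty area
        new_lu = empty_volumes.pop()
        # ...internal copy of surviving data into new_lu runs here...
        app_volumes[app] = new_lu
        return "internal-copy"
    return "none"

app_volumes = {"B-system": "LU0-main"}
assert switch_volume(app_volumes, "B-system",
                     {"LU0-main": "LU0-mirror"}, []) == "mirror"
assert app_volumes["B-system"] == "LU0-mirror"
```

Either path leaves the application reading from a healthy volume, which is why both are described as preventing performance degradation beforehand.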
Abstract
In a storage system having a plurality of disk array devices connected through a network to a host for running an application, and a management server for monitoring the disk array devices, the disk array device includes a physical disk error detecting unit for detecting an error in a physical disk. Meanwhile, the management server stores a corresponding relationship among the application, a logical volume used by the application, and the physical disk corresponding to the logical volume. Moreover, the management server includes an application detecting unit for detecting the application using the logical volume corresponding to the physical disk with the error according to the corresponding relationship when the physical disk error detecting unit of the disk array device detects the error in the physical disk.
Description
- The present application claims priority upon Japanese Patent Application No. 2002-150145 filed on May 24, 2002, which is herein incorporated by reference.
- 1. Field of the Invention
- The present invention relates to a storage system, a management server, and a method of managing an application thereof.
- 2. Description of the Related Art
- A host uses a plurality of disk array devices as external storage devices upon operation of an application. A disk array device includes a plurality of disks. To be more precise, the host is connected to the disk array devices through a storage area network (SAN), whereby data are distributed to a logical volume (LU) composed of the plurality of disks to be stored.
- As a technology for enhancing fault tolerance of such a disk array device, for example, Japanese Patent Application Laid-open Publication No. 2000-305720 discloses a technology for predicting a failure of a disk. Meanwhile, according to Japanese Patent Application Laid-open Publication No. 2001-167035, a client on a network is monitored by using a WWW browser in order to enhance fault tolerance of the disk array device. Moreover, Japanese Patent Application Laid-open Publication No. Hei11-24850 discloses a technology for recovering data contained in a failed drive such that the data belonging to the volume with the highest failure frequency are recovered first, with the remaining recovery proceeding in order of failure frequency. Furthermore, Japanese Patent Application Laid-open Publication No. 2000-20245 discloses a technology for automatically configuring a disk drive connected to a controller.
- In general, a disk array device utilizes a “Redundant Array of Inexpensive Disks” (RAID) technology to prevent data loss or system down of a host. Parity and error correction data are added to data to be written, and the data are distributed to a plurality of disks for storage. In this way, it is possible to restore correct data using the parity even if one of the disks is out of order.
- However, execution of data restoring processing, when a disk is out of order, incurs performance degradation of operation of an application as compared to a normal condition. Moreover, if a spare disk exists when the drive is blocked, the blocked disk will be restored by using the spare disk. Execution of such restoring processing also incurs performance degradation of an application as compared to a normal condition.
- Nevertheless, such restoring processing has been executed without notifying a user. Accordingly, the user would not recognize that the cause of the performance degradation of the application is due to the data restoring processing and has occasionally sought other causes.
- In addition, when the drive is blocked, a system administrator receiving such a warning has had difficulty predicting its influence on the performance of the application.
- In a storage system having a plurality of disk array devices connected to a host, which runs an application, through a network, and a management server for monitoring each drive installed in the disk array devices, the disk array device includes a physical disk error detecting unit for detecting an error in a physical disk. The management server stores a corresponding relationship among the application, a logical volume used by the application, and the physical disk corresponding to the logical volume. The management server includes an application detecting unit for detecting the application using the logical volume corresponding to the physical disk with the error according to the corresponding relationship when the physical disk error detecting unit of the disk array device detects the error in the physical disk.
- Note that the management server may be disposed inside the disk array device. Moreover, it is also possible to provide a configuration where a management server unit having functions of the management server is incorporated into the disk array device. In other words, the term “management server” also includes the concept of the “management server unit”.
- The present invention can suppress performance degradation of an application.
- Features and objects of the present invention other than the above will become clear by reading the description of the present specification with reference to the accompanying drawings.
- For a more complete understanding of the present invention and the advantages thereof, reference is now made to the following description taken in conjunction with the accompanying drawings wherein:
- FIG. 1 is a block diagram showing an entire configuration of a storage system, which is one of the embodiments of the present invention;
- FIG. 2 is a conceptual diagram showing an aspect of distributed data storage according to one of the embodiments of the present invention;
- FIG. 3 is a schematic diagram showing a screen for monitoring performance of a disk array device according to one of the embodiments of the present invention;
- FIG. 4 is a block diagram showing functions of a management server according to the embodiments of the present invention;
- FIG. 5 is a schematic diagram showing a screen for monitoring performances of the disk array device according to the embodiments of the present invention;
- FIG. 6 is a flowchart showing processing upon occurrence of a disk failure according to the embodiments of the present invention; and
- FIG. 7 is a schematic diagram showing an aspect of switching a drive to which an application gains access, from a main drive to a sub drive.
- At least the following matters will be made clear by the explanation in the present specification and the description of the accompanying drawings.
- A management server may include application notifying unit that notifies a user terminal of information concerning the application detected by the application detecting unit.
- The corresponding relationship stored by the management server contains settings on priority of execution of a job of an application. The management server may include application processing capability allocating unit for allocating a processing capability of the application according to the priority of execution of the job of the application which uses the physical disk with the error when the physical disk error detecting unit of the disk array device detects the error in the physical disk.
- The management server may give higher priority to execution of the job of the application using the physical disk with the error when the physical disk error detecting unit of the disk array device detects the error in the physical disk.
- The application processing capability allocating unit of the management server may allocate the processing capability of the application according to operation input of a user.
- The management server may include an application information displaying unit for displaying, on a screen of a display unit, application information concerning at least the application detected by the application detecting unit, the disk array device used by the application, and the processing capability allocated to the application.
- The disk array device may be capable of executing mirroring, and the management server may include a mirror disk switching unit for setting a mirror disk for the application to use, when the physical disk error detecting unit of the disk array device detects an error in the physical disk.
- Herein, the expression “capable of executing mirroring” refers to a state where the disk array device supports “RAID-1” (“mirror” or “mirroring”) as a RAID level.
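By way of an illustrative sketch (not part of the original disclosure), the mirror-disk switching described above can be modeled as redirecting the application's access path from a main drive to its RAID-1 sub drive once an error is detected. The `MirrorPair` abstraction and the drive names are assumptions introduced for illustration only.

```python
# Hypothetical sketch of mirror-disk switching under RAID-1: the two
# drives hold identical data, so when an error is detected on the main
# drive the access path is simply redirected to the sub (mirror) drive.

class MirrorPair:
    def __init__(self, main, sub):
        self.main, self.sub = main, sub
        self.failed = set()          # drives with detected errors

    def report_error(self, drive):
        self.failed.add(drive)

    def access_drive(self):
        """Drive the application should use: the sub drive if the main failed."""
        return self.sub if self.main in self.failed else self.main

pair = MirrorPair(main="HDD-main", sub="HDD-sub")
print(pair.access_drive())   # HDD-main
pair.report_error("HDD-main")
print(pair.access_drive())   # HDD-sub
```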
- The disk array device may include the physical disk error detecting unit for detecting an error in the physical disk, and the management server may include an internal copy switching unit for executing internal copy processing by allocating a new logical volume to an unused disk area (an empty disk) in the disk array device when the physical disk error detecting unit of the disk array device detects an error in the physical disk, thereby setting the new logical volume for the application to use.
- FIG. 1 shows an entire configuration of a storage system according to the present embodiment. A plurality of
hosts 10 are connected to a plurality of disk array devices (also referred to as “disk subsystems”) 20 through a storage area network (SAN) 40. An application is installed in each of the hosts (computers) 10 by a user, and the hosts 10 share the plurality of disk array devices 20 as external storage devices for the data required to run this application. Alternatively, the application itself may be installed in the disk array devices 20; the scope of the present invention shall not be limited by the location in which the application is installed. A management server 30 can be connected to the plurality of hosts 10 and to the plurality of disk array devices 20 through a local area network (LAN) 50 separate from the SAN 40. Alternatively, the management server 30 may be directly connected to the SAN 40. - A
host agent 11 is installed in each host 10. The host agent 11 is activated when a request to acquire system configuration information from the respective disk array devices 20 is made, either by operation input to the management server 30 by a system administrator or by an event such as a failure of a disk array device 20. In order to acquire “host logical configuration information” on the operating system (OS) of the host 10 governing the host agent 11, the host agent 11 issues a command to a logical volume of the disk array device 20 that is accessible by its own host 10, to obtain an access path. - The
host agent 11 utilizes the OS, a database, and an application interface of upper middleware, and thereby acquires the name of a file stored in the logical volume, the capacity of the file, and the location in the file system to which the file belongs. - Each of the
disk array devices 20 includes an external connection interface 21. Each of the disk array devices 20 notifies the management server 30 of its own configuration information, performance, and stored data through the external connection interface 21. Moreover, the disk array device 20 detects an error such as a failure of one of its own physical disks and notifies the management server 30. Alternatively, the management server 30 may access the disk array device 20 and collect the configuration information, the performance, the data, and information concerning the failure. In another embodiment, where the management server 30 is connected only to the SAN 40, similar information is sent or collected directly to the management server 30 through a SAN interface of the management server 30, instead of through the external connection interface 21. - The
disk array device 20 incorporates a control board (a controller) therein. The disk array device 20 is controlled by a microprogram (physical disk error detecting unit) which runs on a processor (CPU, physical disk error detecting unit) implemented in this controller. - With this microprogram in operation, it is possible to detect errors, including failures of the physical disk of the
disk array device 20, such as an I/O error. In other words, it is possible to predict performance degradation of the application by detecting failures and errors in the physical disk. The method of detecting such errors is specifically as follows. Data control of the disk array device 20 is performed by the control board (controller) of the disk array device. The microprogram runs on the CPU implemented on this controller and controls the device. When there is a request from an upper layer (such as a host) for reading data, the microprogram performs the control so as to read the data. Normally, data reading is performed via a cache; in the case where the data do not exist in the cache, the data are read from the drive and either stored in the cache or sent directly to the upper layer. In RAID, the data to be read are normally stored across the drives in a distributed manner. Moreover, RAID adopts a mode of regenerating data from parity even if part of the data cannot be read. In the event of reading the data, the control of reading the distributed data is performed by the microprogram. If a part of the data cannot be read, the control of reading the parity for restoring the data is performed by the microprogram. Accordingly, since the processing for restoring data from the parity is executed by the microprogram, the frequency of the restoration can be perceived by the microprogram. For example, the microprogram can also perceive the frequency of failures when accessing each drive. Thus, it is possible to predict the occurrence of an error such as a failure in a certain drive if the access failures exceed a given threshold (such as a simple cumulative failure count or another cumulative failure measure). Such processing is adopted not only for reading processing but also, similarly, for writing processing. - FIG. 2 is a conceptual diagram showing an aspect of distributed data storage. As shown in FIG.
2, the
disk array device 20 includes a plurality of drives. In the example shown in FIG. 2, five disks marked A to E are installed, and a logical unit (LU) is defined across these five disk drives. This logical unit is regarded as one logical volume from the outside. For example, a logical volume (LU0) includes the four drives marked A, B, C and D. In the disk array device 20, at the RAID level referred to as “RAID-5”, the data are distributed and written across the drives (physical disks) HDD-A to HDD-E as D1, D2 and D3, and parity P1 is also written. In this way, the drives HDD-A to HDD-E, which store the distributed data D1, D2 and D3 and the parity P1, collectively construct the logical volume (logical unit) LU0 or a logical volume LU1. - Meanwhile, as shown in the corresponding table in FIG. 3, regarding the
disk array device 20, the management server 30 stores corresponding relationships among the applications AP-A to AP-E run on the host 10, the logical volumes LU0 and LU1 used by the applications, and the physical disks HDD-A to HDD-E corresponding to the logical volumes LU0 and LU1. - The corresponding relationships shown in FIG. 3 may be referenced not only by the system administrator who uses the
management server 30, but also by a user who uses the applications through the hosts, with a web screen or the like. - The
management server 30 has a computer system (application detecting unit, mirror disk switching unit, internal copy switching unit, application notifying unit, application processing capability allocating unit, and application information displaying unit). FIG. 4 is a block diagram showing the functions of this management server 30. The management server 30 includes a user management layer 31, an object management layer 32, an agent management layer 33, and an interface layer 34. The object management layer 32 includes a database for accumulating configuration information concerning the respective disk array devices 20. As described above, the corresponding relationships among the applications, the logical volumes and the physical disks are stored in this database. The interface layer 34 includes a subsystem interface 341 and a host interface 342. - A plurality of user terminals A to C are connected to the
management server 30 through the user management layer 31. Moreover, the plurality of disk array devices 20 are connected to the management server 30 through the subsystem interface 341. Furthermore, the hosts 10 are connected to the management server 30 through the host interface 342. - The
user management layer 31 manages the user terminals A to C. Herein, the system administrator is also included as one of the users. The object management layer 32 acquires information concerning the configurations, performances and errors such as failures of the respective disk array devices 20, as well as information concerning other events, and stores the information in the database. More precisely, the information to be stored in this database includes: settings concerning internal access paths and the logical volumes of the respective disk array devices 20; the capacities, access authorization, and data transfer of the respective logical volumes; settings concerning data copy among the respective disk array devices 20; settings concerning the performances and control of the respective disk array devices 20; and settings of the methods of acquiring and maintaining, by user operations, the performance data, abnormalities such as failures, and the configuration information of events in the respective disk array devices 20. - Although only one
management server 30 is shown in FIG. 1, it is possible to use a plurality of management servers 30. Moreover, the management server 30 may be installed inside the disk array device 20. Furthermore, it is also possible to adopt a configuration where a management server unit having the functions of the management server 30 is incorporated into the disk array device 20. In other words, the term “management server” may also be interpreted as the “management server unit”. Meanwhile, the management server unit may be located in a position physically distant from the disk array device 20. - The
management server 30 makes periodic inquiries to the respective disk array devices 20 and acquires the information concerning events of abnormality such as failures. Alternatively, the information concerning events such as failures or maintenance detected by the respective disk array devices 20 is notified to the agent management layer 33 of the management server 30 through the subsystem interface 341. - Upon notification of the events such as failures, the
agent management layer 33 notifies the object management layer 32 by using an interrupt function. The management server 30 recognizes a status change of the disk array device 20 by means of the object management layer 32 that has received such notification. After recognizing this event, the management server 30 acquires the configuration information of the disk array device 20 and updates the information in the database. - As shown in the schematic diagram in FIG. 5, a screen for monitoring the performances of the disk array devices is displayed on a display unit of the
management server 30 by a graphical user interface (GUI). The information displayed on this screen is based on the corresponding relationships in FIG. 3 described previously. As shown in FIG. 5, a description will first be given regarding an A system as the “application”. The A system uses a disk array device “D-B”. The “drive status” of this disk array device D-B is indicated as “OK”; in other words, there is no abnormality such as a breakdown or a failure therein. The priority of execution of a job (“job priority”) of this A system is set to “B”, which is lower than “A”, and the allocation of processing capability is set to a level of “10”. The field for “other status” is blank since the “drive status” is “OK”, indicating that no special measures are taken.
- Next, a description will be given regarding a B system as the “application”. The B system uses the disk array device “D-B”. The “drive status” of the disk array device D-B is indicated as “failure”. The priority of execution of a job (“job priority”) of this B system is set to “A”, which is higher than “B”; thus its priority is set higher than that of the above-described A system. Moreover, the allocation of processing capability regarding this B system is set to a level of “10”. The field for “other status” is indicated as “mirror in use” since the “drive status” is “failure”, and shows that data on a mirror disk are used for the primary (main) I/O.
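The corresponding relationships of FIG. 3 that drive this monitoring screen can be sketched as a simple two-level lookup, from a failed physical disk to the logical volumes built on it and then to the applications using those volumes. The table contents below follow the FIG. 2 example (LU0 on drives A to D); LU1 spanning all five drives, and the function and variable names, are assumptions made for illustration.

```python
# Illustrative mapping tables: logical volume -> physical disks,
# and application -> logical volumes (cf. the FIG. 3 corresponding table).
lu_to_disks = {"LU0": {"HDD-A", "HDD-B", "HDD-C", "HDD-D"},
               "LU1": {"HDD-A", "HDD-B", "HDD-C", "HDD-D", "HDD-E"}}
app_to_lus = {"AP-A": {"LU0"}, "AP-B": {"LU0"}, "AP-C": {"LU1"}}

def affected_applications(failed_disk):
    """Applications whose logical volumes include the failed physical disk."""
    bad_lus = {lu for lu, disks in lu_to_disks.items() if failed_disk in disks}
    return sorted(app for app, lus in app_to_lus.items() if lus & bad_lus)

print(affected_applications("HDD-E"))   # only LU1 spans HDD-E -> ['AP-C']
```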
- Next, a description will be given with reference to the flowchart in FIG. 6 regarding the countermeasure processing by the management server in the case where a failure occurs in a disk (a drive or a physical disk). Firstly, when an error in the physical disk, such as an I/O error in the drive, is detected (S10), a judgment is made on whether a mirror drive exists (S20). If the mirror drive exists (S20: YES), the access drive which the application uses is switched from the main drive to a sub drive as shown in FIG. 7 (S30). In this way, with respect to the disk failure, it is possible to eliminate the influence on the operation of the application attributable to an increase in processing load due to data restoration from the parity. Subsequently, the occurrence of the failure and the state of switching the access drive are notified to the system administrator (S40). For such notification, as shown in FIG. 5, an appropriate method such as displaying on the display unit of the management server is used. In addition, it is also possible to notify the information shown in FIG. 5 to the user who uses the application through the host, with a Web screen or the like. Thereafter, if the failed drive is recovered or restored by a replacement, the access drive is set back to the original drive, which is the normal drive (S50 to S60).
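The restoration load avoided by switching to the mirror can be illustrated with the XOR parity used at RAID-5: regenerating an unreadable block requires reading every surviving data block plus the parity block, which is the extra processing referred to above. A minimal sketch, using arbitrary example bytes rather than any values from the specification:

```python
# XOR a list of equal-length byte blocks together (RAID-5 parity rule).
def xor_blocks(blocks):
    out = bytes(len(blocks[0]))
    for b in blocks:
        out = bytes(x ^ y for x, y in zip(out, b))
    return out

d1, d2, d3 = b"\x01\x02", b"\x10\x20", b"\x0f\x0f"
p1 = xor_blocks([d1, d2, d3])        # parity written alongside the data

# Drive holding D2 fails: regenerate it from the surviving blocks and parity.
restored = xor_blocks([d1, d3, p1])
assert restored == d2
```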
- On the contrary, if the mirror drive does not exist (S20: NO), as temporary processing, a judgment is made on whether it is possible to create a mirror (an internal copy or a synchronous copy area) internally (S70). If it is possible to create the internal copy (S70: YES), a new logical volume is created on another drive without a failure (S80). The new logical volume, to which the data are internally copied, is set as the access drive which the application uses (S90). Subsequently, the occurrence of the failure and the state of switching the access drive are notified to the system administrator (S100). For such notification, as shown in FIG. 5, an appropriate method such as displaying on the display unit of the management server is used. In addition, it is also possible to notify the information shown in FIG. 5 to the user who uses the application through the host, with a Web screen or the like. Thereafter, when the failed drive is recovered or restored by a replacement, the access drive is set back to the original drive, which is the normal drive, and the internal copy is deleted (S110 to S120).
- Meanwhile, if it is impossible to create the internal copy (S70: NO), it is also possible to use another disk array device. Based on the above-described corresponding table in FIG. 3, the logical volume corresponding to the failed drive is retrieved (S130). Thereafter, as a result of the retrieval, the application using the retrieved logical volume is detected (S140). Subsequently, the risk of performance degradation of the detected application is notified to the system administrator (S150). For such notification, as shown in FIG. 5, an appropriate method such as displaying on the display unit of the management server is used. In addition, it is also possible to notify the information shown in FIG. 5 to the user who uses the application through the host, with a Web screen or the like.
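The branching of FIG. 6 described over the preceding paragraphs (steps S10 to S150) can be sketched as a simple decision function. The action strings and parameter names below are illustrative assumptions, not interfaces defined by the specification:

```python
# Hedged sketch of the FIG. 6 countermeasure flow on a physical-disk error:
# prefer switching to a mirror drive; failing that, create an internal copy
# on an empty area; failing that, only detect and notify.
def handle_disk_error(has_mirror, can_internal_copy):
    actions = []
    if has_mirror:                       # S20: YES
        actions.append("switch access drive to sub (mirror) drive")  # S30
    elif can_internal_copy:              # S70: YES
        actions.append("create new logical volume on empty drive")   # S80
        actions.append("set new logical volume as access drive")     # S90
    else:                                # S70: NO
        actions.append("retrieve logical volume of failed drive")    # S130
        actions.append("detect application using that volume")       # S140
    actions.append("notify administrator")                           # S40/S100/S150
    return actions

print(handle_disk_error(has_mirror=False, can_internal_copy=True))
```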
- Next, as shown in the above-described information displayed on the screen in FIG. 5, countermeasure processing for load distribution of the application at risk of performance degradation is executed according to the job priority. For example, as shown in FIG. 5, if there is a risk of performance degradation of the B system, the application with the higher priority “A”, and the mirror drive is not in use under that status, the allocation of processing capability is increased from “10” to “15” so that the performance of the application can be maintained without being influenced by the drive failure. Herein, an increase in the allocation of processing capability is equivalent to an increase in the allocation rate of the CPU resources.
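The capability adjustment in the FIG. 5 example can be sketched as follows: the B system (job priority "A") has its processing-capability allocation raised from 10 to 15 when its drive fails, i.e. it receives a larger share of the CPU resources. The boost amount, the rule of boosting only priority-"A" jobs, and the data layout are assumptions for illustration:

```python
# Per-application job priority and processing-capability level (cf. FIG. 5).
allocations = {"A system": {"priority": "B", "capability": 10},
               "B system": {"priority": "A", "capability": 10}}

def boost_on_failure(app, amount=5):
    """Raise the allocation of a higher-priority application at risk."""
    entry = allocations[app]
    if entry["priority"] == "A":       # only boost high-priority jobs
        entry["capability"] += amount
    return entry["capability"]

print(boost_on_failure("B system"))    # 15
```

The converse adjustment described next, lowering the allocation of the lower-priority A system instead, would simply subtract from its entry.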
- Moreover, a similar effect can be obtained by raising the job priority of the application (S160). Alternatively, a decrease in the processing capability of the application having the higher priority “A” can be relatively avoided by reducing the allocation of processing capability on the side of the A system, the application having the lower priority “B”.
- Such allocation of the processing capabilities and settings of the job priority can be performed through the screen displayed on the display unit in FIG. 5 by the GUI. Thereafter, when the failed drive is recovered or restored by a replacement, the allocation of the processing capabilities and the job priority of the applications are set back to the original states (S170 to S180).
- As another embodiment, it is also possible to distribute the load by logical volume unit, even in a state without failures such as errors in a disk (drive). This is performed by monitoring the usage status of the application and the access status regarding the respective logical volumes. In this way, it is possible to prevent an event such as performance degradation of a certain logical volume.
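The per-logical-volume monitoring mentioned above can be sketched as tracking access counts for each logical volume and flagging volumes whose counts exceed a threshold as candidates for load distribution, before any drive failure occurs. The threshold value and all names are illustrative assumptions:

```python
# Hypothetical per-LU access monitor for proactive load distribution.
ACCESS_THRESHOLD = 100

access_counts = {"LU0": 0, "LU1": 0}

def record_access(lu):
    """Called on each I/O to a logical volume."""
    access_counts[lu] += 1

def overloaded_volumes():
    """Logical volumes whose access counts exceed the threshold."""
    return [lu for lu, n in access_counts.items() if n > ACCESS_THRESHOLD]

for _ in range(150):          # a burst of accesses concentrated on LU0
    record_access("LU0")
record_access("LU1")
print(overloaded_volumes())   # ['LU0']
```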
- The embodiments of the present invention can exert the following effects:
- The management server stores corresponding relationships among the applications, the logical volumes used by the applications, and the physical disks corresponding to the logical volumes. Thus, it is possible to detect an application which may be influenced by the error in the physical disk and to predict the performance degradation thereof. Based on this prediction, it is possible to suppress the performance degradation or an abnormal termination.
- The management server notifies the user terminal of information concerning the application which may be influenced by the error in the physical disk. In this way, it is possible to advise the user on the risk of the performance degradation of the application.
- The priority of execution of the job of the application is set in the corresponding relationship stored by the management server. The management server allocates the processing capability of the application according to the priority of execution of the job of the application which uses the physical disk with the error. In this way, it is possible to control the load of the application.
- The management server gives higher priority to execution of the job of the application which uses the physical disk with the error. In this way, it is possible to suppress the performance degradation of the application.
- The management server allocates the processing capability of the application according to operation input of the user. The user can control the load of the application which may be influenced by the error in the physical disk.
- The management server displays the information concerning the applications, the disk array devices used by the applications, and the processing capabilities allocated to the applications on the screen of the display unit. In this way, the user can monitor the logical volume and the application which may be influenced by the error in the physical disk, and the processing capability allocated to the application on the display unit.
- Upon detection of the error in the physical disk, the management server sets the corresponding mirror disk for the application to use. In this way, it is possible to prevent the performance degradation of the application beforehand.
- Upon detection of the error in the physical disk, the management server allocates a new logical volume to an unused disk area in the disk array device and executes internal copy processing, whereby the management server sets the new logical volume for the application to use. In this way, it is possible to prevent the performance degradation of the application beforehand.
- Although the present invention has been described above based on the embodiments, it is to be noted that the present invention shall not be limited to the embodiments stated herein, and that various modifications can be made without departing from the spirit of the invention.
Claims (18)
1. A storage system comprising:
a plurality of disk array devices connected through a network to a host for running an application; and
a management server for monitoring said disk array devices, wherein said disk array device includes a physical disk error detecting unit for detecting an error in a physical disk, said management server stores a corresponding relationship among said application, a logical volume used by said application, and said physical disk corresponding to said logical volume, and
said management server includes an application detecting unit for detecting said application using said logical volume corresponding to said physical disk with the error according to said corresponding relationship when said physical disk error detecting unit of said disk array device detects the error in said physical disk.
2. A storage system according to claim 1 , wherein said management server includes an application notifying unit for notifying a user terminal of information concerning said application detected by said application detecting unit.
3. A storage system according to claim 1 ,
wherein said corresponding relationship stored by said management server contains settings on priority of execution of a job of said application, and
said management server includes an application processing capability allocating unit for allocating a processing capability of said application according to said priority of execution of the job of said application which uses said physical disk with the error when said physical disk error detecting unit of said disk array device detects the error in said physical disk.
4. A storage system according to claim 3 , wherein said management server gives higher priority to execution of said job of said application which uses said physical disk with said error when said physical disk error detecting unit of said disk array device detects said error in said physical disk.
5. A storage system according to claim 3 , wherein said application processing capability allocating unit of said management server allocates the processing capability of said application according to operation input of a user.
6. A storage system according to claim 1 , wherein said management server includes an application information displaying unit for displaying, on a screen of a display unit, application information concerning at least an application detected by said application detecting unit, said disk array device used by said application, and a processing capability allocated to said application.
7. A management server for monitoring a plurality of disk array devices connected through a network to a host for running an application and having a physical disk error detecting unit for detecting an error in a physical disk, wherein
said management server stores a corresponding relationship among said application, a logical volume used by said application, and said physical disk corresponding to said logical volume, and
said management server comprises:
an application detecting unit for detecting said application using a logical volume corresponding to said physical disk with said error according to said corresponding relationship when said physical disk error detecting unit of said disk array device detects said error in said physical disk.
8. A management server according to claim 7 , further comprising:
an application notifying unit for notifying a user terminal of information concerning said application detected by said application detecting unit.
9. A management server according to claim 7 ,
wherein said corresponding relationship contains settings on priority of execution of a job of said application, and
said management server further comprises an application processing capability allocating unit for allocating a processing capability of said application according to said priority of execution of said job of said application which uses said physical disk with said error, when said physical disk error detecting unit of said disk array device detects said error in said physical disk.
10. A management server according to claim 9 , wherein the higher priority is given for execution of said job of said application using said physical disk with said error when said physical disk error detecting unit of said disk array device detects said error in said physical disk.
11. A management server according to claim 9 , wherein said application processing capability allocating unit allocates the processing capability of said application according to operation input of a user.
12. A management server according to claim 7 , further comprising:
an application information displaying unit for displaying, on a screen of a display unit, application information concerning at least applications detected by said application detecting unit, said disk array device used by said application, and a processing capability allocated to said application.
13. A method of managing an application by a management server for monitoring a plurality of disk array devices connected through a network to a host for running an application, said method comprising:
allowing said management server to store a corresponding relationship among said application, a logical volume used by said application, and a physical disk corresponding to said logical volume, and
allowing said management server to detect said application using said logical volume corresponding to said physical disk with an error according to said corresponding relationship when said disk array device detects said error in said physical disk.
14. A method of managing an application according to claim 13 ,
wherein information concerning said detected application is notified to a user terminal.
15. A method of managing an application according to claim 13 ,
wherein said corresponding relationship contains settings on priority of execution of a job of said application, and
a processing capability of said application is allocated according to the priority of execution of said job of said application which uses said physical disk with said error when said disk array device detects said error in said physical disk.
16. A method of managing an application according to claim 14 , wherein higher priority is given to execution of said job of said application using said physical disk with said error when said disk array device detects said error in said physical disk.
17. A method of managing an application according to claim 15 , wherein the processing capability of said application is allocated according to operation input of a user.
18. A method of managing an application according to claim 13 , wherein application information concerning said detected application, said disk array device used by said application, and the processing capability allocated to said application are at least displayed on a screen of a display unit.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/492,340 US7457916B2 (en) | 2002-05-24 | 2006-07-24 | Storage system, management server, and method of managing application thereof |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2002150145A JP2003345531A (en) | 2002-05-24 | 2002-05-24 | Storage system, management server, and its application managing method |
JP2002-150145 | 2002-05-24 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/492,340 Continuation US7457916B2 (en) | 2002-05-24 | 2006-07-24 | Storage system, management server, and method of managing application thereof |
Publications (1)
Publication Number | Publication Date |
---|---|
US20040153728A1 true US20040153728A1 (en) | 2004-08-05 |
Family
ID=29545307
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/444,650 Abandoned US20040153728A1 (en) | 2002-05-24 | 2003-05-22 | Storage system, management server, and method of managing application thereof |
US11/492,340 Expired - Fee Related US7457916B2 (en) | 2002-05-24 | 2006-07-24 | Storage system, management server, and method of managing application thereof |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/492,340 Expired - Fee Related US7457916B2 (en) | 2002-05-24 | 2006-07-24 | Storage system, management server, and method of managing application thereof |
Country Status (3)
Country | Link |
---|---|
US (2) | US20040153728A1 (en) |
EP (1) | EP1369785A3 (en) |
JP (1) | JP2003345531A (en) |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050039085A1 (en) * | 2003-08-12 | 2005-02-17 | Hitachi, Ltd. | Method for analyzing performance information |
US20050289385A1 (en) * | 2004-05-26 | 2005-12-29 | Hitachi, Ltd. | Method and system for managing of job execution |
US20060101402A1 (en) * | 2004-10-15 | 2006-05-11 | Miller William L | Method and systems for anomaly detection |
US20060107096A1 (en) * | 2004-11-04 | 2006-05-18 | Findleton Iain B | Method and system for network storage device failure protection and recovery |
US20070180314A1 (en) * | 2006-01-06 | 2007-08-02 | Toru Kawashima | Computer system management method, management server, computer system, and program |
US20080126857A1 (en) * | 2006-08-14 | 2008-05-29 | Robert Beverley Basham | Preemptive Data Protection for Copy Services in Storage Systems and Applications |
US20080320332A1 (en) * | 2007-06-21 | 2008-12-25 | Joanna Katharine Brown | Error Processing Across Multiple Initiator Network |
US20100031082A1 (en) * | 2008-07-31 | 2010-02-04 | Dan Olster | Prioritized Rebuilding of a Storage Device |
US20100122115A1 (en) * | 2008-11-11 | 2010-05-13 | Dan Olster | Storage Device Realignment |
US7882393B2 (en) | 2007-03-28 | 2011-02-01 | International Business Machines Corporation | In-band problem log data collection between a host system and a storage system |
US20130031407A1 (en) * | 2011-07-27 | 2013-01-31 | Cleversafe, Inc. | Identifying a slice error in a dispersed storage network |
US8707107B1 (en) * | 2011-12-09 | 2014-04-22 | Symantec Corporation | Systems and methods for proactively facilitating restoration of potential data failures |
US20150006593A1 (en) * | 2013-06-27 | 2015-01-01 | International Business Machines Corporation | Managing i/o operations in a shared file system |
US9389808B2 (en) | 2013-08-22 | 2016-07-12 | Kabushiki Kaisha Toshiba | Storage device and data processing method |
Families Citing this family (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4575689B2 (en) * | 2004-03-18 | 2010-11-04 | 株式会社日立製作所 | Storage system and computer system |
JP2005326935A (en) | 2004-05-12 | 2005-11-24 | Hitachi Ltd | Management server for computer system equipped with virtualization storage and failure preventing/restoring method |
JP4646574B2 (en) | 2004-08-30 | 2011-03-09 | 株式会社日立製作所 | Data processing system |
WO2006036812A2 (en) * | 2004-09-22 | 2006-04-06 | Xyratex Technology Limited | System and method for network performance monitoring and predictive failure analysis |
US20070050544A1 (en) * | 2005-09-01 | 2007-03-01 | Dell Products L.P. | System and method for storage rebuild management |
US7669087B1 (en) * | 2006-07-31 | 2010-02-23 | Sun Microsystems, Inc. | Method and apparatus for managing workload across multiple resources |
JP5057741B2 (en) * | 2006-10-12 | 2012-10-24 | 株式会社日立製作所 | Storage device |
JP2009077959A (en) * | 2007-09-26 | 2009-04-16 | Toshiba Corp | Ultrasonic image diagnostic device and its control program |
US9069667B2 (en) | 2008-01-28 | 2015-06-30 | International Business Machines Corporation | Method to identify unique host applications running within a storage controller |
JP2010097385A (en) | 2008-10-16 | 2010-04-30 | Fujitsu Ltd | Data management program, storage device diagnostic program, and multi-node storage system |
US8495417B2 (en) * | 2009-01-09 | 2013-07-23 | Netapp, Inc. | System and method for redundancy-protected aggregates |
CN101876885B (en) * | 2010-06-18 | 2015-11-25 | 中兴通讯股份有限公司 | A kind of method and apparatus of assignment logic drive |
US8782525B2 (en) * | 2011-07-28 | 2014-07-15 | National Instruments Corporation | Displaying physical signal routing in a diagram of a system |
JP5985403B2 (en) | 2013-01-10 | 2016-09-06 | 株式会社東芝 | Storage device |
JP6005533B2 (en) | 2013-01-17 | 2016-10-12 | 株式会社東芝 | Storage device and storage method |
JPWO2015040728A1 (en) * | 2013-09-20 | 2017-03-02 | 富士通株式会社 | Information processing apparatus, information processing method, and program |
CN104571965A (en) * | 2015-01-19 | 2015-04-29 | 浪潮集团有限公司 | Raid reconstruction optimizing method |
Citations (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US91944A (en) * | 1869-06-29 | Improved machine for turning and scraping grindstones | ||
US99579A (en) * | 1870-02-08 | John Martino | ||
US99578A (en) * | 1870-02-08 | Improvement in hoisting-machines | ||
US99598A (en) * | 1870-02-08 | Improvement in dental articulators | ||
US124215A (en) * | 1872-03-05 | Improvement in adjustable horseshoes | ||
US172145A (en) * | 1876-01-11 | Improvement in harvesters | ||
US191311A (en) * | 1877-05-29 | Improvement in devices for punching sheet metal for pipe-elbows | ||
US5815652A (en) * | 1995-05-31 | 1998-09-29 | Hitachi, Ltd. | Computer management system |
US6209059B1 (en) * | 1997-09-25 | 2001-03-27 | Emc Corporation | Method and apparatus for the on-line reconfiguration of the logical volumes of a data storage system |
US20020066050A1 (en) * | 2000-11-28 | 2002-05-30 | Lerman Jesse S. | Method for regenerating and streaming content from a video server using raid 5 data striping |
US20020091944A1 (en) * | 2001-01-10 | 2002-07-11 | Center 7, Inc. | Reporting and maintenance systems for enterprise management from a central location |
US20020099579A1 (en) * | 2001-01-22 | 2002-07-25 | Stowell David P. M. | Stateless, event-monitoring architecture for performance-based supply chain management system and method |
US20020099598A1 (en) * | 2001-01-22 | 2002-07-25 | Eicher, Jr. Daryl E. | Performance-based supply chain management system and method with metalerting and hot spot identification |
US20020099578A1 (en) * | 2001-01-22 | 2002-07-25 | Eicher Daryl E. | Performance-based supply chain management system and method with automatic alert threshold determination |
US6460151B1 (en) * | 1999-07-26 | 2002-10-01 | Microsoft Corporation | System and method for predicting storage device failures |
US20020191311A1 (en) * | 2001-01-29 | 2002-12-19 | Ulrich Thomas R. | Dynamically scalable disk array |
US20030023933A1 (en) * | 2001-07-27 | 2003-01-30 | Sun Microsystems, Inc. | End-to-end disk data checksumming |
US6529995B1 (en) * | 1999-06-18 | 2003-03-04 | Storage Technology Corporation | Method and apparatus for maintaining and restoring mapping table entries and data in a raid system |
US20030061331A1 (en) * | 2001-09-27 | 2003-03-27 | Yasuaki Nakamura | Data storage system and control method thereof |
US6557122B1 (en) * | 1998-04-07 | 2003-04-29 | Hitachi, Ltd. | Notification system for informing a network user of a problem in the network |
US20030115604A1 (en) * | 2001-12-18 | 2003-06-19 | Pioneer Corporation | Program recording and viewing reservation system and method thereof |
US20030172145A1 (en) * | 2002-03-11 | 2003-09-11 | Nguyen John V. | System and method for designing, developing and implementing internet service provider architectures |
US6629207B1 (en) * | 1999-10-01 | 2003-09-30 | Hitachi, Ltd. | Method for loading instructions or data into a locked way of a cache memory |
US6629267B1 (en) * | 2000-05-15 | 2003-09-30 | Microsoft Corporation | Method and system for reporting a program failure |
US20040078461A1 (en) * | 2002-10-18 | 2004-04-22 | International Business Machines Corporation | Monitoring storage resources used by computer applications distributed across a network |
US6874106B2 (en) * | 2001-08-06 | 2005-03-29 | Fujitsu Limited | Method and device for notifying server failure recovery |
US6928589B1 (en) * | 2004-01-23 | 2005-08-09 | Hewlett-Packard Development Company, L.P. | Node management in high-availability cluster |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001337790A (en) * | 2000-05-24 | 2001-12-07 | Hitachi Ltd | Storage unit and its hierarchical management control method |
US7103653B2 (en) * | 2000-06-05 | 2006-09-05 | Fujitsu Limited | Storage area network management system, method, and computer-readable medium |
JP4794068B2 (en) | 2000-06-05 | 2011-10-12 | 富士通株式会社 | Storage area network management system |
2002
- 2002-05-24 JP JP2002150145A patent/JP2003345531A/en active Pending

2003
- 2003-05-22 US US10/444,650 patent/US20040153728A1/en not_active Abandoned
- 2003-05-23 EP EP03011752A patent/EP1369785A3/en not_active Withdrawn

2006
- 2006-07-24 US US11/492,340 patent/US7457916B2/en not_active Expired - Fee Related
Cited By (35)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090177839A1 (en) * | 2003-08-12 | 2009-07-09 | Hitachi, Ltd. | Method for analyzing performance information |
US8006035B2 (en) | 2003-08-12 | 2011-08-23 | Hitachi, Ltd. | Method for analyzing performance information |
US8407414B2 (en) | 2003-08-12 | 2013-03-26 | Hitachi, Ltd. | Method for analyzing performance information |
US20050039085A1 (en) * | 2003-08-12 | 2005-02-17 | Hitachi, Ltd. | Method for analyzing performance information |
US7096315B2 (en) | 2003-08-12 | 2006-08-22 | Hitachi, Ltd. | Method for analyzing performance information |
US7127555B2 (en) | 2003-08-12 | 2006-10-24 | Hitachi, Ltd. | Method for analyzing performance information |
US20070016736A1 (en) * | 2003-08-12 | 2007-01-18 | Hitachi, Ltd. | Method for analyzing performance information |
US8209482B2 (en) | 2003-08-12 | 2012-06-26 | Hitachi, Ltd. | Method for analyzing performance information |
US7310701B2 (en) | 2003-08-12 | 2007-12-18 | Hitachi, Ltd. | Method for analyzing performance information |
US7523254B2 (en) | 2003-08-12 | 2009-04-21 | Hitachi, Ltd. | Method for analyzing performance information |
US20080098110A1 (en) * | 2003-08-12 | 2008-04-24 | Hitachi, Ltd. | Method for analyzing performance information |
US20050289385A1 (en) * | 2004-05-26 | 2005-12-29 | Hitachi, Ltd. | Method and system for managing of job execution |
US7421613B2 (en) | 2004-05-26 | 2008-09-02 | Hitachi, Ltd. | Method and system for managing of job execution |
US20060101402A1 (en) * | 2004-10-15 | 2006-05-11 | Miller William L | Method and systems for anomaly detection |
US7529967B2 (en) * | 2004-11-04 | 2009-05-05 | Rackable Systems Inc. | Method and system for network storage device failure protection and recovery |
US20060107096A1 (en) * | 2004-11-04 | 2006-05-18 | Findleton Iain B | Method and system for network storage device failure protection and recovery |
US20070180314A1 (en) * | 2006-01-06 | 2007-08-02 | Toru Kawashima | Computer system management method, management server, computer system, and program |
US7797572B2 (en) * | 2006-01-06 | 2010-09-14 | Hitachi, Ltd. | Computer system management method, management server, computer system, and program |
US7676702B2 (en) * | 2006-08-14 | 2010-03-09 | International Business Machines Corporation | Preemptive data protection for copy services in storage systems and applications |
US20080126857A1 (en) * | 2006-08-14 | 2008-05-29 | Robert Beverley Basham | Preemptive Data Protection for Copy Services in Storage Systems and Applications |
US7882393B2 (en) | 2007-03-28 | 2011-02-01 | International Business Machines Corporation | In-band problem log data collection between a host system and a storage system |
US20080320332A1 (en) * | 2007-06-21 | 2008-12-25 | Joanna Katharine Brown | Error Processing Across Multiple Initiator Network |
US7779308B2 (en) | 2007-06-21 | 2010-08-17 | International Business Machines Corporation | Error processing across multiple initiator network |
US20100031082A1 (en) * | 2008-07-31 | 2010-02-04 | Dan Olster | Prioritized Rebuilding of a Storage Device |
US8006128B2 (en) | 2008-07-31 | 2011-08-23 | Datadirect Networks, Inc. | Prioritized rebuilding of a storage device |
US8010835B2 (en) * | 2008-11-11 | 2011-08-30 | Datadirect Networks, Inc. | Storage device realignment |
US8250401B2 (en) | 2008-11-11 | 2012-08-21 | Datadirect Networks, Inc. | Storage device realignment |
US20100122115A1 (en) * | 2008-11-11 | 2010-05-13 | Dan Olster | Storage Device Realignment |
US20130031407A1 (en) * | 2011-07-27 | 2013-01-31 | Cleversafe, Inc. | Identifying a slice error in a dispersed storage network |
US8914667B2 (en) * | 2011-07-27 | 2014-12-16 | Cleversafe, Inc. | Identifying a slice error in a dispersed storage network |
US8707107B1 (en) * | 2011-12-09 | 2014-04-22 | Symantec Corporation | Systems and methods for proactively facilitating restoration of potential data failures |
US20150006593A1 (en) * | 2013-06-27 | 2015-01-01 | International Business Machines Corporation | Managing i/o operations in a shared file system |
US9244939B2 (en) * | 2013-06-27 | 2016-01-26 | Lenovo Enterprise Solutions (Singapore) Pte. Ltd. | Managing I/O operations in a shared file system |
US9772877B2 (en) | 2013-06-27 | 2017-09-26 | Lenovo Enterprise Solution (Singapore) PTE., LTD. | Managing I/O operations in a shared file system |
US9389808B2 (en) | 2013-08-22 | 2016-07-12 | Kabushiki Kaisha Toshiba | Storage device and data processing method |
Also Published As
Publication number | Publication date |
---|---|
US7457916B2 (en) | 2008-11-25 |
EP1369785A3 (en) | 2011-03-30 |
JP2003345531A (en) | 2003-12-05 |
US20070011579A1 (en) | 2007-01-11 |
EP1369785A2 (en) | 2003-12-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7457916B2 (en) | Storage system, management server, and method of managing application thereof | |
US9348724B2 (en) | Method and apparatus for maintaining a workload service level on a converged platform | |
US7502955B2 (en) | Disk array system and control method thereof | |
US7631219B2 (en) | Method and computer program product for marking errors in BIOS on a RAID controller | |
US8271761B2 (en) | Storage system and management method thereof | |
US7480780B2 (en) | Highly available external storage system | |
JP4606455B2 (en) | Storage management device, storage management program, and storage system | |
US7565508B2 (en) | Allocating clusters to storage partitions in a storage system | |
WO2011108027A1 (en) | Computer system and control method therefor | |
US20080256397A1 (en) | System and Method for Network Performance Monitoring and Predictive Failure Analysis | |
US20060077724A1 (en) | Disk array system | |
US8566637B1 (en) | Analyzing drive errors in data storage systems | |
JP2007058419A (en) | Storage system with logic circuit constructed according to information inside memory on pld | |
WO2015114643A1 (en) | Data storage system rebuild | |
US20100100677A1 (en) | Power and performance management using MAIDx and adaptive data placement | |
US20070101188A1 (en) | Method for establishing stable storage mechanism | |
US8782465B1 (en) | Managing drive problems in data storage systems by tracking overall retry time | |
US7886186B2 (en) | Storage system and management method for the same | |
JP2022017216A (en) | Storage system with disaster recovery function and its operation method | |
JPH09269871A (en) | Data re-redundancy making system in disk array device | |
JP4640071B2 (en) | Information processing apparatus, information processing restoration method, and information processing restoration program | |
US20060090032A1 (en) | Method and computer program product of obtaining temporary conventional memory usage in BIOS | |
US7478269B2 (en) | Method and computer program product of keeping configuration data history using duplicated ring buffers |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | AS | Assignment | Owner name: HITACHI, LTD., JAPAN. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: SUZUKI, KATSUYOSHI; KAMANO, TOSHIMITSU; MURAOKA, KENJI; REEL/FRAME: 014620/0069; SIGNING DATES FROM 20030611 TO 20030617 |
 | STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |