US20080263111A1 - Storage operation management program and method and a storage management computer - Google Patents
- Publication number
- US20080263111A1 (application US12/143,192)
- Authority
- US
- United States
- Prior art keywords
- volume
- replication
- storage
- policy
- route
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/2002—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where interconnections or communication control functionality are redundant
- G06F11/2007—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where interconnections or communication control functionality are redundant using redundant communication media
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/2053—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
- G06F11/2056—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring
- G06F11/2069—Management of state, configuration or failover
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/2053—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
- G06F11/2056—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring
- G06F11/2071—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring using a plurality of controllers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10—TECHNICAL SUBJECTS COVERED BY FORMER USPC
- Y10S—TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10S707/00—Data processing: database and file management or data structures
- Y10S707/99951—File or database maintenance
- Y10S707/99952—Coherency, e.g. same view to multiple users
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10—TECHNICAL SUBJECTS COVERED BY FORMER USPC
- Y10S—TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10S707/00—Data processing: database and file management or data structures
- Y10S707/99951—File or database maintenance
- Y10S707/99952—Coherency, e.g. same view to multiple users
- Y10S707/99953—Recoverability
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10—TECHNICAL SUBJECTS COVERED BY FORMER USPC
- Y10S—TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10S707/00—Data processing: database and file management or data structures
- Y10S707/99951—File or database maintenance
- Y10S707/99952—Coherency, e.g. same view to multiple users
- Y10S707/99955—Archiving or backup
Definitions
- This invention relates to a storage operation management program, a storage operation management method and a storage managing computer. More particularly, the invention relates to a storage operation management program, a storage operation management method and a storage managing computer each capable of operating and managing replication of data areas inside the same storage or among storages in accordance with information on the data areas of the storages.
- The snapshot function makes it possible to conduct data backup on-line, while continuing business operations, in a computer system that must operate continuously for 24 hours a day, 365 days a year, without interruption.
- A storage of a normally operating computer system transfers update data to a storage of a computer system installed and operating at a remote place, so that the data stored in the storage at the remote place is kept substantially equivalent to the data stored in the storage of the normally operating computer system, and the loss of data can be minimized even when an accident occurs in the normally operating computer system.
- There is also a technology that decides a remote copy schedule and a line route in consideration of the access frequency of the data areas and the condition of the replication line.
- All the prior art technologies described above are directed to reducing the processing load of replication from the data area of the replication source to the data area of the replication destination and to improving transfer efficiency.
- These technologies are, however, based on the premise that the data area of the replication source and the data area of the replication destination already exist together.
- A storage operation management program for operating and managing replication of data areas inside a storage or among a plurality of storages comprises a process step of accepting a request for generation of a data area of a replication destination for a data area of a replication source; a process step of retrieving, from the existing data areas, a data area capable of becoming the replication destination coincident with the properties of the data area corresponding to the policy of the data area of the replication source; and a process step of instructing the storage to generate a replication pair of the data areas.
- A storage operation management method for operating and managing replication of data areas inside a storage or among a plurality of storages comprises a process step of accepting a request for generation of a data area of a replication destination for a data area of a replication source; a process step of retrieving, from the existing data areas, a data area capable of becoming the replication destination coincident with the properties of the data area corresponding to the policy of the data area of the replication source; and a process step of generating a replication pair on the basis of the retrieval result.
- When a data area whose policy and properties are managed is selected for replication, the policy and properties of the data area of the replication source are acquired, a data area of the replication destination coincident with the policy and properties of the data area of the replication source is generated, and a replication pair of the data areas is generated. Consequently, an operation of data doubling in accordance with the policy and properties of the data area becomes explicitly possible, and the setting of data doubling in accordance with the policy of the data area can be automated and constituted effectively from among a plurality of storages connected through a SAN or an IP network.
- In operation, too, conformity with the policy of the data area of the replication source is maintained.
- When a fault occurs in a replication line, a judgment is made in accordance with the policy of the data area of the replication source as to whether another connectable route should be set, whether the data to be doubled should be stored in a cache inside the storage to wait for restoration of the line fault, or whether replication data having low priority should be omitted. Consequently, the operation of data doubling can be conducted effectively.
- The resolution method described above manages the policy of the data areas of the storages and their properties, can conduct automatic setting of data doubling in accordance with the policy of a data area by utilizing that policy, and can operate data doubling flexibly and explicitly for the users.
- FIG. 1 is a block diagram showing the construction of a storage operation management system according to a first embodiment of the invention;
- FIG. 2 is an explanatory view useful for explaining the table group managed by a volume management module 105 ;
- FIG. 3 is a flowchart useful for explaining a processing operation for setting replication of volumes in accordance with the policy of a volume of the replication source and generating a pair of the volume of the replication source and a volume of the replication destination;
- FIG. 4 is a block diagram showing the construction of a storage operation management system according to a second embodiment of the invention;
- FIG. 5 is an explanatory view useful for explaining the table group managed by a route management module 401 ;
- FIG. 6 is a flowchart useful for explaining a processing operation for generating a replication in accordance with the policy of a line connecting volumes when a volume replication is generated between storages in the second embodiment of the invention;
- FIG. 7 is a flowchart useful for explaining a troubleshooting processing operation executed when a fault occurs in a replication route during the volume replication operation between the storages;
- FIG. 8 is a flowchart useful for explaining a processing operation for generating a replication pair in another storage; and
- FIG. 9 is a block diagram showing the construction of a storage operation management system according to a third embodiment of the invention.
- FIG. 1 is a block diagram showing a construction of a storage operation management system according to a first embodiment of the invention.
- the storage operation management system includes a managing computer (manager) 100 and a plurality of storages 11 a to 11 n that are connected to one another through a data network 130 and through a network 140 .
- The storage 11 a includes volumes (data areas, hereinafter simply called "volumes") 113 a , 113 b , . . . , 113 n as management areas for storing the data actually managed by a computer, a space area 114 that will later be divided and set as volumes, data communication equipment 110 for transmitting and receiving the data I/O for reading/writing the volumes, network communication equipment 111 for communicating with the managing computer 100 and so forth, and a controller 112 for actually controlling the storage.
- The data communication equipment 110 and the network communication equipment 111 may be combined into a single piece of communication equipment when they are connected through an IP (Internet Protocol) connection in a connection form such as Ethernet (trademark).
- the storage 11 b too, includes volumes 123 a , 123 b , . . . , 123 n , a space area 124 , data communication equipment 120 , network communication equipment 121 and a controller 122 in the same way as the storage 11 a .
- The other storages also have the same construction as the storage 11 a.
- The data network 130 is a route, such as a cable, for data communication between the storages and the computers.
- The network 140 is a route through which the storages can communicate with the managing computer 100 and so forth. These networks may be either mere buses or LANs.
- The data network 130 and the network 140 may be one and the same depending on the communication form, and may be Ethernet (trademark), for example.
- the managing computer 100 includes a memory 101 , a CPU 102 , network communication equipment 103 and a storage device 104 .
- The storage device 104 stores therein a volume management module 105 and a volume management module table group 200 , described later, that is managed by the volume management module 105 .
- the volume management module 105 achieves the processing in the embodiment of the invention.
- the volume management module 105 includes a volume generation part 106 for instructing generation of the volumes to the storages 11 a , 11 b , . . . , 11 n , a volume pair generation part 107 for generating a replication pair from among the volumes generated by the instruction of the volume generation part 106 , and a volume pair selection part 108 for selecting the volume pair that can be replicated from policy of the volumes (for replication source, for replication destination, for both, etc) and their properties (performance, reliability, etc) and from policy of the storages (for financial systems, for various management businesses of companies, for database, etc) and their properties (performance, reliability, etc).
- the volume management module 105 is accomplished when software, not shown, stored in the storage device of the managing computer 100 is written into the memory 101 and is executed by the CPU 102 .
- the controller 112 of the storage 11 a and the controller 122 of the storage 11 b cooperate with each other, transmit the content of the volume 113 a to the volume 123 a through the data network 130 and replicate the content.
- the respective computers using the volume 113 a and the volume 123 a and the controllers 112 and 122 may cooperate with one another and conduct replication.
- FIG. 2 explains the table group that the volume management module 105 manages.
- the volume management module table group 200 managed by the volume management module 105 includes a volume information table 210 , a storage information table 220 and a pair information table 230 .
- The volume information table 210 stores: a volume ID 211 allocated to identify all the volumes of the storages managed by the managing computer 100 ; a storage ID 212 representing the identifier of the storage to which the volume belongs; a storage volume ID 213 representing the identifier of the volume as managed inside each storage; a policy 214 representing the use policy of the volume, such as for the replication source, for the replication destination, for both replication source and destination, or for the replication source for remote copy, each designated by the user or the application; a volume capacity 215 ; Read/Write 216 representing the read/write frequency from and to the volume set by the policy of the user and the application (for financial systems, for various management businesses of companies, for databases, etc.); performance 217 representing the read/write speed of the volume; reliability 218 representing the reliability of the volume as a numerical value; and pairing possible/impossible information 219 representing whether or not the volume can be set as a volume of the replication destination paired with a volume of the replication source.
- the storage ID 212 is an identifier of each storage 11 a , 11 b , . . . , 11 n .
- The storage volume ID 213 is an identifier of each volume 113 a , 113 b , . . . , 113 n .
- Values from 0 to 10 are entered as the Read/Write information 216 . When the value is 0, the policy is limited to Read only; as the value becomes larger, the frequency of Write becomes higher; and when the value is 10, the policy is limited to Write only.
- Values obtained by normalizing performance, such as the read/write speed, between 1 and 10, that is, values from 1 to 10, are entered to the performance information 217 representing the performance of the volume.
- The reliability information 218 , representing the reliability of the volume as a numerical value, holds values obtained by normalizing reliability between 1 and 10, that is, values from 1 to 10. The value 1 represents the highest degree of fault occurrence of the volume and the value 10 represents the lowest degree of fault occurrence.
- For example, the managing computer 100 decides a rule such that a RAID0 volume is 1, a RAID1 volume is 10, a RAID5 volume is 5, and so forth, and executes the management operation accordingly.
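As an illustrative sketch only, the reliability rule just described can be expressed as a lookup. The values for RAID0, RAID1 and RAID5 come from the rule above; the function and dictionary names are assumptions introduced here.

```python
# Sketch of the reliability normalization rule described above: the managing
# computer assigns each volume a value from 1 (highest degree of fault
# occurrence) to 10 (lowest), decided per RAID level.
RAID_RELIABILITY = {
    "RAID0": 1,   # striping only: least fault-tolerant under this rule
    "RAID5": 5,   # single parity
    "RAID1": 10,  # full mirroring: most fault-tolerant under this rule
}

def reliability_of(raid_level: str) -> int:
    """Return the normalized reliability value (1 to 10) for a RAID level."""
    if raid_level not in RAID_RELIABILITY:
        raise ValueError(f"no reliability rule decided for {raid_level}")
    return RAID_RELIABILITY[raid_level]
```

Other RAID levels would simply be added to the rule table as the managing computer's operator decides.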
- Read/Write information 216 , performance information 217 and reliability information 218 represent the properties of the corresponding volume.
- The storage information table 220 stores a storage ID 221 as the identifier of each storage managed by the managing computer 100 , a space capacity 222 representing the capacity of the space area that the storage has not yet set and used as volumes, a policy 223 given to the storage, a maximum reliability 224 that can be achieved when the storage generates a volume by use of the space area, and a maximum performance 225 that can be achieved when the storage generates a volume by use of the space area.
- the value of the space capacity 222 represents the capacity of the space area 114 .
- When the storage ID 221 has the same value as the storage ID 212 of the volume information table 210 , they represent the same storage.
- the pair information table 230 represents information of replication of the volumes the managing computer 100 manages. This table includes information of each of a pair ID 231 for identifying pair information, a main volume ID 232 representing a volume ID of the volume of the replication source, a sub-volume ID 233 representing a volume ID of the volume of the replication destination and a replication type 234 representing a replication type.
- When the value of the main volume ID 232 or of the sub-volume ID 233 is the same as a value of the volume ID in the volume information table 210 , they represent the same volume.
- a value “synchronous” or a value “asynchronous” is entered to the replication type 234 .
- In synchronous replication, replication to the volume of the replication destination is made whenever a write occurs in the volume of the replication source.
- In asynchronous replication, replication is made in units of a certain schedule (every predetermined time, or per data quantity of replication).
- the user or the application may set this replication type.
- the replication type may be set as a part of the processing of the volume management module that will be explained next.
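As a hedged sketch, the three tables of FIG. 2 can be modeled as simple records. The field comments below mirror the columns described above, but the class names and the Python representation itself are assumptions introduced for illustration.

```python
from dataclasses import dataclass

# Illustrative in-memory forms of the tables managed by the volume
# management module 105 (see FIG. 2). Class and field names are assumptions.

@dataclass
class VolumeInfo:            # volume information table 210
    volume_id: int           # 211: identifier across all managed storages
    storage_id: int          # 212: identifier of the owning storage
    storage_volume_id: int   # 213: identifier inside the owning storage
    policy: str              # 214: e.g. "for replication source"
    capacity: int            # 215: volume capacity
    read_write: int          # 216: 0 (Read only) .. 10 (Write only)
    performance: int         # 217: normalized 1..10
    reliability: int         # 218: normalized 1..10
    pairable: bool           # 219: pairing possible/impossible

@dataclass
class StorageInfo:           # storage information table 220
    storage_id: int          # 221
    space_capacity: int      # 222: capacity not yet set as volumes
    policy: str              # 223
    max_reliability: int     # 224: best achievable from the space area
    max_performance: int     # 225: best achievable from the space area

@dataclass
class PairInfo:              # pair information table 230
    pair_id: int             # 231
    main_volume_id: int      # 232: volume of the replication source
    sub_volume_id: int       # 233: volume of the replication destination
    replication_type: str    # 234: "synchronous" or "asynchronous"
```

A replication pair such as volume ID 3 replicated to volume ID 6 would then be recorded as `PairInfo(1, 3, 6, "synchronous")`.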
- FIG. 3 is a flowchart useful for explaining a processing operation for setting replication of the volume in accordance with the policy of the volume of the replication source and then generating the pair of the volume of the replication source and the volume of the replication destination. Next, this flowchart will be explained.
- the volume management module 105 executes the processing that will be explained with reference to this flowchart.
- The volume management module 105 acquires the pair generation request for the volume to be replicated, together with the ID of the volume of the replication source, from the user or the application (Step 300 ).
- the volume management module 105 may start executing the processing of Step 300 after the volume of the replication source is generated but before it receives the ID of the volume.
- the volume management module 105 may create the volume of the replication source and the volume of the replication destination during a series of processing and may execute a plurality of settings.
- the volume management module 105 may acquire a request that the replication is created by use of the volumes inside the storage or the volumes between the storages.
- In Step 301 , the information of the policy 223 of each storage is retrieved on the basis of the storage ID, and whether or not a storage having a volume capable of forming the pair with the volume of the replication source exists is judged. When no such storage exists, the flow shifts to Step 308 ; otherwise, the flow proceeds to Step 302 (Step 301 ).
- When the request for setting replication of the volumes between the storages exists in Step 301 , and when all the storages other than the storage to which the volume of the replication source belongs (that is, all the other storages registered to the storage information table 220 ) have registered, as their policy 223 , the limited policy "for main storage" that generates only volumes of the replication source, as typified by the storage of the storage ID 3 in the storage information table in FIG. 2 , the flow proceeds to Step 308 to notify that no storage capable of forming the volume of the replication destination exists. The flow may proceed to the next Step 302 when the storage information table 220 does not manage the information of the policy 223 .
- When a storage capable of forming the pair is found in the judgment of Step 301 , whether or not that storage holds a volume whose policy permits forming the pair is judged. In other words, the volumes whose policy does not allow them to become the volume of the replication destination are excluded, together with the volumes belonging to storages whose policy does not allow forming the volume of the replication destination; when all the remaining volumes cannot become the volume of the replication destination, the flow proceeds to Step 305 , and otherwise to Step 303 (Step 302 ).
- For example, when the request for setting replication of the volume exists between the storages, and when all the policies of the volumes outside the storage to which the volume of the replication source belongs are "for replication source", as typified by the volume ID 1 in the volume information table 210 , the volume of the replication destination cannot be designated and the flow therefore proceeds to Step 305 . If not, the flow proceeds to Step 303 .
- Similarly, when replication is to be set inside one storage, the volume of the replication destination cannot be designated if all the policies of the other volumes of the storage to which the volume of the replication source belongs are "for main volume". The flow therefore proceeds to Step 305 , and to Step 303 when the policy is not "for main volume".
- When a storage having the policy of forming the pair is found in the judgment of Step 302 , whether or not a volume capable of forming the pair exists is judged. In other words, the volumes that cannot be used at present as the replication destination are identified on the basis of the pairing possible/impossible information 219 , in addition to the volumes that cannot form the volume of the replication destination by policy. When the object volumes all have "impossible" in the pairing possible/impossible information 219 , as represented by the volume ID 2 of the volume information table, the existing volumes managed by the managing computer cannot provide the replication destination; the flow therefore proceeds to Step 305 , and to Step 304 when a usable volume exists (Step 303 ).
- When a volume capable of forming the pair is found in the judgment of Step 303 , whether or not the properties accessory to the volume, that is, the values of the capacity 215 , Read/Write 216 , performance 217 and reliability 218 of the volume information table 210 , are coincident is checked. The flow proceeds to Step 307 when a coincident volume exists and to Step 305 when not.
- The term "coincidence" used here means in principle that the values are the same. Even when no volume has exactly coincident properties, a volume having property values approximate to them may be used depending on the operation principle.
- For the values of the capacity 215 , performance 217 and reliability 218 described above, the values of the replication destination may be the same as, or greater than, those of the replication source.
- When the volume having the value 3 as the volume ID 211 of the volume information table 210 is designated as the volume of the replication source in the processing of Steps 300 to 304 , for example, it becomes possible to set the volume having the value 6 of the volume ID 211 as the volume of the replication destination. In other words, a volume having properties the same as or greater than the properties accessory to the replication source can be set as the volume of the replication destination (Step 304 ).
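The coincidence judgment of Step 304 (destination values the same as, or greater than, those of the source) can be sketched as a small predicate. This is an illustrative assumption of a data layout, not the patented implementation; the dictionary keys are hypothetical.

```python
# Hedged sketch of the property-coincidence check of Step 304: a candidate
# volume qualifies as the replication destination when its capacity,
# performance and reliability are the same as or greater than those of the
# replication source. Dictionary keys are illustrative assumptions.

def is_coincident(source: dict, candidate: dict) -> bool:
    """True when `candidate` can serve as the replication destination."""
    return (
        candidate["capacity"] >= source["capacity"]
        and candidate["performance"] >= source["performance"]
        and candidate["reliability"] >= source["reliability"]
    )

# Mirrors the example in the text: a source volume is matched by a candidate
# whose properties are equal or better.
src = {"capacity": 100, "performance": 5, "reliability": 5}
dst = {"capacity": 100, "performance": 7, "reliability": 10}
```

An operation that accepts "approximate" values, as mentioned above, would relax the comparisons with a tolerance instead of strict `>=`.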
- When no volume coincident with the accessory properties described above is found in the judgment of Step 304 , no corresponding volume exists as the volume of the replication destination among the existing volumes managed in the volume information table. Therefore, whether or not the corresponding volume can be generated from the space area of a storage is judged. In other words, excluding the storages that have no space area, whether or not a volume coincident with the properties of the volume of the replication source can be generated is checked from the space capacity 222 , the maximum reliability 224 and the maximum performance 225 in the storage information table 220 .
- When it can, the flow proceeds to Step 306 , and when it cannot, the flow proceeds to Step 308 .
- When the volume having the volume ID 211 value of 7 in the volume information table 210 is designated as the volume of the replication source, for example, no other existing volume can be set as the volume of the replication destination, because the properties are judged as non-coincident in Step 304 . This processing therefore judges whether or not the volume of the replication destination can be generated afresh from a space area. In this case, it can be known from the information of the storage information table 220 that the volume of the replication destination can be provided from the storage of the storage ID 4 (Step 305 ).
- When the judgment result of Step 305 represents that the corresponding volume can be generated from the space area, an instruction is given to the controller of the storage to generate the volume of the replication destination coincident with the volume of the replication source, and the volume information table 210 and the storage information table 220 are updated.
- the controller of the storage generates the volume in accordance with the instruction described above.
- When the volume of the volume ID 7 is the replication source, as in the example taken in Step 305 , the volume having the properties coincident with those of the volume of the volume ID 7 is generated from the storage of the storage ID 4 , and that volume is registered to the volume information table (Step 306 ).
- When the volume coincident with the volume of the replication source is found among the existing volumes after the processing of Step 306 or in the judgment of Step 304 , the replication pair is generated for the storage of the replication source and the storage of the replication destination by use of the corresponding volume as the replication destination, and the volume of the replication source and the newly set volume of the replication destination are registered to the pair information table 230 . The processing is then finished (Step 307 ).
- When no storage capable of forming the pair is found in the judgment of Step 301 , or when the judgment result of Step 305 represents that the corresponding volume cannot be generated from the space area, a notification to the effect that the pair cannot be set, with reference to the pairing possible/impossible information 219 of the volume information table, is indicated, and the processing is finished (Step 308 ).
- When the processing of Step 307 described above is applied to a volume in which the value of Read/Write 216 among the properties of the volume is Read only, no subsequent replication operation exists once replication has been made. Therefore, a report may be given notifying that the setting of replication is released after replication is finished once. In the case of replication of a volume in which Read occupies the major proportion, replication does not occur so frequently; this property may therefore be notified to the user as a factor for making the asynchronous replication schedule. A change of the asynchronous schedule may be urged on the user depending on the policy of the volume and on the load of replication.
- Step 308 means that the processing has failed to provide the volume of the replication destination. A message may be outputted on the basis of the condition at the point of occurrence of the failure. When the judgment of Step 301 fails, for example, a message to the effect that "storage capable of generating volume of replication destination does not exist" may additionally be outputted.
- In the volume management module 105 , the volume pair generation part 107 executes the processing of Steps 300 and 307 , the volume generation part 106 executes the processing of Step 306 , and the volume pair selection part 108 executes the processing of Steps 301 to 305 and 308 .
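As a condensed, non-authoritative sketch, the selection flow of Steps 300 to 308 might look as follows. The data layout, helper names and policy strings are assumptions standing in for the table lookups described above; they are not the patented implementation.

```python
# Condensed sketch of the flow of FIG. 3 (Steps 300-308). Volumes and
# storages are plain dicts; all field names and policy strings are
# illustrative assumptions.

def select_destination(source, volumes, storages):
    """Return an existing destination volume, a storage that can create one,
    or None when neither exists (Step 308)."""
    # Step 301: any storage whose policy allows hosting a destination?
    eligible = [s for s in storages if s["policy"] != "for main storage"]
    if not eligible:
        return None                              # Step 308
    eligible_ids = {s["storage_id"] for s in eligible}
    # Steps 302-304: search existing volumes for a pairable, coincident one.
    for v in volumes:
        if (v["storage_id"] in eligible_ids
                and v["policy"] != "for replication source"
                and v["pairable"]
                and v["capacity"] >= source["capacity"]
                and v["performance"] >= source["performance"]
                and v["reliability"] >= source["reliability"]):
            return ("pair_existing", v)          # Step 307
    # Step 305: can a coincident volume be carved out of a space area?
    for s in eligible:
        if (s["space_capacity"] >= source["capacity"]
                and s["max_performance"] >= source["performance"]
                and s["max_reliability"] >= source["reliability"]):
            return ("create_then_pair", s)       # Steps 306-307
    return None                                  # Step 308
```

The parallel variant mentioned below would simply run the existing-volume search and the space-area check concurrently and present both candidate sets to the user.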
- This embodiment executes the processing described above and can generate the replication by setting the volume of the replication destination in accordance with the policy of the volume of the replication source and its properties.
- The judgment steps of Steps 302 to 304 in the series of process steps described above are the sequence for finding the volume of the replication destination from among the existing volumes that have already been generated.
- the process steps of Steps 305 and 306 are the sequence for generating afresh the volume of the replication destination.
- When the volume cannot be selected in the sequence of Steps 302 to 304 after Step 301, it is also possible to change the processing in such a fashion that the processing proceeds to Step 308 and is then finished.
- The processing of Steps 302 to 304 and the processing of Steps 305 and 306 may be executed in parallel after Step 301, and the volumes that can be set as the replication destination may be displayed on the screen of the managing computer for submission to the user and to the application.
- the connection distance between the storage to which the volume of the replication source belongs and the storage to which the volume of the replication destination belongs may be added as a condition in accordance with the policy of the volume of the replication source and its properties. For example, when the policy of the volume of the replication source is “volume for which most important data must be secured at the time of accident”, the volume having the greatest inter-storage distance is preferentially selected. Further, the site of the storage and its position of either one, or both, of the replication source and the replication destination may be used as a condition for generating the replication pair of the volumes of both replication source and destination.
- the policy of the volume of the replication source is “volume whose leak is inhibited by company rule or law”
- the generation of the replication pair is permitted only in a specific country, a specific city, a specific company and a specific department.
- the policy of such a volume may be stored in and managed by the volume information table 210 .
- FIG. 4 is a block diagram showing a construction of storage operation management system according to a second embodiment of the invention.
- the second embodiment of the invention executes replication in accordance with a condition of a route of a volume and its policy.
- The second embodiment shown in FIG. 4 employs a construction in which a route management module 401 and a later-appearing table group managed by the route management module are added to the managing computer 100 of the first embodiment, a cache 415 , 425 , 435 is further added to each of the plurality of storages 11 a , 11 b and 11 c , switches A 440 and B 450 are disposed to mutually connect the storages 11 a , 11 b and 11 c , and data networks 460 to 464 are further added.
- the switches A 440 and B 450 and the data networks 461 and 463 for connecting these switches may use a public communication network.
- the route management module 401 accomplishes the processing in the second embodiment of the invention.
- the processing is accomplished as software stored inside the storage device 104 of the managing computer 100 is read into the memory 101 and is executed by the CPU 102 .
- the data route between the storages is accomplished through the switch A 440 , the switch B 450 and the data networks 460 to 464 .
- the data networks 460 to 464 are the cables in the same way as the network 130 shown in FIG. 1 .
- the storage 11 a can be connected to the storage 11 b through the data network 460 , the switch A 440 , the data network 461 , the switch B 450 and the data network 462 .
- The storage 11 a can also be connected to the storage 11 b through a different route extending from the data network 460 through the switch A 440 , the data network 463 and the switch B 450 to the data network 462 .
- FIG. 5 is an explanatory view useful for explaining the table group that the route management module 401 manages.
- the route management module table group 500 managed by the route management module 401 includes a route information table 510 and a cache information table 520 .
- the route information table 510 is a table for managing information of the route of the data network used for replicating the volume.
- This table 510 includes a route ID 511 as an identifier representing the replication route between the storages, a route 512 holding cable information such as the names of the cables forming the route, a condition 513 representing the condition of the route, an associated storage 514 representing the storages connected to the route, an associated pair 515 for identifying the replication pair of the volumes using the route, and a policy 516 representing the properties of the route.
- the managing computer 100 acquires the condition of the route from the switches A and B, the storages, etc, and manages this route information table 510 .
- information of the security level of the route may be stored in and managed by the route information table 510 , and the policy 516 may store a concrete maximum transfer speed, etc, of the route instead of the conditions “high speed” and “low speed”.
- the route 512 represents the route between the storages to which the replication pair is set or which have the possibility of setting.
- a route having the route ID 1 is represented as the route that connects the storage 11 a to the storage 11 b through the data network 460 , the switch A 440 , the data network 461 , the switch B 450 and the data network 462 .
- the condition 513 registers the condition as to whether or not each route operates normally.
- the condition includes “normal” and “abnormal”, and the condition in which the line does not operate normally is registered as “abnormal”.
- When the load of the route becomes so high that the policy of the route cannot be satisfied, the condition may also be changed to “abnormal”.
- the route information table manages the value of the network load at which the policy cannot be satisfied.
- The value of the associated pair 515 represents the volume replication pair that uses the route.
- the cache information table 520 is a table for managing a cache use ratio prepared for speeding up replication, that is, information of use capacity/maximum capacity, for each storage and stores the storage ID 521 and information of the cache use ratio 522 of each storage.
- the caches in this instance correspond to the caches 415 , 425 and 435 .
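The two tables of FIG. 5 may be rendered in memory as follows. The rows mirror the examples given in the text (route 1 abnormal, route 3 normal, the pair of volumes 113 a and 123 a); the remaining values and the lookup function are assumptions for illustration.

```python
# Minimal in-memory rendering of the route information table 510 and the
# cache information table 520. Rows mirror the text's examples; other
# values are assumed.

route_table = [
    {"route_id": 1,
     "route": ["network 460", "switch A 440", "network 461",
               "switch B 450", "network 462"],
     "condition": "abnormal",
     "storages": {"11a", "11b"},
     "pair": ("113a", "123a"),
     "policy": "high speed"},
    {"route_id": 3,
     "route": ["network 460", "switch A 440", "network 463",
               "switch B 450", "network 462"],
     "condition": "normal",
     "storages": {"11a", "11b"},
     "pair": None,
     "policy": "high speed"},
]

# Cache use ratio (use capacity / maximum capacity) per storage ID.
cache_table = {"11a": 0.40, "11b": 0.75, "11c": 0.10}

def usable_routes(table, storage_pair, required_policy):
    # A route is usable when its condition is normal, it connects both
    # storages of the pair, and its policy satisfies the requirement.
    return [r["route_id"] for r in table
            if r["condition"] == "normal"
            and set(storage_pair) <= r["storages"]
            and r["policy"] == required_policy]

print(usable_routes(route_table, ("11a", "11b"), "high speed"))  # [3]
```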
- FIG. 6 is a flowchart useful for explaining the processing for generating the replication depending on the policy of the line for connecting the volumes when volume replication is made between the storages in the second embodiment of the invention. Next, this processing will be explained.
- the processing shown in this flowchart is the one that is contained in the volume management module 105 and in the route management module 401 .
- the volume management module 105 acquires the generation request of the volume pair to be replicated from the user and from the application together with the ID of the volume of the replication source (Step 600 ).
- the volume management module 105 may start processing Step 600 after the volume of the replication source is generated but before it receives the ID of the volume.
- the volume management module 105 may generate the volume of the replication source and the volume of the replication destination in a series of processing and may set replication.
- the volume management module 105 may acquire the request that the replication is created by use of the volumes inside the storages or the volumes between the storages.
- The volume management module 105 retrieves information of the policy 223 of the volume of the replication source acquired in Step 600 , and causes the route management module 401 to retrieve the policy of the lines between the storage to which the volume of the replication source belongs and the storages that can be connected to it. Next, whether or not a line that can accomplish the policy of the volume of the replication source exists is judged. When a line having the policy capable of generating the volume of the replication destination exists, the flow proceeds to the processing of Step 602 , and when such a line does not exist, to Step 603 (Step 601 ).
- Step 601 When the generation request of the volumes exists between the storages and the volume of the replication source has the policy “Connection is to be made through high-speed line in replication of volumes” in the processing of Step 601 , for example, the storage ID to which the volume of the replication source belongs is entered into the associated storage 514 of the route information table 510 of the line retrieved, and the route having the line that can be used as the “high-speed” policy is retrieved.
- the processing proceeds to Step 602 and when the corresponding line does not exist, to the processing of Step 603 .
- Step 601 When the corresponding line exists in the judgment of Step 601 , the processing shifts to the processing of Step 301 and to the following Steps explained with reference to the flow of FIG. 3 . In this case, the processing from Step 301 is executed for the associated storage acquired in Step 601 (Step 602 ).
- Step 601 When the corresponding line is not found in the judgment of Step 601 , the volume of the replication destination cannot be provided, and the report is made to this effect.
- a message may be outputted on the basis of the condition of the occurrence point of the error. For example, a message “Line capable of generating volume of replication destination does not exist” may be additionally outputted.
- the connectable line can be selected in accordance with the policy of the line and with the policy of the volume designated as the replication source.
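The flow of FIG. 6 (Steps 600 to 603) may be sketched as follows. The route records and policy strings are illustrative assumptions; the actual judgment uses the route information table 510.

```python
# Sketch of the FIG. 6 flow: before running the FIG. 3 pair generation,
# check whether a line exists whose policy can accomplish the policy of
# the replication-source volume. Names and policy values are assumed.

def find_line(volume_line_policy, source_storage, routes):
    # Step 601: among routes whose associated storages include the source
    # storage, look for one whose line policy matches the volume's demand.
    for r in routes:
        if source_storage in r["storages"] and r["policy"] == volume_line_policy:
            return ("proceed to Step 602", r["route_id"])
    # Step 603: no usable line; the replication destination is not provided.
    return ("proceed to Step 603", None)

routes = [{"route_id": 1, "storages": {"11a", "11b"}, "policy": "high speed"}]
print(find_line("high speed", "11a", routes))  # ('proceed to Step 602', 1)
print(find_line("low speed", "11a", routes))   # ('proceed to Step 603', None)
```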
- FIG. 7 is a flowchart useful for explaining the processing operation of fault shooting when any fault occurs in the replication route during the volume replication between the storages. Next, this operation will be explained.
- the route management module 401 executes this operation. However, the table of the table group 200 managed by the volume management module 105 is sometimes called out, and the volume management module 105 operates in this case.
- the fault occurs in a plurality of routes during the volume replication operation between the storages and this route fault is detected.
- setting of replication exists with the volume 113 a of the storage 11 a in FIG. 4 being the replication source and the volume 123 a of the storage 11 b being the replication destination.
- When the replication type 234 of the pair information table 230 is “synchronous” and a Write request is raised for the volume 113 a , or when the replication type 234 is “asynchronous” and the user or the application designates the start of replication, the controller of the storage starts replication.
- Step 700 When the controller of the storage or the switch A 440 or B 450 detects the fault of the route after the start of this replication, detection of the fault is notified to the managing computer 100 through the storage network communication equipment and the network, and the managing computer 100 receives the communication content (Step 700 ).
- Detection of the route fault includes the case where the fault is notified from the storage or the switch and the case where the managing computer 100 periodically inquires of the storage or the switch about faults.
- When the condition of the route whose route ID 511 has the value 1 becomes “abnormal” as shown in the route information table 510 , for example, the managing computer 100 receives the route fault in Step 700 .
- Step 701 When the route fault is detected, whether or not another route should be searched for the volume 113 a of the replication source whose replication has failed is judged depending on the policy 213 of the volume ID 211 corresponding to the volume 113 a (Step 701 ).
- Step 702 When the search of another route is judged as necessary, the flow proceeds to Step 702 and when not, to Step 706 .
- When the volume is one having high importance in the policy 213 of the volume, for example, replication must be made quickly, and the flow may proceed at this time to Step 702 .
- the flow may proceed to Step 706 .
- Step 701 When the judgment of Step 701 represents that another route must be searched, whether or not another route capable of reaching the same storage exists among the routes managed by the managing computer 100 is checked (Step 702 ).
- The managing computer 100 may give an instruction to search for routes capable of being actually connected through the switch or the storage.
- The route ID 511 , the route 512 , the condition 513 , the associated storage 514 and the policy 516 are registered.
- the flow proceeds to Step 703 and when not, to Step 706 .
- The policy 516 may be acquired from each switch, or the user or the application may set the policy 516 . It will be assumed, for example, that the fault occurs in the data network 461 on the route connecting the storage 11 a and the storage 11 b through the data network 460 , the switch A 440 , the data network 461 , the switch B 450 and the data network 462 in the construction shown in FIG. 4 . In this case, it is possible to set another route for connecting the storages 11 a and 11 b through the data network 460 , the switch A 440 , the data network 463 , the switch B 450 and the data network 462 .
- Following Step 702 , the policy of the volume of the replication destination and the policy of the route are compared on the basis of the policy of the network, and whether or not the policies are coincident is judged. When they are coincident, the flow proceeds to Step 704 and when not, to Step 706 (Step 703 ).
- Step 706 When the replication request of the volume of the replication destination is “high speed” and when the policy of the route is “low speed” in this case, for example, the requirement cannot be satisfied and the flow proceeds to Step 706 .
- replication may be carried out by use of the low speed line.
- This request may be registered to the policy 213 of the volume information table, or the user or the application may give this request instruction as a part of the processing of Step 703 .
- Step 703 When the policies are found coincident in the judgment of Step 703 , setting of the normal route is requested for the switches and the storages, and the volumes mutually confirm that the replication processing is possible.
- a judgment may be made to automatically select one route in accordance with the policy, or the user or the application may be allowed to judge by providing a plurality of results (Step 704 ).
- the managing computer gives the start instruction of replication to the storages by use of the route set in Step 704 (Step 705 ).
- Step 706 Whether or not the replication data should be permanently stored in the cache is judged. When it should, the flow proceeds to Step 708 , and when not, to Step 707 (Step 706 ).
- Step 706 When the judgment result of Step 706 represents that the replication data should not be stored permanently in the cache, the managing computer 100 gives the instruction to the controller of the storage to omit the cache data of the volume from the cache and to reduce the use ratio of the cache (Step 707 ).
- Step 708 After the processing of Step 707 , or when the judgment result of Step 706 represents that the replication data should be permanently stored in the cache, the route fault is reported and the processing is finished (Step 708 ).
- In this manner, the invention checks whether or not the policy can be kept when the replication processing load increases.
- The invention can also set the replication processing in which the fault has occurred so as to use another route.
- the second embodiment of the invention can judge whether or not replication should be made through another route when any fault occurs in a plurality of routes, or whether or not the content of the cache should be omitted depending on the policy of the volume, and can thus make efficient replication of the volumes.
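The fault handling of FIG. 7 (Steps 700 to 708) may be condensed as follows. The policy vocabulary and the cache decision flag are illustrative assumptions, not terms defined by the patent.

```python
# Condensed sketch of the FIG. 7 fault handling. Policy strings and the
# keep_in_cache flag are assumed for illustration.

def handle_route_fault(volume_policy, alternate_routes, keep_in_cache):
    # Step 701: only a volume of high importance justifies route search.
    if volume_policy == "high importance":
        # Steps 702-704: an alternate route must exist, be in the normal
        # condition, and have a policy coincident with the requirement.
        for r in alternate_routes:
            if r["condition"] == "normal" and r["policy"] == "high speed":
                # Step 705: restart replication over the new route.
                return "restart replication on route %d" % r["route_id"]
    # Steps 706-708: no usable route; either hold the replication data in
    # the cache or discard it, then report the route fault.
    if keep_in_cache:
        return "hold data in cache and report fault"
    return "discard cached data and report fault"

routes = [{"route_id": 3, "condition": "normal", "policy": "high speed"}]
print(handle_route_fault("high importance", routes, keep_in_cache=True))
# restart replication on route 3
```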
- FIG. 8 is a flowchart useful for explaining the processing operation for generating the replication pair in another storage when another route cannot be secured or when replication cannot be conducted even when the replication data is permanently stored in the cache. Next, this processing will be explained.
- the processing shown in this flowchart is the one that is contained in the volume management module 105 and the route management module 401 .
- Step 800 For the pair in which the fault has occurred, whether fault recovery is impossible or a drastic delay occurs is judged. This judgment is made by setting a threshold value for the time from the occurrence of the fault to recovery of the route (Step 800 ).
- Step 801 whether or not the replication may be generated by use of other storages is judged from the policy 214 of the volumes of the volume information table 210 .
- When not, the processing is finished as such without doing anything at all (Step 801 ).
- Step 801 When the judgment result of Step 801 represents that setting of replication may well be set to other storages, the storages that can be registered as the volume replication route are searched, and whether or not such storages exist is judged (Step 802 ).
- The flow proceeds to Step 803 when the storages having the route exist.
- the processing is finished when the route does not exist.
- When the routes are under the condition represented by the route information table 510 and the condition of the route 1 is “abnormal”, for example, the route 3 represents a route whose condition is normal and which satisfies the policy.
- Step 802 When the judgment result in Step 802 represents that the storage capable of setting the route exists, the pair is generated.
- This pair generation processing is the one that executes the flow explained with reference to FIG. 3 .
- the pair generation processing is executed in accordance with the instruction that the processing should be executed within the range of the storages obtained by the processing of Step 802 (Step 803 ).
- The route management module 401 executes the processing of Steps 800 , 801 and 802 , and the volume management module 105 executes the processing of Step 803 .
- As described above, this embodiment stops the replication processing once when the fault occurs, but can change the setting of the pairs registered at present depending on the policy of the volumes, and can continue replication itself irrespective of the restoration speed of the route fault.
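The recovery path of FIG. 8 (Steps 800 to 803) may be sketched as follows. The threshold, policy flag and storage names are illustrative assumptions.

```python
# Sketch of the FIG. 8 recovery path: when the faulty route cannot be
# restored within a threshold time, re-create the replication pair on
# another storage if the volume's policy permits. All names are assumed.

def recover_pair(outage_seconds, threshold_seconds, policy_allows_other,
                 storages_with_route):
    # Step 800: judge whether recovery is impossible or drastically delayed.
    if outage_seconds < threshold_seconds:
        return "wait for route recovery"
    # Step 801: the volume policy must permit replication to other storages.
    if not policy_allows_other:
        return "finish without doing anything"
    # Step 802: does a storage registrable as a volume replication route exist?
    if not storages_with_route:
        return "finish: no storage with a usable route"
    # Step 803: run the FIG. 3 pair generation within that storage range.
    return "generate pair on storage %s" % storages_with_route[0]

print(recover_pair(600, 300, True, ["11c"]))  # generate pair on storage 11c
```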
- the processing of each embodiment of the invention described above can be constituted as a processing program.
- the processing program can be stored and provided in the form of storage media such as HD, DAT, FD, MO, DVD-ROM, CD-ROM, and so forth.
- FIG. 9 is a block diagram showing a construction of storage operation system according to a third embodiment of the invention.
- the third embodiment employs the construction in which the functions of the managing computer in the constructions of the first and second embodiments are disposed inside the storage.
- the third embodiment of the invention shown in FIG. 9 represents an example where a processing part for executing management is disposed inside the storage 11 a shown in FIG. 4 , for example.
- the volume management module 105 and the route management module 401 existing in the storage device 104 of the managing computer 100 shown in FIG. 4 are disposed inside a storage device 900 of the storage 11 a .
- the volume management module 105 and the route management module 401 are accomplished as software stored in the storage device 900 of the storage 11 a is read and executed by the controller 112 .
- a data synchronization module 901 is disposed so that mutual information is coincident while keeping data consistency with other storages. Data consistency is kept by use of the network communication equipment 111 , etc.
- the third embodiment of the invention having the construction described above can eliminate the managing computer 100 utilized in the first and second embodiments.
- The third embodiment uses the data synchronization module 901 from a plurality of managing computers 100 when the managing computers 100 are provided in a large-scale storage operation management system. Therefore, even when one managing computer cannot hold all the information, the third embodiment makes it possible to share the data among the plurality of managing computers.
- As described above, the volume management module 105 can generate a volume replication pair between or inside the storages in accordance with the policy of the volumes or their properties, and can generate the volume replication pair on the basis of the policy of the volume.
- the route management module 401 and the volume management module 105 can classify fault shooting in accordance with the policy of the volume and can make fault shooting of volume replication depending on the policy of the volume.
- the invention can efficiently operate and manage replication of the data areas inside the same storage or between the storages in accordance with information of the data area of the storage.
Abstract
A managing computer (manager) manages replication of data areas inside a storage or among storages. A storage volume management module of the manager manages policy of the volume and its properties. When replication of a volume is set, a volume of a replication destination appropriate for a volume of a replication source is generated using the policy and the properties to form a replication pair. A route management module of the manager and its volume management module bring the policy and the properties of the volumes, and the policies and conditions of lines into conformity with one another. When any fault occurs in a line route used for data transfer during replication of the volume, a separate line route is utilized, and a fault countermeasure is taken for replication of the volumes in accordance with the policy and properties of the volumes.
Description
- This application is a continuation application of U.S. application Ser. No. 11/648,655 filed Jan. 3, 2007, now allowed, which is a continuation of U.S. application Ser. No. 10/650,851, filed Aug. 29, 2003, now U.S. Pat. No. 7,191,198.
- 1. Field of the Invention
- This invention relates to a storage operation management program, a storage operation management method and a storage managing computer. More particularly, the invention relates to a storage operation management program, a storage operation management method and a storage managing computer each capable of operating and managing replication of data area inside the same storage or among storages in accordance with information of the data areas of the storages.
- 2. Description of the Related Art
- Technologies such as a snapshot function for doubling data inside a storage (e.g. JP-A-2001-306407) and a remote copy function for doubling data among storages in consideration of backup to cope with disasters are known as prior art technologies for improving versatility of computer systems and for achieving storages capable of backing up data for a non-stop operation of the computer systems.
- The snapshot function is the one that makes it possible to conduct data backup on the on-line basis while continuing the business operations in a computer system that is to continuously operate for 24 hours and for 365 days without interruption.
- According to the remote copy function, a storage of a computer system operating normally transfers updating data to a storage of a computer system installed and operating at a different remote place so that the data stored in the storage of the remote place can be brought substantially equivalent to the data stored in the storage of the computer system normally operating, and the loss of data can be minimized even when any accident occurs in the computer system normally operating.
- To efficiently process a plurality of remote copy requests in the remote copy function, a technology is known that decides a remote copy schedule and a route of a line in consideration of access frequency of data areas and a condition of replication line.
- All the prior art technologies described above are directed to reduce a processing load of a replication processing from the data area of a replication source to the data area of a replication destination and to improve transfer efficiency. In other words, these technologies are based on the premise that the data area of the replication source and the data area of the replication destination coexist.
- On the other hand, the number of computer systems that use a plurality of storages connected with one another through SAN (Storage Area Network) or IP (Internet Protocol) by use of a dedicated storage network (mainly, Fibre Channel) and share data of a large capacity distributively stored in these storages has increased with the increase of the data capacity. Among the storages connected by SAN, etc, some of them are sometimes products of different manufacturers or have different performance such as an access speed. It has thus become more difficult to select, generate and designate the data area having reliability and the access speed corresponding to those of policy of users and applications such as the policy for a financial system, a policy for various management businesses for companies, a policy for databases, and so forth. A prior art technology that makes it possible to generate a data area coincident with the policy of the user is also known.
- However, the prior art technologies described above do not consider generation of a data area of a replication source and a data area of a replication destination for doubling data by snapshot or remote copy.
- To efficiently set data doubling in accordance with user policy from among various storages connected to SAN or IP, it becomes necessary to generate a data area of a replication destination similar to a data area of a replication source that satisfies reliability and an access speed of the user policy such as the financial system, various management businesses, databases, etc, described above and to generate a replication pair of the data areas. When any fault occurs in the replication operation for doubling data in the business such as in the financial system for which data reliability is severely required, the prior art technologies described above execute replication again after the fault is restored. It may be possible to change a route of replication from the data area of the replication source to the data area of the replication destination in data doubling. Depending on the policy of the data area and on the operation policy of data doubling as a whole, however, it is necessary to decide the operation by judging whether the route of replication is to be changed or whether to wait for the restoration of the fault. Consequently, it is difficult to efficiently operate the overall system.
- In view of the problems described above, it is an object of the invention to provide a storage operation management program, a storage operation management method and a storage managing computer each capable of operating and managing replication of data areas inside the same storage or among storages in accordance with information of the data areas of the storages.
- According to a feature of the invention, a storage operation management program for operating and managing replication of data areas inside a storage or among a plurality of storages comprises a processing step of accepting a request for generation of a data area of a replication destination for a data area of a replication source; a process step of retrieving a data area capable of becoming a replication destination coincident with properties of a data area corresponding to policy of the data area of the replication source from existing data areas; and a process step of instructing the storage to generate a replication pair of the data areas.
- According to another feature of the invention, a storage operation management method for operating and managing replication of data areas inside a storage or among a plurality of storages comprises a processing step of accepting a request of generation of a data area of a replication destination for a data area of a replication source; a process step of retrieving a data area capable of becoming the replication destination coincident with properties of a data area corresponding to policy of the data area of the replication source from existing data areas; and a process step of generating a replication pair on the basis of the retrieval result.
- More concretely, in an aspect of the invention, when the policy of a data area and its properties are managed and a data area for replication is selected, the policy and properties of the data area of the replication source are acquired, a data area of the replication destination coincident with the policy and properties of the data area of the replication source is generated, and a replication pair of the data areas is generated. Consequently, an operation of data doubling in accordance with the policy and properties of the data area becomes explicitly possible, and automation of setting of data doubling in accordance with the policy of the data area and data doubling can be effectively constituted from among a plurality of storages connected through SAN or IP.
- In another aspect of the invention, the policy of the data area of the replication source is brought into conformity in the aspect of the operation, too. When any fault or delay of a replication process occurs in a route connecting a data area of a replication source and a data area of a replication destination in the case of remote copy, for example, judgment is made in accordance with the policy of the data area of the replication source as to whether another connectable route is set, or data to be doubled is stored in a cache inside a storage to wait for restoration of the fault of a line, or replication data having low priority is omitted. Consequently, the operation of data doubling can be effectively made.
- According to the invention, the resolution method described above manages the policy of the data areas of the storages and their properties, can conduct automatic setting of data doubling in accordance with the policy of the data area by utilizing the policy, and can operate data doubling flexibly and explicitly for users.
- Other objects, features and advantages of the invention will become apparent from the following description of the embodiments of the invention taken in conjunction with the accompanying drawings.
- FIG. 1 is a block diagram showing a construction of a storage operation management system according to a first embodiment of the invention;
- FIG. 2 is an explanatory view useful for explaining a table group managed by a volume management module 105;
- FIG. 3 is a flowchart useful for explaining a processing operation for setting replication of volumes in accordance with policy of a volume of replication source and generating a pair of the volume of the replication source and a volume of the replication destination;
- FIG. 4 is a block diagram showing a construction of a storage operation management system according to a second embodiment of the invention;
- FIG. 5 is an explanatory view useful for explaining a table group managed by a route management module 401;
- FIG. 6 is a flowchart useful for explaining a processing operation for generating a replication in accordance with policy of a line for connecting volumes when a volume replication is generated between storages in the second embodiment of the invention;
- FIG. 7 is a flowchart useful for explaining a processing operation of fault shooting when any fault occurs in a route of replication during the volume replication operation between the storages;
- FIG. 8 is a flowchart useful for explaining a processing operation for generating a replication pair in another storage; and
- FIG. 9 is a block diagram showing a construction of a storage operation management system according to a third embodiment of the invention.
- A construction of a storage operation management system and an operation management method according to preferred embodiments of the invention will be hereinafter explained in detail with reference to the accompanying drawings.
FIG. 1 is a block diagram showing a construction of a storage operation management system according to a first embodiment of the invention. - The storage operation management system according to the first embodiment of the invention includes a managing computer (manager) 100 and a plurality of
storages 11 a to 11 n that are connected to one another through a data network 130 and through a network 140. The storage 11 a includes volumes (data areas, hereinafter called merely “volumes”) 113 a, 113 b, . . . , 113 n as management areas for storing data that are managed in practice by a computer, a space area 114 that will be divided and set as volumes in the future, data communication equipment 110 for transmitting and receiving the data I/O of read/write data of the volumes, network communication equipment 111 for communicating with the managing computer 100, etc., and a controller 112 for practically controlling the storage. The data communication equipment 110 and the network communication equipment 111 may be constituted and arranged as one communication equipment by connecting them together through an IP (Internet Protocol) connection in a connection form such as Ethernet (trademark). - The
storage 11 b, too, includes volumes 123 a, . . . , 123 n, a space area 124, data communication equipment 120, network communication equipment 121 and a controller 122 in the same way as the storage 11 a. The other storages 11 c, . . . , 11 n also have the same construction as the storage 11 a. - The
data network 130 is a cable serving as a route of data communication between the storages and the computers. The network 140 is a route that can communicate with the managing computer 100, etc. These networks may be either mere buses or a LAN. The data network 130 and the network 140 may be the same as each other depending on the communication form and may be Ethernet (trademark), for example. - The managing
computer 100 includes a memory 101, a CPU 102, network communication equipment 103 and a storage device 104. The storage device 104 includes therein a volume management module 105 and a later-appearing volume management module table group 200 managed by the volume management module 105. - The
volume management module 105 achieves the processing in this embodiment of the invention. The volume management module 105 includes a volume generation part 106 for instructing the storages 11 a to 11 n to generate volumes, a volume pair generation part 107 for generating a replication pair from among the volumes generated by the instruction of the volume generation part 106, and a volume pair selection part 108 for selecting the volume pair that can be replicated from the policy of the volumes (for replication source, for replication destination, for both, etc.) and their properties (performance, reliability, etc.) and from the policy of the storages (for financial systems, for various management businesses of companies, for databases, etc.) and their properties (performance, reliability, etc.). The volume management module 105 is accomplished when software, not shown, stored in the storage device of the managing computer 100 is written into the memory 101 and is executed by the CPU 102. - When the storage operation management system shown in
FIG. 1 replicates the volumes between the storages, such as when a volume 113 a of the storage 11 a is a volume of the replication source and the volume 123 a of the storage 11 b is a volume of a replication destination, the controller 112 of the storage 11 a and the controller 122 of the storage 11 b cooperate with each other, transmit the content of the volume 113 a to the volume 123 a through the data network 130 and replicate the content. In this case, the respective computers using the volume 113 a and the volume 123 a and the controllers -
FIG. 2 explains the table group that the volume management module 105 manages. The volume management module table group 200 managed by the volume management module 105 includes a volume information table 210, a storage information table 220 and a pair information table 230. - The volume information table 210 stores a
volume ID 211 allocated to identify all the volumes of the storages managed by the managing computer 100, a storage ID 212 representing an identifier of the storage to which the volume belongs, a storage volume ID 213 representing the identifier by which the volume is managed inside each storage, a policy 214 representing the use policy of the volume, such as for the replication source, for the replication destination, for both replication source and destination, or for the replication source for remote copy, each being designated by the user or the application, a volume capacity 215, Read/Write 216 representing the read/write frequency from and to the volume set by the policy of the user and the application (for financial systems, for various management businesses of companies, for databases, etc.), performance 217 representing a read/write speed of the volume, reliability 218 representing the reliability of the volume in terms of a numerical value, and pairing possible/impossible information 219 representing whether or not the volume can be paired, as the replication destination, with a volume of the replication source. - The
storage ID 212 is an identifier of each storage 11 a, 11 b, . . . , 11 n, and the storage volume ID 213 is an identifier of each volume 113 a, 113 b, . . . , 113 n. Values from 1 to 10 are entered as the Read/Write information 216. In the case of the value 1, the policy is limited to Read only. As the value increases from 2, the frequency of Write becomes higher. In the case of 10, the policy is limited to Write only. Values acquired by normalizing performance, such as a read/write speed, between 1 and 10, that is, values from 1 to 10, are entered into the performance information 217 representing the performance of the volume. In the case of the value 1, access performance to the volume is the lowest, and in the case of 10, access performance reaches the maximum. Into the reliability information 218 representing the reliability of the volume in terms of a numerical value, values acquired by normalizing reliability between 1 and 10, that is, values from 1 to 10, are entered. The value 1 represents the highest degree of fault occurrence of the volume and the value 10 represents the lowest degree of fault occurrence. When setting is made only by the RAID level, for example, the managing computer 100 decides a rule such that a RAID0 volume is 1, RAID1 is 10, RAID5 is 5, and so forth, and executes the management operation accordingly. In other words, the Read/Write information 216, the performance information 217 and the reliability information 218 represent the properties of the corresponding volume. - The storage information table 220 stores information representing each of the
storage ID 221 as the identifier of the storage managed by the managing computer 100, a space capacity 222 representing the capacity of a space area that the storage has not yet set and used as volumes, a policy 223 to be given to the storage, a maximum reliability 224 that can be achieved when the storage generates a volume by use of the space area, and a maximum performance 225 that can be achieved when the storage generates a volume by use of the space area. - In the case of the
storage 11 a, for example, the value of the space capacity 222 represents the capacity of the space area 114. When the storage ID 221 has the same value as the value of the storage ID 212 of the volume information table 210, it represents the same storage. - The pair information table 230 represents information of replication of the volumes the managing
computer 100 manages. This table includes information of each of a pair ID 231 for identifying pair information, a main volume ID 232 representing the volume ID of the volume of the replication source, a sub-volume ID 233 representing the volume ID of the volume of the replication destination, and a replication type 234 representing the replication type. - The value of the
main volume ID 232 and the value of the sub-volume ID 233 represent the same volume when they are the same as the values of the volume ID in the volume information table 210. A value “synchronous” or a value “asynchronous” is entered into the replication type 234. In the case of “synchronous”, replication to the volume of the replication destination is made whenever a write occurs in the volume of the replication source. In the case of “asynchronous”, replication is made in a unit of a certain schedule (every predetermined time, or per data quantity per replication). The user or the application may set this replication type. Alternatively, the replication type may be set as a part of the processing of the volume management module that will be explained next. -
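- Rendered as data structures, the tables described above might look as follows. This is an illustrative sketch only: the field names, and the default value in the RAID rule, are assumptions made for the example, not part of the disclosure.

```python
from dataclasses import dataclass

# Hypothetical in-memory rendering of the volume information table 210 and the
# pair information table 230; field names mirror the columns described above.
@dataclass
class VolumeInfo:
    volume_id: int
    storage_id: int
    storage_volume_id: int
    policy: str          # e.g. "for replication source", "for both"
    capacity: int        # volume capacity 215
    read_write: int      # 1 = Read only ... 10 = Write only (216)
    performance: int     # normalized 1..10 (217)
    reliability: int     # normalized 1..10 (218)
    pairable: bool       # pairing possible/impossible information 219

@dataclass
class PairInfo:
    pair_id: int
    main_volume_id: int      # volume of the replication source
    sub_volume_id: int       # volume of the replication destination
    replication_type: str    # "synchronous" or "asynchronous"

# Assumed rendering of the RAID-level rule for reliability 218:
# RAID0 -> 1, RAID1 -> 10, RAID5 -> 5 (default for other levels is invented).
RAID_RELIABILITY = {"RAID0": 1, "RAID1": 10, "RAID5": 5}

def reliability_from_raid(level: str) -> int:
    """Normalize a RAID level to the 1..10 reliability scale."""
    return RAID_RELIABILITY.get(level, 1)
```

A synchronous pair would then be registered as, for example, `PairInfo(1, 3, 6, "synchronous")`.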
FIG. 3 is a flowchart useful for explaining a processing operation for setting replication of the volume in accordance with the policy of the volume of the replication source and then generating the pair of the volume of the replication source and the volume of the replication destination. Next, this flowchart will be explained. The volume management module 105 executes the processing that will be explained with reference to this flowchart. - First, the
volume management module 105 acquires, from the user or the application, the pair generation request of the volume to be replicated together with the ID of the volume of the replication source (Step 300). - At this time, the
volume management module 105 may start executing the processing of Step 300 after the volume of the replication source is generated but before it receives the ID of the volume. In other words, when receiving the replication generation request, the volume management module 105 may create the volume of the replication source and the volume of the replication destination during a series of processing and may execute a plurality of settings. Alternatively, the volume management module 105 may acquire a request that the replication is created by use of the volumes inside the storage or the volumes between the storages. - Next, information of the
policy 223 of the storage is retrieved on the basis of the storage ID, and whether or not a storage having a volume capable of forming a pair with the volume of the replication source exists is judged. When the storages have policies that do not permit creating the volume of the replication destination, the flow shifts to Step 308 and, if not, to Step 302 (Step 301). - When the request for setting replication of the volumes exists between the storages in
Step 301 and when all the storages other than the storage to which the volume of the replication source belongs, that is, all the storages other than the one to which the volume of the replication source is registered in the storage information table 220, have registered in their policy 223 the limited policy of “for main storage”, which generates only volumes of the replication source, as typified by the storage of the storage ID3 in the storage information table in FIG. 2 , the flow proceeds to Step 308 to notify that a storage capable of forming the volume of the replication destination does not exist. The flow may proceed to the next Step 302 when the storage information table 220 does not manage the information of the policy 223. - When a storage capable of forming the pair is found as a result of the judgment in
Step 301, whether or not that storage has a volume whose policy permits forming the pair is judged. In other words, the volumes are retrieved, excluding those belonging to a storage whose policy does not permit forming the volume of the replication destination, and when all the volumes so retrieved have policies that prevent them from becoming the volume of the replication destination, the flow proceeds to Step 305 and, if not, the flow proceeds to Step 303 (Step 302). - In
Step 302, when the request for setting replication of the volume exists between the storages, for example, and when all the policies of the volumes outside the storage to which the volume of the replication source belongs are “for replication source”, as typified by the volume ID 1 in the volume information table 210, the volume of the replication destination cannot be designated and the flow therefore proceeds to Step 305. If not, the flow proceeds to Step 303. When replication of the volume is set inside the storage, the volume of the replication destination cannot be designated if all the policies of the volumes of the storage to which the volume of the replication source belongs are “for main volume”. Therefore, the flow proceeds to Step 305, and to Step 303 when the policy is not “for main volume”. - When the storage is found having the policy of forming the pair in the judgment of
Step 302, whether or not a volume capable of forming the pair exists is judged. In other words, the volumes that cannot be used at present as the replication destination are retrieved from among the remaining candidate volumes on the basis of the pairing possible/impossible information 219. When the object volumes all have the information “impossible” in the pairing possible/impossible information 219, as represented by the volume ID2 of the volume information table, the existing volumes managed by the managing computer cannot serve as the replication destination. Therefore, the flow proceeds to Step 305, and to Step 304 when they can (Step 303). - When the volume capable of forming the pair is found existing in the judgment of
Step 303, whether or not the properties accessorial to the volume, that is, the values of capacity 215, Read/Write 216, performance 217 and reliability 218 of the volume information table 210, are coincident is checked. The flow proceeds to Step 307 when the coincident volume exists and to Step 305 when not. The term “coincidence” hereby used means in principle that the values are the same. Even when a volume having properties coincident with these properties does not exist, a volume having property values approximate to these values may be used depending on the operation principle. As to the values of capacity 215, performance 217 and reliability 218 described above, the values of the replication destination may be the same or greater. For example, when the volume having a value 3 as the value of the volume ID 211 of the volume information table 210 is designated as the volume of the replication source during the processing of Steps 300 to 304, it becomes possible to set the volume having a value 6 of the volume ID 211 of the volume information table 210 to the volume of the replication destination. In other words, it is possible to set a volume having properties that are the same as or greater than the properties accessorial to the replication source to the volume of the replication destination (Step 304). - When a volume coincident with the properties accessorial to the volume described above is not found in the judgment of
Step 304, it means that no corresponding volume usable as the volume of the replication destination exists among the existing volumes managed in the volume information table. Therefore, whether or not this corresponding volume can be generated from the space area of a storage is judged. In other words, excluding the storages not having a space area, whether or not a volume coincident with the properties of the volume of the replication source can be generated is checked from the space capacity 222, the maximum reliability 224 and the maximum performance 225 inside the storage information table 220. When it can, the flow proceeds to Step 306 and, when it cannot, the flow proceeds to Step 308. When the volume having the volume ID 211 value of 7 in the volume information table 210 is designated as the volume of the replication source, for example, no other volume can be set as the volume of the replication destination, because the properties are judged as non-coincident and designation of the volume is judged as impossible in Step 304. Therefore, this processing judges whether or not a volume of the replication destination can be generated afresh from the space area. At this time, it is possible to know from the information of the storage information table 220 that the volume of the replication destination can be achieved from the storage of the storage ID4 (Step 305). - When the judgment result of
Step 305 represents that the corresponding volume can be generated from the space area, an instruction is given to the controller of the storage to generate a volume of the replication destination coincident with the volume of the replication source, and the volume information table 210 and the storage information table 220 are updated. The controller of the storage generates the volume in accordance with the instruction described above. When the volume of the volume ID7 is the replication source, as in the example taken in Step 305, a volume having properties coincident with those of the volume of the volume ID7 is generated from the storage of the storage ID4, and that volume is registered to the volume information table (Step 306). - When the volume coincident with the volume of the replication source is found existing among the existing volumes after the processing of
Step 306 or in the judgment of Step 304, the replication pair is generated for the storage of the replication source and the storage of the replication destination by use of the corresponding volume as the replication destination, and the volume of the replication source and the volume of the replication destination set afresh are registered to the pair information table 230. The processing is then finished (Step 307). - When the storage capable of forming the pair is not found in the judgment of
Step 301 or when the judgment result of Step 305 represents that the corresponding volume cannot be generated from the space area, a notification is made to the effect that the pair cannot be set with any volume of the replication destination, as indicated by the information 219 of the volume information table representing whether or not a volume can be set as the volume of the replication destination, and the processing is finished (Step 308). - When the processing of
Step 307 described above is the processing for a volume in which the value of Read/Write 216 among the properties of the volume indicates Read only, a subsequent replication operation does not exist once replication is made. Therefore, a report may be given to notify that the setting of replication is released after replication has been finished once. In the case of replication of a volume in which Read occupies the major proportion, replication does not occur so frequently. Therefore, this property may be notified to the user as a factor for making the asynchronous replication schedule. The user may be urged to change the asynchronous schedule depending on the policy of the volume and on the load of replication. - Because the processing of
Step 308 described above fails to provide the volume of the replication destination, the report of the failure is made. In this case, a message may be outputted on the basis of the condition at the point of the occurrence of the failure. When Step 301 fails, for example, a message to the effect that “storage capable of generating volume of replication destination does not exist” may be additionally outputted. - In the processing described above, the volume
pair generation part 107 executes the processing of Steps 300 and 307, the volume generation part 106 executes the processing of Step 306, and the volume pair selection part 108 executes the processing of Steps 301 to 305 and 308. - This embodiment executes the processing described above and can generate the replication by setting the volume of the replication destination in accordance with the policy of the volume of the replication source and its properties.
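- The selection sequence of Steps 301 to 305 can be summarized compactly as follows. This is a hedged illustration rather than the module's actual implementation; the record fields and policy strings are invented stand-ins for the table columns described above.

```python
from collections import namedtuple

# Illustrative stand-ins for rows of the volume information table 210 and the
# storage information table 220 (field names are assumptions).
Volume = namedtuple("Volume", "volume_id policy capacity performance reliability pairable")
Storage = namedtuple("Storage", "storage_id space_capacity max_reliability max_performance")

def matches(src, dst):
    # Step 304: the destination's properties must be the same or greater.
    return (dst.capacity >= src.capacity
            and dst.performance >= src.performance
            and dst.reliability >= src.reliability)

def select_destination(src, volumes, storages):
    """Steps 302-305 (sketch): try existing volumes first, then fall back to
    generating a coincident volume from a storage's space area."""
    for v in volumes:
        if (v.volume_id != src.volume_id and v.pairable
                and v.policy != "for replication source" and matches(src, v)):
            return ("existing", v.volume_id)      # proceed to Step 307
    for s in storages:
        if (s.space_capacity >= src.capacity
                and s.max_reliability >= src.reliability
                and s.max_performance >= src.performance):
            return ("generate", s.storage_id)     # proceed to Step 306
    return ("fail", None)                         # proceed to Step 308
```

This mirrors the worked examples in the text: a source such as the volume ID 3 can be matched to the volume ID 6, while a source such as the volume ID 7, for which no coincident volume exists, falls through to generation from the space area of the storage ID4.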
- When a request for generating a plurality of pairs exists in a series of processing described above, the processing explained with reference to
FIG. 3 is executed either repeatedly or in parallel, and setting of the pairs is executed so that all the designated pairs can be generated. When any error occurs during the generation process of the plurality of pairs and the processing of Step 308 is therefore executed, the pairs that have been set so far are deleted. - The judgment steps from
Steps 302 to 304 in the series of the process steps described above are the sequence for finding the volume of the replication destination from among the existing volumes that have already been generated, and Steps 305 and 306 are the sequence for generating the volume of the replication destination afresh from the space area. The order may be changed so that the processing proceeds first to Step 305 after Step 301, then proceeds to Step 302 when Step 305 has the condition in which the volume cannot be generated, and to Step 308 when the volume can be neither generated nor selected. When the volume cannot be selected in the sequence of Steps 302 to 304 after Step 301, it is also possible to change the processing in such a fashion that the processing proceeds to Step 308 and is then finished. - The processing described above may further be changed in such a fashion as to proceed to Step 305 after
Step 301 and then to Step 308 when the volume cannot be generated, and to terminate the processing. Furthermore, the processing of Steps 302 to 304 and the processing of Steps 305 and 306 may both be executed after Step 301, and the volumes that can be set as the replication destination may be displayed on the screen of the managing computer for submission to the user and to the application. - When the volume of the replication source and the volume of the replication destination are generated as the replication pair in
Step 307 in the processing described above, the connection distance between the storage to which the volume of the replication source belongs and the storage to which the volume of the replication destination belongs may be added as a condition in accordance with the policy of the volume of the replication source and its properties. For example, when the policy of the volume of the replication source is “volume for which most important data must be secured at the time of an accident”, the volume having the greatest inter-storage distance is preferentially selected. Further, the site of the storage and its position, of either one or both of the replication source and the replication destination, may be used as a condition for generating the replication pair of the volumes of both replication source and destination. For example, when the policy of the volume of the replication source is “volume whose leak is inhibited by company rule or law”, generation of the replication pair is permitted only in a specific country, a specific city, a specific company or a specific department. The policy of such a volume may be stored in and managed by the volume information table 210. -
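- The distance and site conditions just described might be sketched as follows. The policy strings, site labels and distance field are hypothetical illustrations, not values defined in this description.

```python
# Sketch of the Step 307 refinement: candidate destinations carry an
# inter-storage distance and a site label (both invented for illustration),
# and the source volume's policy constrains the choice.
def choose_by_location(policy, candidates, permitted_sites=("head office",)):
    if policy == "secure most important data at accident":
        # Prefer the destination whose storage is farthest away.
        return max(candidates, key=lambda c: c["distance_km"])
    if policy == "leak inhibited by company rule or law":
        # Permit pair generation only at specific permitted sites.
        allowed = [c for c in candidates if c["site"] in permitted_sites]
        return allowed[0] if allowed else None
    return candidates[0]
```

The design point is that the location condition is derived from the policy 214 of the replication source, so the same candidate list yields different pairs for different policies.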
FIG. 4 is a block diagram showing a construction of a storage operation management system according to a second embodiment of the invention. When a plurality of connection methods for connecting the storages exists in the volume replication operation between the storages, the second embodiment of the invention hereby explained executes replication in accordance with the condition of a route of a volume and its policy. - In the construction of the first storage operation management system of the invention shown in
FIG. 1 , the second embodiment shown in FIG. 4 employs a construction in which a route management module 401 and a later-appearing route management module table group managed by the route management module are added to the managing computer 100 of the first embodiment, a cache is added to each of the storages, and the switches A440 and B450 and the data networks 460 to 464 are further added. Incidentally, though FIG. 4 shows only three storages, more storages may be connected as in FIG. 1 . The switches A440 and B450 and the data networks 460 to 464 constitute the data routes between the storages. - The
route management module 401 accomplishes the processing in the second embodiment of the invention. The processing is accomplished as software stored inside the storage device 104 of the managing computer 100 is read into the memory 101 and is executed by the CPU 102. The data route between the storages is accomplished through the switch A440, the switch B450 and the data networks 460 to 464. The data networks 460 to 464 are cables in the same way as the network 130 shown in FIG. 1 . The storage 11 a, for example, can be connected to the storage 11 b through the data network 460, the switch A440, the data network 461, the switch B450 and the data network 462. The storage 11 a can also be connected to the storage 11 b through a different route extending from the data network 460 through the switch A440, the data network 463 and the switch B450 to the data network 462. -
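- The two alternative paths between the storage 11 a and the storage 11 b can be written out explicitly. The following sketch simply lists the segments by their reference numerals and confirms that two distinct routes exist between the same pair of storages:

```python
# The two routes from storage 11a to storage 11b described above, written as
# ordered lists of segments (names follow the reference numerals in FIG. 4).
ROUTE_1 = ["data network 460", "switch A440", "data network 461",
           "switch B450", "data network 462"]
ROUTE_2 = ["data network 460", "switch A440", "data network 463",
           "switch B450", "data network 462"]

def distinct_paths(*routes):
    """Count how many mutually distinct paths connect the two storages."""
    return len({tuple(r) for r in routes})

print(distinct_paths(ROUTE_1, ROUTE_2))  # prints 2
```

It is this redundancy, differing only in the middle segment, that the route management module exploits when a fault occurs on one line.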
FIG. 5 is an explanatory view useful for explaining the table group that the route management module 401 manages. The route management module table group 500 managed by the route management module 401 includes a route information table 510 and a cache information table 520. - The route information table 510 is a table for managing information of the routes of the data network used for replicating the volumes. This table 510 includes a
route ID 511 as an identifier representing a replication route between the storages, an actual route 512 given by cable information, such as cable names, of the network forming the route, a condition 513 representing the condition of the route, an associated storage 514 representing the storages connected to the route, an associated pair 515 for identifying the replication pairs of the volumes using the route, and a policy 516 representing the properties of the route. The managing computer 100 acquires the condition of the route from the switches A and B, the storages, etc., and manages this route information table 510. Incidentally, information of the security level of the route may be stored in and managed by the route information table 510, and the policy 516 may store a concrete maximum transfer speed, etc., of the route instead of the designations “high speed” and “low speed”. - In the explanation described above, the
route 512 represents the route between storages for which a replication pair is set or may be set. In the case of the example shown in the drawing, the route having the route ID 1 is represented as the route that connects the storage 11 a to the storage 11 b through the data network 460, the switch A440, the data network 461, the switch B450 and the data network 462. - The
condition 513 registers the condition as to whether or not each route operates normally. In this embodiment, the condition takes the values “normal” and “abnormal”, and a line that does not operate normally is registered as “abnormal”. When the load on the line becomes high and the line cannot satisfy the policy, the condition may also be changed to “abnormal”. In this case, it is preferred that the route information table manage the value of the network load at which the policy can no longer be satisfied. - When the values of the
pair ID 231 of the pair information table 230 are the same, the value of the associated pair 515 represents the same volume replication pair. - The cache information table 520 is a table for managing the cache use ratio prepared for speeding up replication, that is, information of used capacity/maximum capacity, for each storage, and stores the
storage ID 521 and information of the cache use ratio 522 of each storage. The caches in this instance correspond to the caches of the respective storages shown in FIG. 4 . -
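- An illustrative in-memory form of the route information table 510 and the cache information table 520 might look as follows. The field names and the helper for finding usable routes are assumptions made for illustration; the “abnormal” condition of the route ID 1 mirrors the fault example used with the route information table.

```python
# Hypothetical rows of the route information table 510 and the cache
# information table 520 (values are invented for the example).
route_table = [
    {"route_id": 1, "condition": "abnormal", "storages": {"11a", "11b"},
     "pairs": {1}, "policy": "high speed"},
    {"route_id": 2, "condition": "normal", "storages": {"11a", "11b"},
     "pairs": set(), "policy": "low speed"},
]

cache_table = [
    {"storage_id": "11a", "use_ratio": "10GB/50GB"},
    {"storage_id": "11b", "use_ratio": "5GB/50GB"},
]

def usable_routes(table, storage_a, storage_b):
    """Routes in 'normal' condition that connect the two given storages."""
    return [r["route_id"] for r in table
            if r["condition"] == "normal" and {storage_a, storage_b} <= r["storages"]]
```

With the rows above, only the route ID 2 is currently usable between the storage 11 a and the storage 11 b, which is exactly the situation the later fault-handling flow must resolve.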
FIG. 6 is a flowchart useful for explaining the processing for generating the replication depending on the policy of the line for connecting the volumes when volume replication is made between the storages in the second embodiment of the invention. Next, this processing will be explained. The processing shown in this flowchart is the one that is contained in the volume management module 105 and in the route management module 401. - First of all, the
volume management module 105 acquires, from the user or the application, the generation request of the volume pair to be replicated together with the ID of the volume of the replication source (Step 600). - At this time, the
volume management module 105 may start processing Step 600 after the volume of the replication source is generated but before it receives the ID of the volume. In other words, when receiving the replication generation request, the volume management module 105 may generate the volume of the replication source and the volume of the replication destination in a series of processing and may set replication. Alternatively, the volume management module 105 may acquire the request that the replication is created by use of the volumes inside the storages or the volumes between the storages. - Next, the
volume management module 105 retrieves information of the policy 214 of the volume of the replication source acquired in Step 600, and causes the route management module 401 to retrieve the policy of the lines between the storage to which the volume of the replication source belongs and the storages that can be connected. Next, whether or not a line that can accomplish the policy of the volume of the replication source exists is judged. When a line having the policy capable of generating the volume of the replication destination exists, the flow proceeds to the processing of Step 602 and, when such a line does not exist, to Step 603 (Step 601). - When the generation request of the volumes exists between the storages and the volume of the replication source has the policy “Connection is to be made through a high-speed line in replication of volumes” in the processing of
Step 601, for example, the lines whose associated storage 514 in the route information table 510 includes the storage ID to which the volume of the replication source belongs are retrieved, and a route having a line that can be used under the “high speed” policy is searched for among them. When the corresponding line exists, the processing proceeds to Step 602 and, when the corresponding line does not exist, to the processing of Step 603. - When the corresponding line exists in the judgment of
Step 601, the processing shifts to the processing of Step 301 and the following Steps explained with reference to the flow of FIG. 3 . In this case, the processing from Step 301 is executed for the associated storages acquired in Step 601 (Step 602). - When the corresponding line is not found in the judgment of
Step 601, the volume of the replication destination cannot be provided, and a report is made to this effect (Step 603). In this case, a message may be outputted on the basis of the condition at the occurrence point of the error. For example, a message “Line capable of generating volume of replication destination does not exist” may be additionally outputted. - When the processing described above is executed, the connectable line can be selected in accordance with the policy of the line and with the policy of the volume designated as the replication source.
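- The judgment of Step 601 can be sketched as a filter over the route information table. This is an assumed illustration (field names invented) of retrieving the lines whose associated storage 514 contains the storage of the replication source and whose policy 516 satisfies the volume's line policy:

```python
# Sketch of the Step 601 judgment: among the routes reaching the storage of
# the replication source, keep those whose policy matches the wanted one.
def lines_satisfying_policy(route_table, source_storage, wanted_policy):
    return [r["route_id"] for r in route_table
            if source_storage in r["storages"] and r["policy"] == wanted_policy]

routes = [
    {"route_id": 1, "storages": {"11a", "11b"}, "policy": "high speed"},
    {"route_id": 2, "storages": {"11a", "11b"}, "policy": "low speed"},
]
print(lines_satisfying_policy(routes, "11a", "high speed"))  # prints [1]
```

An empty result corresponds to the branch into Step 603, where the failure to provide the volume of the replication destination is reported.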
-
FIG. 7 is a flowchart useful for explaining the processing operation of troubleshooting when a fault occurs in the replication route during the volume replication between the storages. Next, this operation will be explained. The route management module 401 executes this operation. However, the tables of the table group 200 managed by the volume management module 105 are sometimes referred to, and the volume management module 105 operates in those cases. - It will be assumed that a fault occurs in one of a plurality of routes during the volume replication operation between the storages and this route fault is detected. For example, it will be assumed that setting of replication exists with the
volume 113 a of thestorage 11 a inFIG. 4 being the replication source and thevolume 123 a of thestorage 11 b being the replication destination. When thereplication type 234 of the pair information table 230 is “synchronous” and the Write request is raised for the volume 114 a, or when thereplication type 234 is “asynchronous” and the user or the application designates the start of replication, the controller of the storage start replication. When the controller of the storage or the switch A440 or B450 detects the fault of the route after the start of this replication, detection of the fault is notified to the managingcomputer 100 through the storage network communication equipment and the network, and the managingcomputer 100 receives the communication content (Step 700). - The route fault includes the case where the fault is notified from the storage or the switch and the case where the managing
computer 100 periodically asks the fault of the storage or the switch. When the condition of the route having thevalue 1 of theroute ID 511 is “abnormal” as shown in the route information table 510, for example, the managingcomputer 100 receives the route fault inStep 700. - When the route fault is detected, whether or not another route should be searched for the
volume 113 a of the replication source failing replication is judged depending on thepolicy 213 of thevolume ID 211 corresponding to thevolume 113 a (Step 701). - When the search of another route is judged as necessary, the flow proceeds to Step 702 and when not, to Step 706. When the volume is the one having high importance in the
policy 213 of the volume, for example, replication must be made quickly and the flow may proceed at this time to Step 702. When the policy of the volume is not particularly important, the flow may proceed to Step 706. - When the judgment of
Step 701 represents that another route must be searched, whether or not another route capable of reaching the same storage exists among the routes managed by the managingcomputer 100 is checked (Step 702). - At this time, the managing
computer 100 may give an instruction to search data capable of being actually connected through the switch or the storage. Alternatively, it is possible to register in advance the storage route and to utilize this information. In this case, it is only necessary to register the set of the storages for each route as another attribute to the route management table 510. When the route exists and is not registered to the route management table 510, theroute ID 511, theroute 512, thecondition 513, the associatedstorage 514 and theservice 516 are registered. When the route exists, the flow proceeds to Step 703 and when not, to Step 706. - Incidentally, the
policy 516 may be acquired from each switch, or the user or the application may set thepolicy 516. It will be assumed, for example, that the fault occurs in the route of thedata network 461 in the route connecting thestorage 11 a and thestorage 11 b through thedata network 460, the switch A440, thedata network 461, the switch B450 and thedata network 462 in the construction shown inFIG. 4 , it is possible to set another route for connecting thestorages data network 460, the switch A440, thedata network 463, the switch B450 and thedata network 462. - When the route exists in
Step 702, the policy of the volume of the replication destination and the policy of the route are compared on the basis of the policy of the network and whether or not the policies are coincident is judged. When they are coincident, the flow proceeds to Step 704 and when not, to Step 706 (Step 703). - When the replication request of the volume of the replication destination is “high speed” and when the policy of the route is “low speed” in this case, for example, the requirement cannot be satisfied and the flow proceeds to Step 706. However, when there is the policy in which replication is more preferably continued even at the low line speed in accordance with the condition of fault, replication may be carried out by use of the low speed line. This request may be registered to the
policy 213 of the volume information table, or the user or the application may give this request instruction as a part of the processing ofStep 703. - When the policies are found coincident in the judgment of
Step 703, setting of the normal route is requested for the switches and the storages, and the volumes mutually confirm that the replication processing is possible. When a plurality of routes exists, a judgment may be made to automatically select one route in accordance with the policy, or the user or the application may be allowed to judge by providing a plurality of results (Step 704). - Next, the managing computer gives the start instruction of replication to the storages by use of the route set in Step 704 (Step 705).
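Steps 702 to 704 above can be pictured with the following hedged sketch, using the FIG. 4 example topology (a fault in the data network 461, an alternate hop through the data network 463). The route hops, the speed ranking, and the "allow_degraded" flag standing in for the low-speed-tolerant policy are all assumptions, not the patented implementation.

```python
# Assumed candidate routes between storage 11a and 11b (hop names invented).
CANDIDATE_ROUTES = {
    1: {"hops": ["net460", "net461", "net462"], "policy": "high speed"},  # registered route
    3: {"hops": ["net460", "net463", "net462"], "policy": "high speed"},  # via data network 463
    5: {"hops": ["net460", "net464", "net462"], "policy": "low speed"},   # slower spare line
}
SPEED_RANK = {"low speed": 0, "high speed": 1}

def select_alternate_route(failed_networks, volume_policy, allow_degraded=False):
    """Return the ID of a surviving route whose line policy satisfies the
    volume's policy (Step 703), or a slower route when the volume's policy
    tolerates degraded replication; None falls through to Step 706."""
    for route_id, route in CANDIDATE_ROUTES.items():
        if failed_networks.intersection(route["hops"]):
            continue  # Step 702: this route crosses a faulty network
        if SPEED_RANK[route["policy"]] >= SPEED_RANK[volume_policy] or allow_degraded:
            return route_id  # Step 704 would then set this route
    return None
```

With only net461 failed, the sketch picks route 3; with both high-speed paths down, a "high speed" volume gets no route unless its policy tolerates the low-speed line.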
- When the judgment result proves NO in the judgment of Steps 701, 702 or 703, whether or not the replication data should be permanently stored in the cache is judged on the basis of the policy of the volume (Step 706). - When the judgment result of
Step 706 represents that the replication data should not be stored permanently in the cache, the managing computer 100 instructs the controller of the storage to discard the cache data of the volume from the cache and to reduce the cache usage ratio (Step 707). - After the processing of Step 707, or when the judgment result of Step 706 represents that the replication data should be permanently stored in the cache, the route fault is reported and the processing is finished (Step 708). - When a fault of replication occurs in the processing explained above, replication is sometimes made through a route different from the registered route. In this case, the processing capacity of the line of that route may be affected by other replication processing. Therefore, even when the policy of the route is "high speed", the invention checks whether or not the policy can be kept when the replication processing increases; when the policy can be kept, the invention can set the replication processing in which the fault occurred to use that route as another route.
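The cache decision of Steps 706 to 708 can be pictured with the following hedged sketch; the boolean policy flag and the cache key are assumptions introduced for illustration, not fields defined by the patent.

```python
def handle_route_fault(keep_in_cache, cache):
    """Apply the Step 706 decision and return the Step 708 report.

    keep_in_cache: assumed stand-in for the volume policy entry deciding
    whether replication data stays cached while the route is down.
    """
    if not keep_in_cache:
        # Step 707: discard the volume's replication data to lower cache usage.
        cache.pop("replication_data", None)
        return "route fault reported; replication data discarded from cache"
    return "route fault reported; replication data retained in cache"
```

For example, a volume whose policy does not require retention has its pending replication data dropped from the cache before the fault report is issued.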
- As explained above, the second embodiment of the invention can judge, when a fault occurs in a plurality of routes, whether replication should be made through another route or whether the content of the cache should be discarded, depending on the policy of the volume, and can thus replicate the volumes efficiently.
-
FIG. 8 is a flowchart useful for explaining the processing operation for generating the replication pair in another storage when another route cannot be secured, or when replication cannot be conducted even though the replication data is permanently stored in the cache. Next, this processing will be explained. The processing shown in this flowchart is contained in the volume management module 105 and the route management module 401. The processing explained here separately generates a replication pair for volumes whose replication fault cannot be recovered.
- For the pair after the fault occurs, whether fault recovery is impossible or a drastic delay occurs is judged. This judgment is made by setting a threshold value on the time from the occurrence of the volume fault to the recovery of the route fault (Step 800).
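The Step 800 threshold judgment can be pictured with the following minimal sketch; the 600-second threshold is an invented example value, since the patent leaves the threshold unspecified.

```python
RECOVERY_THRESHOLD_SEC = 600  # assumed example threshold, not from the patent

def recovery_abandoned(fault_time_sec, now_sec, threshold=RECOVERY_THRESHOLD_SEC):
    """True when the route fault has persisted past the per-volume threshold,
    i.e. recovery is treated as impossible or drastically delayed (Step 800)."""
    return (now_sec - fault_time_sec) > threshold
```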
- Next, whether or not the replication may be generated by use of other storages is judged from the policy 214 of the volume in the volume information table 210. When the judgment result represents that other storages are not to be utilized, such as when the policy inhibits replication to other storages, the processing is finished without further action (Step 801).
- When the judgment result of Step 801 represents that replication may be set to other storages, the storages that can be registered on the volume replication route are searched, and whether or not such storages exist is judged (Step 802).
- When a policy relating to distance is set, this judgment may conclude that the policy is contradicted even though the route can be registered. The flow proceeds to Step 803 when storages having such a route exist; the processing is finished when no such route exists. When the routes are in the condition represented by the route information table 510 and the condition of the route 1 is "abnormal", for example, the route 3 is a route whose condition is normal and which satisfies the policy.
- When the judgment result in Step 802 represents that a storage for which the route can be set exists, the pair is generated. This pair generation processing executes the flow explained with reference to FIG. 3, in accordance with the instruction that the processing should be executed within the range of the storages obtained by the processing of Step 802 (Step 803).
- In the processing described above, the route management module 401 executes the processing of Steps 800 to 802, and the volume management module 105 executes the processing of Step 803.
- By executing the processing described above, this embodiment temporarily stops the replication processing when the fault occurs, but can change the setting of the currently registered pairs depending on the policy of the volumes and can thus continue replication itself regardless of the recovery speed of the route fault.
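Steps 801 and 802 above can be pictured with the following hedged sketch of the search over the route information table 510; the table rows, the distance column, and the "forbid_remote" flag standing in for the policy 214 entry are assumptions mirroring the example (route 1 abnormal, route 3 normal).

```python
# Assumed rows mirroring the route information table 510 example.
ROUTE_TABLE = [
    {"route_id": 1, "condition": "abnormal", "storage": "11b", "distance_km": 10},
    {"route_id": 3, "condition": "normal",   "storage": "11c", "distance_km": 40},
]

def find_pair_storage(allow_other_storage, max_distance_km=None):
    """Return a storage reachable over a healthy route for a new replication
    pair (Step 802), or None when the policy forbids it (Step 801) or when
    every healthy route contradicts the distance policy."""
    if not allow_other_storage:
        return None  # Step 801: policy inhibits replication to other storages
    for row in ROUTE_TABLE:
        if row["condition"] != "normal":
            continue
        if max_distance_km is not None and row["distance_km"] > max_distance_km:
            continue  # distance policy contradicted even though the route exists
        return row["storage"]  # Step 803 would generate the pair here
    return None
```

Under the assumed table, the search returns storage 11c over route 3 unless the policy forbids other storages or the distance constraint excludes it.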
- The processing of each embodiment of the invention described above can be implemented as a processing program. The processing program can be stored and provided on storage media such as HD, DAT, FD, MO, DVD-ROM, CD-ROM, and so forth.
-
FIG. 9 is a block diagram showing the construction of a storage operation system according to a third embodiment of the invention. The third embodiment employs a construction in which the functions of the managing computer in the first and second embodiments are disposed inside the storage.
- The third embodiment of the invention shown in FIG. 9 represents an example where a processing part for executing management is disposed inside the storage 11a shown in FIG. 4. The volume management module 105 and the route management module 401, which exist in the storage device 104 of the managing computer 100 shown in FIG. 4, are disposed inside a storage device 900 of the storage 11a. The volume management module 105 and the route management module 401 are accomplished as software that is stored in the storage device 900 of the storage 11a and is read and executed by the controller 112. A data synchronization module 901 is disposed so that mutual information coincides while data consistency with the other storages is kept. Data consistency is kept by use of the network communication equipment 111, etc.
- The third embodiment of the invention having the construction described above can eliminate the managing computer 100 utilized in the first and second embodiments. When a plurality of managing computers 100 is provided in a large-scale storage operation management system, the third embodiment uses the data synchronization module 901 among the managing computers 100; therefore, even when one managing computer cannot hold all the information, the data can be shared among the plurality of managing computers.
- According to each embodiment of the invention described above, the volume management module 105 can generate a volume replication pair between or inside the storages in accordance with the policy or properties of the volumes.
- In the volume replication operation, the route management module 401 and the volume management module 105 can classify troubleshooting in accordance with the policy of the volume and can thus troubleshoot volume replication depending on the policy of the volume.
- As explained above, the invention can efficiently operate and manage replication of the data areas inside the same storage or between the storages in accordance with information of the data areas of the storage.
- It should be further understood by those skilled in the art that although the foregoing description has been made on embodiments of the invention, the invention is not limited thereto and various changes and modifications may be made without departing from the spirit of the invention and the scope of the appended claims.
Claims (5)
1.-17. (canceled)
18. A storage system coupled to a computer, comprising:
a plurality of storage devices comprising a plurality of volumes; and
a managing computer coupled to the plurality of storage devices having capacity information representing volume capacities of the plurality of volumes and read/write information representing read/write frequencies of the plurality of volumes;
wherein the management computer receives a request designating a copy source volume among the plurality of volumes, and selects a copy destination volume, whose capacity is equal to or larger than a capacity of the copy source volume, from the plurality of volumes based on the capacity information,
wherein a copy source storage device among the plurality of storage devices corresponding to the copy source volume forms a copy pair between the copy source volume and the copy destination volume, and copies the contents of the copy source volume to the copy destination volume, and
wherein the copy source storage device releases the copy pair after the copy, according to a part of the read/write information corresponding to the copy source volume.
19. A storage system according to claim 18 , wherein the copy destination volume is included in a copy destination storage device, which is one of the plurality of storage devices, and wherein the selection of the copy destination volume is based on a distance between the copy source storage device and the copy destination storage device.
20. A storage system according to claim 18 , wherein the read/write information includes a plurality of values for representing the read/write frequencies of the plurality of volumes, each of the plurality of values corresponding to a read/write frequency of each of the plurality of volumes.
21. A storage system according to claim 20 , wherein the part of the read/write information corresponding to the copy source volume is one of the plurality of values corresponding to the copy source volume.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/143,192 US20080263111A1 (en) | 2003-05-08 | 2008-06-20 | Storage operation management program and method and a storage management computer |
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2003130297A JP2004334574A (en) | 2003-05-08 | 2003-05-08 | Operation managing program and method of storage, and managing computer |
JP2003-130297 | 2003-05-08 | ||
US10/650,851 US7191198B2 (en) | 2003-05-08 | 2003-08-29 | Storage operation management program and method and a storage management computer |
US11/648,655 US7483928B2 (en) | 2003-05-08 | 2007-01-03 | Storage operation management program and method and a storage management computer |
US12/143,192 US20080263111A1 (en) | 2003-05-08 | 2008-06-20 | Storage operation management program and method and a storage management computer |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/648,655 Continuation US7483928B2 (en) | 2003-05-08 | 2007-01-03 | Storage operation management program and method and a storage management computer |
Publications (1)
Publication Number | Publication Date |
---|---|
US20080263111A1 true US20080263111A1 (en) | 2008-10-23 |
Family
ID=33410550
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/650,851 Expired - Fee Related US7191198B2 (en) | 2003-05-08 | 2003-08-29 | Storage operation management program and method and a storage management computer |
US11/648,655 Expired - Lifetime US7483928B2 (en) | 2003-05-08 | 2007-01-03 | Storage operation management program and method and a storage management computer |
US12/143,192 Abandoned US20080263111A1 (en) | 2003-05-08 | 2008-06-20 | Storage operation management program and method and a storage management computer |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/650,851 Expired - Fee Related US7191198B2 (en) | 2003-05-08 | 2003-08-29 | Storage operation management program and method and a storage management computer |
US11/648,655 Expired - Lifetime US7483928B2 (en) | 2003-05-08 | 2007-01-03 | Storage operation management program and method and a storage management computer |
Country Status (3)
Country | Link |
---|---|
US (3) | US7191198B2 (en) |
EP (1) | EP1507206A3 (en) |
JP (1) | JP2004334574A (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060212668A1 (en) * | 2005-03-17 | 2006-09-21 | Fujitsu Limited | Remote copy method and storage system |
US20100199042A1 (en) * | 2009-01-30 | 2010-08-05 | Twinstrata, Inc | System and method for secure and reliable multi-cloud data replication |
US20110320508A1 (en) * | 2010-04-02 | 2011-12-29 | Hitachi, Ltd. | Computer system management method and client computer |
US20130036326A1 (en) * | 2011-08-03 | 2013-02-07 | International Business Machines Corporation | Acquiring a storage system into copy services management software |
US8533850B2 (en) | 2010-06-29 | 2013-09-10 | Hitachi, Ltd. | Fraudulent manipulation detection method and computer for detecting fraudulent manipulation |
US8850592B2 (en) | 2010-03-10 | 2014-09-30 | Hitachi, Ltd. | Unauthorized operation detection system and unauthorized operation detection method |
Families Citing this family (79)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050038954A1 (en) * | 2003-06-04 | 2005-02-17 | Quantum Corporation | Storage drive having universal format across media types |
JP4537022B2 (en) * | 2003-07-09 | 2010-09-01 | 株式会社日立製作所 | A data processing method, a storage area control method, and a data processing system that limit data arrangement. |
JP4863605B2 (en) | 2004-04-09 | 2012-01-25 | 株式会社日立製作所 | Storage control system and method |
CA2546304A1 (en) | 2003-11-13 | 2005-05-26 | Commvault Systems, Inc. | System and method for performing an image level snapshot and for restoring partial volume data |
JP4426275B2 (en) | 2003-12-16 | 2010-03-03 | 株式会社日立製作所 | Remote copy control method |
JP2005222404A (en) * | 2004-02-06 | 2005-08-18 | Hitachi Ltd | Storage control subsystem having virtual storage unit |
JP4756852B2 (en) * | 2004-11-25 | 2011-08-24 | 株式会社東芝 | Document management apparatus, document management method, and document management program |
JP2006185108A (en) * | 2004-12-27 | 2006-07-13 | Hitachi Ltd | Management computer for managing data of storage system, and data management method |
JP2006293850A (en) | 2005-04-13 | 2006-10-26 | Hitachi Ltd | Remote copy system and remote copy method |
US8903949B2 (en) * | 2005-04-27 | 2014-12-02 | International Business Machines Corporation | Systems and methods of specifying service level criteria |
JP4963808B2 (en) * | 2005-08-05 | 2012-06-27 | 株式会社日立製作所 | Storage control system |
JP4686303B2 (en) * | 2005-08-24 | 2011-05-25 | 株式会社日立製作所 | Storage management method and storage system |
JP2007102452A (en) * | 2005-10-04 | 2007-04-19 | Fujitsu Ltd | System management program and system management method |
JP4843294B2 (en) * | 2005-11-04 | 2011-12-21 | 株式会社日立製作所 | Computer system and management computer |
US7606844B2 (en) * | 2005-12-19 | 2009-10-20 | Commvault Systems, Inc. | System and method for performing replication copy storage operations |
US7636743B2 (en) * | 2005-12-19 | 2009-12-22 | Commvault Systems, Inc. | Pathname translation in a data replication system |
US8661216B2 (en) | 2005-12-19 | 2014-02-25 | Commvault Systems, Inc. | Systems and methods for migrating components in a hierarchical storage network |
US7962709B2 (en) * | 2005-12-19 | 2011-06-14 | Commvault Systems, Inc. | Network redirector systems and methods for performing data replication |
US7617262B2 (en) | 2005-12-19 | 2009-11-10 | Commvault Systems, Inc. | Systems and methods for monitoring application data in a data replication system |
US7651593B2 (en) | 2005-12-19 | 2010-01-26 | Commvault Systems, Inc. | Systems and methods for performing data replication |
CA2632935C (en) * | 2005-12-19 | 2014-02-04 | Commvault Systems, Inc. | Systems and methods for performing data replication |
JP4800059B2 (en) * | 2006-02-13 | 2011-10-26 | 株式会社日立製作所 | Virtual storage system and control method thereof |
US7831793B2 (en) * | 2006-03-01 | 2010-11-09 | Quantum Corporation | Data storage system including unique block pool manager and applications in tiered storage |
US7480817B2 (en) * | 2006-03-31 | 2009-01-20 | International Business Machines Corporation | Method for replicating data based on probability of concurrent failure |
US7702866B2 (en) * | 2006-03-31 | 2010-04-20 | International Business Machines Corporation | Use of volume containers in replication and provisioning management |
JP2007323218A (en) * | 2006-05-31 | 2007-12-13 | Hitachi Ltd | Backup system |
US8726242B2 (en) | 2006-07-27 | 2014-05-13 | Commvault Systems, Inc. | Systems and methods for continuous data replication |
JP4949791B2 (en) | 2006-09-29 | 2012-06-13 | 株式会社日立製作所 | Volume selection method and information processing system |
JP2008117094A (en) * | 2006-11-02 | 2008-05-22 | Hitachi Ltd | Storage system, storage device, and storage management method |
US8290808B2 (en) * | 2007-03-09 | 2012-10-16 | Commvault Systems, Inc. | System and method for automating customer-validated statement of work for a data storage environment |
JP2008269171A (en) * | 2007-04-18 | 2008-11-06 | Hitachi Ltd | Storage system, management server, method for supporting system reconfiguration of storage system, and method for supporting system reconfiguration of management server |
JP4434235B2 (en) | 2007-06-05 | 2010-03-17 | 株式会社日立製作所 | Computer system or computer system performance management method |
JP2009020568A (en) * | 2007-07-10 | 2009-01-29 | Hitachi Ltd | Storage system, and method for designing disaster recovery configuration |
US8495315B1 (en) * | 2007-09-29 | 2013-07-23 | Symantec Corporation | Method and apparatus for supporting compound disposition for data images |
JP5172574B2 (en) | 2008-09-29 | 2013-03-27 | 株式会社日立製作所 | Management computer used to build a backup configuration for application data |
US8204859B2 (en) * | 2008-12-10 | 2012-06-19 | Commvault Systems, Inc. | Systems and methods for managing replicated database data |
US9495382B2 (en) * | 2008-12-10 | 2016-11-15 | Commvault Systems, Inc. | Systems and methods for performing discrete data replication |
US8504517B2 (en) | 2010-03-29 | 2013-08-06 | Commvault Systems, Inc. | Systems and methods for selective data replication |
US8352422B2 (en) | 2010-03-30 | 2013-01-08 | Commvault Systems, Inc. | Data restore systems and methods in a replication environment |
US8504515B2 (en) | 2010-03-30 | 2013-08-06 | Commvault Systems, Inc. | Stubbing systems and methods in a data replication environment |
US8725698B2 (en) | 2010-03-30 | 2014-05-13 | Commvault Systems, Inc. | Stub file prioritization in a data replication system |
WO2011150391A1 (en) | 2010-05-28 | 2011-12-01 | Commvault Systems, Inc. | Systems and methods for performing data replication |
JP5605847B2 (en) * | 2011-02-08 | 2014-10-15 | Necソリューションイノベータ株式会社 | Server, client, backup system having these, and backup method therefor |
JP5707214B2 (en) * | 2011-04-21 | 2015-04-22 | みずほ情報総研株式会社 | File management system and file management method |
US9298715B2 (en) | 2012-03-07 | 2016-03-29 | Commvault Systems, Inc. | Data storage system utilizing proxy device for storage operations |
US9471578B2 (en) | 2012-03-07 | 2016-10-18 | Commvault Systems, Inc. | Data storage system utilizing proxy device for storage operations |
US9342537B2 (en) | 2012-04-23 | 2016-05-17 | Commvault Systems, Inc. | Integrated snapshot interface for a data storage system |
US20130304705A1 (en) * | 2012-05-11 | 2013-11-14 | Twin Peaks Software, Inc. | Mirror file system |
JP5949352B2 (en) * | 2012-09-07 | 2016-07-06 | 株式会社Ihi | Monitoring data management system |
US8918555B1 (en) * | 2012-11-06 | 2014-12-23 | Google Inc. | Adaptive and prioritized replication scheduling in storage clusters |
US9336226B2 (en) | 2013-01-11 | 2016-05-10 | Commvault Systems, Inc. | Criteria-based data synchronization management |
US9886346B2 (en) | 2013-01-11 | 2018-02-06 | Commvault Systems, Inc. | Single snapshot for multiple agents |
US9639426B2 (en) | 2014-01-24 | 2017-05-02 | Commvault Systems, Inc. | Single snapshot for multiple applications |
US9753812B2 (en) | 2014-01-24 | 2017-09-05 | Commvault Systems, Inc. | Generating mapping information for single snapshot for multiple applications |
US9495251B2 (en) | 2014-01-24 | 2016-11-15 | Commvault Systems, Inc. | Snapshot readiness checking and reporting |
US9632874B2 (en) | 2014-01-24 | 2017-04-25 | Commvault Systems, Inc. | Database application backup in single snapshot for multiple applications |
JP2015162001A (en) * | 2014-02-26 | 2015-09-07 | 富士通株式会社 | Storage management device, storage device, and storage management program |
US10042716B2 (en) | 2014-09-03 | 2018-08-07 | Commvault Systems, Inc. | Consolidated processing of storage-array commands using a forwarder media agent in conjunction with a snapshot-control media agent |
US9774672B2 (en) | 2014-09-03 | 2017-09-26 | Commvault Systems, Inc. | Consolidated processing of storage-array commands by a snapshot-control media agent |
US9448731B2 (en) | 2014-11-14 | 2016-09-20 | Commvault Systems, Inc. | Unified snapshot storage management |
US9648105B2 (en) | 2014-11-14 | 2017-05-09 | Commvault Systems, Inc. | Unified snapshot storage management, using an enhanced storage manager and enhanced media agents |
US10503753B2 (en) | 2016-03-10 | 2019-12-10 | Commvault Systems, Inc. | Snapshot replication operations based on incremental block change tracking |
US10649655B2 (en) * | 2016-09-30 | 2020-05-12 | Western Digital Technologies, Inc. | Data storage system with multimedia assets |
US10416922B1 (en) * | 2017-04-28 | 2019-09-17 | EMC IP Holding Company LLC | Block-based backups for large-scale volumes and advanced file type devices |
US10409521B1 (en) * | 2017-04-28 | 2019-09-10 | EMC IP Holding Company LLC | Block-based backups for large-scale volumes |
US11016694B1 (en) * | 2017-10-30 | 2021-05-25 | EMC IP Holding Company LLC | Storage drivers for remote replication management |
US10740022B2 (en) | 2018-02-14 | 2020-08-11 | Commvault Systems, Inc. | Block-level live browsing and private writable backup copies using an ISCSI server |
US10956078B2 (en) | 2018-03-27 | 2021-03-23 | EMC IP Holding Company LLC | Storage system with loopback replication process providing object-dependent slice assignment |
US10866969B2 (en) * | 2018-03-28 | 2020-12-15 | EMC IP Holding Company LLC | Storage system with loopback replication process providing unique identifiers for collision-free object pairing |
US11042318B2 (en) | 2019-07-29 | 2021-06-22 | Commvault Systems, Inc. | Block-level data replication |
US11892983B2 (en) | 2021-04-29 | 2024-02-06 | EMC IP Holding Company LLC | Methods and systems for seamless tiering in a distributed storage system |
US12093435B2 (en) | 2021-04-29 | 2024-09-17 | Dell Products, L.P. | Methods and systems for securing data in a distributed storage system |
US11922071B2 (en) | 2021-10-27 | 2024-03-05 | EMC IP Holding Company LLC | Methods and systems for storing data in a distributed system using offload components and a GPU module |
US12007942B2 (en) * | 2021-10-27 | 2024-06-11 | EMC IP Holding Company LLC | Methods and systems for seamlessly provisioning client application nodes in a distributed system |
US11762682B2 (en) | 2021-10-27 | 2023-09-19 | EMC IP Holding Company LLC | Methods and systems for storing data in a distributed system using offload components with advanced data services |
US11677633B2 (en) | 2021-10-27 | 2023-06-13 | EMC IP Holding Company LLC | Methods and systems for distributing topology information to client nodes |
US11809285B2 (en) | 2022-02-09 | 2023-11-07 | Commvault Systems, Inc. | Protecting a management database of a data storage management system to meet a recovery point objective (RPO) |
JP2023136323A (en) * | 2022-03-16 | 2023-09-29 | 株式会社日立製作所 | storage system |
US12056018B2 (en) | 2022-06-17 | 2024-08-06 | Commvault Systems, Inc. | Systems and methods for enforcing a recovery point objective (RPO) for a production database without generating secondary copies of the production database |
Citations (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5778383A (en) * | 1995-08-08 | 1998-07-07 | Apple Computer, Inc. | System for dynamically caching and constructing software resource tables |
US6119208A (en) * | 1997-04-18 | 2000-09-12 | Storage Technology Corporation | MVS device backup system for a data processor using a data storage subsystem snapshot copy capability |
US20010047412A1 (en) * | 2000-05-08 | 2001-11-29 | Weinman Joseph B. | Method and apparatus for maximizing distance of data mirrors |
US20020055972A1 (en) * | 2000-05-08 | 2002-05-09 | Weinman Joseph Bernard | Dynamic content distribution and data continuity architecture |
US20020059329A1 (en) * | 1997-12-04 | 2002-05-16 | Yoko Hirashima | Replication method |
US20020143999A1 (en) * | 2001-03-30 | 2002-10-03 | Kenji Yamagami | Path selection methods for storage based remote copy |
US20030046270A1 (en) * | 2001-08-31 | 2003-03-06 | Arkivio, Inc. | Techniques for storing data based upon storage policies |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE69938378T2 (en) * | 1998-08-20 | 2009-04-30 | Hitachi, Ltd. | Copy data to storage systems |
JP3868708B2 (en) | 2000-04-19 | 2007-01-17 | 株式会社日立製作所 | Snapshot management method and computer system |
JP2002222061A (en) | 2001-01-25 | 2002-08-09 | Hitachi Ltd | Method for setting storage area, storage device, and program storage medium |
JP2003122508A (en) | 2001-10-15 | 2003-04-25 | Hitachi Ltd | Volume management method and device |
US6880052B2 (en) | 2002-03-26 | 2005-04-12 | Hewlett-Packard Development Company, Lp | Storage area network, data replication and storage controller, and method for replicating data using virtualized volumes |
JP2004157637A (en) * | 2002-11-05 | 2004-06-03 | Hitachi Ltd | Storage management method |
JP4537022B2 (en) * | 2003-07-09 | 2010-09-01 | 株式会社日立製作所 | Data processing method, storage area control method, and data processing system for restricting data placement |
- 2003
  - 2003-05-08 JP JP2003130297A patent/JP2004334574A/en active Pending
  - 2003-08-29 US US10/650,851 patent/US7191198B2/en not_active Expired - Fee Related
- 2004
  - 2004-03-04 EP EP04005178A patent/EP1507206A3/en not_active Withdrawn
- 2007
  - 2007-01-03 US US11/648,655 patent/US7483928B2/en not_active Expired - Lifetime
- 2008
  - 2008-06-20 US US12/143,192 patent/US20080263111A1/en not_active Abandoned
Patent Citations (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5778383A (en) * | 1995-08-08 | 1998-07-07 | Apple Computer, Inc. | System for dynamically caching and constructing software resource tables |
US6119208A (en) * | 1997-04-18 | 2000-09-12 | Storage Technology Corporation | MVS device backup system for a data processor using a data storage subsystem snapshot copy capability |
US20020059329A1 (en) * | 1997-12-04 | 2002-05-16 | Yoko Hirashima | Replication method |
US20010047412A1 (en) * | 2000-05-08 | 2001-11-29 | Weinman Joseph B. | Method and apparatus for maximizing distance of data mirrors |
US20020055972A1 (en) * | 2000-05-08 | 2002-05-09 | Weinman Joseph Bernard | Dynamic content distribution and data continuity architecture |
US20030167312A1 (en) * | 2000-09-13 | 2003-09-04 | Yoshiaki Mori | Method of copying data and recording medium including a recorded program for copying data |
US6754792B2 (en) * | 2000-12-20 | 2004-06-22 | Hitachi, Ltd. | Method and apparatus for resynchronizing paired volumes via communication line |
US20020143999A1 (en) * | 2001-03-30 | 2002-10-03 | Kenji Yamagami | Path selection methods for storage based remote copy |
US20030046270A1 (en) * | 2001-08-31 | 2003-03-06 | Arkivio, Inc. | Techniques for storing data based upon storage policies |
US20050033757A1 (en) * | 2001-08-31 | 2005-02-10 | Arkivio, Inc. | Techniques for performing policy automated operations |
US20030195903A1 (en) * | 2002-03-19 | 2003-10-16 | Manley Stephen L. | System and method for asynchronous mirroring of snapshots at a destination using a purgatory directory and inode mapping |
US6996672B2 (en) * | 2002-03-26 | 2006-02-07 | Hewlett-Packard Development, L.P. | System and method for active-active data replication |
US20030187812A1 (en) * | 2002-03-27 | 2003-10-02 | Microsoft Corporation | Method and system for managing data records on a computer network |
US6922763B2 (en) * | 2002-03-29 | 2005-07-26 | Hitachi, Ltd. | Method and apparatus for storage system |
US20030204597A1 (en) * | 2002-04-26 | 2003-10-30 | Hitachi, Inc. | Storage system having virtualized resource |
US20040205310A1 (en) * | 2002-06-12 | 2004-10-14 | Hitachi, Ltd. | Method and apparatus for managing replication volumes |
US20030233518A1 (en) * | 2002-06-12 | 2003-12-18 | Hitachi, Ltd. | Method and apparatus for managing replication volumes |
US20040117602A1 (en) * | 2002-12-12 | 2004-06-17 | Nexsil Communications, Inc. | Native Copy Instruction for File-Access Processor with Copy-Rule-Based Validation |
US7263590B1 (en) * | 2003-04-23 | 2007-08-28 | Emc Corporation | Method and apparatus for migrating data in a computer system |
US20050066095A1 (en) * | 2003-09-23 | 2005-03-24 | Sachin Mullick | Multi-threaded write interface and methods for increasing the single file read and write throughput of a file server |
US20050071390A1 (en) * | 2003-09-30 | 2005-03-31 | Livevault Corporation | Systems and methods for backing up data files |
US7103740B1 (en) * | 2003-12-31 | 2006-09-05 | Veritas Operating Corporation | Backup mechanism for a multi-class file system |
US20050216661A1 (en) * | 2004-03-29 | 2005-09-29 | Hitachi, Ltd. | Method and apparatus for multistage volume locking |
US7197520B1 (en) * | 2004-04-14 | 2007-03-27 | Veritas Operating Corporation | Two-tier backup mechanism |
Non-Patent Citations (2)
Title |
---|
Dialeris et al., "Oracle8i Recovery Manager User's Guide and Reference, Release 2 (8.1.6)", 1999, Oracle Corporation * |
Romero et al., "Backup and Recovery Advanced User's Guide", December 2003, Oracle Corporation * |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060212668A1 (en) * | 2005-03-17 | 2006-09-21 | Fujitsu Limited | Remote copy method and storage system |
US7971011B2 (en) | 2005-03-17 | 2011-06-28 | Fujitsu Limited | Remote copy method and storage system |
US20100199042A1 (en) * | 2009-01-30 | 2010-08-05 | Twinstrata, Inc | System and method for secure and reliable multi-cloud data replication |
US8762642B2 (en) | 2009-01-30 | 2014-06-24 | Twinstrata Inc | System and method for secure and reliable multi-cloud data replication |
US8850592B2 (en) | 2010-03-10 | 2014-09-30 | Hitachi, Ltd. | Unauthorized operation detection system and unauthorized operation detection method |
US20110320508A1 (en) * | 2010-04-02 | 2011-12-29 | Hitachi, Ltd. | Computer system management method and client computer |
US9124616B2 (en) * | 2010-04-02 | 2015-09-01 | Hitachi, Ltd. | Computer system management method and client computer |
US8533850B2 (en) | 2010-06-29 | 2013-09-10 | Hitachi, Ltd. | Fraudulent manipulation detection method and computer for detecting fraudulent manipulation |
US20130036326A1 (en) * | 2011-08-03 | 2013-02-07 | International Business Machines Corporation | Acquiring a storage system into copy services management software |
US8788877B2 (en) * | 2011-08-03 | 2014-07-22 | International Business Machines Corporation | Acquiring a storage system into copy services management software |
Also Published As
Publication number | Publication date |
---|---|
US20070112897A1 (en) | 2007-05-17 |
US7191198B2 (en) | 2007-03-13 |
US20040225697A1 (en) | 2004-11-11 |
US7483928B2 (en) | 2009-01-27 |
EP1507206A3 (en) | 2008-07-23 |
EP1507206A2 (en) | 2005-02-16 |
JP2004334574A (en) | 2004-11-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7483928B2 (en) | Storage operation management program and method and a storage management computer | |
US7469289B2 (en) | Storage system having virtualized resource | |
US8229897B2 (en) | Restoring a file to its proper storage tier in an information lifecycle management environment | |
US7536444B2 (en) | Remote copying system and remote copying method | |
EP1569120B1 (en) | Computer system for recovering data based on priority of the data | |
US6571354B1 (en) | Method and apparatus for storage unit replacement according to array priority | |
JP5254611B2 (en) | Metadata management for fixed content distributed data storage | |
US6598174B1 (en) | Method and apparatus for storage unit replacement in non-redundant array | |
JP5567342B2 (en) | Network data storage system and data access method thereof | |
US7188187B2 (en) | File transfer method and system | |
US7254684B2 (en) | Data duplication control method | |
EP1510921A2 (en) | Remote copy storage system | |
EP1796004A1 (en) | Storage system and data processing system | |
US20050188254A1 (en) | Storage system making possible data synchronization confirmation at time of asynchronous remote copy | |
US20140188957A1 (en) | Hierarchical storage system and file management method | |
US8161008B2 (en) | Information processing apparatus and operation method thereof | |
JP2007241486A (en) | Memory system | |
KR100968301B1 (en) | System, apparatus, and method for automatic copy function selection | |
JP4287092B2 (en) | File management system and file management method | |
JP2003015933A (en) | File level remote copy method for storage device | |
JP2006185108A (en) | Management computer for managing data of storage system, and data management method | |
US8214613B2 (en) | Storage system and copy method | |
CN113076065B (en) | Data output fault tolerance method in high-performance computing system | |
US7149935B1 (en) | Method and system for managing detected corruption in stored data | |
US20030005358A1 (en) | Decentralized, self-regulating system for automatically discovering optimal configurations in a failure-rich environment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |