GB2514569A - System to control backup migration and recovery of data - Google Patents

System to control backup migration and recovery of data

Info

Publication number
GB2514569A
GB2514569A
Authority
GB
United Kingdom
Prior art keywords
data
system
target
recovery
machines
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1309543.5A
Other versions
GB201309543D0 (en)
Inventor
Ted Byrne
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
PUSH BUTTON RECOVERY Ltd
Original Assignee
PUSH BUTTON RECOVERY Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by PUSH BUTTON RECOVERY Ltd filed Critical PUSH BUTTON RECOVERY Ltd
Priority to GB1309543.5A priority Critical patent/GB2514569A/en
Publication of GB201309543D0 publication Critical patent/GB201309543D0/en
Publication of GB2514569A publication Critical patent/GB2514569A/en


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 - Digital input from or digital output to record carriers, e.g. RAID, emulated record carriers, networked record carriers
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 - Error detection; Error correction; Monitoring
    • G06F11/07 - Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14 - Error detection or correction of the data by redundancy in operation
    • G06F11/1402 - Saving, restoring, recovering or retrying
    • G06F11/1415 - Saving, restoring, recovering or retrying at system level
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 - Error detection; Error correction; Monitoring
    • G06F11/07 - Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16 - Error detection or correction of the data by redundancy in hardware
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 - Digital input from or digital output to record carriers, e.g. RAID, emulated record carriers, networked record carriers
    • G06F3/0601 - Dedicated interfaces to storage systems
    • G06F3/0602 - Dedicated interfaces to storage systems specifically adapted to achieve a particular effect
    • G06F3/0614 - Improving the reliability of storage systems
    • G06F3/0617 - Improving the reliability of storage systems in relation to availability
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 - Digital input from or digital output to record carriers, e.g. RAID, emulated record carriers, networked record carriers
    • G06F3/0601 - Dedicated interfaces to storage systems
    • G06F3/0628 - Dedicated interfaces to storage systems making use of a particular technique
    • G06F3/0646 - Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
    • G06F3/065 - Replication mechanisms
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 - Digital input from or digital output to record carriers, e.g. RAID, emulated record carriers, networked record carriers
    • G06F3/0601 - Dedicated interfaces to storage systems
    • G06F3/0668 - Dedicated interfaces to storage systems adopting a particular infrastructure
    • G06F3/067 - Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]

Abstract

Control of the backup, migration and recovery of IT systems in one consolidated platform, which supports the entire recovery continuum including files and folders, volumes, production machines (physical and virtual) through to clustered environments. Data from source machine(s) which have been nominated to require protection is continuously captured in real time. The data is then lifted over the network (LAN/WAN) to the target machines, where protection, migration and recovery are executed to create consistent data in and across volumes. The target machines are located on the premises and/or in different sites, countries or even continents. The system recovers not just one but many source machines on each target machine in parallel. Protection may be attained by integrating with the Volume Shadow Copy Service. After initial synchronization during back-up/archiving, only incremental changes may be replicated. The virtual or physical source server may be recovered in parallel from the last consistent back-up.

Description

BACKGROUND:

Detailed Prior Art

The prior art is as follows:

Granular recovery of data

US7885938 (B1) discloses techniques for granular recovery of data from local and remote storage, realized as a method for recovery of data from local and remote storage comprising determining a recovery location, determining a location of backup data, hard linking one or more portions of the backup data to the recovery location in the event that the one or more portions of the backup data to be hard linked are determined to be on a volume of the recovery location, virtually linking one or more portions of the backup data to the recovery location in the event that the one or more portions of the backup data to be virtually linked are determined to be on a volume different from the volume of the recovery location, and performing recovery utilizing one or more portions of recovery data.

EP1379949 (A1) and US7096382 (B2) disclose a system and a method for asynchronous replication for storage area networks: a data backup and recovery system for use with at least one server interconnected with at least one storage device and at least one data recovery device, with control information bearing an order stamp regarding data communications between the corresponding server and storage device.

US8156375 (B1) describes methods for efficient restoration of granular application data, where a method for restoring one or more portions of application data comprises virtualising one or more backup files of the application data into a specified staging area and running a recovery process for the one or more backup files.

US8209290 (B1) discloses a system and method for generically performing a granular restore operation from a volume image backup.

CN102388369 (A) discloses a system and method for granular application data lifecycle sourcing from a single backup.

The computer system periodically creates a primary backup copy of data stored on a storage system in order to create a plurality of primary backup copies. The computer system also periodically creates a secondary backup copy of data stored on the storage system in order to create a first plurality of secondary backup copies, wherein each of the secondary backup copies of the first plurality is created in part by copying data from a respective one of the primary backup copies. The periodicity of creating the primary backup copies, however, is distinct from the periodicity of creating the secondary backup copies of the first plurality.

US8315983 (B1) discloses a method and apparatus for performing granular restoration from machine images stored on sequential backup media.

GB2444338 (A) describes a distributed system, such as a decentralized peer-to-peer network, wherein a user can use any computer in the system and be presented with their own data and desktop. The system further comprises perpetual data storage wherein it is ensured that there are several copies of each piece of data at several geographic locations, such that the number of copies is monitored and further copies or replicas are generated when any type of failure is detected or there is corruption in any single copy. This ensures that the number of copies of the piece of data is maintained in the network. It further provides self-healing, fault resistance and duplicate removal.

US2007156966 (A1) discloses a method and system for providing granular timed invalidation of dynamically generated objects stored in a cache. The technique incorporates the ability to configure the expiration time of objects stored by the cache to fine granular time intervals, such as the granularity of time intervals provided by a packet processing timer of a packet processing engine. As such, the cache can expire objects with expiry times down to very small intervals of time.

US2006155676 (A1) describes a modular data and storage management system. The system includes a time variance interface that provides for storage into storage media of data that is received over time.

Synchronization of data

US2010262647 (A1) discloses granular data synchronization of multiple data objects. A three-tiered cache is automatically generated by an application program. A server data object is stored in a first tier and includes a first set of properties for a current state of data stored on a server. A client data object is stored in a second tier and includes a second set of properties including one or more properties in the first set and/or un-persisted edits made to the first set. A view data object is stored in a third tier and includes a third set of properties including un-persisted and unsaved edits made to the first or second set which are being viewed on a client. The server and client data objects are synchronized to determine edits made to the data stored on the server.

EP1498814 (A1) discloses a method and system for controlling which content gets precedence and is replicated. A replica set is comprised of a set of resources. Each resource is associated with resource data and resource meta-data. For file-based systems, resource data includes file contents and attributes, while resource meta-data includes additional attributes that are relevant for negotiating synchronization during replication. An extra field called a "fence value" is added to the meta-data associated with each resource. During synchronization, fence values are compared first. The resource with the highest fence value includes the content that is controlling and replicated. If fence values are equal (and greater than a particular value), the controlling resource is determined based on other meta-data.

JP2007233543 (A) aims to attain synchronous processing of a database in a realistic time while constructing a network between a main site and a sub-site over a narrow bandwidth line, with reduced operation cost of the system.

Recovery data and backup systems

CN102541686 (A) discloses a method for achieving backup and disaster recovery of a system by utilizing a virtual machine, which utilizes the Remus module of a Xen virtual machine to achieve the disaster recovery of a computer system and utilizes the libvirt library commonly used by all the virtual machines to achieve the backup of the computer system.

CN101989929 (A) discloses a disaster recovery data backup method and a disaster recovery data backup system, and belongs to the field of network management. The system comprises a receiving module, a segmentation module, a calculation and search module, a comparison module and a backup module.

US2012084261 (A1) discloses that cloud storage services can be used to facilitate secondary backup and disaster data recovery without the need for specialized backup servers at the secondary location or cloud storage service. Backup data streams are transferred to a cloud storage service.

US2011060722 (A1) discloses a centralized management mode backup disaster recovery system, which comprises: a control console for performing centralized control on a data container, a backup process module, storage medium, and a standby machine through respective control operations.

US2009248760 (A1) discloses a backup method of a computer system capable of recovery independently by a backup center alone, even if reorganization of a database cannot be completed due to a disaster of some kind.

KR20090013972 (A) discloses a real time data file backup/recovery system provided to disperse the load of system input-output using multithread technology and to copy data files of a user computer system exactly, safely and rapidly to a target path, so that they can be restored safely when disaster breaks out in the computer system.

CN101221522 (A) discloses a method for data synchronization in a disaster recovery backup system, wherein data synchronization between a primary system and a standby system is realized.

US7533229 (B1) discloses one or more computer systems, a carrier medium, and a method for backing up virtual machines. The backup may occur, e.g., to a backup medium or to a disaster recovery site. An apparatus includes a computer system configured to execute at least a first virtual machine, wherein the computer system is configured to: (i) capture a state of the first virtual machine, the state corresponding to a point in time in the execution of the first virtual machine; and (ii) copy at least a portion of the state to a destination separate from a storage device to which the first virtual machine is suspendable.

US2006036895 (A1) describes computer tools and methods that combine periodic backup and restore features with migration features to transfer the components of a failed system to a new system, which may be dissimilar to the old system. As well as backing up and transferring critical data files during the disaster recovery operation, the invention also transfers, inter alia, applications, user states, hardware settings, software settings, user preferences and other user settings, menus, and directories.

WO2004025498 (A1) relates to a computer primary data storage system that integrates the functionality of file backup and remote replication to provide an integrated storage system that protects its data from loss related to system or network failures or the physical loss of a data centre.

WO2008121249 (A2) discloses an advanced clock synchronization technique adapted for use with a replication service in a data backup and recovery storage environment.

The current invention is based on a system that controls the backup, migration and recovery of IT systems in one consolidated platform, where it supports the entire recovery continuum including files and folders, volumes, production machines (physical and virtual) through to clustered environments.

Summary of Invention

The invention is a system that controls the backup, migration and recovery of IT systems in one consolidated platform, where it supports the entire recovery continuum including files and folders, volumes, production machines (physical and virtual) through to clustered environments.

Another embodiment is that the data from source machine(s) which have been nominated to require protection is continuously captured in real time. The data is then lifted over the network (LAN/WAN) to the target machines, where protection, migration and recovery are executed. The target machines are located on the premises and/or in different sites, countries or even continents.


Another embodiment is that the process of protection is attained by integrating with the Volume Shadow Copy Service (VSS) on the source machines.

Another embodiment is that the system creates consistent data in volumes and across volumes. Data is lifted across the LAN/WAN network to target machines and matching volume sets are created on the targets to collect the consistent data.

Another embodiment is that the system allows the user to nominate the backup/archiving regime to be adopted, e.g. daily/weekly/monthly archiving, which is accomplished without interruption of the continuous replication occurring on the source machines.

The system conducts initial synchronization whilst backing up/archiving, after which only incremental changes on the source machine(s) are replicated to the target machine(s) so as to attain synchronization.

Another embodiment is that the process for recovering the server(s) is one of the following (an illustrative sketch follows the list):

i. Recover from the last consistent backup.

ii. Backup first and recover server(s) from a consistent time line.

iii. Archive the current collected data (inconsistent) and, once the recovered server is consistent, then collect data (no data loss).

iv. Recover without archive, consistent and/or inconsistent.

v. Migrate the backed-up machine to a virtual or physical machine.
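By way of illustration only, the choice between these recovery modes might be modelled as a simple dispatch; the class and method names below are assumptions made for the sketch, not terms used by the patent.

```python
from enum import Enum, auto

class RecoveryMode(Enum):
    """Recovery options i-v above (names are illustrative)."""
    FROM_LAST_CONSISTENT_BACKUP = auto()        # i
    BACKUP_FIRST_THEN_RECOVER = auto()          # ii
    ARCHIVE_INCONSISTENT_THEN_RECOVER = auto()  # iii
    RECOVER_WITHOUT_ARCHIVE = auto()            # iv
    MIGRATE_TO_VIRTUAL_OR_PHYSICAL = auto()     # v

def recover_server(controller, server, mode: RecoveryMode):
    """Dispatch a recovery of `server` according to `mode`.

    `controller` stands in for the management system; its methods are
    hypothetical placeholders for the operations described in the text.
    """
    if mode is RecoveryMode.BACKUP_FIRST_THEN_RECOVER:
        controller.archive(server)                    # take a backup first
    elif mode is RecoveryMode.ARCHIVE_INCONSISTENT_THEN_RECOVER:
        controller.archive(server, consistent=False)  # keep current (inconsistent) data
    controller.restore(server, point=controller.last_consistent_point(server))
    if mode is RecoveryMode.MIGRATE_TO_VIRTUAL_OR_PHYSICAL:
        controller.migrate(server)                    # e.g. P2V or V2P
```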

Another embodiment is that the system is designed to cater for all servers, Virtual or Physical, and to recover to Virtual or Physical.

Another embodiment is that the system recovers not just one but many source machines on each target machine in parallel.

Another embodiment is that the system controls many target machines and can therefore protect very large numbers of source machines, and data volumes can range from a few gigabytes to many terabytes.

DESCRIPTION

Figure 1: schematic of the system.

Figure 2: schematic of normal replication of continuous data in protection mode.

Figure 3: schematic of recovery of source machines from the local target machine.

Figure 4: schematic of the disaster recovery process to remote target machines.

Figure 1 shows the core concept of the system. The source machines (1) represent production machines which require protection and from where all data is captured (3) and then lifted (4) to target machines (5) which are located on premises and/or in different sites, countries or even continents. The target machine(s) (2) is where the protection (5), migration (6) and recovery (7) processes of the system are executed.

The system allows the source machine(s) (1), which can be physical or virtual, to be linked via WAN/LAN/IP (8) to target machine(s) (2) which can be physical or virtual. The data is continuously (9) captured (3) from all files and folders (10) from the source machine(s) (1) and is lifted to target machines at other sites (11) across LAN, WAN & IP (12).

The target machines (2) then provide data protection (5) by conducting continuous data backup (13) and archiving at consistent data points (14).

The data migrates (15) from physical to physical machine (P2P), physical to virtual machine (P2V), or virtual to physical machine (V2P) (15), and from any storage type to any storage (16). The system recovers (7) data from an entire machine (17) or from a live point to any point (18).

The system integrates with the Volume Shadow Copy Service (VSS) on the source machines. VSS creates recoverable copies of the operating system, applications and system data which the system then replicates to target machine(s).
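On Windows hosts a consistent point-in-time copy can be requested from VSS before replication. The following is a minimal sketch only, assuming a Windows Server edition where the `vssadmin create shadow` command is available and the process runs with administrative rights; the output parsing may need adjusting for other Windows versions.

```python
import re
import subprocess

def create_vss_shadow(volume: str = "C:") -> str:
    """Request a VSS shadow copy of `volume` and return its shadow copy ID."""
    result = subprocess.run(
        ["vssadmin", "create", "shadow", f"/for={volume}"],
        capture_output=True, text=True, check=True,
    )
    # vssadmin prints a line of the form "Shadow Copy ID: {GUID}"
    match = re.search(r"Shadow Copy ID:\s*(\{[0-9A-Fa-f-]+\})", result.stdout)
    if match is None:
        raise RuntimeError("Could not parse shadow copy ID from vssadmin output")
    return match.group(1)
```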

Data from source machine(s) which have been nominated for protection is continuously captured in real time.

The data is then lifted over the network (LAN/WAN) to the target machines, where protection, migration and recovery are executed.

These target machines can be located on the premises and/or in different sites, countries or even continents.

The system automatically provisions the target machine(s) disks.

Replication can be scheduled to make best use of network bandwidth, i.e. overnight, at weekends, etc. On completion of the initial synchronisation, only incremental changes on the source machine(s) are replicated to the target machine(s).
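After the initial synchronisation only changed data needs to travel to the target. As a rough illustration (file-level hashing rather than the block-level change tracking a production replication service would use), incremental change detection could look like this:

```python
import hashlib
from pathlib import Path

def changed_files(source_root: Path, state: dict) -> list[Path]:
    """Return files under `source_root` whose content differs from the last
    recorded state, updating `state` (path -> SHA-256 digest) in place."""
    changed = []
    for path in source_root.rglob("*"):
        if not path.is_file():
            continue
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if state.get(str(path)) != digest:
            state[str(path)] = digest
            changed.append(path)
    return changed

# After the initial synchronisation, only the files reported by
# changed_files() would be shipped to the target machine(s).
```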

As shown above, the target machine(s) can be physical or virtual machine(s). By leveraging virtualisation on target machine(s), the system provides a flexible and highly scalable enterprise class solution supporting the needs of all computer dependent organisations from small to medium enterprises through to global enterprises.

Source machines are supported whether physical or virtual. Source machines can be stand alone or clustered on physical hardware or virtualised or any combination thereof.

Target machines can either be physical machines or, more likely, virtualised machines.

A management system is created and is attached to both source and target machines via a GUI. The management system is supplied with source server information such as the number of volumes, sizes of volumes, operating system versions, physical or virtual machine, etc. When the target server resides on a virtual machine, it has the desired available data stores.

The system on the management machine controls the process and is linked to the source and target server(s). Initially, the replication service is suspended so that the system does not affect production services by overloading the source machine. The system provides these options because there might be times when there is little or no production service being carried out, for example out of normal working hours or at weekends. At such times it is useful to have all available machine resources supporting the replication service, particularly during the initial synchronisation.

Desired data stores are selected for the target machines and the replication service is restored. Then source server(s) are linked to target server(s) on the management machine graphical user interface (GUI). For replication, volumes are selected. The target volume may need to be larger or smaller than the original source volume, so it can be re-sized at this point. The target server(s) create the target volumes, and a relationship is created between source and target server(s).

The system is provided with source server information such as the number of volumes, sizes of volumes, operating system versions, physical or virtual machine, etc. Volumes to be replicated are selected and the job is dispatched to the target server(s) where the target volumes are created. After the target server re-boots, the job continues and the relationship between source and target server(s) is created. The process needs to be repeated for each source server.
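The pairing of source volumes with (possibly re-sized) target volumes and the per-server job dispatch could be recorded in a structure along the following lines; all names here are assumptions made for the sketch, not terms from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class VolumePair:
    source_volume: str    # e.g. "D:" on the production server
    target_volume: str    # matching volume provisioned on the target
    target_size_gb: int   # may differ from the source if re-sized

@dataclass
class ReplicationJob:
    source_server: str
    target_server: str
    volumes: list[VolumePair] = field(default_factory=list)

    def dispatch(self, management) -> None:
        """Send the job to the target server; repeated once per source server.
        `management` is a placeholder for the management system object."""
        management.provision_volumes(self.target_server, self.volumes)
        management.create_relationship(self.source_server, self.target_server)
```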

The Management Server is activated, initial synchronisation starts, and once the initial synchronisation has completed replication is ongoing. Once all source servers have been attached and activated, the system supports archiving via scheduled tasks on target server(s).

This creates a backup script on the target server which is used to run a scheduled task.

Archiving is activated once all source servers have been attached. The system supports archiving via scheduled tasks on target server(s).

This creates a backup script on the target server which can now be used to run a scheduled task.
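Registering the generated backup script as a scheduled task is typically done through the operating system's scheduler. A hedged sketch using the standard Windows `schtasks` utility follows; the task name and script path are illustrative assumptions.

```python
import subprocess

def schedule_nightly_archive(script_path: str = r"C:\backup\run_archive.cmd") -> None:
    """Create a daily scheduled task on the target server that runs the
    backup script generated by the system (administrative rights required)."""
    subprocess.run(
        ["schtasks", "/Create",
         "/TN", "ReplicaArchive",   # task name (assumed)
         "/TR", script_path,        # backup script created on the target
         "/SC", "DAILY",
         "/ST", "01:00"],           # run at 01:00 each day
        check=True,
    )
```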

From time to time there may be a requirement to execute a backup outside the normal backup schedule; individual volumes or servers are selected for non-scheduled backups. There are three options: backup the replicated data as it stands at the time, backup the replicated data using the last complete and consistent shadow set, or revert the replicated data to its last consistent point. The backup then runs on the target server(s).
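The three ad-hoc backup options could be expressed as a simple choice handed to the target server; the following sketch uses assumed names and stand-in method calls rather than anything defined by the patent.

```python
from enum import Enum

class AdHocBackupOption(Enum):
    AS_IS = "backup replicated data as it stands"
    LAST_SHADOW_SET = "backup from the last complete and consistent shadow set"
    REVERT_TO_CONSISTENT = "revert replicated data to its last consistent point"

def run_adhoc_backup(target, volumes, option: AdHocBackupOption) -> None:
    """Run a backup outside the normal schedule on the target server.
    `target` and its methods are hypothetical placeholders."""
    if option is AdHocBackupOption.AS_IS:
        data = volumes                                     # replica as it stands now
    elif option is AdHocBackupOption.LAST_SHADOW_SET:
        data = target.last_consistent_shadow_set(volumes)  # last consistent view
    else:  # REVERT_TO_CONSISTENT
        target.revert_to_last_consistent_point(volumes)
        data = volumes
    target.backup(data)
```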

Files & folders to be recovered from archives are selected from the archive list. Appropriate drives and the desired location are assigned.

End of day recovery and backup are conducted. The system will list all the volumes that are available to be recovered from the archives.

Recovery is launched by selecting the required volume(s) and the process is initiated at the relevant target server(s) where the volumes will be provisioned and the archived data restored to the requested volumes.

Once all machines / volumes are recovered, new virtual machine(s) are registered and are displayed in the host inventory. A virtual machine is a software computer that, like a physical computer, runs an operating system and applications. An operating system installed on a virtual machine is called a guest operating system. Because every virtual machine is an isolated computing environment, one can use virtual machines as desktop or workstation environments, as testing environments, or to consolidate server applications. Virtual machines run on hosts. The same host can run many virtual machines.

The backup & recover process creates an archive and then performs a recovery using that archive. Required volume(s) are selected, and the process will (a) use the replicated data as it stands at the time of selection or (b) use the last complete and consistent shadow set.

The data is sent to the relevant target server(s) where the target volumes are backed up before the recovery volume(s) are provisioned and the archived data restored. Once all machines/volumes have been recovered, new virtual machine(s) are registered in the host inventory.

The system conducts sector copy of volumes & recovery. This recovery is best suited for creating test & development environments. Machine(s) are recovered by performing a sector copy of the replicated volume(s).

For the required volume(s), (a) use the replicated data as it stands at the time of selection or (b) use the last complete and consistent shadow set. The data is then sent to the relevant target server(s). Volumes are provisioned for the recovery. The data is recovered by doing a sector by sector copy of the target data as per option (a) or (b) above. Once all machines/volumes have been recovered, new virtual machine(s) are registered in the host inventory.
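The sector-copy recovery can be pictured as a plain chunked read/write loop over the replicated volume. The sketch below is illustrative only: a real implementation would work on raw block devices with the necessary privileges and would handle alignment, sparse regions and retries.

```python
def sector_copy(source_device: str, dest_device: str,
                sector_size: int = 512, sectors_per_chunk: int = 2048) -> int:
    """Copy `source_device` onto `dest_device` sector by sector.
    Returns the number of bytes copied."""
    chunk = sector_size * sectors_per_chunk
    copied = 0
    with open(source_device, "rb") as src, open(dest_device, "r+b") as dst:
        while True:
            buf = src.read(chunk)
            if not buf:
                break
            dst.write(buf)
            copied += len(buf)
    return copied
```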

The system conducts disaster recovery and migration. Disaster recovery is the fastest recovery. This recovery option is also used for machine migrations, i.e. physical to physical, physical to virtual, virtual to physical and virtual to virtual.

Two options are available as follows:

Option 1 - Launch Disaster Recovery - is for a disaster situation, as the target volumes are used for the recovery, which means that replication is stopped and can no longer continue.

Option 2 - Launch Disaster Recovery Test - is used when recovery is for test purposes. Replication can be re-continued on completion of the test without having to recollect all volumes and data.

Either option requires volume(s) to be selected. The data is sent to the relevant target server(s). For the Launch Disaster Recovery option the target volumes are prepared for recovery and the target server is then rebooted. For the Launch Disaster Recovery Test option the target volumes are prepared for recovery and the target server is shut down for the duration of the test.
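The two launch options differ mainly in what happens to the target machine after the volumes are prepared; a minimal sketch, with method names assumed for illustration:

```python
def launch_disaster_recovery(target, volumes, test: bool = False) -> None:
    """Prepare the target volumes and either reboot the target (real
    disaster recovery, replication stops) or shut it down for the duration
    of a test (replication can be resumed afterwards)."""
    target.prepare_volumes_for_recovery(volumes)
    if test:
        target.shutdown()   # Option 2: Launch Disaster Recovery Test
    else:
        target.reboot()     # Option 1: Launch Disaster Recovery
```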

Once all machines / volumes have been recovered, new virtual machine(s) are registered in the host inventory.

The system for supporting clustered source machines is the same as for protecting individual source machines. The system must be installed on each node of a cluster.

In the system Management GUI, each node should first be added as an individual server. A process is used to configure clusters and to change and define the cluster name. Required nodes are then selected and the cluster configuration is updated; on completion of the process, the nodes are displayed and listed along with the appropriate cluster. The first node is attached in the same way as previously described, i.e. for the selected node, the source is attached to the target server(s) and the required volumes are selected. The data is then sent to the target servers and, when completed, is displayed in the GUI. The second node of the cluster is added manually and replication is activated.

Overall status will show data related to the source server status/configuration, data related to the target server status/configuration, data related to the management server status/configuration and data related to the current status of replication.

Real time reporting and monitoring for volumes of the source server displays data relating to the current status of replication, presenting the following information:

* whether it is being replicated to Targets 1 & 2;
* whether it is in a related volume set, in other words its consistency is linked to other pre-determined volumes;
* the number of shadow copies since the first synchronisation;
* the date and time of the current shadow copy;
* the size of the volume in GB and how much data has been replicated to Targets 1 & 2 (note: these can be different numbers because some files are deliberately not replicated between source and target);
* the date and time of the previous shadow copy and the size of the volume and how much data has been replicated to the Targets.
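The per-volume report described above might be held in a record like the following; the field names are illustrative, not taken from the patent.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class VolumeReplicationStatus:
    targets: list[str]                           # e.g. ["Target 1", "Target 2"]
    related_volume_set: str                      # volumes whose consistency is linked
    shadow_copies_since_first_sync: int
    current_shadow_copy_at: datetime
    volume_size_gb: float
    replicated_gb_per_target: dict[str, float]   # may differ from the volume size
    previous_shadow_copy_at: datetime
    previous_replicated_gb: float
```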

All of the data explained above is continuously updated and refreshed by the system.

The source server filter simplifies the visual representation in the GUI to show only the selected source server and any related target server(s).

The filter function simplifies the visual representation in the GUI to show only the selected source server and any related target server(s).

Overall Status provides a visual representation of the replication status of all servers under the Management Server's control. The replication status is one of the following:

* replication is running normally;
* replication has been deliberately stopped at user request;
* volumes have been attached but replication has not yet been activated;
* initial synchronisation has been started and the replication process is underway but not yet completed;
* replication has experienced an error which will require investigation within the logs;
* replication has been purposefully stopped by the Management Server due to an error it is aware of.

Overall Status also provides a visual representation of the archive space on each target server, providing information on how much free archive space is available in both percentage and GB terms; there is also a pie chart representation.
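Free archive space in GB and percentage terms can be read straight from the filesystem hosting the archive; a minimal sketch, assuming the archive lives on an ordinary mounted path:

```python
import shutil

def archive_free_space(archive_path: str) -> tuple[float, float]:
    """Return (free_gb, free_percent) for the archive location on a target server."""
    usage = shutil.disk_usage(archive_path)
    free_gb = usage.free / 1024 ** 3
    free_percent = usage.free / usage.total * 100
    return free_gb, free_percent

# Example (path is an assumption): free_gb, free_pct = archive_free_space(r"E:\archives")
```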

The scan (monitoring) services for the source server(s), target server(s) and the archive environment are all managed from the system.

The system enables the Management Server to continually monitor and update the replication status of source server(s), target server(s) and archive environments. The scan services can be stopped/started as required.

The system hosts involved in replication can provision disk(s), create recovered virtual machine(s), etc.

The system has a process which describes the archives and recovery of server(s) at the end of the day. The management tool provides an event log, configuration options for the GUI and an update tool for the system running on all source servers and target servers.

The Refresh Rate is configurable and controls how often the GUI in the overall status is refreshed using information from the Management Server.

The system has an option to filter out source server(s) that have been added to the Management Server but not yet attached to a target server(s).

The system provides a visual representation of the archive space on each target server; free archive space is shown in both percentage and GB terms and as a pie chart representation.

The system provides the scan (monitoring) services for the source server(s), target server(s) and the archive environment.

System features enable the management server to continually monitor and update the replication status of source server(s), target server(s) and archive environments.

The management system provides an event log, configuration options for the GUI and an update on all source servers and target servers. The event log provides a visual and chronological list of executed functions for troubleshooting. Further detailed information is available from the application logs on the relevant machine.

The system can recover not just one but many source machines on each target machine in parallel. Precise numbers will vary depending on the target machine specification and other factors. The system has been designed to manage many target machines and can therefore protect very large numbers of source machines. Furthermore, data volumes can range from a few gigabytes to many terabytes.

The system is supported on either physical or virtual source machines.

Source machines can be stand alone or clustered on physical hardware or virtualised or any combination thereof. Target machines can either be physical machines or, more likely, virtualised machines.

Examples

Normal replication of continuous data in protection mode is shown in figure 2. Source machines (19), both physical and virtual, are protected (5) by the system on 2 target machines (26, 27), one situated locally for high availability (26), and one situated remotely (27), i.e. a secondary site or a hosted site, for full disaster recovery protection. The source machine (20) is the machine whose data is to be captured (3). The source machine (20) is linked to virtual machines (21, 22) which retain different sets of data, e.g. accounts, production. The source machine is also linked to physical machines (23, 24) which retain different types of data, e.g. sales and pensions. The source machine is also physically linked to storage area network (SAN)/network attached storage (NAS)/direct attached storage (DAS) (25). The captured data is lifted across wide area network (WAN)/local area network (LAN)/Internet Protocol (IP) (8) to target machines which could be local on premise (26) or remote (27).

The local target machine (28) is linked to a virtual machine with the system (29). The local target machine is also physically linked to SAN/NAS/DAS (33). The remote target machine (30) is linked to a virtual machine with the system (31) which is further linked to a remote machine with the management system (32). The remote target machine (30) is also physically linked to SAN/NAS/DAS (34). Local replication of data occurs between the local target system (29) and the LAN/WAN/IP (8), and similarly remote replication of data occurs between the remote target system (31) and the LAN/WAN/IP (8).

In the event of a localised problem affecting one or more source machines (20, 21, 22) but not affecting the entire site, the system can execute recovery of selected source machines from the local (on premise) target machine as shown in Figure 3, where there is high availability mode to the local target.

In the event of a major disaster or site outage, the system can execute recovery of all protected source machines from the remote (hosted) target machine. Figure 4 shows the disaster recovery process to remote target machines. The local source machines (19, 20, 21, 22, 23, and 24) stop, along with the local storage SAN/NAS/DAS (25), which stops lifting data to the LAN/WAN/IP (8). The remote target machine (30) is in operation along with the system (31) and management system (32) remote machines, which recover virtual data (37, 38, 39, and 40) such as accounts, production, sales and pensions stored on remote machines.

GB1309543.5A 2013-05-29 2013-05-29 System to control backup migration and recovery of data Withdrawn GB2514569A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1309543.5A GB2514569A (en) 2013-05-29 2013-05-29 System to control backup migration and recovery of data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1309543.5A GB2514569A (en) 2013-05-29 2013-05-29 System to control backup migration and recovery of data

Publications (2)

Publication Number Publication Date
GB201309543D0 GB201309543D0 (en) 2013-07-10
GB2514569A true GB2514569A (en) 2014-12-03

Family

ID=48784814

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1309543.5A Withdrawn GB2514569A (en) 2013-05-29 2013-05-29 System to control backup migration and recovery of data

Country Status (1)

Country Link
GB (1) GB2514569A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1640868A2 (en) * 2004-09-22 2006-03-29 Microsoft Corporation Method and system for synthetic backup and restore
US7478117B1 (en) * 2004-12-10 2009-01-13 Symantec Operating Corporation Restoring system state from volume shadow copy service by mounting disks
US20060224642A1 (en) * 2005-04-01 2006-10-05 Microsoft Corporation Production server to data protection server mapping
US20080077622A1 (en) * 2006-09-22 2008-03-27 Keith Robert O Method of and apparatus for managing data utilizing configurable policies and schedules
US20110167221A1 (en) * 2010-01-06 2011-07-07 Gururaj Pangal System and method for efficiently creating off-site data volume back-ups

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Intronis cloud backup service, http://www.intronis.com/product/backup-software.php?wsid=blg-textlink [accessed on 11 November 2013] *

Also Published As

Publication number Publication date
GB201309543D0 (en) 2013-07-10

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)