US20220300383A1 - Mount and migrate - Google Patents

Mount and migrate

Info

Publication number
US20220300383A1
Authority
US
United States
Prior art keywords
storage
server database
copy
database
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
US17/805,820
Inventor
Michael Harold SALINS
Durgesh Kumar VERMA
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Google LLC
Original Assignee
Google LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Google LLC filed Critical Google LLC
Priority to US17/805,820 priority Critical patent/US20220300383A1/en
Assigned to GOOGLE LLC reassignment GOOGLE LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ACTIFIO, INC.
Assigned to ACTIFIO, INC. reassignment ACTIFIO, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: VERMA, DURGESH KUMAR, SALINS, MICHAEL HAROLD
Publication of US20220300383A1 publication Critical patent/US20220300383A1/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14 Error detection or correction of the data by redundancy in operation
    • G06F11/1402 Saving, restoring, recovering or retrying
    • G06F11/1446 Point-in-time backing up or restoration of persistent data
    • G06F11/1458 Management of the backup or restore process
    • G06F11/1469 Backup restoration techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10 File systems; File servers
    • G06F16/11 File system administration, e.g. details of archiving or snapshots
    • G06F16/119 Details of migration of file systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14 Error detection or correction of the data by redundancy in operation
    • G06F11/1402 Saving, restoring, recovering or retrying
    • G06F11/1446 Point-in-time backing up or restoration of persistent data
    • G06F11/1458 Management of the backup or restore process
    • G06F11/1461 Backup scheduling policy
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14 Error detection or correction of the data by redundancy in operation
    • G06F11/1402 Saving, restoring, recovering or retrying
    • G06F11/1446 Point-in-time backing up or restoration of persistent data
    • G06F11/1458 Management of the backup or restore process
    • G06F11/1464 Management of the backup or restore process for networked environments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16 Error detection or correction of the data by redundancy in hardware
    • G06F11/20 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/2053 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
    • G06F11/2094 Redundant storage or storage space
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/21 Design, administration or maintenance of databases
    • G06F16/214 Database migration support
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2201/00 Indexing scheme relating to error detection, to error correction, and to monitoring
    • G06F2201/80 Database-specific techniques

Definitions

  • FIGS. 6-11 illustrate a GUI and process for migrating a database according to certain implementations.
  • FIG. 6 illustrates a GUI for initiating a mount, according to certain implementations, for example after selecting to migrate a database.
  • For a restore, the same GUI can be used except that the user does not select a target and the system does not display the target.
  • A user selects a target 601 as the mount destination. From the list 603 of available databases, the user selects one or more databases 605 to migrate (e.g., SQL server database 103). In certain implementations, the user selects a label 607 for the migrate. By selecting submit 609, the user initiates a mount, such as discussed with respect to FIG. 1C. The system mounts the new database to the SQL Server (e.g., SQL server 111) on the selected target server (e.g., Dev Database Server 109) so that the new database can begin running off of the copied data (e.g., copy data storage 108).
  • FIG. 7 illustrates a GUI where the user can choose to initiate a migrate, according to certain implementations.
  • The system can display active images that have been initiated, for example, as shown in FIG. 6. In certain implementations, the system displays a list containing each active image and includes enough information for the user to identify the one they want. In certain implementations, this can include the application, source host, image state, and/or active image. The image state can be mounted, indicating, as in FIG. 6, that the image has been mounted; migrating, indicating that the data is in a process of migrating as described herein; or finalizing, indicating that the process is in the finalization step as described herein. For a restore, the image states can be the same but indicate that they are part of a restore. The user can select an active image and choose to migrate the image. If a migrate is selected, for example, the system begins a process of migrating such as shown in FIG. 1D and FIG. 3.
  • After the user initiates a migration, they are asked to specify a label 801/901/1001, frequency 803/903/1003, and smart copy thread count 805/905/1005 for the migrate. If the active image selected was created by a mount (and not a restore mount), the user also is allowed to specify the location to which the database will be migrated, such as shown in FIGS. 8-10, and can choose if database files should be renamed while being migrated 807/907/1007.
  • A user enters a label so that migration jobs linked to this request can easily be identified and reported on later. The user selects the frequency to determine how often migration jobs should run. The user may enter the smart copy thread count to control the performance of the copy process, and the user may select if database files should be renamed so that renamed databases can continue to match the underlying files for the database.
  • The user can also select a destination for the migrated files on the new server, i.e., to where the files should be copied. The location can be the same drive/path as on the source server, as shown in FIG. 8. Alternatively, the location can be a new location at the volume level, as shown in FIG. 9; if this is selected, the user specifies a target volume path name for each source volume being migrated. Or the location can be a new location at the file level, as shown in FIG. 10; if this is selected, the user specifies a target location path name for each source file to be migrated.
  • The migrate request is completed, for example, by selecting submit 809/909/1009. In certain implementations, this begins a migration to the new location, such as is shown in FIG. 1D and FIG. 3. A sketch of such a request follows.
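As a sketch only, the fields collected by these GUI screens could be assembled into a request like the following. Every key name and value here is a hypothetical illustration keyed to the numbered GUI elements above, not a documented request format.

```python
# Hypothetical migrate request assembled from the GUI fields described above.
migrate_request = {
    "label": "dev-refresh",           # 801/901/1001: identifies related migration jobs
    "frequency_hours": 24,            # 803/903/1003: how often migration jobs run
    "smart_copy_thread_count": 4,     # 805/905/1005: controls copy performance
    "rename_database_files": False,   # 807/907/1007: keep files matching renamed DBs
    # Destination choice, per FIGS. 8-10:
    "destination": {"mode": "same_as_source"},                        # FIG. 8
    # or: {"mode": "by_volume", "volumes": {"D:\\": "E:\\"}}          # FIG. 9
    # or: {"mode": "by_file", "files": {"D:\\db.mdf": "E:\\db.mdf"}}  # FIG. 10
}
```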
  • FIG. 11 illustrates a GUI for finalizing a migrate, according to certain implementations. In certain implementations, this is similar to the GUI shown in FIG. 7 .
  • Once a migration has been initiated, the image state changes to migrating or restore migrating. For a given active image, a migrate or restore is canceled by selecting cancel restore/migrate 1103.
  • A migrate or restore is finalized by selecting begin finalize process 1105. In certain implementations, this begins the process described, for example, in FIG. 1E and FIG. 4.
  • In certain implementations, an option (“run resync job now”) is included to run a migration job without waiting for the next scheduled job, for example, while the system sleeps in step 360 of FIG. 3.
  • The subject matter described herein can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structural means disclosed in this specification and structural equivalents thereof, or in combinations of them.
  • The subject matter described herein can be implemented as one or more computer program products, such as one or more computer programs tangibly embodied in an information carrier (e.g., in a machine readable storage device), or embodied in a propagated signal, for execution by, or to control the operation of, data processing apparatus (e.g., a programmable processor, a computer, or multiple computers).
  • A computer program (also known as a program, software, software application, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
  • A computer program does not necessarily correspond to a file.
  • A program can be stored in a portion of a file that holds other programs or data, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code).
  • A computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Quality & Reliability (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Systems and methods are disclosed for migrating or restoring a server database such that the migrated server database can be used before data has been copied to storage for the migrated server database. Data used by a server database is copied to a copy storage, which is mounted to a second server database. The second server database is brought online using the copy of data. The copy of data is copied to a second storage. The second server database is brought offline and switched to run from the second storage. The second server database is brought back online, thereby permitting use of the second server database before copying data to the second storage.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • This U.S. patent application is a continuation of, and claims priority under 35 U.S.C. § 120 from, U.S. patent application Ser. No. 16/812,064, filed on Mar. 6, 2020. The disclosure of this prior art application is considered part of the disclosure of this application and is hereby incorporated by reference in its entirety.
  • TECHNICAL FIELD
  • The subject matter disclosed in this application generally relates to server databases, such as SQL databases, and in particular to migrating or restoring these server databases.
  • BACKGROUND
  • When a database is migrated or restored, it can only be used once the migrate or restore is complete. As shown in FIG. 12, the database data may be stored in storage 1204 that can be accessed by a database server 1201 hosting, for example, SQL server database 1203, to enable application 1202. In certain circumstances, a user may want to migrate the database to another server, such as SQL Server Database 1211 on database server 1209, or restore the SQL Server Database 1203 on server 1201. To do so, a user initiates the migrate or restore and waits for the data to migrate or restore. The user must wait for all the data to be copied before the SQL Server Database 1211 or restored SQL Server Database 1203 is brought online and can be used.
  • A problem arises if the user wants to use the SQL Server Database 1211, or the restored SQL Server Database 1203, before the migrate or restore is complete. According to certain implementations, the instant application solves this problem with systems and/or methods that allow access to data before a migrate or restore is complete.
  • SUMMARY
  • According to certain implementations, the instant application provides systems and/or methods that allow access to data by a new application before a migrate or restore is complete.
  • The disclosed subject matter includes a computerized method of migrating or restoring a server database such that the migrated server database can be used before data has been copied to storage for the migrated server database. The method includes storing a copy of data used by a server database in a copy storage, the copy storage being different storage than a first storage used by the server database. The method includes mounting the copy of data in the copy storage to a second server database such that the second server database can use the copy of data on the copy storage. The method further includes bringing the second server database online, the second server database using the copy of data on the copy storage. The method further includes copying the copy of data to a second storage for use by the second server database; and finalizing the migrate to the second server database. Finalizing the migrate to the second server database includes switching the second server database to use the second storage, and unmounting the copy of data in the copy storage. The method thereby permits use of the second server database before copying data to the second storage.
  • The disclosed subject matter includes a system for migrating or restoring a server database such that the migrated server database can be used before data has been copied to storage for the migrated server database. The system includes a processor and a memory coupled to the processor and including instructions that, when executed by the processor, cause the processor to store a copy of data used by a server database in a copy storage, the copy storage being different storage than a first storage used by the server database. The system processor is also caused to mount the copy of data in the copy storage to a second server database such that the second server database can use the copy of data on the copy storage. The system processor is also caused to bring the second server database online, the second server database using the copy of data on the copy storage. The system processor is also caused to copy the copy of data to a second storage for use by the second server database. The system processor is also caused to finalize the migrate to the second server database, including switch the second server database to use the second storage, and unmount the copy of data in the copy storage. The system thereby permits use of the second server database before copying data to the second storage.
  • The disclosed subject matter includes a non-transitory computer readable medium having executable instructions operable to cause an apparatus to store a copy of data used by a server database in a copy storage, the copy storage being different storage than a first storage used by the server database. The apparatus is also caused to mount the copy of data in the copy storage to a second server database such that the second server database can use the copy of data on the copy storage. The apparatus is also caused to bring the second server database online, the second server database using the copy of data on the copy storage. The apparatus is also caused to copy the copy of data to a second storage for use by the second server database. The apparatus is also caused to finalize the migrate to the second server database, including switch the second server database to use the second storage, and unmount the copy of data in the copy storage. The apparatus is thereby permitted to use the second server database before copying data to the second storage.
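To make the summarized sequence concrete, the following is a minimal sketch in Python. Every object and method name here (store_copy_of, mount, bring_online, and so on) is a hypothetical placeholder chosen for illustration, not an API defined by this disclosure.

```python
# Minimal sketch of the summarized mount-and-migrate method.
# All objects and methods below are hypothetical placeholders.

def mount_and_migrate(source_db, copy_storage, second_db, second_storage):
    # Store a copy of the data used by the server database in copy storage
    # (different storage than the first storage used by the source database).
    image = copy_storage.store_copy_of(source_db)

    # Mount the copy to the second server database so it can use the data
    # directly from copy storage, and bring it online immediately.
    second_db.mount(image)
    second_db.bring_online()
    # The second database is usable from this point on, before any data
    # has been copied to the second storage.

    # Copy the data (in full at first, then incrementally) to the second
    # storage for use by the second server database.
    while not second_db.finalize_requested():
        image.copy_changes_to(second_storage)

    # Finalize: switch the second database to the second storage and
    # unmount the copy-storage image.
    second_db.bring_offline()
    image.copy_changes_to(second_storage)   # final refresh
    second_db.switch_storage(second_storage)
    second_db.bring_online()
    second_db.unmount(image)
```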
  • In certain implementations, the second server database is the same as the server database. In further implementations, the storage used by the second server database is the same as a storage used by the server database, and in other further implementations, the storage used by the second server database is different than a storage used by the server database.
  • In certain implementations, the second server database is different from the server database. In further implementations, the storage used by the second server database is the same as a storage used by the server database, and in other further implementations, the storage used by the second server database is different than a storage used by the server database.
  • In certain implementations, switching the second server database to use the second storage includes bringing the second server database offline, refreshing the data on the storage for use by the server database, modifying the second server database to use data in the second storage, and bringing the second server database online. In further implementations, a transaction log is maintained including information representative of transactions on the database, and after one of the refreshing the data on the storage for use by the server database, the modifying the second server database to use data in the second storage, or the bringing the second server database online, the second database is updated based on the transaction log.
  • Before explaining example implementations consistent with the present disclosure in detail, it is to be understood that the disclosure is not limited in its application to the details of constructions and to the arrangements set forth in the following description or illustrated in the drawings. The disclosure is capable of implementations in addition to those described and is capable of being practiced and carried out in various ways. Also, it is to be understood that the phraseology and terminology employed herein, as well as in the abstract, are for the purpose of description and should not be regarded as limiting.
  • These and other capabilities of implementations of the disclosed subject matter will be more fully understood after a review of the following figures, detailed description, and claims. It is to be understood that both the foregoing general description and the following detailed description are explanatory only and are not restrictive of the claimed subject matter.
  • DESCRIPTION OF DRAWINGS
  • Various objects, features, and advantages of the disclosed subject matter can be more fully appreciated with reference to the following detailed description of the disclosed subject matter when considered in connection with the following drawings.
  • FIGS. 1A-1E illustrate a system and method of mounting and migrating data according to certain implementations;
  • FIG. 2 illustrates a flowchart of a method of mounting a database for a restore according to certain implementations;
  • FIG. 3 illustrates a flow chart of steps to restore a database via migrate according to certain implementations;
  • FIG. 4 illustrates a flowchart of steps to finalize a restore performed via migration according to certain implementations;
  • FIG. 5 illustrates exemplary XML syntax to specify the destination for a migration according to certain implementations;
  • FIG. 6 illustrates a GUI for initiating a mount, according to certain implementations;
  • FIG. 7 illustrates a GUI for selecting to initiate a migrate, according to certain implementations;
  • FIG. 8 illustrates a GUI for initiating a migrate with the original file locations preserved, according to certain implementations;
  • FIG. 9 illustrates a GUI for initiating a migrate with new file locations selected by volume, according to certain implementations;
  • FIG. 10 illustrates a GUI for initiating a migrate with new file locations selected by file, according to certain implementations;
  • FIG. 11 illustrates a GUI for selecting to initiate finalization, or cancel migration, according to certain implementations; and
  • FIG. 12 illustrates an environment applicable for mounting and migrating data according to certain implementations.
  • DETAILED DESCRIPTION
  • In the following description, numerous specific details are set forth regarding the systems and methods of the disclosed subject matter and the environment in which such systems and methods may operate, in order to provide a thorough understanding of the disclosed subject matter. It will be apparent to one skilled in the art, however, that the disclosed subject matter may be practiced without such specific details, and that certain features, which are well known in the art, are not described in detail in order to avoid complication of the disclosed subject matter. In addition, it will be understood that the implementations described below are only examples, and that it is contemplated that there are other systems and methods that are within the scope of the disclosed subject matter.
  • In certain implementations, the mounting and migrating system and process described herein allows a user to copy a SQL Server database to a server and storage with minimum delay before they can use the database, thereby reducing recovery time objective (RTO). For example, in certain implementations, a user uses the system and/or process described herein to restore a database, for example, to what it looked like at an earlier point in time. For example, a restore can be used to recover from corruption in the database or an irreversible change that was accidentally made. In other implementations, the copying does not restore the database; this mount & migrate is used when a user wants to copy their database, for example when copying the database to another server and storage. Mount & migrate without restoring can be useful, for example, when the user requires a long-term or permanent copy of a database for, e.g., production, testing, development, QA, analytics, or other purpose as would be apparent to one skilled in the art.
  • While the description herein focuses on SQL server databases, one of ordinary skill would recognize that the systems and methods described herein could be applied to other databases, disks, or filesystems. Nonlimiting examples of other uses include Oracle (for example, in non-ASM configurations), other database engines that can run on a Windows platform, and database engines or filesystems in a non-Windows, non-LVM configuration.
  • FIG. 1 illustrates a system 100 for migrating or restoring data according to certain implementations. FIG. 1A illustrates the system 100 before a request to migrate or restore data. The system 100 includes production database server 101, production storage 104, application 102, data management virtualization engine 107, copy data storage 108, development (“dev”) database server 109, production storage 113 and application 110. The production database server 101 includes SQL server database 103 and connector 105. Development database server 109 includes SQL server 111 and connector 112.
  • In certain implementations, the system 100 is similar or analogous to the system 100 described in U.S. Pat. No. 10,282,201, which is incorporated herein by reference. In certain implementations, like numerals in the instant application correspond to like numerals in U.S. Pat. No. 10,282,201 and the description thereof.
  • SQL server database 103 is hosted on production database server 101. In certain implementations, production database server 101 is used for production environments. In other implementations, production database server 101 is used for other purposes or environments, such as development, test, quality assurance, analytics, disaster recovery, or others as would be apparent to one of skill in the art. Application 102 uses the SQL server database 103 for storing application data. The application can consume the SQL server database for various purposes using a variety of different connections, as would be apparent to one of skill in the art. The SQL server database 103 consumes data on production storage 104. In certain implementations, the production storage 104 is connected via a block storage protocol or is locally attached storage. The production database server 101 also includes a connector 105 that can be used to interface with data management virtualization engine 107, such as to read data from or write data to copy data storage 108.
  • The system 100 also hosts SQL server 111 on database server 109. In certain implementations, at this point, the development database server 109 does not yet have a database for the restore, which can be added in FIG. 1C. In certain implementations, SQL server 111 can be used in development environments. In other implementations, SQL server 111 can be used for purposes other than development environments, such as production, test, quality assurance, analytics, disaster recovery, or others as would be apparent to one of skill in the art. Application 110 uses the SQL server 111 for storing application data. The SQL server 111 consumes data on production storage 113. In certain implementations, the production storage 113 is connected via a block storage protocol or is locally attached storage. The dev database server 109 also includes a connector 112 that can be used to interface with data management virtualization engine 107, such as to read data from or write data to copy data storage 108.
  • The system 100 also includes data management virtualization engine 107. In certain implementations, the data management virtualization engine 107 manages the process for data capture and for mount and migrate activities. In certain implementations, it manages the copy data storage 108 to keep track of multiple point-in-time images of the source database (SQL server database 103) and makes a read/write virtual version of them available when and where needed. In certain implementations, data management virtualization engine 107 is an Actifio appliance. The data management virtualization engine 107 uses copy data storage 108 to store data. In certain implementations, copy data storage 108 can be CDS, Sky, or CDX storage. As discussed above, the data management virtualization engine 107 interfaces with production database server 101 via connector 105, and with dev database server 109 via connector 112.
  • In certain implementations, production storage 104, copy data storage 108, and production storage 113 can be different storage, provisioned on the same storage, or some combination thereof.
  • FIG. 1B illustrates an exemplary first step for migrating data or restoring data according to certain implementations. In certain implementations, the SQL server database 103 remains online and operational during the process. First, a user configures the Data Management Virtualization Engine 107 to capture SQL Server Database 103 from database server 101 and store it on Copy Data Storage 108. For illustration purposes, the below description focuses on a migrate from SQL Server Database 103 to database server 109. However, it should be appreciated that similar steps are taken with respect to a restore of SQL Server Database 103 on the same server and storage, as would be apparent to one of skill in the art from the descriptions herein.
  • Once the user configures the data capture, data management virtualization engine 107 performs a periodic and scheduled data capture 106 to capture data from production storage 104 and copies it to copy data storage 108. This is performed through connector 105. In certain implementations this is performed as described in U.S. Pat. No. 10,282,201.
  • FIG. 1C illustrates an exemplary second step for migrating data according to certain implementations. In certain implementations, the data management virtualization engine 107 stores multiple point-in-time copies of SQL server database 103 on copy data storage 108. The user requests a mount of a point-in-time copy of SQL server database 103 to SQL server 111. Alternatively, the user requests a restore of SQL server database 103 back to the production database server 101. Data management virtualization engine 107 mounts the copy data storage 108 to SQL server 111 so that the SQL server 111 can access the data stored thereon. This permits the copy of SQL Server Database 103 to begin running before the data is copied to production storage 113. If the user requests a restore instead of a mount, then the data management virtualization engine 107 mounts the copy data storage 108 to production database Server 101 so that the SQL server can access the data stored thereon, and instructs the connector 105 to remove SQL Server Database 103 and replace it with the copy mounted from the copy data storage 108 (for example as discussed with respect to FIG. 2). This permits the copy of SQL Server Database 103 to begin running before the data is restored to production storage 104.
  • FIG. 1D illustrates an exemplary third step for migrating data according to certain implementations. Data management virtualization engine 107 schedules and then performs a data migrate 115 of the data from copy data storage 108 to production storage 113 (for example as described with respect to FIG. 3). This is performed through connector 112. In certain implementations, the data copy will be repeated on a frequency specified by the user. In certain implementations, the copying is incremental-forever updates after the initial copy. In certain implementations, the copying is a block level exact duplication of SQL server database 103 data files and does not require data formatting.
  • FIG. 1E illustrates an exemplary fourth step for migrating data according to certain implementations. Once all data has been copied from copy data storage 108 to production storage 113 at least once, the user initiates a finalize step. In this step, the database copy 103 running on SQL Server 111 is taken offline and one final incremental migration is performed. Then the connector 112 instructs the SQL server 111 to switch to using data stored on production storage 113. In certain implementations, the data presented from copy data storage 108 is removed from database server 109.
  • By performing the steps in FIGS. 1B-E, an SQL database can be migrated with earlier access to the migrated database. This early access to the database improves productivity. Without this capability, customers first wait for all of the data to be copied to the destination server and storage before they may begin accessing the database. In certain implementations, the benefit is magnified proportionally to the size of the database as larger databases take longer to copy.
  • FIG. 2 illustrates a flowchart of a method of restoring a database, according to certain implementations. In certain implementations, this is performed on the system 100 of FIG. 1. In step 210, the customer initiates the SQL database restore (e.g., of SQL server database 103). In step 220 the mount replaces the source database. In certain implementations, the mount is an app-aware mount so that the data is more easily accessed by the particular application, for example, as an SQL database. In certain implementations, step 220 is performed as described in FIG. 1C, discussed above. By replacing the database with the app-aware mount, the database can be brought online and used, without waiting for the data to be copied (restored) first.
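As a sketch of this restore path, and assuming hypothetical helper names (the patent does not define a code-level API), the mount-replaces-source step of FIG. 2 might look like:

```python
# Sketch of the restore-via-mount of FIG. 2; all helper names are hypothetical.

def restore_via_mount(engine, connector, source_db, point_in_time_image):
    # Step 210: the customer initiates the SQL database restore.
    # Step 220: an app-aware mount of the chosen point-in-time image
    # replaces the source database, so the database can be brought online
    # and used without waiting for the data to be copied (restored) first.
    mount = engine.mount(point_in_time_image, target=source_db.server)
    connector.remove_database(source_db)   # drop the corrupted/unwanted DB
    connector.attach_database(mount)       # run the DB from the mounted copy
```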
  • FIG. 3 illustrates a flowchart of a method of migrating a database, according to certain implementations. For example, the method of FIG. 3 follows the method of FIG. 2. In certain implementations, this is performed on the system 100 of FIG. 1. In step 310, the user initiates a migrate. For example, in certain implementations, the migrate can be similar to that described in FIG. 1D. In step 320 the system takes a snapshot of the database to be migrated. In certain implementations, the snapshot is a Microsoft Volume Shadow Copy Service (“VSS”) snapshot, for example, to achieve data consistency. In step 330 new change block tracking (“CBT”) bitmaps are added on the database (“DB”) files. The process for creating and storing bitmaps is described, for example, in U.S. Pat. No. 9,384,254, which is incorporated herein by reference. In step 340 the connector (e.g., connector 112) copies incremental data from the storage, such as copy data storage 108, to the server disk using n−1 bitmaps, where “n” is the current migration job, and the n−1 bitmaps were created in step 330 of the previous migration job. If the migration job is the first migration job, then there are no bitmaps and the copy will copy all data, not just changes. In step 350 the VSS snapshot that was created in step 320, for example on data management virtualization engine 107, is deleted, and the bitmaps from the previous iteration (if any) are deleted. In step 360 the system, such as data management virtualization engine 107, sleeps for, for example, a user-specified interval. In certain implementations, the interval is 1-24 hours. In certain implementations, a scheduler can be implemented to schedule migration jobs. After the interval is complete, in step 370 the system determines whether the user has finalized the restore, in which case no additional migrations are needed. This is accomplished by determining whether the user has initiated the finalize process by clicking, for example, “Begin Finalize Process” as shown in FIG. 11. In certain implementations, the finalize process becomes available to the user once the first migration job has completed successfully. The finalize process is used so that the copy on the production storage 113, which is not initially used, can be used to run the database from that storage. Finalize takes the DB offline to prevent any more changes, copies the changes to the production storage 113, and then updates SQL server 111 to run the database from production storage 113 instead of the mounted copy data storage 108. The DB is then brought online again. In certain implementations, the user must initiate this process so that the user can determine when the DB goes offline, such as during a scheduled outage.
  • If the user has finalized the restore, the process ends in step 380. If the user has not finalized the restore, the process repeats steps 320-370 until it is determined in step 370 that the user has finalized the migrate/restore. This permits the system to continue to capture changed data from the mounted database to migrate to the production storage. A sketch of this loop follows.
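The iteration of steps 320-370 can be summarized as follows. This is a hypothetical sketch only: every helper function named below stands in for connector or data management virtualization engine functionality that the patent describes only at the flowchart level, not as an actual API.

```python
# Hypothetical sketch of the migration loop of FIG. 3. The helpers
# (take_vss_snapshot, start_cbt_bitmaps, copy_all_data, copy_changed_blocks,
# delete_vss_snapshot, discard_bitmaps) are placeholders, not a real API.
import time

def run_migration_loop(db_files, interval_hours, user_has_finalized):
    prev_bitmaps = None  # no bitmaps exist before the first migration job
    while True:
        snapshot = take_vss_snapshot(db_files)           # step 320: consistent view
        new_bitmaps = start_cbt_bitmaps(db_files)        # step 330: track changes from now on
        if prev_bitmaps is None:
            copy_all_data(snapshot)                      # step 340: first job copies everything
        else:
            copy_changed_blocks(snapshot, prev_bitmaps)  # step 340: incremental copy via n-1 bitmaps
        delete_vss_snapshot(snapshot)                    # step 350: drop the snapshot...
        discard_bitmaps(prev_bitmaps)                    # ...and the previous iteration's bitmaps
        prev_bitmaps = new_bitmaps
        time.sleep(interval_hours * 3600)                # step 360: user-specified interval
        if user_has_finalized():                         # step 370
            return                                       # step 380: proceed to finalize (FIG. 4)
```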
  • FIG. 4 illustrates a flowchart of a method 400 of finalizing a restore or migrate, according to certain implementations. For example, the method of FIG. 4 follows the method of FIG. 3. In certain implementations, this is performed after step 350 in FIG. 3 has completed at least once. In step 410 the customer initiates “finalize” for the restore, for example, after at least one migration job has completed successfully. In certain implementations, this occurs during step 360 in FIG. 3. In step 420 the database is taken offline in order to finalize the restore or migrate. In step 430 the system performs a final refresh of the data on the server disks using VSS and bitmaps, for example as described in steps 320-350 discussed above. This synchronizes the data with the latest version of the database. In step 440 the database is modified to use the data on production storage 113 rather than the copy data storage 108. In step 450 the database is brought online. In step 460 the system unmounts the storage from which the database had been running (e.g., copy data storage 108). In step 470 the process ends. A sketch of this flow follows.
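As with the migration loop above, the finalize flow can be summarized as a hypothetical sketch; every helper named here is a placeholder for functionality the patent describes only at the flowchart level.

```python
# Hypothetical sketch of the finalize flow of FIG. 4; helper names are
# placeholders, not a real API.
def finalize_restore_or_migrate(db, production_storage, copy_data_mount):
    take_database_offline(db)                       # step 420: prevent further changes
    final_refresh_with_vss_and_bitmaps(db)          # step 430: sync server disks to latest data
    repoint_database_files(db, production_storage)  # step 440: use production storage 113
    bring_database_online(db)                       # step 450
    unmount_storage(copy_data_mount)                # step 460: copy data storage 108 released
                                                    # step 470: done
```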
  • In certain implementations, the system includes a transaction log for tracking transactions during the finalization process. For example, after a database is taken offline for a final refresh, the system can continue to handle transactions for the database and store them in a transaction log. After the final refresh, such as described herein, the database is brought back online and the transaction logs are used to re-sync the database with transactions that occurred while the database was offline. In certain implementations, this is used on multi-node or distributed databases, such that only the specified node to be migrated or restored is taken offline and then re-synced with transactions that occurred while it was offline. In certain implementations, this provides further simplicity and automation, and allows per-node granularity.
  • Examples of such a multi-node database include Microsoft SQL Server Availability Groups (AG). In certain implementations, during the finalize stage, if the database is a member of an AG, it is not simply taken offline, synchronized, switched to the new storage, and brought online. Instead, on the specified node in the AG only, the database is dropped from the AG, the final synchronization and switch to new storage is performed, and the database is re-joined to the AG on that node. SQL re-syncs the database on the node with transaction logs for any transactions that took place during this process. A sketch of this per-node sequence follows.
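The following is a loose sketch of that per-node sequence, assuming it is run against the specific secondary replica being migrated. The server, database, and AG names are hypothetical, the storage-switch step is a placeholder for steps 420-460, and a real deployment may require additional preparation before re-joining; the two ALTER DATABASE statements are the standard T-SQL for removing a secondary database from an availability group and re-joining it.

```python
# Illustrative per-node AG finalize, run on the secondary replica being
# migrated. Server, database, and AG names are hypothetical;
# perform_final_sync_and_storage_switch is a placeholder, not a real API.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=ag-node-2;DATABASE=master;Trusted_Connection=yes",
    autocommit=True,
)
cur = conn.cursor()

# Drop the database from the AG on this node only; other replicas keep serving.
cur.execute("ALTER DATABASE [TestDB] SET HADR OFF;")

perform_final_sync_and_storage_switch("TestDB")  # hypothetical placeholder

# Re-join the AG on this node; SQL Server then re-syncs the database using
# transaction logs for transactions that occurred during the switch.
cur.execute("ALTER DATABASE [TestDB] SET HADR AVAILABILITY GROUP = [ProdAG];")
```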
  • FIG. 5 illustrates exemplary API parameter options to select the destination for the migrate of a database according to certain implementations. 510 illustrates the option to specify that the migrate should copy the files to the same location as the production database. For example, for the option “restore-location”, the user specifies usesourcelocation=“true” to indicate that the data should be migrated to the same location as the source location. 520 illustrates the option to specify that the migrate should copy the files with a different target file location for each production database file. For example, for the option “restore-location”, the user specifies usesourcelocation=“false” to indicate that the data should be migrated to a different location than the source location. The user also specifies the “file” option for each file, and provides the file name, source location, and target location for each. 530 illustrates the option to specify that the migrate should copy the files to a different target volume, while keeping files that were on the same production volume together on the new target volume. As with option 520, for the option “restore-location”, the user specifies usesourcelocation=“false” to indicate that the data should be migrated to a different location than the source location. The user also specifies the “volume” option for each production volume and provides the source and target location for each. Hypothetical renderings of these three forms follow.
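Since FIG. 5 itself is not reproduced here, the following reconstructions of the three parameter forms are hypothetical: element names, attribute names, and paths are illustrative only and do not reproduce the exact API syntax.

```python
# Hypothetical reconstructions of the three "restore-location" forms of
# FIG. 5; element names, attributes, and paths are illustrative only.

# 510: copy files to the same location as the production database.
same_location = '<restore-location usesourcelocation="true" />'

# 520: a different target location for each production database file.
per_file = (
    '<restore-location usesourcelocation="false">'
    '<file name="TestDB.mdf" source="E:\\data" target="F:\\data" />'
    '<file name="TestDB_log.ldf" source="E:\\log" target="F:\\log" />'
    '</restore-location>'
)

# 530: a different target volume per production volume, keeping files that
# shared a production volume together on the new target volume.
per_volume = (
    '<restore-location usesourcelocation="false">'
    '<volume source="E:\\" target="F:\\" />'
    '</restore-location>'
)
```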
  • FIGS. 6-11 illustrate a GUI and process for migrating a database according to certain implementations. FIG. 6 illustrates a GUI for initiating a mount, according to certain implementations, for example after selecting to migrate a database. In certain implementations, if the user instead selects to restore a database, the same GUI can be used, except that the user does not select a target and the system does not display the target. A user selects a target 601 as the mount destination. From the list 603 of available databases, the user selects one or more databases 605 to migrate (e.g., SQL server database 103). In certain implementations, the user selects a label 607 for the migrate. By selecting submit 609, the user initiates a mount, such as discussed with respect to FIGS. 1 and 2. In certain implementations, once submit 609 is selected, the system mounts the new database to the SQL Server (e.g., SQL server 111) on the selected target server (e.g., Dev Database Server 109) so that the new database can begin running off of the copied data (e.g., copy data storage 108).
  • FIG. 7 illustrates a GUI where the user can choose to initiate a migrate, according to certain implementations. For example, the system can display active images that have been initiated, for example, as shown in FIG. 6. The system displays a list containing each active image and includes enough information for the user to identify the one they want. In certain implementations, this can include the application, source host, image state, and/or active image. For example, for a migrate, the image state can be: mounted, indicating (as in FIG. 1C) that a database has been mounted for use on the new database server (e.g., SQL server 111); migrating, indicating that the data is in the process of migrating as described herein; or finalizing, indicating that the process is in the finalization step as described herein. For a restore with mount and migrate, the image states can be the same but indicate that they are part of a restore. The user can select an active image and choose to migrate the image. If a migrate is selected, for example, the system begins a process of migrating such as shown in FIG. 1D and FIG. 3.
  • In certain implementations, after the user initiates a migration, they are asked to specify a label 801/901/1001, frequency 803/903/1003, and smart copy thread count 805/905/1005 for the migrate. If the active image selected was created by a mount (and not a restore mount), then the user is also allowed to specify the location to which the database will be migrated, such as shown in FIGS. 8-10, and can choose whether database files should be renamed while being migrated 807/907/1007. For example, a user enters a label so that migration jobs linked to this request can easily be identified and reported on later. The user selects the frequency to determine how often migration jobs should run (FIG. 3, item 360). The user may enter the smart copy thread count to control the performance of the copy process, and the user may select whether database files should be renamed so that renamed databases can continue to match the underlying files for the database. The user can also select a destination for the migrated files on the new server, i.e., to where the files should be copied. For example, in certain implementations, the location is the same drive/path as on the source server, as shown in FIG. 8. In other implementations, the location is a new location at the volume level, as shown in FIG. 9. If this is selected, the user specifies a target volume path name for each source volume being migrated. In other implementations, the location is a new location at the file level, as shown in FIG. 10. If this is selected, the user specifies a target location path name for each source file to be migrated. The migrate request is completed, for example, by selecting submit 809/909/1009. In certain implementations, this begins a migration to the new location, such as is shown in FIG. 1D and FIG. 3.
  • FIG. 11 illustrates a GUI for finalizing a migrate, according to certain implementations. In certain implementations, this is similar to the GUI shown in FIG. 7. In certain implementations, once data has been migrated, the image state changes to migrating or restore migrating. For a given active image, a migrate or restore is canceled by selecting cancel restore/migrate 1103. In addition, a migrate or restore is finalized by selecting begin finalize process 1105. In certain implementations, this begins the process described, for example, in FIG. 1E and FIG. 4. In certain implementations, an option (“run resync job now”) is included to run a migration job without waiting for the next scheduled job. For example, the system performs step 360 in FIG. 3.
  • The subject matter described herein can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structural means disclosed in this specification and structural equivalents thereof, or in combinations of them. The subject matter described herein can be implemented as one or more computer program products, such as one or more computer programs tangibly embodied in an information carrier (e.g., in a machine readable storage device), or embodied in a propagated signal, for execution by, or to control the operation of, data processing apparatus (e.g., a programmable processor, a computer, or multiple computers). A computer program (also known as a program, software, software application, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file. A program can be stored in a portion of a file that holds other programs or data, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network.
  • The processes and logic flows described in this specification, including the method steps of the subject matter described herein, can be performed by one or more programmable processors executing one or more computer programs to perform functions of the subject matter described herein by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus of the subject matter described herein can be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).

Claims (20)

What is claimed is:
1. A computer-implemented method that when executed by data processing hardware causes the data processing hardware to perform operations comprising:
receiving a migration request from a user requesting to migrate data from source storage of a source server database to target storage of a target server database;
in response to receiving the migration request, copying initial data from the source storage to a copy storage, the copy storage different than the source storage of the source server database and the target storage of the target server database;
migrating the copy of the initial data from the copy storage to the target storage of the target server database;
after migrating the copy of the initial data from the copy storage to the target storage of the target server database, determining that an update occurred in the copy of the initial data at the copy storage; and
synchronizing the target storage of the target server database with the copy storage using the updated copy of the initial data.
2. The computer-implemented method of claim 1, wherein the target server database is the same as the source server database.
3. The computer-implemented method of claim 1, wherein the target storage of the target server database is the same as the source storage of the source server database.
4. The computer-implemented method of claim 1, wherein the target server database is different than the source server database.
5. The computer-implemented method of claim 1, wherein the target storage of the target server database is different than the source storage of the source server database.
6. The computer-implemented method of claim 1, wherein the source server database comprises an SQL server database.
7. The computer-implemented method of claim 1, wherein copying the initial data from the source server database to the copy storage comprises capturing the initial data periodically based on a scheduled data capture.
8. The computer-implemented method of claim 1, wherein determining that the update occurred in the copy of the initial data at the copy storage comprises maintaining a transaction log comprising information representative of transactions at the source server database.
9. The computer-implemented method of claim 1, wherein the target server database is a distributed database.
10. The computer-implemented method of claim 1, wherein the source server database is a distributed database.
11. A system comprising:
data processing hardware; and
memory hardware in communication with the data processing hardware, the memory hardware storing instructions that when executed on the data processing hardware cause the data processing hardware to perform operations comprising:
receiving a migration request from a user requesting to migrate data from source storage of a source server database to target storage of a target server database;
in response to receiving the migration request, copying initial data from the source storage to a copy storage, the copy storage different than the source storage of the source server database and the target storage of the target server database;
migrating the copy of the initial data from the copy storage to the target storage of the target server database;
after migrating the copy of the initial data from the copy storage to the target storage of the target server database, determining that an update occurred in the copy of the initial data at the copy storage; and
synchronizing the target storage of the target server database with the copy storage using the updated copy of the initial data.
12. The system of claim 11, wherein the target server database is the same as the source server database.
13. The system of claim 11, wherein the target storage of the target server database is the same as the source storage of the source server database.
14. The system of claim 11, wherein the target server database is different than the source server database.
15. The system of claim 11, wherein the target storage of the target server database is different than the source storage of the source server database.
16. The system of claim 11, wherein the source server database comprises an SQL server database.
17. The system of claim 11, wherein copying the initial data from the source server database to the copy storage comprises capturing the initial data periodically based on a scheduled data capture.
18. The system of claim 11, wherein determining that the update occurred in the copy of the initial data at the copy storage comprises maintaining a transaction log comprising information representative of transactions at the source server database.
19. The system of claim 11, wherein the target server database is a distributed database.
20. The system of claim 11, wherein the source server database is a distributed database.
US17/805,820 2020-03-06 2022-06-07 Mount and migrate Pending US20220300383A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/805,820 US20220300383A1 (en) 2020-03-06 2022-06-07 Mount and migrate

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US16/812,064 US11372733B2 (en) 2020-03-06 2020-03-06 Mount and migrate
US17/805,820 US20220300383A1 (en) 2020-03-06 2022-06-07 Mount and migrate

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US16/812,064 Continuation US11372733B2 (en) 2020-03-06 2020-03-06 Mount and migrate

Publications (1)

Publication Number Publication Date
US20220300383A1 true US20220300383A1 (en) 2022-09-22

Family

ID=77555908

Family Applications (2)

Application Number Title Priority Date Filing Date
US16/812,064 Active 2040-08-05 US11372733B2 (en) 2020-03-06 2020-03-06 Mount and migrate
US17/805,820 Pending US20220300383A1 (en) 2020-03-06 2022-06-07 Mount and migrate

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US16/812,064 Active 2040-08-05 US11372733B2 (en) 2020-03-06 2020-03-06 Mount and migrate

Country Status (1)

Country Link
US (2) US11372733B2 (en)

Patent Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7167965B2 (en) * 2001-04-30 2007-01-23 Hewlett-Packard Development Company, L.P. Method and system for online data migration on storage systems with performance guarantees
US20040193625A1 (en) * 2003-03-27 2004-09-30 Atsushi Sutoh Data control method for duplicating data between computer systems
US20130110770A1 (en) * 2011-10-27 2013-05-02 Scott Stevelinck Database clone
US20130191345A1 (en) * 2012-01-24 2013-07-25 Symantec Corporation Volume and partial volume merge to synchronize to non-homogeneous drive layouts
US20150019487A1 (en) * 2013-07-09 2015-01-15 Oracle International Corporation Online database migration
US20150019495A1 (en) * 2013-07-09 2015-01-15 Delphix Corp. Customizable storage system for virtual databases
US20160077925A1 (en) * 2014-09-16 2016-03-17 Actifio, Inc. Multi-threaded smart copy
US20160321339A1 (en) * 2015-04-30 2016-11-03 Actifio, Inc. Data provisioning techniques
US9823982B1 (en) * 2015-06-19 2017-11-21 Amazon Technologies, Inc. Archiving and restoration of distributed database log records
US20180060181A1 (en) * 2015-10-23 2018-03-01 Oracle International Corporation Transportable Backups for Pluggable Database Relocation
US10909143B1 (en) * 2017-04-14 2021-02-02 Amazon Technologies, Inc. Shared pages for database copies
US10929428B1 (en) * 2017-11-22 2021-02-23 Amazon Technologies, Inc. Adaptive database replication for database copies
US20220229822A1 (en) * 2017-12-07 2022-07-21 Zte Corporation Data processing method and device for distributed database, storage medium, and electronic device
US10649952B1 (en) * 2019-01-23 2020-05-12 Cohesity, Inc. Using a secondary storage system to maintain functionality of a database during database migration
US11232067B2 (en) * 2019-01-23 2022-01-25 Cohesity, Inc. Using a secondary storage system to maintain functionality of a database during database migration
US11341104B1 (en) * 2019-03-21 2022-05-24 Amazon Technologies, Inc. In place resize of a distributed database
US20200310921A1 (en) * 2019-03-26 2020-10-01 Commvault Systems, Inc. Streamlined secondary copy operations for data stored on shared file storage
US20210034474A1 (en) * 2019-07-31 2021-02-04 Rubrik, Inc. Streaming database recovery using cluster live mounts
US20210209078A1 (en) * 2020-01-07 2021-07-08 Nirvaha Corporation Incrementally updated database server database images and database cloning, using windows virtual hard drives and database server backups
US20210240582A1 (en) * 2020-01-30 2021-08-05 Rubrik, Inc. Extended recovery of a database exported to a native database recovery environment

Also Published As

Publication number Publication date
US11372733B2 (en) 2022-06-28
US20210279147A1 (en) 2021-09-09

Similar Documents

Publication Publication Date Title
US11797395B2 (en) Application migration between environments
US11687424B2 (en) Automated media agent state management
US20220114067A1 (en) Systems and methods for instantiation of virtual machines from backups
US8793230B2 (en) Single-database multiple-tenant software system upgrade
US9092500B2 (en) Utilizing snapshots for access to databases and other applications
US7672979B1 (en) Backup and restore techniques using inconsistent state indicators
US7076685B2 (en) Information replication system mounting partial database replications
US7478099B1 (en) Methods and apparatus for collecting database transactions
WO2016044403A1 (en) Copy data techniques
EP3019987B1 (en) Virtual database rewind
US8112665B1 (en) Methods and systems for rapid rollback and rapid retry of a data migration
US20190155936A1 (en) Replication Catch-up Strategy
US10809922B2 (en) Providing data protection to destination storage objects on remote arrays in response to assignment of data protection to corresponding source storage objects on local arrays
US10884783B2 (en) Virtual machine linking
JP2003330782A (en) Computer system
US11500738B2 (en) Tagging application resources for snapshot capability-aware discovery
US11144233B1 (en) Efficiently managing point-in-time copies of data within a primary storage system
US11093380B1 (en) Automated testing of backup component upgrades within a data protection environment
US20210334165A1 (en) Snapshot capability-aware discovery of tagged application resources
US20220300383A1 (en) Mount and migrate
US9563519B2 (en) Reversing changes executed by change management
US11442815B2 (en) Coordinating backup configurations for a data protection environment implementing multiple types of replication
US11474728B2 (en) Data storage volume record management for application-level recovery
US11068354B1 (en) Snapshot backups of cluster databases
Petrovska Development of application for selecting an ideal data migration solution in a heterogeneous storage environment

Legal Events

Date Code Title Description
AS Assignment

Owner name: GOOGLE LLC, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ACTIFIO, INC.;REEL/FRAME:060131/0775

Effective date: 20210316

Owner name: ACTIFIO, INC., MASSACHUSETTS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SALINS, MICHAEL HAROLD;VERMA, DURGESH KUMAR;SIGNING DATES FROM 20201208 TO 20201209;REEL/FRAME:060131/0642

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED