US20190278856A9 - Asynchronous Shared Application Upgrade - Google Patents

Asynchronous Shared Application Upgrade

Info

Publication number
US20190278856A9
Authority
US
United States
Prior art keywords
container
metadata
application
database
executing
Prior art date
Legal status
Granted
Application number
US15/266,917
Other versions
US20180075086A1
US10635658B2
Inventor
Philip Yam
Thomas Baby
Andre Kruglikov
Kumar Rajamani
Current Assignee
Oracle International Corp
Original Assignee
Oracle International Corp
Priority date
Filing date
Publication date
Application filed by Oracle International Corp filed Critical Oracle International Corp
Priority to US15/266,917
Assigned to ORACLE INTERNATIONAL CORPORATION. Assignors: BABY, THOMAS; KRUGLIKOV, ANDRE; RAJAMANI, KUMAR; YAM, PHILIP
Publication of US20180075086A1
Publication of US20190278856A9
Application granted
Publication of US10635658B2
Legal status: Active
Anticipated expiration: Adjusted

Classifications

    • G06F17/30377
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/23 Updating
    • G06F16/2379 Updates performed during online database operations; commit processing
    • G06F8/00 Arrangements for software engineering
    • G06F8/60 Software deployment
    • G06F8/65 Updates
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14 Error detection or correction of the data by redundancy in operation
    • G06F11/1402 Saving, restoring, recovering or retrying
    • G06F11/1474 Saving, restoring, recovering or retrying in transactions
    • G06F2201/00 Indexing scheme relating to error detection, to error correction, and to monitoring
    • G06F2201/87 Monitoring of transactions

Definitions

  • This disclosure relates to hot upgrade for multitenant database systems. Techniques are presented for diverting, to cloned metadata, live access to original metadata of an application container database that is being concurrently upgraded.
  • Database consolidation involves distributing and sharing computing resources among multiple databases managed by one or more database servers of database management systems (DBMS).
  • Databases may be consolidated using a container DBMS.
  • a consolidated database such as a multitenant container database (CDB), includes one or more pluggable databases (PDBs).
  • each pluggable database may be opened or closed in the container database independently from other pluggable databases.
  • a DBMS may have multiple server instances for a same container database.
  • sharding, replication, and horizontal scaling are topologies that may utilize multiple server instances for a database.
  • each server instance occupies a separate host computer.
  • Server instances may exchange data content and control information over a computer network. For example, server instances may collaborate to answer a federated query, to synchronize replication, and to rebalance data storage demand.
  • Each database server instance may be a container database that contains one or more pluggable databases. However, server instances need not have identical sets of pluggable databases. For example, one server instance may have a particular pluggable database that another server instance lacks.
  • Pluggable databases may be “plugged in” to a container database, and may be transported between database servers and/or DBMSs.
  • the container DBMS may manage multiple pluggable databases and a given database server instance may serve those pluggable databases from the container database.
  • a given container database allows multiple pluggable databases to run on the same database server and/or database server instance, allowing the computing resources of a single database server or instance to be shared between multiple pluggable databases.
  • a database application may be composed of an application root and multiple pluggable databases.
  • the application root and application pluggable databases belong to a container database.
  • an application's pluggable databases may be more or less isolated from other applications of the same container database by placing all of the application's pluggable databases within a dedicated application container.
  • the related application titled “Application Containers for Container Databases” further explains the nature of a container database, an application container, and an application pluggable database, as well as their relationship to each other.
  • an application container is responsible for storing application metadata.
  • the application container may have an application root, which is another database that belongs to the container database.
  • An application may access a pluggable database by establishing a database session on the container DBMS for that pluggable database, where a database session represents the connection between an application and the container DBMS for accessing the pluggable database.
  • a database session is initiated for a pluggable database by, for example, transmitting a request for a new connection to the container DBMS, the request specifying the pluggable database.
  • a container DBMS may host multiple database sessions, each database session being for one of multiple pluggable databases.
  • the architecture of a container database greatly facilitates transporting the pluggable databases of an application between database servers and/or DBMSs.
  • Tablespace files and a data dictionary store may be moved between environments of container DBMSs using readily available mechanisms for copying and moving files.
  • An application container provides advantages for database consolidation. Some advantages are provided as a consequence of using pluggable databases, such as a high degree of isolation concurrently along with a high degree of resource sharing. Multiple pluggable databases may run on the same database server and/or database server instance, allowing the computing resources of a single database server or instance to be shared between multiple pluggable databases. Other advantages arise from the application container itself, such as providing resources of an application for sharing by pluggable databases of the application.
  • the isolation provided by an application container is at an existential level.
  • the users of a database session established for a pluggable database may only access or otherwise view database objects defined via the database dictionary of the pluggable database or the database dictionary of its application container. Database objects of other application containers cannot be accessed or viewed. This degree of isolation is extended to administrators.
  • FIG. 1 is a block diagram that depicts an example database server that diverts, to cloned metadata, live access to original metadata of an application container that is being concurrently upgraded, in an embodiment
  • FIG. 2 is a flow diagram that depicts an example process that diverts, to cloned metadata, live access to original metadata of an application container that is being concurrently upgraded, in an embodiment
  • FIG. 3 is a block diagram that depicts an example container database that maintains an association between an application container and a reference container to achieve diversion of metadata retrieval, in an embodiment
  • FIG. 4 is a block diagram that depicts an example container database that processes undo records to make consistent a reference container, in an embodiment
  • FIG. 5 is a scenario diagram that depicts an example system of computers that selectively upgrades a subset of pluggable databases of an application container, in an embodiment
  • FIG. 6 is a block diagram that illustrates a computer system upon which an embodiment of the invention may be implemented.
  • a database server stores, within an application container of an application, original metadata that defines objects for use by pluggable databases of the application.
  • the database server receives an upgrade or other maintenance request to adjust the original metadata.
  • the database server creates, in response to receiving the maintenance request, a reference container that contains cloned metadata that is a copy of the original metadata.
  • the database server receives, during or after creating the reference container, a read request to read one of the objects.
  • the database server concurrently performs both of: executing the maintenance request upon the original metadata, and executing the read request upon the cloned metadata of the reference container.
  • an identifier determines which metadata is actually read.
  • an identifier may be configured to identify either the reference container or the application container. The current value of the identifier may be used to select which metadata is actually read during metadata retrieval.
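The identifier-based selection described above can be sketched as follows. This is an illustrative model, not the patent's actual implementation; the names (`MetadataStore`, `resolve_metadata`) and container identifiers are assumptions.

```python
# Hypothetical sketch: a container identifier chooses which copy of the
# metadata serves a read during metadata retrieval.

class MetadataStore:
    def __init__(self, container_id, entries):
        self.container_id = container_id   # identifies its container
        self.entries = dict(entries)

def resolve_metadata(current_id, stores):
    """Select the metadata store named by the current identifier value."""
    return stores[current_id]

# Original metadata lives in the application container; a clone lives in
# the reference container created for the upgrade.
original = MetadataStore("app_container", {"obj_150": "v1"})
clone = MetadataStore("ref_container", {"obj_155": "v1"})
stores = {s.container_id: s for s in (original, clone)}

# While the upgrade runs, the identifier points at the reference container.
active = resolve_metadata("ref_container", stores)
```

Switching the identifier back to the application container restores the original metadata as the source for subsequent reads.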
  • FIG. 1 is a block diagram that depicts an example database server 100 , in an embodiment.
  • Database server 100 diverts, to cloned metadata, live access to original metadata of an application container that is being concurrently upgraded.
  • Database server 100 may be a server instance of a database management system (DBMS) or other relational database system.
  • Database server 100 may be hosted by at least one computer such as a rack server such as a blade, a personal computer, a mainframe, a network appliance, a virtual machine, or other computing device.
  • Database server 100 may access data that is stored in memory, on disks, or over a network.
  • Database server 100 may be a multitenant system that hosts databases that are associated with various applications. Database server 100 may organize its databases according to a hierarchy.
  • a pluggable database, such as 130, is the finest-grained database.
  • An application, such as 110, may include one or multiple pluggable databases.
  • application 110 may be an inventory application that has one pluggable database to manage West Coast inventory and a similar pluggable database to manage East Coast inventory. As such, both pluggable databases may share some metadata, with the possibility of centralized maintenance.
  • application 110 may be a retailing application that has one pluggable database for inventory management and another pluggable database for account management, again with some shared metadata.
  • Database server 100 may host multiple applications, such as 110 .
  • pluggable databases of application 110 may be contained within an application container, such as 120 .
  • Metadata that is shared by pluggable databases of application 110 may be centralized within metadata, such as original metadata 140 , within application container 120 .
  • pluggable databases of application 110 may share a relational schema that occupies original metadata 140 .
  • original metadata 140 contains or is contained within a control file and/or a data dictionary.
  • application container 120 contains a pluggable database, such as 130, or an additional one, such as an application root database, which contains some or all of original metadata 140.
  • application container 120 itself directly contains all of original metadata 140 without delegating such containment to any of the pluggable databases of application container 120 .
  • a new software version of application 110 may be released.
  • a new version may be a major release with changes that affect multiple parts of application 110 or a minor patch that is limited in scope and impact.
  • database server 100 receives a command to upgrade application 110 with a major release.
  • a minor patch may be applied to application 110 without taking application 110 out of service. Whereas, applying a major release would traditionally disrupt service of application 110 .
  • database server 100 is configured to apply a major release without disrupting service. Database server 100 achieves this by creating cloned metadata 180 that may temporarily act as the metadata system of record for application container 120 while the upgrade adjusts original metadata 140.
  • database server 100 may receive maintenance request 160 to upgrade application 110 .
  • Maintenance request 160 may be a digital message such as an XML document, a remote request such as an HTTP Get, a subroutine invocation, a software command, or other software signal.
  • Maintenance request 160 may be generated by a script such as of SQL or shell, or manually entered such as at a command prompt. Maintenance request 160 may be delivered to database server 100 either synchronously such as with HTTP or asynchronously such as with Java message service (JMS).
  • Maintenance request 160 may bear an identifier of application container 120 .
  • Maintenance request 160 may specify a file path to a script such as of SQL or shell and/or a file path to a software package or archive that contains a release of application 110 .
  • Database server 100 may react to maintenance request 160 by creating reference container 170 within a same database server instance 100 as application container 120 .
  • Reference container 170 need not be a complete clone of application container 120 .
  • reference container 170 need not have a copy of pluggable database 130 .
  • reference container 170 may allow only read-only use, comprise read-only files, and/or refuse client connections.
  • cloned metadata 180 should more or less be a clone of original metadata 140 .
  • metadata objects, such as 150, may be copied into cloned metadata 180 as clones, such as object 155.
  • client read-access demand (such as read request 190 ) upon object 150 may instead be satisfied by reading its clone, object 155 .
  • original metadata 140 and object 150 may be inconsistent or otherwise unavailable during execution of read request 190 , such as during an upgrade of application 110 .
  • Read request 190 may be a digital message such as an XML document, a remote request such as an HTTP Get, a subroutine invocation, a software command, or other software signal.
  • Read request 190 may be generated by a script such as of SQL or shell, or manually entered such as at a command prompt. Read request 190 may be delivered to database server 100 either synchronously such as with HTTP or asynchronously such as with Java message service (JMS).
  • Read request 190 may identify pluggable database 130 or application container 120 .
  • Read request 190 may expressly specify reading of metadata.
  • read request 190 may specify reading of non-metadata data that impliedly requires reading metadata to support reading of other data.
  • Database server 100 detects that it receives read request 190 while original metadata 140 is being upgraded. Database server 100 reacts to detecting this condition by executing read request 190 against cloned metadata 180 instead of original metadata 140 . The mechanics of diverting read request 190 from original metadata 140 to cloned metadata 180 are discussed later herein.
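The diversion just described can be sketched as a minimal model: while an upgrade is in flight, reads are answered from the cloned metadata instead of the original. All names here are assumptions for illustration, not the patent's actual API.

```python
# Illustrative sketch of diverting a live read during a concurrent upgrade.

class DatabaseServer:
    def __init__(self, original, clone):
        self.original = original      # original metadata 140
        self.clone = clone            # cloned metadata 180
        self.upgrading = False        # set on receipt of a maintenance request

    def begin_upgrade(self):
        self.upgrading = True

    def finish_upgrade(self):
        self.upgrading = False

    def read(self, key):
        # Divert live reads to the clone while the upgrade flag is set.
        source = self.clone if self.upgrading else self.original
        return source[key]

server = DatabaseServer({"schema": "v1"}, {"schema": "v1"})
server.begin_upgrade()
server.original["schema"] = "v2"   # the upgrade rewrites the original
during = server.read("schema")     # served from the clone, still consistent
server.finish_upgrade()
after = server.read("schema")      # original reinstated as system of record
```

The read during the upgrade sees the clone's unchanged copy; once the flag clears, reads resume against the upgraded original.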
  • FIG. 2 is a flow diagram that depicts an example process that diverts, to cloned metadata, live access to original metadata of an application container that is being concurrently upgraded.
  • FIG. 2 is discussed with reference to FIG. 1 .
  • Step 201 is preparatory.
  • original metadata is stored within an application container.
  • database server 100 stores original metadata 140 within files of application container 120 .
  • Original metadata 140 may be created and stored when application container 120 is created. For example, installation of application 110 may cause creation of application container 120 , pluggable databases such as 130 , and original metadata 140 .
  • Storage of original metadata 140 may occur into volatile or non-volatile memory or onto mechanical disk. Such storage may be remote or local to the computer(s) that host database server 100 .
  • a maintenance request to adjust the original metadata is received.
  • database server 100 may receive maintenance request 160 to upgrade application 110 .
  • maintenance request 160 specifies that application container 120 should remain in service during execution of maintenance request 160 .
  • a reference container is created.
  • database server 100 creates reference container 170 while fulfilling maintenance request 160 .
  • maintenance request 160 expressly indicates that a reference container should be created.
  • the database server implicitly creates reference container 170 to fulfill maintenance request 160.
  • database server 100 creates reference container 170 by copying some or all files of application container 120 .
  • some or all of original metadata 140 may occupy a control file that may be copied from application container 120 to reference container 170.
  • a read request is received during or after creation of the reference container and while maintenance request 160 is outstanding (received but unfulfilled).
  • database server 100 receives read request 190 that attempts to read original metadata 140 .
  • Embodiments of database server 100 may detect that application container 120 is being upgraded and that original metadata 140 may be unavailable because of the ongoing upgrade. In an embodiment, database server 100 sets an upgrade flag upon receipt of maintenance request 160 to indicate an ongoing upgrade. Database server 100 may check the upgrade flag to decide how to process read request 190 .
  • database server 100 may establish more significant state changes in response to receiving maintenance request 160 to indicate an ongoing upgrade. For example, database server 100 may temporarily adjust original metadata 140 to indicate that some or all access to original metadata 140 should be diverted to cloned metadata 180 .
  • database server 100 maintains a lookup table or other association that maps metadata access to actual metadata, such as 140 or 180 .
  • the mapping is from client connection to actual metadata.
  • the mapping is from pluggable database to actual metadata.
  • one pluggable database of application container 120 may have its metadata access diverted to cloned metadata 180 .
  • another pluggable database of the same application container 120 at the same time may directly access original metadata 140 .
  • diversion or selective (mapped) diversion may remain in effect after upgrading application container 120 and perhaps indefinitely.
  • cloned metadata 180 is not upgraded and may be retained for backward compatibility needed to support a legacy pluggable database whose codebase maintenance has ceased.
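The per-pluggable-database mapping described above can be sketched as a lookup table from pluggable database to the metadata collection it should actually read, so one pluggable database is diverted to the clone while another reads the original directly. The identifiers and table layout are illustrative assumptions.

```python
# Hypothetical sketch: a lookup table maps each pluggable database to the
# metadata it actually reads, enabling selective (mapped) diversion.

metadata = {
    "original_140": {"zip_table": "old"},
    "cloned_180": {"zip_table": "old"},   # copied at clone time
}

# lookup table: pluggable database -> actual metadata collection
diversion = {
    "pdb_west": "cloned_180",    # diverted during (or after) the upgrade
    "pdb_east": "original_140",  # accesses the original directly
}

def read_metadata(pdb, key):
    """Follow the mapping to whichever metadata the pluggable database uses."""
    return metadata[diversion[pdb]][key]

# The upgrade adjusts only the original; the clone is retained unchanged,
# e.g. for a legacy pluggable database needing backward compatibility.
metadata["original_140"]["zip_table"] = "new"
```

After the upgrade, `pdb_east` observes the new content while `pdb_west` continues to see the retained clone.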
  • an identifier determines which metadata is actually read.
  • an identifier may be configured to identify either the reference container or the application container. The current value of the identifier may be used to select which metadata is actually read during metadata retrieval.
  • adjustment of original metadata 140 during an upgrade includes storing an identifier or locator of cloned metadata 180 into original metadata 140 .
  • an application root database of application container 120 contains original metadata 140
  • adjustment of original metadata 140 includes replacement of a reference to the application root database with a reference to a clone of the application root database that occupies reference container 170 .
  • this may include storing an identifier of an object of cloned metadata 180 , such as object 155 , into original metadata 140 .
  • declaration of object 150 may be decoupled from the implementation of object 150 .
  • original metadata 140 or another part of application container 120 may contain a metadata entry that declares object 150 and provides a pointer or reference to object 150 itself.
  • database server 100 temporarily adjusts the declaration of object 150 , such that the reference within the declaration points to object 155 instead of object 150 .
  • database server 100 may ordinarily inspect references and identifiers within original metadata 140 as part of detecting where metadata objects, such as 150 , actually reside, such as within cloned metadata 180 . The mechanics of metadata cross-referencing and retargeting of metadata are discussed later herein.
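The retargeting of a decoupled declaration described above can be sketched as follows: a metadata entry declares object 150 and holds a pointer to its implementation, so the server can temporarily repoint it at clone object 155 and restore it afterwards. Field names and structure are assumptions, not the patent's representation.

```python
# Illustrative sketch: a declaration's reference is temporarily retargeted
# from the original object to its clone, then restored after the upgrade.

implementations = {
    "object_150": {"columns": ["id", "name"]},   # original object
    "object_155": {"columns": ["id", "name"]},   # clone in cloned metadata
}

declaration = {"name": "inventory", "target": "object_150"}

def retarget(decl, new_target):
    """Point the declaration at the clone, remembering the original."""
    decl["old_target"], decl["target"] = decl["target"], new_target

def restore(decl):
    """Reinstate the original target once the upgrade completes."""
    decl["target"] = decl.pop("old_target")

def dereference(decl):
    return implementations[decl["target"]]

retarget(declaration, "object_155")   # divert during the upgrade
diverted = dereference(declaration)   # resolves to the clone
restore(declaration)                  # original targeting restored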
  • Steps 205 - 206 may potentially occur simultaneously. This may be partly because requests 160 and 190 may be fulfilled by separate threads of execution and partly because requests 160 and 190 operate upon different metadata collections 140 and 180 .
  • In step 205, the maintenance request is executed upon the original metadata.
  • database server 100 executes maintenance request 160 to upgrade application container 120 .
  • Execution of maintenance request 160 may be more or less long-running as compared to ordinary online transaction processing (OLTP) such as for read request 190 .
  • maintenance request 160 may cause original metadata 140 to become inconsistent or otherwise unavailable during execution of maintenance request 160 and without causing any interruption of service at application container 120 .
  • the read request is executed upon the cloned metadata.
  • database server 100 executes read request 190 to retrieve desired information from cloned metadata 180 , even though read request 190 would ordinarily retrieve the same information from original metadata 140 instead.
  • After completion of step 205, original metadata 140 has regained consistency and may again become the metadata system of record for application container 120. By this point metadata 140 and 180, which were more or less identical upon completion of step 203, may have divergent content.
  • the upgrade of application 110 may cause addition of a new column to a metadata table of original metadata 140 .
  • upgraded metadata (such as the new column) will not be available to application 110 .
  • step 205 may finally include or be followed by restoration of identifiers and references that were retargeted to divert access to cloned metadata 180 .
  • any reference to cloned metadata 180 or object 155 should be restored by database server 100 to once again respectively refer to original metadata 140 and object 150 .
  • Although the restoration of original targeting of metadata should not occur until step 205 is complete, such restoration need not wait for completion of step 206, which is the actual use of cloned metadata 180.
  • During step 206, the reinstatement of original metadata 140 as a system of record may occur even though some reading of cloned metadata 180 is still ongoing.
  • reinstatement of original metadata 140 as a system of record is full reinstatement for all uses and clients of application container 120 .
  • reinstatement is limited to a subset of the pluggable databases of application container 120 .
  • one pluggable database may have its metadata retrieval resume access of original metadata 140 . Whereas metadata access by another pluggable database may continue to be diverted to cloned metadata 180 .
  • a so-called ‘synchronization’ (sync) command designates a subset of pluggable databases that should have restored access to original metadata 140 , which was upgraded and is ready to be returned into service.
  • a sync command also causes additional upgrade activity to be applied to the subset of pluggable databases. The sync command is discussed later herein.
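The sync command described above can be sketched as an operation that names a subset of pluggable databases whose metadata access is restored to the upgraded original, leaving the rest diverted to the clone. The command shape and table names are illustrative assumptions.

```python
# Hypothetical sketch of a sync command restoring a subset of pluggable
# databases to the upgraded original metadata.

diversion = {
    "pdb_a": "cloned_180",
    "pdb_b": "cloned_180",
    "pdb_c": "cloned_180",   # e.g. a legacy database left on the clone
}

def sync(pdbs):
    """Restore the given pluggable databases to the original metadata;
    in this sketch, also report which databases received upgrade work."""
    upgraded = []
    for pdb in pdbs:
        diversion[pdb] = "original_140"   # upgraded and back in service
        upgraded.append(pdb)              # additional upgrade activity here
    return upgraded

sync(["pdb_a", "pdb_b"])   # pdb_c keeps reading the clone
```

Reinstatement is thus limited to the designated subset, matching the selective restoration described above.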
  • steps 205 - 206 finish, and cloned metadata 180 may become unnecessary.
  • the upgrading of application 110 is complete, and reference container 170 may be deleted.
  • scenarios explained below may more or less prevent deletion of reference container 170 , such as upgrading application container 120 without upgrading all of its pluggable databases. In that case, at least one of its pluggable databases may have a continued need for cloned metadata 180 .
  • deletion of reference container 170 may be inappropriate until some time after upgrading application container 120.
  • reference container 170 is not deleted so that a subset of application pluggable databases can be restored, within application container 120 , to an historic version from backup that expects old metadata.
  • database server 100 automatically deletes reference container 170 when no longer needed.
  • a command such as an interactive command, causes database server 100 to delete reference container 170 .
  • FIG. 3 is a block diagram that depicts an example container database 300 that maintains an association between an application container and a reference container to achieve diversion of metadata retrieval, in an embodiment.
  • Container database 300 may be hosted by an implementation of database server 100 .
  • Container database 300 may be a multitenant container database that may contain at least one application container, such as 320 , for at least one application.
  • Application container 320 contains original metadata 340 .
  • the database server creates reference container 370 , which includes copying cloned metadata 380 from original metadata 340 .
  • pluggable databases contained within application container 320 may read cloned metadata 380 , instead of original metadata 340 , to retrieve metadata for ordinary purposes.
  • retrieval of metadata from one application container, 370, for use by a pluggable database of another application container, 320, may need a cross-reference from application container 320 to reference container 370.
  • each application container may have its own unique identifier.
  • reference container 370 is identified by identifier 357 .
  • application container 320 may be identified by a different identifier.
  • an identifier may be a more or less globally unique identifier.
  • a database identifier is guaranteed to be unique only within container database 300 .
  • the identifier of one application container may be specified within an association that logically binds one application container to another for the purpose of metadata retrieval.
  • application container 320 contains association 356 as a reference or pointer that identifies reference container 370 as the source for metadata of a pluggable database or its application container.
  • association 356 may be used to divert metadata access for all pluggable databases within application container 320 .
  • each pluggable database of application container 320 has its own association 356 .
  • each pluggable database may have its metadata retrieval diverted to its own reference container.
  • a pluggable database may have its metadata retrieval diverted to another application container that is not a reference clone.
  • the database server may examine association 356 to detect from which application container the metadata should be retrieved. The database server may then read the metadata from whichever application container is specified by association 356 . Upon completion of the upgrade, the database server may reset association 356 to refer to application container 320 .
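The use of association 356 can be sketched as each container recording which container actually sources its metadata; the server follows the association on every retrieval and resets it upon completion of the upgrade. The container names and field layout are assumptions for illustration.

```python
# Illustrative sketch: an association binds one container to another as the
# source of metadata; the server follows it on each metadata retrieval.

containers = {
    "app_320": {"metadata": {"v": "original"}, "association": "app_320"},
    "ref_370": {"metadata": {"v": "clone"}, "association": "ref_370"},
}

def read_via_association(container_id, key):
    """Examine the association to find the source container, then read."""
    source = containers[container_id]["association"]
    return containers[source]["metadata"][key]

containers["app_320"]["association"] = "ref_370"   # divert for the upgrade
during = read_via_association("app_320", "v")      # served by the clone
containers["app_320"]["association"] = "app_320"   # reset on completion
after = read_via_association("app_320", "v")
```

Per-pluggable-database variants would simply keep one such association per pluggable database instead of one per application container.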
  • FIG. 4 is a block diagram that depicts an example container database 400 that processes undo records to make consistent a reference container, in an embodiment.
  • Container database 400 may be hosted by an implementation of database server 100 .
  • Container database 400 contains application container 420 .
  • the database server may copy reference container 470 from application container 420 , such as by copying data files.
  • copying may occur while application container 420 sustains a transactional load. For example, transactions may remain ongoing until after creation of reference container 470.
  • intermediate data written by an ongoing transaction may be included when copying application container 420 to reference container 470 .
  • a copied transaction such as 460
  • row 495 may be in an inconsistent state.
  • transaction 460 will not finish within reference container 470 , although the transaction may eventually finish within application container 420 .
  • reference container 470 may be inconsistent. Because reference container 470 is used solely for metadata access, inconsistent ordinary (not metadata) data may be tolerated (ignored).
  • relational table 490 may be part of cloned metadata instead of ordinary data.
  • relational table 490 may contain predefined specification data, such as zip codes.
  • Achieving consistency within reference container 470 may involve undoing (rolling back) transaction 460 .
  • the database server typically processes undo records, such as 480 , to roll back a transaction.
  • Each undo record 480 may contain content of a data block as it existed immediately before being modified by a transaction.
  • undo record 480 may hold the prior content of row 495 as it was before transaction 460 .
  • the database server may apply undo record 480 to reference container 470 to roll back transaction 460 .
  • the database server reads undo record 480 from within application container 420.
  • Undo record 480 may be copied into reference container 470 during reference container 470's creation (by cloning).
  • the database server accomplishes the roll back by reading the copy of undo record 480 that occupies reference container 470 .
  • the undo record may contain object identifiers that are valid only within application container 420 and not within reference container 470 .
  • the undo record may be applied as-is to application container 420 .
  • the undo record cannot be applied as-is to reference container 470 because identifiers might not be portable between application containers.
  • the database server may, for example with the help of a lookup table, translate any identifier that is specified by undo record 480 and that is valid within application container 420 to a different identifier that is valid within reference container 470 .
  • the database server may populate the lookup table as part of the container cloning process that creates reference container 470 . For example, when the database server assigns a new identifier, such as 457 , to a clone of a metadata object, the database server may add to the lookup table an entry that maps old identifier 456 to new identifier 457 .
  • the database server may use old identifier 456 , as specified in undo record 480 , as a lookup key into the lookup table to translate old identifier 456 into new identifier 457 . In this way, the database server may apply undo records to reference container 470 , even though the undo records specify identifiers that are invalid within reference container 470 .
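The identifier translation described above can be sketched as follows: cloning assigns new object identifiers and records old-to-new pairs in a lookup table, and each undo record's identifiers are rewritten before the record is applied to the reference container. The record layout is an assumption.

```python
# Hypothetical sketch: a lookup table, populated during cloning, translates
# identifiers in undo records from application-container values to
# reference-container values.

id_map = {}

def clone_object(old_id, new_id):
    """Assign a fresh identifier to a clone and remember the mapping."""
    id_map[old_id] = new_id
    return new_id

def translate_undo(record):
    """Rewrite an undo record's object identifier so it is valid within
    the reference container rather than the application container."""
    return {**record, "object_id": id_map[record["object_id"]]}

clone_object(456, 457)   # old identifier 456 -> new identifier 457
undo_480 = {"object_id": 456, "prior_row": ("row_495", "prior content")}
translated = translate_undo(undo_480)   # now applicable to the clone
```

The translated record can then be applied even though the original identifiers would be invalid within the reference container.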
  • Reference container 470 may contain some written data that was committed by a completed transaction and some written data that is uncommitted (not yet committed) by an ongoing transaction. As such, the database server may need a mechanism to distinguish uncommitted writes that should be undone within reference container 470 and committed writes that should not be disturbed.
  • application container 420 (and its clone, reference container 470) may maintain a listing of pending (ongoing) transactions and their corresponding undo records.
  • undo record 480 may specify which transaction created it.
  • each transaction is assigned a unique serial number or timestamp when committed, and this serial number may be associated with or recorded within each undo record for the transaction.
  • the database server detects which undo records correspond to uncommitted transactions by detecting which undo records have no associated transaction serial number.
  • the database server may be configured to apply to reference container 470 those undo records that do not have an associated transaction serial number.
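  • This selection rule can be sketched as a simple filter. The sketch below is illustrative Python under the assumption that each undo record carries a nullable commit serial number; the field names are hypothetical.

```python
# Hypothetical sketch: only undo records lacking a commit serial number
# (i.e. written by transactions that never committed) are applied to the
# reference container; committed writes are left undisturbed.

def records_to_roll_back(undo_records):
    return [r for r in undo_records if r.get("commit_serial") is None]

undo_records = [
    {"txn": "T1", "commit_serial": 1001},  # committed: keep its writes
    {"txn": "T2", "commit_serial": None},  # uncommitted: undo in the clone
]
print([r["txn"] for r in records_to_roll_back(undo_records)])  # → ['T2']
```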
  • FIG. 5 is a scenario diagram that depicts an example system of computers 500 that selectively upgrades a subset of pluggable databases of an application container, in an embodiment.
  • System 500 may be composed of at least one computer.
  • System 500 includes clients 515 - 516 and database server 510 .
  • Clients 515 - 516 may be external clients of database server 510 .
  • client 516 may occupy a different computer than database server 510 .
  • client 516 may occupy a same computer as database server 510 , but occupy a different operating system process.
  • client 516 may be embedded within database server 510 .
  • client 516 may implement a maintenance chore that database server 510 performs upon itself.
  • An implementation of client 516 may include a codebase that contains a database connector, such as an open database connectivity (ODBC) driver. Communication between client 516 and database server 510 may occur through Transmission Control Protocol (TCP) sockets, through shared memory, or through another inter-process channel.
  • client 516 may send upgrade request 501 to database server 510 to upgrade the software of application container 520 that is contained within database server 510 .
  • application container 520 may reside in a container database that resides in database server 510 .
  • Database server 510 may be an implementation of database server 100 .
  • Database server 510 may react to upgrade request 501 by cloning application container 520 to create reference container 570 , shown as create 502 .
  • application containers 520 and 570 occupy a same container database.
  • database server 510 has a plurality of container databases, and application containers 520 and 570 occupy separate container databases.
  • Creating reference container 570 is not the only work that database server 510 must perform to fulfill upgrade request 501.
  • a software upgrade of application container 520 involves altering the metadata of application container 520 .
  • upgrading the metadata of application container 520 involves the execution of data manipulation language (DML) and/or data definition language (DDL) statements, which may be scripted or dynamically generated. In an embodiment, all or some of these statements may be recorded, along with their actual parameters.
  • upgrading the metadata of application container 520 involves creating or modifying a database view.
  • application container 520 may restrict some of its pluggable databases to using a limited view that exposes less metadata.
  • Application container 520 contains pluggable databases such as 530 . Also in response to upgrade request 501 , database server 510 adjusts a metadata pointer for each pluggable database or for application container 520 as a whole.
  • the metadata pointer(s) refer to reference container 570 instead of application container 520 .
  • client 515 may send read request 504 to database server 510 to read metadata for pluggable database 530 .
  • read request 504 is fulfilled by read metadata 505 that reads reference container 570 instead of application container 520 .
  • read request 504 occurs after the creation (create 502 ) of reference container 570 .
  • the shown embodiment achieves limited asynchrony by enabling read request 504 to access cloned metadata while upgrade request 501 is being simultaneously executed, so long as interactions 502 - 503 have finished.
  • database server 510 receives read request 504 during the creation of reference container 570 and buffers the request without processing it until interactions 502 - 503 have finished.
  • database server 510 detects that metadata retrieval for application container 520 is diverted to reference container 570 .
  • database server 510 may still be upgrading application container 520 for upgrade request 501 .
  • database server 510 sends (diverts) read metadata 505 to read the metadata of reference container 570 to fulfill read request 504 .
  • application container 520 may have a legacy pluggable database that cannot be upgraded because development of the legacy pluggable database has ceased.
  • the legacy pluggable database may have a continued need for the backwards-compatible metadata of reference container 570 .
  • pluggable database 530 is upgradable.
  • database server 510 may be directed by itself or by another agent to upgrade a subset of pluggable databases.
  • database server 510 may receive a command, such as a scripted or interactive command, to upgrade specified pluggable databases, such as 530 .
  • Database server 510 may react by invoking synchronize 506 upon pluggable database 530 .
  • Synchronize 506 cancels configured diversion 503 for the specified pluggable databases. Depending on the implementation, synchronize 506 may also cause the specified pluggable databases to be individually upgraded before, during, or after cancellation.
  • application container 520 and pluggable database 530 may have metadata that contains a respective data dictionary.
  • the data dictionary of pluggable database 530 may be updated to reflect new data objects that are added to application container 520 . This updating may involve recording, refreshing, or updating a link or pointer between application container 520 and pluggable database 530 , or between objects within application container 520 and pluggable database 530 .
  • Upgrading pluggable database 530 may entail executing logic of components 510 , 520 , and/or 530 . It may also entail replaying (repeating), into pluggable database 530 , database statements that were recorded while upgrading the metadata of application container 520 during execution of upgrade request 501 , as explained above.
  • the database statements are recorded as a re-playable script. Either creating or replaying the script may involve filtering out (skipping) statements that are only operable or useful at application container 520 .
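  • The record-and-replay scheme can be sketched as follows. This is an illustrative Python sketch: the statement texts, the bind-parameter style, and the `ALTER APPLICATION` skip prefix are assumptions chosen to show the filtering idea, not the actual recorded script format.

```python
# Hypothetical sketch of statement recording and filtered replay. During the
# container upgrade, each statement and its actual parameters are appended to
# a re-playable script; replaying into a pluggable database skips statements
# that are only operable at the application container.

def record(script, statement, params=()):
    script.append((statement, params))

def replay(script, execute, skip_prefixes=("ALTER APPLICATION",)):
    """Replay recorded statements into a pluggable database, filtering out
    container-only statements (the skip list here is an assumed example)."""
    for statement, params in script:
        if statement.upper().startswith(skip_prefixes):
            continue
        execute(statement, params)

script = []
record(script, "ALTER APPLICATION inventory BEGIN UPGRADE '1' TO '2'")
record(script, "ALTER TABLE stock ADD (region VARCHAR2(20) DEFAULT :1)", ("WEST",))
replayed = []
replay(script, lambda stmt, params: replayed.append(stmt))
print(len(replayed))  # → 1 (the container-only statement was skipped)
```

  • Filtering may occur either when the script is created or when it is replayed; the sketch filters at replay time.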
  • metadata objects may be replicated into both of application container 520 and pluggable database 530 .
  • application container 520 may have objects that pluggable database 530 lacks.
  • After fulfillment of synchronize 506, diversion ceases for pluggable database 530, which is re-associated with the metadata of application container 520. As such, read request 507 is satisfied by read metadata 508 using metadata from application container 520 instead of from reference container 570. However, pluggable databases of application container 520 that have not been upgraded, such as by synchronize 506, continue to have metadata retrieval diverted to reference container 570.
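  • The per-pluggable-database routing of metadata reads (configure diversion 503, synchronize 506) can be sketched as follows. This is an illustrative Python sketch; the class and the metadata values shown are assumptions, not the patent's implementation.

```python
# Hypothetical sketch: diverted pluggable databases have metadata reads
# routed to the reference container's cloned metadata; synchronizing a
# pluggable database re-associates it with the application container.

class MetadataRouter:
    def __init__(self, app_metadata, ref_metadata):
        self.app_metadata = app_metadata  # application container 520 (being upgraded)
        self.ref_metadata = ref_metadata  # reference container 570 (stable clone)
        self.diverted = set()             # pluggable databases currently diverted

    def divert(self, pdb_name):
        self.diverted.add(pdb_name)       # configure diversion 503

    def synchronize(self, pdb_name):
        self.diverted.discard(pdb_name)   # synchronize 506 cancels the diversion

    def read(self, pdb_name, key):
        source = (self.ref_metadata if pdb_name in self.diverted
                  else self.app_metadata)
        return source[key]

router = MetadataRouter({"schema": "v2"}, {"schema": "v1"})
router.divert("PDB530")
print(router.read("PDB530", "schema"))  # → v1 (diverted to the clone)
router.synchronize("PDB530")
print(router.read("PDB530", "schema"))  # → v2 (upgraded metadata again)
```

  • Because diversion is tracked per pluggable database, an unsynchronized legacy pluggable database can keep reading the backwards-compatible clone indefinitely while its siblings move forward.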
  • the techniques described herein are implemented by one or more special-purpose computing devices.
  • the special-purpose computing devices may be hard-wired to perform the techniques, or may include digital electronic devices such as one or more application-specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs) that are persistently programmed to perform the techniques, or may include one or more general purpose hardware processors programmed to perform the techniques pursuant to program instructions in firmware, memory, other storage, or a combination.
  • Such special-purpose computing devices may also combine custom hard-wired logic, ASICs, or FPGAs with custom programming to accomplish the techniques.
  • the special-purpose computing devices may be desktop computer systems, portable computer systems, handheld devices, networking devices or any other device that incorporates hard-wired and/or program logic to implement the techniques.
  • FIG. 6 is a block diagram that illustrates a computer system 600 upon which an embodiment of the invention may be implemented.
  • Computer system 600 includes a bus 602 or other communication mechanism for communicating information, and a hardware processor 604 coupled with bus 602 for processing information.
  • Hardware processor 604 may be, for example, a general purpose microprocessor.
  • Computer system 600 also includes a main memory 606 , such as a random access memory (RAM) or other dynamic storage device, coupled to bus 602 for storing information and instructions to be executed by processor 604 .
  • Main memory 606 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 604 .
  • Such instructions when stored in non-transitory storage media accessible to processor 604 , render computer system 600 into a special-purpose machine that is customized to perform the operations specified in the instructions.
  • Computer system 600 further includes a read only memory (ROM) 608 or other static storage device coupled to bus 602 for storing static information and instructions for processor 604 .
  • a storage device 610, such as a magnetic disk or optical disk, is provided and coupled to bus 602 for storing information and instructions.
  • Computer system 600 may be coupled via bus 602 to a display 612 , such as a cathode ray tube (CRT), for displaying information to a computer user.
  • An input device 614 is coupled to bus 602 for communicating information and command selections to processor 604 .
  • Another type of user input device is cursor control 616, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 604 and for controlling cursor movement on display 612.
  • This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane.
  • Computer system 600 may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computer system causes or programs computer system 600 to be a special-purpose machine. According to one embodiment, the techniques herein are performed by computer system 600 in response to processor 604 executing one or more sequences of one or more instructions contained in main memory 606. Such instructions may be read into main memory 606 from another storage medium, such as storage device 610. Execution of the sequences of instructions contained in main memory 606 causes processor 604 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions.
  • Non-volatile media includes, for example, optical or magnetic disks, such as storage device 610.
  • Volatile media includes dynamic memory, such as main memory 606 .
  • Common forms of storage media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, or any other memory chip or cartridge.
  • Storage media is distinct from but may be used in conjunction with transmission media.
  • Transmission media participates in transferring information between storage media.
  • transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus 602 .
  • transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.
  • Various forms of media may be involved in carrying one or more sequences of one or more instructions to processor 604 for execution.
  • the instructions may initially be carried on a magnetic disk or solid state drive of a remote computer.
  • the remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem.
  • a modem local to computer system 600 can receive the data on the telephone line and use an infra-red transmitter to convert the data to an infra-red signal.
  • An infra-red detector can receive the data carried in the infra-red signal and appropriate circuitry can place the data on bus 602 .
  • Bus 602 carries the data to main memory 606 , from which processor 604 retrieves and executes the instructions.
  • the instructions received by main memory 606 may optionally be stored on storage device 610 either before or after execution by processor 604.
  • Computer system 600 also includes a communication interface 618 coupled to bus 602 .
  • Communication interface 618 provides a two-way data communication coupling to a network link 620 that is connected to a local network 622 .
  • communication interface 618 may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line.
  • communication interface 618 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN.
  • Wireless links may also be implemented.
  • communication interface 618 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
  • Network link 620 typically provides data communication through one or more networks to other data devices.
  • network link 620 may provide a connection through local network 622 to a host computer 624 or to data equipment operated by an Internet Service Provider (ISP) 626 .
  • ISP 626 in turn provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet” 628 .
  • Internet 628 uses electrical, electromagnetic or optical signals that carry digital data streams.
  • the signals through the various networks and the signals on network link 620 and through communication interface 618 which carry the digital data to and from computer system 600 , are example forms of transmission media.
  • Computer system 600 can send messages and receive data, including program code, through the network(s), network link 620 and communication interface 618 .
  • a server 630 might transmit a requested code for an application program through Internet 628 , ISP 626 , local network 622 and communication interface 618 .
  • the received code may be executed by processor 604 as it is received, and/or stored in storage device 610 or other non-volatile storage for later execution.

Abstract

Techniques are provided for diverting, to cloned metadata, live access to original metadata of an application container that is being concurrently upgraded. In an embodiment, a database server stores, within an application container of an application, original metadata that defines objects for use by pluggable databases of the application. The database server receives a maintenance request to adjust the original metadata. The database server creates, in response to receiving the maintenance request, a reference container that contains cloned metadata that is a copy of the original metadata. The database server receives, during or after creating the reference container, a read request to read one of the objects. The database server concurrently performs both of: executing the maintenance request upon the original metadata, and executing the read request upon the cloned metadata of the reference container.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is related to the following applications, each of which is incorporated by reference as if fully set forth herein:
      • application Ser. No. 13/631,815, filed Sep. 28, 2012, titled “Container Database” (Attorney Ref. No.: 50277-4026);
      • U.S. Pat. No. 9,298,564, filed Mar. 14, 2013, titled “In Place Point-In-Time Recovery Of Pluggable Databases” (Attorney Ref. No.: 50277-4075);
      • application Ser. No. 14/202,091, filed Mar. 10, 2014, titled “Instantaneous Unplug Of Pluggable Database From One Container Database And Plug Into Another Container Database” (Attorney Ref. No.: 50277-4088);
      • application Ser. No. 15/093,506, filed Apr. 7, 2016, titled “Migrating A Pluggable Database Between Database Server Instances With Minimal Impact To Performance” (Attorney Ref. No. 50277-4969);
      • application Ser. No. 15/215,443, filed Jul. 20, 2016, titled “Techniques For Keeping A Copy Of A Pluggable Database Up To Date With A Source Pluggable Database In Read-Write Mode” (Attorney Ref. No. 50277-4971); and
      • application Ser. No. 15/215,446, filed Jul. 20, 2016, titled “Near-zero Downtime Relocation of a Pluggable Database across Container Databases” (Attorney Ref. No. 50277-4972).
      • application Ser. No. ______, filed ______, titled “ Application Containers for Container Databases” (Attorney Ref. No. 50277-4966).
    FIELD OF THE DISCLOSURE
  • This disclosure relates to hot upgrade for multitenant database systems. Techniques are presented for diverting, to cloned metadata, live access to original metadata of an application container database that is being concurrently upgraded.
  • BACKGROUND
  • Database consolidation involves distributing and sharing computing resources among multiple databases managed by one or more database servers of database management systems (DBMS). Databases may be consolidated using a container DBMS. A consolidated database, such as a multitenant container database (CDB), includes one or more pluggable databases (PDBs). In a container DBMS, each pluggable database may be opened or closed in the container database independently from other pluggable databases.
  • Furthermore, a DBMS may have multiple server instances for a same container database. For example, sharding, replication, and horizontal scaling are topologies that may utilize multiple server instances for a database.
  • Typically each server instance occupies a separate host computer. Server instances may exchange data content and control information over a computer network. For example, server instances may collaborate to answer a federated query, to synchronize replication, and to rebalance data storage demand.
  • Each database server instance may be a container database that contains one or more pluggable databases. However, server instances need not have identical sets of pluggable databases. For example, one server instance may have a particular pluggable database that another server instance lacks.
  • Pluggable databases may be “plugged in” to a container database, and may be transported between database servers and/or DBMSs. The container DBMS may manage multiple pluggable databases and a given database server instance may serve those pluggable databases from the container database. As such, a given container database allows multiple pluggable databases to run on the same database server and/or database server instance, allowing the computing resources of a single database server or instance to be shared between multiple pluggable databases.
  • A database application may be composed of an application root and multiple pluggable databases. The application root and application pluggable databases belong to a container database. Without introducing a separate database server, an application's pluggable databases may be more or less isolated from other applications of the same container database by placing all of the application's pluggable databases within a dedicated application container. The related application titled “Application Containers for Container Databases” further explains the nature of a container database, an application container, and an application pluggable database, as well as their relationship to each other. For example, an application container is responsible for storing application metadata. For such storage, the application container may have an application root, which is another database that belongs to the container database.
  • An application may access a pluggable database by establishing a database session on the container DBMS for that pluggable database, where a database session represents the connection between an application and the container DBMS for accessing the pluggable database. A database session is initiated for a pluggable database by, for example, transmitting a request for a new connection to the container DBMS, the request specifying the pluggable database. A container DBMS may host multiple database sessions, each database session being for one of multiple pluggable databases.
  • The architecture of a container database greatly facilitates transporting the pluggable databases of an application between database servers and/or DBMSs. Tablespace files and a data dictionary store may be moved between environments of container DBMSs using readily available mechanisms for copying and moving files.
  • An application container provides advantages for database consolidation. Some advantages are provided as a consequence of using pluggable databases, such as a high degree of isolation concurrently along with a high degree of resource sharing. Multiple pluggable databases may run on the same database server and/or database server instance, allowing the computing resources of a single database server or instance to be shared between multiple pluggable databases. Other advantages arise from the application container itself, such as providing resources of an application for sharing by pluggable databases of the application.
  • The isolation provided by an application container is at an existential level. The users of a database session established for a pluggable database may only access or otherwise view database objects defined via the database dictionary of the pluggable database or the database dictionary of its application container. Database objects of other application containers cannot be accessed or viewed. This degree of isolation is extended to administrators.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • In the drawings:
  • FIG. 1 is a block diagram that depicts an example database server that diverts, to cloned metadata, live access to original metadata of an application container that is being concurrently upgraded, in an embodiment;
  • FIG. 2 is a flow diagram that depicts an example process that diverts, to cloned metadata, live access to original metadata of an application container that is being concurrently upgraded, in an embodiment;
  • FIG. 3 is a block diagram that depicts an example container database that maintains an association between an application container and a reference container to achieve diversion of metadata retrieval, in an embodiment;
  • FIG. 4 is a block diagram that depicts an example container database that processes undo records to make consistent a reference container, in an embodiment;
  • FIG. 5 is a scenario diagram that depicts an example system of computers that selectively upgrades a subset of pluggable databases of an application container, in an embodiment;
  • FIG. 6 is a block diagram that illustrates a computer system upon which an embodiment of the invention may be implemented.
  • DETAILED DESCRIPTION
  • In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the present invention.
  • Embodiments are described herein according to the following outline:
  • 1.0 General Overview
  • 2.0 Example Database Server
      • 2.1 Database Application and Application Container
      • 2.2 Application Container Upgrade
      • 2.3 Application Container Clone
      • 2.4 Metadata Diversion
  • 3.0 Metadata Diversion Process
      • 3.1 Application Container Upgrade
      • 3.2 Metadata Location Mapping
      • 3.3 Metadata Switching
  • 4.0 Metadata Binding
  • 5.0 Clone Consistency
      • 5.1 Undo Record and Transaction Rollback
      • 5.2 Rollback Mechanics
  • 6.0 Independent Synchronization
      • 6.1 SQL Recording
      • 6.2 Limited Asynchrony
      • 6.3 SQL Replay
  • 7.0 Hardware Overview
  • 1.0 General Overview
  • Techniques are provided for diverting, to cloned metadata, live access to original metadata of an application container that is being concurrently upgraded. In an embodiment, a database server stores, within an application container of an application, original metadata that defines objects for use by pluggable databases of the application. The database server receives an upgrade or other maintenance request to adjust the original metadata. The database server creates, in response to receiving the maintenance request, a reference container that contains cloned metadata that is a copy of the original metadata. The database server receives, during or after creating the reference container, a read request to read one of the objects. The database server concurrently performs both of: executing the maintenance request upon the original metadata, and executing the read request upon the cloned metadata of the reference container.
  • In an embodiment an identifier, such as an application version identifier or an application container identifier, determines which metadata is actually read. For example, an identifier may be configured to identify either the reference container or the application container. The current value of the identifier may be used to select which metadata is actually read during metadata retrieval.
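  • The identifier-based selection described above can be sketched as follows. This is an illustrative Python sketch under assumed names; the container keys, pluggable database name, and metadata contents are hypothetical examples, not the actual data dictionary layout.

```python
# Hypothetical sketch: a per-database identifier selects which container's
# metadata a retrieval actually reads. Pointing the identifier at the
# reference container diverts reads; restoring it ends the diversion.

metadata_by_container = {
    "app_container_120": {"EMP": {"columns": ["id", "name", "region"]}},  # upgraded
    "ref_container_170": {"EMP": {"columns": ["id", "name"]}},            # clone
}

# Current metadata source per pluggable database; diverted during upgrade.
metadata_source = {"PDB130": "ref_container_170"}

def read_metadata(pdb, obj):
    return metadata_by_container[metadata_source[pdb]][obj]

print(read_metadata("PDB130", "EMP")["columns"])  # → ['id', 'name']
```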
  • 2.0 Example Database Server
  • FIG. 1 is a block diagram that depicts an example database server 100, in an embodiment. Database server 100 diverts, to cloned metadata, live access to original metadata of an application container that is being concurrently upgraded.
  • Database server 100 may be a server instance of a database management system (DBMS) or other relational database system. Database server 100 may be hosted by at least one computer such as a rack server such as a blade, a personal computer, a mainframe, a network appliance, a virtual machine, or other computing device. Database server 100 may access data that is stored in memory, on disks, or over a network.
  • 2.1 Database Application and Application Container
  • Database server 100 may be a multitenant system that hosts databases that are associated with various applications. Database server 100 may organize its databases according to a hierarchy.
  • A pluggable database, such as 130, is the finest grained database. An application, such as 110, may include one or multiple pluggable databases. For example, application 110 may be an inventory application that has one pluggable database to manage West Coast inventory and a similar pluggable database to manage East Coast inventory. As such, both pluggable databases may share some metadata, with the possibility of centralized maintenance. In another example, application 110 may be a retailing application that has one pluggable database for inventory management and another pluggable database for account management, again with some shared metadata. Database server 100 may host multiple applications, such as 110.
  • Some or all pluggable databases of application 110 may be contained within an application container, such as 120. Metadata that is shared by pluggable databases of application 110 may be centralized within metadata, such as original metadata 140, within application container 120.
  • For example, some or all of the pluggable databases of application 110 may share a relational schema that occupies original metadata 140. In embodiments, original metadata 140 contains or is contained within a control file and/or a data dictionary.
  • In an embodiment, application container 120 contains a pluggable database, such as 130 or an additional one such as an application root database, which contains some or all of original metadata 140. In an alternative embodiment, application container 120 itself directly contains all of original metadata 140 without delegating such containment to any of the pluggable databases of application container 120.
  • 2.2 Application Container Upgrade
  • Over time, a new software version of application 110 may be released. A new version may be a major release with changes that affect multiple parts of application 110 or a minor patch that is limited in scope and impact. In this example, during operation, database server 100 receives a command to upgrade application 110 with a major release.
  • A minor patch may be applied to application 110 without taking application 110 out of service. In contrast, applying a major release would traditionally disrupt service of application 110.
  • However, database server 100 is configured to apply a major release without disrupting service. Database server 100 achieves this by creating cloned metadata 180 that may temporarily act as the metadata system of record for application container 120 while the upgrade adjusts original metadata 140.
  • In operation, database server 100 may receive maintenance request 160 to upgrade application 110. Maintenance request 160 may be a digital message such as an XML document, a remote request such as an HTTP Get, a subroutine invocation, a software command, or other software signal.
  • Maintenance request 160 may be generated by a script such as of SQL or shell, or manually entered such as at a command prompt. Maintenance request 160 may be delivered to database server 100 either synchronously such as with HTTP or asynchronously such as with Java message service (JMS).
  • Maintenance request 160 may bear an identifier of application container 120. Maintenance request 160 may specify a file path to a script such as of SQL or shell and/or a file path to a software package or archive that contains a release of application 110.
  • 2.3 Application Container Clone
  • Database server 100 may react to maintenance request 160 by creating reference container 170 within a same database server instance 100 as application container 120. Reference container 170 need not be a complete clone of application container 120.
  • For example, reference container 170 need not have a copy of pluggable database 130. In embodiments, reference container 170 may allow only read-only use, comprise read-only files, and/or refuse client connections.
  • However, cloned metadata 180 should more or less be a clone of original metadata 140. For example, within original metadata 140 are metadata objects, such as 150, that may be copied into cloned metadata 180, such as object 155.
  • 2.4 Metadata Diversion
  • Because metadata objects 150 and 155 have the same data, client read-access demand (such as read request 190) upon object 150 may instead be satisfied by reading its clone, object 155. Furthermore, original metadata 140 and object 150 may be inconsistent or otherwise unavailable during execution of read request 190, such as during an upgrade of application 110.
  • Read request 190 may be a digital message such as an XML document, a remote request such as an HTTP Get, a subroutine invocation, a software command, or other software signal.
  • Read request 190 may be generated by a script such as of SQL or shell, or manually entered such as at a command prompt. Read request 190 may be delivered to database server 100 either synchronously such as with HTTP or asynchronously such as with Java message service (JMS).
  • Read request 190 may identify pluggable database 130 or application container 120. Read request 190 may expressly specify reading of metadata. Alternatively, read request 190 may specify reading of non-metadata data that impliedly requires reading metadata to support reading of other data.
  • Database server 100 detects that it receives read request 190 while original metadata 140 is being upgraded. Database server 100 reacts to detecting this condition by executing read request 190 against cloned metadata 180 instead of original metadata 140. The mechanics of diverting read request 190 from original metadata 140 to cloned metadata 180 are discussed later herein.
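  • The diversion decision described above can be sketched as follows. This is an illustrative sketch only; the class shape and names (such as `upgrade_in_progress` and `read_metadata`) are assumptions for exposition and are not prescribed by the embodiments.

```python
# Minimal sketch of read diversion during an upgrade. All names are
# hypothetical; the embodiments do not prescribe this implementation.

class ApplicationContainer:
    def __init__(self, original_metadata):
        self.original_metadata = original_metadata   # system of record
        self.cloned_metadata = None                  # populated at upgrade time
        self.upgrade_in_progress = False

    def begin_upgrade(self):
        # Clone the original metadata into a reference copy, then set a
        # flag so that subsequent reads are diverted to the clone.
        self.cloned_metadata = dict(self.original_metadata)
        self.upgrade_in_progress = True

    def end_upgrade(self):
        # Original metadata is consistent again; stop diverting reads.
        self.upgrade_in_progress = False
        self.cloned_metadata = None

    def read_metadata(self, key):
        # While the upgrade flag is set, satisfy reads from the clone.
        source = (self.cloned_metadata if self.upgrade_in_progress
                  else self.original_metadata)
        return source[key]

container = ApplicationContainer({"table_count": 42})
container.begin_upgrade()
container.original_metadata["table_count"] = None  # inconsistent mid-upgrade
assert container.read_metadata("table_count") == 42  # served by the clone
container.end_upgrade()
```

After `end_upgrade`, reads resume against the (now consistent) original metadata, mirroring the reinstatement described later herein.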
  • 3.0 Metadata Diversion Process
  • FIG. 2 is a flow diagram that depicts an example process that diverts, to cloned metadata, live access to original metadata of an application container that is being concurrently upgraded. FIG. 2 is discussed with reference to FIG. 1.
  • Step 201 is preparatory. In step 201, original metadata is stored within an application container. For example, database server 100 stores original metadata 140 within files of application container 120.
  • Original metadata 140 may be created and stored when application container 120 is created. For example, installation of application 110 may cause creation of application container 120, pluggable databases such as 130, and original metadata 140.
  • Storage of original metadata 140 may occur into volatile or non-volatile memory or onto mechanical disk. Such storage may be remote or local to the computer(s) that host database server 100.
  • 3.1 Application Container Upgrade
  • In step 202, a maintenance request to adjust the original metadata is received. For example, database server 100 may receive maintenance request 160 to upgrade application 110. In an embodiment, maintenance request 160 specifies that application container 120 should remain in service during execution of maintenance request 160.
  • In step 203 and in response to receiving the maintenance request, a reference container is created. For example, database server 100 creates reference container 170 while fulfilling maintenance request 160.
  • In an embodiment, maintenance request 160 expressly indicates that a reference container should be created. In an embodiment, database server 100 implicitly creates reference container 170 to fulfill maintenance request 160.
  • In an embodiment, database server 100 creates reference container 170 by copying some or all files of application container 120. For example, some or all of original metadata 140 may occupy a control file that may be copied from application container 120 to reference container 170.
  • 3.2 Metadata Location Mapping
  • In step 204, a read request is received during or after creation of the reference container and while the maintenance request is outstanding (received but unfulfilled). For example, database server 100 receives read request 190 that attempts to read original metadata 140.
  • Read request 190 may attempt to read particular objects, such as 150, that occupy original metadata 140. Read request 190 may originate from within application 110, such as during live and ordinary transaction processing against application container 120 or an included pluggable database, such as 130.
  • Embodiments of database server 100 may detect that application container 120 is being upgraded and that original metadata 140 may be unavailable because of the ongoing upgrade. In an embodiment, database server 100 sets an upgrade flag upon receipt of maintenance request 160 to indicate an ongoing upgrade. Database server 100 may check the upgrade flag to decide how to process read request 190.
  • In embodiments, database server 100 may establish more significant state changes in response to receiving maintenance request 160 to indicate an ongoing upgrade. For example, database server 100 may temporarily adjust original metadata 140 to indicate that some or all access to original metadata 140 should be diverted to cloned metadata 180.
  • In an embodiment, database server 100 maintains a lookup table or other association that maps metadata access to actual metadata, such as 140 or 180. In an embodiment, the mapping is from client connection to actual metadata.
  • In an embodiment, the mapping is from pluggable database to actual metadata. For example, one pluggable database of application container 120 may have its metadata access diverted to cloned metadata 180. Whereas, another pluggable database of the same application container 120 at the same time may directly access original metadata 140.
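  • The per-pluggable-database mapping described above can be sketched as a simple lookup table. The dictionary shapes and names here are hypothetical illustrations, not part of the embodiments.

```python
# Sketch of a per-pluggable-database diversion map: one pluggable
# database reads the clone while another still reads the original.

metadata = {
    "original": {"schema_version": 1},
    "cloned":   {"schema_version": 1},   # snapshot taken at clone time
}

# Maps each pluggable database to the metadata collection it should read.
diversion_map = {
    "pdb_sales": "cloned",    # diverted during the upgrade
    "pdb_hr":    "original",  # still reads the original directly
}

def read_metadata(pdb_name, key):
    source = diversion_map.get(pdb_name, "original")
    return metadata[source][key]

metadata["original"]["schema_version"] = 2  # the upgrade touches the original
assert read_metadata("pdb_sales", "schema_version") == 1  # clone untouched
assert read_metadata("pdb_hr", "schema_version") == 2
```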
  • In an embodiment, diversion or selective (mapped) diversion may remain in effect after upgrading application container 120 and perhaps indefinitely. For example, cloned metadata 180 is not upgraded and may be retained for backward compatibility needed to support a legacy pluggable database whose codebase maintenance has ceased.
  • In an embodiment, an identifier determines which metadata is actually read. For example, an identifier may be configured to identify either the reference container or the application container. The current value of the identifier may be used to select which metadata is actually read during metadata retrieval.
  • In an embodiment, adjustment of original metadata 140 during an upgrade includes storing an identifier or locator of cloned metadata 180 into original metadata 140. In a preferred embodiment, an application root database of application container 120 contains original metadata 140, and adjustment of original metadata 140 includes replacement of a reference to the application root database with a reference to a clone of the application root database that occupies reference container 170.
  • In an embodiment, this may include storing an identifier of an object of cloned metadata 180, such as object 155, into original metadata 140.
  • For example, the declaration of object 150 may be decoupled from the implementation of object 150. Although not shown, original metadata 140 or another part of application container 120 may contain a metadata entry that declares object 150 and provides a pointer or reference to object 150 itself.
  • In an embodiment and as part of creating reference container 170, database server 100 temporarily adjusts the declaration of object 150, such that the reference within the declaration points to object 155 instead of object 150. In an embodiment, database server 100 may ordinarily inspect references and identifiers within original metadata 140 as part of detecting where metadata objects, such as 150, actually reside, such as within cloned metadata 180. The mechanics of metadata cross-referencing and retargeting of metadata are discussed later herein.
  • 3.3 Metadata Switching
  • Steps 205-206 may potentially occur simultaneously. This may be partly because requests 160 and 190 may be fulfilled by separate threads of execution and partly because requests 160 and 190 operate upon different metadata collections 140 and 180.
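  • The independence of requests 160 and 190 can be illustrated with two threads operating on disjoint metadata copies. The sketch below is an assumption-laden illustration; a real database server would use its own concurrency machinery rather than Python threads.

```python
# Sketch: the maintenance request and the read request run on separate
# threads and touch different metadata collections, so neither blocks
# the other. Names are hypothetical.
import threading
import time

original = {"version": 1}
cloned = dict(original)   # snapshot taken when the clone was created
results = {}

def execute_maintenance():
    # Step 205: the long-running upgrade mutates only the original.
    original["version"] = None      # transiently inconsistent
    time.sleep(0.01)
    original["version"] = 2

def execute_read():
    # Step 206: the read touches only the clone, so no lock is needed.
    results["read"] = cloned["version"]

upgrade = threading.Thread(target=execute_maintenance)
reader = threading.Thread(target=execute_read)
upgrade.start(); reader.start()
upgrade.join(); reader.join()

assert results["read"] == 1       # served consistently from the clone
assert original["version"] == 2   # upgrade completed on the original
```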
  • In step 205, the maintenance request is executed upon the original metadata. For example, database server 100 executes maintenance request 160 to upgrade application container 120.
  • Execution of maintenance request 160 may be more or less long-running as compared to ordinary online transaction processing (OLTP) such as for read request 190. For example, maintenance request 160 may cause original metadata 140 to become inconsistent or otherwise unavailable during execution of maintenance request 160 and without causing any interruption of service at application container 120.
  • In step 206, the read request is executed upon the cloned metadata. For example, database server 100 executes read request 190 to retrieve desired information from cloned metadata 180, even though read request 190 would ordinarily retrieve the same information from original metadata 140 instead.
  • After completion of step 205, original metadata 140 has regained consistency and may again become the metadata system of record for application container 120. By this point metadata 140 and 180, which were more or less identical upon completion of step 203, may have divergent content.
  • For example, the upgrade of application 110 may cause addition of a new column to a metadata table of original metadata 140. However so long as read requests such as 190 continue to be diverted to cloned metadata 180, upgraded metadata (such as the new column) will not be available to application 110.
  • As such, completion of step 205 may finally include or be followed by restoration of identifiers and references that were retargeted to divert access to cloned metadata 180. For example, any reference to cloned metadata 180 or object 155 should be restored by database server 100 to once again respectively refer to original metadata 140 and object 150.
  • However and although such restoration of original targeting of metadata should not occur until step 205 is complete, such restoration need not wait for completion of step 206, which is the actual use of cloned metadata 180. For example, the reinstatement of original metadata 140 as a system of record may occur even though some reading of cloned metadata 180 is still ongoing.
  • In an embodiment, reinstatement of original metadata 140 as a system of record is full reinstatement for all uses and clients of application container 120. In another embodiment, reinstatement is limited to a subset of the pluggable databases of application container 120.
  • For example, one pluggable database may have its metadata retrieval resume access of original metadata 140. Whereas metadata access by another pluggable database may continue to be diverted to cloned metadata 180.
  • In a preferred embodiment, a so-called ‘synchronization’ (sync) command designates a subset of pluggable databases that should have restored access to original metadata 140, which was upgraded and is ready to be returned into service. In an embodiment, a sync command also causes additional upgrade activity to be applied to the subset of pluggable databases. The sync command is discussed later herein.
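  • Such a sync command, restoring a designated subset of pluggable databases to the upgraded original metadata, might be sketched as follows; the map contents and names are hypothetical.

```python
# Sketch: 'sync' cancels diversion only for the pluggable databases it
# designates; a legacy pluggable database remains diverted.

diversion_map = {
    "pdb_sales":  "reference_container",
    "pdb_hr":     "reference_container",
    "pdb_legacy": "reference_container",
}

def sync(pdb_names):
    # Cancel diversion for the designated pluggable databases only.
    # Additional per-database upgrade activity could be applied here.
    for name in pdb_names:
        diversion_map[name] = "application_container"

sync(["pdb_sales", "pdb_hr"])
assert diversion_map["pdb_sales"] == "application_container"
assert diversion_map["pdb_legacy"] == "reference_container"  # still diverted
```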
  • Furthermore and eventually, steps 205-206 finish, and cloned metadata 180 may become unnecessary. At that point, the upgrading of application 110 is complete, and reference container 170 may be deleted. However scenarios explained below may more or less prevent deletion of reference container 170, such as upgrading application container 120 without upgrading all of its pluggable databases. In that case, at least one of its pluggable databases may have a continued need for cloned metadata 180.
  • As such, deletion of reference container 170 may be inappropriate until some time after upgrading application container 120. In another example, reference container 170 is not deleted so that a subset of application pluggable databases can be restored, within application container 120, to an historic version from backup that expects old metadata.
  • In an embodiment, database server 100 automatically deletes reference container 170 when no longer needed. In an embodiment a command, such as an interactive command, causes database server 100 to delete reference container 170.
  • 4.0 Metadata Binding
  • FIG. 3 is a block diagram that depicts an example container database 300 that maintains an association between an application container and a reference container to achieve diversion of metadata retrieval, in an embodiment. Container database 300 may be hosted by an implementation of database server 100.
  • Container database 300 may be a multitenant container database that may contain at least one application container, such as 320, for at least one application. Application container 320 contains original metadata 340.
  • At the beginning of a software upgrade of application container 320, the database server creates reference container 370, which includes copying cloned metadata 380 from original metadata 340. During the upgrade, pluggable databases contained within application container 320 may read cloned metadata 380, instead of original metadata 340, to retrieve metadata for ordinary purposes.
  • However, retrieval of metadata from one application container, 370, for use by a pluggable database of another application container, 320, may need a cross-reference from application container 320 to reference container 370.
  • To facilitate cross referencing, each application container may have its own unique identifier. For example, reference container 370 is identified by identifier 357. Whereas application container 320 may be identified by a different identifier.
  • For example, an identifier may be a more or less globally unique identifier. In an embodiment, a database identifier is guaranteed to be unique only within container database 300.
  • The identifier of one application container, such as identifier 357, may be specified within an association that logically binds one application container to another for the purpose of metadata retrieval. For example, application container 320 contains association 356 as a reference or pointer that identifies reference container 370 as the source for metadata of a pluggable database or its application container.
  • In an embodiment, association 356 may be used to divert metadata access for all pluggable databases within application container 320. In an embodiment, each pluggable database of application container 320 has its own association 356.
  • As such, a subset of pluggable databases of application container 320 may be diverted to cloned metadata 380. Furthermore, each pluggable database may have its metadata retrieval diverted to its own reference container. Furthermore, a pluggable database may have its metadata retrieval diverted to another application container that is not a reference clone.
  • During metadata retrieval for a pluggable database of application container 320, the database server may examine association 356 to detect from which application container the metadata should be retrieved. The database server may then read the metadata from whichever application container is specified by association 356. Upon completion of the upgrade, the database server may reset association 356 to refer to application container 320.
  • 5.0 Clone Consistency
  • FIG. 4 is a block diagram that depicts an example container database 400 that processes undo records to make consistent a reference container, in an embodiment. Container database 400 may be hosted by an implementation of database server 100.
  • Container database 400 contains application container 420. During a software upgrade of application container 420, the database server may copy reference container 470 from application container 420, such as by copying data files.
  • However, copying may occur while application container 420 sustains a transactional load. For example, transactions may remain ongoing until after creation of reference container 470.
  • Furthermore, intermediate data written by an ongoing transaction may be included when copying application container 420 to reference container 470. For example a copied transaction, such as 460, may have altered row 495 of relational table 490 in a way that would only be consistent if transaction 460 finishes.
  • As such, row 495 may be in an inconsistent state. Furthermore, transaction 460 will not finish within reference container 470, although the transaction may eventually finish within application container 420.
  • As such, reference container 470 may be inconsistent. Because reference container 470 is used solely for metadata access, inconsistent ordinary (not metadata) data may be tolerated (ignored).
  • However, inconsistent cloned metadata should be remedied. For example, relational table 490 may be part of cloned metadata instead of ordinary data. For example, relational table 490 may contain predefined specification data, such as zip codes.
  • 5.1 Undo Record and Transaction Rollback
  • Achieving consistency within reference container 470 may involve undoing (rolling back) transaction 460. The database server typically processes undo records, such as 480, to roll back a transaction.
  • Each undo record 480 may contain content of a data block as it existed immediately before being modified by a transaction. For example, undo record 480 may hold the prior content of row 495 as it was before transaction 460.
  • The database server may apply undo record 480 to reference container 470 to roll back transaction 460. To accomplish the roll back in an embodiment, the database server reads undo record 480 from within application container 420.
  • Undo record 480 may be copied into reference container 470 during the creation (by cloning) of reference container 470. In a preferred embodiment, the database server accomplishes the roll back by reading the copy of undo record 480 that occupies reference container 470.
  • Regardless of which application container the database server retrieves the undo record from, the undo record may contain object identifiers that are valid only within application container 420 and not within reference container 470. For example, the undo record may be applied as-is to application container 420. However, the undo record cannot be applied as-is to reference container 470 because identifiers might not be portable between application containers.
  • As such the database server may, for example with the help of a lookup table, translate any identifier that is specified by undo record 480 and that is valid within application container 420 to a different identifier that is valid within reference container 470.
  • For example, the database server may populate the lookup table as part of the container cloning process that creates reference container 470. For example, when the database server assigns a new identifier, such as 457, to a clone of a metadata object, the database server may add to the lookup table an entry that maps old identifier 456 to new identifier 457.
  • During processing of undo record 480, the database server may use old identifier 456, as specified in undo record 480, as a lookup key into the lookup table to translate old identifier 456 into new identifier 457. In this way, the database server may apply undo records to reference container 470, even though the undo records specify identifiers that are invalid within reference container 470.
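  • The identifier translation during undo processing can be sketched as a dictionary lookup. Identifiers 456 and 457 follow the example above; the record shapes are assumptions for illustration.

```python
# Sketch: translate an application-container object identifier into the
# corresponding reference-container identifier before applying an undo
# record to the reference container.

# Populated during cloning: old (application container) object id maps
# to new (reference container) object id.
id_map = {456: 457}

reference_container = {
    457: {"row_495": "uncommitted value"},   # clone of object 456
}

def apply_undo(undo_record):
    # Translate the identifier, then restore the pre-transaction image.
    old_id = undo_record["object_id"]
    new_id = id_map[old_id]                  # 456 -> 457
    reference_container[new_id].update(undo_record["prior_image"])

undo_record = {"object_id": 456,
               "prior_image": {"row_495": "committed value"}}
apply_undo(undo_record)
assert reference_container[457]["row_495"] == "committed value"
```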
  • 5.2 Rollback Mechanics
  • Reference container 470, as a copy of application container 420, may contain some written data that was committed by a completed transaction and some written data that is uncommitted (not yet committed) by an ongoing transaction. As such, the database server may need a mechanism to distinguish uncommitted writes that should be undone within reference container 470 and committed writes that should not be disturbed.
  • In an embodiment, application container 420 (and its clone, reference container 470) may maintain a listing of pending (ongoing) transactions and their corresponding undo records. In an embodiment, undo record 480 may specify which transaction created it.
  • In an embodiment, each transaction is assigned a unique serial number or timestamp when committed, and this serial number may be associated with or recorded within each undo record for the transaction. In an embodiment, the database server detects which undo records correspond to uncommitted transactions by detecting which undo records have no associated transaction serial number. The database server may be configured to apply to reference container 470 those undo records that do not have an associated transaction serial number.
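  • Selecting which undo records to apply can be sketched as a filter on the commit serial number; the record format below is an assumption for illustration.

```python
# Sketch: only undo records for uncommitted transactions (those with no
# commit serial number) are applied to the reference container.

undo_records = [
    {"txn": "T1", "commit_serial": 1001, "prior": {"a": 0}},  # committed
    {"txn": "T2", "commit_serial": None, "prior": {"b": 9}},  # uncommitted
]

reference_data = {"a": 1, "b": 5}   # 'b' holds an uncommitted write

for record in undo_records:
    if record["commit_serial"] is None:   # roll back only uncommitted work
        reference_data.update(record["prior"])

assert reference_data == {"a": 1, "b": 9}   # committed write undisturbed
```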
  • 6.0 Independent Synchronization
  • FIG. 5 is a scenario diagram that depicts an example system of computers 500 that selectively upgrades a subset of pluggable databases of an application container, in an embodiment. System 500 may be composed of at least one computer.
  • System 500 includes clients 515-516 and database server 510. Clients 515-516 may be external clients of database server 510.
  • For example, client 516 may occupy a different computer than database server 510. Likewise, client 516 may occupy a same computer as database server 510, but occupy a different operating system process.
  • In some cases, client 516 may be embedded within database server 510. For example, client 516 may implement a maintenance chore that database server 510 performs upon itself.
  • An implementation of client 516 may include a codebase that contains a database connector, such as an open database connectivity (ODBC) driver. Communication between client 516 and database server 510 may occur through transmission control protocol (TCP) sockets, through shared memory, or another inter-process channel.
  • For example, client 516 may send upgrade request 501 to database server 510 to upgrade the software of application container 520 that is contained within database server 510. Although not shown, application container 520 may reside in a container database that resides in database server 510. Database server 510 may be an implementation of database server 100.
  • Database server 510 may react to upgrade request 501 by cloning application container 520 to create reference container 570, shown as create 502. In a preferred embodiment, application containers 520 and 570 occupy a same container database. In an embodiment, database server 510 has a plurality of container databases, and application containers 520 and 570 occupy separate container databases.
  • 6.1 SQL Recording
  • However, creation of reference container 570 is not the only work that database server 510 must perform to fulfill upgrade request 501. Typically a software upgrade of application container 520 involves altering the metadata of application container 520.
  • In an embodiment, upgrading the metadata of application container 520 involves the execution of data manipulation language (DML) and/or data definition language (DDL) statements, which may be scripted or dynamically generated. In an embodiment, all or some of these statements may be recorded, along with their actual parameters.
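  • Statement recording might be sketched as follows; the recorder and the statements shown are hypothetical illustrations, not the embodiment's actual DDL.

```python
# Sketch: each upgrade statement is recorded, along with its actual
# parameters, so the statements can later be replayed into individual
# pluggable databases.

recorded_statements = []

def execute_and_record(statement, params):
    recorded_statements.append((statement, params))
    # ... the statement would also be executed against the application
    # container's metadata here ...

execute_and_record("ALTER TABLE app_metadata ADD (region VARCHAR2(8))", ())
execute_and_record("UPDATE app_metadata SET version = :1", (2,))

assert len(recorded_statements) == 2
assert recorded_statements[1][1] == (2,)   # actual parameters are kept
```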
  • In an embodiment, upgrading the metadata of application container 520 involves creating or modifying a database view. For example, application container 520 may restrict some of its pluggable databases to using a limited view that exposes less metadata.
  • 6.2 Limited Asynchrony
  • Application container 520 contains pluggable databases such as 530. Also in response to upgrade request 501, database server 510 adjusts a metadata pointer for each pluggable database or for application container 520 as a whole.
  • This is shown as configure diversion 503. After adjustment, the metadata pointer(s) refer to reference container 570 instead of application container 520.
  • As such, client 515 may send read request 504 to database server 510 to read metadata for pluggable database 530. However because metadata retrieval is diverted while upgrade request 501 is executing, read request 504 is fulfilled by read metadata 505 that reads reference container 570 instead of application container 520.
  • As depicted by the dashed arrow on the right edge of FIG. 5, time flows downward. As such, read request 504 occurs after the creation (create 502) of reference container 570.
  • The shown embodiment achieves limited asynchrony by enabling read request 504 to access cloned metadata while upgrade request 501 is being simultaneously executed, so long as interactions 502-503 have finished. In an embodiment not shown, database server 510 receives read request 504 during the creation of reference container 570 and buffers the request without processing it until interactions 502-503 have finished.
  • Whether requests are buffered or not, database server 510 detects that metadata retrieval for application container 520 is diverted to reference container 570. For example, database server 510 may still be upgrading application container 520 for upgrade request 501. As such, database server 510 sends (diverts) read metadata 505 to read the metadata of reference container 570 to fulfill read request 504.
  • 6.3 SQL Replay
  • Eventually, fulfilment of upgrade request 501 finishes, such as after read metadata 505. At this time, continued diversion may or may not be necessary.
  • For example after the upgrade and although not shown, application container 520 may have a legacy pluggable database that cannot be upgraded because development of the legacy pluggable database has ceased. As such, the legacy pluggable database may have a continued need for the backwards-compatible metadata of reference container 570.
  • Whereas, pluggable database 530 is upgradable. To accommodate this, database server 510 may be directed by itself or by another agent to upgrade a subset of pluggable databases.
  • For example and although not shown, database server 510 may receive a command, such as a scripted or interactive command, to upgrade specified pluggable databases, such as 530. Database server 510 may react by invoking synchronize 506 upon pluggable database 530.
  • Synchronize 506 cancels, for the specified pluggable databases, the diversion that was established by configure diversion 503. However before, during, or after cancellation and depending on the implementation, synchronize 506 may also cause the specified pluggable databases to be individually upgraded.
  • In a preferred embodiment, application container 520 and pluggable database 530 may have metadata that contains a respective data dictionary. During synchronization, the data dictionary of pluggable database 530 may be updated to reflect new data objects that are added to application container 520. This updating may involve recording, refreshing, or updating a link or pointer between application container 520 and pluggable database 530, or between objects within application container 520 and pluggable database 530.
  • Upgrading pluggable database 530 may entail executing logic of components 510, 520, and/or 530. It may also entail replaying (repeating), into pluggable database 530, database statements that were recorded while upgrading the metadata of application container 520 during execution of upgrade request 501, as explained above.
  • In a preferred embodiment, the database statements are recorded as a re-playable script. Either creating or replaying the script may involve filtering out (skipping) statements that are only operable or useful at application container 520.
  • For example, metadata objects may be replicated into both of application container 520 and pluggable database 530. However, application container 520 may have objects that pluggable database 530 lacks.
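  • Filtering during replay can be sketched as follows; the container-only marker and the statements themselves are hypothetical illustrations.

```python
# Sketch: replay the recorded script into a pluggable database while
# skipping statements that are only operable at the application
# container. The boolean marker is a hypothetical convention.

recorded_script = [
    ("ALTER TABLE app_metadata ADD (region VARCHAR2(8))", False),
    ("UPDATE container_registry SET state = 'UPGRADED'", True),   # container-only
    ("UPDATE app_metadata SET version = 2", False),
]

def replay(script, execute):
    replayed = []
    for statement, container_only in script:
        if container_only:
            continue            # not operable in a pluggable database
        execute(statement)
        replayed.append(statement)
    return replayed

executed = []
replayed = replay(recorded_script, executed.append)
assert len(replayed) == 2
assert all("container_registry" not in s for s in replayed)
```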
  • After fulfilment of synchronize 506, diversion ceases for pluggable database 530, which is re-associated with the metadata of application container 520. As such, read request 507 is satisfied by read metadata 508 using metadata from application container 520 instead of from reference container 570. However, pluggable databases of application container 520 that have not been upgraded, such as by synchronize 506, continue to have metadata retrieval diverted to reference container 570.
  • 7.0 Hardware Overview
  • According to one embodiment, the techniques described herein are implemented by one or more special-purpose computing devices. The special-purpose computing devices may be hard-wired to perform the techniques, or may include digital electronic devices such as one or more application-specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs) that are persistently programmed to perform the techniques, or may include one or more general purpose hardware processors programmed to perform the techniques pursuant to program instructions in firmware, memory, other storage, or a combination. Such special-purpose computing devices may also combine custom hard-wired logic, ASICs, or FPGAs with custom programming to accomplish the techniques. The special-purpose computing devices may be desktop computer systems, portable computer systems, handheld devices, networking devices or any other device that incorporates hard-wired and/or program logic to implement the techniques.
  • For example, FIG. 6 is a block diagram that illustrates a computer system 600 upon which an embodiment of the invention may be implemented. Computer system 600 includes a bus 602 or other communication mechanism for communicating information, and a hardware processor 604 coupled with bus 602 for processing information. Hardware processor 604 may be, for example, a general purpose microprocessor.
  • Computer system 600 also includes a main memory 606, such as a random access memory (RAM) or other dynamic storage device, coupled to bus 602 for storing information and instructions to be executed by processor 604. Main memory 606 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 604. Such instructions, when stored in non-transitory storage media accessible to processor 604, render computer system 600 into a special-purpose machine that is customized to perform the operations specified in the instructions.
  • Computer system 600 further includes a read only memory (ROM) 608 or other static storage device coupled to bus 602 for storing static information and instructions for processor 604. A storage device 610, such as a magnetic disk or optical disk, is provided and coupled to bus 602 for storing information and instructions.
  • Computer system 600 may be coupled via bus 602 to a display 612, such as a cathode ray tube (CRT), for displaying information to a computer user. An input device 614, including alphanumeric and other keys, is coupled to bus 602 for communicating information and command selections to processor 604. Another type of user input device is cursor control 616, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 604 and for controlling cursor movement on display 612. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane.
  • Computer system 600 may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computer system causes or programs computer system 600 to be a special-purpose machine. According to one embodiment, the techniques herein are performed by computer system 600 in response to processor 604 executing one or more sequences of one or more instructions contained in main memory 606. Such instructions may be read into main memory 606 from another storage medium, such as storage device 610. Execution of the sequences of instructions contained in main memory 606 causes processor 604 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions.
  • The term “storage media” as used herein refers to any non-transitory media that store data and/or instructions that cause a machine to operate in a specific fashion. Such storage media may comprise non-volatile media and/or volatile media. Non-volatile media includes, for example, optical or magnetic disks, such as storage device 610. Volatile media includes dynamic memory, such as main memory 606. Common forms of storage media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, or any other memory chip or cartridge.
  • Storage media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between storage media. For example, transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus 602. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.
  • Various forms of media may be involved in carrying one or more sequences of one or more instructions to processor 604 for execution. For example, the instructions may initially be carried on a magnetic disk or solid state drive of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem. A modem local to computer system 600 can receive the data on the telephone line and use an infra-red transmitter to convert the data to an infra-red signal. An infra-red detector can receive the data carried in the infra-red signal and appropriate circuitry can place the data on bus 602. Bus 602 carries the data to main memory 606, from which processor 604 retrieves and executes the instructions. The instructions received by main memory 606 may optionally be stored on storage device 610 either before or after execution by processor 604.
  • Computer system 600 also includes a communication interface 618 coupled to bus 602. Communication interface 618 provides a two-way data communication coupling to a network link 620 that is connected to a local network 622. For example, communication interface 618 may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface 618 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, communication interface 618 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
  • Network link 620 typically provides data communication through one or more networks to other data devices. For example, network link 620 may provide a connection through local network 622 to a host computer 624 or to data equipment operated by an Internet Service Provider (ISP) 626. ISP 626 in turn provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet” 628. Local network 622 and Internet 628 both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link 620 and through communication interface 618, which carry the digital data to and from computer system 600, are example forms of transmission media.
  • Computer system 600 can send messages and receive data, including program code, through the network(s), network link 620 and communication interface 618. In the Internet example, a server 630 might transmit a requested code for an application program through Internet 628, ISP 626, local network 622 and communication interface 618.
  • The received code may be executed by processor 604 as it is received, and/or stored in storage device 610, or other non-volatile storage for later execution.
  • In the foregoing specification, embodiments of the invention have been described with reference to numerous specific details that may vary from implementation to implementation. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. The sole and exclusive indicator of the scope of the invention, and what is intended by the applicants to be the scope of the invention, is the literal and equivalent scope of the set of claims that issue from this application, in the specific form in which such claims issue, including any subsequent correction.

Claims (21)

What is claimed is:
1. A method comprising:
storing, within an application container of a database application, original metadata that defines one or more objects for use by one or more pluggable databases of the database application;
receiving a maintenance request to adjust the original metadata;
creating, in response to receiving the maintenance request, a reference container that contains cloned metadata that is a copy of the original metadata;
receiving, during or after creating the reference container, a read request to read an object of the one or more objects;
concurrently performing both of:
executing the maintenance request upon the original metadata, and
executing the read request upon the cloned metadata of the reference container.
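The method of claim 1 can be illustrated with a minimal Python sketch. This is not Oracle's implementation; the class and method names (ApplicationContainer, begin_maintenance, and so on) are hypothetical, and an in-memory dictionary stands in for container metadata:

```python
import copy

class ApplicationContainer:
    """Toy model of an application container holding shared metadata."""

    def __init__(self, metadata):
        self.metadata = metadata   # original metadata: object name -> definition
        self.reference = None      # cloned metadata served while maintenance runs

    def begin_maintenance(self):
        # Create the "reference container": a clone of the original metadata,
        # so readers are not blocked while the original is adjusted.
        self.reference = copy.deepcopy(self.metadata)

    def apply_maintenance(self, name, new_definition):
        # The maintenance request executes upon the original metadata ...
        self.metadata[name] = new_definition

    def read(self, name):
        # ... while concurrent read requests execute upon the cloned copy.
        source = self.reference if self.reference is not None else self.metadata
        return source[name]

    def end_maintenance(self):
        # Dropping the reference container routes readers back to the
        # now-upgraded original metadata.
        self.reference = None

app = ApplicationContainer({"orders_view": "SELECT * FROM orders_v1"})
app.begin_maintenance()
app.apply_maintenance("orders_view", "SELECT * FROM orders_v2")
during = app.read("orders_view")   # still the pre-upgrade definition
app.end_maintenance()
after = app.read("orders_view")    # the upgraded definition
```

The key property of the claim, captured above, is that the maintenance write and the read can proceed concurrently because they touch different copies of the metadata.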
2. The method of claim 1 wherein creating the reference container comprises associating the application container or the original metadata to an identifier of the reference container or an identifier of the cloned metadata.
3. The method of claim 1 wherein executing the maintenance request comprises:
issuing one or more database statements to the application container, and
creating a recording of the one or more database statements;
issuing the one or more database statements to the application pluggable database by replaying the recording.
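The record-and-replay step of claim 3 can be sketched as follows. This is an illustrative model only: plain Python lists stand in for the application container and a pluggable database, and StatementRecorder is a hypothetical name:

```python
class StatementRecorder:
    """Records statements issued to one container so they can be replayed."""

    def __init__(self):
        self.recording = []

    def issue(self, container, statement):
        # Issue a database statement to the application container
        # and capture it in the recording.
        container.append(statement)
        self.recording.append(statement)

    def replay(self, pluggable_db):
        # Issue the same statements to a pluggable database
        # by replaying the recording in order.
        for statement in self.recording:
            pluggable_db.append(statement)

app_container, pdb = [], []
recorder = StatementRecorder()
recorder.issue(app_container, "ALTER TABLE t ADD COLUMN c INT")
recorder.replay(pdb)
```

Replaying the recording, rather than re-deriving the statements, is what lets each pluggable database receive exactly the maintenance applied to the application container.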
4. The method of claim 1 wherein creating the reference container comprises making the reference container consistent by applying an undo record to roll back a transaction.
5. The method of claim 4 wherein:
creating the reference container comprises assigning a new identifier to an object within the reference container;
the undo record contains an old identifier of the object;
applying the undo record comprises translating the old identifier into the new identifier.
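The identifier translation of claims 4-5 can be sketched as below. Again a hedged toy model, not the patented mechanism: the clone assigns new object identifiers, so an undo record written against the original (carrying the old identifier) must be translated before it can roll back a transaction in the clone. All names here are illustrative:

```python
def apply_undo(clone, undo_record, id_map):
    """Roll back an uncommitted change inside a cloned container.

    The undo record carries the object's old identifier; id_map
    translates it to the new identifier assigned during cloning.
    """
    new_id = id_map[undo_record["object_id"]]        # old id -> new id
    clone[new_id] = undo_record["before_image"]      # restore prior value

# The clone gave object 7 (old id) the new id 42, with an
# in-flight transaction's uncommitted value still present.
clone = {42: "uncommitted value"}
undo = {"object_id": 7, "before_image": "committed value"}
apply_undo(clone, undo, id_map={7: 42})
```

After translation and rollback, the reference container is transactionally consistent even though its identifiers differ from the original's.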
6. The method of claim 1 further comprising:
receiving, after executing the maintenance request, an additional request to read the original metadata;
executing the additional request upon the original metadata of the application container.
7. The method of claim 6 wherein after executing the maintenance request comprises after receiving a command to cease using the reference container.
8. The method of claim 1 wherein the reference container is read only or does not accept client connections.
9. The method of claim 1 wherein a container database contains the application container.
10. The method of claim 1 wherein executing the maintenance request comprises at least one of:
executing data definition language (DDL), or
creating or modifying a database view.
11. The method of claim 1 wherein the cloned metadata comprises one or more metadata rows contained in one or more relational tables.
12. A system comprising:
database storage configured to store and retrieve, within an application container of an application, original metadata that defines one or more objects for use by one or more pluggable databases of the application;
a processor connected to the database storage and configured to:
receive a maintenance request to adjust the original metadata;
create, in response to receiving the maintenance request and within the database storage, a reference container that contains cloned metadata that is a copy of the original metadata;
receive, during or after creating the reference container, a read request to read an object of the one or more objects;
concurrently perform both of:
executing the maintenance request upon the original metadata, and
executing the read request upon the cloned metadata of the reference container.
13. One or more non-transitory computer-readable media storing instructions comprising:
first instructions that, when executed by one or more processors, cause storing, within an application container of an application, original metadata that defines one or more objects for use by one or more pluggable databases of the application;
second instructions that, when executed by one or more processors, cause receiving a maintenance request to adjust the original metadata;
third instructions that, when executed by one or more processors, cause creating, in response to receiving the maintenance request, a reference container that contains cloned metadata that is a copy of the original metadata;
fourth instructions that, when executed by one or more processors, cause receiving, during or after creating the reference container, a read request to read an object of the one or more objects;
fifth instructions that, when executed by one or more processors, cause concurrently performing both of:
executing the maintenance request upon the original metadata, and
executing the read request upon the cloned metadata of the reference container.
14. The one or more non-transitory computer-readable media of claim 13 wherein creating the reference container comprises associating the application container or the original metadata to an identifier of the reference container or an identifier of the cloned metadata.
15. The one or more non-transitory computer-readable media of claim 13 wherein executing the maintenance request comprises:
issuing one or more database statements to the application container, and
creating a recording of the one or more database statements;
issuing the one or more database statements to the application pluggable database by replaying the recording.
16. The one or more non-transitory computer-readable media of claim 13 wherein creating the reference container comprises making the reference container consistent by applying an undo record to roll back a transaction.
17. The one or more non-transitory computer-readable media of claim 16 wherein:
creating the reference container comprises assigning a new identifier to an object within the reference container;
the undo record contains an old identifier of the object;
applying the undo record comprises translating the old identifier into the new identifier.
18. The one or more non-transitory computer-readable media of claim 13 wherein the instructions further comprise:
sixth instructions that, when executed by one or more processors, cause receiving, after executing the maintenance request, an additional request to read the original metadata;
seventh instructions that, when executed by one or more processors, cause executing the additional request upon the original metadata of the application container.
19. The one or more non-transitory computer-readable media of claim 18 wherein after executing the maintenance request comprises after receiving a command to cease using the reference container.
20. The one or more non-transitory computer-readable media of claim 13 wherein a container database contains the application container.
21. The one or more non-transitory computer-readable media of claim 13 wherein executing the maintenance request comprises at least one of:
executing data definition language (DDL), or
creating or modifying a database view.
US15/266,917 2015-10-23 2016-09-15 Asynchronous shared application upgrade Active 2036-12-21 US10635658B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/266,917 US10635658B2 (en) 2015-10-23 2016-09-15 Asynchronous shared application upgrade

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201562245937P 2015-10-23 2015-10-23
US15/266,917 US10635658B2 (en) 2015-10-23 2016-09-15 Asynchronous shared application upgrade

Publications (3)

Publication Number Publication Date
US20180075086A1 US20180075086A1 (en) 2018-03-15
US20190278856A9 true US20190278856A9 (en) 2019-09-12
US10635658B2 US10635658B2 (en) 2020-04-28

Family

ID=61560666

Family Applications (1)

Application Number Title Priority Date Filing Date
US15/266,917 Active 2036-12-21 US10635658B2 (en) 2015-10-23 2016-09-15 Asynchronous shared application upgrade

Country Status (1)

Country Link
US (1) US10635658B2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230156481A1 (en) * 2021-11-12 2023-05-18 T-Mobile Innovations Llc Downtime optimized network upgrade process

Families Citing this family (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10922331B2 (en) 2012-09-28 2021-02-16 Oracle International Corporation Cloning a pluggable database in read-write mode
US10635674B2 (en) 2012-09-28 2020-04-28 Oracle International Corporation Migrating a pluggable database between database server instances with minimal impact to performance
US10606578B2 (en) 2015-10-23 2020-03-31 Oracle International Corporation Provisioning of pluggable databases using a central repository
CN108431810B (en) 2015-10-23 2022-02-01 甲骨文国际公司 Proxy database
US10572551B2 (en) 2015-10-23 2020-02-25 Oracle International Corporation Application containers in container databases
US10803078B2 (en) 2015-10-23 2020-10-13 Oracle International Corporation Ability to group multiple container databases as a single container database cluster
US10360269B2 (en) 2015-10-23 2019-07-23 Oracle International Corporation Proxy databases
US10440153B1 (en) 2016-02-08 2019-10-08 Microstrategy Incorporated Enterprise health score and data migration
US11283900B2 (en) 2016-02-08 2022-03-22 Microstrategy Incorporated Enterprise performance and capacity testing
US11349922B2 (en) 2016-04-06 2022-05-31 Marvell Asia Pte Ltd. System and method for a database proxy
US10237350B2 (en) 2016-04-06 2019-03-19 Reniac, Inc. System and method for a database proxy
US11386058B2 (en) 2017-09-29 2022-07-12 Oracle International Corporation Rule-based autonomous database cloud service framework
US11327932B2 (en) 2017-09-30 2022-05-10 Oracle International Corporation Autonomous multitenant database cloud service framework
US11829742B2 (en) 2019-08-15 2023-11-28 Microstrategy Incorporated Container-based server environments
US11106455B2 (en) 2019-08-15 2021-08-31 Microstrategy Incorporated Integration of containers with external elements
US11288053B2 (en) * 2019-08-15 2022-03-29 Microstrategy Incorporated Conversion and restoration of computer environments to container-based implementations
US11637748B2 (en) 2019-08-28 2023-04-25 Microstrategy Incorporated Self-optimization of computing environments
US11210189B2 (en) 2019-08-30 2021-12-28 Microstrategy Incorporated Monitoring performance of computing systems
US11507295B2 (en) 2019-08-30 2022-11-22 Microstrategy Incorporated Backup, restoration, and migration of computer systems
US11354216B2 (en) 2019-09-18 2022-06-07 Microstrategy Incorporated Monitoring performance deviations
US11360881B2 (en) 2019-09-23 2022-06-14 Microstrategy Incorporated Customizing computer performance tests
US11438231B2 (en) 2019-09-25 2022-09-06 Microstrategy Incorporated Centralized platform management for computing environments
US11836158B2 (en) 2020-02-03 2023-12-05 Microstrategy Incorporated Deployment of container-based computer environments
US11429595B2 (en) 2020-04-01 2022-08-30 Marvell Asia Pte Ltd. Persistence of write requests in a database proxy
US11954473B2 (en) 2021-09-20 2024-04-09 Microstrategy Incorporated Deployment architecture for multi-tenant cloud computing systems
US11861342B2 (en) 2022-01-28 2024-01-02 Microstrategy Incorporated Enhanced cloud-computing environment deployment

Family Cites Families (75)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6647510B1 (en) 1996-03-19 2003-11-11 Oracle International Corporation Method and apparatus for making available data that was locked by a dead transaction before rolling back the entire dead transaction
US7415466B2 (en) 1996-03-19 2008-08-19 Oracle International Corporation Parallel transaction recovery
US7031987B2 (en) 1997-05-30 2006-04-18 Oracle International Corporation Integrating tablespaces with different block sizes
US6272503B1 (en) 1997-05-30 2001-08-07 Oracle Corporation Tablespace-relative database pointers
US6185699B1 (en) 1998-01-05 2001-02-06 International Business Machines Corporation Method and apparatus providing system availability during DBMS restart recovery
US6205449B1 (en) 1998-03-20 2001-03-20 Lucent Technologies, Inc. System and method for providing hot spare redundancy and recovery for a very large database management system
US6226650B1 (en) 1998-09-17 2001-05-01 Synchrologic, Inc. Database synchronization and organization system and method
US6295610B1 (en) 1998-09-17 2001-09-25 Oracle Corporation Recovering resources in parallel
US9239763B2 (en) 2012-09-28 2016-01-19 Oracle International Corporation Container database
US6868417B2 (en) 2000-12-18 2005-03-15 Spinnaker Networks, Inc. Mechanism for handling file level and block level remote file accesses using the same server
US7305421B2 (en) 2001-07-16 2007-12-04 Sap Ag Parallelized redo-only logging and recovery for highly available main memory database systems
US8738568B2 (en) 2011-05-05 2014-05-27 Oracle International Corporation User-defined parallelization in transactional replication of in-memory database
US7493311B1 (en) 2002-08-01 2009-02-17 Microsoft Corporation Information server and pluggable data sources
US6976022B2 (en) 2002-09-16 2005-12-13 Oracle International Corporation Method and mechanism for batch processing transaction logging records
US6981004B2 (en) 2002-09-16 2005-12-27 Oracle International Corporation Method and mechanism for implementing in-memory transaction logging records
US7890466B2 (en) 2003-04-16 2011-02-15 Oracle International Corporation Techniques for increasing the usefulness of transaction logs
US7181476B2 (en) 2003-04-30 2007-02-20 Oracle International Corporation Flashback database
US7457829B2 (en) 2003-06-23 2008-11-25 Microsoft Corporation Resynchronization of multiple copies of a database after a divergence in transaction history
US7873684B2 (en) 2003-08-14 2011-01-18 Oracle International Corporation Automatic and dynamic provisioning of databases
US7660805B2 (en) 2003-12-23 2010-02-09 Canon Kabushiki Kaisha Method of generating data servers for heterogeneous data sources
US7870120B2 (en) 2004-05-27 2011-01-11 International Business Machines Corporation Method and system for processing a database query by a proxy server
US7822727B1 (en) 2004-07-02 2010-10-26 Borland Software Corporation System and methodology for performing read-only transactions in a shared cache
US20060047713A1 (en) 2004-08-03 2006-03-02 Wisdomforce Technologies, Inc. System and method for database replication by interception of in memory transactional change records
GB2419697A (en) 2004-10-29 2006-05-03 Hewlett Packard Development Co Virtual overlay infrastructures each having an infrastructure controller
EP1952283A4 (en) 2005-10-28 2010-01-06 Goldengate Software Inc Apparatus and method for creating a real time database replica
US20070118527A1 (en) 2005-11-22 2007-05-24 Microsoft Corporation Security and data filtering
US7822717B2 (en) 2006-02-07 2010-10-26 Emc Corporation Point-in-time database restore
US9026679B1 (en) 2006-03-30 2015-05-05 Emc Corporation Methods and apparatus for persisting management information changes
US7571225B2 (en) 2006-06-29 2009-08-04 Stratavia Corporation Standard operating procedure automation in database administration
US8364648B1 (en) 2007-04-09 2013-01-29 Quest Software, Inc. Recovering a database to any point-in-time in the past with guaranteed data consistency
US20080319958A1 (en) 2007-06-22 2008-12-25 Sutirtha Bhattacharya Dynamic Metadata based Query Formulation for Multiple Heterogeneous Database Systems
US8600977B2 (en) 2007-10-17 2013-12-03 Oracle International Corporation Automatic recognition and capture of SQL execution plans
US8090917B2 (en) 2008-05-09 2012-01-03 International Business Machines Corporation Managing storage and migration of backup data
US8745076B2 (en) 2009-01-13 2014-06-03 Red Hat, Inc. Structured query language syntax rewriting
US8549038B2 (en) 2009-06-15 2013-10-01 Oracle International Corporation Pluggable session context
US10120767B2 (en) 2009-07-15 2018-11-06 Idera, Inc. System, method, and computer program product for creating a virtual database
US8429134B2 (en) 2009-09-08 2013-04-23 Oracle International Corporation Distributed database recovery
EP2323047B1 (en) 2009-10-09 2020-02-19 Software AG Primary database system, replication database system and method for replicating data of a primary database system
US8484164B1 (en) * 2009-10-23 2013-07-09 Netapp, Inc. Method and system for providing substantially constant-time execution of a copy operation
US20110126197A1 (en) 2009-11-25 2011-05-26 Novell, Inc. System and method for controlling cloud and virtualized data centers in an intelligent workload management system
JP5302227B2 (en) 2010-01-19 2013-10-02 富士通テン株式会社 Image processing apparatus, image processing system, and image processing method
US8386431B2 (en) 2010-06-14 2013-02-26 Sap Ag Method and system for determining database object associated with tenant-independent or tenant-specific data, configured to store data partition, current version of the respective convertor
US9081837B2 (en) 2010-10-28 2015-07-14 Microsoft Technology Licensing, Llc Scoped database connections
US8478718B1 (en) 2010-11-16 2013-07-02 Symantec Corporation Systems and methods for replicating data in cluster environments
US8819163B2 (en) 2010-12-08 2014-08-26 Yottastor, Llc Method, system, and apparatus for enterprise wide storage and retrieval of large amounts of data
US8996463B2 (en) 2012-07-26 2015-03-31 Mongodb, Inc. Aggregation framework system architecture and method
US8554762B1 (en) 2010-12-28 2013-10-08 Amazon Technologies, Inc. Data replication framework
US20120173717A1 (en) 2010-12-31 2012-07-05 Vince Kohli Cloud*Innovator
US20120284544A1 (en) 2011-05-06 2012-11-08 Microsoft Corporation Storage Device Power Management
US8868492B2 (en) 2011-06-15 2014-10-21 Oracle International Corporation Method for maximizing throughput and minimizing transactions response times on the primary system in the presence of a zero data loss standby replica
US9769250B2 (en) 2013-08-08 2017-09-19 Architecture Technology Corporation Fight-through nodes with disposable virtual machines and rollback of persistent state
US9203900B2 (en) 2011-09-23 2015-12-01 Netapp, Inc. Storage area network attached clustered storage system
US8880477B2 (en) 2011-10-04 2014-11-04 Nec Laboratories America, Inc. Latency-aware live migration for multitenant database platforms
US9058371B2 (en) 2011-11-07 2015-06-16 Sap Se Distributed database log recovery
KR101322401B1 (en) 2012-01-31 2013-10-28 주식회사 알티베이스 Apparatus and method for parallel processing in database management system for synchronous replication
US8527462B1 (en) 2012-02-09 2013-09-03 Microsoft Corporation Database point-in-time restore and as-of query
US10635674B2 (en) 2012-09-28 2020-04-28 Oracle International Corporation Migrating a pluggable database between database server instances with minimal impact to performance
US9396220B2 (en) 2014-03-10 2016-07-19 Oracle International Corporation Instantaneous unplug of pluggable database from one container database and plug into another container database
US10922331B2 (en) 2012-09-28 2021-02-16 Oracle International Corporation Cloning a pluggable database in read-write mode
US9633216B2 (en) 2012-12-27 2017-04-25 Commvault Systems, Inc. Application of information management policies based on operation with a geographic entity
US9563655B2 (en) 2013-03-08 2017-02-07 Oracle International Corporation Zero and near-zero data loss database backup and recovery
WO2014145230A1 (en) 2013-03-15 2014-09-18 Recent Memory Incorporated Object-oriented data infrastructure
US9767178B2 (en) 2013-10-30 2017-09-19 Oracle International Corporation Multi-instance redo apply
US9817994B2 (en) 2013-10-30 2017-11-14 Oracle International Corporation System and method for integrating a database with a service deployed on a cloud platform
US9390120B1 (en) 2013-12-31 2016-07-12 Google Inc. System and methods for organizing hierarchical database replication
US10148757B2 (en) 2014-02-21 2018-12-04 Hewlett Packard Enterprise Development Lp Migrating cloud resources
US11172022B2 (en) 2014-02-21 2021-11-09 Hewlett Packard Enterprise Development Lp Migrating cloud resources
US9940203B1 (en) 2015-06-11 2018-04-10 EMC IP Holding Company LLC Unified interface for cloud-based backup and restoration
US10606578B2 (en) 2015-10-23 2020-03-31 Oracle International Corporation Provisioning of pluggable databases using a central repository
US10803078B2 (en) 2015-10-23 2020-10-13 Oracle International Corporation Ability to group multiple container databases as a single container database cluster
US10572551B2 (en) 2015-10-23 2020-02-25 Oracle International Corporation Application containers in container databases
CN108431810B (en) 2015-10-23 2022-02-01 甲骨文国际公司 Proxy database
US10430284B2 (en) 2016-06-08 2019-10-01 International Business Machines Corporation Creating a full backup image from incremental backups
US11386058B2 (en) 2017-09-29 2022-07-12 Oracle International Corporation Rule-based autonomous database cloud service framework
US11327932B2 (en) 2017-09-30 2022-05-10 Oracle International Corporation Autonomous multitenant database cloud service framework

Also Published As

Publication number Publication date
US20180075086A1 (en) 2018-03-15
US10635658B2 (en) 2020-04-28

Similar Documents

Publication Publication Date Title
US10635658B2 (en) Asynchronous shared application upgrade
CN108475271B (en) Application container of container database
US10572551B2 (en) Application containers in container databases
CN109906448B (en) Method, apparatus, and medium for facilitating operations on pluggable databases
US11068437B2 (en) Periodic snapshots of a pluggable database in a container database
CN106415536B (en) method and system for pluggable database transmission between database management systems
US11550667B2 (en) Pluggable database archive
US10789131B2 (en) Transportable backups for pluggable database relocation
US9146934B2 (en) Reduced disk space standby
US6857053B2 (en) Method, system, and program for backing up objects by creating groups of objects
US9747356B2 (en) Eager replication of uncommitted transactions
CN113396407A (en) System and method for augmenting database applications using blockchain techniques
CN108021338B (en) System and method for implementing a two-layer commit protocol
US11847034B2 (en) Database-level automatic storage management
US7720884B1 (en) Automatic generation of routines and/or schemas for database management
US10387447B2 (en) Database snapshots
US11599504B2 (en) Executing a conditional command on an object stored in a storage system
US11880495B2 (en) Processing log entries under group-level encryption
US11768853B2 (en) System to copy database client data
US20230188324A1 (en) Initialization vector handling under group-level encryption
US20230195747A1 (en) Performant dropping of snapshots by linking converter streams

Legal Events

Date Code Title Description
AS Assignment

Owner name: ORACLE INTERNATIONAL CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:YAM, PHILIP;BABY, THOMAS;KRUGLIKOV, ANDRE;AND OTHERS;REEL/FRAME:041478/0299

Effective date: 20170306

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: ADVISORY ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

FEPP Fee payment procedure

Free format text: PETITION RELATED TO MAINTENANCE FEES GRANTED (ORIGINAL EVENT CODE: PTGR); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: AWAITING TC RESP., ISSUE FEE NOT PAID

STCF Information on status: patent grant

Free format text: PATENTED CASE

CC Certificate of correction
MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4