US20160210201A9 - Techniques for backup restore and recovery of a pluggable database - Google Patents
- Publication number
- US20160210201A9 (U.S. application Ser. No. 14/135,202)
- Authority
- US
- United States
- Prior art keywords
- database
- pluggable
- pluggable database
- status
- redo
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- All classifications fall under G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06F—ELECTRIC DIGITAL DATA PROCESSING:
- G06F3/0619—Improving the reliability of storage systems in relation to data integrity, e.g. data losses, bit errors
- G06F3/065—Replication mechanisms
- G06F3/067—Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
- G06F9/544—Buffers; Shared memory; Pipes
- G06F11/1448—Management of the data involved in backup or backup restore
- G06F11/1469—Backup restoration techniques
- G06F11/1471—Saving, restoring, recovering or retrying involving logging of persistent data for recovery
- G06F12/084—Multiuser, multiprocessor or multiprocessing cache systems with a shared cache
- G06F13/1663—Access to shared memory
- G06F16/21—Design, administration or maintenance of databases
- G06F16/211—Schema design and management
- G06F16/24552—Database cache management
- G06F16/25—Integrating or interfacing systems involving database management systems
- G06F16/256—Integrating or interfacing systems involving database management systems in federated or virtual databases
- G06F16/284—Relational databases
- G06F16/9535—Search customisation based on user profiles and personalisation
- G06F17/30289
- G06F21/6218—Protecting access to data via a platform, e.g. using keys or access control rules, to a system of files or objects, e.g. local or distributed file system or database
- G06F2201/80—Database-specific techniques
- G06F2201/835—Timestamp
- G06F2212/60—Details of cache memory
- G06F2212/62—Details of cache specific to multiprocessor cache arrangements
Definitions
- the present invention relates to database systems and, more specifically, to pluggable database systems.
- Database consolidation involves distributing and sharing computer resources of a hardware platform among multiple databases.
- Important objectives of database consolidation include isolation, transportability, and fast provisioning. Isolation is the ability to limit an application's access to the appropriate database. Transportability is the ability to efficiently move databases between hosts. Fast provisioning is the ability to quickly deploy a database on a host.
- database backup and recovery may be performed on a per-database basis.
- database backup and recovery practices cannot be carried out on a per-database basis with the same behavior expected of a non-consolidated database.
- to recover a database to a restore point, the database backup is restored, and redo records are processed from the time of the database backup to the restore point.
- Recovery time is roughly proportional to the time elapsed between the database backup and the restore point.
- the restore point typically corresponds to the point the non-consolidated database was closed or otherwise made inactive.
- the redo log of the non-consolidated database does not grow when the non-consolidated database is inactive.
- in a consolidated database, the recovery time for a particular pluggable database may be unbounded, since the shared redo log grows even when the specific pluggable database is inactive.
- FIG. 1 is a block diagram depicting an embodiment of a container database and pluggable database elements
- FIG. 2A is a diagram depicting non-consolidated database backup, according to an embodiment
- FIG. 2B is a diagram depicting pluggable database backup with respect to redo logs that include an offline period, according to an embodiment
- FIG. 2C is a diagram depicting pluggable database backup with respect to redo logs that include multiple offline periods of the pluggable database, according to an embodiment
- FIG. 3 is a flowchart illustrating an embodiment of a method for bringing a pluggable database to an active status
- FIG. 4 is a flowchart illustrating an embodiment of a method for bringing a pluggable database to a clean status
- FIG. 5 is a flowchart illustrating an embodiment of a method for restoring a pluggable database
- FIG. 6 is a block diagram illustrating a computer system upon which one or more embodiments may be implemented.
- Databases may be consolidated using a container database management system.
- the container database may manage multiple pluggable databases.
- each pluggable database may be open or closed in the container database independently from other pluggable databases.
- the pluggable databases of a container database share a single set of redo records, which are maintained by the container database.
- the redo records correspond to all changes made to databases within the container database.
- the redo records are ordered in time using a shared logical clock service that assigns each redo record a logical timestamp.
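The shared redo stream described above can be sketched as follows. `LogicalClockService` and `RedoLog` are hypothetical names used only for illustration (a real implementation, e.g. Oracle's System Change Number machinery, is far more involved); the key property shown is that changes from all pluggable databases land in one stream ordered by a single logical clock:

```python
import itertools
import threading

class LogicalClockService:
    """Shared monotonic logical clock (a sketch of logical clock service 114)."""
    def __init__(self):
        self._counter = itertools.count(1)
        self._lock = threading.Lock()

    def next_timestamp(self):
        with self._lock:  # one total order across all databases in the container
            return next(self._counter)

class RedoLog:
    """Single redo stream shared by every pluggable database (sketch)."""
    def __init__(self, clock):
        self._clock = clock
        self.records = []

    def append(self, pdb_name, change):
        ts = self._clock.next_timestamp()
        self.records.append({"ts": ts, "pdb": pdb_name, "change": change})
        return ts

clock = LogicalClockService()
redo_log = RedoLog(clock)
redo_log.append("PDB_A", "update emp ...")
redo_log.append("PDB_B", "insert dept ...")
redo_log.append("PDB_A", "delete emp ...")
# Changes from different pluggable databases interleave in one
# timestamp-ordered stream.
```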
- a container database management system may be implemented as a shared-disk database.
- multiple database instances hereafter instances, may concurrently access a specific database, such as a container database and/or pluggable databases of the container database.
- a shared-disk database may use various backup restore and recovery techniques that take into account the existence of multiple instances that have access to the shared-disk database.
- the container database maintains a pluggable database status and offline range data for associated pluggable databases.
- the pluggable database status is “active” when at least one instance of the pluggable database is open in read-write mode.
- the pluggable database status is “clean” when no instances of the pluggable database are open in read-write mode.
- the offline range of a pluggable database indicates when the pluggable database was in a clean status with respect to the logical timestamp generated by the shared logical clock.
- backup restore and recovery of individual pluggable databases may be implemented using the same set of common backup restore and recovery techniques for a non-consolidated database, and the expected behavior for backup restore and recovery is achieved.
- a container database consisting of one or more pluggable databases provides in-database virtualization for consolidating multiple separate databases.
- pluggable database status and offline range data for associated pluggable databases may be stored in components of the container database and/or the individual pluggable databases, as described in further detail below.
- FIG. 1 is a block diagram depicting an embodiment of a container database and pluggable database elements.
- Container database 100 contains multiple databases that are hosted and managed by a database server.
- the container database 100 includes one or more pluggable databases 120 - 122 , and root database 102 , which are described in greater detail below.
- a container database may contain more pluggable databases than the number of pluggable databases that are depicted in FIG. 1 .
- Pluggable databases may be “plugged in” to a container database, and may be transported between database servers and/or DBMSs.
- Container database 100 allows multiple pluggable databases to run on the same database server and/or database server instance, allowing the computing resources of a single database server or instance to be shared between multiple pluggable databases.
- Container database 100 provides database isolation between its pluggable databases 120 - 122 such that users of a database session established for a pluggable database may only access or otherwise view database objects defined via the attached pluggable database dictionary corresponding to the user's database session.
- the isolation also extends to namespaces.
- Each pluggable database has its own namespace for many types of database objects.
- the name uniqueness requirement of tablespaces 128 - 130 and schemas is also only confined to individual pluggable databases 120 - 122 .
- the respective tablespace files 128 - 130 and database dictionaries 124 - 126 may be moved between environments of container databases using readily available mechanisms for copying and moving files.
- Root database 102 is a database used to globally manage container database 100 , and to store metadata and/or data for “common database objects” to manage access to pluggable databases 120 - 122 .
- a database dictionary contains metadata that defines database objects physically or logically contained in the database.
- a database dictionary is stored persistently (e.g. on disk).
- the database dictionary may be loaded into one or more data structures in volatile memory (“in-memory data structures”) that store at least a portion of metadata that is in the dictionary store.
- Root database 102 may also have its own tablespace files 112 in container database 100 .
- Root database dictionary 104 defines common database objects that are shared by pluggable databases 120 - 122 in container database 100 , such as data to administer container database 100 and pluggable databases 120 - 122 .
- root database 102 may include data that identifies pluggable databases that are plugged into container database 100 .
- SYS_TABLE 116 identifies a dictionary store that holds metadata for the associated pluggable database in a database dictionary.
- SYS_TABLE 116 of root database dictionary 104 may identify each pluggable database 120 - 122 plugged into container database, the respective database dictionaries 124 - 126 , and the respective pluggable database statuses.
- root database 102 is illustrated as a separate database within container database 100 , other architectural implementations for storing common database objects may be used.
- a container database 100 may include one or more pluggable databases 120 - 122 .
- Container database 100 is used to consolidate pluggable databases 120 - 122 .
- pluggable databases 120 - 122 share resources, they may be accessed independently, as described in further detail below.
- a user connected to a specific pluggable database is not exposed to the underlying structure utilized for database consolidation, and the specific pluggable database appears as an independent database system.
- Pluggable database A 120 includes database dictionary 124 .
- Database dictionary 124 defines database objects physically or logically contained in pluggable database A 120 .
- database dictionary 124 may be loaded in-memory. Metadata of database dictionary 124 is also stored persistently, such as in file A.DBDIC.
- Pluggable database B 122 includes database dictionary 126 .
- Database dictionary 126 defines database objects physically or logically contained in pluggable database B 122 .
- database dictionary 126 may be loaded in-memory. Metadata of database dictionary 126 is also stored persistently, such as in file B.DBDIC.
- a database dictionary of the pluggable database may be referred to herein as a pluggable database dictionary.
- a database object defined by a pluggable database dictionary that is not a common database object (e.g. not shared in container database 100 ) is referred to herein as a pluggable database object.
- a pluggable database object is defined in a pluggable database dictionary, such as database dictionary 124 , and is only available to the associated pluggable database.
- Tablespace files may include one or more data files 132 - 138 .
- one data file is stored for each tablespace of a pluggable database.
- Each data file 132 - 138 may include a header 142 - 148 comprising metadata for a corresponding data file. Metadata corresponding to data files 132 - 138 may also be otherwise stored.
- a database session comprises a particular connection established for a client to a database server, such as a database instance, through which the client issues a series of database requests.
- a pluggable database dictionary is established for a database session by a database server in response to a connection request from the user for the pluggable database. Establishing the pluggable database dictionary as a database dictionary for a database session may be referred to herein as attaching the database dictionary.
- to isolate the pluggable database objects in the one or more pluggable databases of a container database, execution of database commands issued to a database session attached to a pluggable database dictionary can only access pluggable database objects that are defined by that pluggable database dictionary.
- database dictionary 124 is attached to the database session.
- Database commands issued in the database session are executed against database dictionary 124 .
- Access to pluggable database objects, such as through DML commands issued in the database session, is isolated to pluggable database objects defined by database dictionary 124 .
- Container database 100 may handle multiple concurrently executing database sessions in this manner.
- container database 100 provides in-database virtualization such that the consolidated database architecture is not transparent to the user of the database session.
- a pluggable database status and offline range data is maintained for pluggable databases associated with a container database.
- the pluggable database status is “active” when at least one instance of the pluggable database is open in read-write mode.
- the pluggable database status is “clean” when no instances of the pluggable database are open in read-write mode.
- when the pluggable database status is “clean”, the corresponding data files of the pluggable database include all changes made in the database, i.e. all changes in memory are flushed to the pluggable database's data files by the corresponding container database.
- the pluggable database status is maintained by container database 100 .
- the pluggable database status may also include a logical timestamp associated with the most recent status change, or the logical timestamp associated therewith may be otherwise stored.
- the pluggable database status and the logical timestamp associated with the corresponding status change are stored in the data dictionary of the container database, such as root data dictionary 104 .
- the pluggable database status and the corresponding logical timestamp may be stored persistently.
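The bookkeeping above, a status of "active" or "clean", the logical timestamp of the most recent status change, and offline ranges derived from those changes, can be sketched as below. Class and method names are illustrative assumptions, not the patent's actual structures:

```python
import itertools

class PdbStatusTracker:
    """Tracks one pluggable database's status, its last status-change
    timestamp, and the offline ranges spanning its clean periods (sketch)."""
    def __init__(self, clock):
        self._clock = clock
        self.status = "clean"
        self.last_change_ts = next(clock)
        self._offline_since = self.last_change_ts
        self.offline_ranges = []  # closed ranges [(start_ts, end_ts)] while clean

    def first_instance_opens_read_write(self):
        if self.status == "clean":
            ts = next(self._clock)
            # The clean period just ended; record it as an offline range.
            self.offline_ranges.append((self._offline_since, ts))
            self.status, self.last_change_ts = "active", ts

    def last_instance_closes(self):
        if self.status == "active":
            ts = next(self._clock)
            self.status, self.last_change_ts = "clean", ts
            self._offline_since = ts

clock = itertools.count(1)             # stand-in for the shared logical clock
pdb = PdbStatusTracker(clock)          # created clean at ts 1
pdb.first_instance_opens_read_write()  # clean -> active: offline range (1, 2)
pdb.last_instance_closes()             # active -> clean at ts 3
pdb.first_instance_opens_read_write()  # clean -> active: offline range (3, 4)
```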
- Offline range data may include a complete offline range history or a portion thereof.
- the offline range of a pluggable database indicates when the pluggable database status was “clean” with respect to shared logical timestamps of the container database, such as logical timestamps generated by shared logical clock service 114 of container database 100 .
- a backup control file may include an incomplete offline range history if the pluggable database status changes after the backup is generated.
- the offline range data may be stored in the control file of the container database, such as control file 110 .
- the offline range data may also be stored in a data file header (e.g., data file headers 142 - 144 for pluggable database A 120 , or data file headers 146 - 148 for pluggable database B 122 ), or other metadata files of container database 100 or the respective pluggable database.
- the pluggable database status and the logical timestamp associated with the corresponding status change are stored in root data dictionary 104 , while the offline range data is stored in control file 110 .
- when the offline range data is needed, the information may be obtained from control file 110.
- Container database 100 includes redo log 106 .
- Redo log 106 includes one or more files that store all changes made to the database as they occur, including changes to pluggable databases 120 - 122 .
- changes are first recorded in redo log 106 before being applied to data files. If a data file needs to be restored, a backup of the data file can be loaded, and the redo records of redo log 106 may be applied, or replayed.
- the offline ranges contained in the offline range data correspond to periods when the pluggable database is not open in read-write mode.
- a portion of the redo records of redo log 106 may be skipped when a backup restore and recovery procedure is performed for one or more data files of the specific pluggable database.
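The skipping rule can be expressed as a filter over the shared redo stream: a record must be replayed only if it belongs to the pluggable database being recovered, falls after the backup point and at or before the restore point, and does not fall inside any of that database's offline ranges. This is a sketch under assumed record and field names:

```python
def records_to_apply(redo_stream, pdb_name, backup_ts, restore_ts, offline_ranges):
    """Select the redo records needed to recover one pluggable database
    (illustrative; offline ranges are treated as closed intervals)."""
    def in_offline(ts):
        return any(start <= ts <= end for start, end in offline_ranges)
    return [r for r in redo_stream
            if r["pdb"] == pdb_name
            and backup_ts < r["ts"] <= restore_ts
            and not in_offline(r["ts"])]

stream = [{"ts": t, "pdb": p, "change": c}
          for t, p, c in [(1, "A", "u1"), (2, "B", "u2"), (3, "A", "u3"),
                          (6, "A", "u4"), (9, "A", "u5")]]
# Pluggable database A was clean between timestamps 4 and 7.
todo = records_to_apply(stream, "A", backup_ts=1, restore_ts=9,
                        offline_ranges=[(4, 7)])
# ts 1 predates the backup, ts 2 belongs to another pluggable database,
# and ts 6 falls in an offline range; only ts 3 and ts 9 remain.
```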
- the recovery process has to apply all transactions, both committed and uncommitted, to a backup of the corresponding data files on disk using redo log 106.
- redo log 106 is shared between the databases of container database 100 , including pluggable databases 120 - 122 and root database 102 .
- each database instance may have an associated redo log 106 .
- Redo log 106 stores data and/or metadata (“redo records”) related to modifications performed on container database 100 , including any modifications performed on any pluggable databases 120 - 122 that are plugged into container database 100 .
- Redo log 106 includes data usable to reconstruct all changes made to container database 100 and databases contained therein. For example, a redo record may specify one or more data block(s) being modified and their respective values before and after each database change.
- Redo records may also include logical timestamp data that identifies an order in which the corresponding changes were made.
- each redo record may be associated with a logical timestamp generated by logical clock service 114 .
- the term “logical timestamp” includes any data usable to uniquely identify an order between any two logical timestamps.
- Container database 100 includes a single logical clock service 114 that generates logical timestamps for all databases in container database 100 , including pluggable databases 120 - 122 . The logical timestamps may be used to identify an order in which the corresponding database changes were made across all pluggable database instances within container database 100 .
- the logical timestamps may be based on an actual system time, a counter, or any other data that may be used to identify order.
- the logical timestamp associated with each redo record may be a System Change Number (“SCN”).
- when a change is made, the corresponding redo record includes the current logical timestamp. This produces a stream of redo changes in logical timestamp order.
- the logical timestamp may be propagated across database instances.
- redo log 106 When redo log 106 includes a plurality of ordered redo records, the redo records may be considered a stream of redo records, or a redo stream.
- An associated database server may use the redo stream to replay modifications to container database 100 , such as when a recovery is required, as will be discussed in more detail below.
- when the pluggable database status changes, a status change redo record corresponding to the pluggable database status change may also be generated.
- the status change redo record may be useful in cases such as, but not limited to: database crashes and failures, pluggable database replication on a standby or secondary system, and redundant storage of critical data within the container database system.
- Checkpoints are implemented in a variety of situations, such as, but not limited to: database shutdown, redo log changes, incrementally, and tablespace operations.
- all data files specific to the pluggable database are checkpointed.
- the data files of the pluggable database may be closed such that the pluggable database may be restored and recovered freely without worrying about interference from other instances of the pluggable database.
- Container database 100 includes control file 110 .
- a control file keeps track of database status and records the physical structure of the database.
- a control file may include a database name, names and locations of associated data files, logical timestamp information associated with the creation of the database, a current logical timestamp for the database, and checkpoint information for the database.
- At least one control file 110 is created and available for writing when container database 100 is open.
- control file 110 is shared between the databases of container database 100 , including pluggable databases 120 - 122 and root database 102 .
- control file 110 includes the pluggable database status and/or offline range data associated with each pluggable database 120 - 122 within container database 100 .
- FIG. 2A is a diagram depicting non-consolidated database backup, according to an embodiment.
- a redo log of the non-consolidated database is represented as redo stream 200 .
- Redo stream 200 is illustrated as a timeline of redo records in logical timestamp order.
- the logical clock service that generates logical timestamps is only running when the database is open. If the database is closed on all database instances, then no redo records are generated.
- the non-consolidated database is open during range 202 .
- Restore point 204 is a logical timestamp corresponding to a desired restore point of the non-consolidated database.
- Backup point 206 is a logical timestamp corresponding to a point at which a backup was taken of data files corresponding to the non-consolidated database.
- Range 208 includes logical timestamps between backup point 206 and restore point 204 when the non-consolidated database was open.
- FIG. 2B is a diagram depicting pluggable database backup, according to an embodiment.
- a redo log of a container database associated with the pluggable database is represented as redo stream 220 .
- Redo stream 220 is illustrated as a timeline of redo records, for all databases within the container database, in logical timestamp order.
- the pluggable database status is “active” during range 222 .
- Restore point 232 is the logical timestamp corresponding to a desired restore point of the pluggable database.
- Backup point 226 is a logical timestamp corresponding to the point at which a backup was taken of data files corresponding to the pluggable database.
- the backup data files may be from an individual pluggable database backup or a container database backup.
- Range 228 includes logical timestamps between backup point 226 and restore point 232 where the pluggable database status was also “active”. Redo records with logical timestamps in an offline period 230 of the pluggable database do not need to be applied because the pluggable database status was clean.
- the offline data range for the pluggable database includes range 230 . Because redo records with logical timestamps between point 224 and restore point 232 are within an offline period 230 of the pluggable database, these redo records do not need to be applied.
- FIG. 2C is a diagram depicting pluggable database backup with respect to redo logs for a container database.
- the redo logs for the container database include redo records for the pluggable database.
- a redo log of a container database associated with the pluggable database is represented as redo stream 240 .
- Redo stream 240 is illustrated as a timeline of redo records for all databases within the container database, in logical timestamp order.
- the pluggable database status is active during ranges 250 and 252 .
- the pluggable database status is clean during ranges 260 , 262 and 264 .
- Restore point 266 is the logical timestamp corresponding to a desired restore point of the pluggable database.
- Backup point 254 is a logical timestamp corresponding to the point at which a backup was taken of data files corresponding to the pluggable database.
- the backup data files may be from an individual pluggable database backup or a container database backup.
- the data files associated with backup point 254 are restored in the container database. Redo records with logical timestamps within ranges 256 and 258 are processed and applied if the changes refer to the pluggable database. Ranges 256 and 258 include logical timestamps between backup point 254 and restore point 266 where the pluggable database status is active.
- the offline data range for the pluggable database includes ranges 260 , 262 and 264 .
- the entire offline data range may not be available or current.
- the control file may be lost and a backup control file is restored, and information in the backup control file may be incomplete.
- a pluggable database status (e.g. a pluggable database status stored in the data dictionary of the container database) may be compared to the offline data range in a backup file (e.g. in the container database control file) to determine if the backup file contains a current version of the offline data range.
- if this pluggable database status includes the logical timestamp of the last status change, the logical timestamp may be used to determine whether an offline data range is current.
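One way to picture that comparison (a sketch under assumed data shapes; the disclosure does not prescribe how the timestamps are stored):

```python
# Sketch: decide whether the offline range data found in a restored backup
# file (e.g. a backup control file) is current, by comparing it against the
# logical timestamp of the last status change kept in the container
# database's data dictionary.

def offline_data_is_current(last_status_change_ts, backup_offline_ranges):
    """Current only if the backup's offline range data reflects the most
    recent status change recorded in the data dictionary."""
    if last_status_change_ts is None:
        # No status change ever recorded: nothing for the backup to miss.
        return True
    if not backup_offline_ranges:
        return False
    latest_in_backup = max(end for _start, end in backup_offline_ranges)
    return latest_in_backup >= last_status_change_ts
```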
- redo records from one or more offline periods of the pluggable database may be replayed.
- the outdated offline data range (e.g. in the control file, data file header, or other location) may be updated as these redo records are replayed.
- Other actions may be taken based on the redo records from one or more offline periods to model non-consolidated database behavior. For example, in one embodiment, when a status change redo record is replayed that corresponds to a change to the clean status, the data files corresponding to the pluggable database may be checkpointed if data file backups were also restored before recovery.
- FIG. 3 is a flowchart illustrating an embodiment of a method for detecting a pluggable database status change to an active status. The method may be performed by a process associated with a container database, such as container database 100 .
- Processing continues to decision block 304, where it is detected whether any other instance of the pluggable database is open. If it is determined that at least one other instance of the pluggable database is open, no pluggable database status change is required, and processing continues to block 312, where the method returns and/or terminates. If no pluggable database status change is required, processing may continue by processing another database operation, passing control to a calling process, generating any appropriate record or notification, returning after a method or function invocation, or terminating.
- a pluggable database status is changed to “active”.
- the pluggable database status is maintained by the corresponding container database.
- the pluggable database status change may be recorded in root data dictionary 104 of container database 100 .
- a pluggable database status maintained by a container database includes a logical timestamp indicating when the corresponding pluggable database status change occurred.
- root data dictionary 104 may include pluggable database status information for pluggable database A 120 indicating that the pluggable database status changed to "active" at a specific logical timestamp.
- the redo record indicates that the pluggable database status of the corresponding pluggable database is changed to an active pluggable database status at a corresponding logical timestamp.
- the redo record may be added to the redo log 106 of container database 100 .
- offline range data associated with the corresponding pluggable database is updated.
- the offline range data should indicate that the specific pluggable database was offline from a previous pluggable database status change until the current pluggable database status change.
- offline range data is stored and updated in control file 110 of container database 100 .
- offline range data may be stored and updated in file headers or other metadata associated with data files of the specific pluggable database. For example, when pluggable database A 120 changes to an active pluggable database status, both control file 110 and headers 142 - 144 may be updated.
- Processing continues to block 312, where the method returns and/or terminates. For example, processing may continue by processing another database operation, passing control to a calling process, generating any appropriate record or notification, returning after a method or function invocation, or terminating.
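The FIG. 3 flow above might be sketched as follows (illustrative only; the state layout, field names, and redo-record tuples are assumptions, not the disclosed implementation):

```python
# Sketch of the status change to "active": on the first read-write open,
# record the new status with its logical timestamp in the per-database
# state, append a status-change redo record to the shared redo log, and
# close out the offline range that began at the previous "clean" transition.

def on_pdb_open(state, now_ts):
    if state["open_rw_instances"] > 0:
        # Another instance is already open: no status change required.
        state["open_rw_instances"] += 1
        return state
    state["open_rw_instances"] = 1
    state["status"] = ("active", now_ts)                       # data dictionary entry
    state["redo_log"].append(("status_change", "active", now_ts))
    if state.get("last_clean_ts") is not None:                 # offline range data
        state["offline_ranges"].append((state["last_clean_ts"], now_ts))
    return state
```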
- FIG. 4 is a flowchart illustrating an embodiment of a method for detecting a pluggable database status change to a clean status. The method may be performed by a process associated with a container database, such as container database 100.
- the closing of a specific pluggable database instance is detected.
- the specific pluggable database instance is closed in normal operation.
- the method described in FIG. 4 is performed by one or more surviving instances.
- Processing continues to decision block 404, where it is detected whether the specific pluggable database instance is the last open instance of the corresponding pluggable database. If it is determined that the specific pluggable database instance is not the last open instance, no status change is required, and processing continues to block 412, where the method returns and/or terminates. For example, if no status change is required, processing may continue by processing another database operation, passing control to a calling process, generating any appropriate record or notification, returning after a method or function invocation, or terminating.
- a pluggable database status is changed to “clean”.
- the pluggable database status is maintained by the corresponding container database.
- the status change may be recorded in root data dictionary 104 of container database 100 .
- a pluggable database status maintained by a container database includes a logical timestamp indicating when the corresponding status change occurred.
- root data dictionary 104 may include status information for pluggable database A 120 indicating that the status changed to "clean" at a specific logical timestamp.
- the redo record indicates that the status of the corresponding pluggable database is changed to "clean" at a corresponding logical timestamp.
- the redo record may be added to the redo log 106 of container database 100 .
- offline range data associated with the corresponding pluggable database is updated.
- the offline range data should indicate that the specific pluggable database is offline as of this status change.
- offline range data is stored and updated in control file 110 of container database 100 .
- Processing continues to block 412, where the method returns and/or terminates. For example, processing may continue by processing another database operation, passing control to a calling process, generating any appropriate record or notification, returning after a method or function invocation, or terminating.
- when a pluggable database is transitioned to a clean status, such as by the method of FIG. 4, all data files associated with the pluggable database are checkpointed, and the data files will be closed on all database instances.
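A matching sketch for the FIG. 4 flow (again illustrative; the checkpoint callback stands in for whatever checkpointing mechanism the container database uses):

```python
# Sketch of the status change to "clean": when the last open read-write
# instance closes, record the "clean" status with its logical timestamp,
# append a status-change redo record, mark the start of a new offline
# range, and checkpoint the pluggable database's data files.

def on_pdb_close(state, now_ts, checkpoint):
    state["open_rw_instances"] -= 1
    if state["open_rw_instances"] > 0:
        return state  # not the last open instance: no status change
    state["status"] = ("clean", now_ts)
    state["redo_log"].append(("status_change", "clean", now_ts))
    state["last_clean_ts"] = now_ts  # offline range is open-ended until reopen
    checkpoint()                     # flush changes so data files are consistent
    return state
```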
- FIG. 5 is a flowchart illustrating an embodiment of a method for restoring a pluggable database.
- the method may be performed by a process associated with a container database, such as container database 100 .
- a restore logical timestamp, such as a system change number (SCN), is obtained.
- An individual pluggable database will be restored to a logical time associated with the logical timestamp.
- backup data files are loaded and restored for the pluggable database.
- the backup data files may be selected based on a backup of the individual pluggable database or a backup of the container database, where the backup is associated with a logical timestamp.
- offline range data may be stored in the control file of the container database and/or one or more header files of the data files of the pluggable database.
- the data dictionary of the container database includes a pluggable database status.
- the pluggable database status may include a logical timestamp of the last status change of the pluggable database.
- the offline range data is evaluated to determine which redo records need to be processed. Redo records between the backup point and the restore point are processed unless they fall within an offline range of the pluggable database.
- the redo log of the container database may contain redo records for other pluggable databases in addition to redo records for the current pluggable database.
- the redo record is processed by determining whether the change contained therein is relevant to the current pluggable database, in which case the redo record is applied.
- processing continues to decision block 512 , where it is determined whether the current redo record indicates a pluggable database status change to “clean”. If the redo record indicates a pluggable database status change to “clean”, processing continues to step 514 , where a checkpoint is generated. Otherwise, processing returns to decision block 508 .
- the checkpoint is generated when the pluggable database status changes to clean to emulate non-consolidated database behavior when a non-consolidated database is closed.
- processing continues to block 516, where the method returns and/or terminates. For example, processing may continue by processing another database operation, passing control to a calling process, generating any appropriate record or notification, returning after a method or function invocation, or terminating.
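Putting the FIG. 5 steps together, the recovery loop might look like this sketch (record shapes and helper names are assumptions; boundary timestamps are treated as online so that status-change records at the edges of an offline period are replayed):

```python
# Sketch of per-pluggable-database recovery over a shared redo stream:
# after restoring the backup data files, replay records between the backup
# point and the restore point, skipping offline periods, applying only
# records that belong to the pluggable database being recovered, and
# generating a checkpoint when a replayed status change moves to "clean".

def recover_pdb(redo_stream, pdb_id, backup_point, restore_point,
                offline_ranges, apply_record, checkpoint):
    def in_offline_period(ts):
        return any(start < ts < end for start, end in offline_ranges)

    for record in redo_stream:
        ts = record["ts"]
        if not (backup_point < ts <= restore_point) or in_offline_period(ts):
            continue
        if record["pdb"] != pdb_id:
            continue  # shared redo log: record belongs to another database
        apply_record(record)
        if record.get("kind") == "status_change" and record.get("to") == "clean":
            checkpoint()  # emulate closing a non-consolidated database
```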
- Embodiments of the present invention are used in the context of database management systems (DBMSs). Therefore, a description of a DBMS is useful.
- a DBMS manages a database.
- a DBMS may comprise one or more database servers.
- a database comprises database data and a database dictionary that are stored on a persistent memory mechanism, such as a set of hard disks.
- Database data may be stored in one or more data containers. Each container contains records. The data within each record is organized into one or more fields.
- the data containers are referred to as tables, the records are referred to as rows, and the fields are referred to as columns.
- object-oriented databases the data containers are referred to as object classes, the records are referred to as objects, and the fields are referred to as attributes.
- Other database architectures may use other terminology.
- Users interact with a database server of a DBMS by submitting to the database server commands that cause the database server to perform operations on data stored in a database.
- a user may be one or more applications running on a client computer that interact with a database server. Multiple users may also be referred to herein collectively as a user.
- a database command may be in the form of a database statement that conforms to a database language.
- a database language for expressing the database commands is the Structured Query Language (SQL).
- SQL/XML is a common extension of SQL used when manipulating XML data in an object-relational database.
- SELECT, INSERT, UPDATE, and DELETE are common examples of data manipulation language (DML) instructions found in some SQL implementations.
- a multi-node database management system is made up of interconnected nodes that share access to the same database.
- the nodes are interconnected via a network and share access, in varying degrees, to shared storage, e.g. shared access to a set of disk drives and data blocks stored thereon.
- the nodes in a multi-node database system may be in the form of a group of computers (e.g. work stations, personal computers) that are interconnected via a network.
- the nodes may be the nodes of a grid, which is composed of nodes in the form of server blades interconnected with other server blades on a rack.
- Each node in a multi-node database system hosts a database server.
- a server, such as a database server, is a combination of integrated software components and an allocation of computational resources, such as memory, a node, and processes on the node for executing the integrated software components on a processor, the combination of the software and computational resources being dedicated to performing a particular function on behalf of one or more clients.
- Resources from multiple nodes in a multi-node database system can be allocated to running a particular database server's software.
- Each combination of the software and allocation of resources from a node is a server that is referred to herein as a “server instance” or “instance”.
- a database server may comprise multiple database instances, some or all of which are running on separate computers, including separate server blades.
- the techniques described herein are implemented by one or more special-purpose computing devices.
- the special-purpose computing devices may be hard-wired to perform the techniques, or may include digital electronic devices such as one or more application-specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs) that are persistently programmed to perform the techniques, or may include one or more general purpose hardware processors programmed to perform the techniques pursuant to program instructions in firmware, memory, other storage, or a combination.
- Such special-purpose computing devices may also combine custom hard-wired logic, ASICs, or FPGAs with custom programming to accomplish the techniques.
- the special-purpose computing devices may be desktop computer systems, portable computer systems, handheld devices, networking devices or any other device that incorporates hard-wired and/or program logic to implement the techniques.
- FIG. 6 is a block diagram that illustrates a computer system 600 upon which an embodiment of the invention may be implemented.
- Computer system 600 includes a bus 602 or other communication mechanism for communicating information, and a hardware processor 604 coupled with bus 602 for processing information.
- Hardware processor 604 may be, for example, a general purpose microprocessor.
- Computer system 600 also includes a main memory 606 , such as a random access memory (RAM) or other dynamic storage device, coupled to bus 602 for storing information and instructions to be executed by processor 604 .
- Main memory 606 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 604 .
- Such instructions, when stored in non-transitory storage media accessible to processor 604, render computer system 600 into a special-purpose machine that is customized to perform the operations specified in the instructions.
- Computer system 600 further includes a read only memory (ROM) 608 or other static storage device coupled to bus 602 for storing static information and instructions for processor 604 .
- a storage device 610, such as a magnetic disk or optical disk, is provided and coupled to bus 602 for storing information and instructions.
- Computer system 600 may be coupled via bus 602 to a display 612 , such as a cathode ray tube (CRT), for displaying information to a computer user.
- An input device 614 is coupled to bus 602 for communicating information and command selections to processor 604 .
- Another type of user input device is cursor control 616, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 604 and for controlling cursor movement on display 612.
- This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane.
- Computer system 600 may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computer system causes or programs computer system 600 to be a special-purpose machine. According to one embodiment, the techniques herein are performed by computer system 600 in response to processor 604 executing one or more sequences of one or more instructions contained in main memory 606 . Such instructions may be read into main memory 606 from another storage medium, such as storage device 610 . Execution of the sequences of instructions contained in main memory 606 causes processor 604 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions.
- Non-volatile media includes, for example, optical or magnetic disks, such as storage device 610 .
- Volatile media includes dynamic memory, such as main memory 606 .
- Common forms of storage media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, any other memory chip or cartridge.
- Storage media is distinct from but may be used in conjunction with transmission media.
- Transmission media participates in transferring information between storage media.
- transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus 602 .
- transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.
- Various forms of media may be involved in carrying one or more sequences of one or more instructions to processor 604 for execution.
- the instructions may initially be carried on a magnetic disk or solid state drive of a remote computer.
- the remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem.
- a modem local to computer system 600 can receive the data on the telephone line and use an infra-red transmitter to convert the data to an infra-red signal.
- An infra-red detector can receive the data carried in the infra-red signal and appropriate circuitry can place the data on bus 602 .
- Bus 602 carries the data to main memory 606 , from which processor 604 retrieves and executes the instructions.
- the instructions received by main memory 606 may optionally be stored on storage device 610 either before or after execution by processor 604 .
- Computer system 600 also includes a communication interface 618 coupled to bus 602 .
- Communication interface 618 provides a two-way data communication coupling to a network link 620 that is connected to a local network 622 .
- communication interface 618 may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line.
- communication interface 618 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN.
- Wireless links may also be implemented.
- communication interface 618 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
- Network link 620 typically provides data communication through one or more networks to other data devices.
- network link 620 may provide a connection through local network 622 to a host computer 624 or to data equipment operated by an Internet Service Provider (ISP) 626 .
- ISP 626 in turn provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet” 628 .
- Internet 628 uses electrical, electromagnetic or optical signals that carry digital data streams.
- the signals through the various networks and the signals on network link 620 and through communication interface 618 which carry the digital data to and from computer system 600 , are example forms of transmission media.
- Computer system 600 can send messages and receive data, including program code, through the network(s), network link 620 and communication interface 618 .
- a server 630 might transmit a requested code for an application program through Internet 628 , ISP 626 , local network 622 and communication interface 618 .
- the received code may be executed by processor 604 as it is received, and/or stored in storage device 610 , or other non-volatile storage for later execution.
Abstract
A container database stores redo records and logical timestamps for multiple pluggable databases. When it is detected that a first read-write instance of the pluggable database is opened and no other read-write instances of the pluggable database are open, offline range data associated with the pluggable database is updated. When it is detected that a second read-write instance of the pluggable database is closed, and the second read-write instance is the last open read-write instance, the offline range data associated with the pluggable database is updated. The pluggable database is restored to a logical timestamp associated with a restore request based on the offline range data.
Description
- This application claims benefit as a continuation-in-part of U.S. patent application Ser. No. 13/830,349, filed Mar. 14, 2013, which claims benefit of U.S. Provisional Application No. 61/707,726, filed Sep. 28, 2012, the entire contents of which are hereby incorporated by reference as if fully set forth herein.
- The present invention relates to database systems and, more specifically, to pluggable database systems.
- The approaches described in this section are approaches that could be pursued, but not necessarily approaches that have been previously conceived or pursued. Therefore, unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section.
- Database consolidation involves distributing and sharing computer resources of a hardware platform among multiple databases. Important objectives of database consolidation include isolation, transportability, and fast provisioning. Isolation is the ability to limit an application's access to the appropriate database. Transportability is the ability to efficiently move databases between hosts. Fast provisioning is the ability to quickly deploy a database on a host.
- In non-consolidated databases, database backup and recovery may be performed on a per-database basis. However, in a consolidated database or other in-database virtualizations capable of consolidating multiple databases, traditional database backup and recovery practices cannot be carried out on a per-database basis with the same behavior expected of a non-consolidated database.
- For example, when a database is restored to a restore point from a database backup, the database backup is recovered, and redo records are processed from the time of the database backup to the restore point. Recovery time is roughly proportional to the time elapsed between the database backup and the restore point. In a non-consolidated database, the restore point typically corresponds to the point the non-consolidated database was closed or otherwise made inactive. The redo log of the non-consolidated database does not grow when the non-consolidated database is inactive. In a consolidated database environment that implements a shared redo log, the recovery time for a particular database may be unbounded since the redo log grows even when the specific pluggable database is inactive.
- Discussed herein are approaches for database backup and recovery on a per-database basis in a consolidated database system.
- In the drawings:
FIG. 1 is a block diagram depicting an embodiment of a container database and pluggable database elements;
FIG. 2A is a diagram depicting non-consolidated database backup, according to an embodiment;
FIG. 2B is a diagram depicting pluggable database backup with respect to redo logs that include an offline period, according to an embodiment;
FIG. 2C is a diagram depicting pluggable database backup with respect to redo logs that include multiple offline periods of the pluggable database, according to an embodiment;
FIG. 3 is a flowchart illustrating an embodiment of a method for bringing a pluggable database to an active status;
FIG. 4 is a flowchart illustrating an embodiment of a method for bringing a pluggable database to a clean status;
FIG. 5 is a flowchart illustrating an embodiment of a method for restoring a pluggable database;
FIG. 6 is a block diagram that illustrates a computer system upon which one or more embodiments may be implemented.
- In the following description, for the purpose of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the present invention.
- Techniques are described herein for backup restore and recovery of pluggable databases in a container database.
- Databases may be consolidated using a container database management system. The container database may manage multiple pluggable databases. In a container database management system, each pluggable database may be open or closed in the container database independently from other pluggable databases. The pluggable databases of a container database share a single set of redo records, which are maintained by the container database. The redo records correspond to all changes made to databases within the container database. The redo records are ordered in time using a shared logical clock service that assigns each redo record a logical timestamp.
- A container database management system may be implemented as a shared-disk database. In a shared-disk database, multiple database instances, hereafter instances, may concurrently access a specific database, such as a container database and/or pluggable databases of the container database. A shared-disk database may use various backup restore and recovery techniques that take into account the existence of multiple instances that have access to the shared-disk database.
- To facilitate backup restore and recovery of a pluggable database within a container database, the container database maintains a pluggable database status and offline range data for associated pluggable databases. The pluggable database status is “active” when at least one instance of the pluggable database is open in read-write mode. The pluggable database status is “clean” when no instances of the pluggable database are open in read-write mode. The offline range of a pluggable database indicates when the pluggable database was in a clean status with respect to the logical timestamp generated by the shared logical clock.
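As a minimal data model, the bookkeeping described above might be represented like this (purely illustrative; in the embodiments described here these values live in the root data dictionary, the control file, and data file headers, not in an in-memory object):

```python
# Sketch: per-pluggable-database status ("active" when at least one
# read-write instance is open, "clean" when none is) plus the offline
# ranges, i.e. the logical-timestamp intervals during which the status
# was "clean".

from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class PluggableDbStatus:
    status: str = "clean"                  # "active" or "clean"
    last_change_ts: Optional[int] = None   # logical timestamp of last change
    offline_ranges: List[Tuple[int, int]] = field(default_factory=list)

    def was_offline_at(self, ts: int) -> bool:
        return any(start <= ts <= end for start, end in self.offline_ranges)
```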
- When a specific pluggable database is recovered based on a backup, the number of redo records that need to be processed is limited based on the pluggable database status and the offline range data. In this manner, backup restore and recovery of individual pluggable databases may be implemented using the same set of common backup restore and recovery techniques for a non-consolidated database, and the expected behavior for backup restore and recovery is achieved.
- A container database consisting of one or more pluggable databases provides in-database virtualization for consolidating multiple separate databases. To facilitate backup restore and recovery of individual pluggable databases of the container database, pluggable database status and offline range data for associated pluggable databases may be stored in components of the container database and/or the individual pluggable databases, as described in further detail below.
- FIG. 1 is a block diagram depicting an embodiment of a container database and pluggable database elements. Container database 100 contains multiple databases that are hosted and managed by a database server. The container database 100 includes one or more pluggable databases 120-122, and root database 102, which are described in greater detail below. A container database may contain more pluggable databases than the number of pluggable databases that are depicted in FIG. 1.
- Pluggable databases may be "plugged in" to a container database, and may be transported between database servers and/or DBMSs. Container database 100 allows multiple pluggable databases to run on the same database server and/or database server instance, allowing the computing resources of a single database server or instance to be shared between multiple pluggable databases.
- Container database 100 provides database isolation between its pluggable databases 120-122 such that users of a database session established for a pluggable database may only access or otherwise view database objects defined via the attached pluggable database dictionary corresponding to the user's database session. The isolation also extends to namespaces. Each pluggable database has its own namespace for more types of database objects. With respect to each pluggable database 120-122 in a container database 100 hosted on a database server, the name uniqueness requirement of tablespaces 128-130 and schemas is also only confined to individual pluggable databases 120-122. The respective tablespace files 128-130 and database dictionaries 124-126 may be moved between environments of container databases using readily available mechanisms for copying and moving files.
- Root database 102 is a database used to globally manage container database 100, and to store metadata and/or data for "common database objects" to manage access to pluggable databases 120-122.
- A database dictionary contains metadata that defines database objects physically or logically contained in the database. A database dictionary is stored persistently (e.g. on disk). When a database server is running, the database dictionary may be loaded into one or more data structures in volatile memory ("in-memory data structures") that store at least a portion of metadata that is in the dictionary store.
- The database dictionary corresponding to root
database 102 is a root database dictionary 104. Root database 102 may also have its own tablespace files 112 in container database 100. Root database dictionary 104 defines common database objects that are shared by pluggable databases 120-122 in container database 100, such as data to administer container database 100 and pluggable databases 120-122. For example, root database 102 may include data that identifies pluggable databases that are plugged into container database 100. In one embodiment, SYS_TABLE 116 identifies a dictionary store that holds metadata for the associated pluggable database in a database dictionary. For example, SYS_TABLE 116 of root database dictionary 104 may identify each pluggable database 120-122 plugged into container database 100, the respective database dictionaries 124-126, and the respective pluggable database statuses. - Although
root database 102 is illustrated as a separate database within container database 100, other architectural implementations for storing common database objects may be used. - A
container database 100 may include one or more pluggable databases 120-122. Container database 100 is used to consolidate pluggable databases 120-122. Although pluggable databases 120-122 share resources, they may be accessed independently, as described in further detail below. In one embodiment, a user connected to a specific pluggable database is not exposed to the underlying structure utilized for database consolidation, and the specific pluggable database appears as an independent database system. -
Pluggable database A 120 includes database dictionary 124. Database dictionary 124 defines database objects physically or logically contained in pluggable database A 120. When pluggable database A 120 is open, database dictionary 124 may be loaded in-memory. Metadata of database dictionary 124 is also stored persistently, such as in file A.DBDIC. -
Pluggable database B 122 includes database dictionary 126. Database dictionary 126 defines database objects physically or logically contained in pluggable database B 122. When pluggable database B 122 is open, database dictionary 126 may be loaded in-memory. Metadata of database dictionary 126 is also stored persistently, such as in file B.DBDIC. - A database dictionary of the pluggable database may be referred to herein as a pluggable database dictionary. A database object defined by a pluggable database dictionary that is not a common database object (e.g. not shared in container database 100) is referred to herein as a pluggable database object. A pluggable database object is defined in a pluggable database dictionary, such as
database dictionary 124, and is only available to the associated pluggable database. - Data for pluggable database objects are stored in the corresponding tablespace files (e.g. tablespace files 128 for
pluggable database A 120 and tablespace files 130 for pluggable database B 122). Tablespace files may include one or more data files 132-138. In one embodiment, one data file is stored for each tablespace of a pluggable database. Each data file 132-138 may include a header 142-148 comprising metadata for a corresponding data file. Metadata corresponding to data files 132-138 may also be otherwise stored. - A database session comprises a particular connection established for a client to a database server, such as a database instance, through which the client issues a series of database requests. A pluggable database dictionary is established for a database session by a database server in response to a connection request from the user for the pluggable database. Establishing the pluggable database dictionary as a database dictionary for a database session may be referred to herein as attaching the database dictionary. With respect to the pluggable database objects in the one or more pluggable databases of a container database, execution of database commands issued to a database session attached to a pluggable database dictionary can only access pluggable database objects that are defined by the pluggable database dictionary.
- For example, in response to a connection request for access to
pluggable database A 120, database dictionary 124 is attached to the database session. Database commands issued in the database session are executed against database dictionary 124. Access to pluggable database objects, such as through DML commands issued in the database session, is isolated to pluggable database objects defined by database dictionary 124. Container database 100 may handle multiple concurrently executing database sessions in this manner. In one embodiment, container database 100 provides in-database virtualization such that the consolidated database architecture is transparent to the user of the database session. - To facilitate backup restore and recovery, a pluggable database status and offline range data are maintained for pluggable databases associated with a container database. The pluggable database status is “active” when at least one instance of the pluggable database is open in read-write mode. The pluggable database status is “clean” when no instances of the pluggable database are open in read-write mode. In one embodiment, when the pluggable database status is “clean”, the corresponding data files of the pluggable database include all changes made in the database, i.e. all the changes in memory are flushed to the pluggable database's data files by the corresponding container database.
- For each pluggable database 120-122 of
container database 100, the pluggable database status is maintained by container database 100. The pluggable database status may also include a logical timestamp associated with the most recent status change, or the logical timestamp associated therewith may be otherwise stored. In one embodiment, the pluggable database status and the logical timestamp associated with the corresponding status change are stored in the data dictionary of the container database, such as root data dictionary 104. The pluggable database status and the corresponding logical timestamp may be stored persistently.
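The status bookkeeping described above can be sketched as follows. This sketch is illustrative only: the class and field names (PluggableStatus, change_scn, open_instances) are invented for the example and are not part of the described embodiments.

```python
from dataclasses import dataclass

@dataclass
class PluggableStatus:
    """Per-pluggable-database status, as the container database might
    maintain it in the root data dictionary (illustrative model)."""
    status: str = "clean"      # "active" or "clean"
    change_scn: int = 0        # logical timestamp of the most recent status change
    open_instances: int = 0    # instances currently open in read-write mode

    def open_instance(self, scn: int) -> bool:
        """Register an instance open; returns True on a clean -> active change."""
        self.open_instances += 1
        if self.open_instances == 1:
            self.status, self.change_scn = "active", scn
            return True
        return False

    def close_instance(self, scn: int) -> bool:
        """Register an instance close; returns True on an active -> clean change."""
        self.open_instances -= 1
        if self.open_instances == 0:
            self.status, self.change_scn = "clean", scn
            return True
        return False

# The status only changes on the first open and the last close:
pdb_a = PluggableStatus()
assert pdb_a.open_instance(scn=100) is True    # first read-write open: clean -> active
assert pdb_a.open_instance(scn=110) is False   # second open: no status change
assert pdb_a.close_instance(scn=120) is False  # one instance still open
assert pdb_a.close_instance(scn=130) is True   # last close: active -> clean
assert pdb_a.status == "clean" and pdb_a.change_scn == 130
```

Persisting the logical timestamp of the last status change alongside the status itself is what later allows a restored backup control file to be recognized as out of date.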
- The offline range of a pluggable database indicates when the pluggable database status was “clean” with respect to shared logical timestamps of the container database, such as logical timestamps generated by shared
logical clock service 114 of container database 100. Offline range data may include a complete offline range history or a portion thereof. For example, a backup control file may include an incomplete offline range history if the pluggable database status changes after the backup is generated. - The offline range data may be stored in the control file of the container database, such as
control file 110. Alternatively and/or in addition, the offline range data may also be stored in a data file header (e.g., data file headers 142-144 for pluggable database A 120, or data file headers 146-148 for pluggable database B 122), or other metadata files of container database 100 or the respective pluggable database. - In one embodiment, the pluggable database status and the logical timestamp associated with the corresponding status change are stored in
root data dictionary 104, while the offline range data is stored in control file 110. In cases where the root data dictionary 104 is not accessible, such as when backup restore and recovery is performed for the entire consolidated database, the information may be obtained from control file 110. -
Container database 100 includes redo log 106. Redo log 106 includes one or more files that store all changes made to the database as they occur, including changes to pluggable databases 120-122. In one embodiment, before database changes are written to file, such as data files 132-138, these changes are first recorded in redo log 106. If a data file needs to be restored, a backup of the data file can be loaded, and redo records of redo log 106 may be applied, or replayed. The offline ranges contained in the offline range data correspond to periods when the pluggable database is not open in read-write mode. Based on one or more offline ranges contained in the offline range data for a specific pluggable database, a portion of the redo records of redo log 106 may be skipped when a backup restore and recovery procedure is performed for one or more data files of the specific pluggable database. In the case of a pluggable database crash, the recovery process must apply all transactions, both uncommitted and committed, to a backup of the corresponding data files on disk using redo log files 106. - Within
container database 100, redo log 106 is shared between the databases of container database 100, including pluggable databases 120-122 and root database 102. In a multi-instance database, each database instance may have an associated redo log 106. - Redo log 106 stores data and/or metadata (“redo records”) related to modifications performed on
container database 100, including any modifications performed on any pluggable databases 120-122 that are plugged into container database 100. Redo log 106 includes data usable to reconstruct all changes made to container database 100 and databases contained therein. For example, a redo record may specify one or more data block(s) being modified and their respective values before and after each database change. - Redo records may also include logical timestamp data that identifies an order in which the corresponding changes were made. For example, each redo record may be associated with a logical timestamp generated by
logical clock service 114. As used herein, the term “logical timestamp” includes any data usable to uniquely identify an order between any two logical timestamps. Container database 100 includes a single logical clock service 114 that generates logical timestamps for all databases in container database 100, including pluggable databases 120-122. The logical timestamps may be used to identify an order in which the corresponding database changes were made across all pluggable database instances within container database 100. The logical timestamps may be based on an actual system time, a counter, or any other data that may be used to identify order. For example, the logical timestamp associated with each redo record may be a System Change Number (“SCN”). In one embodiment, for each change to container database 100, the corresponding redo record includes the current logical timestamp. This produces a stream of redo changes in logical timestamp order. In a multi-instance database environment, such as Oracle Real Application Clusters (“RAC”), the logical timestamp may be propagated across database instances. - When
redo log 106 includes a plurality of ordered redo records, the redo records may be considered a stream of redo records, or a redo stream. An associated database server may use the redo stream to replay modifications to container database 100, such as when a recovery is required, as will be discussed in more detail below. - When the pluggable database status changes, a status change redo record corresponding to the pluggable database status change may also be generated. The status change redo record may be useful in cases such as, but not limited to: database crashes and failures, pluggable database replication on a standby or secondary system, and redundant storage of critical data within the container database system.
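The single shared logical clock described above can be sketched as one monotonic counter that every database in the container draws from. The names here (LogicalClockService, next_scn) are invented for this example and do not reflect any actual service interface.

```python
import itertools

class LogicalClockService:
    """One clock per container database: every redo record, regardless of
    which pluggable database it belongs to, takes its SCN from the same
    counter, so the redo stream is totally ordered (illustrative sketch)."""

    def __init__(self, start: int = 1):
        self._counter = itertools.count(start)

    def next_scn(self) -> int:
        return next(self._counter)

clock = LogicalClockService()
redo_stream = [
    {"scn": clock.next_scn(), "pdb": "A", "change": "insert"},
    {"scn": clock.next_scn(), "pdb": "B", "change": "update"},
    {"scn": clock.next_scn(), "pdb": "A", "change": "delete"},
]
# SCNs increase across pluggable databases, yielding a single redo stream
# in logical timestamp order:
scns = [r["scn"] for r in redo_stream]
assert scns == sorted(scns) and len(set(scns)) == len(scns)
```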
- At a checkpoint, buffers are written to data files. Checkpoints are implemented in a variety of situations, such as, but not limited to: database shutdown, redo log changes, incremental checkpoints, and tablespace operations. In one embodiment, when the pluggable database status becomes clean after closing all instances that were open in read-write mode, all data files specific to the pluggable database are checkpointed. Furthermore, the data files of the pluggable database may be closed such that the pluggable database may be restored and recovered freely, without interference from other instances of the pluggable database.
-
Container database 100 includes control file 110. A control file keeps track of database status and records the physical structure of the database. For example, a control file may include a database name, names and locations of associated data files, logical timestamp information associated with the creation of the database, a current logical timestamp for the database, and checkpoint information for the database. At least one control file 110 is created and available for writing when container database 100 is open. Within container database 100, control file 110 is shared between the databases of container database 100, including pluggable databases 120-122 and root database 102. In one embodiment, control file 110 includes the pluggable database status and/or offline range data associated with each pluggable database 120-122 within container database 100. -
FIG. 2A is a diagram depicting non-consolidated database backup, according to an embodiment. A redo log of the non-consolidated database is represented as redo stream 200. Redo stream 200 is illustrated as a timeline of redo records in logical timestamp order. In a non-consolidated database, the logical clock service that generates logical timestamps is only running when the database is open. If the database is closed on all database instances, then no redo records are generated. The non-consolidated database is open during range 202. Restore point 204 is a logical timestamp corresponding to a desired restore point of the non-consolidated database. Backup point 206 is a logical timestamp corresponding to a point at which a backup was taken of data files corresponding to the non-consolidated database. To recover the non-consolidated database to restore point 204 based on the backup of data files taken at backup point 206, the data files associated with backup point 206 are restored in the non-consolidated database, and redo records with logical timestamps within range 208 are applied. Range 208 includes logical timestamps between backup point 206 and restore point 204 when the non-consolidated database was open. -
FIG. 2B is a diagram depicting pluggable database backup, according to an embodiment. A redo log of a container database associated with the pluggable database is represented as redo stream 220. Redo stream 220 is illustrated as a timeline of redo records, for all databases within the container database, in logical timestamp order. The pluggable database status is “active” during range 222. Restore point 232 is the logical timestamp corresponding to a desired restore point of the pluggable database. Backup point 226 is a logical timestamp corresponding to the point at which a backup was taken of data files corresponding to the pluggable database. The backup data files may be from an individual pluggable database backup or a container database backup. To recover the pluggable database to restore point 232 based on the backup of data files taken at backup point 226, the data files associated with backup point 226 are restored. Redo records with logical timestamps within range 228 are processed and applied if the changes contained therein are relevant to the pluggable database. Range 228 includes logical timestamps between backup point 226 and restore point 232 where the pluggable database status was also “active”. Redo records with logical timestamps in an offline period 230 of the pluggable database do not need to be applied because the pluggable database status was clean. The offline data range for the pluggable database includes range 230. Because redo records with logical timestamps between point 224 and restore point 232 are within an offline period 230 of the pluggable database, these redo records do not need to be applied. -
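The recovery behavior depicted in FIG. 2B — restore the backup, then replay only the redo between the backup point and the restore point that does not fall in an offline range — can be sketched as a filter over the shared redo stream. The function name and record shapes below are invented for this example.

```python
def records_to_apply(redo_stream, backup_scn, restore_scn, offline_ranges, pdb):
    """Selects the redo records that must be replayed to roll one pluggable
    database forward from a backup at backup_scn to restore_scn. Records are
    (scn, pdb_name) pairs; offline_ranges holds inclusive (start, end) SCN
    intervals during which the pluggable database status was "clean"."""
    def in_offline_range(scn):
        return any(lo <= scn <= hi for lo, hi in offline_ranges)
    return [
        (scn, owner) for scn, owner in redo_stream
        if owner == pdb                        # relevant to this pluggable database
        and backup_scn < scn <= restore_scn    # between backup and restore points
        and not in_offline_range(scn)          # skip "clean" periods entirely
    ]

stream = [(5, "A"), (12, "A"), (18, "B"), (25, "A"), (33, "A")]
# PDB A was clean between SCN 20 and 30, so the record at SCN 25 (contrived
# here for illustration) is skipped, as are records before the backup point
# and records belonging to other databases:
assert records_to_apply(stream, backup_scn=10, restore_scn=40,
                        offline_ranges=[(20, 30)], pdb="A") == [(12, "A"), (33, "A")]
```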
FIG. 2C is a diagram depicting pluggable database backup with respect to redo logs for a container database. The redo logs for the container database include redo records for the pluggable database. A redo log of a container database associated with the pluggable database is represented as redo stream 240. Redo stream 240 is illustrated as a timeline of redo records for all databases within the container database, in logical timestamp order. The pluggable database status alternates between “active” and “clean” over several ranges of the redo stream. Restore point 266 is the logical timestamp corresponding to a desired restore point of the pluggable database. Backup point 254 is a logical timestamp corresponding to the point at which a backup was taken of data files corresponding to the pluggable database. The backup data files may be from an individual pluggable database backup or a container database backup. To recover the pluggable database to restore point 266 based on the backup of data files taken at backup point 254, the data files associated with backup point 254 are restored in the container database. Redo records with logical timestamps in the active ranges between backup point 254 and restore point 266 are processed and applied if the changes contained therein are relevant to the pluggable database. Redo records with logical timestamps in an offline period of the pluggable database do not need to be applied, because the pluggable database status was clean during those ranges. - Use of Status Change Redo Records when Offline Data Range is Incomplete
- In one embodiment, the entire offline data range may not be available or current. For example, in the case of media failure, the control file may be lost and a backup control file is restored, and information in the backup control file may be incomplete. In such cases, a pluggable database status (e.g. a pluggable database status stored in the data dictionary of the container database) may be compared to the offline data range in a backup file (e.g. in the container database control file) to determine if the backup file contains a current version of the offline data range. For example, when this pluggable database status includes the logical timestamp of the last status change, the logical timestamp may be used to determine whether an offline data range is current.
- In one embodiment, if the offline data range is not current, such as in the control file or in a data file header, redo records from one or more offline periods of the pluggable database may be replayed. In this case, when a status change redo record is encountered, the outdated offline data range (e.g. in the control file, data file header, or other location) may be updated as the redo records are processed and/or applied. Other actions may be taken based on the redo records from one or more offline periods to model non-consolidated database behavior. For example, in one embodiment, when a status change redo record is replayed that corresponds to a change to the clean status, the data files corresponding to the pluggable database may be checkpointed if data file backups were also restored before recovery.
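Rebuilding an outdated offline range history from status change redo records, as described above, might look like the following sketch; the record and interval representations are assumptions made for the example.

```python
def refresh_offline_ranges(known_ranges, status_redo_records):
    """Replays status change redo records over an offline-range history that
    may be stale (e.g. one restored from a backup control file). Each record
    is a (scn, new_status) pair: a "clean" record opens an offline range and
    the next "active" record closes it. Illustrative sketch only."""
    ranges = list(known_ranges)
    open_since = None
    for scn, new_status in sorted(status_redo_records):
        if new_status == "clean":
            open_since = scn
        elif new_status == "active" and open_since is not None:
            ranges.append((open_since, scn))
            open_since = None
    if open_since is not None:
        ranges.append((open_since, None))   # still offline at end of redo stream
    return ranges

# A backup control file knows only the first offline range; status change
# redo records written after the backup supply the missing one:
assert refresh_offline_ranges([(10, 20)],
                              [(35, "clean"), (50, "active")]) == [(10, 20), (35, 50)]
```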
-
FIG. 3 is a flowchart illustrating an embodiment of a method for detecting a pluggable database status change to an active status. The method may be performed by a process associated with a container database, such as container database 100. - At
block 302, the opening of a pluggable database instance is detected. - Processing continues to decision block 304, where it is detected whether any other instance of the pluggable database is open. If it is determined that at least one prior instance of the pluggable database is open, no pluggable database status change is required, and processing continues to block 312, where the method returns and/or terminates. If no pluggable database status change is required, processing may continue by processing another database operation, passing control to a calling process, generating any appropriate record or notification, returning after a method or function invocation, or terminating.
- Returning to decision block 304, if it is determined that there are no prior instances of the pluggable database open, processing continues to block 306, where a pluggable database status is changed to “active”. The pluggable database status is maintained by the corresponding container database. For example, the pluggable database status change may be recorded in
root data dictionary 104 of container database 100. In one embodiment, a pluggable database status maintained by a container database includes a logical timestamp indicating when the corresponding pluggable database status change occurred. For example, root data dictionary 104 may include pluggable database status information for pluggable database A 120 indicating that the pluggable database status changed to “active” at a specific logical timestamp. - Processing continues to block 308, where a redo record is generated. The redo record indicates that the pluggable database status of the corresponding pluggable database is changed to an active pluggable database status at a corresponding logical timestamp. The redo record may be added to the redo log 106 of
container database 100. - Processing continues to block 310, where offline range data associated with the corresponding pluggable database is updated. When a specific pluggable database status becomes “active”, the offline range data should indicate that the specific pluggable database was offline from a previous pluggable database status change until the current pluggable database status change. In one embodiment, offline range data is stored and updated in control file 110 of
container database 100. Alternatively and/or in addition, offline range data may be stored and updated in file headers or other metadata associated with data files of the specific pluggable database. For example, when pluggable database A 120 changes to an active pluggable database status, both control file 110 and headers 142-144 may be updated. - Processing continues to block 312, where the method returns and/or terminates. For example, processing may continue by processing another database operation, passing control to a calling process, generating any appropriate record or notification, returning after a method or function invocation, or terminating.
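Blocks 304-312 of FIG. 3 can be sketched as a single handler. The data structures here (a redo log list, and an offline-range list whose open-ended last entry means "offline since that SCN") are invented for illustration; the block numbers appear in comments only as a reading aid.

```python
def on_pdb_instance_open(open_instances, scn, offline_ranges, redo_log):
    """Sketch of the FIG. 3 flow for one pluggable database. open_instances
    counts instances already open before this one; returns the new status
    tuple, or None when no status change is required."""
    if open_instances > 0:        # block 304: another instance is already open
        return None               # block 312: no status change required
    # Block 306: the status becomes "active" at this logical timestamp.
    # Block 308: generate a status change redo record in the shared redo log.
    redo_log.append({"scn": scn, "type": "status_change", "status": "active"})
    # Block 310: close the open-ended offline range, which ran from the
    # previous status change up to the current one.
    if offline_ranges and offline_ranges[-1][1] is None:
        offline_ranges[-1] = (offline_ranges[-1][0], scn)
    return ("active", scn)

redo_log, offline_ranges = [], [(0, None)]   # offline since SCN 0
assert on_pdb_instance_open(0, 42, offline_ranges, redo_log) == ("active", 42)
assert offline_ranges == [(0, 42)]           # offline range now closed at SCN 42
assert on_pdb_instance_open(1, 50, offline_ranges, redo_log) is None
```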
-
FIG. 4 is a flowchart illustrating an embodiment of a method for detecting a pluggable database status change to a clean status. The method may be performed by a process associated with a container database, such as container database 100. - At
block 402, the closing of a specific pluggable database instance is detected. For example, the specific pluggable database instance may be closed in normal operation. In one embodiment, when the closing of a specific pluggable database instance is an abnormal termination, the method described in FIG. 4 is performed by one or more surviving instances.
- Returning to decision block 404, if it is determined that the specific pluggable database instance is the last open instance of the corresponding pluggable database, processing continues to block 406, where a pluggable database status is changed to “clean”. The pluggable database status is maintained by the corresponding container database. For example, the status change may be recorded in
root data dictionary 104 of container database 100. In one embodiment, a pluggable database status maintained by a container database includes a logical timestamp indicating when the corresponding status change occurred. For example, root data dictionary 104 may include status information for pluggable database A 120 indicating that the status changed to “clean” at a specific logical timestamp. - Processing continues to block 408, where a redo record is generated. The redo record indicates that the status of the corresponding pluggable database is changed to “clean” at a corresponding logical timestamp. The redo record may be added to the redo log 106 of
container database 100. - Processing continues to block 410, where offline range data associated with the corresponding pluggable database is updated. When a specific pluggable database status changes to “clean”, the offline range data should indicate that the specific pluggable database is offline as of this status change. In one embodiment, offline range data is stored and updated in control file 110 of
container database 100. - Processing continues to block 412, where the method returns and/or terminates. For example, processing may continue to process another database operation, passing control to a calling process, generating any appropriate record or notification, returning after a method or function invocation, or terminating. In one embodiment, after a pluggable database is transitioned to a clean status, such as by the method of
FIG. 4, all data files associated with the pluggable database are checkpointed, and the data files will be closed on all database instances. -
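The FIG. 4 flow can be sketched in the same illustrative style, with invented data structures: a status change happens only when the last read-write instance closes.

```python
def on_pdb_instance_close(open_instances, scn, offline_ranges, redo_log):
    """Sketch of the FIG. 4 flow. open_instances counts instances open
    before this close; returns the new status tuple, or None when other
    instances remain open and no status change is required."""
    if open_instances > 1:        # block 404: not the last open instance
        return None               # block 412: no status change required
    # Block 406: the status becomes "clean" at this logical timestamp.
    # Block 408: generate a status change redo record in the shared redo log.
    redo_log.append({"scn": scn, "type": "status_change", "status": "clean"})
    # Block 410: open a new offline range starting at this status change.
    offline_ranges.append((scn, None))
    return ("clean", scn)

redo_log, offline_ranges = [], []
assert on_pdb_instance_close(2, 60, offline_ranges, redo_log) is None
assert on_pdb_instance_close(1, 70, offline_ranges, redo_log) == ("clean", 70)
assert offline_ranges == [(70, None)]        # offline from SCN 70 onward
```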
FIG. 5 is a flowchart illustrating an embodiment of a method for restoring a pluggable database. The method may be performed by a process associated with a container database, such as container database 100. At block 502, a restore logical timestamp, such as an SCN, is obtained. An individual pluggable database will be restored to a logical time associated with the logical timestamp. - Processing continues to block 504, where backup data files are loaded and restored for the pluggable database. For example, the backup data files may be selected based on a backup of the individual pluggable database or a backup of the container database, where the backup is associated with a logical timestamp.
- Processing continues to block 506, where offline range data corresponding to the pluggable database is evaluated. For example, offline range data may be stored in the control file of the container database and/or one or more header files of the data files of the pluggable database. In one embodiment, the data dictionary of the container database includes a pluggable database status. The pluggable database status may include a logical timestamp of the last status change of the pluggable database. The offline range data is evaluated to determine redo records that need to be processed. Redo records between the backup point and the restore point are processed unless they fall within an offline range of the pluggable database.
- Processing continues to decision block 508, where it is determined if more redo records of the container database remain to be processed. If more redo records remain to be processed, processing continues to block 510, where the next redo record of the container database is processed and/or applied. The redo log of the container database may contain redo records for other pluggable databases in addition to redo records for the current pluggable database. In one embodiment, the redo record is processed by determining whether the change contained therein is relevant to the current pluggable database, in which case the redo record is applied.
- Processing continues to decision block 512, where it is determined whether the current redo record indicates a pluggable database status change to “clean”. If the redo record indicates a pluggable database status change to “clean”, processing continues to step 514, where a checkpoint is generated. Otherwise, processing returns to
decision block 508. In one embodiment, the checkpoint is generated when the pluggable database status changes to clean to emulate non-consolidated database behavior when a non-consolidated database is closed. - Returning to decision block 508, if it is determined that no more redo records remain to be processed, processing continues to block 516, where the method returns and/or terminates. For example, processing may continue to process another database operation, passing control to a calling process, generating any appropriate record or notification, returning after a method or function invocation, or terminating.
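The recovery loop of blocks 504-516 in FIG. 5 can be sketched as follows. The record formats, the key/value stand-in for restored data files, and the checkpoint list are all invented for the example.

```python
def restore_pdb(backup, redo_stream, backup_scn, restore_scn, offline_ranges, pdb):
    """Sketch of the FIG. 5 recovery loop: start from restored backup data
    (block 504), replay relevant redo records (blocks 508-510), skip offline
    ranges, and generate a checkpoint whenever a status-change-to-clean
    record is replayed (blocks 512-514)."""
    data = dict(backup)                # block 504: restored backup data files
    checkpoints = []
    for rec in redo_stream:            # block 508: more redo records?
        scn = rec["scn"]
        if not (backup_scn < scn <= restore_scn):
            continue                   # outside the recovery window
        if any(lo <= scn <= hi for lo, hi in offline_ranges):
            continue                   # pluggable database was clean: skip
        if rec.get("type") == "status_change" and rec["status"] == "clean":
            checkpoints.append(scn)    # block 514: emulate a database close
        elif rec.get("pdb") == pdb:    # block 510: apply only relevant changes
            data[rec["key"]] = rec["value"]
    return data, checkpoints           # block 516: done

stream = [
    {"scn": 11, "pdb": "A", "key": "x", "value": 1},
    {"scn": 15, "type": "status_change", "status": "clean"},
    {"scn": 22, "pdb": "B", "key": "z", "value": 9},   # other PDB, offline range
    {"scn": 35, "pdb": "A", "key": "x", "value": 3},
]
data, cps = restore_pdb({"x": 0}, stream, backup_scn=10, restore_scn=40,
                        offline_ranges=[(16, 30)], pdb="A")
assert data == {"x": 3} and cps == [15]
```

Generating the checkpoint when the clean-status record is replayed mirrors what a non-consolidated database would do on close, which is exactly the emulation the text describes.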
- Embodiments of the present invention are used in the context of database management systems (DBMSs). Therefore, a description of a DBMS is useful. A DBMS manages a database. A DBMS may comprise one or more database servers. A database comprises database data and a database dictionary that are stored on a persistent memory mechanism, such as a set of hard disks. Database data may be stored in one or more data containers. Each container contains records. The data within each record is organized into one or more fields. In relational DBMSs, the data containers are referred to as tables, the records are referred to as rows, and the fields are referred to as columns. In object-oriented databases, the data containers are referred to as object classes, the records are referred to as objects, and the fields are referred to as attributes. Other database architectures may use other terminology.
- Users interact with a database server of a DBMS by submitting to the database server commands that cause the database server to perform operations on data stored in a database. A user may be one or more applications running on a client computer that interact with a database server. Multiple users may also be referred to herein collectively as a user.
- A database command may be in the form of a database statement that conforms to a database language. A database language for expressing database commands is the Structured Query Language (SQL). There are many different versions of SQL: some are standard, some are proprietary, and there are a variety of extensions. Data definition language (“DDL”) commands are issued to a DBMS to create or configure database objects, such as tables, views, or complex data types. SQL/XML is a common extension of SQL used when manipulating XML data in an object-relational database. Data manipulation language (“DML”) instructions are issued to a DBMS to manage data stored within a database structure. For instance, SELECT, INSERT, UPDATE, and DELETE are common examples of DML instructions found in some SQL implementations.
- A multi-node database management system is made up of interconnected nodes that share access to the same database. Typically, the nodes are interconnected via a network and share access, in varying degrees, to shared storage, e.g. shared access to a set of disk drives and data blocks stored thereon. The nodes in a multi-node database system may be in the form of a group of computers (e.g. work stations, personal computers) that are interconnected via a network. Alternately, the nodes may be the nodes of a grid, which is composed of nodes in the form of server blades interconnected with other server blades on a rack.
- Each node in a multi-node database system hosts a database server. A server, such as a database server, is a combination of integrated software components and an allocation of computational resources, such as memory, a node, and processes on the node for executing the integrated software components on a processor, the combination of the software and computational resources being dedicated to performing a particular function on behalf of one or more clients.
- Resources from multiple nodes in a multi-node database system can be allocated to running a particular database server's software. Each combination of the software and allocation of resources from a node is a server that is referred to herein as a “server instance” or “instance”. A database server may comprise multiple database instances, some or all of which are running on separate computers, including separate server blades.
- According to one embodiment, the techniques described herein are implemented by one or more special-purpose computing devices. The special-purpose computing devices may be hard-wired to perform the techniques, or may include digital electronic devices such as one or more application-specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs) that are persistently programmed to perform the techniques, or may include one or more general purpose hardware processors programmed to perform the techniques pursuant to program instructions in firmware, memory, other storage, or a combination. Such special-purpose computing devices may also combine custom hard-wired logic, ASICs, or FPGAs with custom programming to accomplish the techniques. The special-purpose computing devices may be desktop computer systems, portable computer systems, handheld devices, networking devices or any other device that incorporates hard-wired and/or program logic to implement the techniques.
- For example,
FIG. 6 is a block diagram that illustrates a computer system 600 upon which an embodiment of the invention may be implemented. Computer system 600 includes a bus 602 or other communication mechanism for communicating information, and a hardware processor 604 coupled with bus 602 for processing information. Hardware processor 604 may be, for example, a general purpose microprocessor. -
Computer system 600 also includes a main memory 606, such as a random access memory (RAM) or other dynamic storage device, coupled to bus 602 for storing information and instructions to be executed by processor 604. Main memory 606 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 604. Such instructions, when stored in non-transitory storage media accessible to processor 604, render computer system 600 into a special-purpose machine that is customized to perform the operations specified in the instructions. -
Computer system 600 further includes a read only memory (ROM) 608 or other static storage device coupled to bus 602 for storing static information and instructions for processor 604. A storage device 610, such as a magnetic disk or optical disk, is provided and coupled to bus 602 for storing information and instructions. -
Computer system 600 may be coupled via bus 602 to a display 612, such as a cathode ray tube (CRT), for displaying information to a computer user. An input device 614, including alphanumeric and other keys, is coupled to bus 602 for communicating information and command selections to processor 604. Another type of user input device is cursor control 616, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 604 and for controlling cursor movement on display 612. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane. -
Computer system 600 may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computer system causes or programs computer system 600 to be a special-purpose machine. According to one embodiment, the techniques herein are performed by computer system 600 in response to processor 604 executing one or more sequences of one or more instructions contained in main memory 606. Such instructions may be read into main memory 606 from another storage medium, such as storage device 610. Execution of the sequences of instructions contained in main memory 606 causes processor 604 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions. - The term “storage media” as used herein refers to any non-transitory media that store data and/or instructions that cause a machine to operate in a specific fashion. Such storage media may comprise non-volatile media and/or volatile media. Non-volatile media includes, for example, optical or magnetic disks, such as
storage device 610. Volatile media includes dynamic memory, such as main memory 606. Common forms of storage media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, any other memory chip or cartridge. - Storage media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between storage media. For example, transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise
bus 602. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications. - Various forms of media may be involved in carrying one or more sequences of one or more instructions to
processor 604 for execution. For example, the instructions may initially be carried on a magnetic disk or solid state drive of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem. A modem local to computer system 600 can receive the data on the telephone line and use an infra-red transmitter to convert the data to an infra-red signal. An infra-red detector can receive the data carried in the infra-red signal and appropriate circuitry can place the data on bus 602. Bus 602 carries the data to main memory 606, from which processor 604 retrieves and executes the instructions. The instructions received by main memory 606 may optionally be stored on storage device 610 either before or after execution by processor 604. -
Computer system 600 also includes a communication interface 618 coupled to bus 602. Communication interface 618 provides a two-way data communication coupling to a network link 620 that is connected to a local network 622. For example, communication interface 618 may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface 618 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, communication interface 618 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information. - Network link 620 typically provides data communication through one or more networks to other data devices. For example,
network link 620 may provide a connection through local network 622 to a host computer 624 or to data equipment operated by an Internet Service Provider (ISP) 626. ISP 626 in turn provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet” 628. Local network 622 and Internet 628 both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link 620 and through communication interface 618, which carry the digital data to and from computer system 600, are example forms of transmission media. -
Computer system 600 can send messages and receive data, including program code, through the network(s), network link 620 and communication interface 618. In the Internet example, a server 630 might transmit a requested code for an application program through Internet 628, ISP 626, local network 622 and communication interface 618. - The received code may be executed by
processor 604 as it is received, and/or stored in storage device 610, or other non-volatile storage for later execution. - In the foregoing specification, embodiments of the invention have been described with reference to numerous specific details that may vary from implementation to implementation. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. The sole and exclusive indicator of the scope of the invention, and what is intended by the applicants to be the scope of the invention, is the literal and equivalent scope of the set of claims that issue from this application, in the specific form in which such claims issue, including any subsequent correction.
Claims (20)
1. A method comprising:
storing, by a container database, redo records, each redo record of the redo records being associated with a logical timestamp, for multiple pluggable databases plugged into the container database, wherein the multiple pluggable databases include a pluggable database;
when it is detected that a first read-write instance of the pluggable database is opened, if no other read-write instances of the pluggable database are open:
updating offline range data associated with the pluggable database;
when it is detected that a second read-write instance of the pluggable database is closed, if the second read-write instance is the last open read-write instance of the pluggable database:
updating the offline range data associated with the pluggable database; and
restoring the pluggable database to a restore point based on the offline range data;
wherein the method is performed by one or more computing devices.
2. The method of claim 1 , wherein restoring the pluggable database to the restore point comprises:
loading at least one backup data file associated with the pluggable database;
processing at least one redo record associated with a redo logical timestamp in a range from a backup point associated with the at least one backup data file to the restore point, wherein redo records of the at least one redo record falling within the offline range data are not processed.
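The recovery flow described in claims 1 and 2 can be sketched as follows: redo records whose logical timestamps fall inside a pluggable database's offline ranges are skipped, because the database could not have changed while no read-write instance of it was open. All function and variable names below are illustrative assumptions, not taken from this specification:

```python
def in_offline_range(ts, offline_ranges):
    """True if logical timestamp ts falls inside any [start, end) offline range."""
    return any(start <= ts < end for start, end in offline_ranges)

def restore_to_point(backup_ts, restore_ts, redo_log, offline_ranges):
    """Replay redo from the backup point up to the restore point, filtering
    out records generated while the pluggable database was offline."""
    applied = []
    for ts, change in redo_log:
        if backup_ts < ts <= restore_ts and not in_offline_range(ts, offline_ranges):
            applied.append(change)
    return applied

# Redo records as (logical_timestamp, change) pairs.
redo_log = [(5, "upd-a"), (12, "upd-b"), (20, "upd-c"), (31, "upd-d")]
offline_ranges = [(10, 25)]  # database fully closed between timestamps 10 and 25
print(restore_to_point(backup_ts=0, restore_ts=40,
                       redo_log=redo_log, offline_ranges=offline_ranges))
# ['upd-a', 'upd-d']
```

The records at timestamps 12 and 20 are filtered out because they fall within the offline range, matching the "falling within the offline range data are not processed" limitation of claim 2.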
3. The method of claim 1 , wherein the offline range data associated with the pluggable database is stored in a control file of the container database.
4. The method of claim 1 , wherein the offline range data associated with the pluggable database is stored in file header data or other metadata for at least one data file associated with the pluggable database.
5. The method of claim 1 , wherein the second read-write instance of the pluggable database is closed abnormally, and at least one other instance of the pluggable database performs:
said detecting that the second read-write instance of the pluggable database is closed;
said changing the pluggable database status;
said generating the second redo record; and
said updating the offline range data.
6. The method of claim 1 , further comprising storing, by the container database, a current pluggable database status for the pluggable database and a last status change logical timestamp indicating the logical timestamp associated with a last pluggable database status change of the pluggable database to the current status.
7. The method of claim 6 , wherein the current pluggable database status and the last status change logical timestamp are stored in a data dictionary of the container database.
8. The method of claim 1 , further comprising:
generating a first redo record indicating the changing of the pluggable database status to the active status at a first logical timestamp; and
generating a second redo record indicating the changing of the pluggable database status to the clean status at a second logical timestamp.
9. The method of claim 8 , further comprising:
loading at least one backup data file associated with the pluggable database;
determining that the offline range data is not current;
processing at least one redo record associated with a redo logical timestamp in a range from a backup point associated with the at least one backup data file to the restore point, including the second redo record indicating the changing of the pluggable database status to the clean status; and
updating the offline range data based on the second redo record.
10. The method of claim 9 , further comprising:
generating a checkpoint for the pluggable database based on the second redo record indicating the changing of the pluggable database status to the clean status.
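Claims 8 through 10 describe reconstructing offline range data from status-change redo records: a record marking the change to the clean status opens an offline range, and the next record marking the change to the active status closes it. A hypothetical sketch of that reconstruction, with assumed names and record shapes, might look like:

```python
def offline_ranges_from_redo(status_records):
    """status_records: ordered (logical_timestamp, status) pairs, where
    status is "active" or "clean". Returns closed [start, end) offline ranges."""
    ranges, open_start = [], None
    for ts, status in status_records:
        if status == "clean" and open_start is None:
            open_start = ts                   # last read-write instance closed
        elif status == "active" and open_start is not None:
            ranges.append((open_start, ts))   # first read-write instance reopened
            open_start = None
    return ranges

# Redo records indicating status changes of the pluggable database.
records = [(3, "active"), (10, "clean"), (25, "active"), (40, "clean")]
print(offline_ranges_from_redo(records))  # [(10, 25)]
```

The final clean record at timestamp 40 leaves a range open; a still-open range would correspond to a pluggable database that remains closed at the point the redo stream ends.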
11. A non-transitory computer-readable medium storing instructions which, when executed by one or more processors, cause the one or more processors to carry out the steps of:
storing, by a container database, redo records, each redo record of the redo records being associated with a logical timestamp, for multiple pluggable databases plugged into the container database, wherein the multiple pluggable databases include a pluggable database;
when it is detected that a first read-write instance of the pluggable database is opened, if no other read-write instances of the pluggable database are open:
updating offline range data associated with the pluggable database;
when it is detected that a second read-write instance of the pluggable database is closed, if the second read-write instance is the last open read-write instance of the pluggable database:
updating the offline range data associated with the pluggable database; and
restoring the pluggable database to a restore point based on the offline range data;
wherein the steps are performed by one or more computing devices.
12. The non-transitory computer-readable medium of claim 11 , wherein restoring the pluggable database to the restore point comprises:
loading at least one backup data file associated with the pluggable database;
processing at least one redo record associated with a redo logical timestamp in a range from a backup point associated with the at least one backup data file to the restore point, wherein redo records of the at least one redo record falling within the offline range data are not processed.
13. The non-transitory computer-readable medium of claim 11 , wherein the offline range data associated with the pluggable database is stored in a control file of the container database.
14. The non-transitory computer-readable medium of claim 11 , wherein the offline range data associated with the pluggable database is stored in file header data or other metadata for at least one data file associated with the pluggable database.
15. The non-transitory computer-readable medium of claim 11 , wherein the second read-write instance of the pluggable database is closed abnormally, and at least one other instance of the pluggable database performs:
said detecting that the second read-write instance of the pluggable database is closed;
said changing the pluggable database status;
said generating the second redo record; and
said updating the offline range data.
16. The non-transitory computer-readable medium of claim 11 , wherein the steps further comprise storing, by the container database, a current pluggable database status for the pluggable database and a last status change logical timestamp indicating the logical timestamp associated with a last pluggable database status change of the pluggable database to the current status.
17. The non-transitory computer-readable medium of claim 11 , wherein the current pluggable database status and the last status change logical timestamp are stored in a data dictionary of the container database.
18. The non-transitory computer-readable medium of claim 11 , wherein the steps further comprise:
generating a first redo record indicating the changing of the pluggable database status to the active status at a first logical timestamp; and
generating a second redo record indicating the changing of the pluggable database status to the clean status at a second logical timestamp.
19. The non-transitory computer-readable medium of claim 18 , wherein the steps further comprise:
loading at least one backup data file associated with the pluggable database;
determining that the offline range data is not current;
processing at least one redo record associated with a redo logical timestamp in a range from a backup point associated with the at least one backup data file to the restore point, including the second redo record indicating the changing of the pluggable database status to the clean status;
updating the offline range data based on the second redo record.
20. The non-transitory computer-readable medium of claim 19 , wherein the steps further comprise generating a checkpoint for the pluggable database.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/135,202 US9684566B2 (en) | 2012-09-28 | 2013-12-19 | Techniques for backup restore and recovery of a pluggable database |
US15/014,969 US9928147B2 (en) | 2012-09-28 | 2016-02-03 | Forceful closure and automatic recovery of pluggable databases in a shared-everything cluster multitenant container database |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201261707726P | 2012-09-28 | 2012-09-28 | |
US13/631,815 US9239763B2 (en) | 2012-09-28 | 2012-09-28 | Container database |
US13/830,349 US9298564B2 (en) | 2012-09-28 | 2013-03-14 | In place point-in-time recovery of pluggable databases |
US14/135,202 US9684566B2 (en) | 2012-09-28 | 2013-12-19 | Techniques for backup restore and recovery of a pluggable database |
Related Parent Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/631,815 Continuation-In-Part US9239763B2 (en) | 1998-11-24 | 2012-09-28 | Container database |
US13/830,349 Continuation-In-Part US9298564B2 (en) | 2012-09-28 | 2013-03-14 | In place point-in-time recovery of pluggable databases |
Publications (3)
Publication Number | Publication Date |
---|---|
US20140164331A1 US20140164331A1 (en) | 2014-06-12 |
US20160210201A9 true US20160210201A9 (en) | 2016-07-21 |
US9684566B2 US9684566B2 (en) | 2017-06-20 |
Family
ID=50386179
Family Applications (6)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/631,815 Active US9239763B2 (en) | 1998-11-24 | 2012-09-28 | Container database |
US13/830,349 Active 2033-06-13 US9298564B2 (en) | 2012-09-28 | 2013-03-14 | In place point-in-time recovery of pluggable databases |
US13/841,272 Active 2033-10-24 US9122644B2 (en) | 2012-09-28 | 2013-03-15 | Common users, common roles, and commonly granted privileges and roles in container databases |
US14/135,202 Active 2034-06-25 US9684566B2 (en) | 2012-09-28 | 2013-12-19 | Techniques for backup restore and recovery of a pluggable database |
US14/835,507 Active US10191671B2 (en) | 2012-09-28 | 2015-08-25 | Common users, common roles, and commonly granted privileges and roles in container databases |
US15/012,621 Active 2037-10-22 US11175832B2 (en) | 2012-09-28 | 2016-02-01 | Thread groups for pluggable database connection consolidation in NUMA environment |
Family Applications Before (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/631,815 Active US9239763B2 (en) | 1998-11-24 | 2012-09-28 | Container database |
US13/830,349 Active 2033-06-13 US9298564B2 (en) | 2012-09-28 | 2013-03-14 | In place point-in-time recovery of pluggable databases |
US13/841,272 Active 2033-10-24 US9122644B2 (en) | 2012-09-28 | 2013-03-15 | Common users, common roles, and commonly granted privileges and roles in container databases |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/835,507 Active US10191671B2 (en) | 2012-09-28 | 2015-08-25 | Common users, common roles, and commonly granted privileges and roles in container databases |
US15/012,621 Active 2037-10-22 US11175832B2 (en) | 2012-09-28 | 2016-02-01 | Thread groups for pluggable database connection consolidation in NUMA environment |
Country Status (4)
Country | Link |
---|---|
US (6) | US9239763B2 (en) |
EP (1) | EP2901325B1 (en) |
CN (1) | CN104781809B (en) |
WO (1) | WO2014052851A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9928147B2 (en) | 2012-09-28 | 2018-03-27 | Oracle International Corporation | Forceful closure and automatic recovery of pluggable databases in a shared-everything cluster multitenant container database |
US10642861B2 (en) | 2013-10-30 | 2020-05-05 | Oracle International Corporation | Multi-instance redo apply |
Families Citing this family (89)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10191922B2 (en) | 1998-11-24 | 2019-01-29 | Oracle International Corporation | Determining live migration speed based on workload and performance characteristics |
US9239763B2 (en) | 2012-09-28 | 2016-01-19 | Oracle International Corporation | Container database |
US20140019421A1 (en) * | 2012-07-13 | 2014-01-16 | Apple Inc. | Shared Architecture for Database Systems |
US10922331B2 (en) | 2012-09-28 | 2021-02-16 | Oracle International Corporation | Cloning a pluggable database in read-write mode |
US10635674B2 (en) * | 2012-09-28 | 2020-04-28 | Oracle International Corporation | Migrating a pluggable database between database server instances with minimal impact to performance |
US9396220B2 (en) | 2014-03-10 | 2016-07-19 | Oracle International Corporation | Instantaneous unplug of pluggable database from one container database and plug into another container database |
US8903779B1 (en) | 2013-03-06 | 2014-12-02 | Gravic, Inc. | Methods for returning a corrupted database to a known, correct state |
US9418129B2 (en) | 2013-03-08 | 2016-08-16 | Oracle International Corporation | Adaptive high-performance database redo log synchronization |
US10152500B2 (en) | 2013-03-14 | 2018-12-11 | Oracle International Corporation | Read mostly instances |
US9514007B2 (en) * | 2013-03-15 | 2016-12-06 | Amazon Technologies, Inc. | Database system with database engine and separate distributed storage service |
US9298933B2 (en) * | 2013-07-18 | 2016-03-29 | Sybase, Inc. | Autonomous role-based security for database management systems |
US9830372B2 (en) | 2013-07-24 | 2017-11-28 | Oracle International Corporation | Scalable coordination aware static partitioning for database replication |
US9922300B2 (en) * | 2013-11-26 | 2018-03-20 | Sap Se | Enterprise performance management planning operations at an enterprise database |
US9779128B2 (en) * | 2014-04-10 | 2017-10-03 | Futurewei Technologies, Inc. | System and method for massively parallel processing database |
CN105101196B (en) * | 2014-05-06 | 2018-11-02 | 阿里巴巴集团控股有限公司 | A kind of user account management method and device |
JP2015225603A (en) * | 2014-05-29 | 2015-12-14 | 富士通株式会社 | Storage control device, storage control method, and storage control program |
US10218591B2 (en) | 2014-06-23 | 2019-02-26 | Oracle International Corporation | Embedded performance monitoring of a DBMS |
US9836360B2 (en) * | 2014-11-25 | 2017-12-05 | Sap Se | Recovery strategy with dynamic number of volumes |
US10409835B2 (en) * | 2014-11-28 | 2019-09-10 | Microsoft Technology Licensing, Llc | Efficient data manipulation support |
GB2534374A (en) * | 2015-01-20 | 2016-07-27 | Ibm | Distributed System with accelerator-created containers |
GB2534373A (en) | 2015-01-20 | 2016-07-27 | Ibm | Distributed system with accelerator and catalog |
EP3248101B1 (en) | 2015-01-23 | 2021-12-08 | ServiceNow, Inc. | Distributed computing system with resource managed database cloning |
US9804935B1 (en) * | 2015-01-26 | 2017-10-31 | Intel Corporation | Methods for repairing a corrupted database to a new, correct state by selectively using redo and undo operations |
US9830223B1 (en) * | 2015-01-26 | 2017-11-28 | Intel Corporation | Methods for repairing a corrupted database to a new, correct state |
US10007352B2 (en) | 2015-08-21 | 2018-06-26 | Microsoft Technology Licensing, Llc | Holographic display system with undo functionality |
US10657116B2 (en) | 2015-10-19 | 2020-05-19 | Oracle International Corporation | Create table for exchange |
US10789131B2 (en) | 2015-10-23 | 2020-09-29 | Oracle International Corporation | Transportable backups for pluggable database relocation |
CN108431810B (en) * | 2015-10-23 | 2022-02-01 | 甲骨文国际公司 | Proxy database |
US10733316B2 (en) * | 2015-10-23 | 2020-08-04 | Oracle International Corporation | Pluggable database lockdown profile |
US10635658B2 (en) | 2015-10-23 | 2020-04-28 | Oracle International Corporation | Asynchronous shared application upgrade |
US10747752B2 (en) | 2015-10-23 | 2020-08-18 | Oracle International Corporation | Space management for transactional consistency of in-memory objects on a standby database |
US10803078B2 (en) * | 2015-10-23 | 2020-10-13 | Oracle International Corporation | Ability to group multiple container databases as a single container database cluster |
US11068437B2 (en) | 2015-10-23 | 2021-07-20 | Oracle International Corporation | Periodic snapshots of a pluggable database in a container database |
US10360269B2 (en) * | 2015-10-23 | 2019-07-23 | Oracle International Corporation | Proxy databases |
US11657037B2 (en) | 2015-10-23 | 2023-05-23 | Oracle International Corporation | Query execution against an in-memory standby database |
US10579478B2 (en) | 2015-10-23 | 2020-03-03 | Oracle International Corporation | Pluggable database archive |
WO2017070572A1 (en) * | 2015-10-23 | 2017-04-27 | Oracle International Corporation | Application containers for container databases |
US10606578B2 (en) * | 2015-10-23 | 2020-03-31 | Oracle International Corporation | Provisioning of pluggable databases using a central repository |
US10289617B2 (en) | 2015-12-17 | 2019-05-14 | Oracle International Corporation | Accessing on-premise and off-premise datastores that are organized using different application schemas |
US10387387B2 (en) | 2015-12-17 | 2019-08-20 | Oracle International Corporation | Enabling multi-tenant access to respective isolated data sets organized using different application schemas |
US10171471B2 (en) * | 2016-01-10 | 2019-01-01 | International Business Machines Corporation | Evidence-based role based access control |
US10303894B2 (en) * | 2016-08-31 | 2019-05-28 | Oracle International Corporation | Fine-grained access control for data manipulation language (DML) operations on relational data |
US10248685B2 (en) | 2016-08-31 | 2019-04-02 | Oracle International Corporation | Efficient determination of committed changes |
US11277435B2 (en) * | 2016-09-14 | 2022-03-15 | Oracle International Corporation | Reducing network attack surface area for a database using deep input validation |
US10698771B2 (en) | 2016-09-15 | 2020-06-30 | Oracle International Corporation | Zero-data-loss with asynchronous redo shipping to a standby database |
US10747782B2 (en) * | 2016-09-16 | 2020-08-18 | Oracle International Corporation | Efficient dual-objective cache |
US10423600B2 (en) | 2016-09-16 | 2019-09-24 | Oracle International Corporation | Low latency query processing over a series of redo records |
US10528538B2 (en) | 2016-09-30 | 2020-01-07 | Oracle International Corporation | Leveraging SQL with user defined aggregation to efficiently merge inverted indexes stored as tables |
US10891291B2 (en) | 2016-10-31 | 2021-01-12 | Oracle International Corporation | Facilitating operations on pluggable databases using separate logical timestamp services |
US10949310B2 (en) * | 2016-11-28 | 2021-03-16 | Sap Se | Physio-logical logging for in-memory row-oriented database system |
US11475006B2 (en) | 2016-12-02 | 2022-10-18 | Oracle International Corporation | Query and change propagation scheduling for heterogeneous database systems |
US10769034B2 (en) * | 2017-03-07 | 2020-09-08 | Sap Se | Caching DML statement context during asynchronous database system replication |
US10691722B2 (en) | 2017-05-31 | 2020-06-23 | Oracle International Corporation | Consistent query execution for big data analytics in a hybrid database |
CN107391720A (en) * | 2017-07-31 | 2017-11-24 | 郑州云海信息技术有限公司 | A kind of data summarization method and device |
US11386058B2 (en) | 2017-09-29 | 2022-07-12 | Oracle International Corporation | Rule-based autonomous database cloud service framework |
US10949413B2 (en) * | 2017-09-29 | 2021-03-16 | Oracle International Corporation | Method and system for supporting data consistency on an active standby database after DML redirection to a primary database |
US11327932B2 (en) * | 2017-09-30 | 2022-05-10 | Oracle International Corporation | Autonomous multitenant database cloud service framework |
US10649981B2 (en) | 2017-10-23 | 2020-05-12 | Vmware, Inc. | Direct access to object state in a shared log |
US11392567B2 (en) * | 2017-10-30 | 2022-07-19 | Vmware, Inc. | Just-in-time multi-indexed tables in a shared log |
CN108089948B (en) * | 2017-12-20 | 2021-02-02 | 北京搜狐新媒体信息技术有限公司 | Database backup method and device |
US10642680B2 (en) * | 2018-02-23 | 2020-05-05 | International Business Machines Corporation | Chronologically ordered log-structured key-value store from failures during garbage collection |
US10635523B2 (en) | 2018-02-23 | 2020-04-28 | International Business Machines Corporation | Fast recovery from failures in a chronologically ordered log-structured key-value storage system |
US11226876B2 (en) * | 2018-06-21 | 2022-01-18 | Sap Se | Non-blocking backup in a log replay node for tertiary initialization |
US11068460B2 (en) | 2018-08-06 | 2021-07-20 | Oracle International Corporation | Automated real-time index management |
US11188516B2 (en) | 2018-08-24 | 2021-11-30 | Oracle International Corporation | Providing consistent database recovery after database failure for distributed databases with non-durable storage leveraging background synchronization point |
US11386264B2 (en) * | 2018-09-14 | 2022-07-12 | Sap Se | Configuring complex tables in a client experience framework |
US11188555B2 (en) * | 2018-10-10 | 2021-11-30 | Oracle International Corporation | Isolating a network stack for pluggable databases |
US11113110B2 (en) * | 2018-10-19 | 2021-09-07 | Oracle International Corporation | Intelligent pooling of isolated hierarchical runtimes for cloud scale databases in a multi-tenant environment |
US10942945B2 (en) * | 2018-10-19 | 2021-03-09 | Oracle International Corporation | Isolated hierarchical runtime environments for multi-tenant databases |
US11334445B2 (en) | 2018-10-19 | 2022-05-17 | Oracle International Corporation | Using non-volatile memory to improve the availability of an in-memory database |
CN109933463A (en) * | 2019-02-28 | 2019-06-25 | 苏州浪潮智能科技有限公司 | A kind of data back up method and system based on storage and backup system medium pool |
US11281670B2 (en) * | 2019-03-30 | 2022-03-22 | Oracle International Corporation | High-performance implementation of sharing of read-only data in a multi-tenant environment |
CN110196793B (en) * | 2019-04-30 | 2023-05-12 | 武汉达梦数据库股份有限公司 | Log analysis method and device for plug-in database |
CN110457181B (en) * | 2019-08-02 | 2023-05-16 | 武汉达梦数据库股份有限公司 | Log optimization analysis method and device for database |
US11726952B2 (en) | 2019-09-13 | 2023-08-15 | Oracle International Corporation | Optimization of resources providing public cloud services based on adjustable inactivity monitor and instance archiver |
US11451371B2 (en) * | 2019-10-30 | 2022-09-20 | Dell Products L.P. | Data masking framework for information processing system |
US11269825B2 (en) | 2019-12-13 | 2022-03-08 | Sap Se | Privilege retention for database migration |
CN113050874A (en) * | 2019-12-26 | 2021-06-29 | 华为技术有限公司 | Memory setting method and device |
US11372995B2 (en) * | 2020-01-17 | 2022-06-28 | Snowflake Inc. | Container-centric access control on database objects |
US11604761B2 (en) | 2020-01-30 | 2023-03-14 | Rubrik, Inc. | Utilizing a tablespace to export from a foreign database recovery environment |
US11360860B2 (en) | 2020-01-30 | 2022-06-14 | Rubrik, Inc. | Exporting a database from a foreign database recovery environment |
US11467925B2 (en) | 2020-01-30 | 2022-10-11 | Rubrik, Inc. | Exporting a database to a native database recovery environment |
US11609828B2 (en) | 2020-01-30 | 2023-03-21 | Rubrik, Inc. | Utilizing a tablespace to export to a native database recovery environment |
CN111611107A (en) * | 2020-05-21 | 2020-09-01 | 云和恩墨(北京)信息技术有限公司 | Method and device for acquiring database logs |
WO2021243358A1 (en) * | 2020-05-27 | 2021-12-02 | Insight Creations Llc | Method and system for producing data visualizations via limited bandwidth communication channels |
US11669411B2 (en) | 2020-12-06 | 2023-06-06 | Oracle International Corporation | Efficient pluggable database recovery with redo filtering in a consolidated database |
US20220284056A1 (en) * | 2021-03-05 | 2022-09-08 | Oracle International Corporation | Fast and memory efficient in-memory columnar graph updates while preserving analytical performance |
US20230033806A1 (en) * | 2021-07-30 | 2023-02-02 | Oracle International Corporation | Data guard at pdb (pluggable database) level |
US11934543B1 (en) | 2022-11-17 | 2024-03-19 | Snowflake Inc. | Transient object references |
Family Cites Families (150)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
SE448919B (en) | 1983-03-04 | 1987-03-23 | Ibm Svenska Ab | METHOD FOR TRANSFERRING INFORMATION DEVICES IN A COMPUTER NETWORK, AND COMPUTER NETWORK FOR IMPLEMENTATION OF THE METHOD |
AU591057B2 (en) | 1984-06-01 | 1989-11-30 | Digital Equipment Corporation | Local area network for digital data processing system |
US4823122A (en) | 1984-06-01 | 1989-04-18 | Digital Equipment Corporation | Local area network for digital data processing system |
US4664528A (en) | 1985-10-18 | 1987-05-12 | Betz Laboratories, Inc. | Apparatus for mixing water and emulsion polymer |
US5047922A (en) | 1988-02-01 | 1991-09-10 | Intel Corporation | Virtual I/O |
AU601328B2 (en) | 1988-05-26 | 1990-09-06 | Digital Equipment Corporation | Temporary state preservation for a distributed file service |
US5247671A (en) | 1990-02-14 | 1993-09-21 | International Business Machines Corporation | Scalable schedules for serial communications controller in data processing systems |
US5319754A (en) | 1991-10-03 | 1994-06-07 | Compaq Computer Corporation | Data transfer system between a computer and a host adapter using multiple arrays |
US5642515A (en) | 1992-04-17 | 1997-06-24 | International Business Machines Corporation | Network server for local and remote resources |
US5742760A (en) | 1992-05-12 | 1998-04-21 | Compaq Computer Corporation | Network packet switch using shared memory for repeating and bridging packets at media rate |
JPH06214969A (en) | 1992-09-30 | 1994-08-05 | Internatl Business Mach Corp <Ibm> | Method and equipment for information communication |
GB9309468D0 (en) | 1993-05-07 | 1993-06-23 | Roke Manor Research | Improvements in or relating to asynchronous transfer mode communication systems |
US5289461A (en) | 1992-12-14 | 1994-02-22 | International Business Machines Corporation | Interconnection method for digital multimedia communications |
US5392285A (en) | 1993-03-31 | 1995-02-21 | Intel Corporation | Cascading twisted pair ethernet hubs by designating one hub as a master and designating all other hubs as slaves |
US5963556A (en) | 1993-06-23 | 1999-10-05 | Digital Equipment Corporation | Device for partitioning ports of a bridge into groups of different virtual local area networks |
JP3263878B2 (en) | 1993-10-06 | 2002-03-11 | 日本電信電話株式会社 | Cryptographic communication system |
US5553242A (en) | 1993-11-03 | 1996-09-03 | Wang Laboratories, Inc. | Client/server connection sharing |
US5596745A (en) | 1994-05-16 | 1997-01-21 | International Business Machines Corporation | System and procedure for concurrent database access by multiple user applications through shared connection processes |
US5598536A (en) | 1994-08-09 | 1997-01-28 | Shiva Corporation | Apparatus and method for providing remote users with the same unique IP address upon each network access |
US5553239A (en) | 1994-11-10 | 1996-09-03 | At&T Corporation | Management facility for server entry and application utilization in a multi-node server configuration |
ZA959722B (en) | 1994-12-19 | 1996-05-31 | Alcatel Nv | Traffic management and congestion control for packet-based networks |
US5828879A (en) | 1994-12-22 | 1998-10-27 | Fore Systems, Inc. | Method and a scheduler for controlling when a server provides service to an entity |
US5774668A (en) | 1995-06-07 | 1998-06-30 | Microsoft Corporation | System for on-line service in which gateway computer uses service map which includes loading condition of servers broadcasted by application servers for load balancing |
US5617540A (en) | 1995-07-31 | 1997-04-01 | At&T | System for binding host name of servers and address of available server in cache within client and for clearing cache prior to client establishes connection |
US5682534A (en) | 1995-09-12 | 1997-10-28 | International Business Machines Corporation | Transparent local RPC optimization |
US5740175A (en) | 1995-10-03 | 1998-04-14 | National Semiconductor Corporation | Forwarding database cache for integrated switch controller |
US5790800A (en) | 1995-10-13 | 1998-08-04 | Digital Equipment Corporation | Client application program mobilizer |
US5805920A (en) | 1995-11-13 | 1998-09-08 | Tandem Computers Incorporated | Direct bulk data transfers |
US5684800A (en) | 1995-11-15 | 1997-11-04 | Cabletron Systems, Inc. | Method for establishing restricted broadcast groups in a switched network |
US5805827A (en) | 1996-03-04 | 1998-09-08 | 3Com Corporation | Distributed signal processing for data channels maintaining channel bandwidth |
US5761507A (en) | 1996-03-05 | 1998-06-02 | International Business Machines Corporation | Client/server architecture supporting concurrent servers within a server with a transaction manager providing server/connection decoupling |
US6647510B1 (en) | 1996-03-19 | 2003-11-11 | Oracle International Corporation | Method and apparatus for making available data that was locked by a dead transaction before rolling back the entire dead transaction |
US7415466B2 (en) | 1996-03-19 | 2008-08-19 | Oracle International Corporation | Parallel transaction recovery |
US5774660A (en) | 1996-08-05 | 1998-06-30 | Resonate, Inc. | World-wide-web server with delayed resource-binding for resource-based load balancing on a distributed resource multi-node network |
US5944823A (en) | 1996-10-21 | 1999-08-31 | International Business Machines Corporation | Outside access to computer resources through a firewall |
US5873102A (en) * | 1997-04-29 | 1999-02-16 | Oracle Corporation | Pluggable tablespaces on a transportable medium |
US5890167A (en) * | 1997-05-08 | 1999-03-30 | Oracle Corporation | Pluggable tablespaces for database systems |
US7031987B2 (en) * | 1997-05-30 | 2006-04-18 | Oracle International Corporation | Integrating tablespaces with different block sizes |
US6272503B1 (en) * | 1997-05-30 | 2001-08-07 | Oracle Corporation | Tablespace-relative database pointers |
US5974463A (en) | 1997-06-09 | 1999-10-26 | Compaq Computer Corporation | Scaleable network system for remote access of a local network |
US6088728A (en) | 1997-06-11 | 2000-07-11 | Oracle Corporation | System using session data stored in session data storage for associating and disassociating user identifiers for switching client sessions in a server |
US5978849A (en) | 1997-06-13 | 1999-11-02 | International Business Machines Corporation | Systems, methods, and computer program products for establishing TCP connections using information from closed TCP connections in time-wait state |
US6006264A (en) | 1997-08-01 | 1999-12-21 | Arrowpoint Communications, Inc. | Method and system for directing a flow between a client and a server |
US5987430A (en) | 1997-08-28 | 1999-11-16 | Atcom, Inc. | Communications network connection system and method |
GB2332809A (en) | 1997-12-24 | 1999-06-30 | Northern Telecom Ltd | Least cost routing |
US6185699B1 (en) | 1998-01-05 | 2001-02-06 | International Business Machines Corporation | Method and apparatus providing system availability during DBMS restart recovery |
US6205449B1 (en) | 1998-03-20 | 2001-03-20 | Lucent Technologies, Inc. | System and method for providing hot spare redundancy and recovery for a very large database management system |
US6138120A (en) | 1998-06-19 | 2000-10-24 | Oracle Corporation | System for sharing server sessions across multiple clients |
US6226650B1 (en) | 1998-09-17 | 2001-05-01 | Synchrologic, Inc. | Database synchronization and organization system and method |
US6295610B1 (en) | 1998-09-17 | 2001-09-25 | Oracle Corporation | Recovering resources in parallel |
US20010051956A1 (en) | 1998-09-29 | 2001-12-13 | Paul Bird | Global caching and sharing of sql statements in a heterogeneous application environment |
US10191922B2 (en) | 1998-11-24 | 2019-01-29 | Oracle International Corporation | Determining live migration speed based on workload and performance characteristics |
US9239763B2 (en) | 2012-09-28 | 2016-01-19 | Oracle International Corporation | Container database |
JP2001256333A (en) | 2000-01-06 | 2001-09-21 | Canon Inc | Operation allocation system and method, decentralized client-server system, and computer program storage medium |
US6742035B1 (en) * | 2000-02-28 | 2004-05-25 | Novell, Inc. | Directory-based volume location service for a distributed file system |
US7506034B2 (en) | 2000-03-03 | 2009-03-17 | Intel Corporation | Methods and apparatus for off loading content servers through direct file transfer from a storage center to an end-user |
US6950848B1 (en) | 2000-05-05 | 2005-09-27 | Yousefi Zadeh Homayoun | Database load balancing for multi-tier computer systems |
US20020147652A1 (en) | 2000-10-18 | 2002-10-10 | Ahmed Gheith | System and method for distributed client state management across a plurality of server computers |
US6868417B2 (en) | 2000-12-18 | 2005-03-15 | Spinnaker Networks, Inc. | Mechanism for handling file level and block level remote file accesses using the same server |
US7512686B2 (en) | 2000-12-21 | 2009-03-31 | Berg Mitchell T | Method and system for establishing a data structure of a connection with a client |
WO2002091120A2 (en) * | 2001-05-04 | 2002-11-14 | Logic Junction | System and method for logical agent engine |
US6738933B2 (en) | 2001-05-09 | 2004-05-18 | Mercury Interactive Corporation | Root cause analysis of server system performance degradations |
US7305421B2 (en) | 2001-07-16 | 2007-12-04 | Sap Ag | Parallelized redo-only logging and recovery for highly available main memory database systems |
US7174379B2 (en) | 2001-08-03 | 2007-02-06 | International Business Machines Corporation | Managing server resources for hosted applications |
US6873694B2 (en) | 2001-10-30 | 2005-03-29 | Hewlett-Packard Development Company, L.P. | Telephony network optimization method and system |
US7039654B1 (en) | 2002-09-12 | 2006-05-02 | Asset Trust, Inc. | Automated bot development system |
US20080027769A1 (en) | 2002-09-09 | 2008-01-31 | Jeff Scott Eder | Knowledge based performance management system |
US9087319B2 (en) | 2002-03-11 | 2015-07-21 | Oracle America, Inc. | System and method for designing, developing and implementing internet service provider architectures |
US20030229695A1 (en) | 2002-03-21 | 2003-12-11 | Mc Bride Edmund Joseph | System for use in determining network operational characteristics |
US8738568B2 (en) | 2011-05-05 | 2014-05-27 | Oracle International Corporation | User-defined parallelization in transactional replication of in-memory database |
US7496655B2 (en) | 2002-05-01 | 2009-02-24 | Satyam Computer Services Limited Of Mayfair Centre | System and method for static and dynamic load analyses of communication network |
US7346690B1 (en) | 2002-05-07 | 2008-03-18 | Oracle International Corporation | Deferred piggybacked messaging mechanism for session reuse |
US20040030801A1 (en) | 2002-06-14 | 2004-02-12 | Moran Timothy L. | Method and system for a client to invoke a named service |
US7386672B2 (en) | 2002-08-29 | 2008-06-10 | International Business Machines Corporation | Apparatus and method for providing global session persistence |
US6981004B2 (en) | 2002-09-16 | 2005-12-27 | Oracle International Corporation | Method and mechanism for implementing in-memory transaction logging records |
US6976022B2 (en) | 2002-09-16 | 2005-12-13 | Oracle International Corporation | Method and mechanism for batch processing transaction logging records |
US6983295B1 (en) | 2002-10-24 | 2006-01-03 | Unisys Corporation | System and method for database recovery using a mirrored snapshot of an online database |
US7406481B2 (en) | 2002-12-17 | 2008-07-29 | Oracle International Corporation | Using direct memory access for performing database operations between two or more machines |
US20040176996A1 (en) | 2003-03-03 | 2004-09-09 | Jason Powers | Method for monitoring a managed system |
US7284054B2 (en) | 2003-04-11 | 2007-10-16 | Sun Microsystems, Inc. | Systems, methods, and articles of manufacture for aligning service containers |
US7890466B2 (en) | 2003-04-16 | 2011-02-15 | Oracle International Corporation | Techniques for increasing the usefulness of transaction logs |
US8621031B2 (en) | 2003-04-29 | 2013-12-31 | Oracle International Corporation | Method and apparatus using connection pools in communication networks |
US7181476B2 (en) | 2003-04-30 | 2007-02-20 | Oracle International Corporation | Flashback database |
US7349340B2 (en) | 2003-06-18 | 2008-03-25 | Hewlett-Packard Development Company, L.P. | System and method of monitoring e-service Quality of Service at a transaction level |
US7457829B2 (en) | 2003-06-23 | 2008-11-25 | Microsoft Corporation | Resynchronization of multiple copies of a database after a divergence in transaction history |
US7664847B2 (en) | 2003-08-14 | 2010-02-16 | Oracle International Corporation | Managing workload by service |
US7873684B2 (en) | 2003-08-14 | 2011-01-18 | Oracle International Corporation | Automatic and dynamic provisioning of databases |
US7937493B2 (en) | 2003-08-14 | 2011-05-03 | Oracle International Corporation | Connection pool use of runtime load balancing service performance advisories |
US7503052B2 (en) * | 2004-04-14 | 2009-03-10 | Microsoft Corporation | Asynchronous database API |
US20050289213A1 (en) * | 2004-06-25 | 2005-12-29 | International Business Machines Corporation | Switching between blocking and non-blocking input/output |
US7822727B1 (en) | 2004-07-02 | 2010-10-26 | Borland Software Corporation | System and methodology for performing read-only transactions in a shared cache |
US20060047713A1 (en) | 2004-08-03 | 2006-03-02 | Wisdomforce Technologies, Inc. | System and method for database replication by interception of in memory transactional change records |
US8204931B2 (en) | 2004-12-28 | 2012-06-19 | Sap Ag | Session management within a multi-tiered enterprise network |
US7302533B2 (en) * | 2005-03-11 | 2007-11-27 | International Business Machines Corporation | System and method for optimally configuring software systems for a NUMA platform |
US7610314B2 (en) * | 2005-10-07 | 2009-10-27 | Oracle International Corporation | Online tablespace recovery for export |
CA2626227C (en) | 2005-10-28 | 2016-07-05 | Goldengate Software, Inc. | Apparatus and method for creating a real time database replica |
US8943181B2 (en) | 2005-11-29 | 2015-01-27 | Ebay Inc. | Method and system for reducing connections to a database |
US8266214B2 (en) * | 2006-01-24 | 2012-09-11 | Simulat, Inc. | System and method for collaborative web-based multimedia layered platform with recording and selective playback of content |
US7822717B2 (en) | 2006-02-07 | 2010-10-26 | Emc Corporation | Point-in-time database restore |
US9026679B1 (en) | 2006-03-30 | 2015-05-05 | Emc Corporation | Methods and apparatus for persisting management information changes |
US20080208820A1 (en) | 2007-02-28 | 2008-08-28 | Psydex Corporation | Systems and methods for performing semantic analysis of information over time and space |
US8364648B1 (en) | 2007-04-09 | 2013-01-29 | Quest Software, Inc. | Recovering a database to any point-in-time in the past with guaranteed data consistency |
US20090183225A1 (en) * | 2008-01-10 | 2009-07-16 | Microsoft Corporation | Pluggable modules for terminal services |
US20090215469A1 (en) * | 2008-02-27 | 2009-08-27 | Amit Fisher | Device, System, and Method of Generating Location-Based Social Networks |
WO2010093831A1 (en) * | 2009-02-11 | 2010-08-19 | Social Gaming Network | Apparatuses, methods and systems for an interactive proximity display tether with remote co-play |
US20100205153A1 (en) * | 2009-02-12 | 2010-08-12 | Accenture Global Services Gmbh | Data System Architecture to Analyze Distributed Data Sets |
US8245008B2 (en) * | 2009-02-18 | 2012-08-14 | Advanced Micro Devices, Inc. | System and method for NUMA-aware heap memory management |
US8271615B2 (en) | 2009-03-31 | 2012-09-18 | Cloud Connex, Llc | Centrally managing and monitoring software as a service (SaaS) applications |
US8214424B2 (en) * | 2009-04-16 | 2012-07-03 | International Business Machines Corporation | User level message broadcast mechanism in distributed computing environment |
US20110112901A1 (en) * | 2009-05-08 | 2011-05-12 | Lance Fried | Trust-based personalized offer portal |
US8549038B2 (en) * | 2009-06-15 | 2013-10-01 | Oracle International Corporation | Pluggable session context |
US10120767B2 (en) | 2009-07-15 | 2018-11-06 | Idera, Inc. | System, method, and computer program product for creating a virtual database |
US8479216B2 (en) | 2009-08-18 | 2013-07-02 | International Business Machines Corporation | Method for decentralized load distribution in an event-driven system using localized migration between physically connected nodes and load exchange protocol preventing simultaneous migration of plurality of tasks to or from a same node |
US8429134B2 (en) | 2009-09-08 | 2013-04-23 | Oracle International Corporation | Distributed database recovery |
EP2323047B1 (en) | 2009-10-09 | 2020-02-19 | Software AG | Primary database system, replication database system and method for replicating data of a primary database system |
US8332758B2 (en) * | 2009-11-25 | 2012-12-11 | International Business Machines Corporation | Plugin-based user interface contributions to manage policies in an IT environment |
JP5302227B2 (en) | 2010-01-19 | 2013-10-02 | 富士通テン株式会社 | Image processing apparatus, image processing system, and image processing method |
EP2553613A4 (en) * | 2010-03-26 | 2017-01-25 | Nokia Technologies Oy | Method and apparatus for portable index on a removable storage medium |
US8386431B2 (en) | 2010-06-14 | 2013-02-26 | Sap Ag | Method and system for determining database object associated with tenant-independent or tenant-specific data, configured to store data partition, current version of the respective convertor |
US20110314035A1 (en) * | 2010-06-21 | 2011-12-22 | Storage Appliance Corporation | Creation, Transfer and Use of a Portable Data Map Using Metadata |
US8625113B2 (en) * | 2010-09-24 | 2014-01-07 | Ricoh Company Ltd | System and method for distributed optical character recognition processing |
US9081837B2 (en) | 2010-10-28 | 2015-07-14 | Microsoft Technology Licensing, Llc | Scoped database connections |
US8478718B1 (en) | 2010-11-16 | 2013-07-02 | Symantec Corporation | Systems and methods for replicating data in cluster environments |
US8554762B1 (en) | 2010-12-28 | 2013-10-08 | Amazon Technologies, Inc. | Data replication framework |
US20120226659A1 (en) * | 2011-02-02 | 2012-09-06 | Ball Derek | System and method for monitoring elements and categorizing users within a network |
US9311462B1 (en) * | 2011-03-04 | 2016-04-12 | Zynga Inc. | Cross platform social networking authentication system |
US20120284544A1 (en) | 2011-05-06 | 2012-11-08 | Microsoft Corporation | Storage Device Power Management |
US9262181B2 (en) * | 2011-05-10 | 2016-02-16 | International Business Machines Corporation | Process grouping for improved cache and memory affinity |
US8868492B2 (en) | 2011-06-15 | 2014-10-21 | Oracle International Corporation | Method for maximizing throughput and minimizing transactions response times on the primary system in the presence of a zero data loss standby replica |
US20130085742A1 (en) | 2011-10-04 | 2013-04-04 | Nec Laboratories America, Inc. | Service level agreement-aware migration for multitenant database platforms |
US9058371B2 (en) | 2011-11-07 | 2015-06-16 | Sap Se | Distributed database log recovery |
KR101322401B1 (en) | 2012-01-31 | 2013-10-28 | 주식회사 알티베이스 | Apparatus and method for parallel processing in database management system for synchronous replication |
US8955065B2 (en) * | 2012-02-01 | 2015-02-10 | Amazon Technologies, Inc. | Recovery of managed security credentials |
US8527462B1 (en) | 2012-02-09 | 2013-09-03 | Microsoft Corporation | Database point-in-time restore and as-of query |
US20130226816A1 (en) * | 2012-02-24 | 2013-08-29 | Microsoft Corporation | Creating customized project plans integrated with user data stores |
US8805849B1 (en) | 2012-06-20 | 2014-08-12 | Symantec Corporation | Enabling use of analytic functions for distributed storage system data |
CA2791110A1 (en) * | 2012-09-25 | 2014-03-25 | Pixton Comics Inc. | Collaborative comic creation |
US9396220B2 (en) | 2014-03-10 | 2016-07-19 | Oracle International Corporation | Instantaneous unplug of pluggable database from one container database and plug into another container database |
US10635674B2 (en) | 2012-09-28 | 2020-04-28 | Oracle International Corporation | Migrating a pluggable database between database server instances with minimal impact to performance |
US10922331B2 (en) | 2012-09-28 | 2021-02-16 | Oracle International Corporation | Cloning a pluggable database in read-write mode |
US9633216B2 (en) | 2012-12-27 | 2017-04-25 | Commvault Systems, Inc. | Application of information management policies based on operation with a geographic entity |
US9563655B2 (en) | 2013-03-08 | 2017-02-07 | Oracle International Corporation | Zero and near-zero data loss database backup and recovery |
US9830372B2 (en) | 2013-07-24 | 2017-11-28 | Oracle International Corporation | Scalable coordination aware static partitioning for database replication |
US9767178B2 (en) | 2013-10-30 | 2017-09-19 | Oracle International Corporation | Multi-instance redo apply |
US10635644B2 (en) * | 2013-11-11 | 2020-04-28 | Amazon Technologies, Inc. | Partition-based data stream processing framework |
US9390120B1 (en) | 2013-12-31 | 2016-07-12 | Google Inc. | System and methods for organizing hierarchical database replication |
US10148757B2 (en) | 2014-02-21 | 2018-12-04 | Hewlett Packard Enterprise Development Lp | Migrating cloud resources |
US11172022B2 (en) | 2014-02-21 | 2021-11-09 | Hewlett Packard Enterprise Development Lp | Migrating cloud resources |
US9684561B1 (en) * | 2014-09-29 | 2017-06-20 | EMC IP Holding Company LLC | Smart assistant for backing up data |
CA3204109A1 (en) * | 2014-10-27 | 2016-05-06 | William Sale | System and method for performing concurrent database operations on a database record |
- 2012
  - 2012-09-28 US US13/631,815 patent/US9239763B2/en active Active
- 2013
  - 2013-03-14 US US13/830,349 patent/US9298564B2/en active Active
  - 2013-03-15 US US13/841,272 patent/US9122644B2/en active Active
  - 2013-09-27 CN CN201380058414.3A patent/CN104781809B/en active Active
  - 2013-09-27 EP EP13777372.7A patent/EP2901325B1/en active Active
  - 2013-09-27 WO PCT/US2013/062342 patent/WO2014052851A1/en active Application Filing
  - 2013-12-19 US US14/135,202 patent/US9684566B2/en active Active
- 2015
  - 2015-08-25 US US14/835,507 patent/US10191671B2/en active Active
- 2016
  - 2016-02-01 US US15/012,621 patent/US11175832B2/en active Active
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9928147B2 (en) | 2012-09-28 | 2018-03-27 | Oracle International Corporation | Forceful closure and automatic recovery of pluggable databases in a shared-everything cluster multitenant container database |
US10642861B2 (en) | 2013-10-30 | 2020-05-05 | Oracle International Corporation | Multi-instance redo apply |
Also Published As
Publication number | Publication date |
---|---|
EP2901325B1 (en) | 2022-01-26 |
US9684566B2 (en) | 2017-06-20 |
CN104781809A (en) | 2015-07-15 |
US20140164331A1 (en) | 2014-06-12 |
US20140095452A1 (en) | 2014-04-03 |
US11175832B2 (en) | 2021-11-16 |
US9239763B2 (en) | 2016-01-19 |
US10191671B2 (en) | 2019-01-29 |
US9122644B2 (en) | 2015-09-01 |
EP2901325A1 (en) | 2015-08-05 |
CN104781809B (en) | 2018-02-09 |
US20170220271A1 (en) | 2017-08-03 |
WO2014052851A1 (en) | 2014-04-03 |
US20140095546A1 (en) | 2014-04-03 |
US9298564B2 (en) | 2016-03-29 |
US20150363610A1 (en) | 2015-12-17 |
US20140095530A1 (en) | 2014-04-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9684566B2 (en) | Techniques for backup restore and recovery of a pluggable database |
CN107209704B (en) | Method, system and apparatus for database management | |
US7610314B2 (en) | Online tablespace recovery for export | |
US10678808B2 (en) | Eager replication of uncommitted transactions | |
CN108475271B (en) | Application container of container database | |
KR102579190B1 (en) | Backup and restore in distributed databases using consistent database snapshots | |
CN104813276B (en) | Recover database from standby system streaming | |
EP3117340B1 (en) | Instantaneous unplug of pluggable database from one container database and plug into another container database | |
EP2746965B1 (en) | Systems and methods for in-memory database processing | |
US11068437B2 (en) | Periodic snapshots of a pluggable database in a container database | |
US20140019421A1 (en) | Shared Architecture for Database Systems | |
EP2746971A2 (en) | Replication mechanisms for database environments | |
Hvasshovd | Recovery in parallel database systems | |
US11150964B1 (en) | Sequential processing of changes in a distributed system | |
US9928147B2 (en) | Forceful closure and automatic recovery of pluggable databases in a shared-everything cluster multitenant container database | |
Stamatakis et al. | A general-purpose architecture for replicated metadata services in distributed file systems | |
Donselaar | Low latency asynchronous database synchronization and data transformation using the replication log. | |
US20230033806A1 (en) | Data guard at pdb (pluggable database) level | |
US20230350859A1 (en) | Data retrieval from archived data storage | |
Zhou et al. | FoundationDB: A Distributed Key Value Store | |
US20200394209A1 (en) | Multi-master with ownership transfer | |
Iqbal Hossain | SQL query based data and structure uniformity maintenance in heterogeneous database environment | |
Butelli | PERFORMANCE ANALYSIS OF A DATABASE LAYER’S MIGRATION FROM RDBMS TO A NOSQL SOLUTION IN AMAZON AWS |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: ORACLE INTERNATIONAL CORPORATION, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LI, YUNRUI;MINH, CHI CAO;RAJAMANI, KUMAR;AND OTHERS;SIGNING DATES FROM 20131217 TO 20131218;REEL/FRAME:031823/0979 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |