US10740318B2 - Key pattern management in multi-tenancy database systems - Google Patents

Key pattern management in multi-tenancy database systems

Info

Publication number: US10740318B2
Application number: US15/794,368
Authority: US (United States)
Prior art keywords: tenant, shared, read, container, database container
Legal status: Active, expires
Other versions: US20190129988A1
Inventors: Ulrich Auer, Immo-Gert Birn, Ralf-Juergen Hauck, Uwe Schlarb, Christian Stork, Welf Walter, Torsten Ziegler, Volker Driesen
Current Assignee: SAP SE
Original Assignee: SAP SE

Application filed by SAP SE
Priority to US15/794,368
Assigned to SAP SE (assignors: Christian Stork, Welf Walter, Ulrich Auer, Ralf-Juergen Hauck, Uwe Schlarb, Torsten Ziegler, Immo-Gert Birn, Volker Driesen)
Priority to EP17001948.3A (EP3477503A1)
Priority to CN201711270288.4A (CN110019215B)
Publication of US20190129988A1
Priority to US16/860,532 (US11561956B2)
Application granted
Publication of US10740318B2

Classifications

    • G: PHYSICS
      • G06: COMPUTING; CALCULATING OR COUNTING
        • G06F: ELECTRIC DIGITAL DATA PROCESSING
          • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
            • G06F 16/20: Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
              • G06F 16/21: Design, administration or maintenance of databases
                • G06F 16/215: Improving data quality; Data cleansing, e.g. de-duplication, removing invalid entries or correcting typographical errors
              • G06F 16/22: Indexing; Data structures therefor; Storage structures
                • G06F 16/2282: Tablespace storage structures; Management thereof
              • G06F 16/23: Updating
                • G06F 16/2308: Concurrency control
                  • G06F 16/2336: Pessimistic concurrency control approaches, e.g. locking or multiple versions without time stamps
                    • G06F 16/2343: Locking methods, e.g. distributed locking or locking implementation details
              • G06F 16/24: Querying
                • G06F 16/245: Query processing
                  • G06F 16/2455: Query execution
                    • G06F 16/24553: Query execution of query operations
                      • G06F 16/24554: Unary operations; Data partitioning operations
                        • G06F 16/24556: Aggregation; Duplicate elimination
                    • G06F 16/24564: Applying rules; Deductive queries
                  • G06F 16/2458: Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
                    • G06F 16/2471: Distributed queries
              • G06F 16/25: Integrating or interfacing systems involving database management systems
              • G06F 16/27: Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
        • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
          • G06Q 30/00: Commerce
            • G06Q 30/06: Buying, selling or leasing transactions
              • G06Q 30/0645: Rental transactions; Leasing transactions
    • H: ELECTRICITY
      • H04: ELECTRIC COMMUNICATION TECHNIQUE
        • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
          • H04L 9/00: Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
            • H04L 9/08: Key distribution or management, e.g. generation, sharing or updating, of cryptographic keys or passwords
              • H04L 9/0816: Key establishment, i.e. cryptographic processes or cryptographic protocols whereby a shared secret becomes available to two or more parties, for subsequent use
                • H04L 9/0819: Key transport or distribution, i.e. key establishment techniques where one party creates or otherwise obtains a secret value, and securely transfers it to the other(s)
          • H04L 63/00: Network architectures or network communication protocols for network security
            • H04L 63/06: Network architectures or network communication protocols for network security for supporting key management in a packet data network
              • H04L 63/065: Network architectures or network communication protocols for network security for supporting key management in a packet data network for group communications

Definitions

  • the present disclosure relates to computer-implemented methods, software, and systems for key pattern management in multi-tenancy database systems.
  • a multi-tenancy software architecture can include a single instance of a software application that runs on a server and serves multiple tenants.
  • a tenant is a group of users who share a common access to the software instance.
  • the software application can be designed to provide every tenant a dedicated share of the instance—including tenant-specific data, configuration, user management, and tenant-specific functionality. Multi-tenancy can be used in cloud computing.
  • One example method includes receiving a query for a logical database table from an application. A determination is made as to whether the query is a write query. In response to determining that the query is a write query, a determination is made as to whether the query complies with a key pattern configuration that describes keys of records included in a physical database table that is part of a logical table implementation.
  • the physical table includes records of the logical database table that are allowed to be written by the application.
  • the write query is redirected to the physical database table in response to determining that the query complies with the key pattern configuration. The query is rejected in response to determining that the query does not comply with the key pattern configuration.
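  • As a rough, non-authoritative sketch of this flow (the class and function names below are illustrative and not taken from the patent), the following Python snippet routes a query against a logical mixed table: reads go to the union view, writes whose keys match the table's key pattern are redirected to the tenant-writable physical table, and writes whose keys fall outside the pattern are rejected.
```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class KeyPatternConfig:
    # Predicate over a record key: True if the key belongs to the key space
    # reserved for tenant-written records (e.g., keys starting with "Z").
    allows_tenant_key: Callable[[str], bool]

@dataclass
class Query:
    table: str      # logical table name, e.g. "TAB"
    is_write: bool  # insert/update/delete vs. select
    key: str        # primary-key value targeted by the query

def route_query(query: Query, patterns: Dict[str, KeyPatternConfig]) -> str:
    """Return the physical target of a query against a logical mixed table."""
    if not query.is_write:
        # Reads are served by the union view, which spans both physical tables.
        return query.table
    pattern = patterns[query.table]
    if pattern.allows_tenant_key(query.key):
        # Redirect the write to the tenant-local, writable physical table.
        return "/W/" + query.table
    # Key falls into the shared, read-only key space: reject the write.
    raise PermissionError(f"write to {query.table} violates the key pattern")

patterns = {"TAB": KeyPatternConfig(allows_tenant_key=lambda k: k.startswith("Z"))}
print(route_query(Query("TAB", is_write=False, key="100"), patterns))  # TAB
print(route_query(Query("TAB", is_write=True, key="Z001"), patterns))  # /W/TAB
```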
  • FIG. 1 is a block diagram illustrating an example system for multi-tenancy.
  • FIG. 2 illustrates an example system for an application with a standard database setup.
  • FIG. 3 illustrates an example non-multi-tenancy system in which the same content is stored for multiple, different tenants in different database containers.
  • FIG. 4A illustrates an example system in which data for a tenant is split.
  • FIG. 4B illustrates an example multi-tenancy system that includes multiple tables of each of multiple table types.
  • FIG. 4C illustrates an example multi-tenancy system that uses a suffix table naming scheme.
  • FIGS. 5 and 6 illustrate example systems that include a shared database container, a first tenant database container for a first tenant, and a second tenant database container for a second tenant.
  • FIG. 7 illustrates a system for constraint enforcement.
  • FIG. 8 illustrates an example system for deploying content in accordance with configured tenant keys.
  • FIG. 9 illustrates an example system for changing tenant keys.
  • FIG. 10 illustrates an example system for updating database records to comply with updated tenant keys.
  • FIG. 11 illustrates an example system for updating database records to comply with updated tenant keys using a transfer file.
  • FIG. 12 illustrates an example system for updating an inactive tenant keys record.
  • FIG. 13A illustrates an example system that includes a standard system with a standard system-sharing type and a shared/tenant system with a shared/tenant system-sharing type.
  • FIG. 13B is a table that illustrates processing that can be performed for standard, shared, and tenant database containers.
  • FIG. 14 illustrates a system for transitioning from a standard system to a shared/tenant system.
  • FIG. 15 illustrates a system with a sharing type of simulated.
  • FIG. 16 illustrates a system for transitioning from a standard system to a simulated system.
  • FIG. 17 illustrates a system for transitioning from a simulated system to a shared/tenant system.
  • FIG. 18 illustrates a system for transitioning from a shared/tenant system to a standard system.
  • FIG. 19 illustrates a system for transitioning from a simulated system to a standard system.
  • FIG. 20 illustrates a system that includes data for objects in both a shared database container and a tenant database container.
  • FIGS. 21A-B illustrate example systems for deploying changes to objects in a database system.
  • FIG. 22 illustrates an example system for upgrading a multi-tenancy database system using an exchanged shared database container approach.
  • FIG. 23 illustrates an example system for deploying a new service pack to a multi-tenancy database system.
  • FIG. 24 illustrates an example system for maintenance of a database system.
  • FIG. 25 illustrates an example system for upgrading a multi-tenancy system to a new version.
  • FIG. 26 illustrates an example system before deployment of a new database version using an exchanged shared database container approach.
  • FIGS. 27-31 are illustrations of example systems that are upgraded in part by exchanging a shared database container.
  • FIG. 32 illustrates a system for deploying changes to objects.
  • FIG. 33 illustrates a system for deploying a patch using a hidden preparation of a shared database container.
  • FIG. 34 illustrates an example system before deployment of a patch.
  • FIG. 35 illustrates a system for preparation of a shared database container during a deployment of a patch to a database system.
  • FIGS. 36 and 37 illustrate systems for deploying a patch to a tenant database container.
  • FIG. 38 illustrates a system for performing finalization of a deployment.
  • FIG. 39 illustrates a system after deployment using a hidden preparation of a shared database container technique.
  • FIG. 40 is a flowchart of an example method for handling unsuccessful tenant deployments.
  • FIG. 41 illustrates a system for deploying multiple patches to a database system.
  • FIG. 42 illustrates a system for preparing a shared database container before deploying multiple patches to a database system.
  • FIGS. 43-47 illustrate example systems for deploying multiple patches to a database system.
  • FIG. 48 illustrates a system after deployment of multiple patches to a database system has completed.
  • FIG. 49 is a flowchart of an example method for applying different types of changes to a multi-tenancy database system.
  • FIG. 50 is a flowchart of an example method for changing a sharing type of one or more tables.
  • FIG. 51 is a table that illustrates a transition from a first table type to a second, different table type.
  • FIG. 52 illustrates a system which includes a first system that is at a first version and a second system that is at a second, later version.
  • FIG. 53 illustrates conversions between various table types.
  • FIG. 54 illustrates a system for changing tenant keys when exchanging a shared database container.
  • FIG. 55 is a flowchart of an example method for redirecting a write query.
  • FIG. 56 is a flowchart of an example method for key pattern management.
  • FIG. 57 is a flowchart of an example method for transitioning between system sharing types.
  • FIG. 58 is a flowchart of an example method for exchanging a shared database container.
  • FIG. 59 is a flowchart of an example method for patching a shared database container.
  • FIG. 60 is a flowchart of an example method for deploying different types of changes to a database system.
  • FIG. 61 is a flowchart of an example method for changing key pattern definitions.
  • resources can be shared between applications from different customers. Each customer can be referred to as a tenant.
  • Shared resources can include, for example, vendor code, application documentation, and central runtime and configuration data.
  • Multi-tenancy can enable improved use of shared resources between multiple application instances, across tenants, which can reduce disk storage and processing requirements.
  • Multi-tenancy can enable centralized software change management for events such as patching or software upgrades.
  • a content separation approach can be used to separate shared data from tenant-specific data.
  • Multi-tenancy approaches can be applied to existing applications that were built without data separation as a design criterion. If multi-tenancy is implemented for an existing system, applications can execute unchanged. Applications can be provided with a unified view on stored data that hides from the application which data is shared and which data is tenant-local. Other advantages are discussed in more detail below.
  • FIG. 1 is a block diagram illustrating an example system 100 for multi-tenancy.
  • the illustrated system 100 includes or is communicably coupled with a database system 102 , an end user client device 104 , an administrator client device 105 , an application server 106 , and a network 108 .
  • functionality of two or more systems or servers may be provided by a single system or server.
  • the functionality of one illustrated system or server may be provided by multiple systems or servers.
  • the system 100 can include multiple application servers, a database server, a centralized services server, or some other combination of systems or servers.
  • An end user can use an end-user client device 104 to use a client application 110 that is a client version of a server application 112 hosted by the application server 106 .
  • the client application 110 may be any client-side application that can access and interact with at least a portion of the illustrated data, including a web browser, a specific app (e.g., a mobile app), or another suitable application.
  • the server application 112 can store and modify data in tables provided by a database system. The tables are defined in a data dictionary 114 and reside in either shared database containers 116 and/or tenant database containers 118 , as described below.
  • the server application 112 can access a database management system 119 using a database interface 120 .
  • the database management system 119 can provide a database that includes a common set of tables that can be used by multiple application providers. Each application provider can be referred to as a customer, or tenant, of the database system.
  • the database system 102 can store tenant-specific data for each tenant. However, at least some of the data provided by the database system 102 can be common data that can be shared by multiple tenants, such as master data or other non-tenant-specific data. Accordingly, common, shared data can be stored in one or more shared database containers 116 and tenant-specific data can be stored in one or more tenant database containers 118 (e.g., each tenant can have at least one dedicated tenant database container 118 ).
  • a shared database container 116 can store common data used by multiple instances of an application and the tenant database containers 118 can store data specific to each instance.
  • a data split and sharing system 122 can manage the splitting of data between the shared database containers 116 and the tenant database containers 118 .
  • the shared database containers 116 can include shared, read-only tables that include shared data, where the shared data can be used by multiple tenants as a common data set.
  • the tenant database containers 118 can include writable tables that store tenant-specific data that may be modified by a given tenant.
  • Some application tables, referred to as mixed or split tables, may include both read-only records that are common and are shared among multiple tenants and writable records that have been added for a specific tenant, or that are editable by or for a specific tenant before and/or during interactions with the system.
  • the read-only records of a mixed table can be stored in a shared, read-only portion in a shared database container 116 .
  • Writable mixed-table records that may be modified by a given tenant can be stored in a writable portion in each tenant database container 118 of each tenant that uses the application.
  • Data for a given object can be split across tables of different types.
  • the data split and sharing system 122 can enable common portions of objects to be stored in a shared database container 116 .
  • the data dictionary 114 can store information indicating which tables are shared, whether fully or partially.
  • the server application 112 can be designed to be unaware of whether multi-tenancy has been implemented in the database system 102 .
  • the server application 112 can submit queries to the database system 102 using a same set of logical table names, regardless of whether multi-tenancy has been implemented in the database system 102 for a given tenant.
  • the server application 112 can submit a query using a logical name of a mixed table, and the database system 102 can return query results, regardless of whether the mixed table is a single physical table when multi-tenancy has not yet been implemented, or whether the mixed table is represented as multiple tables, including a read-only portion and a writable portion, in different database containers.
  • the multi-tenancy features implemented by the data split and sharing system 122 can allow an application to be programmed to use a single logical table for mixed data storage while still allowing the sharing of common vendor data between different customers.
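  • The following sqlite3 sketch is a minimal stand-in for the layout just described, assuming illustrative table and column names: one database plays the shared container with the read-only part of mixed table TAB, another plays the tenant container with the writable part, and a union view named TAB gives the application a single logical table. (sqlite3 only permits cross-database views as TEMP views, so the union view is created as TEMP here; a production system would create it inside the tenant container.)
```python
import sqlite3

# Illustrative stand-in: ":memory:" databases play the containers.
conn = sqlite3.connect(":memory:")                    # tenant database container
conn.execute("ATTACH DATABASE ':memory:' AS shared")  # shared database container

conn.execute('CREATE TABLE shared."/R/TAB" (kf1 TEXT PRIMARY KEY, data TEXT)')
conn.execute('CREATE TABLE "/W/TAB" (kf1 TEXT PRIMARY KEY, data TEXT)')

# Vendor-shipped shared records vs. tenant-written records.
conn.executemany('INSERT INTO shared."/R/TAB" VALUES (?, ?)',
                 [("100", "shipped"), ("200", "shipped")])
conn.execute('INSERT INTO "/W/TAB" VALUES (?, ?)', ("Z001", "tenant-local"))

# Union view named like the original mixed table, so the application keeps
# using the single logical name TAB. (TEMP view: sqlite3-specific workaround.)
conn.execute('''CREATE TEMP VIEW TAB AS
                SELECT kf1, data FROM shared."/R/TAB"
                UNION ALL
                SELECT kf1, data FROM "/W/TAB"''')

print(conn.execute("SELECT * FROM TAB ORDER BY kf1").fetchall())
# [('100', 'shipped'), ('200', 'shipped'), ('Z001', 'tenant-local')]
```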
  • An application which has not been previously designed for data sharing and multi tenancy can remain unchanged after implementation of multi-tenancy.
  • the data sharing provided by multi-tenancy can reduce data and memory footprints of an application deployment.
  • a constraint enforcement system 126 can be used to define key patterns which describe which records are allowed to be stored in a writable portion for a given mixed table, which can be used to prevent duplicate records.
  • the database interface 120 can be configured to determine that an incoming query is a write query for a mixed table that is represented as multiple physical tables in the database system 102 , and in response, use a write redirecter 128 to ensure that the write query operates only on a write portion of a mixed table.
  • the use of write redirection and key patterns can help with enforcement of data consistency, both during application operation and during content deployment done by a deployment tool 130 .
  • the deployment tool 130 can be used, for example, to deploy new content for the database system 102 after installation of tenant applications.
  • An administrator can initiate a deployment using a deployment administrator application 132 on an administrator client device 105 , for example.
  • the deployment tool 130 can use a change management system 134 to determine how to make each of the required changes.
  • the change management system 134 includes infrastructures for managing and making different types of changes.
  • the change management system includes a structure change infrastructure 136 for managing table structure changes, a split definition infrastructure 138 for managing changes to key patterns, and a sharing type change infrastructure 140 for managing changes to which tables are shared among tenants.
  • the change management system 134 can manage when and in which order or combination the respective sub infrastructures are invoked.
  • the deployment tool 130 can use an approach of exchanging a shared database container 116 , which can be more efficient than making changes inline to an existing shared database container 116 .
  • a shared database container exchanger 142 can prepare a new shared database container 116 for the deployment tool 130 to deploy.
  • the deployment tool 130 can link tenant database containers 118 to the new shared database container 116 .
  • the existing shared database container 116 can be dropped after all tenants have been upgraded. Deployment status can be stored in metadata 144 while an upgrade is in process.
  • the approach of exchanging a shared database container 116 can allow tenants to be upgraded individually—e.g., each tenant can be linked to the new shared database container 116 during an individual downtime window that can be customized for each tenant. If an upgrade for one tenant fails, a deployment for that tenant can be retried, and other tenant deployments can remain unaffected.
  • the deploying of the new shared database container 116 can reduce downtime because the new shared database container 116 can be deployed during uptime while the existing shared database container 116 is in use.
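  • A hedged sketch of this exchange idea follows, under the assumption that the tenant's read-only views are simply re-created against the new shared container during that tenant's window; all names below are illustrative, and sqlite3 TEMP views stand in for views inside the tenant container.
```python
import sqlite3

conn = sqlite3.connect(":memory:")              # tenant database container
conn.execute("ATTACH ':memory:' AS shared_v1")  # shared container currently in use
conn.execute("ATTACH ':memory:' AS shared_v2")  # new shared container, prepared during uptime

# Both shared containers carry the read-only table, at different content versions.
for schema, version in (("shared_v1", "1.0"), ("shared_v2", "2.0")):
    conn.execute(f'CREATE TABLE {schema}."TABR" (k TEXT PRIMARY KEY, v TEXT)')
    conn.execute(f'INSERT INTO {schema}."TABR" VALUES (?, ?)', ("VERSION", version))

def link_tenant_to(connection: sqlite3.Connection, schema: str) -> None:
    """Re-create the tenant's read-only view against the given shared schema."""
    connection.execute("DROP VIEW IF EXISTS TABR")
    connection.execute(f'CREATE TEMP VIEW TABR AS SELECT * FROM {schema}."TABR"')

link_tenant_to(conn, "shared_v1")
print(conn.execute("SELECT v FROM TABR").fetchone())  # ('1.0',)
link_tenant_to(conn, "shared_v2")                     # this tenant's individual window
print(conn.execute("SELECT v FROM TABR").fetchone())  # ('2.0',)
# Once every tenant has been re-linked, the old shared container can be dropped.
```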
  • the deployment tool 130 can use a patching system 146 to make necessary changes inline to an existing shared database container 116 , rather than exchanging the existing shared database container 116 .
  • Changes for a patch can be deployed to shared tables that are initially hidden from tenants. This can enable tenants to be individually linked to the hidden table versions, which can enable individual tenant-specific upgrade windows and fallback capability, similar to the exchanged shared database container approach.
  • the patching system 146 can also enable a queue of patches to be applied. For example, deployment of a first patch can be in progress for a set of tenants, with some but not all of the tenants having the first patch applied. A problem can occur with a tenant who has already been upgraded with the first patch. A second patch can be developed to fix the problem, and the second patch can be applied to that tenant. The other tenants can be upgraded with the first patch (and possibly the second patch) at a later time.
  • Needs of an application system or a customer/tenant may change over time.
  • a database used for a set of customers may initially be relatively small, and may not include enough data to warrant implementation of multi-tenancy for that application/database/customer. For example, a choice may be made to use one database container for that customer, since higher performance may be obtained with a single database container than with several.
  • a customer may grow over time, may have a larger database, may run more application instances, etc.
  • a particular database may be used by more tenants than in the past.
  • the database system 102 can support a changing from one type of system setup to another, as needs change.
  • a system sharing type modifier 148 can change the database system 102 from a standard setup (e.g., one database container, with no multi-tenancy) for a given customer to a shared/tenant setup that uses a shared database container 116 for shared content and tenant database containers 118 for tenant-specific content.
  • a simulated setup can be used for the database system 102 .
  • a system sharing type can be stored as a system setting in the metadata 144 .
  • the deployment tool 130 , the database interface 120 , and the data split and sharing system 122 can alter behavior based on the system sharing type.
  • the server application 112 can run without being aware of a current system sharing type, and whether a system sharing type has been changed from one type to another.
  • FIG. 1 illustrates a single database system 102 , a single end-user client device 104 , a single administrator client device 105 , and a single application server 106
  • the system 100 can be implemented using a single, stand-alone computing device, two or more database systems 102 , two or more application servers 106 , two or more end-user client devices 104 , two or more administrator client devices 105 , etc.
  • the database system 102 , the application server 106 , the administrator client device 105 , and the client device 104 may be any computer or processing device such as, for example, a blade server, general-purpose personal computer (PC), Mac®, workstation, UNIX-based workstation, or any other suitable device.
  • the present disclosure contemplates computers other than general purpose computers, as well as computers without conventional operating systems.
  • the database system 102 , the application server 106 , the administrator client device 105 , and the client device 104 may be adapted to execute any operating system, including Linux, UNIX, Windows, Mac OS®, JavaTM, AndroidTM, iOS or any other suitable operating system.
  • the application server 106 and/or the database system 102 may also include or be communicably coupled with an e-mail server, a Web server, a caching server, a streaming data server, and/or other suitable server.
  • Interfaces 160 , 162 , 164 , and 166 are used by the database system 102 , the application server 106 , the administrator client device 105 , and the client device 104 , respectively, for communicating with other systems in a distributed environment—including within the system 100 —connected to the network 108 .
  • the interfaces 160 , 162 , 164 , and 166 each comprise logic encoded in software and/or hardware in a suitable combination and operable to communicate with the network 108 . More specifically, the interfaces 160 , 162 , 164 , and 166 may each comprise software supporting one or more communication protocols associated with communications such that the network 108 or interface's hardware is operable to communicate physical signals within and outside of the illustrated system 100 .
  • the database system 102 , the application server 106 , the administrator client device 105 , and the client device 104 each respectively include one or more processors 170 , 172 , 174 , or 176 .
  • Each processor in the processors 170 , 172 , 174 , and 176 may be a central processing unit (CPU), a blade, an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or another suitable component.
  • each processor in the processors 170 , 172 , 174 , and 176 executes instructions and manipulates data to perform the operations of a respective computing device.
  • “software” may include computer-readable instructions, firmware, wired and/or programmed hardware, or any combination thereof on a tangible medium (transitory or non-transitory, as appropriate) operable when executed to perform at least the processes and operations described herein. Indeed, each software component may be fully or partially written or described in any appropriate computer language including C, C++, JavaTM, JavaScript®, Visual Basic, assembler, Perl®, any suitable version of 4GL, as well as others. While portions of the software illustrated in FIG. 1 are shown as individual modules that implement the various features and functionality through various objects, methods, or other processes, the software may instead include a number of sub-modules, third-party services, components, libraries, and such, as appropriate. Conversely, the features and functionality of various components can be combined into single components as appropriate.
  • the database system 102 and the application server 106 respectively include memory 180 or memory 182 .
  • the database system 102 and/or the application server 106 include multiple memories.
  • the memory 180 and the memory 182 may each include any type of memory or database module and may take the form of volatile and/or non-volatile memory including, without limitation, magnetic media, optical media, random access memory (RAM), read-only memory (ROM), removable media, or any other suitable local or remote memory component.
  • Each of the memory 180 and the memory 182 may store various objects or data, including caches, classes, frameworks, applications, backup data, business objects, jobs, web pages, web page templates, database tables, database queries, repositories storing business and/or dynamic information, and any other appropriate information including any parameters, variables, algorithms, instructions, rules, constraints, or references thereto associated with the purposes of the respective computing device.
  • the end-user client device 104 and the administrator client device 105 may each be any computing device operable to connect to or communicate in the network 108 using a wireline or wireless connection.
  • each of the end-user client device 104 and the administrator client device 105 comprises an electronic computer device operable to receive, transmit, process, and store any appropriate data associated with the system 100 of FIG. 1 .
  • Each of the end-user client device 104 and the administrator client device 105 can include one or more client applications, including the client application 110 or the deployment administrator application 132 , respectively.
  • a client application is any type of application that allows a client device to request and view content on the client device.
  • a client application can use parameters, metadata, and other information received at launch to access a particular set of data from the database system 102 .
  • a client application may be an agent or client-side version of the one or more enterprise applications running on an enterprise server (not shown).
  • Each of the end-user client device 104 and the administrator client device 105 is generally intended to encompass any client computing device such as a laptop/notebook computer, wireless data port, smart phone, personal data assistant (PDA), tablet computing device, one or more processors within these devices, or any other suitable processing device.
  • the end-user client device 104 and/or the administrator client device 105 may comprise a computer that includes an input device, such as a keypad, touch screen, or other device that can accept user information, and an output device that conveys information associated with the operation of the database system 102 , or the client device itself, including digital data, visual information, or a graphical user interface (GUI) 190 or 192 , respectively.
  • the GUI 190 and the GUI 192 each interface with at least a portion of the system 100 for any suitable purpose, including generating a visual representation of the client application 110 or the deployment administrator application 132 , respectively.
  • the GUI 190 and the GUI 192 may each be used to view and navigate various Web pages.
  • the GUI 190 and the GUI 192 each provide the user with an efficient and user-friendly presentation of business data provided by or communicated within the system.
  • the GUI 190 and the GUI 192 may each comprise a plurality of customizable frames or views having interactive fields, pull-down lists, and buttons operated by the user.
  • the GUI 190 and the GUI 192 each contemplate any suitable graphical user interface, such as a combination of a generic web browser, intelligent engine, and command line interface (CLI) that processes information and efficiently presents the results to the user visually.
  • Memory 194 and memory 196 respectively included in the end-user client device 104 or the administrator client device 105 may each include any memory or database module and may take the form of volatile or non-volatile memory including, without limitation, magnetic media, optical media, random access memory (RAM), read-only memory (ROM), removable media, or any other suitable local or remote memory component.
  • the memory 194 and the memory 196 may each store various objects or data, including user selections, caches, classes, frameworks, applications, backup data, business objects, jobs, web pages, web page templates, database tables, repositories storing business and/or dynamic information, and any other appropriate information including any parameters, variables, algorithms, instructions, rules, constraints, or references thereto associated with the purposes of the client device 104 .
  • There may be any number of end-user client devices 104 and administrator client devices 105 associated with, or external to, the system 100 . Additionally, there may also be one or more additional client devices external to the illustrated portion of system 100 that are capable of interacting with the system 100 via the network 108 . Further, the terms "client," "client device," and "user" may be used interchangeably as appropriate without departing from the scope of this disclosure. Moreover, while each client device may be described in terms of being used by a single user, this disclosure contemplates that many users may use one computer, or that one user may use multiple computers.
  • FIG. 2 illustrates an example system 200 for an application with a standard database setup.
  • An application server 202 accesses a database 204 , when executing application requests received from client applications.
  • the database 204 can be a database container for a particular tenant, for example, or a database that includes data for multiple tenants.
  • the database 204 includes, for a particular tenant, a read-only table 212 named “TABR”, a writable table 214 named “TABW”, and a mixed table 216 named “TAB”.
  • the read-only table 212 includes vendor-delivered data, such as vendor code, character code pages, application documentation, central runtime and configuration data, and other vendor-provided data.
  • the tenant, or applications associated with the tenant, do not write or modify data in the read-only table 212 .
  • the read-only table 212 is read-only from a tenant application perspective.
  • the writable table 214 includes only tenant-specific data.
  • the writable table 214 is generally shipped empty and does not include vendor-delivered data. Content is only written into the writable table 214 by the tenant or applications associated with the tenant.
  • the writable table 214 can include business transaction data, for example.
  • the mixed table 216 includes both read-only records that are not modified by tenant applications and records that may be modified by tenant applications.
  • the mixed table 216 can include both vendor-delivered data and tenant-created data.
  • An example mixed table can be a documentation table that includes shipped documentation data, tenant-added documentation data, and documentation data that was provided by the vendor but subsequently modified by the tenant.
  • the mixed table 216 can include default text values (which may be customized by particular tenants) for use in user interface displays, in various languages.
  • the mixed table 216 is an extendable table that includes fields that have been added by a tenant application or customer.
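  • As an illustrative stand-in (not part of the patent) for this standard, single-container setup, the following sqlite3 snippet creates the three table types in one container; the read-only/writable/mixed classification is purely an application convention here.
```python
import sqlite3

db = sqlite3.connect(":memory:")  # one database container for the tenant
db.execute("CREATE TABLE TABR (k TEXT PRIMARY KEY, v TEXT)")  # read-only, vendor-shipped
db.execute("CREATE TABLE TABW (k TEXT PRIMARY KEY, v TEXT)")  # writable, shipped empty
db.execute("CREATE TABLE TAB  (k TEXT PRIMARY KEY, v TEXT)")  # mixed: shipped and tenant records
db.executemany("INSERT INTO TAB VALUES (?, ?)",
               [("100", "shipped documentation"), ("Z01", "tenant-added documentation")])
print(db.execute("SELECT * FROM TAB").fetchall())
```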
  • FIG. 3 illustrates an example non-multi-tenancy system 300 in which the same content is stored for multiple, different tenants in different database containers.
  • the system 300 includes applications 302 and 304 that use database interfaces 306 and 308 to access tables 310 and 312 in tenant database containers 314 and 316 , respectively.
  • Although the applications 302 and 304 and the database interfaces 306 and 308 are shown separately, in some implementations the applications 302 and 304 are a same application, and the database interfaces 306 and 308 are a same database interface, on a single application server.
  • the tables 310 and 312 are each mixed tables that include both records common to multiple tenants and records unique to (e.g., added by) a respective tenant.
  • both the table 310 and the table 312 include common records that were shipped by a vendor (e.g., records 318 a - 318 b , 320 a - 320 b , and 322 a - 322 b ).
  • These common records can be deployed to the tables 310 and 312 when a respective application 302 or 304 is deployed for a respective tenant.
  • the common records can be records that are not changed by respective applications. Storing the common records separately for each tenant results in an increase of storage and maintenance costs, as compared to storing common records in one shared location.
  • Each table 310 and 312 also includes records written by a respective tenant application 302 or 304 , for example, records 324 a and 324 b (which happen to have a same key), and records 326 and 328 and 330 , which are only in their respective tables.
  • FIG. 4A illustrates an example system 400 in which data for a tenant is split.
  • the system 400 can be used for content separation—the separation of shared content used by multiple tenants from tenant-specific data used respectively by individual tenants.
  • the system 400 includes a shared database container 402 , and a tenant database container 404 for a given tenant.
  • Table and view names are illustrative examples only; any table name and any table-naming scheme can be used.
  • the shared database container 402 includes shared content used by multiple tenants including the given tenant.
  • the shared content can include vendor-provided content and can enable the sharing of vendor-delivered data between multiple tenants.
  • shared content can also be stored in a shared database in general, or by using a shared database schema.
  • the shared database container 402 includes a TABR table 406 , corresponding to the read-only table 212 of FIG. 2 , that includes only read-only records.
  • the TABR table 406 is configured to be read-only and shareable, to the given tenant associated with the tenant database container 404 and to other tenants.
  • An application 408 running for the given tenant can submit queries that refer to the table name “TABR”.
  • a database interface (DBI) 410 can receive a query from an application and submit a query including the TABR table name to the tenant database container 404 .
  • the tenant database container 404 includes a TABR view 412 that can be used when the query is processed for read-only access to the TABR table 406 .
  • the TABR table 406 can be accessible from the tenant database container 404 using remote database access, for example.
  • each tenant can have their own database schema or container and can access the TABR table 406 using cross-schema access, cross-container access, or remote database access.
  • the tenant database container 404 includes a TABW table 414 , which in some instances corresponds to the writable table 214 of FIG. 2 .
  • the TABW table 414 can include non-shared, or tenant-specific, application data for the given tenant.
  • the TABW table 414 can be a table that is shipped empty, with records being added to the TABW table 414 for the given tenant in response to insert requests from the application 408 .
  • TABW table 414 may include an initial set of data that can be updated and modified by the tenant or in a tenant-specific manner.
  • An insert query submitted by the application 408 can include the TABW table name, and the DBI 410 can provide write access to the TABW table 414 , without the use of a view.
  • the application 408 can submit a query that includes a “TAB” table name that corresponds to the mixed table 216 of FIG. 2 .
  • records from the mixed table 216 can be split, to be included in either a read-only table 416 with name “/R/TAB” that is included in the shared database container 402 or a writable table 418 with name “/W/TAB” that is included in the tenant database container 404 .
  • the use and identification of the names “/R/TAB” and “/W/TAB” is discussed in more detail below.
  • the read-only table 416 can include records common to multiple tenants that had previously been included in multiple tenant tables for multiple tenants.
  • the read-only table 416 can be a shared repository that multiple tenants use to access the common data and records.
  • the writable table 418 includes records from the mixed table 216 that are specific to the given tenant associated with the tenant database container 404 .
  • a union view 420 with the same name (TAB) as the mixed table 216 provides a single point of access for the application 408 to the read-only table 416 and the writable table 418 .
  • the application 408 may have been previously configured, before implementation of multi-tenancy, to submit queries that include the “TAB” table name.
  • the application 408 can continue to submit queries using the original “TAB” table name after implementation of multi-tenancy, using a single logical table name for access to the mixed records collectively stored in the writable table 418 and the read-only table 416 .
  • the union view 420 provides a unified view on the mixed record data that hides, from the application 408 , details regarding which data is shared and which data is tenant-local.
  • a query performed on the union view 420 may return records from the read-only table 416 , the writable table 418 , or a combination of records from both tables, and the application 408 is unaware of the source of the records returned from the query.
  • the use of the union view 420 enables multi-tenancy to be compatible with existing applications such as the application 408 —e.g., the application 408 and other applications can continue to be used without modification. Such an approach avoids significant rewriting of applications as compared to applications being aware of both the writable table 418 and the read-only table 416 and needing modifications to query two tables instead of one table. Queries and views that include a reference to the mixed table can continue to be used without modification.
  • the use of the union view 420 enables the application 408 to access the data split into the writable table 418 and the read-only table 416 using a single query.
  • the DBI 410 can be configured to determine whether a query that includes the TAB table name is a read query or a write query. If the query is a read query, the DBI 410 can submit the read query to the tenant database container 404 , for a read operation on the union view 420 .
  • the union view 420 provides unchanged read access to the joint data from the writable table 418 and the read-only table 416 .
  • the DBI 410 can, before submitting the query to the tenant database container 404 , automatically and transparently (from the perspective of the application 408 ) perform a write intercept operation, which can include changing a TAB reference in the query to a “/W/TAB” reference, which can result in write operations being performed on tenant-local data in the writable table 418 instead of the union view 420 .
  • Write queries for the mixed table can be submitted, unchanged, by the application 408 , since write access is redirected to the writable table 418 .
  • the union view 420 can be configured to be read-only so that a write operation would be rejected if it was attempted to be performed on the union view 420 .
  • a write operation may be ambiguous, as to which of the writable table 418 or the read-only table 416 should be written to, if write queries were allowed to be received for the union view 420 .
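  • A simplified, hypothetical sketch of this write-intercept step is shown below: read queries keep the logical name and are served by the union view, while write queries on a mixed table are rewritten to target its tenant-writable physical table. The regex-based rewrite is only an illustration, not the actual DBI logic.
```python
import re

MIXED_TABLES = {"TAB", "DOKTL"}          # logical names of split (mixed) tables
WRITE_VERBS = ("INSERT", "UPDATE", "DELETE")

def redirect_write(sql: str, tenant_prefix: str = "/W/") -> str:
    """Illustrative rewrite: writes on a mixed table target its writable part."""
    if not sql.lstrip().upper().startswith(WRITE_VERBS):
        return sql  # read queries keep the logical name and hit the union view
    for table in MIXED_TABLES:
        sql = re.sub(rf'\b{table}\b', f'"{tenant_prefix}{table}"', sql)
    return sql

print(redirect_write("SELECT * FROM TAB WHERE KF1 = 'Z1'"))
# SELECT * FROM TAB WHERE KF1 = 'Z1'
print(redirect_write("INSERT INTO TAB (KF1, DATA1) VALUES ('Z1', 'x')"))
# INSERT INTO "/W/TAB" (KF1, DATA1) VALUES ('Z1', 'x')
```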
  • the storing of shared content in the TABR table 406 and the read-only table 416 can result in a reduced memory footprint as compared to storing common data separately for each tenant.
  • Storing common data in a shared location can reduce resource consumption during lifecycle management procedures and simplify those procedures.
  • Lifecycle management can include application development, assembly, transport, installation, and maintenance.
  • Storing common data in one location can simplify software change management, patching, and software upgrades.
  • FIG. 4B illustrates an example multi-tenancy system 440 that includes multiple tables of each of multiple table types.
  • a database system can have multiple tables of each of the read-only, writable, and mixed table types.
  • as indicated by table metadata 441 , tables "TABR", "TCP00", and "TCP01" are read-only tables, tables "TAB" and "DOKTL" are mixed tables, and tables "TABW", "ACDOCA", and "MATDOC" are read/write (e.g., writable) tables.
  • Table metadata can exist in a shared database container 442 and/or can exist in a tenant database container 443 , as illustrated by metadata 444 .
  • Implementation of multi-tenancy can result in the inclusion of the read-only tables in the shared database container 442 , as illustrated by read-only tables 445 , 446 , and 448 .
  • Read-only views 450 , 452 , and 454 can be created in the tenant database container 443 for the read-only tables 445 , 446 , and 448 , respectively, to provide read access for an application 456 .
  • Implementation of multi-tenancy can result in the inclusion of writable tables in the tenant database container 443 , as illustrated by writable tables 458 , 460 , and 462 .
  • Each mixed table can be split into a read-only table in the shared database container 442 and a writable table in the tenant database container 443 .
  • a read-only table “/R/TAB” 464 and a writable table “/W/TAB” 466 replace the mixed table “TAB”.
  • a read-only table “/R/DOKTL” 468 and a writable table “/W/DOKTL” 470 replace the mixed table “DOKTL”.
  • a deployment tool automatically generates names for the read-only and writable tables that replace a mixed table.
  • a generated name can include a prefix that is prepended to the mixed table name. Prefixes can be predetermined (e.g., "/R/", "/W/") or can be identified using a prefix lookup.
  • APIs getSharedPrefix 472 and getTenantPrefix 474 can be invoked and can return “/R/” for a shared prefix and “/W/” for a writable (e.g., tenant) prefix, respectively (or other character strings).
  • the APIs 472 and 474 can look up a respective prefix in a preconfigured table, for example.
  • a different naming scheme can be used that relies on suffixes or some other method to generate table names.
  • other APIs can generate and return a full shared table name or a full writable table name, rather than a shared or tenant prefix.
  • a union view is created in the tenant database container 443 that provides a single point of access to the application 456 to records in the read-only table and the writable table corresponding to the mixed table.
  • a union view 476 provides unified access to the read-only table 464 and the writable table 466 .
  • a union view 478 provides unified access to the read-only table 468 and the writable table 470 .
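  • The snippet below sketches how a deployment tool might derive the physical table names and emit the union-view DDL for a mixed table; get_shared_prefix and get_tenant_prefix mirror the getSharedPrefix and getTenantPrefix APIs mentioned above, while the DDL generator and the SHARED schema qualifier are assumptions made for illustration.
```python
def get_shared_prefix() -> str:   # mirrors getSharedPrefix 472
    return "/R/"

def get_tenant_prefix() -> str:   # mirrors getTenantPrefix 474
    return "/W/"

def union_view_ddl(mixed_table: str, columns: list) -> str:
    """Emit union-view DDL for a mixed table (SHARED schema name is assumed)."""
    shared_table = get_shared_prefix() + mixed_table  # e.g. /R/TAB in the shared container
    tenant_table = get_tenant_prefix() + mixed_table  # e.g. /W/TAB in the tenant container
    cols = ", ".join(columns)
    return (f'CREATE VIEW "{mixed_table}" AS '
            f'SELECT {cols} FROM SHARED."{shared_table}" '
            f'UNION ALL '
            f'SELECT {cols} FROM "{tenant_table}"')

print(union_view_ddl("TAB", ["KF1", "KF2", "DATA1", "DATA2"]))
print(union_view_ddl("DOKTL", ["KF1", "KF2", "DATA1", "DATA2"]))
```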
  • FIG. 4C illustrates an example multi-tenancy system 480 that uses a suffix table naming scheme.
  • read-only tables 484 , 485 , 486 , and 487 included in a shared database container 488 can include a suffix that enables the storing of several versions of a table.
  • a read-only view 489 provides read access to the read-only table 485 , which is a currently-configured version (e.g., “TABR #2”) of a given read-only table. To gain access to a different version (e.g., “TABR #1”) of the given read-only table, the read-only view 489 can be reconfigured to be associated with the read-only table 487 .
  • Multiple versions of a table can be used during deployment of an upgrade, as described in more detail below.
  • a read-only view 492 can be included in a tenant database container 494 , such as if an application 496 needs read access to shipped, read-only content that was included in a mixed table that is now stored in the read-only table 484 .
  • a union view 498 can provide unified access to the read-only view 492 and writable mixed-table records now included in a writable table 499 .
  • the read-only view 492 can be re-configured to access the table 486 that is a different version (e.g., “TAB #2”) of the read-only table 484 .
  • FIG. 5 illustrates an example system 500 that includes a shared database container 502 , a first tenant database container 504 for a first tenant, and a second tenant database container 506 for a second tenant.
  • First and second applications 508 and 510 handle application requests for the first tenant and the second tenant, respectively.
  • the first tenant and the second tenant can be served by separate application servers or a same application server, or by multiple application servers.
  • the shared database container 502 includes a shared read-only table 512 that includes read-only shipped records.
  • the shared read-only table 512 is made available as a shared table to the first and second tenants, and other tenants.
  • the first application 508 and the second application 510 can access the shared read-only table 512 using a view 514 or a view 516 , respectively.
  • the first application 508 and the second application 510 can have read, but not write access, to the shared read-only table 512 , through the view 514 or the view 516 , respectively.
  • the first tenant database container 504 and the second tenant database container 506 respectively include writable tables 518 or 520 .
  • the writable tables 518 and 520 are separate from one another and store records that have been respectively written by the application 508 or the application 510 .
  • the first tenant does not have access to the writable table 520 and correspondingly, the second tenant does not have access to the writable table 518 .
  • the shared database container 502 includes a shared read-only table 522 that stores shared read-only records that had been included in a mixed table.
  • Writable tables 524 and 526 included in the first tenant database container 504 and the second tenant database container 506 store mixed-table records that had been or will be added to the writable table 524 or the writable table 526 by the application 508 or the application 510 , respectively.
  • the writable tables 524 and 526 are separate from one another. The first tenant does not have access to the writable table 526 and correspondingly, the second tenant does not have access to the writable table 524 .
  • the application 508 can be provided a single point of access for the mixed-table records that are now split between the shared read-only table 522 and the writable table 524 using a union view 528 .
  • the application 510 can be provided a single point of access for the mixed-table records that are now split between the shared read-only table 522 and the writable table 526 using a union view 530 .
  • a write request for a TAB table submitted by the application 508 or the application 510 could be intercepted by a respective DBI and redirected to the writable table 524 or the writable table 526 , respectively.
  • FIG. 6 illustrates an example system 600 that includes a shared database container 602 , a first tenant database container 604 for a first tenant, and a second tenant database container 605 for a second tenant.
  • Applications 606 and 607 are configured to access a union view 608 or a union view 609 using a DBI 610 or a DBI 611 , respectively, to gain access to respective mixed tables.
  • the union views 608 and 609 respectively provide a single point of access for the application 606 or the application 607 to records previously stored in a mixed table named TAB (such as the mixed table 310 of FIG. 3 ).
  • the TAB table and the union views 608 and 609 include, as illustrated for the union view 608 , a first key field 612 , a second key field 614 , a first data field 616 , and a second data field 618 .
  • a primary key for the union view 608 (and consequently for the read-only table 620 and the writable tables 622 and 623 ) can include the first key field 612 and the second key field 614 .
  • the first key field 612 and/or the second key field 614 can be technical fields that are used by the database but not presented to end users.
  • the shared read-only table 620 includes read-only records shared with/common to multiple tenants.
  • the shared read-only table 620 includes records 624 , 626 , and 628 corresponding to the records 318 a - 318 b , 320 a - 320 b , and 322 a - 322 b of FIG. 3 .
  • the writable table 622 includes records specific to the first tenant, including records 630 and 632 that correspond to the records 324 a and 330 of FIG. 3 .
  • the writable table 623 includes records specific to the second tenant, including records 634 , 636 , and 638 that correspond to the records 324 b , 326 , and 328 of FIG. 3 .
  • a query from the application 606 to retrieve all records from the union view 608 can return the records 624 , 626 , 628 , 630 , and 632 .
  • a query from the application 607 to retrieve all records from the union view 609 can return the records 624 , 626 , 628 , 634 , 636 , and 638 .
  • the records 630 and 632 are not accessible by the second tenant.
  • the records 634 , 636 , and 638 are not accessible by the first tenant.
  • FIG. 7 illustrates a system 700 for constraint enforcement.
  • the system 700 includes a shared database container 702 and a tenant database container 704 .
  • a mixed table named “TAB” has been split into a read-only table 706 (“/R/TAB”) in the shared database container 702 and a writable table 708 (“/W/TAB”) in the tenant database container 704 .
  • a record in the read-only table 706 that was initially provided by a vendor can have a same key as a record in the writable table 708 that was written by a tenant application.
  • the vendor can deploy, post-installation, a record to the read-only table 706 that already exists as a tenant-written record in the writable table 708 .
  • an application 710 may be configured to submit, using a DBI 712 , a select query against the “TAB” table with a restriction on primary key field(s), with the query designed to either return one record (e.g., if a record matching the primary key restriction is found) or no records (e.g., if no records matching the primary key restriction are found).
  • a select query may return two records, since the query may be executed on a union view 714 named "TAB" that provides unified access to the read-only table 706 and the writable table 708 .
  • the application 710 may not be properly configured to handle such a situation, and an error condition, undesirable application behavior, and/or undesirable data modifications may occur.
  • the application 710 may submit a delete query, with a restriction on primary key fields, with an expectation that the query uniquely identifies a record to delete.
  • the restriction on the delete query may match two records when applied to the union view 714 , so an ambiguity may exist as to which record to delete.
  • a key pattern can be identified that describes records that can be written by the application 710 and thereby exist in the writable table 708 .
  • a key value convention may exist, such that shipped records in the read-only table 706 have a particular key pattern, such as a first range of key values, and application-added records have a different key pattern, such as a second, different range of key values.
  • shipped records may have a key value that includes a particular prefix
  • tenant-added records can be added using a key value that includes a different prefix.
  • Key value conventions can be used to define different key value spaces—a first key value space for shipped records and a second, different key value space for tenant records, for example.
  • a tenant keys table 716 can be used to define key patterns.
  • a row 718 in the tenant keys table 716 includes a value of “TAB” for a table name column 720 , which indicates that a key pattern is being defined for the union view 714 (and for application requests that include a “TAB” table reference).
  • the row 718 includes a value of “A” (for “Active”) in an active/inactive column 722 , indicating that a key pattern for the “TAB” table is active. Active and inactive key patterns are described in more detail below.
  • a value of “KF1 LIKE Z %” in the record 718 for a WHERE clause column 724 defines a key pattern for the “TAB” table.
  • the key pattern describes a pattern for keys of records that are included in the writable table 708 (e.g., the key pattern indicates that records in the writable table 708 should have keys that start with “Z”).
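  • A minimal sketch of how a tenant keys table such as the tenant keys table 716 could be declared and populated (the column names mirror the table name column 720 , the active/inactive column 722 , and the WHERE clause column 724 ; the types and lengths are assumptions):

        CREATE TABLE "TENANT_KEYS" (
          TABNAME      VARCHAR(30)  NOT NULL,  -- table/view the key pattern applies to
          ACTIVE       CHAR(1)      NOT NULL,  -- 'A' = active, 'I' = inactive
          WHERE_CLAUSE VARCHAR(255) NOT NULL   -- key pattern for tenant-writable records
        );

        -- Row 718: tenant-written records in "/W/TAB" must have KF1 values starting with 'Z'
        INSERT INTO "TENANT_KEYS" (TABNAME, ACTIVE, WHERE_CLAUSE)
          VALUES ('TAB', 'A', 'KF1 LIKE ''Z%''');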
  • a complement of the key pattern, e.g., "NOT KF1 LIKE Z %" (i.e., records that have keys that do not start with "Z"), describes a pattern for records in the read-only table 706 .
  • the DBI 712 can use the key pattern to ensure that the keys of records stored in the writable table 708 are disjoint from the keys of records stored in the read-only table 706 .
  • the DBI 712 can be configured to prohibit duplicate records by examining write queries (e.g., update, insert, delete queries) received from the application 710 for the “TAB” table, accepting (and executing) queries (e.g., using a redirect write, on the writable table 708 , as described above) that are consistent with the key pattern, and rejecting queries that are inconsistent with the key pattern.
  • An inconsistent query would add or modify a record in the writable table 708 so that the record does not match the key pattern.
  • the DBI 712 can be configured to reject (and possibly issue a runtime error against) such inconsistent queries during a key-pattern check to ensure that write queries are only applied to the writable table 708 and not the read-only table 706 .
  • the key pattern check can be performed elsewhere, such as by an additional table constraint object applied to the writable table 708 and/or the read-only table 706 , a database trigger, or some other database component.
  • the DBI 712 can be configured to examine complex queries, such as queries that refer to ranges of values, to ensure that modifications adhere to the key pattern definition.
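  • As one hedged illustration of the table-constraint alternative mentioned above, the key pattern and its complement could be expressed as check constraints (whether a particular database supports check constraints of this form is an assumption):

        -- Tenant-written records must match the key pattern ...
        ALTER TABLE "/W/TAB"
          ADD CONSTRAINT TENANT_KEY_PATTERN CHECK (KF1 LIKE 'Z%');

        -- ... and shared records must match its complement.
        ALTER TABLE "/R/TAB"
          ADD CONSTRAINT SHARED_KEY_PATTERN CHECK (NOT (KF1 LIKE 'Z%'));

    With disjoint constraints on both tables, no key can legally exist in both tables, which is the same guarantee the DBI key-pattern check provides.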
  • although the tenant keys table 716 is illustrated as being included in the tenant database container 704 , tenant key definitions can also, or alternatively, exist in the shared database container 702 , as illustrated by a tenant keys table 726 .
  • Tenant key definitions can exist in the shared database container 702 so that the application 710 or a tenant user is not able to change the tenant key definitions.
  • a view (not shown) can be included in the tenant database container 704 to provide read access to the tenant key table 726 , for example.
  • when tenant keys are included in the shared database container 702 , tenant key definitions can be shared with multiple tenants if the multiple tenants each have a same key pattern definition. If some tenants have different key pattern definitions, tenant key definitions included in the shared database container 702 can be associated with particular tenant(s) (e.g., using a tenant identifier column or some other identifier).
  • a key pattern can be advantageous as compared to other alternate approaches to a duplicate record issue, such as an overlay approach that allows for duplicate records.
  • in an overlay approach, more complex union views (as compared to the union view 714 ) can be used that involve the selection of one record among multiple records with a same key across the writable table 708 and the read-only table 706 using a priority algorithm.
  • allowing duplicate records can also result in a select query returning a record that has a same key as a record that was just deleted (e.g., the delete may have deleted one but not both of the duplicate records stored across different tables).
  • An approach can be used to store local deletes so as to later filter out shared data that has been deleted locally, but that approach adds complexity and may impact performance. Additionally, an upgrade process may include complications if the shared content is updated, since the tenant content may have to be analyzed for duplicate records and a decision may have to be made regarding whether a tenant-local record is to be removed due to a conflict with new shipped content.
  • As another alternative, the system 700 could perform a check against the read-only table after every change operation in the writable table.
  • Such an approach may result in an unacceptable performance degradation.
  • the use of a key pattern, instead of these alternative approaches, can avoid complexities and performance issues.
  • the key pattern can be used, during initial system deployment, to split mixed table data according to the key pattern definition.
  • the system 700 can ensure that no content in the read-only table 706 matches the key pattern that defines data included in the writable table 708 .
  • the system 700 can ensure that no content is included in the writable table 708 that does not match the key pattern. Key patterns can be used during other lifecycle phases, as described below.
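  • A minimal sketch of that initial split for a mixed table named TAB, under the example key pattern "KF1 LIKE Z %" (the exact sequence of statements is an assumption; the transition flows described with FIGS. 14 and 16 below show concrete variants):

        -- Tenant-pattern records go to the writable table ...
        INSERT INTO "/W/TAB" SELECT * FROM "TAB" WHERE KF1 LIKE 'Z%';
        -- ... shared records go to the read-only table ...
        INSERT INTO "/R/TAB" SELECT * FROM "TAB" WHERE NOT (KF1 LIKE 'Z%');
        -- ... and both tables can be verified against the pattern afterwards.
        SELECT COUNT(*) FROM "/R/TAB" WHERE KF1 LIKE 'Z%';        -- expected: 0
        SELECT COUNT(*) FROM "/W/TAB" WHERE NOT (KF1 LIKE 'Z%');  -- expected: 0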
  • FIG. 8 illustrates an example system 800 for deploying content in accordance with configured tenant keys.
  • key pattern definitions are enforced to make sure that tenants do not write data that conflicts with currently shared data or with data that might be delivered for sharing in the future.
  • key pattern definitions are enforced throughout other phases of the system lifecycle, such as data deployment.
  • when new content or content updates are shipped by the vendor, such as during an update or upgrade, content separation and key enforcement are taken into account to ensure that vendor deliveries to a shared container during a software lifecycle event do not create conflicts with data that was created in a tenant container.
  • a file 802 containing new records to be deployed to the system 800 can be provided to a content deployment tool 804 and a content deployment tool 806 , for deployment to a shared database container 808 and a tenant database container 810 , respectively.
  • the file 802 may include records to be added to the system 800 as a result of a new version of an application or database, for example.
  • the content deployment tools 804 and 806 can use a DBI 812 or a DBI 814 , respectively, to write content to the shared database container 808 or the tenant database container 810 , respectively.
  • the content deployment tools 804 and 806 are the same tool and/or the DBIs 812 and 814 are the same interface.
  • the content deployment tool 804 can read, using the DBI 812 , a WHERE clause 816 for a read-only “/R/TAB” table 818 associated with a “TAB” mixed table from a tenant keys table 820 .
  • the WHERE clause 816 describes a pattern of keys that exist in a “/W/TAB” writable table 822 in the tenant database container 810 , the writable table 822 also associated with the “TAB” mixed table.
  • the content deployment tool 804 can determine which records in the file 802 do not match the WHERE clause 816 , and can, using the DBI 812 , write the records from the file 802 that do not match the WHERE clause 816 to the read-only table 818 , as indicated by note 824 .
  • the records that do not match the WHERE clause 816 can be records that are to be shared among tenants and not modified by respective tenants.
  • a record with a value of “ww” for a “KF1” key column 828 can be read by the content deployment tool 804 from the file 802 and written to the read-only table 818 , based on the “ww” key value not matching the WHERE clause 816 of “KF1 like Z %”.
  • the DBI 812 and/or the read-only table 818 can be configured to allow the writing of content by the content deployment tool 804 to the read-only table 818 , even though the read-only table 818 is read-only with respect to requests received by a DBI 830 from an application 832 .
  • the DBI 830 and/or a union view 834 can be configured to allow read but not write requests for the read-only table 818 (through the union view 834 ), for example.
  • the DBI 830 can be the same or a different DBI as the DBI 812 and/or the DBI 814 .
  • the content deployment tool 806 can read, using the DBI 814 , a WHERE clause 836 for the writable “/W/TAB” table 822 associated with the “TAB” mixed table from a tenant keys table 838 .
  • the tenant keys table 838 may be the same table as the tenant keys table 820 , and may exist in the shared database container 808 , the tenant database container 810 , or in another location.
  • a separate read of the WHERE clause 836 may not be performed since the WHERE clause 816 may have already been read and can be used by the content deployment tool 806 .
  • the WHERE clause 836 describes a pattern of keys that exist in the “/W/TAB” writable table 822 .
  • the content deployment tool 806 can determine which records in the file 802 match the WHERE clause 836 , and can write the records from the file 802 that match the WHERE clause 836 to the writable table 822 , as indicated by note 840 .
  • a record with a key value of “zz” can be written to the writable table 822 , based on the “zz” key value matching the WHERE clause 836 .
  • Records in the file 802 that match the WHERE clause 836 can be records that may be later modified by the tenant associated with the tenant container 810 .
  • the file 802 can include data to be written to both the read-only table 818 and the writable table 822 , as described above.
  • the content deployment tool 804 and/or the content deployment tool 806 can create two files for content delivery—e.g., one file for the writable table 822 and one file for the read-only table 818 .
  • the content deployment tool 806 can either ignore records in a file for the writable table 822 that do not match the key pattern or can issue an error for such records.
  • the content deployment tool 804 can either ignore records in a file for the read-only table 818 that match the key pattern or can issue an error for such records.
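  • Assuming, purely for illustration, that the records from the file 802 have first been loaded into a hypothetical staging table, the split performed by the content deployment tools could be sketched as:

        -- Shared deployment (content deployment tool 804): records that do NOT match the tenant key pattern
        INSERT INTO "/R/TAB"
          SELECT * FROM "DEPLOY_STAGING" WHERE NOT (KF1 LIKE 'Z%');   -- "DEPLOY_STAGING" is hypothetical

        -- Tenant deployment (content deployment tool 806): records that DO match the tenant key pattern
        INSERT INTO "/W/TAB"
          SELECT * FROM "DEPLOY_STAGING" WHERE KF1 LIKE 'Z%';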
  • Content deployment is described in more detail below, in other sections.
  • FIG. 9 illustrates an example system 900 for changing tenant keys.
  • Tenant keys may be changed, for example, when a new version of an application and/or database is released.
  • An application developer may, for example, change the range of key values that may be written by a tenant application.
  • a database system may have detected, during execution of a current or prior version of an application, attempts to write records with keys not matching a current key pattern. A developer or an administrator may review a log of such attempts and determine to allow the writing of records with such keys in the future.
  • a current record 904 in a tenant keys table 906 in a tenant database container 907 has a value 908 of “A” (for “active”), which indicates that a WHERE clause 910 in the current record 904 is a currently-configured description of key values for records in the writable table 902 .
  • the WHERE clause 910 of “KF1 LIKE Z %” indicates that key values in the writable table 902 start with the letter “Z”.
  • An administrator may desire to change the tenant key table 906 so that records having key values beginning with “Z” or “Y” are allowed in the writable table 902 .
  • a file 912 (or other electronic data input) including a new WHERE clause can be provided to a constraint changing tool 914 .
  • the constraint changing tool 914 can, using a DBI 916 , add a record 918 to the tenant keys table that includes the new WHERE clause included in the file 912 .
  • a new WHERE clause 920 of “KF1 LIKE Z % OR KF1 LIKE Y %” is included in the added record 918 .
  • the added record 918 includes an active/inactive value 922 of “I” for “inactive”.
  • the added record 918 can be marked as active after the writable table 902 and a read-only table 924 in a shared database container 926 have been updated to be in accordance with the new WHERE clause 920 .
  • tenant keys can exist in the tenant database container 907 (as illustrated by the tenant keys table 906 ) and/or in the shared database container 926 (as illustrated by a tenant keys table 928 ).
  • a constraint changing tool 930 (which can be the same or a different tool as the constraint changing tool 914 ) can use a DBI 931 to add a new record 932 with a new WHERE clause to the tenant keys table 928 , as described above for the added record 918 .
  • the DBI 931 can be the same or a different interface as the DBI 916 .
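  • Staging the replacement key pattern as an inactive row (as in the added record 918 ) could be sketched as follows, reusing the illustrative TENANT_KEYS columns from the earlier sketch:

        -- The broader pattern is staged as inactive ('I') until both tables have been adjusted
        INSERT INTO "TENANT_KEYS" (TABNAME, ACTIVE, WHERE_CLAUSE)
          VALUES ('TAB', 'I', 'KF1 LIKE ''Z%'' OR KF1 LIKE ''Y%''');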
  • FIG. 10 illustrates an example system 1000 for updating database records to comply with updated tenant keys.
  • the updated tenant keys are described by a new WHERE clause 1002 included in an inactive record 1004 included in a tenant keys table 1006 .
  • the inactive record 1004 is a replacement record for an active tenant keys record 1008 .
  • a constraint changing tool 1010 can update records in a read-only table 1012 in a shared database container 1014 and a writable table 1015 in a tenant database container 1016 to comply with the new WHERE clause 1002 .
  • the constraint changing tool 1010 can use a DBI 1020 to read the new WHERE clause 1002 from the inactive tenant keys record 1004 (e.g., as illustrated by note 1022 ).
  • the constraint changing tool 1010 can use the DBI 1020 to delete records from the read-only table 1012 that match the new WHERE clause 1002 .
  • a record with a key value of "YY" (e.g., a record that was included in the read-only table 924 of FIG. 9 ) can, for example, be deleted from the read-only table 1012 because it matches the new WHERE clause 1002 .
  • the record with key value of “YY” may have been previously allowed to be in the read-only table 924 due to the record not matching a previous WHERE clause of “KF1 LIKE Z %” included in the active tenant keys record 1008 , for example.
  • a constraint changing tool 1026 (which can be the same as or different from the constraint changing tool 1010 ) can use a DBI 1029 (which can be the same as or different from the DBI 1020 ) to delete records from the writable table 1015 that do not match the WHERE clause 1002 .
  • the constraint changing tool 1026 can read the WHERE clause 1002 from the tenant keys table 1006 or can read a WHERE clause 1030 from an inactive tenant keys record 1032 in a tenant keys table 1034 in the tenant database container 1016 .
  • the WHERE clause 1030 describes a key pattern of keys starting with “Z” or “Y”.
  • the writable table 1015 is the same as the writable table 902 of FIG. 9 (e.g., no records have been deleted) since both records in the writable table 1015 have keys that start with “Z” (e.g., there are no records in the writable table 902 that do not match the WHERE clause 1030 ).
  • the constraint changing tool 1010 can read a file 1036 that includes information indicating data to be moved between the read-only table 1012 and the writable table 1015 , to complete updates to the system 1000 for compliance with the updated tenant keys. Processing of the file 1036 is described in more detail below.
  • the constraint changing tool 1010 can query the read-only table 1012 and/or the writable table 1015 to extract records to be moved. For example, the constraint changing tool 1010 can submit a query of “insert into /W/TAB (select * from /R/TAB where (KF1 LIKE Z % OR KF1 LIKE Y %))”, to move records from the read-only table 1012 to the writable table 1015 that match the new WHERE clause 1002 .
  • the constraint changing tool 1010 can submit a query of “insert into /R/TAB (select * from /W/TAB where not (KF1 LIKE Z % OR KF1 LIKE Y %))”, to move records from the writable table 1015 to the read-only table 1012 that do not match the new WHERE clause 1002 .
  • content is not selected from the writable table 1015 for inclusion in the read-only table 1012 , since the tenant may have modified the data in the writable table 1015 .
  • FIG. 11 illustrates an example system 1100 for updating database records to comply with updated tenant keys using a transfer file 1102 .
  • the transfer file 1102 corresponds to the file 1036 and includes data to be moved between a read-only table 1104 in a shared database container 1106 and a writable table 1108 in a tenant database container 1110 .
  • a constraint changing tool 1112 can read records from the transfer file 1102 that do not match a WHERE clause 1114 included in an inactive record 1116 in a tenant keys table 1118 .
  • the constraint changing tool 1112 can use a DBI 1120 to deploy the records from the transfer file 1102 that do not match the WHERE clause 1114 to the read-only table 1104 .
  • there are no records in the transfer file 1102 that do not match the WHERE clause 1114 so no new records are deployed to the read-only table 1104 .
  • a constraint changing tool 1122 (which can be the same as or different from the constraint changing tool 1112 ) can read records from the transfer file 1102 that match the WHERE clause 1114 .
  • the constraint changing tool 1122 can read the WHERE clause 1114 from the tenant keys table 1118 or can read a WHERE clause 1124 from an inactive tenant keys record 1126 in a tenant keys table 1128 in the tenant database container 1110 .
  • the constraint changing tool 1122 can use a DBI 1130 (which can be the same as or different from the DBI 1120 ) to deploy the records from the transfer file 1102 that match the WHERE clause 1114 to the writable table 1108 .
  • a record with a key value of “YY” (that matches the WHERE clause 1114 ) is included in the transfer file 1102 , and is deployed to the writable table 1108 , as illustrated by a record 1132 and note 1134 .
  • the inactive record 1116 is changed to be an active record in the tenant keys table 1118 , as described below.
  • FIG. 12 illustrates an example system 1200 for updating an inactive tenant keys record.
  • a constraint changing tool 1202 can update a tenant keys table 1204 in a shared database container 1206 .
  • a constraint changing tool 1208 makes similar changes to a tenant keys table 1210 in a tenant database container 1212 .
  • the constraint changing tool 1202 can submit a delete query 1214 to a DBI 1216 to delete one or more active entries in the tenant keys table 1204 .
  • an empty (deleted) entry 1218 represents a now-deleted active tenant keys record 1008 of FIG. 10 .
  • the constraint changing tool 1202 can submit an update query 1219 to the DBI 1216 to change a previously inactive tenant keys record (e.g., the inactive tenant keys record 1004 of FIG. 10 ) to be an active tenant keys record, as illustrated by an updated tenant keys record 1220 that includes a value of “A” for “Active”.
  • An inactive tenant keys record may be marked as inactive during a deployment process, for example, and may be marked as active when the deployment process has completed.
  • tenant applications can write new records that match a WHERE clause 1222 included in the now active record. For example, a tenant application can write a record with a key value of “Y1” to a writable table 1224 in the tenant database container 1212 , as illustrated by a new record 1226 and note 1228 . Updating of tenant keys, along with other types of deployment changes, is described in more detail below.
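  • The switch-over described above (the delete query 1214 followed by the update query 1219 ) could be sketched as the following pair of statements, again against the illustrative TENANT_KEYS columns:

        -- Remove the old active key pattern for TAB ...
        DELETE FROM "TENANT_KEYS" WHERE TABNAME = 'TAB' AND ACTIVE = 'A';
        -- ... then promote the previously inactive replacement pattern to active.
        UPDATE "TENANT_KEYS" SET ACTIVE = 'A' WHERE TABNAME = 'TAB' AND ACTIVE = 'I';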
  • Different system sharing types can be supported, such as a standard system setup in which multi-tenancy is not implemented and a shared/tenant setup in which multi-tenancy is implemented. Transitions between system sharing types can be supported, with a change in the system sharing type being transparent to applications.
  • FIG. 13A illustrates an example system 1300 that includes a standard system 1302 with a standard system-sharing type and a shared/tenant system 1304 with a shared/tenant system-sharing type.
  • the standard system 1302 includes a read-only table “TABR” 1306 , a writable table “TABW” 1308 , and a read-only with local-write table “TAB” 1310 , all included in a single database container 1312 .
  • a deployment tool 1314 can deploy data to each of the tables 1306 , 1308 , and 1310 .
  • the tables 1306 , 1308 , and 1310 are illustrative.
  • a standard system-sharing type system can include other combinations of tables of different table types, including multiple instances of tables of a given type.
  • the standard system-sharing type system 1302 can include multiple read-only tables, multiple writable tables, and/or multiple read-only with local-write tables.
  • the shared/tenant system 1304 includes a shared database container 1316 and a tenant database container 1318 .
  • the shared database container 1316 includes a read-only table 1320 that corresponds to the read-only table 1306 and the tenant database container 1318 includes a writable table 1322 that corresponds to the writable table 1308 .
  • a read-only table 1324 in the shared database container 1316 and a writable table 1326 in the tenant database container 1318 correspond to the read-only with local-write table 1310 .
  • a view 1328 provides read access to the read-only table 1320 and a union view 1330 provides unified access to the read-only table 1324 and the writable table 1326 .
  • a deployment tool 1332 can deploy data to the read-only table 1320 and the read-only table 1324 included in the shared database container 1316 .
  • a deployment tool 1334 can deploy data to the writable table 1322 and the writable table 1326 included in the tenant database container 1318 .
  • the deployment tool 1332 and the deployment tool 1334 are the same tool.
  • FIG. 13B is a table 1350 that illustrates processing that can be performed for standard 1352 , shared 1354 , and tenant 1356 database containers.
  • Types of processing in a multi-tenant system can include database (DB) object creation 1358 , DB content deployment 1360 , and write operations by application(s) 1362 .
  • read-only (RO), writable (RW), and mixed (RO+WL) tables can be created in a standard database container 1352 .
  • a cell 1366 indicates that only shareable objects, such as a read-only table, or a read-only portion of a mixed table (e.g., the read-only table created when the mixed table is split), are created in a shared container 1354 .
  • a cell 1368 indicates that local tables (e.g., local to a given tenant) are created in a tenant database container 1356 .
  • the tenant database container 1356 can include a writable table (RW) and a writable portion of a mixed table (e.g., RO+WL, with name /W/TAB, such as the writable table created when the mixed table is split).
  • the tenant container 1356 can also include a view to the read-only table in the shared container 1354 , and a union view on the read-only and writable portions of a mixed table.
  • a cell 1370 indicates that a deployment tool can deploy content to all tables included in a standard database container 1352 .
  • the deployment tool can deploy content to shared tables (e.g., a read-only table or a read-only portion of a mixed table) in a shared database container 1354 , as indicated by a cell 1372 .
  • a cell 1374 indicates that the deployment tool can deploy content to local tables in a tenant database container 1356 .
  • Deployment to a mixed table can include redirection of table writes to the writable portion of the mixed table.
  • Tenant applications can write to all objects in a standard database container 1352 (e.g., as described in a cell 1376 ).
  • a cell 1378 indicates that tenant applications are not allowed to write to tables in a shared database container 1354 .
  • a cell 1380 indicates that tenant applications can write content to local tables in a tenant database container 1356 , including a writable table and a writable portion of a mixed table. Application writes on a mixed table can be redirected to the writable portion of the mixed table.
  • FIG. 14 illustrates a system 1400 for transitioning from a standard system 1401 to a shared/tenant system 1402 .
  • the standard system 1401 includes a database container 1403 that includes a read-only table 1404 , a writable table 1405 , and a mixed table 1406 .
  • the database container 1403 can be associated with a tenant and for purposes of discussion has a name of “tenant”.
  • a transition can be performed to transition the standard system 1401 of the tenant to the shared/tenant system 1402 , as described by a flowchart 1407 .
  • a shared database container 1410 is created, for inclusion in the shared/tenant system 1402 .
  • the database container 1403 included in the standard system 1401 can be used as a tenant database container 1414 in the shared/tenant system 1402 . That is, the database container 1403 is a pre-transition illustration and the tenant database container 1414 is a post-transition illustration of a tenant database container used for the tenant.
  • access to the shared database container 1410 is granted to a tenant database user associated with the tenant.
  • a read only table 1420 (e.g., with a path/name of “shared./R/TABR”) is created in the shared database container 1410 .
  • data is copied from the read-only table 1404 included in the database container 1403 (e.g., a table object with a path/name of “tenant.TABR”) to the read-only table 1420 (e.g., “shared./R/TABR”).
  • the read-only table 1404 (e.g., “tenant.TABR”) is dropped. Accordingly, the read-only table 1404 is not included in the tenant database container 1414 at the end of the transition.
  • a view 1428 (e.g., “tenant.TABR”) is created in the tenant database container 1414 , to provide read access to the read-only table 1420 .
  • a read-only table 1432 (e.g., “shared./R/TAB”) is created in the shared database container 1410 .
  • data that does not match key patterns defined for tenant content is copied from the mixed table 1406 (e.g., “tenant.TAB”) to the read-only table 1432 (e.g., “shared./R/TAB”).
  • data that is to be shared among tenants and that is not tenant-specific is copied from the mixed table 1406 to the read-only table 1432 in the shared database container 1410 .
  • the data that does not match key patterns defined for tenant content (e.g., data that was copied in operation 1434 ) is deleted from the mixed table 1406 (e.g., “tenant.TAB”).
  • the mixed table 1406 (e.g., “tenant.TAB”) is renamed to “tenant./W/TAB”, for inclusion in the tenant database container 1414 as a writable table 1440 , for storing tenant-specific content.
  • the records that remain in the writable table 1440 should be records that match key patterns defined for tenant content.
  • the writable table 1405 is included, unmodified, in the tenant database container 1414 , as a writable table 1442 , for storing tenant content post transition.
  • a union view 1446 (e.g., “tenant.TAB”) is created, on the read-only table 1432 (e.g., “shared./R/TAB”) and the writable table 1440 (e.g., “tenant./W/TAB”), to provide unified access to the read-only table 1432 and the writable table 1440 .
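  • A hedged sketch of the flowchart 1407 sequence, expressed as SQL against containers named tenant and shared (cross-container syntax, privileges, and the rename statement vary by database and are assumptions here):

        -- Read-only table: create in the shared container, copy, drop the original, create a view
        CREATE TABLE "SHARED"."/R/TABR" LIKE "TENANT"."TABR";      -- structure-only copy; syntax is an assumption
        INSERT INTO "SHARED"."/R/TABR" SELECT * FROM "TENANT"."TABR";
        DROP TABLE "TENANT"."TABR";
        CREATE VIEW "TENANT"."TABR" AS SELECT * FROM "SHARED"."/R/TABR";

        -- Mixed table: move the shared part, keep the tenant part, add the union view
        CREATE TABLE "SHARED"."/R/TAB" LIKE "TENANT"."TAB";
        INSERT INTO "SHARED"."/R/TAB" SELECT * FROM "TENANT"."TAB" WHERE NOT (KF1 LIKE 'Z%');
        DELETE FROM "TENANT"."TAB" WHERE NOT (KF1 LIKE 'Z%');
        RENAME TABLE "TENANT"."TAB" TO "TENANT"."/W/TAB";          -- rename syntax is an assumption
        CREATE VIEW "TENANT"."TAB" AS
          SELECT * FROM "SHARED"."/R/TAB" UNION ALL SELECT * FROM "TENANT"."/W/TAB";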
  • the transition from the standard system 1401 directly to the shared/tenant system 1402 can, due to cross-database-container access, data movement, and other issues, take more time than is desired in some instances.
  • a database object cannot simply be renamed to move the database object from one database container to another database container.
  • the changing of which tables are read-only, mixed, or writable, and changing of key patterns, can result in data and table movement.
  • the changing of a table to be read-only or mixed can result in data being moved to a shared database container from a tenant database container.
  • a simulation mode can be used that simulates data sharing for an application and for content deployment.
  • the simulation mode involves storing all database objects in one database container, simulating read-only/shared access, and redirecting write operations for the appropriate database objects.
  • Using one database container can enable renaming of database objects to simulate a transition to a shared system setup. If the application performs as expected in the simulation mode, a transition can be performed to transition the database system from the simulation mode to the shared system setup. As discussed below in FIGS. 15-17 , transitioning the database system from the standard system setup to the simulation mode and then from the simulation mode to the shared system setup includes more DDL (Data Definition Language) statements and fewer DML (Data Manipulation Language) statements than transitioning the database system directly to the shared system setup from the standard system setup.
  • FIG. 15 illustrates a system 1500 with a sharing type of simulated.
  • a deployment control system 1502 can use a deployment tool 1504 to simulate an import of tenant data, by importing data to a simulation database container 1505 .
  • the deployment tool 1504 can use a DBI 1506 to deploy data to a writable table 1508 and a writable table 1510 included in the simulation database container 1505 .
  • the deployment control system 1502 can use a deployment tool 1514 (which can be the same as or different than the deployment tool 1504 ) to simulate the importing of shared data, by importing data to the simulation database container 1505 .
  • the deployment tool 1514 can use a DBI 1516 (which can be the same or a different interface as the DBI 1506 ) to deploy shared data to a read-only table 1518 and a read-only table 1520 included in the same simulation database container 1505 that also includes the writable table 1508 and the writable table 1510 .
  • a view 1522 provides read access to the read-only table 1520 .
  • a union view 1524 provides unified access to the read-only table 1518 and the writable table 1508 .
  • a simulation of the sharing mode can be accomplished by: disabling, using a DBI 1526 , application write access to read-only tables, such as the read-only table 1518 ; redirecting application write queries received for the union view 1524 to the writable table 1508 , if the records to be modified match a defined key pattern; providing application read access to the read-only table 1520 using the read-only view 1522 ; and providing application read access to the read-only table 1518 (and the writable table 1508 ) using the union view 1524 .
  • FIG. 16 illustrates a system 1600 for transitioning from a standard system 1602 to a simulated system 1604 .
  • the transition from the standard system 1602 to the simulated system 1604 is described in a flowchart 1606 .
  • a read-only table 1610 included in a database container 1612 is renamed from “TABR” to “/R/TABR”, as illustrated by a read-only table 1614 in a simulated database container 1616 .
  • the database container 1612 included in the standard system 1602 can be used as the simulated database container 1616 in the simulated system 1604 . That is, the database container 1612 is a pre-transition illustration and the simulated database container 1616 shows container content post-transition.
  • a view 1620 is created on the read-only table 1614 .
  • a "TAB" mixed table 1624 included in the database container 1612 is renamed to "/R/TAB", as illustrated by a read-only table 1626 included in the simulated database container 1616 .
  • a writable "/W/TAB" table 1630 is created in the simulated database container 1616 .
  • data is moved from the read-only table 1626 to the writable table 1630 according to tenant content definition. For example, tenant-specific data that matches key patterns defined for tenant content is moved from the read-only table 1626 to the writable table 1630 .
  • a union view 1636 is created on the read-only table 1626 and the writable table 1630 .
  • a writable table 1638 included in the database container 1612 remains in the simulated database container 1616 , as illustrated by a writable table 1640 .
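  • Because everything stays in one database container, the standard-to-simulated transition in the flowchart 1606 is mostly renames and view creation rather than cross-container copies; a minimal sketch (rename syntax is an assumption):

        RENAME TABLE "TABR" TO "/R/TABR";
        CREATE VIEW "TABR" AS SELECT * FROM "/R/TABR";

        RENAME TABLE "TAB" TO "/R/TAB";
        CREATE TABLE "/W/TAB" LIKE "/R/TAB";                       -- structure-only copy; syntax is an assumption
        INSERT INTO "/W/TAB" SELECT * FROM "/R/TAB" WHERE KF1 LIKE 'Z%';
        DELETE FROM "/R/TAB" WHERE KF1 LIKE 'Z%';                  -- "moved" = copied, then removed
        CREATE VIEW "TAB" AS SELECT * FROM "/R/TAB" UNION ALL SELECT * FROM "/W/TAB";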
  • FIG. 17 illustrates a system 1700 for transitioning from a simulated system 1702 to a shared/tenant system 1704 .
  • a simulated system 1702 includes a simulated container 1706 that includes a read-only table 1708 , a read-only table 1710 , a writable table 1712 , a view 1714 on the read-only table 1708 , a union view 1716 on the read-only table 1710 and the writable table 1712 , and a writable table 1717 .
  • a transition from the simulated system 1702 to the shared/tenant system 1704 is described in a flowchart 1718 .
  • the read-only “/R/TABR” table 1708 is moved to a shared container 1722 included in the shared/tenant system 1704 , as illustrated by a read-only table 1724 .
  • a view 1727 is recreated for the read-only table 1724 (e.g., “shared./R/TABR”), as shown in a tenant container 1728 .
  • the view 1714 may become invalid or be deleted when the read-only table 1708 is moved.
  • the tenant container 1728 is a post-transition view of the simulated container 1706 . That is, the simulated container 1706 can serve as a container for the tenant once the transition has completed, with the tenant container 1728 being an illustration showing container contents after completion of the transition.
  • the read-only “/R/TAB” table 1710 is moved from the simulated container 1706 to the shared container 1722 , as illustrated by a read-only table 1732 .
  • a union view 1736 is recreated on the read-only table 1732 and a writable table 1738 that corresponds to the writable table 1712 .
  • the union view 1716 may become invalid or be deleted when the read-only table 1710 is moved to the shared container 1722 .
  • a writable table 1740 corresponds to the writable table 1717 (that is, the writable table 1717 remains unchanged and is included in the tenant container 1728 post transition).
  • FIG. 18 illustrates a system 1800 for transitioning from a shared/tenant system 1802 to a standard system 1804 . Such a transition may occur, for example, if cross-container access incurred an unacceptable performance degradation, or if a determination is made that not enough shared content exists to warrant multi-tenancy.
  • the shared/tenant system 1802 includes a shared database container 1806 and a pre-transition tenant database container 1808 .
  • the standard system 1804 includes a post-transition database container 1810 .
  • the post-transition database container 1810 is a post-transition illustration of the pre-transition tenant database container 1808 .
  • the shared container 1806 is not used in the standard system 1804 post transition.
  • a “tenant./W/TABR” table 1815 is created in the post-transition tenant database container 1810 .
  • (the "/W/TABR" table name is shown crossed out since the table 1815 is renamed in a later operation).
  • data is copied from a read-only table 1818 in the shared database container 1806 (e.g., “shared./R/TABR”) to the table 1815 .
  • the read-only table 1818 (e.g., “shared./R/TABR”) is dropped from the shared database container 1806 .
  • a view 1824 that had been configured for the read-only table 1818 is dropped (e.g., the post-transition database container 1810 does not include a view).
  • the “tenant./W/TABR” table is renamed to be “tenant.TABR”, as shown by an updated “TABR” name of the table 1815 .
  • Processing of read-only data described in operations 1814 , 1820 , 1822 , and 1826 can alternatively be performed by the processing described in an alternative flowchart 1828 .
  • the view 1824 can be dropped.
  • the table 1815 , with a name of "TABR", can be created in the post-transition database container 1810 .
  • data can be copied from the read-only table 1818 to the “TABR” table 1815 .
  • data is copied from a read-only table 1838 in the shared database container 1806 (e.g., “shared./R/TAB”) to a writable table 1840 in the pre-transition tenant container 1808 (e.g., “tenant./W/TAB”). That is, records that had been previously split into the shared read-only table 1838 and the writable table 1840 are now included in the writable table 1840 .
  • a union view 1844 is dropped from the pre-transition tenant database container 1808 (e.g., the post-transition database container 1810 does not include a union view).
  • the writable table 1840 (e.g., "tenant./W/TAB") is renamed to "tenant.TAB", as illustrated by a table 1848 in the post-transition database container 1810 .
  • a writable table 1850 included in the pre-transition tenant database container 1808 remains unchanged and is included in the post-transition database container 1810 , e.g., as a writable table 1852 .
  • FIG. 19 illustrates a system 1900 for transitioning from a simulated system 1902 to a standard system 1904 .
  • a transition from a system sharing type of simulated to a system sharing type of standard can occur, for example, if a problem is detected in the simulated system setup, and developers wish to debug the problem in a standard system setup.
  • the simulated system 1902 includes a pre-transition simulated database container 1906 .
  • the standard system 1904 includes a post-transition tenant database container 1908 .
  • the post-transition tenant database container 1908 is a post-transition illustration of the pre-transition simulated database container 1906 (e.g., the post-transition tenant database container 1908 and the pre-transition simulated database container 1906 can be the same container, with different content at different points in time).
  • a view 1916 on a read-only table 1918 is dropped (e.g., the post-transition tenant database container 1908 does not include a view).
  • the read-only table 1918 is renamed from a name of “/R/TABR” to “TABR”, as illustrated by a read-only table 1922 in the post-transition tenant database container 1908 .
  • content is copied from a “/R/TAB” read-only table 1926 to a “/W/TAB” writable table 1928 . That is, records that had been previously split into the read-only table 1926 and the writable table 1928 are now included in the writable table 1928 .
  • a “TAB” union view 1932 is dropped from the pre-transition simulated database container 1906 (e.g., the post-transition tenant database container 1908 does not include a union view).
  • the writable table 1928 is renamed from "/W/TAB" to "TAB", as illustrated by a writable table 1936 included in the post-transition tenant database container 1908 .
  • Processing of writable data described in operations 1924 , 1930 , and 1934 can alternatively be performed by the processing described in an alternative flowchart 1938 .
  • content can be copied from the writable table 1928 to the read-only table 1926 .
  • the “TAB” view 1932 can be dropped.
  • the read-only table 1926 can be renamed from “/R/TAB” to “TAB”, to become the writable table 1936 .
  • a writable table 1946 included in the pre-transition simulated database container 1906 remains unchanged and is included in the post-transition tenant database container 1908 , e.g., as a writable table 1948 .
  • Changes may need to be deployed to a system during a system's lifetime, such as during maintenance and upgrade phases. Changes can include emergency patches, hot fixes, service packs and release upgrades, for example. Changes can include new content, new tables, modified content, or other changes that may need to be deployed to a shared database container and/or a tenant database container.
  • a deployment, such as a patch can be a shared-only patch.
  • the patch can include changes to vendor-provided objects, such as reports, classes, modules, or other objects that are only in a shared database container.
  • Other deployments can include changes to be made to data in both a shared database container and in tenant database containers.
  • a given software object can include data stored in a shared database container and/or a tenant database container, for example.
  • Challenges can arise when deploying changes to a multi-tenancy database system, since if an online shared database container is changed, those changes can be visible to tenant applications. The changes can cause inconsistencies and/or application errors. If shared content referenced or depended on by tenant data is changed, all connected tenants should generally be changed as well to ensure consistency for the tenants. To avoid inconsistencies and errors, tenants can be upgraded, which can involve taking tenants offline. Upgrading of tenants can include deployment of objects that are at least partially stored in a tenant database and post-processing for tenant objects that relate to a shared object.
  • If a problem occurs with a particular tenant, an attempt can be made to correct the problem during a predetermined downtime window. If the problem cannot be corrected during the available downtime window, the tenant can be reverted to connect to an earlier version of a shared container and brought back online. However, the tenant needing a connection to the earlier version of the shared container can pose a challenge for those tenants who are already connected to a new version of a shared container, if only one shared database container is used.
  • One deployment approach can be to revert all tenants back to a prior version upon an error happening in a deployment of a respective tenant, with a later re-attempt of the deployment for all tenants. Such an approach can cause undesirable downtime for tenants, however.
  • If a deployment includes changes to a relatively small percentage of tables in a system, such as with an emergency patch, the changes can be made to both an existing production shared database container and existing production tenant database containers.
  • an approach of exchanging a shared container can be used, so that a new shared database container includes the changed data when it is inserted into the system.
  • the new shared database container can be inserted into the system in parallel with an existing shared database container.
  • Tenant database containers can be changed individually to connect to the new shared database container.
  • an existing shared database container is replaced with a new version and content is adjusted in connected tenants.
  • the replacement approach avoids upgrading the existing shared container in place, which can reduce overall deployment runtime.
  • a new shared database container is deployed, tenants are linked to the new shared database container, and the old shared database container can be deleted.
  • the new shared database container is deployed in parallel to the old shared database container, so that both can be simultaneously accessible by tenants.
  • Having both shared database containers simultaneously accessible allows the deployment of the new shared container during “uptime”, since tenants can still productively use the old shared database container. Then tenants can be upgraded separately (either individually or potentially multiple tenants in parallel, but each done independently). Individual tenant upgrades can allow each tenant to define an individual downtime window. A problem with one tenant upgrade does not need to prolong downtime of other tenants. Having both shared database containers simultaneously accessible also allows some tenants to temporarily remain on an old version of the software using the old shared database container while some tenants use the new version of the software with the new shared database container.
  • when a tenant is linked to the new shared database container, views reading from the old shared database container are dropped and new views are created that read from the new shared database container.
  • Subsequent actions are performed to deploy remaining content to the tenants. For example, if objects are stored partly in the shared database container and partly in the tenant database container, a complement of the objects being delivered with the shared database container can be deployed to the tenants. Additionally, follow-up activities can be performed in the tenant, as described in more detail below.
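  • The per-tenant relink step could be sketched as a drop-and-recreate of the tenant's access views so that they read from the new shared database container (the container name SHARED_NEW is a placeholder, not the patent's naming):

        -- Executed inside one tenant database container during that tenant's downtime window
        DROP VIEW "TABR";
        CREATE VIEW "TABR" AS SELECT * FROM "SHARED_NEW"."/R/TABR";

        DROP VIEW "TAB";
        CREATE VIEW "TAB" AS
          SELECT * FROM "SHARED_NEW"."/R/TAB" UNION ALL SELECT * FROM "/W/TAB";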
  • FIG. 20 illustrates a system 2000 that includes data for objects in both a shared database container 2002 and a tenant database container 2004 .
  • Objects used in business applications can be persisted in a set of database tables. Objects can be shipped by a vendor to a customer, and customers can also create custom objects (e.g. classes, configurations, user interfaces).
  • the tables used for the persistency of an object can all be of the same table type (e.g., read-only, mixed, writable). Therefore, some objects may have data that is only in the shared database container 2002 or only in the tenant database container 2004 .
  • an object can store data in tables of different types, such as if several objects re-use a table to store data (e.g., for documentation or text elements). Accordingly, some objects may have data that is in both the shared database container 2002 and the tenant database container 2004 .
  • an object deployment can be split into two parts: a deployment to a shared database container and a deployment to tenant database container(s).
  • the shared database container 2002 includes a read-only table T1 2006 and a read-only table 2008 T2#1 that stores read-only records for a mixed table named T2.
  • the tenant database container 2004 includes a writable table 2010 and a writable table 2012 that stores writable tenant records for the T2 mixed table.
  • a style key 2014 shows a dashed-line style 2016 used to mark entries in the shared database container 2002 and the tenant database container 2004 that correspond to a first object that includes both vendor and customer data.
  • a first entry 2018 and a second entry 2020 represent shared vendor data being stored for the first object in the read-only table 2006 and the read-only table 2008 , respectively, in the shared database container 2002 .
  • a third entry 2024 represents tenant data being stored for the first object in the writable table 2010 , in the tenant database container 2004 .
  • the first object does not store data in the writable table 2012 .
  • the style key 2014 shows a dotted line style 2026 used to mark entries 2028 and 2030 in the tenant database container 2004 .
  • the entries 2028 and 2030 represent tenant data being stored for a second object in the writable table 2012 and the writable table 2010 respectively.
  • the second object is a customer object that includes writable customer data and no shared read-only data.
  • FIG. 21A illustrates an example system 2100 for deploying changes to objects in a database system.
  • a deployment tool 2102 can determine, from a deploy data file 2104 , which objects have changes to be deployed, which tables are to be updated with changes to a given object, and whether each object has changes to be made to a shared database container 2106 , a tenant database container 2108 , or both the shared database container 2106 and the tenant database container 2108 .
  • the deployment tool 2102 can determine, from information in the deploy file 2104 , that an object “R” 2110 includes data in a TR1 table 2112 and a TR2 table 2114 .
  • the deployment tool 2102 can determine, from metadata in a sharing type table 2116 (which may exist in the shared database container 2106 or another location), that the TR1 table 2112 and the TR2 table 2114 are read-only tables. Accordingly, the deployment tool 2102 can determine that the object "R" is a completely-shared object (e.g., exists only in the shared database container 2106 ), as illustrated by note 2118 .
  • the deployment tool 2102 can determine, from information in the deploy file 2104 , that an object “M” 2120 includes data in the TR1 table 2112 , a T2 table, and a T3 table 2122 .
  • the deployment tool 2102 can determine, from metadata in the sharing type table 2116 , that the TR1 table 2112 is a read-only table and that the T3 table 2122 is a local table.
  • the deployment tool 2102 can determine that the T2 table is a split table (and thus implemented as a read-only table 2123 in the shared database container 2106 and a writable table 2124 in the tenant database container 2108 ).
  • the deployment tool 2102 can determine that content for the object “M” is split, between the shared database container 2106 and the tenant database container 2108 , as illustrated by note 2125 .
  • the deployment tool 2102 can determine, from information in the deploy file 2104 , that an object “L” 2126 includes data in an A1 table 2128 , an A2 table 2130 , an A3 table 2132 , and an A4 table 2134 .
  • the deployment tool 2102 can determine, from metadata in the sharing type table 2116 , that the A1 table 2128 , the A2 table 2130 , the A3 table 2132 , and the A4 table 2134 are each local tables. Accordingly, the deployment tool 2102 can determine that the object "L" is a completely-tenant object (e.g., exists only in the tenant database container 2108 ), as illustrated by note 2136 .
  • the deployment tool 2102 can track deployment status and can know what objects have been deployed, whether partially or completely. For example, the deployment tool 2102 can update a deploy status table 2138 that indicates, that at a current point in time, the object “R” 2110 has been completely deployed, the object “M” 2120 has been partially deployed, and the object “L” has not yet been deployed.
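  • A hedged sketch of the metadata lookups implied above, assuming simple SHARING_TYPE and DEPLOY_STATUS tables (the table and column names are illustrative, not the patent's exact schema):

        -- Classify the tables used by one object to decide where its data lives
        SELECT TABNAME, SHARING_TYPE              -- e.g., read-only, writable, or split
          FROM "SHARING_TYPE"
         WHERE TABNAME IN ('TR1', 'T2', 'T3');

        -- Track per-object progress as the deployment proceeds
        UPDATE "DEPLOY_STATUS"
           SET STATUS = 'PARTIALLY DEPLOYED'
         WHERE OBJECT_NAME = 'M';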
  • When using the exchanged shared database container approach, objects that exist only in the shared database container 2106 are updated when a new shared database container is installed. Accordingly, and as illustrated by note 2140 , the deployment tool 2102 does not deploy content to the existing shared database container 2106 ; rather, shared database container content is available in the new shared database container (not shown in FIG. 21A ).
  • the deploy status table 2138 can be updated and populated when preparing the new shared database container, to indicate, for example, that the completely-shared object “R” is already deployed (e.g., already in the new shared database container), that the object “M” is partially-deployed (e.g., shared portions of the object “M” are already in the new shared database container at the start of the deployment, in the TR1 table 2112 and the T2 table 2123 ), and that the object “L” has not yet been deployed. The remaining part of the object “M”, and the object “L” will be deployed as part of a tenant deployment.
  • a deploy to tenant can include deploying portions of an object that are stored in a local table or in a local part of a mixed table.
  • deployment for the object “M” to a tenant can include deployment of data to the writable table 2124 and/or to the local table 2122 .
  • Deployment for the object "L" to a tenant can include deployment to the local tables A1 2128 , A2 2130 , A3 2132 , and A4 2134 .
  • Tenant deployment can also include dropping of views to the shared database container 2106 (e.g., views 2142 , 2144 , 2146 , and 2148 ) and the updating of union views, such as a union view 2150 .
  • FIG. 21B illustrates an example system 2180 for deploying changes to objects in a database system.
  • the system 2180 is an illustration of the system 2100 when a deployment uses an approach of modifying, rather than exchanging, an existing shared database container (e.g., during deployment of an emergency patch).
  • a deployment tool 2186 (which can be the same as the deployment tool 2102 ) can deploy changes to objects that are completely or partially stored in the shared database container 2106 .
  • deployment to the shared database container 2106 can include modification, in place, of the read-only table 2112 and the read-only table 2114 when deploying the object "R", and modification, in place, of the read-only table 2112 and the read-only table 2123 when deploying the object "M".
  • the deployment status table 2138 can be updated as the deployment process proceeds. Deployment of patches is described in more detail below.
  • FIG. 22 illustrates an example system 2200 for upgrading a multi-tenancy database system 2202 using an exchanged shared database container approach.
  • the multi-tenancy database system 2202 includes a first tenant database container 2204 and a second tenant database container 2206 that are each connected to a shared database container 2208 , with each of the first tenant database container 2204 , the second tenant database container 2206 and the shared database container 2208 at a particular version (e.g., version “ 1708 ”).
  • a first application server 2210 , also at the version "1708", sends queries to the first tenant database container 2204 , for data in the first tenant database container 2204 and/or in the shared database container 2208 .
  • a second application server 2212 , also at the version "1708", sends queries to the second tenant database container 2206 , for data in the second tenant database container 2206 and/or in the shared database container 2208 .
  • a new shared database container that includes shared database container changes as compared to a current version can be deployed, as illustrated by a new shared database container 2220 , at a new version (e.g., version “ 1711 ”), in a database system 2222 .
  • the new shared database container 2220 is included in the database system 2222 in parallel along with a current-version (e.g., version “ 1708 ”) shared database container 2224 .
  • a naming convention can be used to name the new shared database container 2220 and the current-version shared database container 2224 , to ensure uniqueness of shared database container names.
  • shared database containers can be named using a combination of a product name and a version number.
  • Tenants can be linked, one at a time, to the new shared database container 2220 .
  • a second application server 2226 and a second tenant database container 2228 have been upgraded to the new version (e.g., version “ 1711 ”), with the second tenant database container 2228 now linked to the new shared database container 2220 .
  • a first application server 2230 and a first tenant database container 2232 are still at the old version (e.g., version “ 1708 ”), and the first tenant database container 2232 is still connected to the current-version shared database container 2224 .
  • the first tenant database container 2232 can be identified as a next tenant database container to upgrade.
  • a database system 2240 includes a first tenant database container 2242 and a first application server 2244 now at the new version (e.g., version “ 1711 ”), with the first tenant database container 2242 now connected to a new shared database container 2244 also at the new version.
  • the old shared database container (e.g., what was the current-version shared database container 2224 ) can be dropped after all tenants have been linked to the new shared database container 2220 .
  • FIG. 23 illustrates an example system 2300 for deploying a new service pack to a multi-tenancy database system.
  • the system 2300 includes an existing shared database container 2302 at a version of “ 1231 ” and service pack two (SP2).
  • An application server 2304 and a tenant database container 2306 for a first tenant are also at the version “ 1231 ” and SP2.
  • the existing shared database container 2302 , the tenant database container 2306 , and respective included components, are illustrated in a solid line, to denote being at version “ 1231 ” and SP2.
  • a view 2308 provides access to a TABR read-only table 2310 in the existing shared database container 2302 .
  • a second tenant served by an application server 2312 has been upgraded to a new service pack level (SP3), as described below.
  • a deployment tool 2314 can attach, to the system 2300 , a new shared database container 2316 that has been configured to be at a next service pack (SP3).
  • the new shared database container 2316 includes a new TABR read-only table 2318 that includes changes for the new service pack.
  • the deployment tool 2314 can, when upgrading the second tenant, drop, from a tenant database container 2319 , a view to the TABR read-only table 2310 in the existing shared database container 2302 and add a new view 2320 to the new TABR read-only table 2318 in the new shared database container 2316 .
  • the deployment tool 2314 can import changes to a writable table 2322 , so that the writable table 2322 is at the new service pack level.
  • the tenant database container 2319 , the new shared database container 2316 , and respective included components, are illustrated in a dashed line to denote being at SP3.
  • the deployment tool 2314 can, at a later time, perform deployment operations similar to those done for the second tenant to upgrade the first tenant, so that both tenants are at SP3.
  • the existing shared database container 2302 can be dropped after all tenants have been upgraded.
  • FIG. 24 illustrates an example system 2400 for maintenance of a database system 2401 .
  • a service pack (SP) master 2402 can be used to create a delivery package.
  • the SP master 2402 may have been used to create a delivery package 2404 when deploying a SP1 service pack to the database system 2401 .
  • a SP1 shared database container 2406 and tenant database containers 2408 , 2410 , and 2412 are each at the SP1 level, for example.
  • the SP1 shared database container and the tenant database containers 2408 , 2410 , and 2412 can be referred to as a cluster.
  • the delivery package 2404 may have been created for a past deployment to the cluster.
  • the delivery package 2404 includes a copy 2414 of the SP1 shared database container 2406 and a transport file 2416 that includes changes that had been imported to the tenant database containers 2408 , 2410 , and 2412 during the deployment of the SP1 service pack.
  • the SP master 2402 can create a new delivery package 2418 that includes a new SP2 shared database container 2420 and a transport file 2422 that include changes for a new service pack (SP2).
  • the new SP2 shared database container 2420 can be attached to the database system 2401 , as illustrated by an attached SP2 shared database container 2424 .
  • Objects, such as views, in the tenant database containers 2408 , 2410 , and 2412 can be detached from the SP1 shared database container 2406 and connected to the attached SP2 shared database container 2424 .
  • the transport file 2422 can be applied to the tenant database containers 2408 , 2410 , and 2412 , to upgrade them to a SP2 level. After all tenants have been upgraded, the SP1 shared database container 2406 can be dropped.
  • FIG. 25 illustrates an example system 2500 for upgrading a multi-tenancy system 2502 to a new version.
  • the multi-tenancy system 2502 is in a state of partial completion of upgrading from an old “ 1708 ” version to a new “ 1711 ” version.
  • some tenants can use, in production, a prior (e.g., “start”) release shared database container, while other tenants use a new (e.g., “target”) release shared database container, while still other tenants are offline and being upgraded to the new release.
  • the multi-tenancy system 2502 includes a version “ 1708 ” shared database container 2504 .
  • Tenant database containers 2506 and 2508 (e.g., “Tenant 01” and “Tenant 02”, respectively) are also at version “ 1708 ” and are connected to the version “ 1708 ” shared database container.
  • Tenant database containers 2510 and 2512 (e.g., “Tenant 05” and “Tenant 06”, respectively) have been converted to the version “ 1711 ” and are now connected to a version “ 1711 ” shared database container 2513 that has been added to the multi-tenancy system 2502 during the upgrade.
  • Tenant database containers 2514 and 2516 (e.g., “Tenant 03” and “Tenant 04”, respectively) are currently being upgraded.
  • An overview of an upgrade process for a given tenant is outlined in a flowchart 2520 .
  • the given tenant is backed up at a beginning of a downtime period. For example, a backup 2524 of the tenant database container 2514 and a backup 2526 of the tenant database container 2516 have been created.
  • a link to the new (e.g., version “ 1711 ”) shared database container 2513 is established.
  • new views can be established, as described in more detail below in FIGS. 74-79 .
  • a delta is deployed to the tenant.
  • the delta can be included in a transport file, and can include changes to be applied to tables in the given tenant database container.
  • If the deployment of the delta is unsuccessful, processing operations 2534 can be performed. Processing operations 2534 include: restoring, at 2536 , the backup (e.g., at version “ 1708 ”, such as the backup 2524 for the tenant database container 2514 ); establishing a link, at 2538 , to the old (e.g., “version 1708 ”) shared database container 2504 ; and releasing, at 2540 , the given tenant on the old version “ 1708 ” to the customer.
  • Establishing the link, at 2538 can include restoring views to tables in the “version 1708 ” shared database container 2504 . Deployment can be re-attempted at a later time. If the deployment succeeded, the tenant is released, at 2542 , on the new version “ 1711 ” to the customer.
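  • The per-tenant flow outlined in the flowchart 2520 can be summarized in the following hedged sketch; the helper calls (backup_tenant, link_tenant_to_shared, deploy_delta, restore_tenant, release_to_customer) are hypothetical stand-ins for the deployment tooling and are not defined in this description:

```python
# Sketch of the per-tenant upgrade flow of flowchart 2520 (hypothetical helpers).

def upgrade_tenant(db, tenant, old_shared, new_shared, delta_transport):
    backup_id = db.backup_tenant(tenant)                # backup at start of downtime
    try:
        db.link_tenant_to_shared(tenant, new_shared)    # drop old views, create new views
        db.deploy_delta(tenant, delta_transport)        # apply tenant-local delta changes
    except Exception:
        # Fallback: restore the start-release state, re-link to the old shared
        # container, and release the tenant on the old version; the deployment
        # can be re-attempted at a later time.
        db.restore_tenant(tenant, backup_id)
        db.link_tenant_to_shared(tenant, old_shared)
        db.release_to_customer(tenant, version=old_shared.version)
        return False
    db.release_to_customer(tenant, version=new_shared.version)   # end of downtime
    return True
```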
  • FIGS. 26 to 31 progressively illustrate, in further detail, various stages of an upgrade process for upgrading a database system to a new version, using an exchanged shared database container approach.
  • the exchanged shared database container approach can also be used for deployment of a service pack or patch.
  • FIG. 26 illustrates an example system 2600 before deployment of a new database version using an exchanged shared container approach.
  • the system 2600 includes a shared database container 2602 that includes a current version of a read-only table 2604 that is a shared portion of a mixed table named “TAB”.
  • the shared database container 2602 also includes a read-only table 2606 .
  • the system 2600 includes a first tenant database container 2608 for a first tenant and a second tenant database container 2610 for a second tenant.
  • the first tenant database container 2608 includes a view 2612 to the read-only table 2604 (illustrated as an arrow 2614 ), a writable table 2616 that is a local portion of the mixed table, a union view 2618 providing unified access to the read-only table 2604 and the writable table 2616 , a writable table 2620 , and a view 2621 to the read-only table 2606 (illustrated as an arrow 2622 ).
  • the second tenant database container 2610 includes a view 2623 to the read-only table 2604 (illustrated as an arrow 2624 ), a writable table 2626 that is a local portion of the mixed table, a union view 2628 providing unified access to the read-only table 2604 and the writable table 2626 , a writable table 2630 , and a view 2631 to the read-only table 2606 (illustrated as an arrow 2632 ).
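  • The per-tenant view construct for the mixed table “TAB” (a view onto the shared read-only part plus a union view over the shared and local parts) can be illustrated with the following sketch; the schema and object names used here are assumptions for illustration only:

```python
# Sketch of the tenant-side views for a mixed table "TAB" (assumed names).
# "TAB_RO" reads the shared, read-only portion; the union view "TAB" combines
# it with the tenant-local writable portion "TAB_W".

READ_ONLY_VIEW = '''
CREATE VIEW "TAB_RO" AS
  SELECT * FROM "SHARED_1708"."TAB"        -- shared read-only portion (cf. table 2604)
'''

UNION_VIEW = '''
CREATE VIEW "TAB" AS
  SELECT * FROM "TAB_RO"                   -- shared rows, via the read-only view
  UNION ALL
  SELECT * FROM "TAB_W"                    -- tenant-local writable rows (cf. table 2616)
'''

def create_mixed_table_views(cursor):
    """Create the read-only view and the union view inside a tenant container."""
    cursor.execute(READ_ONLY_VIEW)
    cursor.execute(UNION_VIEW)
```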
  • FIG. 27 is an illustration of a system 2700 that is upgraded in part by exchanging a shared database container.
  • the system 2700 is a view of the system 2600 during a first set of deployment operations, for preparing a shared database container.
  • a new shared database container 2704 can be deployed in parallel to an existing, in-production shared container (e.g., the shared database container 2602 ), without disrupting the operation of the existing shared database container 2602 .
  • the first set of deployment operations, for preparing the shared database container 2704 are outlined in a flowchart 2705 .
  • the new (e.g., version 2 ) shared database container 2704 is copied and attached to the database, at 2707 .
  • the new shared database container 2704 is a container that is included in a delivery package and created at the vendor; it contains a new software version (e.g., a copy of the shared database container 2420 , brought together with the tenant part delivered with the delta deployment package 2807 ).
  • the new shared database container 2704 includes a read-only table 2708 that is a copy of a shared table included in the service pack master 2402 .
  • target connection information (e.g., URL, user name, password) is provided to tenants.
  • the target connection information such as an address of the new shared database container 2704
  • the target connection information can be made available to the first tenant database container 2608 and the second tenant database container 2610 .
  • Information about the new shared database container 2704 can be published to the tenants, so the tenants can read new shared database container content. Read-only access to objects in the shared container can be granted to tenants.
  • the target connection information can be provided to a deployment tool that will respectively upgrade the first tenant and the second tenant.
  • the first tenant database container 2608 and the second tenant database container 2610 can be designated as version two (“V2”) destinations (e.g., upgrade targets).
  • information is provided from the new shared database container 2704 , such as to the deployment tool, including a list of shared tables, information about component versions (e.g., service pack levels), and information about deployed transports and import state.
  • the deployment process continues as described below for FIG. 28 .
  • FIG. 28 is an illustration of a system 2800 that is upgraded in part by exchanging a shared database container.
  • the system 2800 is a view of the system 2600 during a second set of deployment operations, for deploying to a first tenant.
  • the second set of operations are outlined in a flowchart 2802 .
  • connectivity and new shared space information is obtained.
  • connectivity information to connect the first tenant database container 2608 to the new shared database container 2704 can be provided to the first tenant database container 2608 and/or to a deployment tool.
  • an address of the new shared database container 2704 can be provided to the deployment tool.
  • a new shared space version and matching service pack level is determined.
  • the deployment tool can ensure that a version of the new shared database container 2704 matches a version of a delta deployment package 2807 .
  • the delta deployment package 2807 is, for example, a file that was prepared before initiation of the deployment.
  • Creating the delta deployment package 2807 can include identifying objects that are partially included in the new shared database container 2704 and computing the remaining deployment parts (i.e. local content portions of those objects and changes to those local content portions that are to be part of the deployment).
  • Creating the delta deployment package 2807 can also include identifying objects that are completely stored in tenant containers and identifying changes to those objects that are to be part of the deployment.
  • “drop/create” or “alter” statements for views reading from shared tables are computed.
  • drop statements for views to the read-only table 2606 and the read-only table 2604 can be prepared.
  • drop statements dropping the view 2631 (illustrated as the arrow 2632 ), the view 2621 (illustrated as the arrow 2622 ), the view 2612 (illustrated as the arrow 2614 ), and the view 2623 (illustrated as the arrow 2624 ) can be prepared.
  • Respective create view statements for creating new views in the first tenant database container 2608 and in the second tenant database container 2610 to the read-only table 2708 and the read-only table 2710 can be prepared.
  • the new shared database container 2704 can include more or fewer tables than the shared database container 2602 . Therefore, the set of views to be created depends on the contents of the new shared database container 2704 .
  • the new shared database container 2704 can include an administrative table (not shown) that includes a list of tables included in the new shared database container 2704 . The administrative table can be read, so that statements can be prepared that will, when executed, drop views to all tables in the shared database container 2602 and create new views for all tables in the new shared database container 2704 .
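  • A hedged sketch of this statement computation is shown below; the administrative table name (“SHARED_TABLES”) and the one-view-per-shared-table naming are assumptions used only to illustrate deriving drop-view and create-view statements from the containers' table lists:

```python
# Sketch: derive drop-view/create-view statements for one tenant from the
# lists of tables in the old and new shared containers (assumed admin table).

def compute_view_statements(cursor, old_shared_schema, new_shared_schema):
    cursor.execute(f'SELECT TABLE_NAME FROM "{old_shared_schema}"."SHARED_TABLES"')
    old_tables = [row[0] for row in cursor.fetchall()]
    cursor.execute(f'SELECT TABLE_NAME FROM "{new_shared_schema}"."SHARED_TABLES"')
    new_tables = [row[0] for row in cursor.fetchall()]

    drop_statements = [f'DROP VIEW "{t}"' for t in old_tables]      # views onto the old container
    create_statements = [
        f'CREATE VIEW "{t}" AS SELECT * FROM "{new_shared_schema}"."{t}"'
        for t in new_tables                                         # views onto the new container
    ]
    return drop_statements, create_statements   # executed later, during the tenant's downtime
```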
  • a target destination and table names are read, and statements are computed, for data to be transported to tenant database containers.
  • the deployment can include changes to the writable table 2616 and/or the writable table 2620 in the first tenant database container 2608 .
  • the deployment can include changes to the writable table 2626 and/or the writable table 2630 in the second tenant database container 2610 .
  • Statements (e.g., alter statements) to adjust the structure of these writable/local tables can be computed, for later execution, as described below. If the structure of the writable table 2616 is to be adjusted, a statement to re-create the union view 2618 can be prepared, to create a view that includes the updated structure of the writable table 2616 . The deployment process continues as described below for FIG. 29 .
  • FIG. 29 is an illustration of a system 2900 that is upgraded in part by exchanging a shared database container.
  • the system 2900 is a view of the system 2600 during a third set of deployment operations, for completing a deployment to a first tenant.
  • the third set of operations are outlined in a flowchart 2902 .
  • previously-prepared statements are executed.
  • previously-prepared drop-view statements to drop views to the shared database container 2602 (e.g., the views 2612 and 2621 illustrated as the arrows 2614 and 2622 , respectively, on previous figures) can be executed, by a transport control component 2905 .
  • New views, to the read-only table 2708 and the read-only table 2710 in the new shared database container 2704 , can be created in the first tenant database container 2608 , using previously-prepared create-view statements.
  • a view 2906 to the read-only table 2708 can be created (with the connection illustrated as an arrow 2908 ).
  • a view 2910 to the read-only table 2710 can be created (with the connection illustrated as an arrow 2912 ).
  • the transport control component 2905 can also execute previously-prepared alter statements, to adjust structures of local tables, as illustrated by an updated writable table 2914 and an updated writable table 2916 . If the structure of the writable table 2914 is new and/or the structure of the view 2910 is new (e.g., as compared to the read-only view 2612 ), the transport control component 2905 can execute a statement to create a new union view 2918 to replace the union view 2618 .
  • local content is deployed.
  • a transport program 2922 can copy data from the delta deployment package 2807 to the updated writable table 2916 .
  • the transport program 2922 can copy data from the delta deployment package 2807 to the updated writable table 2914 .
  • the local content can include content that is the local portion of objects that are partially stored in the new shared database container 2704 and partially stored in the first tenant database container 2608 .
  • Local content can also include content for objects that are completely stored in the first tenant database container 2608 and not stored in the new shared database container 2704 .
  • a status update is written to local patch tables.
  • status information indicating that the first tenant has been upgraded to version two can be stored, such as in an administrative table in the new shared database container 2704 (not shown) or in another location.
  • the first tenant is registered at a target shared space.
  • the first tenant database container 2608 can be registered, in an administrative table in the new shared database container 2704 , as being connected to the new shared database container 2704 .
  • the first tenant is de-registered from the source shared space. For example, an entry can be deleted (or marked as inactive) in an administrative table in the shared database container 2602 , with the deletion or the marking as inactive indicating that the first tenant database container 2608 is no longer connected to the shared database container 2602 .
  • version one destination information is deleted. The deployment process continues as described below for FIG. 30 .
  • FIG. 30 is an illustration of a system 3000 that is upgraded in part by exchanging a shared database container.
  • the system 3000 is a view of the system 2600 during a fourth set of deployment operations, for deploying to a second tenant.
  • Deployment of the second tenant can include a same set of operations as performed for the first tenant, as described above for FIG. 28 and FIG. 29 .
  • Deployment for the second tenant can include the dropping, in the second tenant database container 2610 , of views to the shared database container 2602 (e.g., the views 2623 and 2631 , illustrated as the arrows 2624 and 2632 , respectively, on previous figures).
  • Deployment for the second tenant can include the creating of new views, to the read-only table 2708 and the read-only table 2710 , in the new shared database container 2704 , as illustrated by a new view 3002 and arrow 3004 , and a new view 3006 and arrow 3008 .
  • Deployment for the second tenant can include the adjustment of and deployment of content to local tables, as illustrated by an updated writable table 3010 and an updated writable table 3011 .
  • An updated union view 3012 can be created to reflect updated structure(s) of the updated writable table 3010 and/or the new view 3002 .
  • the shared database container 2602 can be dropped, as illustrated by an “X” 3014 .
  • FIG. 31 is an illustration of a system 3100 that is upgraded in part by exchanging a shared database container.
  • the system 3100 is a view of the system 2600 in a final state, after deployment to all tenants, including the first tenant database container 2608 and the second tenant database container 2610 , has been completed.
  • the shared database container 2602 has been dropped and is no longer included in the system 3100 .
  • the shared database container 2602 can be dropped, for example, after test(s) have been performed to ensure that all tenants are using the new shared database container 2704 . Completing a deployment can also include performing other tests, such as to ensure that all parts of all objects to be changed in the new version have been deployed.
  • Post actions can include invalidating table buffers (e.g., that store previously read shared content) in an application server 3102 and/or an application server 3104 (the application servers 3102 and 3104 being different or a same server) for tables that have been switched to read from the new shared database container 2704 , invalidating previously-compiled objects, triggering re-compile of objects to now read from the new shared database container 2704 , re-generating tenant-specific objects that depend on shared content and tenant content, and calling other application-specific follow-up actions related to the deployment of changed content in a tenant.
  • After-deployment actions can ensure that objects are consistent with deployed content.
  • FIG. 32 illustrates a system 3200 for deploying changes to objects.
  • changes can be applied in place to both the shared database container 3202 and tenant database containers (e.g., a first tenant database container 3204 and a second tenant database container 3206 ).
  • Deployment can be performed in two phases: 1) deployment to the shared database container 3202 ; and 2) deployment to the tenant database containers 3204 and 3206 , which can be performed independently. Independent tenant deployments can enable sequential and de-coupled deployments.
  • a deployment tool 3208 can ensure that a patch is completely deployed both to the shared database container 3202 and to each tenant database container 3204 and 3206 , including ensuring that any planned follow-up actions have been performed for all tenants.
  • the deployment tool 3208 can identify a deployment file entry 3209 in a deployment package 3210 for a given object, and determine that the given object includes data stored in T1, T2, and T3 tables.
  • the deployment tool 3208 can access metadata 3212 that indicates that the T1 table is a shared read-only table (and thus residing in the shared database container 3202 , e.g., as a read-only table 3214 ), the T2 table is a split table (and thus partially residing in the shared database container 3202 , e.g., as a read-only table 3216 ), and the T3 is a tenant-local table (and thus respectively residing in tenant database containers, e.g., as a local table 3218 and a local table 3220 ).
  • the deployment tool 3208 can identify, based on the metadata 3212 and the deployment file entry 3209 , the given object as at least partially included in the shared database container 3202 .
  • the deployment tool 3208 can deploy, for the given object, changes for the portions of the given object that reside in the shared database container 3202 , as illustrated by an entry 3222 in the T1 read-only table 3214 and an entry 3224 in the T2 read-only table 3216 .
  • the entry 3222 can be populated with data from an entry 3226 in the deployment file entry 3209 .
  • the entry 3224 can be populated with data from an entry 3228 in the deployment file entry 3209 .
  • the deployment tool 3208 can store a record, in a status table, that indicates that the given object is partially deployed.
  • the deployment tool 3208 can next perform the deployment to tenant phase, which can include a deployment to the first tenant database container 3204 and a deployment to the second tenant database container 3206 .
  • the deployments to the tenant database containers can operate independently, and may happen sequentially, or in parallel.
  • the deployment tool 3208 can identify the given object associated with the entry 3209 as an object that has been partially deployed, based on the entry 3209 and the metadata 3212 indicating that the given object includes data in the T3 tenant-local table.
  • the deployment tool 3208 can determine that a portion of the given object that is stored in an entry 3230 in the deployment file entry 3209 has not yet been deployed.
  • the deployment tool 3208 can deploy the entry 3230 , to the first tenant database container 3204 and the second tenant database container 3206 , as illustrated by an entry 3232 and an entry 3234 .
  • Other deployment tasks that can be performed by the deployment tool 3208 include identifying objects that have not been deployed to the shared database container (e.g., objects that reside only in local tenant tables), and deploying changes to those objects.
  • Finalization tasks performed by the deployment tool 3208 can include invoking actions to operate on deployed content, which can include, for example, triggering buffer invalidation and buffer refresh, or compiling deployed code.
  • Finalization tasks can also include ensuring that all parts of all objects to be included in the deployment have been deployed.
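  • The two-phase routing described for FIG. 32 can be sketched as follows; the sharing types and the key-pattern test stand in for the metadata 3212 and the data split definition, and the “Z” prefix used for tenant-local keys is an assumption for illustration only:

```python
# Sketch of the two-phase deployment routing of FIG. 32 (assumed metadata shapes).

SHARING_TYPE = {"T1": "shared-read-only", "T2": "split", "T3": "local"}   # cf. metadata 3212

def is_shared_key(row):
    """Placeholder key-pattern test for split tables (assumed pattern: 'Z*' keys are local)."""
    return not str(row.get("KEY", "")).startswith("Z")

def split_deployment(entries):
    """entries: list of (table_name, row_dict) read from a deployment package entry."""
    shared_phase, tenant_phase = [], []
    for table, row in entries:
        kind = SHARING_TYPE.get(table, "local")
        if kind == "shared-read-only" or (kind == "split" and is_shared_key(row)):
            shared_phase.append((table, row))   # phase 1: deployed once, into the shared container
        else:
            tenant_phase.append((table, row))   # phase 2: deployed into each tenant container
    return shared_phase, tenant_phase
```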
  • FIG. 33 illustrates a system 3300 for deploying a patch using a hidden preparation of a shared database container.
  • tenant-independent deployments may be desired, so that tenants can each define their own downtime window and so that, if one tenant deployment has an issue, not all tenant deployments need to be reverted.
  • Deploying a new shared database container in parallel to an existing shared database container is one approach.
  • preparing, in the existing shared database container, hidden versions of individual tables is another approach.
  • This hidden-deployment approach can reduce downtime by providing a tenant-individual fallback option.
  • Hidden changes are initially invisible to tenants, which can continue to productively use current-version tables in the shared database container until each tenant is individually deployed to and switched over to use the new table versions.
  • the system 3300 includes sub-systems 3302 , 3304 , 3306 , and 3308 which provide an overview of the progression of the deployment. Other figures below give further detail to each deployment stage.
  • the sub-system 3302 includes a shared database container 3310 , a first tenant database container 3312 for a first tenant, and a second tenant database container 3314 for a second tenant.
  • the shared database container 3310 includes a read-only table 3316 that is at a first version, with a name of “TABR #1”. Although only one table is illustrated in the shared database container 3310 , the shared database container 3310 can include other tables.
  • the first tenant database container 3312 and the second tenant database container 3314 respectively include a read-only view 3318 or a read-only view 3320 that each provide read access to the read-only table 3316 for a respective tenant.
  • the first tenant database container 3312 and the second tenant database container 3314 respectively also include a writable table 3322 or a writable table 3324 .
  • a patching system 3326 creates a clone/copy of the read-only table 3316 , illustrated as a new read-only table 3328 .
  • the new read-only table 3328 has the same structure as the read-only table 3316 .
  • the patching system 3326 and/or a deployment tool can modify the new read-only table 3328 by importing changes to the new read-only table 3328 for a patch to be deployed to the sub-system 3302 .
  • the new read-only table 3328 is displayed in dashed lines to signify that the new read-only table 3328 is at a new version that includes the patch.
  • the first tenant is switched to be compatible with, and connected to, the updated shared database container 3310 .
  • the view 3318 is dropped and a new view 3330 is created to the new read-only table 3328 .
  • a structure of the writable table 3322 can be updated, as illustrated by an updated writable table 3332 .
  • the second tenant is switched to be compatible with, and connected to, the updated shared database container 3310 .
  • the view 3320 is dropped and a new view 3334 is created to the new read-only table 3328 .
  • a structure of the writable table 3324 can be updated, as illustrated by an updated writable table 3336 .
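  • The hidden-preparation step (cloning the read-only table 3316 to the new read-only table 3328 before any tenant is switched) can be sketched as follows; the schema name and the clone DDL are assumptions, and the exact statements are database-specific:

```python
# Sketch of hidden preparation (FIG. 33): clone the current shared read-only
# table under a versioned name; the clone stays invisible to tenants until
# each tenant's views are switched over to it.

def prepare_hidden_clone(cursor, shared_schema, base_name, current_ver, target_ver):
    src = f"{base_name}#{current_ver}"   # e.g. "TABR#1", still read by all tenants
    dst = f"{base_name}#{target_ver}"    # e.g. "TABR#2", hidden for now
    # Copy structure and content (exact clone DDL varies by database).
    cursor.execute(
        f'CREATE TABLE "{shared_schema}"."{dst}" AS (SELECT * FROM "{shared_schema}"."{src}")'
    )
    return dst   # the deployment tool then imports the patch content into this clone
```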
  • FIGS. 34-39 discuss a more involved example of deployment using hidden preparation of a shared database container, including the use of a mixed table, and more detailed discussions of each operation.
  • FIG. 34 illustrates an example system 3400 before deployment of a patch.
  • the system 3400 includes a shared database container 3402 that includes a current version (e.g., version #1) of a read-only table 3403 that is a shared portion of a mixed table named “TAB”.
  • the system 3400 includes a first tenant database container 3404 and a second tenant database container 3406 .
  • the first tenant database container 3404 includes a view 3408 to the read-only table 3403 (illustrated as an arrow 3409 ), a writable table 3410 that is a local portion of the mixed table, a union view 3412 providing unified access to the read-only table 3403 and the writable table 3410 , and a writable table 3414 .
  • the second tenant database container 3406 includes a view 3416 to the read-only table 3403 (illustrated as an arrow 3417 ), a writable table 3418 that is a local portion of the mixed table, a union view 3420 providing unified access to the read-only table 3403 and the writable table 3418 , and a writable table 3422 .
  • FIG. 35 illustrates a system 3500 for preparation of a shared database container during a deployment of a patch to a database system.
  • the system 3500 is a view of the system 3400 after a first set of deployment operations have been completed.
  • the first set of deployment operations are outlined in a flowchart 3502 .
  • a patch system 3506 reads a deployment package 3508 to identify shared tables to which content is to be deployed.
  • the patch system 3506 can identify, based on data in the deployment package 3508 , a mixed table named “TAB” 3509 for which a patch is to be deployed to the read-only portion of the mixed table in the shared database container 3402 .
  • a current version of the read-only portion of the “TAB” table is included in the shared database container 3402 as a read-only table 3403 .
  • the patch system 3506 clones the read-only table 3403 to create a read-only table 3512 that has the same structure as the read-only table 3403 , and publishes a name of the read-only table 3512 to the deployment tool 3516 running at the shared deployment.
  • the read-only table 3512 is named with a target name of “TAB #2”, and is shown with dashed lines to signify that the read-only table 3512 is a new version of the read-only table 3403 .
  • An administration table can be updated to publish the name of the read-only table 3512 . The published name can be used in a later stage when tenants are deployed and connected to the read-only table 3512 .
  • a deployment tool 3516 deploys (e.g., imports) data from the deployment package 3508 to the read-only table 3512 , to deploy the patch to the read-only table 3512 .
  • the read-only table 3512 is read-only with respect to tenant applications, but the deployment tool 3516 has write access to the read-only table 3512 .
  • the deployment tool 3516 can determine content that is to be deployed to the shared database container 3402 only (e.g., and not to tenant database containers).
  • deployment status is stored (e.g., in an administrative table in the shared database container 3402 (not shown)).
  • Deployment status can include an indication that the patch to the TAB table is partially deployed (e.g., changes to the read-only sharable portion of the TAB mixed table have been made in the shared database container 3402 but the writable portion of the TAB mixed table has not yet been updated).
  • the administrative table can include information that indicates, for example, that changes to the writable table 3414 (e.g. named “TAB2”), and other tables, have not yet been deployed.
  • the name of the read-only table 3512 is published to the patch system 3506 running at the tenant deployment, or otherwise made available, as the name of the new version of the read-only table 3403 .
  • the published name is used in later deployment operations, as described in more detail below.
  • the read-only table 3512 remains hidden, and unused by tenant applications, until later operations have been completed.
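  • The bookkeeping at the end of the shared preparation (storing a “partially deployed” status and publishing the target table name for later tenant deployments) might look like the following sketch; the administrative table names, columns, and the DB-API "qmark" parameter style are assumptions:

```python
# Sketch of shared-side bookkeeping (FIG. 35): record deployment status and
# publish the new table name so tenant deployments can re-point their views.

def record_shared_deployment(cursor, shared_schema, base_name, target_table, patch_id):
    cursor.execute(
        f'INSERT INTO "{shared_schema}"."DEPLOY_STATUS" (PATCH_ID, TABLE_NAME, STATUS) '
        f'VALUES (?, ?, ?)',
        (patch_id, base_name, "partially deployed"),   # shared part done, tenant parts pending
    )
    cursor.execute(
        f'INSERT INTO "{shared_schema}"."PUBLISHED_TABLES" (BASE_NAME, TARGET_TABLE) '
        f'VALUES (?, ?)',
        (base_name, target_table),                     # e.g. ("TAB", "TAB#2")
    )
```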
  • FIG. 36 illustrates a system 3600 for deploying a patch to a tenant database container.
  • the system 3600 is a view of the system 3400 during a second set of deployment operations, for deploying the patch to the first tenant database container 3404 .
  • the second set of deployment operations are outlined in a flowchart 3602 .
  • a downtime period can be initiated for the first tenant database container 3404 .
  • shared tables that have been prepared, and partially deployed, are identified, and a drop view statement is created.
  • the patch system 3506 can identify that the read-only table 3512 has been prepared as a new version of the read-only table 3403 .
  • a drop view statement can be prepared to drop a view to the read-only table 3403 .
  • a create view statement is computed, by reading, and including in the create view statement, a published target name of the read-only table 3512 .
  • the previously-computed drop view statement and create view statement are executed.
  • the drop view statement drops a view in the first tenant database container 3404 to the read-only table 3403 . Accordingly, there is now no arrow (e.g., arrow 3409 on prior figures) originating from the first tenant database container 3404 and ending at the read-only table 3403 .
  • the create view statement creates a new view 3612 to the read-only table 3512 (e.g., illustrated by an arrow 3613 ).
  • the deployment tool 3516 deploys content to the first tenant database container 3404 .
  • the deployment tool 3516 can deploy content from the deployment package 3508 to one or more writable tables included in the first tenant database container 3404 , as illustrated by an updated writable table 3616 .
  • content from the deployment package 3508 can be deployed to a writable table that includes tenant-local content associated with the mixed table corresponding to the read-only table 3512 , as illustrated by an updated writable table 3618 .
  • the deployment tool 3516 can determine content in the deployment package 3508 that has not been deployed to the shared database container 3402 and that is to be deployed to tenants.
  • local table structure(s) and union view(s) are updated.
  • the union view 3412 of FIG. 34 can be updated to connect to the new view 3612 and the updated writable table 3618 , as illustrated by an updated union view 3622 .
  • structure of the updated writable table 3616 and/or the updated writable table 3618 can be updated, according to data in the deployment package 3508 .
  • FIG. 37 illustrates a system 3700 for deploying a patch to a tenant database container.
  • the system 3700 is a view of the system 3400 during a third set of deployment operations, for deploying the patch to the second tenant database container 3406 .
  • a downtime period can be initiated for the second tenant database container 3406 .
  • Deployment of the patch to the second database container 3406 can include the same or similar operations as done for the first database container, as outlined in the flowchart 3602 , but for the second database container 3406 .
  • a view in the second database container 3406 to the read-only table 3403 can be dropped (e.g., the arrow 3417 shown on prior figures is no longer included in FIG. 37 ).
  • a new view 3702 can be created, to the read-only table 3512 , as illustrated by an arrow 3704 .
  • Content can be deployed to writable tables, and writable table structures can be altered, as illustrated by an updated writable table 3706 and an updated writable table 3708 .
  • a union view can be updated to provide unified access to the new view 3702 and the updated writable table 3708 , as illustrated by an updated union view 3710 .
  • downtime for the second tenant can be ended, with the second tenant database container 3406 successfully configured with deployed changes and updated connections to the read-only table 3512 .
  • the new view 3702 , the arrow 3704 , the updated writable table 3706 , the updated writable table 3708 , and the updated union view 3710 are illustrated in dashed lines to signify completion of the patch deployment for the second tenant database container 3406 .
  • FIG. 38 illustrates a system 3800 for performing finalization of a deployment.
  • the system 3800 is a view of the system 3400 during a fourth set of deployment operations, for performing a finalization/clean up phase.
  • the fourth set of operations are outlined in a flowchart 3802 .
  • a determination is made as to whether the patch has been deployed to all registered tenants.
  • old shared table(s) that are no longer used are dropped.
  • the patch system 3506 can drop the read-only table 3403 , since there are no longer any tenants connected to the read-only table 3403 .
  • the name of the read-only table 3403 (e.g., “TAB #1”) is removed from a list of published shared tables.
  • FIG. 39 illustrates a system 3900 after deployment using a hidden preparation of a shared database container technique.
  • the system 3900 is a view of the system 3400 after deployment to all tenants, including the first tenant database container 3404 and the second tenant database container 3406 , has been completed.
  • the shared database container 3402 includes the new version read-only table 3512 and no longer includes the prior version read-only table 3403 .
  • the first tenant database container 3404 and the second database container 3406 include updated components, including connections to the new version read-only table 3512 .
  • FIG. 40 is a flowchart of an example method 4000 for handling unsuccessful tenant deployments. It will be understood that method 4000 and related methods may be performed, for example, by any suitable system, environment, software, and hardware, or a combination of systems, environments, software, and hardware, as appropriate. For example, one or more of a client, a server, or other computing device can be used to execute method 4000 and related methods and obtain any data from the memory of a client, the server, or the other computing device. In some implementations, the method 4000 and related methods are executed by one or more components of the system 100 described above with respect to FIG. 1 . For example, the method 4000 and related methods can be executed by the deployment tool 130 of FIG. 1 .
  • an unsuccessful deployment of a tenant is detected. For example, an error message may be received.
  • the unsuccessful deployment is analyzed.
  • status information can be analyzed that indicates which portions of the deployment have successfully completed or have encountered errors.
  • a determination is made as to whether the problem can be resolved within a predetermined time window. The predetermined time window can be a maximum acceptable length of a downtime window for the tenant, for example. If the problem can be resolved within the predetermined time window, the following operations can be performed.
  • the problem is resolved. For example, a new deployment package can be provided, and/or a system or process can be restarted.
  • the deployment is restarted for the tenant. If a new deployment package has been provided, the new deployment package can be used in the deployment re-attempt.
  • If the problem cannot be resolved within the predetermined time window, the tenant is reverted to a state before the deployment.
  • the tenant is provided to the customer at a release version of the tenant before the start of the deployment, so that the tenant can be online while the problem is being resolved.
  • the problem is resolved while the tenant is online.
  • the deployment is restarted for the tenant.
  • Deployment success can be determined, and the method 4000 can be re-executed if the restart of the deployment did not succeed, as described above.
  • FIG. 41 illustrates a system 4100 for deploying multiple patches to a database system.
  • Tenant-independent downtimes and deployments may result in different tenants connected to different versions at a given point in time, such as if deployments are re-attempted for one or more tenants or if given deployments are still ongoing.
  • Tenants can have overlapping deployment timeframes, either due to planned individual upgrade windows or as a result of a problem and a revoke of a particular tenant deployment.
  • An administrator may desire to deploy a patch to those tenants that are on a new version, even when some other tenants have not yet been upgraded to the new version.
  • the system 4100 can support the deployment of multiple patches to tenants. For example, a deployment of a package “p1” to a cluster of a shared database container and N tenant database containers can be partially completed (e.g., M of the N tenants, M<N, do not have the p1 patch deployed).
  • the system 4100 can support the deployment of a patch “p2”, even though the M tenants do not yet have the p1 patch. It may be desired to react, with a new patch, to a problem that is occurring in one or more tenants who already have the p1 patch, without needing to wait until all tenants have the p1 patch.
  • the system 4100 is an overview showing changes to the system 3400 after different sets of patches have been deployed to different tenant database containers.
  • the shared database container 3402 includes the read-only table 3403 and the read-only table 3512 (e.g., a second version of the read-only table 3403 ).
  • the first tenant associated with the first tenant database container 3404 has been upgraded to version two.
  • the patch system 3506 has created a view 4102 to the version-two read-only table 3512 , and the deployment tool 3516 has deployed content from a patch one deployment package 4104 to the first tenant database container 3404 .
  • a problem may be detected in the second tenant database container 3406 before the patch one deployment package 4104 has been deployed to the second tenant database container 3406 .
  • a patch two deployment package 4106 has been created which includes changes to content, including to the TAB and TAB2 tables, to create a third software version to fix the detected problem.
  • the patch system 3506 can clone the version-two read-only table 3512 to create a version-three read-only table 4108 .
  • the deployment tool 3516 can deploy content from the patch two deployment package 4106 to the version-three read-only table 4108 to deploy shared content included in the new patch.
  • the patch system 3506 can create a view 4110 to the version-three read-only table 4108 .
  • the deployment tool 3516 can deploy tenant content from the patch one deployment file 4104 and the patch two deployment file 4106 to complete the upgrade of the second tenant database container 3406 to the third software version. Later determinations can be made regarding whether the third software version has corrected the problem and whether to upgrade the first tenant database container 3404 , at a later time, to the third software version. Further details of deploying multiple patches are described below with respect to FIGS. 42-48 .
  • FIG. 42 illustrates a system 4200 for preparing a shared database container before deploying multiple patches to a database system.
  • the system 4200 is a view of the system 3400 after a first set of deployment operations have been completed, for preparing for deploying a first patch to the first tenant.
  • the first set of deployment operations are outlined in a flowchart 4202 and are similar to the deployment operations described above for the flowchart 3502 .
  • the patch system 3506 reads a deployment package 4206 to identify shared tables to which content is to be deployed. For example, the patch system 3506 can identify, based on data in the deployment package 4206 , a mixed table named “TAB” 4208 for which a first patch is to be deployed to the read-only portion of the TAB mixed table in the shared database container 3402 .
  • the patch system 3506 can determine a set of tables in the shared container that will receive data from the deployment package 4206 .
  • this set of tables can be referred to as a set st_1.
  • the patch system 3506 can determine a version number for each table in the set st_1, and can determine a maximum version number of those tables.
  • the patch system 3506 clones the read-only table 3403 to create a version-two read-only table 4212 that has the same structure as the read-only table 3403 , and publishes a name of the version-two read-only table 4212 .
  • the version-two read-only table 4212 is named with a target name of “TAB #2”.
  • the patch system 3506 can, for each table in the set st_1, identify, in the shared database container 3402 , a source table named <table-name>#<v_start>, where v_start is a highest version number of tables that have a same base name of <table-name> (for example, the shared database container 3402 may have tables named DOKTL #3, DOKTL #5, and DOKTL #11, so for a table_name of DOKTL, v_start is 11).
  • the patch system 3506 can create a copy of each identified source table to make a respective target table, using a pattern of <table-name>#<v_target1>.
  • the deployment tool 3516 deploys (e.g., imports) data from the deployment package 4206 to the version-two read-only table 4212 , to deploy the first patch to the version-two read-only table 4212 .
  • the deployment tool 3516 can determine content that is to be deployed to the shared database container 3402 only (e.g., and not to tenant database containers). Continuing with the general example, the deployment tool 3516 can deploy content of the deployment package 4206 to each of the target tables <table-name>#<v_target1>, in the shared database container 3402 .
  • deployment status is stored (e.g., in an administrative table in the shared database container 3402 (not shown)).
  • Deployment status can include an indication that the first patch to the TAB table is partially deployed (e.g., changes to the read-only sharable portion of the TAB mixed table have been made in the shared database container 3402 , but the first patch has not yet been applied to the writable portion of the TAB mixed table).
  • the name of the version-two read-only table 4212 is published, or otherwise made available, as the name of the new version of the read-only table 3403 .
  • a version number (e.g., version two) can also be published as a target (e.g., “go to”) version number, for later tenant deployments.
  • the number v_target1 can be passed to a central control tool as a goto-version for the deployment package 4206 , for orchestration of future tenant deployments.
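  • The version bookkeeping described above (finding v_start as the highest existing version of a base name and deriving the clone target <table-name>#<v_target1>) can be sketched as follows; the parsing of the “#<version>” suffix is an assumption about the naming pattern:

```python
# Sketch of the versioned-name bookkeeping of FIG. 42: find v_start as the
# highest existing version of a base name and derive the clone target name.

import re

def current_version(existing_tables, base_name):
    """Return v_start: the highest '#<version>' suffix present for base_name."""
    pattern = re.compile(rf"^{re.escape(base_name)}#(\d+)$")
    versions = [int(m.group(1)) for t in existing_tables if (m := pattern.match(t))]
    return max(versions)

def clone_names(existing_tables, base_name, v_target):
    v_start = current_version(existing_tables, base_name)
    return f"{base_name}#{v_start}", f"{base_name}#{v_target}"   # (source, target)

# Example from the description: DOKTL#3, DOKTL#5 and DOKTL#11 exist, so v_start
# for DOKTL is 11; a hypothetical target version 12 yields the clone DOKTL#12.
tables = ["DOKTL#3", "DOKTL#5", "DOKTL#11", "TAB#1"]
assert clone_names(tables, "DOKTL", 12) == ("DOKTL#11", "DOKTL#12")
assert clone_names(tables, "TAB", 2) == ("TAB#1", "TAB#2")
```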
  • FIG. 43 illustrates a system 4300 for deploying multiple patches to a database system.
  • the system 4300 is a view of the system 3400 after a second set of deployment operations, for deploying a first patch, have been completed during deployment of multiple patches to a database system.
  • the second set of deployment operations are outlined in a flowchart 4302 and are similar to the operations described above for the flowchart 3602 .
  • the patch system 3506 can retrieve a target version number v_target1 for use in deploying tenant content.
  • shared tables that have been prepared, and partially deployed, are identified, and a drop view statement is created.
  • the patch system 3506 can identify that the version-two read-only table 4212 has been prepared as a new version of the read-only table 3403 .
  • a drop view statement can be prepared to drop a view to the read-only table 3403 .
  • the patch system 3506 can determine, in the deployment package 4206 , a complement of what had been deployed from the deployment package 4206 to the shared database container. For example, the patch system 3506 can identify a set of all tables, st_1_all, that are to receive content from the deployment package 4206 . The patch system 3506 can remove, from the set st_1_all, tables that have been deployed in the shared (e.g., the set st_1). The patch system 3506 can determine a remaining set, st_1_rest.
  • the patch system 3506 can identify current views in the tenant database container 3404 that select from a shared table with a version smaller than v_target1. The patch system 3506 can prepare a drop statement for each of those identified current views.
  • a create view statement is computed, by reading, and including in the create view statement, a published target name of the version-two read-only table 4212 .
  • the patch system 3506 can compute, for each of the current views that are to be dropped, the version of the table to be used in a new view, by determining the maximum version of the table that is equal to or smaller than v_target1.
  • the patch system 3506 can prepare a create view statement using the determined version of the table to be used in the new view.
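  • A sketch of the complement and view-version computations described above (determining the remaining table set st_1_rest and selecting, for each new view, the highest shared table version that does not exceed v_target1) is shown below, with assumed data shapes:

```python
# Sketch of the tenant-side computations of FIG. 43 (assumed data shapes).

def remaining_tenant_tables(all_target_tables, shared_deployed):
    """st_1_rest: tables in st_1_all that were not already deployed to the shared container."""
    return sorted(set(all_target_tables) - set(shared_deployed))

def view_source_version(available_versions, v_target):
    """Highest shared table version that is equal to or smaller than v_target."""
    return max(v for v in available_versions if v <= v_target)

assert remaining_tenant_tables({"TAB", "TAB2"}, {"TAB"}) == ["TAB2"]   # tenant-only remainder
assert view_source_version([1, 2], v_target=2) == 2                    # tenant view reads "TAB#2"
```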
  • the previously-computed drop view statement and create view statement are executed.
  • the drop view statement drops a view in the first tenant database container 3404 to the read-only table 3403 . Accordingly, there is now no arrow (e.g., arrow 3409 on prior figures) originating from the first tenant database container 3404 and ending at the read-only table 3403 .
  • the create view statement creates a new view 4310 to the version-two read-only table 4212 (e.g., illustrated by an arrow 4312 ).
  • the deployment tool 3516 deploys content to the first tenant database container 3404 .
  • the deployment tool 3516 can deploy content for the first patch from the deployment package 4206 to one or more writable tables included in the first tenant database container 3404 , as illustrated by an updated writable table 4316 .
  • content from the deployment package 4206 for the first patch can be deployed to a writable table that includes tenant-local content associated with the mixed table corresponding to the version-two read-only table 4212 , as illustrated by an updated writable table 4318 .
  • the deployment tool 3516 can determine content in the deployment package 4206 that has not been deployed to the shared database container 3402 and that is to be deployed to tenants.
  • the deployment tool can deploy content from the deployment package for the tables included in the remaining table set st_1_rest.
  • local table structure(s) and union view(s) are updated.
  • the union view 3412 of FIG. 34 can be updated to connect to the new view 4310 and the updated writable table 4318 , as illustrated by an updated union view 4322 .
  • structure of the updated writable table 4316 and/or the updated writable table 4318 can be updated, according to data in the deployment package 4206 .
  • FIG. 44 illustrates a system 4400 for deploying multiple patches to a database system.
  • the system 4400 is a view of the system 3400 after a third set of deployment operations, for preparing a shared database container for a second patch, have been completed during deployment of multiple patches to a database system.
  • the third set of deployment operations are outlined in a flowchart 4402 and are similar to the operations described above for the flowchart 4202 .
  • the patch system 3506 reads a second patch deployment package 4406 to identify shared tables to which content is to be deployed.
  • the patch system 3506 can determine a set of tables in the shared container that will receive data from the deployment package 4406 . This set of tables can be referred to as a set st_2.
  • the patch system 3506 can determine a version number for each table in the set st_2, and can determine a maximum version number of those tables.
  • the patch system 3506 clones the version-two read-only table 4212 to create a version-three read-only table 4410 that has the same structure as the version-two read-only table 4212 , and publishes a name of the version-three read-only table 4410 .
  • the version-three read-only table 4410 is named with a target name of “TAB #3”.
  • the patch system 3506 can, for each table in the set st_2, identify, in the shared database container 3402 , a source table named <table-name>#<v_start>, where v_start is a highest version number of tables that have a same base name of <table-name>.
  • the patch system 3506 can create a copy of each identified source table to make a respective target table, using a pattern of <table-name>#<v_target2>.
  • the deployment tool 3516 deploys (e.g., imports) data from the second patch deployment package 4406 to the version-three read-only table 4410 , to deploy the second patch to the version-three read-only table 4410 .
  • the deployment tool 3516 can deploy content of the deployment package 4406 to each of the target tables <table-name>#<v_target2>, in the shared database container 3402 .
  • deployment status is stored, (e.g., in an administrative table in the shared database container 3402 (not shown)).
  • Deployment status can include an indication that the second patch to the TAB table is partially deployed.
  • the name of the version-three read-only table 4410 is published, or otherwise made available, as the name of the new version of the read-only table 3403 .
  • a version number (e.g., version three) can also be published as a target (e.g., “go to”) version number, for later tenant deployments.
  • the number v_target2 can be passed to a central control tool as a goto-version for the deployment package 4406 , for orchestration of future tenant deployments of the second patch.
  • FIG. 45 illustrates a system 4500 for deploying multiple patches to a database system.
  • the system 4500 is a view of the system 3400 after a fourth set of deployment operations, for deploying a first and second patch to the second tenant, have been completed during deployment of multiple patches to a database system.
  • the fourth set of operations are similar to the operations described above in the flowchart 4302 , but for deployment of both the first patch and the second patch to the second tenant database container 3406 .
  • a view from the second tenant database container 3406 to the read-only table 3403 has been dropped.
  • a new view 4502 to the version-three read-only table 4410 (illustrated as an arrow 4503 ) has been created.
  • Content has been deployed to an updated writable table 4504 and possibly to an updated writable table 4506 , structure(s) of the updated writable table 4504 and/or the updated writable table 4506 have been updated, and the second tenant database container 3406 now includes an updated union view 4508 .
  • the patch system 3506 can retrieve a target version number v_target2 for use in deploying the deployment packages 4206 and 4406 to the second tenant database container 3406 .
  • the patch system 3506 can determine a first complement of what had been deployed to the shared database container 3402 from the deployment package 4206 , and a second complement of what had been deployed to the shared database container 3402 from the deployment package 4406 , and deploy the first complement and the second complement to the second tenant database container 3406 .
  • FIG. 46 illustrates a system 4600 for deploying multiple patches to a database system.
  • the system 4600 is a view of the system 3400 after a fifth set of deployment operations, for deploying the second patch to the first tenant, have been completed during deployment of multiple patches to a database system.
  • a determination can be made to deploy the second patch to the first tenant, for example, based on a determination that the second patch successfully resolves an earlier problem identified for the second tenant.
  • the fifth set of operations are similar to the operations described above in the flowchart 4302 , but for deployment of the second patch to the first tenant database container 3404 , using the second patch deployment package 4406 .
  • a view from the first tenant database container 3404 to the version-two read-only table 4212 has been dropped.
  • a new view 4602 to the version-three read-only table 4410 (illustrated as an arrow 4503 ) has been created.
  • Content has been deployed to an updated writable table 4604 and possibly to an updated writable table 4606 , structure(s) of the updated writable table 4604 and/or the updated writable table 4606 have been updated, and the first tenant database container 3404 now includes an updated union view 4608 .
  • FIG. 47 illustrates a system 4700 for deploying multiple patches to a database system.
  • the system 4700 is a view of the system 3400 after a sixth set of deployment operations, for finalizing a deployment, have been completed during deployment of multiple patches to a database system.
  • the sixth set of deployment operations are outlined in a flowchart 4702 .
  • old shared tables that are no longer being used are dropped.
  • the patch system 3506 can drop the read-only table 3403 and the version-two read-only table 4212 since those tables are no longer connected to any tenants.
  • examples of old shared table names that are dropped in this way are the read-only table 3403 and the version-two read-only table 4212 .
  • FIG. 48 illustrates a system 4800 after deployment of multiple patches to a database system has completed.
  • the system 4800 is a view of the system 3400 after deployment of multiple patches to all tenants, including the first tenant database container 3404 and the second tenant database container 3406 , has been completed.
  • the shared database container 3402 no longer includes the read-only table 3403 and the version-two read-only table 4212 , since all tenants are now connected to the version-three read-only table 4410 .
  • the new shared database container includes any structure, sharing type, or key pattern changes that are part of the changes for the new version.
  • the new shared database container includes tables that are already in the target structure, includes an updated key pattern configuration, if needed, and shared tables that are associated with mixed tables include content that adheres to the updated key pattern configuration.
  • a deployment tool can determine what changes are to be made in each tenant, to make each tenant compatible with the new shared database container.
  • the deployment tool can use a combination of a structure change mechanism, a sharing type change mechanism, and a data split definition (key pattern) change mechanism, to re-configure tenants, including using these mechanisms in a prescribed order, depending on the types of changes needed for a particular upgrade, as described in more detail below.
  • table definitions can change due to requirements of the application.
  • a deployment procedure can adjust table structures.
  • a logical “single table” (e.g., from an application point of view) can be implemented in the multi-tenancy system as a table and a view (e.g., for shared read-only tables) or as two tables and a view (e.g., for mixed tables).
  • a change in structure to the logical table may need to be carried through to a multiple-item construct (e.g., a table and a view, two tables and a view) in the multi-tenancy system.
  • a change in sharing type can require moving data from a shared database container to tenant database container(s) and/or from tenant database container(s) to a shared database container.
  • a change in sharing type can also result in the deletion of data from a tenant database container.
  • an application can be configured to support persistency extensibility for key users in a multi-tenancy setup.
  • a customer may, at a given point in time, desire to add custom fields to a table.
  • the table to be changed may currently be a read-only or split table type. Extensions to tables (adding fields) may only be allowed for local table types. Accordingly, the table may need to change from a read-only or split table type to a local table type, in a next release.
  • for a change in data split definition, two types of changes can occur.
  • additional content may need to be shared.
  • an application, or an administrator or developer, may determine that certain records currently stored locally in tenants should instead be shared; a decision can be made to share these records so as to lower total cost of ownership and to speed up change deployments.
  • An application, or an administrator or developer, can change the data split definition accordingly. If a data split definition is changed, stored data may need to be adjusted (e.g., moved) to match the updated definition.
  • the data split definition is a type of contract with an application, to let an application know which values of records can be written to and stored in tenant database containers. If a data split definition changes, data can be moved so that the data split definition consistently describes data stored in tenant database containers (and correspondingly, data stored in the shared database container, e.g., using the complement of the data split definition). Adjusting stored data to match updated data split definitions can avoid uniqueness constraint violations, data loss, and other issues.
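  • As a purely illustrative sketch (the table name TAB and the key pattern are assumptions, not taken from any specific release), a data split definition can be expressed as a WHERE clause over record keys, and a consistency check for a tenant's writable portion can then be written directly in SQL:
      -- assumed split definition for the mixed table TAB:
      --   tenant-writable records:    KEY LIKE 'Y%' OR KEY LIKE 'Z%'
      --   shared (read-only) records: NOT (KEY LIKE 'Y%' OR KEY LIKE 'Z%')
      -- this query should return no rows if the tenant's writable portion of TAB
      -- adheres to the split definition
      SELECT * FROM "/W/TAB" WHERE NOT ("KEY" LIKE 'Y%' OR "KEY" LIKE 'Z%');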
  • FIG. 49 is a flowchart of an example method 4900 for applying different types of changes to a multi-tenancy database system. It will be understood that method 4900 and related methods may be performed, for example, by any suitable system, environment, software, and hardware, or a combination of systems, environments, software, and hardware, as appropriate. For example, one or more of a client, a server, or other computing device can be used to execute method 4900 and related methods and obtain any data from the memory of a client, the server, or the other computing device. In some implementations, the method 4900 and related methods are executed by one or more components of the system 100 described above with respect to FIG. 1 . For example, the method 4900 and related methods can be executed by the change management system of FIG. 1 .
  • changes to structure definitions (S), sharing type definitions (T), and key patterns (K) are deployed, to a new shared database container, for a set of tables in a database system.
  • the new shared database container includes tables already in a target structure and includes tables that are now to be shared as defined in the target version of the product (e.g., if a table is changed in sharing type, the new shared database container includes the shared part of the table, or the entire table if the table is now completely shared).
  • a new version of the shared table in the new shared database container includes content consistent with the new split definition.
  • a table in the set of tables is identified, for purposes of computing a set of actions to be executed for the table, for completing a tenant portion of the deployment.
  • if only one type of change applies to the table, that change is executed using the respective structure, sharing type, or key pattern change infrastructure.
  • the sharing type change infrastructure is described below with respect to FIGS. 50-53 .
  • the key pattern change infrastructure is described below with respect to FIG. 54 .
  • the structure change infrastructure, which can be part of or otherwise associated with a data dictionary, can include a mechanism for defining table and view structures.
  • the structure change infrastructure can compute table create statements and table change operations, based on table structures and target definitions.
  • the structure change infrastructure can compute view statements out of a table definition, e.g., a view that selects all fields of a table.
  • the structure change infrastructure can compute view statements for a view in one database container that selects data from another database container and another schema, with the view reading the other database container name and schema definition as an input parameter.
  • the structure change infrastructure can adjust the structure of the writable table in place, in the tenant database container.
  • the structure change infrastructure can drop, in the tenant database container, a view to the old table in the old shared container and create a view, in the tenant database container, to the new table in the new shared database container, with the new view having a new structure (as compared to the old, dropped view) that matches the structure of the new read-only table.
  • the structure change infrastructure can: 1) drop, in the tenant database container, a view to the old read-only table portion of the split table in the old shared database container; 2) drop, in the tenant database container, the union view for the split table; 3) adjust the writable table portion of the split table in the tenant database container; and 4) create a new union view, in the tenant database container, with the union view having a new structure that is the union of the structure of a new read-only table portion of the split table in the shared database container and the adjusted writable table portion of the split table in the tenant database container.
  • the change to the sharing type definition is executed using the sharing type change infrastructure, including integration of the change to the structure definition by the sharing type change infrastructure.
  • the structure definition is changed first using the structure change infrastructure.
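  • As a minimal sketch of the statements such an infrastructure could compute (the container name SHARED and the table names T1 and TAB are assumptions for illustration), a read-only view and a union view for a split table might look as follows:
      -- view in the tenant container that selects all fields of a shared read-only table,
      -- with the shared container name supplied as an input parameter
      CREATE VIEW "T1" AS SELECT * FROM "SHARED"."T1";
      -- union view for a split table: shared read-only portion plus tenant-local writable portion
      CREATE VIEW "TAB" AS
        SELECT * FROM "SHARED"."TAB"
        UNION ALL
        SELECT * FROM "/W/TAB";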
  • FIG. 50 is a flowchart of an example method 5000 for changing a sharing type of one or more tables. It will be understood that method 5000 and related methods may be performed, for example, by any suitable system, environment, software, and hardware, or a combination of systems, environments, software, and hardware, as appropriate. For example, one or more of a client, a server, or other computing device can be used to execute method 5000 and related methods and obtain any data from the memory of a client, the server, or the other computing device. In some implementations, the method 5000 and related methods are executed by one or more components of the system 100 described above with respect to FIG. 1 . For example, the method 5000 and related methods can be executed by the sharing type change infrastructure 140 of FIG. 1 .
  • a new shared database container is received with a new set of shared tables that has differences in sharing types for at least some of the new set of shared tables as compared to an old set of tables in an old shared container.
  • a target definition of sharing types is received for the new set of tables.
  • the target definition can include changes to sharing type for one or more tables.
  • a desire to change a sharing type can occur, for example, if a determination is made that remote access of shared data by a tenant has unacceptable performance (e.g., a shared table may be used in a complex view).
  • a desired change may be to make a currently-shared table a local table to improve performance.
  • a decision can be made to share more tables than are currently being shared, or to allow for more extensions to tables, which can result in more tables being defined as local tables.
  • a change in sharing type can require more tables, fewer tables, or new tables to be stored in the shared database container.
  • a current sharing type is compared to a target sharing type for each table in a tenant container.
  • six different types of sharing type changes can be identified, including: 1) from shared read-only to local (R ⁇ L); 2) from shared read-only to split (R ⁇ W); 3) from local to shared read-only (L ⁇ R); 4) from local to split (L ⁇ W); 5) from split to shared read-only (W ⁇ R); and 6) from split to local (W ⁇ L).
  • table content and access logic are changed in the tenant container, for each table, to reflect the new sharing type of the respective table.
  • Modifying table content and access logic can include: deleting content in the tenant and linking to content in the shared database container; copying content from the shared database container to the tenant database container and removing link(s) to the shared database container; splitting data by copying tenant data to a new table and creating a union view on tenant and shared data; and merging data by copying shared data to the tenant database container and removing a union view. Further, more-specific details of changing from one sharing type to another sharing type are described below with respect to FIGS. 51 to 53 .
  • FIG. 51 is a table 5100 that illustrates a transition from a first table type to a second, different table type.
  • a table of type local 5102 (“L”) can be converted to a table of type shared read-only 5104 (“R”) or split 5106 (“W”, with split being another term for a mixed table).
  • a table of type shared read-only 5108 can be converted to a table of type local 5110 or the type split 5106 .
  • a table of type split 5112 can be converted to a table of the type shared read-only 5104 or the type local 5110 .
  • a conversion from the table type shared read-only 5108 to the table type split 5106 can include processing operations of dropping a view to a shared table 5114 a , creating a “/W/TAB” tenant-local table 5114 b , and creating a union view 5114 c .
  • FIG. 52 illustrates a system 5200 which includes a first system 5202 that is at a first version and a second system 5204 that is at a second, later version.
  • a tenant container 5206 included in the first system 5202 includes a read-only view 5208 on a shared table 5210 that is included in a shared container 5212 , with the read-only view 5208 and the shared table 5210 being an implementation of the shared read-only table type 5108 .
  • a “:R” indicator in the “T1:R” label for the shared read-only table 5210 indicates that the shared read-only table 5210 is part of a shared read-only implementation.
  • a conversion is performed to change an implementation of the shared read-only table type 5108 to an implementation of the split table type 5106 in the second system 5204 .
  • the read-only view 5208 is dropped (e.g., processing operation 5114 a ).
  • the read-only view 5208 is not included in a tenant container 5214 in the second system 5204 .
  • a writable table 5216 (e.g., “/W/T1”) is created in the tenant container 5214 (e.g., processing operation 5114 b ).
  • a union view 5218 is created in the tenant container 5214 for the writable table 5216 and a shared table 5220 in a shared container 5221 (e.g., processing operation 5114 c , with the shared table 5220 corresponding to the shared table 5210 ).
  • the writable table 5216 , the union view 5218 , and the shared table 5220 are an implementation of the split table type 5106 in the second system 5204 .
  • a “:W” indicator in the “T1:W” label for the shared table 5220 and in the “/W/T1:W” label for the writable table 5216 respectively indicate that the shared table 5220 and the writable table 5216 are part of a split table implementation. If a table structure change is to be performed for the table as well as the sharing type change, the table structure change can be performed on the local table after the sharing type change has completed.
  • FIG. 53 illustrates conversions between various table types.
  • the conversions between table types include a conversion from the shared read-only type 5108 (“R”) to the split table type 5106 (“W”).
  • a prior-version system 5302 includes an implementation of a shared read-only type, as a read-only view 5304 in a tenant container 5306 and a shared table 5308 in a shared container 5310 .
  • a current-version system 5312 illustrates content of the prior-version system 5302 after a conversion from the shared read-only type 5108 (“R”) to the split table type 5106 (“W”).
  • the read-only view 5304 has been dropped, a writable table 5314 has been created in a tenant container 5316 (the tenant container 5316 being a post-conversion illustration of the tenant container 5306 ), and a union view 5317 has been created in the tenant container 5316 to provide access to the writable table 5314 and a shared table 5318 in a shared container 5319 (with the shared table 5318 corresponding to the shared table 5308 and the shared container 5319 being a post-conversion illustration of the shared container 5310 ).
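  • A hedged SQL sketch of this shared read-only to split conversion (the column list and the SHARED_NEW container name are assumptions for illustration) could be:
      -- drop the read-only view to the old shared table (processing operation 5114 a)
      DROP VIEW "T1";
      -- create the tenant-local writable table (processing operation 5114 b)
      CREATE TABLE "/W/T1" ("KEY" NVARCHAR(10) PRIMARY KEY, "DATA" NVARCHAR(100));
      -- create the union view over the shared and writable portions (processing operation 5114 c)
      CREATE VIEW "T1" AS
        SELECT * FROM "SHARED_NEW"."T1"
        UNION ALL
        SELECT * FROM "/W/T1";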
  • a conversion from the shared read-only table type 5108 to the local table type 5110 can include processing operations of dropping a view 5116 a , creating a table 5116 b , and copying data from a shared table 5116 c .
  • the tenant container 5206 includes a read-only view 5222 on a shared table 5224 that is included in the shared container 5212 , with the read-only view 5222 and the shared table 5224 being an implementation of the shared read-only table type 5108 in the first system 5202 . If a table structure change is to be performed for the table as well as the sharing type change, the table structure change can be performed on the local table after the sharing type change has completed.
  • an implementation of the shared read-only table type 5108 is changed to be an implementation of the local table type 5110 in the second system 5204 .
  • the read-only view 5222 is dropped (e.g., processing operation 5116 a ).
  • the read-only view 5222 is not included in the tenant container 5214 in the second system 5204 .
  • a local table 5226 (e.g., “T2”) is created in the tenant container 5214 (e.g., processing operation 5116 b ). Data is copied from the shared table 5224 to the created local table 5226 .
  • the local table 5226 is an implementation of the local table type 5110 in the second system 5204 , as indicated by a “:L” in the “T2:L” label for the local table 5226 .
  • the shared table 5224 is dropped after data is copied to the local table 5226 .
  • FIG. 53 includes another illustration of a conversion from the shared read-only table type 5108 (“R”) to the local table type 5110 (“L”).
  • a prior-version system 5320 includes an implementation of a shared read-only type, as a read-only view 5322 in a tenant container 5324 and a shared table 5326 in a shared container 5328 .
  • a current-version system 5330 illustrates content of the prior-version system 5320 after a conversion from the shared read-only type 5108 (“R”) to the local table type 5110 (“L”).
  • the read-only view 5322 has been dropped, a local table 5331 has been created in a tenant container 5332 (the tenant container 5332 being a post-conversion illustration of the tenant container 5324 ), data has been copied from the shared table 5326 to the local table 5331 (e.g., as illustrated by an arrow 5333 ), and the shared table 5326 has been dropped after completion of the data copy operation (e.g., there is no shared table in a shared container 5334 that is a post-conversion illustration of the shared container 5328 ).
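  • A corresponding SQL sketch for the shared read-only to local conversion (the table structure and container name are assumptions for illustration) could be:
      -- drop the read-only view (processing operation 5116 a)
      DROP VIEW "T2";
      -- create the local table in the tenant container (processing operation 5116 b)
      CREATE TABLE "T2" ("KEY" NVARCHAR(10) PRIMARY KEY, "DATA" NVARCHAR(100));
      -- copy the data from the shared table (processing operation 5116 c)
      INSERT INTO "T2" SELECT * FROM "SHARED"."T2";
      -- once no tenant references the shared table any longer, it can be dropped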
  • a conversion from the split table type 5112 to the shared read-only table type 5104 can include processing operations of dropping a local table 5118 a , dropping a union view 5118 b , and creating a view to a shared table 5118 c .
  • the tenant container 5206 includes a union view 5228 and a local table 5230 and the shared container 5212 includes a shared table 5232 , with the union view 5228 , the local table 5230 , and the shared table 5232 being an implementation of the split table type 5112 in the first system 5202 .
  • an implementation of the split table type 5112 is changed to be an implementation of the shared read-only table type 5104 in the second system 5204 .
  • the local table 5230 is dropped (e.g., processing operation 5118 a ) and the union view 5228 is dropped (e.g., processing operation 5118 b ).
  • the local table 5230 and the union view 5228 are not included in the tenant container 5214 in the second system 5204 .
  • if the local table 5230 includes content, data from the local table 5230 can be stored in a quarantine table for analysis and potential data retrieval after the deployment.
  • a read-only view 5234 is created in the tenant container 5214 to a shared table 5236 included in the shared container 5221 , with the shared table 5236 corresponding to the shared table 5232 .
  • the read-only view 5234 and the shared table 5236 are an implementation of the shared read-only table type 5104 in the second system 5204 .
  • FIG. 53 includes another illustration of a conversion from the split table type 5112 (“W”) to the shared read-only table type 5104 (“R”).
  • a prior-version system 5336 includes an implementation of the split type, as a union view 5337 in a tenant container 5338 that provides access to a local table 5339 in the tenant container 5338 and a shared table 5340 in a shared container 5341 .
  • a current-version system 5342 illustrates content of the prior-version system 5336 after a conversion from the split table type 5112 (“W”) to the shared read-only table type 5104 (“R”).
  • the local table 5339 and the union view 5337 have been dropped (e.g., the local table 5339 and the union view 5337 do not appear in a tenant container 5343 , the tenant container 5343 being a post-conversion illustration of the tenant container 5338 ).
  • a read-only view 5344 has been created in the tenant container 5343 to provide access to a shared table 5345 in a shared container 5346 (with the shared table 5345 corresponding to the shared table 5340 ).
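  • A hedged SQL sketch of the split to shared read-only conversion (the quarantine table name and column list are assumptions; the exact order of the drop operations can vary) could be:
      -- preserve tenant-local records for analysis and potential retrieval after the deployment
      CREATE TABLE "/QUA/T3" ("KEY" NVARCHAR(10) PRIMARY KEY, "DATA" NVARCHAR(100));
      INSERT INTO "/QUA/T3" SELECT * FROM "/W/T3";
      -- drop the union view and the local (writable) table
      DROP VIEW "T3";
      DROP TABLE "/W/T3";
      -- create a read-only view to the shared table
      CREATE VIEW "T3" AS SELECT * FROM "SHARED"."T3";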
  • a conversion from the split table type 5112 to the local table type 5110 can include processing operations of copying data from a shared table to a local table 5120 a and establishing one table (e.g., as a local table) 5120 b .
  • the tenant container 5206 includes a union view 5238 and a writable table 5240 and the shared container 5212 includes a shared table 5242 , with the union view 5238 , the writable table 5240 , and the shared table 5242 being an implementation of the split table type 5112 in the first system 5202 . If a table structure change is to be performed for the table as well as the sharing type change, the table structure change can be performed on the local table after the sharing type change has completed.
  • an implementation of the split table type 5112 can be changed to be an implementation of the local table type 5110 in the second system 5204 .
  • data is copied from the shared table 5242 to the writable table 5240 (e.g., processing operation 5120 a ).
  • one table is established as a local table in the tenant container 5214 (e.g., processing operation 5120 b ).
  • the shared table 5242 and the union view 5238 can be dropped.
  • the shared table 5242 and the union view 5238 are not included in the tenant container 5214 in the second system 5204 .
  • the writable table 5240 can be renamed, in the tenant container 5214 , e.g., from an alternative name (e.g., “/W/T4”) to a “standard” name (e.g., “T4”), as shown for a writable table 5244 .
  • the writable table 5244 is an implementation of the local table type 5110 in the second system 5204 .
  • FIG. 53 includes another illustration of a conversion from the split table type 5112 (“W”) to the local table type 5110 (“L”).
  • a prior-version system 5350 includes an implementation of the split type, as a union view 5351 in a tenant container 5352 that provides access to a local table 5353 in the tenant container 5352 and a shared table 5354 in a shared container 5355 .
  • a current-version system 5356 illustrates content of the prior-version system 5350 after a conversion from the split table type 5112 (“W”) to the local table type 5110 (“L”).
  • the writable table 5353 has been renamed from “/W/T4” to “T4”, as illustrated by a local table 5357 in a tenant container 5358 (the tenant container 5358 being a post-conversion illustration of the tenant container 5352 ).
  • Data has been copied from the shared table 5354 to the local table 5357 , as illustrated by an arrow 5359 .
  • the shared table 5354 has been dropped.
  • the union view 5351 has also been dropped. For example, the shared table 5354 does not appear in a shared container 5360 in the current-version system 5356 and the union view 5351 does not appear in the tenant container 5358 .
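  • A hedged SQL sketch of the split to local conversion (rename syntax differs between databases; the names are illustrative) could be:
      -- copy the shared records into the tenant-local writable table (processing operation 5120 a)
      INSERT INTO "/W/T4" SELECT * FROM "SHARED"."T4";
      -- drop the union view so that a single table can take over the standard name
      DROP VIEW "T4";
      -- establish one local table under the standard name (processing operation 5120 b)
      RENAME TABLE "/W/T4" TO "T4";
      -- the shared table is dropped once all tenants have been converted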
  • a conversion from the local table type 5102 to the shared read-only table type 5104 can include processing operations of dropping a local table 5122 a and creating a view to a shared table 5122 b .
  • the tenant container 5206 includes a local table 5246 that is an implementation of the local table type 5110 in the first system 5202 .
  • the local table 5246 is dropped (e.g., processing operation 5122 a ).
  • the local table 5246 is not included in the tenant container 5214 in the second system 5204 .
  • data from the local table 5246 can be stored in a quarantine table for analysis and potential data retrieval after the deployment.
  • a read-only view 5248 is created to access a shared table 5250 in the shared container 5221 .
  • the shared table 5250 may already exist in the shared container 5221 (e.g., to service other tenants) or may be created in the shared container 5221 .
  • the read-only view 5248 and the shared table 5250 are an implementation of the shared read-only table type 5104 in the second system 5204 .
  • FIG. 53 includes another illustration of a conversion from the local table type 5110 (“L”) to the shared read-only table type 5104 (“R”).
  • a prior-version system 5362 includes an implementation of the local type, as a local table 5364 in a tenant container 5365 .
  • a current-version system 5366 illustrates content of the prior-version system 5362 after a conversion from the local table type 5110 (“L”) to the shared read-only table type 5104 (“R”).
  • the local table 5364 has been dropped (e.g., the local table 5364 does not appear in a tenant container 5367 in the current-version system 5366 (the tenant container 5367 being a post-conversion illustration of the tenant container 5365 ).
  • a read-only view 5368 has been created in the tenant container 5367 to provide access to a shared table 5369 in a shared container 5370 included in the current-version system 5366 .
  • the shared table 5369 may have already existed in the shared container 5370 (e.g., to service other tenants) or have been created in the shared container 5370 as part of the conversion.
  • a conversion from the local table type 5102 to the split table type 5106 can include processing operations of copying current data according to key patterns to a writable table 5124 a , dropping an old table 5124 b , and creating a union view 5124 c .
  • the tenant container 5206 includes a local table 5252 that is an implementation of the local table type 5110 in the first system 5202 .
  • data is copied from the local table 5252 to a writable table 5254 in the tenant container 5214 (e.g., processing operation 5124 a ).
  • the table 5252 can be temporarily renamed (e.g., to “/OLD/T6”), the writable table 5254 can be created (e.g., with name “/W/T6”), and data can be copied from the local table 5252 to the writable table 5254 according to defined key patterns.
  • the local table 5252 can be dropped (e.g., processing operation 5124 b ).
  • a union view 5256 can be created for the writable table 5254 and a shared table 5258 in the shared container 5221 (e.g., processing operation 5124 c ).
  • the shared table 5258 may already exist in the shared container 5221 (e.g., to service other tenants) or may be created in the shared container 5221 .
  • the union view 5256 , the shared table 5258 , and the writable table 5254 are an implementation of the split table type 5106 in the second system 5204 . If a table structure change is to be performed for the table as well as the sharing type change, the table structure change can be performed on the writable table 5254 before the union view 5256 is created.
  • FIG. 53 includes another illustration of a conversion from the local table type 5110 (“L”) to the split table type 5106 (“W”).
  • a prior-version system 5372 includes an implementation of the local type, as a local table 5374 in a tenant container 5376 .
  • a current-version system 5378 illustrates content of the prior-version system 5372 after a conversion from the local table type 5110 (“L”) to the split table type 5106 (“W”).
  • the local table 5374 can be renamed (e.g., from “T6” to “/W/T6”), as illustrated by a writable table 5380 in a tenant container 5382 (the tenant container 5382 being a post-conversion illustration of the tenant container 5376 ).
  • a shared table 5384 has been created in a shared container 5385 in the current-version system 5378 .
  • a union view 5386 has been created in the tenant container 5382 , to provide access to the writable table 5380 and the shared table 5384 .
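  • A hedged SQL sketch of the local to split conversion following the copy variant described above (the key pattern, column list, and rename syntax are assumptions for illustration) could be:
      -- temporarily keep the old local table under an alternative name
      RENAME TABLE "T6" TO "/OLD/T6";
      -- create the writable portion and copy records that match the tenant key pattern (processing operation 5124 a)
      CREATE TABLE "/W/T6" ("KEY" NVARCHAR(10) PRIMARY KEY, "DATA" NVARCHAR(100));
      INSERT INTO "/W/T6"
        SELECT * FROM "/OLD/T6" WHERE ("KEY" LIKE 'Y%' OR "KEY" LIKE 'Z%');
      -- drop the old table (processing operation 5124 b)
      DROP TABLE "/OLD/T6";
      -- create the union view over the shared and writable portions (processing operation 5124 c)
      CREATE VIEW "T6" AS
        SELECT * FROM "SHARED"."T6"
        UNION ALL
        SELECT * FROM "/W/T6";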
  • FIG. 54 illustrates a system 5400 for changing tenant keys (e.g., split definition) when exchanging a shared database container.
  • the changing of tenant keys can be performed by a split definition change infrastructure.
  • the split definition change infrastructure includes a mechanism to store split definitions per table in an active and inactive state.
  • the split definition change infrastructure can compute and execute DML (Data Manipulation Language) statements to copy data and delete data so that tables are in accordance with the split definition.
  • a split definition (also referred to as a key pattern) can be expressed as a WHERE clause; the WHERE clause defines which records can be stored in the local table portion of a mixed table, in a tenant database container.
  • the system 5400 includes a version-one shared database container 5402 that includes a tenant keys table 5404 and a read-only table 5406 that is a read-only portion of a mixed table named “TAB”.
  • the read-only table 5406 includes a record 5408 with a key that starts with “A” and a record 5410 with a key that starts with “Y”.
  • the keys of the records 5408 and 5410 are in compliance with a WHERE clause 5411 included in the tenant keys table 5404 .
  • the WHERE clause 5411 defines keys that are allowed to be written for tenants, and a complement of the WHERE clause 5411 defines keys that are allowed to be stored in the read-only table 5406 .
  • the key values of “A*” and “Y*” for the records 5408 and 5410 match the complement of the WHERE clause 5411 , i.e., “NOT (Key like ‘B%’ or Key like ‘Z%’)”. In other words, the keys for the records 5408 and 5410 do not start with either “B” or “Z”.
  • a version-one tenant database container 5412 for a first tenant includes a view 5413 to the tenant keys table 5404 , a view 5414 to the read-only table 5406 , a writable table 5416 that is a writable portion of the “TAB” mixed table, and a union view 5418 to the writable table 5416 and the read-only table 5406 (through the view 5414 ).
  • the writable table 5416 includes a record 5420 with a key that starts with “B” (e.g., matching the WHERE clause 5411 ) and a record 5422 with a key that starts with “Z” (e.g., also matching the WHERE clause 5411 ).
  • developer(s) and/or administrator(s) may determine that the WHERE clause 5411 is now incorrect. For example, a determination may be made that records with keys that start with “Y” should no longer be shared (e.g., it may be desired that tenants are able to store local records with keys that start with “Y”). As another example, a determination may be made that records that start with “B” should now be shared (e.g., a determination may be made that tenant applications do not write local records that start with “B”).
  • a version-two shared database container 5424 has been prepared for deployment of a version two of the system 5400 .
  • the version-two shared database container 5424 includes an updated tenant keys table 5426 that includes an updated WHERE clause 5428 that indicates that tenants are allowed to write, to the mixed table named “TAB”, records that have keys that start with either “Y” or “Z”.
  • An updated read-only table 5430 includes records to be shared for the mixed table named “TAB”.
  • the updated read-only table 5430 includes a record 5432 with a key starting with “A” (which may be a copy of the record 5408 ) and a record 5434 with a key starting with “B” (which may be a record that was previously provided to tenants as editable, but is now to be read-only and shared).
  • the records 5432 and 5434 have keys that match the complement of the updated WHERE clause 5428 .
  • the record 5434 may be the same as or different than the record 5420 .
  • the first tenant may have modified the record 5420 after the record 5420 was first provided to the first tenant.
  • An upgrade process can be used to upgrade tenant database containers to version two of the system 5400 .
  • a version-two tenant database container 5440 has been upgraded to version two and is now connected to the version-two shared database container 5424 .
  • the version-two tenant database container 5440 includes a view 5442 to the updated tenant keys table 5426 , an updated writable table 5444 , an updated view 5446 to the updated read-only table 5430 , and an updated union view 5448 .
  • the updated writable table 5444 includes a record 5450 with a key starting with “Y” (e.g., compatible with the updated WHERE clause 5428 ) and a record 5452 with a key starting with “Z” (e.g., also compatible with the updated WHERE clause 5428 ).
  • note that the version-two tenant database container 5440 was the same as the version-one tenant database container 5412 before being upgraded to version two, and accordingly, the version-one tenant database container 5412 can be viewed, for purposes of discussion, as a pre-deployment view of the version-two tenant database container 5440 .
  • a deployment tool can determine what to change in the version-one tenant database container 5412 during an upgrade of the version-one tenant database container 5412 to version two.
  • the deployment tool can identify records in the read-only table 5406 that are to be moved from the read-only table 5406 to the writable table 5416 (e.g., records that used to be shared and that are no longer to be shared).
  • the deployment tool can execute the following insert statement, to move records from the read-only table 5406 to the writable table 5416 (assuming the name of the shared database container 5402 is “shared_old” and that “ ⁇ new_where_condition>” is the updated WHERE clause 5428 ): INSERT INTO /W/TAB (SELECT * FROM shared_old.TAB WHERE ( ⁇ new_where_condition>)).
  • the insert statement can result in the moving of the record 5410 to the writable table 5416 (e.g., as illustrated in the updated writable table 5444 by the record 5450 ), since the key “Y*” of the record 5410 matches the updated WHERE clause 5428 .
  • the deployment tool can identify records to delete in the writable table 5416 (e.g., records that are no longer allowed to be stored locally as editable records by the first tenant). For example, the deployment tool can execute the following statement to delete records from the writable table 5416 : DELETE FROM /W/TAB WHERE NOT ( ⁇ new_where_condition>). The delete statement can result in deletion of the record 5420 from the writable table 5416 , since the key “B*” of the record 5420 does not match the updated WHERE clause 5428 .
  • a similar record may have been deleted from the updated writable table 5444 during the upgrade of the updated writable table 5444 (e.g., the updated writable table 5444 does not include any records that start with “B”).
  • the record 5420 can be moved to a quarantine location upon being deleted.
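  • Putting the statements above together with the quarantine step (the quarantine table name is an assumption; the key pattern is the updated WHERE clause 5428 written out), the tenant-side adjustment could be sketched as:
      -- move records that are newly tenant-writable from the old shared table to the writable table
      INSERT INTO "/W/TAB"
        (SELECT * FROM "SHARED_OLD"."TAB" WHERE ("KEY" LIKE 'Y%' OR "KEY" LIKE 'Z%'));
      -- optionally preserve records that are no longer tenant-writable before deleting them
      INSERT INTO "/QUA/TAB"
        (SELECT * FROM "/W/TAB" WHERE NOT ("KEY" LIKE 'Y%' OR "KEY" LIKE 'Z%'));
      -- delete records from the writable table that no longer match the key pattern
      DELETE FROM "/W/TAB" WHERE NOT ("KEY" LIKE 'Y%' OR "KEY" LIKE 'Z%');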
  • FIG. 55 is a flowchart of an example method 5500 for redirecting a write query. It will be understood that method 5500 and related methods may be performed, for example, by any suitable system, environment, software, and hardware, or a combination of systems, environments, software, and hardware, as appropriate. For example, one or more of a client, a server, or other computing device can be used to execute method 5500 and related methods and obtain any data from the memory of a client, the server, or the other computing device. In some implementations, the method 5500 and related methods are executed by one or more components of the system 100 described above with respect to FIG. 1 . For example, the method 5500 and related methods can be executed by the write redirecter 128 of FIG. 1 .
  • access to a database system is provided to at least one application.
  • the at least one application can include one or more tenant applications.
  • Access can be provided by a database interface, for example.
  • a first query is received from the at least one application.
  • the first query can be to retrieve, add, or edit data in the database system.
  • a read query retrieves but does not modify or add data to the database system.
  • in response to determining that the first query is a read query, the first query is processed using a union view; processing the first query using the union view can include retrieving data from one or both of the read-only table and the writable table.
  • in response to determining that the first query is not a read query (e.g., the first query is a write query), the first query is modified to use the writable table, rather than the union view.
  • the write query is thus redirected to use the writable table rather than the read-only union view.
  • the first query is processed using the writable table.
  • Processing the first query using the writable table can include modifying or adding data to the writable table.
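  • As a hedged illustration of the redirection (the statement, key value, and column names are assumptions), a write query issued against the logical table can be rewritten to target the writable table, while read queries continue to use the union view:
      -- write query as issued by the application against the logical table (the union view)
      INSERT INTO "TAB" ("KEY", "DATA") VALUES ('Z100', 'tenant value');
      -- redirected form of the same query, executed against the writable table
      INSERT INTO "/W/TAB" ("KEY", "DATA") VALUES ('Z100', 'tenant value');
      -- read queries are not modified and are processed using the union view
      SELECT * FROM "TAB" WHERE "KEY" = 'Z100';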
  • FIG. 56 is a flowchart of an example method 5600 for key pattern management. It will be understood that method 5600 and related methods may be performed, for example, by any suitable system, environment, software, and hardware, or a combination of systems, environments, software, and hardware, as appropriate. For example, one or more of a client, a server, or other computing device can be used to execute method 5600 and related methods and obtain any data from the memory of a client, the server, or the other computing device. In some implementations, the method 5600 and related methods are executed by one or more components of the system 100 described above with respect to FIG. 1 . For example, the method 5600 and related methods can be executed by the constraint enforcement system 126 of FIG. 1 .
  • access to a database system is provided to at least one application.
  • At 5604, at least one query for a logical database table is received from the at least one application.
  • the logical database table is represented in the database system as a first physical database table that includes records of the logical database table that are allowed to be written by the at least one application and a second physical database table that includes records of the logical database table that are allowed to be read but not written by the at least one application.
  • the write query is configured to modify or add data to the database system.
  • the key pattern configuration describes keys of records that are included in or may be included in (e.g., added to) the first physical database table.
  • the write query is redirected to the first physical database table.
  • Redirecting can include modifying the write query to use the first physical database table rather than the logical database table.
  • the write query is rejected. Rejecting the write query can prevent records being added to the first physical database table that do not comply with the key pattern configuration.
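  • As a hedged illustration (the key values, column names, and key pattern are assumptions), with a key pattern configuration of “KEY LIKE 'Y%' OR KEY LIKE 'Z%'” for the first physical database table, the two outcomes could look like:
      -- key 'Y200' complies with the key pattern: the write is redirected to the writable table
      INSERT INTO "/W/TAB" ("KEY", "DATA") VALUES ('Y200', 'allowed');
      -- key 'A200' does not comply with the key pattern: the following write is rejected
      -- and is not executed against either physical table
      -- INSERT INTO "TAB" ("KEY", "DATA") VALUES ('A200', 'rejected');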
  • FIG. 57 is a flowchart of an example method 5700 for transitioning between system sharing types. It will be understood that method 5700 and related methods may be performed, for example, by any suitable system, environment, software, and hardware, or a combination of systems, environments, software, and hardware, as appropriate.
  • a client, a server, or other computing device can be used to execute method 5700 and related methods and obtain any data from the memory of a client, the server, or the other computing device.
  • the method 5700 and related methods are executed by one or more components of the system 100 described above with respect to FIG. 1 .
  • the method 5700 and related methods can be executed by the system sharing type modifier 148 of FIG. 1 .
  • a request is received to convert a database system from a standard system setup to a shared system setup.
  • the database system includes a tenant database container.
  • the tenant database container includes, before conversion of the database system from the standard system setup to the shared system setup: a read-only table for storing read-only data that is read but not written by application(s); a first writable table for storing writable data that is read and written by the application(s); and a mixed table for storing read-only mixed data that is read but not written by the application(s) and writable mixed data that is read and written by the application(s).
  • although a single read-only table, a single writable table, and a single mixed table are described, the tenant database container can include any combination of tables of various types.
  • a shared database container is created, for storing shared content used by multiple tenants.
  • a first shared table is created in the shared database container, for storing the read-only data that is read but not written by applications.
  • data is copied from the read-only table to the first shared table.
  • the read-only table is dropped from the tenant database container.
  • a read-only view is created in the tenant database container, for providing read access to the first shared table.
  • a second shared table is created in the shared database container, for storing the read-only mixed data.
  • the read-only mixed data is copied from the mixed table to the second shared table.
  • the read-only mixed data is deleted from the mixed table.
  • the mixed table is renamed to be a second writable table.
  • a union view is created to provide unified access to the second shared table and the second writable table.
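  • A hedged end-to-end SQL sketch of this conversion for one read-only table (TABR) and one mixed table (TABM) follows; the container name SHARED, the table names, the key pattern, and the CREATE TABLE ... AS (SELECT ...) and rename syntax are assumptions that vary by database:
      -- read-only table: move the data into the shared container and replace the table by a view
      CREATE TABLE "SHARED"."TABR" AS (SELECT * FROM "TABR");
      DROP TABLE "TABR";
      CREATE VIEW "TABR" AS SELECT * FROM "SHARED"."TABR";
      -- mixed table: move the read-only portion into the shared container,
      -- keep the tenant-writable portion locally, and create a union view
      CREATE TABLE "SHARED"."TABM" AS
        (SELECT * FROM "TABM" WHERE NOT ("KEY" LIKE 'Y%' OR "KEY" LIKE 'Z%'));
      DELETE FROM "TABM" WHERE NOT ("KEY" LIKE 'Y%' OR "KEY" LIKE 'Z%');
      RENAME TABLE "TABM" TO "/W/TABM";
      CREATE VIEW "TABM" AS
        SELECT * FROM "SHARED"."TABM"
        UNION ALL
        SELECT * FROM "/W/TABM";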
  • FIG. 58 is a flowchart of an example method 5800 for exchanging a shared database container. It will be understood that method 5800 and related methods may be performed, for example, by any suitable system, environment, software, and hardware, or a combination of systems, environments, software, and hardware, as appropriate. For example, one or more of a client, a server, or other computing device can be used to execute method 5800 and related methods and obtain any data from the memory of a client, the server, or the other computing device. In some implementations, the method 5800 and related methods are executed by one or more components of the system 100 described above with respect to FIG. 1 . For example, the method 5800 and related methods can be executed by the deployment tool 130 of FIG. 1 .
  • a request to deploy a new version of a database system is received.
  • a deployment package is received that includes data for the new version of the database system.
  • a next-version shared database container is installed in the database system in parallel to a current-version shared database container.
  • the new version is deployed to each of multiple tenant database containers.
  • Deploying the new version to each of the multiple tenant database containers includes individually linking, at 5810 , each of the multiple tenant database containers to the next-version shared database container.
  • the linking can include dropping at least one view in each respective tenant database container to shared content in the current-version shared database container and adding at least one new view in each respective tenant database container to the updated shared content in the next-version shared database container.
  • Deploying the new version to each of the multiple tenant database containers includes, at 5812 , deploying, from the deployment package, changed local content to each tenant database container.
  • the current-version shared database container is dropped, after deployment to each of the multiple tenant database containers has completed.
  • FIG. 59 is a flowchart of an example method 5900 for patching a shared database container. It will be understood that method 5900 and related methods may be performed, for example, by any suitable system, environment, software, and hardware, or a combination of systems, environments, software, and hardware, as appropriate. For example, one or more of a client, a server, or other computing device can be used to execute method 5900 and related methods and obtain any data from the memory of a client, the server, or the other computing device. In some implementations, the method 5900 and related methods are executed by one or more components of the system 100 described above with respect to FIG. 1 . For example, the method 5900 and related methods can be executed by the patching system 146 of FIG. 1 .
  • a first deployment package is received for an upgrade of a database system to a second software version.
  • the upgrade can include deployment to a shared database container and one or more tenant database containers.
  • shared objects that are completely stored in the shared database container are identified, from information in the deployment package.
  • first shared content for the shared objects in the deployment package is determined.
  • partially-shared objects that have a shared portion in the shared database container and a tenant portion in the tenant database container are identified.
  • second shared content for the partially-shared objects in the deployment package is determined.
  • the determined first shared content and the determined second shared content is deployed to the shared database container as deployed shared content.
  • first local content for the partially-shared objects in the deployment package is determined.
  • the first local content is deployed to respective tenant database containers.
  • second local content for the local objects in the deployment package is identified.
  • the second local content is deployed to the respective tenant database containers.
  • FIG. 60 is a flowchart of an example method 6000 for deploying different types of changes to a database system. It will be understood that method 6000 and related methods may be performed, for example, by any suitable system, environment, software, and hardware, or a combination of systems, environments, software, and hardware, as appropriate. For example, one or more of a client, a server, or other computing device can be used to execute method 6000 and related methods and obtain any data from the memory of a client, the server, or the other computing device. In some implementations, the method 6000 and related methods are executed by one or more components of the system 100 described above with respect to FIG. 1 . For example, the method 6000 and related methods can be executed by the change management system 134 of FIG. 1 .
  • a table structure and a table sharing type are determined for each table in a current-version shared database container.
  • a table structure and a table sharing type are determined for each table in a next-version shared database container.
  • the table structures of the tables in the current-version shared database container are compared to the table structures of the tables in the next-version shared database container to identify table structure differences.
  • the table sharing types of the tables in the current-version shared database container are compared to the table sharing types of the tables in the next-version shared database container to identify table sharing type differences.
  • a current key pattern configuration associated with the current-version shared database container is compared to an updated key pattern configuration associated with the next-version shared database container to identify key pattern configuration differences.
  • each table in at least one tenant database container is upgraded to a next version based on the table structure differences, the table sharing type differences, and the key pattern configuration differences.
  • FIG. 61 is a flowchart of an example method 6100 for changing key pattern definitions. It will be understood that method 6100 and related methods may be performed, for example, by any suitable system, environment, software, and hardware, or a combination of systems, environments, software, and hardware, as appropriate. For example, one or more of a client, a server, or other computing device can be used to execute method 6100 and related methods and obtain any data from the memory of a client, the server, or the other computing device. In some implementations, the method 6100 and related methods are executed by one or more components of the system 100 described above with respect to FIG. 1 . For example, the method 6100 and related methods can be executed by the split definition change infrastructure of FIG. 1 .
  • a new shared database container that includes a new key pattern configuration is received.
  • the new shared database container is a new version of a current shared database container for storing data accessible to multiple tenants.
  • the new key pattern configuration is a new version of a current key pattern configuration for a logical split table.
  • the logical split table includes a read-only-portion table in the current shared database container and a writable-portion table in a tenant database container. The current key pattern configuration describes keys of records included in the writable-portion.
  • the new shared database container includes an updated read-only-portion for the logical split table that includes records that match a complement of the new key pattern configuration.
  • records that match the new key pattern configuration are identified in the read-only-portion of the logical split table in the current shared database container.
  • the identified records are moved, from the read-only-portion of the logical split table in the current shared database container to the writable-portion of the logical split table included in the tenant database container.
  • records that do not match the new key pattern configuration are deleted from the writable-portion of the logical split table in the tenant database container.
  • system 100 (or its software or other components) contemplates using, implementing, or executing any suitable technique for performing these and other tasks. It will be understood that these processes are for illustration purposes only and that the described or similar techniques may be performed at any appropriate time, including concurrently, individually, or in combination. In addition, many of the operations in these processes may take place simultaneously, concurrently, and/or in different orders than as shown. Moreover, system 100 may use processes with additional operations, fewer operations, and/or different operations, so long as the methods remain appropriate.

Abstract

The present disclosure involves systems, software, and computer implemented methods for key pattern management. One example method includes receiving a query for a logical database table from an application. A determination is made as to whether the query is a write query. In response to determining that the query is a write query, a determination is made as to whether the query complies with a key pattern configuration that describes keys of records included in a physical database table that is part of a logical table implementation. The physical table includes records of the logical database table that are allowed to be written by the application. The write query is redirected to the physical database table in response to determining that the query complies with the key pattern definition. The query is rejected in response to determining that the query does not comply with the key pattern configuration.

Description

CROSS-REFERENCE TO RELATED APPLICATIONS
This application is a co-pending application of U.S. application Ser. No. 15/794,261, filed on Oct. 26, 2017 entitled “SYSTEM SHARING TYPES IN MULTI-TENANCY DATABASE SYSTEMS”; and is also a co-pending application of U.S. application Ser. No. 15/794,305, filed on Oct. 26, 2017 entitled “DATA SEPARATION AND WRITE REDIRECTION IN MULTI-TENANCY DATABASE SYSTEMS”; and is also a co-pending application of U.S. application Ser. No. 15/794,501, filed on Oct. 26, 2017 entitled “TRANSITIONING BETWEEN SYSTEM SHARING TYPES IN MULTI-TENANCY DATABASE SYSTEMS”; and is also a co-pending application of U.S. application Ser. No. 15/794,335, filed on Oct. 26, 2017 entitled “DEPLOYING CHANGES IN A MULTI-TENANCY DATABASE SYSTEM”; and is also a co-pending application of U.S. application Ser. No. 15/794,381, filed on Oct. 26, 2017 entitled “DEPLOYING CHANGES TO KEY PATTERNS IN MULTI-TENANCY DATABASE SYSTEMS”; and is also a co-pending application of U.S. application Ser. No. 15/794,362, filed on Oct. 26, 2017 entitled “EXCHANGING SHARED CONTAINERS AND ADAPTING TENANTS IN MULTI-TENANCY DATABASE SYSTEMS”; and is also a co-pending application of U.S. application Ser. No. 15/794,424, filed on Oct. 26, 2017 entitled “PATCHING CONTENT ACROSS SHARED AND TENANT CONTAINERS IN MULTI-TENANCY DATABASE SYSTEMS”; the entire contents of each and as a whole, are incorporated herein by reference.
TECHNICAL FIELD
The present disclosure relates to computer-implemented methods, software, and systems for key pattern management in multi-tenancy database systems.
BACKGROUND
A multi-tenancy software architecture can include a single instance of a software application that runs on a server and serves multiple tenants. A tenant is a group of users who share a common access to the software instance. In a multitenant architecture, the software application can be designed to provide every tenant a dedicated share of the instance—including tenant-specific data, configuration, user management, and tenant-specific functionality. Multi-tenancy can be used in cloud computing.
SUMMARY
The present disclosure involves systems, software, and computer implemented methods for key pattern management in multi-tenancy database systems. One example method includes receiving a query for a logical database table from an application. A determination is made as to whether the query is a write query. In response to determining that the query is a write query, a determination is made as to whether the query complies with a key pattern configuration that describes keys of records included in a physical database table that is part of a logical table implementation. The physical table includes records of the logical database table that are allowed to be written by the application. The write query is redirected to the physical database table in response to determining that the query complies with the key pattern definition. The query is rejected in response to determining that the query does not comply with the key pattern configuration.
While generally described as computer-implemented software embodied on tangible media that processes and transforms the respective data, some or all of the aspects may be computer-implemented methods or further included in respective systems or other devices for performing this described functionality. The details of these and other aspects and embodiments of the present disclosure are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the disclosure will be apparent from the description and drawings, and from the claims.
DESCRIPTION OF DRAWINGS
FIG. 1 is a block diagram illustrating an example system for multi-tenancy.
FIG. 2 illustrates an example system for an application with a standard database setup.
FIG. 3 illustrates an example non multi-tenancy system in which same content is stored for multiple, different tenants in different database containers.
FIG. 4A illustrates an example system that illustrates the splitting of data for a tenant.
FIG. 4B illustrates an example multi-tenancy system that includes multiple tables of each of multiple table types.
FIG. 4C illustrates an example multi-tenancy system that uses a suffix table naming scheme.
FIGS. 5 and 6 illustrate example systems that include a shared database container, a first tenant database container for a first tenant, and a second tenant database container for a second tenant.
FIG. 7 illustrates a system for constraint enforcement.
FIG. 8 illustrates an example system for deploying content in accordance with configured tenant keys.
FIG. 9 illustrates an example system for changing tenant keys.
FIG. 10 illustrates an example system for updating database records to comply with updated tenant keys.
FIG. 11 illustrates an example system for updating database records to comply with updated tenant keys using a transfer file.
FIG. 12 illustrates an example system for updating an inactive tenant keys record.
FIG. 13A illustrates an example system that includes a standard system with a standard system-sharing type and a shared/tenant system with a shared/tenant system-sharing type.
FIG. 13B is a table that illustrates processing that can be performed for standard, shared, and tenant database containers.
FIG. 14 illustrates a system for transitioning from a standard system to a shared/tenant system.
FIG. 15 illustrates a system with a sharing type of simulated.
FIG. 16 illustrates a system for transitioning from a standard system to a simulated system.
FIG. 17 illustrates a system for transitioning from a simulated system to a shared/tenant system.
FIG. 18 illustrates a system for transitioning from a shared/tenant system to a standard system.
FIG. 19 illustrates a system for transitioning from a simulated system to a standard system.
FIG. 20 illustrates a system that includes data for objects in both a shared database container and a tenant database container.
FIGS. 21A-B illustrate example systems for deploying changes to objects in a database system.
FIG. 22 illustrates an example system for upgrading a multi-tenancy database system using an exchanged shared database container approach.
FIG. 23 illustrates an example system for deploying a new service pack to a multi-tenancy database system.
FIG. 24 illustrates an example system for maintenance of a database system.
FIG. 25 illustrates an example system for upgrading a multi-tenancy system to a new version.
FIG. 26 illustrates an example system before deployment of a new database version using an exchanged shared database container approach.
FIGS. 27-31 are illustrations of example systems that are upgraded in part by exchanging a shared database container.
FIG. 32 illustrates a system for deploying changes to objects.
FIG. 33 illustrates a system for deploying a patch using a hidden preparation of a shared database container.
FIG. 34 illustrates an example system before deployment of a patch.
FIG. 35 illustrates a system for preparation of a shared database container during a deployment of a patch to a database system.
FIGS. 36 and 37 illustrate systems for deploying a patch to a tenant database container.
FIG. 38 illustrates a system for performing finalization of a deployment.
FIG. 39 illustrates a system after deployment using a hidden preparation of a shared database container technique.
FIG. 40 is a flowchart of an example method for handling unsuccessful tenant deployments.
FIG. 41 illustrates a system for deploying multiple patches to a database system.
FIG. 42 illustrates a system for preparing a shared database container before deploying multiple patches to a database system.
FIGS. 43-47 illustrate example systems for deploying multiple patches to a database system.
FIG. 48 illustrates a system after deployment of multiple patches to a database system has completed.
FIG. 49 is a flowchart of an example method for applying different types of changes to a multi-tenancy database system.
FIG. 50 is a flowchart of an example method for changing a sharing type of one or more tables.
FIG. 51 is a table that illustrates a transition from a first table type to a second, different table type.
FIG. 52 illustrates a system which includes a first system that is at a first version and a second system that is at a second, later version.
FIG. 53 illustrates conversions between various table types.
FIG. 54 illustrates a system for changing tenant keys when exchanging a shared database container.
FIG. 55 is a flowchart of an example method for redirecting a write query.
FIG. 56 is a flowchart of an example method for key pattern management.
FIG. 57 is a flowchart of an example method for transitioning between system sharing types.
FIG. 58 is a flowchart of an example method for exchanging a shared database container.
FIG. 59 is a flowchart of an example method for patching a shared database container.
FIG. 60 is a flowchart of an example method for deploying different types of changes to a database system.
FIG. 61 is a flowchart of an example method for changing key pattern definitions.
DETAILED DESCRIPTION
In a multi-tenancy architecture, resources can be shared between applications from different customers. Each customer can be referred to as a tenant. Shared resources can include, for example, vendor code, application documentation, and central runtime and configuration data. Multi-tenancy can enable improved use of shared resources between multiple application instances, across tenants, which can reduce disk storage and processing requirements. Multi-tenancy can enable centralized software change management for events such as patching or software upgrades.
A content separation approach can be used to separate shared data from tenant-specific data. Multi-tenancy approaches can be applied to existing applications that were built without data separation as a design criterion. If multi-tenancy is implemented for an existing system, applications can execute unchanged. Applications can be provided with a unified view on stored data that hides from the application which data is shared and which data is tenant-local. Other advantages are discussed in more detail below.
FIG. 1 is a block diagram illustrating an example system 100 for multi-tenancy. Specifically, the illustrated system 100 includes or is communicably coupled with a database system 102, an end-user client device 104, an administrator client device 105, an application server 106, and a network 108. Although shown separately, in some implementations, functionality of two or more systems or servers may be provided by a single system or server. In some implementations, the functionality of one illustrated system or server may be provided by multiple systems or servers. For example, although illustrated as a single database system 102, the system 100 can include multiple application servers, a database server, a centralized services server, or some other combination of systems or servers.
An end user can use an end-user client device 104 to interact with a client application 110 that is a client version of a server application 112 hosted by the application server 106. In some instances, the client application 110 may be any client-side application that can access and interact with at least a portion of the illustrated data, including a web browser, a specific app (e.g., a mobile app), or another suitable application. The server application 112 can store and modify data in tables provided by a database system. The tables are defined in a data dictionary 114 and reside in shared database containers 116 and/or tenant database containers 118, as described below. The server application 112 can access a database management system 119 using a database interface 120.
The database management system 119 can provide a database that includes a common set of tables that can be used by multiple application providers. Each application provider can be referred to as a customer, or tenant, of the database system. The database system 102 can store tenant-specific data for each tenant. However, at least some of the data provided by the database system 102 can be common data that can be shared by multiple tenants, such as master data or other non-tenant-specific data. Accordingly, common, shared data can be stored in one or more shared database containers 116 and tenant-specific data can be stored in one or more tenant database containers 118 (e.g., each tenant can have at least one dedicated tenant database container 118). As another example, a shared database container 116 can store common data used by multiple instances of an application and the tenant database containers 118 can store data specific to each instance.
A data split and sharing system 122 can manage the splitting of data between the shared database containers 116 and the tenant database containers 118. The shared database containers 116 can include shared, read-only tables that include shared data, where the shared data can be used by multiple tenants as a common data set. The tenant database containers 118 can include writable tables that store tenant-specific data that may be modified by a given tenant. Some application tables, referred to as mixed, or split, tables, may include both read-only records that are common and are shared among multiple tenants and writable records that have been added for a specific tenant, or that are editable by or for a specific tenant before and/or during interactions with the system. Rather than store a separate mixed table for each tenant, the read-only records of a mixed table can be stored in a shared, read-only portion in a shared database container 116. Writable mixed-table records that may be modified by a given tenant can be stored in a writable portion in each tenant database container 118 of each tenant that uses the application. Data for a given object can be split across tables of different types. The data split and sharing system 122 can enable common portions of objects to be stored in a shared database container 116. The data dictionary 114 can store information indicating which tables are shared, whether fully or partially.
The server application 112 can be designed to be unaware of whether multi-tenancy has been implemented in the database system 102. The server application 112 can submit queries to the database system 102 using a same set of logical table names, regardless of whether multi-tenancy has been implemented in the database system 102 for a given tenant. For example, the server application 112 can submit a query using a logical name of a mixed table, and the database system 102 can return query results, regardless of whether the mixed table is a single physical table when multi-tenancy has not yet been implemented, or whether the mixed table is represented as multiple tables, including a read-only portion and a writable portion, in different database containers.
The multi-tenancy features implemented by the data split and sharing system 122 can allow an application to be programmed to use a single logical table for mixed data storage while still allowing the sharing of common vendor data between different customers. An application that has not been previously designed for data sharing and multi-tenancy can remain unchanged after implementation of multi-tenancy. The data sharing provided by multi-tenancy can reduce data and memory footprints of an application deployment.
Storing data for the mixed table in multiple physical tables can introduce potential problems, such as a possibility of duplicate records. A constraint enforcement system 126 can be used to define key patterns that describe which records are allowed to be stored in a writable portion for a given mixed table, which can be used to prevent duplicate records. The database interface 120 can be configured to determine that an incoming query is a write query for a mixed table that is represented as multiple physical tables in the database system 102, and in response, use a write redirecter 128 to ensure that the write query operates only on the writable portion of the mixed table. The use of write redirection and key patterns can help with enforcement of data consistency, both during application operation and during content deployment done by a deployment tool 130.
The deployment tool 130 can be used, for example, to deploy new content for the database system 102 after installment of tenant applications. An administrator can initiate a deployment using a deployment administrator application 132 on an administrator client device 105, for example.
Other than new data, other changes can be deployed to the database system 102 for an application. For example, for a new software version one or more of the following can occur: new content, changes to content, deletion of content, changes to table structure, changes to which tables are shared and which tables are not shared, and changes to key pattern definitions that define which content records are shared and which are tenant local. The deployment tool 130 can use a change management system 134 to determine how to make each of the required changes. The change management system 134 includes infrastructures for managing and making different types of changes. For example, the change management system includes a structure change infrastructure 136 for managing table structure changes, a split definition infrastructure 138 for managing changes to key patterns, and a sharing type change infrastructure 140 for managing changes to which tables are shared among tenants. The change management system 134 can manage when and in which order or combination the respective sub infrastructures are invoked.
When a deployment is for an upgrade or a new feature set, changes can occur to a number of tables used by an application. The deployment tool 130 can use an approach of exchanging a shared database container 116, which can be more efficient than making changes inline to an existing shared database container 116. A shared database container exchanger 142 can prepare a new shared database container 116 for the deployment tool 130 to deploy. The deployment tool 130 can link tenant database containers 118 to the new shared database container 116. The existing shared database container 116 can be dropped after all tenants have been upgraded. Deployment status can be stored in metadata 144 while an upgrade is in process.
The approach of exchanging a shared database container 116 can allow tenants to be upgraded individually—e.g., each tenant can be linked to the new shared database container 116 during an individual downtime window that can be customized for each tenant. If an upgrade for one tenant fails, a deployment for that tenant can be retried, and other tenant deployments can remain unaffected. The deploying of the new shared database container 116 can reduce downtime because the new shared database container 116 can be deployed during uptime while the existing shared database container 116 is in use.
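For illustration only, the following Python sketch models the per-tenant linking step at the application level, with in-memory SQLite databases standing in for the shared database containers 116 and a simple mapping standing in for the views that a tenant database container 118 would use to reach shared content; the container versions, table name, and record values are assumptions chosen for the example, not the shipped implementation.

    import sqlite3

    def make_shared_container(version: str) -> sqlite3.Connection:
        # Stand-in for a shared database container holding read-only content.
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE TABR (kf1 TEXT PRIMARY KEY, data TEXT)")
        conn.execute("INSERT INTO TABR VALUES ('aa', ?)",
                     (f"content from shared container {version}",))
        return conn

    shared_v1 = make_shared_container("v1")  # shared container currently in use
    shared_v2 = make_shared_container("v2")  # new container, prepared during uptime

    # Each tenant records which shared container it is currently linked to.
    tenant_links = {"tenant_1": shared_v1, "tenant_2": shared_v1}

    def read_shared(tenant: str, key: str):
        # Read shared content through the link of the given tenant.
        return tenant_links[tenant].execute(
            "SELECT data FROM TABR WHERE kf1 = ?", (key,)).fetchone()

    # Tenant 1 is linked to the new container in its own downtime window,
    # while tenant 2 keeps using the existing container until its window.
    tenant_links["tenant_1"] = shared_v2
    print(read_shared("tenant_1", "aa"))  # ('content from shared container v2',)
    print(read_shared("tenant_2", "aa"))  # ('content from shared container v1',)

In this simplified model, a failed deployment for tenant 1 could be handled by pointing its entry back at the existing container, leaving other tenants unaffected.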
When a deployment is for an emergency patch, a relatively smaller number of tables may be affected, as compared to larger software releases. The deployment tool 130 can use a patching system 146 to make necessary changes inline to an existing shared database container 116, rather than exchanging the existing shared database container 116. Changes for a patch can be deployed to shared tables that are initially hidden from tenants. This can enable tenants to be individually linked to the hidden table versions, which can enable individual tenant-specific upgrade windows and fallback capability, similar to the exchanged shared database container approach. The patching system 146 can also enable a queue of patches to be applied. For example, deployment of a first patch can be in progress for a set of tenants, with some but not all of the tenants having the first patch applied. A problem can occur with a tenant who has already been upgraded with the first patch. A second patch can be developed to fix the problem, and the second patch can be applied to that tenant. The other tenants can be upgraded with the first patch (and possibly the second patch) at a later time.
Needs of an application system or a customer/tenant may change over time. A database used for a set of customers may initially be relatively small, and may not include enough data to warrant implementation of multi-tenancy for that application/database/customer. For example, a choice may be made to use one database container for that customer, since higher performance may be obtained when only one database container, rather than several, is used. A customer may grow over time, may have a larger database, may run more application instances, etc. A particular database may be used by more tenants than in the past. The database system 102 can support changing from one type of system setup to another, as needs change. For example, a system sharing type modifier 148 can change the database system 102 from a standard setup (e.g., one database container, with no multi-tenancy) for a given customer to a shared/tenant setup that uses a shared database container 116 for shared content and tenant database containers 118 for tenant-specific content. When testing for a change to multi-tenancy, a simulated setup can be used for the database system 102. A system sharing type can be stored as a system setting in the metadata 144. The deployment tool 130, the database interface 120, and the data split and sharing system 122 can alter behavior based on the system sharing type. The server application 112 can run without being aware of a current system sharing type, and whether a system sharing type has been changed from one type to another.
As used in the present disclosure, the term “computer” is intended to encompass any suitable processing device. For example, although FIG. 1 illustrates a single database system 102, a single end-user client device 104, a single administrator client device 105, and a single application server 106, the system 100 can be implemented using a single, stand-alone computing device, two or more database systems 102, two or more application servers 106, two or more end-user client devices 104, two or more administrator client devices 105, etc. Indeed, the database system 102, the application server 106, the administrator client device 105, and the client device 104 may be any computer or processing device such as, for example, a blade server, general-purpose personal computer (PC), Mac®, workstation, UNIX-based workstation, or any other suitable device. In other words, the present disclosure contemplates computers other than general purpose computers, as well as computers without conventional operating systems. Further, the database system 102, the application server 106, the administrator client device 105, and the client device 104 may be adapted to execute any operating system, including Linux, UNIX, Windows, Mac OS®, Java™, Android™, iOS or any other suitable operating system. According to one implementation, the application server 106 and/or the database system 102 may also include or be communicably coupled with an e-mail server, a Web server, a caching server, a streaming data server, and/or other suitable server.
Interfaces 160, 162, 164, and 166 are used by the database system 102, the application server 106, the administrator client device 105, and the client device 104, respectively, for communicating with other systems in a distributed environment—including within the system 100—connected to the network 108. Generally, the interfaces 160, 162, 164, and 166 each comprise logic encoded in software and/or hardware in a suitable combination and operable to communicate with the network 108. More specifically, the interfaces 160, 162, 164, and 166 may each comprise software supporting one or more communication protocols associated with communications such that the network 108 or interface's hardware is operable to communicate physical signals within and outside of the illustrated system 100.
The database system 102, the application server 106, the administrator client device 105, and the client device 104, each respectively include one or more processors 170, 172, 174, or 176. Each processor in the processors 170, 172, 174, and 176 may be a central processing unit (CPU), a blade, an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or another suitable component. Generally, each processor in the processors 170, 172, 174, and 176 executes instructions and manipulates data to perform the operations of a respective computing device.
Regardless of the particular implementation, “software” may include computer-readable instructions, firmware, wired and/or programmed hardware, or any combination thereof on a tangible medium (transitory or non-transitory, as appropriate) operable when executed to perform at least the processes and operations described herein. Indeed, each software component may be fully or partially written or described in any appropriate computer language including C, C++, Java™, JavaScript®, Visual Basic, assembler, Perl®, any suitable version of 4GL, as well as others. While portions of the software illustrated in FIG. 1 are shown as individual modules that implement the various features and functionality through various objects, methods, or other processes, the software may instead include a number of sub-modules, third-party services, components, libraries, and such, as appropriate. Conversely, the features and functionality of various components can be combined into single components as appropriate.
The database system 102 and the application server 106 respectively include memory 180 or memory 182. In some implementations, the database system 102 and/or the application server 106 include multiple memories. The memory 180 and the memory 182 may each include any type of memory or database module and may take the form of volatile and/or non-volatile memory including, without limitation, magnetic media, optical media, random access memory (RAM), read-only memory (ROM), removable media, or any other suitable local or remote memory component. Each of the memory 180 and the memory 182 may store various objects or data, including caches, classes, frameworks, applications, backup data, business objects, jobs, web pages, web page templates, database tables, database queries, repositories storing business and/or dynamic information, and any other appropriate information including any parameters, variables, algorithms, instructions, rules, constraints, or references thereto associated with the purposes of the respective computing device.
The end-user client device 104 and the administrator client device 105 may each be any computing device operable to connect to or communicate in the network 108 using a wireline or wireless connection. In general, each of the end-user client device 104 and the administrator client device 105 comprises an electronic computer device operable to receive, transmit, process, and store any appropriate data associated with the system 100 of FIG. 1. Each of the end-user client device 104 and the administrator client device 105 can include one or more client applications, including the client application 110 or the deployment administrator application 132, respectively. A client application is any type of application that allows a client device to request and view content on the client device. In some implementations, a client application can use parameters, metadata, and other information received at launch to access a particular set of data from the database system 102. In some instances, a client application may be an agent or client-side version of the one or more enterprise applications running on an enterprise server (not shown).
Each of the end-user client device 104 and the administrator client device 105 is generally intended to encompass any client computing device such as a laptop/notebook computer, wireless data port, smart phone, personal data assistant (PDA), tablet computing device, one or more processors within these devices, or any other suitable processing device. For example, the end-user client device 104 and/or the administrator client device 105 may comprise a computer that includes an input device, such as a keypad, touch screen, or other device that can accept user information, and an output device that conveys information associated with the operation of the database system 102, or the client device itself, including digital data, visual information, or a graphical user interface (GUI) 190 or 192, respectively.
The GUI 190 and the GUI 192 each interface with at least a portion of the system 100 for any suitable purpose, including generating a visual representation of the client application 110 or the deployment administrator application 132, respectively. In particular, the GUI 190 and the GUI 192 may each be used to view and navigate various Web pages. Generally, the GUI 190 and the GUI 192 each provide the user with an efficient and user-friendly presentation of business data provided by or communicated within the system. The GUI 190 and the GUI 192 may each comprise a plurality of customizable frames or views having interactive fields, pull-down lists, and buttons operated by the user. The GUI 190 and the GUI 192 each contemplate any suitable graphical user interface, such as a combination of a generic web browser, intelligent engine, and command line interface (CLI) that processes information and efficiently presents the results to the user visually.
Memory 194 and memory 196 respectively included in the end-user client device 104 or the administrator client device 105 may each include any memory or database module and may take the form of volatile or non-volatile memory including, without limitation, magnetic media, optical media, random access memory (RAM), read-only memory (ROM), removable media, or any other suitable local or remote memory component. The memory 194 and the memory 196 may each store various objects or data, including user selections, caches, classes, frameworks, applications, backup data, business objects, jobs, web pages, web page templates, database tables, repositories storing business and/or dynamic information, and any other appropriate information including any parameters, variables, algorithms, instructions, rules, constraints, or references thereto associated with the purposes of the client device 104.
There may be any number of end-user client devices 104 and administrator client devices 105 associated with, or external to, the system 100. Additionally, there may also be one or more additional client devices external to the illustrated portion of system 100 that are capable of interacting with the system 100 via the network 108. Further, the term “client,” “client device,” and “user” may be used interchangeably as appropriate without departing from the scope of this disclosure. Moreover, while client device may be described in terms of being used by a single user, this disclosure contemplates that many users may use one computer, or that one user may use multiple computers.
Data Split
FIG. 2 illustrates an example system 200 for an application with a standard database setup. An application server 202 accesses a database 204 when executing application requests received from client applications. The database 204 can be a database container for a particular tenant, for example, or a database that includes data for multiple tenants. As respectively indicated by access levels 206, 208, and 210, the database 204 includes, for a particular tenant, a read-only table 212 named "TABR", a writable table 214 named "TABW", and a mixed table 216 named "TAB". Although one table of each of the read-only, writable, and mixed table types is illustrated, a given tenant may have multiple tables of some or all of those table types.
The read-only table 212 includes vendor-delivered data, such as vendor code, character code pages, application documentation, central runtime and configuration data, and other vendor-provided data. The tenant, or applications associated with the tenant, do not write or modify data in the read-only table 212. The read-only table 212 is read-only from a tenant application perspective. The writable table 214 includes only tenant-specific data. The writable table 214 is generally shipped empty and does not include vendor-delivered data. Content is only written into the writable table 214 by the tenant or applications associated with the tenant. The writable table 214 can include business transaction data, for example. The mixed table 216 includes both read-only records that are not modified by tenant applications and records that may be modified by tenant applications. The mixed table 216 can include both vendor-delivered data and tenant-created data. An example mixed table can be a documentation table that includes shipped documentation data, tenant-added documentation data, and documentation data that was provided by the vendor but subsequently modified by the tenant. For example, the mixed table 216 can include default text values (which may be customized by particular tenants) for use in user interface displays, in various languages. In some implementations, the mixed-table 216 is an extendable table that includes fields that have been added by a tenant application or customer.
FIG. 3 illustrates an example non-multi-tenancy system 300 in which the same content is stored for multiple, different tenants in different database containers. The system 300 includes applications 302 and 304 that use database interfaces 306 and 308 to access tables 310 and 312 in tenant database containers 314 and 316, respectively. Although the applications 302 and 304 and the database interfaces 306 and 308 are shown separately, in some implementations, the applications 302 and 304 are a same application, and the database interfaces 306 and 308 are a same database interface, on a single application server.
The tables 310 and 312 are each mixed tables that include both records common to multiple tenants and records unique to (e.g., added by) a respective tenant. For example, both the table 310 and the table 312 include common records that were shipped by a vendor (e.g., records 318 a-318 b, 320 a-320 b, and 322 a-322 b). These common records can be deployed to the tables 310 and 312 when a respective application 302 or 304 is deployed for a respective tenant. The common records can be records that are not changed by respective applications. Storing the common records separately for each tenant results in an increase of storage and maintenance costs, as compared to storing common records in one shared location. As described below, when implementing multi-tenancy, common, shared records can be moved to a shared table. Each table 310 and 312 also includes records written by a respective tenant application 302 or 304, for example, records 324 a and 324 b (which happen to have a same key), and records 326, 328, and 330, which are only in their respective tables.
FIG. 4A illustrates an example system 400 for the splitting of data for a tenant. The system 400 can be used for content separation, that is, the separation of shared content used by multiple tenants from tenant-specific data used respectively by individual tenants. The system 400 includes a shared database container 402, and a tenant database container 404 for a given tenant. Table and view names are illustrative examples only; any table name and any table naming scheme can be used.
The shared database container 402 includes shared content used by multiple tenants including the given tenant. The shared content can include vendor-provided content and can enable the sharing of vendor-delivered data between multiple tenants. Although illustrated as a shared database container 402, shared content can also be stored in a shared database in general, or by using a shared database schema.
The shared database container 402 includes a TABR table 406, corresponding to the read-only table 212 of FIG. 2, that includes only read-only records. The TABR table 406 is configured to be read-only and shareable to the given tenant associated with the tenant database container 404 and to other tenants. An application 408 running for the given tenant can submit queries that refer to the table name "TABR". A database interface (DBI) 410 can receive a query from an application and submit a query including the TABR table name to the tenant database container 404.
The tenant database container 404 includes a TABR view 412 that can be used when the query is processed for read-only access to the TABR table 406. The TABR table 406 can be accessible from the tenant database container 404 using remote database access, for example. As another example, if multiple tenants reside in a same database, the TABR table 406 can reside in the same database as the multiple tenants. In general, each tenant can have their own database schema or container and can access the TABR table 406 using cross-schema access, cross-container access, or remote database access.
The tenant database container 404 includes a TABW table 414, which in some instances corresponds to the writable table 214 of FIG. 2. The TABW table 414 can include non-shared, or tenant-specific, application data for the given tenant. The TABW table 414 can be a table that is shipped empty, with records being added to the TABW table 414 for the given tenant in response to insert requests from the application 408. Alternatively, TABW table 414 may include an initial set of data that can be updated and modified by the tenant or in a tenant-specific manner. An insert query submitted by the application 408 can include the TABW table name, and the DBI 410 can provide write access to the TABW table 414, without the use of a view.
The application 408 can submit a query that includes a "TAB" table name that corresponds to the mixed table 216 of FIG. 2. When implementing multi-tenancy, records from the mixed table 216 can be split, to be included in either a read-only table 416 with name "/R/TAB" that is included in the shared database container 402 or a writable table 418 with name "/W/TAB" that is included in the tenant database container 404. The use and identification of the names "/R/TAB" and "/W/TAB" are discussed in more detail below. The read-only table 416 can include records common to multiple tenants that had previously been included in multiple tenant tables for multiple tenants. The read-only table 416 can be a shared repository that multiple tenants use to access the common data and records. The writable table 418 includes records from the mixed table 216 that are specific to the given tenant associated with the tenant database container 404. A union view 420, which has the same name TAB as the mixed table 216, provides a single point of access for the application 408 to the read-only table 416 and the writable table 418.
The application 408 may have been previously configured, before implementation of multi-tenancy, to submit queries that include the "TAB" table name. The application 408 can continue to submit queries using the original "TAB" table name after implementation of multi-tenancy, using a single logical table name for access to the mixed records collectively stored in the writable table 418 and the read-only table 416. The union view 420 provides a unified view on the mixed record data that hides, from the application 408, details regarding which data is shared and which data is tenant-local. A query performed on the union view 420 may return records from the read-only table 416, the writable table 418, or a combination of records from both tables, and the application 408 is unaware of the source of the records returned from the query. The use of the union view 420 enables multi-tenancy to be compatible with existing applications such as the application 408; for example, the application 408 and other applications can continue to be used without modification. Such an approach avoids significant rewriting of applications, which would otherwise need to be aware of both the writable table 418 and the read-only table 416 and be modified to query two tables instead of one. Queries and views that include a reference to the mixed table can continue to be used without modification. The use of the union view 420 enables the application 408 to access the data split into the writable table 418 and the read-only table 416 using a single query.
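As a concrete illustration of this single point of access, the following sketch builds the two physical tables and the union view in one in-memory SQLite database; in the described system the read-only table would reside in the shared database container, with the writable table and the view in the tenant database container, and the column names and records here are invented for the example.

    import sqlite3

    conn = sqlite3.connect(":memory:")

    # Read-only part and tenant-local writable part of the former mixed table.
    conn.execute('CREATE TABLE "/R/TAB" (kf1 TEXT, kf2 TEXT, data1 TEXT, '
                 'PRIMARY KEY (kf1, kf2))')
    conn.execute('CREATE TABLE "/W/TAB" (kf1 TEXT, kf2 TEXT, data1 TEXT, '
                 'PRIMARY KEY (kf1, kf2))')

    # Union view with the original mixed-table name, so the application can
    # keep addressing "TAB" as a single logical table.
    conn.execute('CREATE VIEW TAB AS '
                 'SELECT * FROM "/R/TAB" UNION ALL SELECT * FROM "/W/TAB"')

    conn.execute('INSERT INTO "/R/TAB" VALUES (?, ?, ?)', ("aa", "1", "shipped"))
    conn.execute('INSERT INTO "/W/TAB" VALUES (?, ?, ?)', ("zz", "1", "tenant-added"))

    # A read against the logical table returns rows from both physical tables.
    print(conn.execute("SELECT kf1, data1 FROM TAB ORDER BY kf1").fetchall())
    # [('aa', 'shipped'), ('zz', 'tenant-added')]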
The DBI 410 can be configured to determine whether a query that includes the TAB table name is a read query or a write query. If the query is a read query, the DBI 410 can submit the read query to the tenant database container 404, for a read operation on the union view 420. The union view 420 provides unchanged read access to the joint data from the writable table 418 and the read-only table 416.
If the query is a write query (e.g., INSERT, UPDATE, DELETE, SELECT FOR UPDATE), the DBI 410 can, before submitting the query to the tenant database container 404, automatically and transparently (from the perspective of the application 408) perform a write intercept operation, which can include changing a TAB reference in the query to a "/W/TAB" reference so that write operations are performed on tenant-local data in the writable table 418 instead of the union view 420. Write queries for the mixed table can be submitted, unchanged, by the application 408, since write access is redirected to the writable table 418. The union view 420 can be configured to be read-only, so that a write operation attempted on the union view 420 would be rejected. If write queries were allowed to be received for the union view 420, a write operation could be ambiguous as to whether the writable table 418 or the read-only table 416 should be written to.
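A minimal sketch of the write-intercept step follows, assuming the database interface classifies a statement by its leading keyword and rewrites table references with a regular expression; a production interface would rely on a real SQL parser, so the helper names and the simple rewrite rule are illustrative only.

    import re

    # Logical tables that are split, mapped to their tenant-local writable part.
    SPLIT_TABLES = {"TAB": '"/W/TAB"', "DOKTL": '"/W/DOKTL"'}

    def is_write_query(sql: str) -> bool:
        # Classify the statement before deciding whether to redirect it.
        s = sql.strip().upper()
        return (s.startswith(("INSERT", "UPDATE", "DELETE"))
                or (s.startswith("SELECT") and s.endswith("FOR UPDATE")))

    def redirect_write(sql: str) -> str:
        # Reads keep going to the union view; writes are redirected to the
        # writable table so they never touch the shared read-only part.
        if not is_write_query(sql):
            return sql
        for logical, writable in SPLIT_TABLES.items():
            sql = re.sub(rf"\b{logical}\b", writable, sql)
        return sql

    print(redirect_write("SELECT * FROM TAB WHERE KF1 = 'zz'"))
    # unchanged: a read performed on the union view
    print(redirect_write("INSERT INTO TAB (KF1, KF2) VALUES ('zz', '1')"))
    # INSERT INTO "/W/TAB" (KF1, KF2) VALUES ('zz', '1')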
The storing of shared content in the TABR table 406 and the read-only table 416 can result in a reduced memory footprint as compared to storing common data separately for each tenant. Storing common data in a shared location can reduce resource consumption during lifecycle management procedures and simplify those procedures. Lifecycle management can include application development, assembly, transport, installation, and maintenance. Storing common data in one location can simplify software change management, patching, and software upgrades.
FIG. 4B illustrates an example multi-tenancy system 440 that includes multiple tables of each of multiple table types. Before implementation of multi-tenancy, a database system can have multiple tables of each of the read-only, writable, and mixed table types. For example, as illustrated by table metadata 441, tables "TABR", "TCP00", and "TCP01" are read-only tables, tables "TAB" and "DOKTL" are mixed tables, and tables "TABW", "ACDOCA", and "MATDOC" are read/write (e.g., writable) tables. Table metadata can exist in a shared database container 442 and/or can exist in a tenant database container 443, as illustrated by metadata 444.
Implementation of multi-tenancy can result in the inclusion of the read-only tables in the shared database container 442, as illustrated by read-only tables 445, 446, and 448. Read-only views 450, 452, and 454 can be created in the tenant database container 443 for the read-only tables 445, 446, and 448, respectively, to provide read access for an application 456. Implementation of multi-tenancy can result in the inclusion of writable tables in the tenant database container 443, as illustrated by writable tables 458, 460, and 462.
Each mixed table can be split into a read-only table in the shared database container 442 and a writable table in the tenant database container 443. For example, a read-only table “/R/TAB” 464 and a writable table “/W/TAB” 466 replace the mixed table “TAB”. As another example, a read-only table “/R/DOKTL” 468 and a writable table “/W/DOKTL” 470 replace the mixed table “DOKTL”.
In some implementations, a deployment tool automatically generates names for the read-only and writable tables that replace a mixed table. A generated name can include a prefix that is added to the mixed table name. Prefixes can be predetermined (e.g., "/R/", "/W/") or can be identified using a prefix lookup. For example, APIs getSharedPrefix 472 and getTenantPrefix 474 can be invoked and can return "/R/" for a shared prefix and "/W/" for a writable (e.g., tenant) prefix, respectively (or other character strings). The APIs 472 and 474 can look up a respective prefix in a preconfigured table, for example. In some implementations, a different naming scheme is used, such as one that uses suffixes or some other method to generate table names. In some implementations, other APIs can generate and return a full shared table name or a full writable table name, rather than a shared or tenant prefix.
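The name generation can be pictured with the following sketch; the prefix values mirror the description above, while the function names merely stand in for the getSharedPrefix 472 and getTenantPrefix 474 APIs, and the mapping of sharing types to generated objects is an assumption made for illustration.

    def get_shared_prefix() -> str:
        # Stand-in for the getSharedPrefix API; a real system might read the
        # value from a preconfigured table.
        return "/R/"

    def get_tenant_prefix() -> str:
        # Stand-in for the getTenantPrefix API.
        return "/W/"

    def physical_objects(table: str, sharing_type: str) -> dict:
        # Derive the physical objects that replace a logical table, based on
        # its sharing type from the data dictionary.
        if sharing_type == "R":
            # Shared read-only table plus a read-only view per tenant.
            return {"shared_table": table, "tenant_view": table}
        if sharing_type == "W":
            # Purely tenant-local table, kept under its original name.
            return {"tenant_table": table}
        # Mixed table: read-only and writable parts plus a union view.
        return {"shared_table": get_shared_prefix() + table,
                "tenant_table": get_tenant_prefix() + table,
                "union_view": table}

    for name, kind in (("TABR", "R"), ("TABW", "W"), ("TAB", "MIXED")):
        print(name, physical_objects(name, kind))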
For each mixed table, a union view is created in the tenant database container 443 that provides a single point of access to the application 456 to records in the read-only table and the writable table corresponding to the mixed table. For example, a union view 476 provides unified access to the read-only table 464 and the writable table 466. As another example, a union view 478 provides unified access to the read-only table 468 and the writable table 470.
FIG. 4C illustrates an example multi-tenancy system 480 that uses a suffix table naming scheme. As illustrated by note 482, read-only tables 484, 485, 486, and 487 included in a shared database container 488 can include a suffix that enables the storing of several versions of a table. A read-only view 489 provides read access to the read-only table 485, which is a currently-configured version (e.g., “TABR #2”) of a given read-only table. To gain access to a different version (e.g., “TABR #1”) of the given read-only table, the read-only view 489 can be reconfigured to be associated with the read-only table 487. Multiple versions of a table can be used during deployment of an upgrade, as described in more detail below.
As illustrated by note 490, a read-only view 492 can be included in a tenant database container 494, such as if an application 496 needs read access to shipped, read-only content that was included in a mixed table that is now stored in the read-only table 484. A union view 498 can provide unified access to the read-only view 492 and writable mixed-table records now included in a writable table 499. The read-only view 492 can be re-configured to access the table 486 that is a different version (e.g., “TAB #2”) of the read-only table 484.
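The effect of the suffix naming scheme can be sketched as follows, with both table versions and the view placed in a single in-memory SQLite database for brevity; in the described system the suffixed tables would live in the shared database container and the view in a tenant database container, and the table names and content here are illustrative.

    import sqlite3

    conn = sqlite3.connect(":memory:")

    # Two versions of the same shared read-only table, distinguished by suffix.
    conn.execute('CREATE TABLE "TABR #1" (kf1 TEXT PRIMARY KEY, data TEXT)')
    conn.execute('INSERT INTO "TABR #1" VALUES (?, ?)', ("aa", "previous shipped content"))
    conn.execute('CREATE TABLE "TABR #2" (kf1 TEXT PRIMARY KEY, data TEXT)')
    conn.execute('INSERT INTO "TABR #2" VALUES (?, ?)', ("aa", "current shipped content"))

    # The tenant-facing view points at the currently configured version.
    conn.execute('CREATE VIEW TABR AS SELECT * FROM "TABR #2"')
    print(conn.execute("SELECT data FROM TABR").fetchall())

    # Re-pointing the view, for example to switch to a newly deployed version
    # or to fall back to the earlier one.
    conn.execute("DROP VIEW TABR")
    conn.execute('CREATE VIEW TABR AS SELECT * FROM "TABR #1"')
    print(conn.execute("SELECT data FROM TABR").fetchall())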
FIG. 5 illustrates an example system 500 that includes a shared database container 502, a first tenant database container 504 for a first tenant, and a second tenant database container 506 for a second tenant. First and second applications 508 and 510 handle application requests for the first tenant and the second tenant, respectively. The first tenant and the second tenant can be served by separate application servers or a same application server, or by multiple application servers.
The shared database container 502 includes a shared read-only table 512 that includes read-only shipped records. The shared read-only table 512 is made available as a shared table to the first and second tenants, and other tenants. The first application 508 and the second application 510 can access the shared read-only table 512 using a view 514 or a view 516, respectively. The first application 508 and the second application 510 can have read, but not write access, to the shared read-only table 512, through the view 514 or the view 516, respectively.
The first tenant database container 504 and the second tenant database container 506 respectively include writable tables 518 or 520. The writable tables 518 and 520 are separate from one another and store records that have been respectively written by the application 508 or the application 510. The first tenant does not have access to the writable table 520 and correspondingly, the second tenant does not have access to the writable table 518.
The shared database container 502 includes a shared read-only table 522 that stores shared read-only records that had been included in a mixed table. Writable tables 524 and 526 included in the first tenant database container 504 and the second tenant database container 506 store mixed-table records that had been or will be added to the writable table 524 or the writable table 526 by the application 508 or the application 510, respectively. The writable tables 524 and 526 are separate from one another. The first tenant does not have access to the writable table 526 and correspondingly, the second tenant does not have access to the writable table 524.
The application 508 can be provided a single point of access for the mixed-table records that are now split between the shared read-only table 522 and the writable table 524 using a union view 528. Similarly, the application 510 can be provided a single point of access for the mixed-table records that are now split between the shared read-only table 522 and the writable table 526 using a union view 530. As described above for FIG. 4, a write request for a TAB table submitted by the application 508 or the application 510 could be intercepted by a respective DBI and redirected to the writable table 524 or the writable table 526, respectively.
FIG. 6 illustrates an example system 600 that includes a shared database container 602, a first tenant database container 604 for a first tenant, and a second tenant database container 605 for a second tenant. Applications 606 and 607 are configured to access a union view 608 or a union view 609 using a DBI 610 or a DBI 611, respectively, to gain access to respective mixed tables. The union views 608 and 609 respectively provide a single point of access for the application 606 or the application 607 to records previously stored in a mixed table named TAB (such as the mixed table 310 of FIG. 3). The TAB table and the union views 608 and 609 include, as illustrated for the union view 608, a first key field 612, a second key field 614, a first data field 616, and a second data field 618. A primary key for the union view 608 (and consequently for the read-only table 620 and the writable tables 622 and 623) can include the first key field 612 and the second key field 614. The first key field 612 and/or the second key field 614 can be technical fields that are used by the database but not presented to end users.
Read-only records of the mixed table that are common to multiple tenants are now stored in a shared read-only table 620 in the shared database container 602. The shared read-only table 620 includes read-only records shared with/common to multiple tenants. For example, the shared read-only table 620 includes records 624, 626, and 628 corresponding to the records 318 a-318 b, 320 a-320 b, and 322 a-322 b of FIG. 3.
Mixed table records that were added for the first tenant or the second tenant are now stored in either a writable table 622 in the first tenant database container 604 or a writable table 623 in the second tenant database container 605. The writable table 622 includes records specific to the first tenant, including records 630 and 632 that correspond to the records 324 a and 330 of FIG. 3. Similarly, the writable table 623 includes records specific to the second tenant, including records 634, 636, and 638 that correspond to the records 324 b, 326, and 328 of FIG. 3.
A query from the application 606 to retrieve all records from the union view 608 can return the records 624, 626, 628, 630, and 632. A query from the application 607 to retrieve all records from the union view 609 can return the records 624, 626, 628, 634, 636, and 638. The records 630 and 632 are not accessible by the second tenant. The records 634, 636, and 638 are not accessible by the first tenant.
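To make the per-tenant result sets concrete, the following sketch emulates the two union views at the application level, with one SQLite database standing in for the shared database container 602 and one per tenant database container; the record keys and values are invented for the example rather than copied from the figure.

    import sqlite3

    shared = sqlite3.connect(":memory:")
    shared.execute('CREATE TABLE "/R/TAB" (kf1 TEXT PRIMARY KEY, data TEXT)')
    shared.executemany('INSERT INTO "/R/TAB" VALUES (?, ?)',
                       [("aa", "shipped"), ("bb", "shipped"), ("cc", "shipped")])

    tenants = {}
    for name in ("tenant_1", "tenant_2"):
        t = sqlite3.connect(":memory:")
        t.execute('CREATE TABLE "/W/TAB" (kf1 TEXT PRIMARY KEY, data TEXT)')
        tenants[name] = t

    tenants["tenant_1"].executemany('INSERT INTO "/W/TAB" VALUES (?, ?)',
                                    [("z1", "tenant 1 local"), ("z2", "tenant 1 local")])
    tenants["tenant_2"].executemany('INSERT INTO "/W/TAB" VALUES (?, ?)',
                                    [("z9", "tenant 2 local")])

    def select_all(tenant: str):
        # Emulate the union view: shared rows plus only that tenant's rows.
        rows = shared.execute('SELECT kf1, data FROM "/R/TAB"').fetchall()
        rows += tenants[tenant].execute('SELECT kf1, data FROM "/W/TAB"').fetchall()
        return sorted(rows)

    print(select_all("tenant_1"))  # shared rows plus z1 and z2
    print(select_all("tenant_2"))  # shared rows plus z9 only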
Key Pattern Management
FIG. 7 illustrates a system 700 for constraint enforcement. The system 700 includes a shared database container 702 and a tenant database container 704. A mixed table named "TAB" has been split into a read-only table 706 ("/R/TAB") in the shared database container 702 and a writable table 708 ("/W/TAB") in the tenant database container 704. When storing data in two tables instead of one table, a primary key constraint enforced by the database may no longer be effective. Once a mixed table is split, and without further configuration, a record in the read-only table 706 could have a same key value as a record in the writable table 708. For example, a record in the read-only table 706 that was initially provided by a vendor can have a same key as a record in the writable table 708 that was written by a tenant application. As another example, the vendor can deploy, post-installation, a record to the read-only table 706 that already exists as a tenant-written record in the writable table 708.
An existence of duplicate records could create undesirable issues. For example, an application 710 may be configured to submit, using a DBI 712, a select query against the “TAB” table with a restriction on primary key field(s), with the query designed to either return one record (e.g., if a record matching the primary key restriction is found) or no records (e.g., if no records matching the primary key restriction are found). However, if duplicate records are allowed to exist between the read-only table 706 and the writable table 708, such a select query may return two records, since the query may be executed on a union view 714 with name of “TAB” that provides unified access to the read-only table 706 and the writable table 708. The application 710 may not be properly configured to handle such a situation, and an error condition, undesirable application behavior, and/or undesirable data modifications may occur.
As another example, the application 710 may submit a delete query, with a restriction on primary key fields, with an expectation that the query uniquely identifies a record to delete. The restriction on the delete query may match two records when applied to the union view 714, so an ambiguity may exist as to which record to delete.
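The ambiguity can be reproduced with a few lines of SQL, shown here through Python and an in-memory SQLite database; the key value and record contents are invented, and the point is only that a primary-key lookup on the union view can return two rows once the shared and tenant parts hold the same key.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute('CREATE TABLE "/R/TAB" (kf1 TEXT PRIMARY KEY, data TEXT)')
    conn.execute('CREATE TABLE "/W/TAB" (kf1 TEXT PRIMARY KEY, data TEXT)')
    conn.execute('CREATE VIEW TAB AS '
                 'SELECT * FROM "/R/TAB" UNION ALL SELECT * FROM "/W/TAB"')

    # Without a key pattern, nothing prevents the shared part and the tenant
    # part from holding records with the same key.
    conn.execute('INSERT INTO "/R/TAB" VALUES (?, ?)', ("aa", "shipped by the vendor"))
    conn.execute('INSERT INTO "/W/TAB" VALUES (?, ?)', ("aa", "written by the tenant"))

    # A lookup by primary key, which the application expects to return at most
    # one record, now returns two.
    print(conn.execute("SELECT * FROM TAB WHERE kf1 = 'aa'").fetchall())
    # [('aa', 'shipped by the vendor'), ('aa', 'written by the tenant')]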
To solve issues related to a potential for duplicate records, a key pattern can be identified that describes records that can be written by the application 710 and thereby exist in the writable table 708. For example, a key value convention may exist, such that shipped records in the read-only table 706 have a particular key pattern, such as a first range of key values, and application-added records have a different key pattern, such as a second, different range of key values. As another example, shipped records may have a key value that includes a particular prefix, and tenant-added records can be added using a key value that includes a different prefix. Key value conventions can be used to define different key value spaces—a first key value space for shipped records and a second, different key value space for tenant records, for example.
A tenant keys table 716 can be used to define key patterns. For example, a row 718 in the tenant keys table 716 includes a value of "TAB" for a table name column 720, which indicates that a key pattern is being defined for the union view 714 (and for application requests that include a "TAB" table reference). The row 718 includes a value of "A" (for "Active") in an active/inactive column 722, indicating that a key pattern for the "TAB" table is active. Active and inactive key patterns are described in more detail below.
A value of “KF1 LIKE Z %” in the record 718 for a WHERE clause column 724 defines a key pattern for the “TAB” table. The key pattern describes a pattern for keys of records that are included in the writable table 708 (e.g., the key pattern indicates that records in the writable table 708 should have keys that start with “Z”). A complement of the key pattern (e.g., “NOT KF1 LIKE Z %” (e.g., records that have keys that do not start with “Z”)) describes a pattern for records in the read-only table 706. The DBI 712 can use the key pattern to ensure that the keys of records stored in the writable table 708 are disjoint from the keys of records stored in the read-only table 706.
The DBI 712 can be configured to prohibit duplicate records by examining write queries (e.g., update, insert, delete queries) received from the application 710 for the “TAB” table, accepting (and executing) queries (e.g., using a redirect write, on the writable table 708, as described above) that are consistent with the key pattern, and rejecting queries that are inconsistent with the key pattern. An inconsistent query would add or modify a record in the writable table 708 so that the record does not match the key pattern. The DBI 712 can be configured to reject (and possibly issue a runtime error against) such inconsistent queries during a key-pattern check to ensure that write queries are only applied to the writable table 708 and not the read-only table 706. Although described as being performed by the DBI 712, the key pattern check can be performed elsewhere, such as by an additional table constraint object applied to the writable table 708 and/or the read-only table 706, a database trigger, or some other database component. The DBI 712 can be configured to examine complex queries, such as queries that refer to ranges of values, to ensure that modifications adhere to the key pattern definition.
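A minimal sketch of such a key-pattern check follows, assuming the WHERE clause "KF1 LIKE Z %" has already been translated into a simple predicate on the key field; the function names, the case-insensitive comparison, and the use of an exception to reject a query are assumptions made for the example, not the behavior of the actual DBI.

    # Key pattern for the "TAB" table, derived from the WHERE clause
    # "KF1 LIKE Z %": tenant-writable keys start with "Z".
    TENANT_KEY_PATTERNS = {"TAB": lambda kf1: kf1.upper().startswith("Z")}

    def check_write(table: str, kf1: str) -> None:
        # Reject writes whose keys fall outside the tenant key pattern, so the
        # writable table stays disjoint from the shared read-only table.
        if not TENANT_KEY_PATTERNS[table](kf1):
            raise PermissionError(
                f"write to {table} rejected: key {kf1!r} is reserved for shared content")

    check_write("TAB", "zz")      # accepted; the write is then redirected to "/W/TAB"
    try:
        check_write("TAB", "aa")  # rejected; "aa" belongs to the shared key space
    except PermissionError as exc:
        print(exc)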
Although a WHERE clause syntax is illustrated, other types of definitions can be used to define a key pattern. Although the tenant keys table 716 is illustrated as being included in the tenant table 704, tenant key definitions can also, or alternatively, exist in the shared database container 702, as illustrated by a tenant keys table 726. Tenant key definitions can exist in the shared database container 702 so that the application 710 or a tenant user is not able to change the tenant key definitions. A view (not shown) can be included in the tenant database container 704 to provide read access to the tenant key table 726, for example. If tenant keys are included in the shared database container 702, tenant key definitions can be shared with multiple tenants, if the multiple tenants each have a same key pattern definition. If some tenants have different key pattern definitions, tenant key definitions included in the shared database container 702 can be associated with particular tenant(s) (e.g., using a tenant identifier column or some other identifier).
The use of a key pattern can be advantageous as compared to other alternate approaches to the duplicate record issue, such as an overlay approach that allows for duplicate records. With the overlay approach, more complex union views (as compared to the union view 714) can be used that involve the selection of one record among multiple records with a same key across the writable table 708 and the read-only table 706 using a priority algorithm. However, such an approach does not solve the problem of a select query being able to return a record that has a same key as a record that was just deleted (e.g., the delete may have deleted one but not both of duplicate records stored across different tables). An approach can be used to store local deletes so as to later filter out shared data that has been deleted locally, but that approach adds complexity and may impact performance. Additionally, an upgrade process may include complications if the shared content is updated, since the tenant content may have to be analyzed for duplicate records and a decision may have to be made regarding whether a tenant-local record is to be removed due to a conflict with new shipped content.
As another example of an alternate approach for avoiding duplicate records, the system 700 can perform a check against the read-only table after every change operation in the writable table. However, such an approach may result in an unacceptable performance degradation. The use of a key pattern, instead of these alternative approaches, can avoid complexities and performance issues.
The key pattern can be used, during initial system deployment, to split mixed table data according to the key pattern definition. Upon installation of the shared database container 702, the system 700 can ensure that no content in the read-only table 706 matches the key pattern that defines data included in the writable table 708. Similarly, upon installation of the tenant database container 704 (and other tenant database containers or databases), the system 700 can ensure that no content is included in the writable table 708 that does not match the key pattern. Key patterns can be used during other lifecycle phases, as described below.
FIG. 8 illustrates an example system 800 for deploying content in accordance with configured tenant keys. In general, during a system lifetime, key pattern definitions are enforced to make sure that tenants do not write data that conflicts with currently shared data or with data that might be delivered for sharing in the future. In addition to system installation and application execution, key pattern definitions are enforced throughout other phases of the system lifecycle, such as data deployment. When new content or content updates are shipped by the vendor, such as during an update or upgrade, content separation and key enforcements are taken into account, to ensure that vendor deliveries to a shared container during a software lifecycle event do not create conflicts with data that was created in a tenant container.
For example, a file 802 containing new records to be deployed to the system 800 can be provided to a content deployment tool 804 and a content deployment tool 806, for deployment to a shared database container 808 and a tenant database container 810, respectively. The file 802 may include records to be added to the system 800 as a result of a new version of an application or database, for example. The content deployment tools 804 and 806 can use a DBI 812 or a DBI 814, respectively, to write content to the shared database container 808 or the tenant database container 810, respectively. Although illustrated as separate content deployment tools 804 and 806 and separate DBIs 812 and 814, in some implementations, the content deployment tools 804 and 806 are the same tool and/or the DBIs 812 and 814 are the same interface.
The content deployment tool 804 can read, using the DBI 812, a WHERE clause 816 for a read-only “/R/TAB” table 818 associated with a “TAB” mixed table from a tenant keys table 820. The WHERE clause 816 describes a pattern of keys that exist in a “/W/TAB” writable table 822 in the tenant database container 810, the writable table 822 also associated with the “TAB” mixed table. The content deployment tool 804 can determine which records in the file 802 do not match the WHERE clause 816, and can, using the DBI 812, write the records from the file 802 that do not match the WHERE clause 816 to the read-only table 818, as indicated by note 824. The records that do not match the WHERE clause 816 can be records that are to be shared among tenants and not modified by respective tenants.
For example, as indicated by note 826, a record with a value of “ww” for a “KF1” key column 828 can be read by the content deployment tool 804 from the file 802 and written to the read-only table 818, based on the “ww” key value not matching the WHERE clause 816 of “KF1 like Z %”. The DBI 812 and/or the read-only table 818 can be configured to allow the writing of content by the content deployment tool 804 to the read-only table 818, even though the read-only table 818 is read-only with respect to requests received by a DBI 830 from an application 832. The DBI 830 and/or a union view 834 can be configured to allow read but not write requests for the read-only table 818 (through the union view 834), for example. The DBI 830 can be the same or a different DBI as the DBI 812 and/or the DBI 814.
The content deployment tool 806 can read, using the DBI 814, a WHERE clause 836 for the writable “/W/TAB” table 822 associated with the “TAB” mixed table from a tenant keys table 838. Although shown as separate from the tenant keys table 820, the tenant keys table 838 may be the same table as the tenant keys table 820, and may exist in the shared database container 808, the tenant database container 810, or in another location. When the content deployment tool 806 is the same tool as the content deployment tool 804, a separate read of the WHERE clause 836 may not be performed since the WHERE clause 816 may have already been read and can be used by the content deployment tool 806. Like the WHERE clause 816, the WHERE clause 836 describes a pattern of keys that exist in the “/W/TAB” writable table 822. The content deployment tool 806 can determine which records in the file 802 match the WHERE clause 836, and can write the records from the file 802 that match the WHERE clause 836 to the writable table 822, as indicated by note 840. For example, as indicated by note 842, a record with a key value of “zz” can be written to the writable table 822, based on the “zz” key value matching the WHERE clause 836. Records in the file 802 that match the WHERE clause 836 can be records that may be later modified by the tenant associated with the tenant container 810.
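For illustration, assuming a hypothetical staging table DEPLOY_STAGE that holds the records of the file 802 and has the same columns as the mixed table "TAB", the routing of records by the key pattern "KF1 like Z %" could be sketched approximately as:

  -- Shared deployment (content deployment tool 804): records that do NOT match
  -- the tenant key pattern go to the shared read-only table.
  INSERT INTO "/R/TAB"
    SELECT * FROM DEPLOY_STAGE WHERE NOT (KF1 LIKE 'Z%');

  -- Tenant deployment (content deployment tool 806): records that match the
  -- tenant key pattern go to the tenant-local writable table.
  INSERT INTO "/W/TAB"
    SELECT * FROM DEPLOY_STAGE WHERE KF1 LIKE 'Z%';

Because the two conditions are complementary, every record in the file 802 is written to exactly one of the two tables.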
The file 802 can include data to be written to both the read-only table 818 and the writable table 822, as described above. As another example, the content deployment tool 804 and/or the content deployment tool 806 (or another component) can create two files for content delivery—e.g., one file for the writable table 822 and one file for the read-only table 818. When separate files are used, the content deployment tool 806 can either ignore records in a file for the writable table 822 that do not match the key pattern or can issue an error for such records. Similarly, the content deployment tool 804 can either ignore records in a file for the read-only table 818 that match the key pattern or can issue an error for such records. Content deployment is described in more detail below, in other sections.
FIG. 9 illustrates an example system 900 for changing tenant keys. Tenant keys may be changed, for example, when a new version of an application and/or database is released. An application developer may change a range of key values that may be written by a tenant application, for example. As another example, a database system may have detected, during execution of a current or prior version of an application, attempts to write records with keys not matching a current key pattern. A developer or an administrator may review a log of such attempts and determine to allow the writing of records with such keys in the future.
A current record 904 in a tenant keys table 906 in a tenant database container 907 has a value 908 of "A" (for "active"), which indicates that a WHERE clause 910 in the current record 904 is the currently-configured description of key values for records in a writable table 902. For example, the WHERE clause 910 of "KF1 LIKE Z %" indicates that key values in the writable table 902 start with the letter "Z". An administrator may desire to change the tenant keys table 906 so that records having key values beginning with "Z" or "Y" are allowed in the writable table 902.
A file 912 (or other electronic data input) including a new WHERE clause can be provided to a constraint changing tool 914. The constraint changing tool 914 can, using a DBI 916, add a record 918 to the tenant keys table 906 that includes the new WHERE clause included in the file 912. For example, a new WHERE clause 920 of "KF1 LIKE Z % OR KF1 LIKE Y %" is included in the added record 918. The added record 918 includes an active/inactive value 922 of "I" for "inactive". As described below, the added record 918 can be marked as active after the writable table 902 and a read-only table 924 in a shared database container 926 have been updated to be in accordance with the new WHERE clause 920.
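For illustration, assuming a hypothetical layout of the tenant keys table with columns for the table name, the WHERE clause, and the active/inactive flag, the addition of the inactive record 918 could be sketched as:

  -- Add the replacement key definition in an inactive ("I") state.
  INSERT INTO TENANT_KEYS (TABNAME, WHERE_CLAUSE, ACTIVE)
    VALUES ('TAB', 'KF1 LIKE ''Z%'' OR KF1 LIKE ''Y%''', 'I');

The table name TENANT_KEYS and the column names TABNAME, WHERE_CLAUSE, and ACTIVE are illustrative only; the existing active record for the table remains unchanged until the shared and tenant tables have been brought in line with the new WHERE clause, as described below.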
As described above, tenant keys can exist in the tenant database container 907 (as illustrated by the tenant keys table 906) and/or in the shared database container 926 (as illustrated by a tenant keys table 928). A constraint changing tool 930 (which can be the same or a different tool as the constraint changing tool 914) can use a DBI 931 to add a new record 932 with a new WHERE clause to the tenant keys table 928, as described above for the added record 918. The DBI 931 can be the same or a different interface as the DBI 916.
FIG. 10 illustrates an example system 1000 for updating database records to comply with updated tenant keys. The updated tenant keys are described by a new WHERE clause 1002 included in an inactive record 1004 included in a tenant keys table 1006. The inactive record 1004 is a replacement record for an active tenant keys record 1008. As described in more detail below, a constraint changing tool 1010 can update records in a read-only table 1012 in a shared database container 1014 and a writable table 1015 in a tenant database container 1016 to comply with the new WHERE clause 1002.
The constraint changing tool 1010 can use a DBI 1020 to read the new WHERE clause 1002 from the inactive tenant keys record 1004 (e.g., as illustrated by note 1022). The constraint changing tool 1010 can use the DBI 1020 to delete records from the read-only table 1012 that match the new WHERE clause 1002. For example, and as indicated by note 1024, a record with a key value of “YY” (e.g., that was included in the read-only table 924 of FIG. 9) has been deleted from and is no longer included in the read-only table 1012. The record with key value of “YY” may have been previously allowed to be in the read-only table 924 due to the record not matching a previous WHERE clause of “KF1 LIKE Z %” included in the active tenant keys record 1008, for example.
A constraint changing tool 1028 (which can be the same as or different from the constraint changing tool 1010) can use a DBI 1029 (which can be the same as or different from the DBI 1020) to delete records from the writable table 1015 that do not match the WHERE clause 1002. The constraint changing tool 1028 can read the WHERE clause 1002 from the tenant keys table 1006 or can read a WHERE clause 1030 from an inactive tenant keys record 1032 in a tenant keys table 1034 in the tenant database container 1016.
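For illustration, and again assuming the split table names "/R/TAB" and "/W/TAB" and the new key pattern of keys starting with "Z" or "Y", the two cleanup steps could be sketched as:

  -- Shared container: remove records that now fall under the tenant key pattern.
  DELETE FROM "/R/TAB" WHERE (KF1 LIKE 'Z%' OR KF1 LIKE 'Y%');

  -- Tenant container: remove records that no longer match the tenant key pattern.
  DELETE FROM "/W/TAB" WHERE NOT (KF1 LIKE 'Z%' OR KF1 LIKE 'Y%');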
The WHERE clause 1030 describes a key pattern of keys starting with “Z” or “Y”. The writable table 1015 is the same as the writable table 902 of FIG. 9 (e.g., no records have been deleted) since both records in the writable table 1015 have keys that start with “Z” (e.g., there are no records in the writable table 902 that do not match the WHERE clause 1030). After any records not matching the WHERE clause 1030 have been deleted from the writable table 1015 and any records matching the WHERE clause 1002 have been deleted from the read-only table 1012, the constraint changing tool 1010 (and/or the constraint changing tool 1028) can read a file 1036 that includes information indicating data to be moved between the read-only table 1012 and the writable table 1015, to complete updates to the system 1000 for compliance with the updated tenant keys. Processing of the file 1036 is described in more detail below.
In some implementations, rather than using the file 1036 to store data to be moved between the read-only table 1012 and the writable table 1015, the constraint changing tool 1010 can query the read-only table 1012 and/or the writable table 1015 to extract records to be moved. For example, the constraint changing tool 1010 can submit a query of “insert into /W/TAB (select * from /R/TAB where (KF1 LIKE Z % OR KF1 LIKE Y %))”, to move records from the read-only table 1012 to the writable table 1015 that match the new WHERE clause 1002. As another example, the constraint changing tool 1010 can submit a query of “insert into /R/TAB (select * from /W/TAB where not (KF1 LIKE Z % OR KF1 LIKE Y %))”, to move records from the writable table 1015 to the read-only table 1012 that do not match the new WHERE clause 1002. However, in some implementations, content is not selected from the writable table 1015 for inclusion in the read-only table 1012, since the tenant may have modified the data in the writable table 1015.
FIG. 11 illustrates an example system 1100 for updating database records to comply with updated tenant keys using a transfer file 1102. The transfer file 1102 corresponds to the file 1036 and includes data to be moved between a read-only table 1104 in a shared database container 1106 and a writable table 1108 in a tenant database container 1110. A constraint changing tool 1112 can read records from the transfer file 1102 that do not match a WHERE clause 1114 included in an inactive record 1116 in a tenant keys table 1118. The constraint changing tool 1112 can use a DBI 1120 to deploy the records from the transfer file 1102 that do not match the WHERE clause 1114 to the read-only table 1104. In the example of FIG. 11, there are no records in the transfer file 1102 that do not match the WHERE clause 1114, so no new records are deployed to the read-only table 1104.
A constraint changing tool 1122 (which can be the same as or different from the constraint changing tool 1112) can read records from the transfer file 1102 that match the WHERE clause 1114. The constraint changing tool 1122 can read the WHERE clause 1114 from the tenant keys table 1118 or can read a WHERE clause 1124 from an inactive tenant keys record 1126 in a tenant keys table 1128 in the tenant database container 1110. The constraint changing tool 1122 can use a DBI 1130 (which can be the same as or different from the DBI 1120) to deploy the records from the transfer file 1102 that match the WHERE clause 1114 to the writable table 1108. In the example of FIG. 11, a record with a key value of “YY” (that matches the WHERE clause 1114) is included in the transfer file 1102, and is deployed to the writable table 1108, as illustrated by a record 1132 and note 1134. After records in the transfer file 1102 have been deployed to the writable table 1108 and/or the read-only table 1104, the inactive record 1116 is changed to be an active record in the tenant keys table 1118, as described below.
FIG. 12 illustrates an example system 1200 for updating an inactive tenant keys record. A constraint changing tool 1202 can update a tenant keys table 1204 in a shared database container 1206. In some implementations, additionally or alternatively, a constraint changing tool 1208 makes similar changes to a tenant keys table 1210 in a tenant database container 1212. The constraint changing tool 1202 can submit a delete query 1214 to a DBI 1216 to delete one or more active entries in the tenant keys table 1204. For example, an empty (deleted) entry 1218 represents a now-deleted active tenant keys record 1008 of FIG. 10. The constraint changing tool 1202 can submit an update query 1219 to the DBI 1216 to change a previously inactive tenant keys record (e.g., the inactive tenant keys record 1004 of FIG. 10) to be an active tenant keys record, as illustrated by an updated tenant keys record 1220 that includes a value of “A” for “Active”.
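For illustration, using the same hypothetical tenant keys table layout as above, the switch from the old to the new key definition could be sketched as:

  -- Remove the previously active key definition for the table (delete query 1214).
  DELETE FROM TENANT_KEYS WHERE TABNAME = 'TAB' AND ACTIVE = 'A';

  -- Promote the previously inactive replacement record to active (update query 1219).
  UPDATE TENANT_KEYS SET ACTIVE = 'A' WHERE TABNAME = 'TAB' AND ACTIVE = 'I';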
An inactive tenant keys record may be marked as inactive during a deployment process, for example, and may be marked as active when the deployment process has completed. Once the updated tenant keys record 1220 is active, tenant applications can write new records that match a WHERE clause 1222 included in the now active record. For example, a tenant application can write a record with a key value of “Y1” to a writable table 1224 in the tenant database container 1212, as illustrated by a new record 1226 and note 1228. Updating of tenant keys, along with other types of deployment changes, is described in more detail below.
System Sharing Types
As described above, different system sharing types can be supported, such as a standard system setup in which multi-tenancy is not implemented and a shared/tenant setup where multi-tenancy is implemented. Transitions between system sharing types can be supported, with a change in the system sharing type being transparent to applications.
FIG. 13A illustrates an example system 1300 that includes a standard system 1302 with a standard system-sharing type and a shared/tenant system 1304 with a shared/tenant system-sharing type. The standard system 1302 includes a read-only table “TABR” 1306, a writable table “TABW” 1308, and a read-only with local-write table “TAB” 1310, all included in a single database container 1312. During deployment, a deployment tool 1314 can deploy data to each of the tables 1306, 1308, and 1310.
The tables 1306, 1308, and 1310 are illustrative. A standard system-sharing type system can include other combinations of tables of different table types, including multiple instances of tables of a given type. For example, the standard system-sharing type system 1302 can include multiple read-only tables, multiple writable tables, and/or multiple read-only with local-write tables.
The shared/tenant system 1304 includes a shared database container 1316 and a tenant database container 1318. As described above, the shared database container 1316 includes a read-only table 1320 that corresponds to the read-only table 1306 and the tenant database container 1318 includes a writable table 1322 that corresponds to the writable table 1308. A read-only table 1324 in the shared database container 1316 and a writable table 1326 in the tenant database container 1318 correspond to the read-only with local-write table 1310. A view 1328 provides read access to the read-only table 1320 and a union view 1330 provides unified access to the read-only table 1324 and the writable table 1326.
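For illustration, assuming the shared database container 1316 is visible to the tenant under a schema or container name "shared", the view 1328 and the union view 1330 could be defined approximately as:

  -- View 1328: read access to the shared read-only table.
  CREATE VIEW "TABR" AS SELECT * FROM shared."/R/TABR";

  -- Union view 1330: unified access to the shared and local portions of the
  -- read-only with local-write (mixed) table.
  CREATE VIEW "TAB" AS
    SELECT * FROM shared."/R/TAB"
    UNION ALL
    SELECT * FROM "/W/TAB";

Because the key pattern keeps the key sets of the shared and local portions disjoint, a UNION ALL (rather than a duplicate-eliminating UNION) suffices.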
During deployment, a deployment tool 1332 can deploy data to the read-only table 1320 and the read-only table 1324 included in the shared database container 1316. A deployment tool 1334 can deploy data to the writable table 1322 and the writable table 1326 included in the tenant database container 1318. Although illustrated as two separate deployment tools, in some implementations, the deployment tool 1332 and the deployment tool 1334 are the same tool.
FIG. 13B is a table 1350 that illustrates processing that can be performed for standard 1352, shared 1354, and tenant 1356 database containers. Types of processing in a multi-tenant system can include database (DB) object creation 1358, DB content deployment 1360, and write operations by application(s) 1362. For example, as described in a cell 1364, read-only (RO), writable (RW), and mixed (RO+WL) tables can be created in a standard database container 1352. A cell 1366 indicates that only shareable objects, such as a read-only table, or a read-only portion of a mixed table (e.g., the read-only table created when the mixed table is split), are created in a shared container 1354. A cell 1368 indicates that local tables (e.g., local to a given tenant) are created in a tenant database container 1356. For example, the tenant database container 1356 can include a writable table (RW) and a writable portion of a mixed table (e.g., RO+WL, with name /W/TAB, such as the writable table created when the mixed table is split). The tenant container 1356 can also include a view to the read-only table in the shared container 1354, and a union view on the read-only and writable portions of a mixed table.
A cell 1370 indicates that a deployment tool can deploy content to all tables included in a standard database container 1352. The deployment tool can deploy content to shared tables (e.g., a read-only table or a read-only portion of a mixed table) in a shared database container 1354, as indicated by a cell 1372. A cell 1374 indicates that the deployment tool can deploy content to local tables in a tenant database container 1356. Deployment to a mixed table can include redirection of table writes to the writable portion of the mixed table.
Tenant applications can write to all objects in a standard database container 1352 (e.g., as described in a cell 1376). A cell 1378 indicates that tenant applications are not allowed to write to tables in a shared database container 1354. A cell 1380 indicates that tenant applications can write content to local tables in a tenant database container 1356, including a writable table and a writable portion of a mixed table. Application writes on a mixed table can be redirected to the writable portion of the mixed table.
FIG. 14 illustrates a system 1400 for transitioning from a standard system 1401 to a shared/tenant system 1402. The standard system 1401 includes a database container 1403 that includes a read-only table 1404, a writable table 1405, and a mixed table 1406. The database container 1403 can be associated with a tenant and for purposes of discussion has a name of “tenant”. A transition can be performed to transition the standard system 1401 of the tenant to the shared/tenant system 1402, as described by a flowchart 1407.
At 1408, a shared database container 1410 is created, for inclusion in the shared/tenant system 1402. The database container 1403 included in the standard system 1401 can be used as a tenant database container 1414 in the shared/tenant system 1402. That is, the database container 1403 is a pre-transition illustration and the tenant database container 1414 is a post-transition illustration of a tenant database container used for the tenant.
At 1416, access to the shared database container 1410 is granted to a tenant database user associated with the tenant.
At 1418, a read only table 1420 (e.g., with a path/name of “shared./R/TABR”) is created in the shared database container 1410.
At 1422, data is copied from the read-only table 1404 included in the database container 1403 (e.g., a table object with a path/name of “tenant.TABR”) to the read-only table 1420 (e.g., “shared./R/TABR”).
At 1424, the read-only table 1404 (e.g., “tenant.TABR”) is dropped. Accordingly, the read-only table 1404 is not included in the tenant database container 1414 at the end of the transition.
At 1426, a view 1428 (e.g., “tenant.TABR”) is created in the tenant database container 1414, to provide read access to the read-only table 1420.
At 1430, a read-only table 1432 (e.g., “shared./R/TAB”) is created in the shared database container 1410.
At 1434, data that does not match key patterns defined for tenant content is copied from the mixed table 1406 (e.g., “tenant.TAB”) to the read-only table 1432 (e.g., “shared./R/TAB”). In other words, data that is to be shared among tenants and that is not tenant-specific is copied from the mixed table 1406 to the read-only table 1432 in the shared database container 1410.
At 1436, the data that does not match key patterns defined for tenant content (e.g., data that was copied in operation 1434) is deleted from the mixed table 1406 (e.g., “tenant.TAB”).
At 1438, the mixed table 1406 (e.g., “tenant.TAB”) is renamed to “tenant./W/TAB”, for inclusion in the tenant database container 1414 as a writable table 1440, for storing tenant-specific content. The records that remain in the writable table 1440 should be records that match key patterns defined for tenant content. The writable table 1405 is included, unmodified, in the tenant database container 1414, as a writable table 1442, for storing tenant content post transition.
At 1444, a union view 1446 (e.g., “tenant.TAB”) is created, on the read-only table 1432 (e.g., “shared./R/TAB”) and the writable table 1440 (e.g., “tenant./W/TAB”), to provide unified access to the read-only table 1432 and the writable table 1440.
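For illustration, operations 1408 through 1444 could be sketched in approximate, dialect-dependent SQL issued with access to both containers (the container prefixes "shared" and "tenant" follow the naming used above, and the example key pattern "KF1 like Z %" is assumed; exact syntax for creating a table from a query or for renaming a table varies by database system):

  -- 1418-1426: move the read-only table to the shared container and replace it
  -- with a view in the tenant container.
  CREATE TABLE shared."/R/TABR" AS (SELECT * FROM tenant."TABR");
  DROP TABLE tenant."TABR";
  CREATE VIEW tenant."TABR" AS SELECT * FROM shared."/R/TABR";

  -- 1430-1438: split the mixed table by the tenant key pattern.
  CREATE TABLE shared."/R/TAB" AS
    (SELECT * FROM tenant."TAB" WHERE NOT (KF1 LIKE 'Z%'));
  DELETE FROM tenant."TAB" WHERE NOT (KF1 LIKE 'Z%');
  RENAME TABLE tenant."TAB" TO "/W/TAB";

  -- 1444: provide unified access to the shared and local portions.
  CREATE VIEW tenant."TAB" AS
    SELECT * FROM shared."/R/TAB"
    UNION ALL
    SELECT * FROM tenant."/W/TAB";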
The transition from the standard system 1401 directly to the shared/tenant system 1402 can, due to cross-database-container access, data movement, and other issues, take more time than is desired in some instances. In some implementations, a database object cannot simply be renamed to move the database object from one database container to another database container. The changing of which tables are read-only, mixed, or writable, and the changing of key patterns, can result in data and table movement. For example, the changing of a table to be read-only or mixed can result in data being moved from a tenant database container to a shared database container.
To improve performance during development and testing of an application, before a final deployment, a simulation mode can be used that simulates data sharing for an application and for content deployment. The simulation mode involves storing all database objects in one database container, simulating read-only/shared access, and redirecting write operations for appropriate database objects.
Using one database container can enable renaming of database objects to simulate a transition to a shared system setup. If the application performs as expected in the simulation mode, a transition can be performed to transition the database system from the simulation mode to the shared system setup. As discussed below with respect to FIGS. 15-17, transitioning the database system from the standard system setup to the simulation mode and then from the simulation mode to the shared system setup involves more DDL (Data Definition Language) statements and fewer DML (Data Manipulation Language) statements than transitioning the database system directly to the shared system setup from the standard system setup.
FIG. 15 illustrates a system 1500 with a sharing type of simulated. A deployment control system 1502 can use a deployment tool 1504 to simulate an import of tenant data, by importing data to a simulation database container 1505. For example, the deployment tool 1504 can use a DBI 1506 to deploy data to a writable table 1508 and a writable table 1510 included in the simulation database container 1505. The deployment control system 1502 can use a deployment tool 1514 (which can be the same as or different from the deployment tool 1504) to simulate the importing of shared data, by importing data to the simulation database container 1505. For example, in the simulation, the deployment tool 1514 can use a DBI 1516 (which can be the same or a different interface as the DBI 1506) to deploy shared data to a read-only table 1518 and a read-only table 1520 included in the same simulation database container 1505 that also includes the writable table 1508 and the writable table 1510. A view 1522 provides read access to the read-only table 1520. A union view 1524 provides unified access to the read-only table 1518 and the writable table 1508.
A simulation of the sharing mode can be accomplished by: disabling, using a DBI 1526, application write access to read-only tables, such as the read-only table 1518; redirecting application write queries received for the union view 1524 to the writable table 1508 if the records to be modified match a defined key pattern; providing application read access to the read-only table 1520 using the read-only view 1522; and providing application read access to the read-only table 1518 (and the writable table 1508) using the union view 1524.
FIG. 16 illustrates a system 1600 for transitioning from a standard system 1602 to a simulated system 1604. The transition from the standard system 1602 to the simulated system 1604 is described in a flowchart 1606. At 1608, a read-only table 1610 included in a database container 1612 is renamed from "TABR" to "/R/TABR", as illustrated by a read-only table 1614 in a simulated database container 1616. The database container 1612 included in the standard system 1602 can be used as the simulated database container 1616 in the simulated system 1604. That is, the database container 1612 is a pre-transition illustration and the simulated database container 1616 shows container content post-transition.
At 1618, a view 1620 is created on the read-only table 1614.
At 1622, a “TAB” mixed table 1624 included in the database container 1612 is renamed to “/R/TAB”, as illustrated by a mixed table 1626 included in the simulated database container 1616.
At 1628, a writable "/W/TAB" table 1630 is created in the simulated database container 1616.
At 1632, data is moved from the read-only table 1626 to the writable table 1630 according to tenant content definition. For example, tenant-specific data that matches key patterns defined for tenant content is moved from the read-only table 1626 to the writable table 1630.
At 1634, a union view 1636 is created on the read-only table 1626 and the writable table 1630. A writable table 1638 included in the database container 1612 remains included in the simulated database container 1616, as illustrated by a writable table 1640.
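For illustration, the transition of flowchart 1606 could be sketched in approximate SQL executed entirely within the single container (again assuming the example key pattern "KF1 like Z %"; the table-copy and rename syntax is dialect dependent):

  -- 1608/1618: rename the read-only table and add a view with the original name.
  RENAME TABLE "TABR" TO "/R/TABR";
  CREATE VIEW "TABR" AS SELECT * FROM "/R/TABR";

  -- 1622-1632: split the mixed table inside the same container.
  RENAME TABLE "TAB" TO "/R/TAB";
  CREATE TABLE "/W/TAB" AS (SELECT * FROM "/R/TAB" WHERE 1 = 0);  -- empty structural copy
  INSERT INTO "/W/TAB" SELECT * FROM "/R/TAB" WHERE KF1 LIKE 'Z%';
  DELETE FROM "/R/TAB" WHERE KF1 LIKE 'Z%';

  -- 1634: unified access to both portions.
  CREATE VIEW "TAB" AS
    SELECT * FROM "/R/TAB"
    UNION ALL
    SELECT * FROM "/W/TAB";

Because no data leaves the container, most of the work consists of renames and view definitions, which illustrates why the simulation mode relies more on DDL and less on DML than a direct transition to the shared setup.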
FIG. 17 illustrates a system 1700 for transitioning from a simulated system 1702 to a shared/tenant system 1704. A simulated system 1702 includes a simulated container 1706 that includes a read-only table 1708, a read-only table 1710, a writable table 1712, a view 1714 on the read-only table 1708, a union view 1716 on the read-only table 1710 and the writable table 1712, and a writable table 1717. A transition from the simulated system 1702 to the shared/tenant system 1704 is described in a flowchart 1718.
At 1720, the read-only “/R/TABR” table 1708 is moved to a shared container 1722 included in the shared/tenant system 1704, as illustrated by a read-only table 1724.
At 1726, a view 1727 is recreated for the read-only table 1724 (e.g., "shared./R/TABR"), as shown in a tenant container 1728. For example, the view 1714 may become invalid or be deleted when the read-only table 1708 is moved. The tenant container 1728 is a post-transition illustration of the simulated container 1706. That is, the simulated container 1706 can serve as a container for the tenant once the transition has completed, with the tenant container 1728 showing container contents after completion of the transition.
At 1730, the read-only “/R/TAB” table 1710 is moved from the simulated container 1706 to the shared container 1722, as illustrated by a read-only table 1732.
At 1734, a union view 1736 is recreated on the read-only table 1732 and a writable table 1738 that corresponds to the writable table 1712. For example, the union view 1716 may become invalid or be deleted when the read-only table 1710 is moved to the shared container 1722. A writable table 1740 corresponds to the writable table 1717 (that is, the writable table 1717 remains unchanged and is included in the tenant container 1728 post transition).
FIG. 18 illustrates a system 1800 for transitioning from a shared/tenant system 1802 to a standard system 1804. Such a transition may occur, for example, if cross-container access incurred an unacceptable performance degradation, or if a determination is made that not enough shared content exists to warrant multi-tenancy.
The shared/tenant system 1802 includes a shared database container 1806 and a pre-transition tenant database container 1808. The standard system 1804 includes a post-transition database container 1810. The post-transition database container 1810 is a post-transition illustration of the pre-transition tenant database container 1808. The shared container 1806 is not used in the standard system 1804 post transition.
The transition from the shared/tenant system 1802 to the standard system 1804 is described in a flowchart 1812.
At 1814, a “tenant./W/TABR” table 1815 is created in the post-transition tenant database container 1810. (The “/W/TABR” table name is shown crossed out since the table 1815 is renamed in a later operation).
At 1816, data is copied from a read-only table 1818 in the shared database container 1806 (e.g., “shared./R/TABR”) to the table 1815.
At 1820, the read-only table 1818 (e.g., “shared./R/TABR”) is dropped from the shared database container 1806.
At 1822, a view 1824 that had been configured for the read-only table 1818 is dropped (e.g., the post-transition database container 1810 does not include a view).
At 1826, the “tenant./W/TABR” table is renamed to be “tenant.TABR”, as shown by an updated “TABR” name of the table 1815.
Processing of read-only data described in operations 1814, 1820, 1822, and 1826 can alternatively be performed by the processing described in an alternative flowchart 1828. For example, at 1830, the view 1824 can be dropped. At 1832, the table 1815, with a name of "TABR", can be created in the post-transition database container 1810. At 1834, data can be copied from the read-only table 1818 to the "TABR" table 1815.
Continuing with the flowchart 1812, at 1836, data is copied from a read-only table 1838 in the shared database container 1806 (e.g., “shared./R/TAB”) to a writable table 1840 in the pre-transition tenant container 1808 (e.g., “tenant./W/TAB”). That is, records that had been previously split into the shared read-only table 1838 and the writable table 1840 are now included in the writable table 1840.
At 1842, a union view 1844 is dropped from the pre-transition tenant database container 1808 (e.g., the post-transition database container 1810 does not include a union view).
At 1846, the writable table 1840 (e.g., "tenant./W/TAB") is renamed to "tenant.TAB", as illustrated by a table 1848 in the post-transition database container 1810. A writable table 1850 included in the pre-transition tenant database container 1808 remains unchanged and is included in the post-transition database container 1810, e.g., as a writable table 1852.
FIG. 19 illustrates a system 1900 for transitioning from a simulated system 1902 to a standard system 1904. A transition from a system sharing type of simulated to a system sharing type of standard can occur, for example, if a problem is detected in the simulated system setup, and developers wish to debug the problem in a standard system setup.
The simulated system 1902 includes a pre-transition simulated database container 1906. The standard system 1904 includes a post-transition tenant database container 1908. The post-transition tenant database container 1908 is a post-transition illustration of the pre-transition simulated database container 1906 (e.g., the post-transition tenant database container 1908 and the pre-transition simulated database container 1906 can be the same container, with different content at different points in time).
The transition from the simulated system 1902 to the standard system 1904 is described in a flowchart 1912.
At 1914, a view 1916 on a read-only table 1918 is dropped (e.g., the post-transition tenant database container 1908 does not include a view).
At 1920, the read-only table 1918 is renamed from a name of “/R/TABR” to “TABR”, as illustrated by a read-only table 1922 in the post-transition tenant database container 1908.
At 1924, content is copied from a “/R/TAB” read-only table 1926 to a “/W/TAB” writable table 1928. That is, records that had been previously split into the read-only table 1926 and the writable table 1928 are now included in the writable table 1928.
At 1930, a “TAB” union view 1932 is dropped from the pre-transition simulated database container 1906 (e.g., the post-transition tenant database container 1908 does not include a union view).
At 1934, the writable table 1928 is renamed from "/W/TAB" to "TAB", as illustrated by a writable table 1936 included in the post-transition tenant database container 1908.
Processing of writable data described in operations 1924, 1930, and 1934 can alternatively be performed by the processing described in an alternative flowchart 1938. For example, at 1940, content can be copied from the writable table 1928 to the read-only table 1926. At 1942, the "TAB" view 1932 can be dropped. At 1944, the read-only table 1926 can be renamed from "/R/TAB" to "TAB", to become the writable table 1936.
A writable table 1946 included in the pre-transition simulated database container 1906 remains unchanged and is included in the post-transition tenant database container 1908, e.g., as a writable table 1948.
Deployment by Exchanging Shared Database Container
Changes may need to be deployed to a system during a system's lifetime, such as during maintenance and upgrade phases. Changes can include emergency patches, hot fixes, service packs and release upgrades, for example. Changes can include new content, new tables, modified content, or other changes that may need to be deployed to a shared database container and/or a tenant database container. A deployment, such as a patch, can be a shared-only patch. For example, the patch can include changes to vendor-provided objects, such as reports, classes, modules, or other objects that are only in a shared database container. Other deployments can include changes to be made to data in both a shared database container and in tenant database containers. A given software object can include data stored in a shared database container and/or a tenant database container, for example.
Challenges can arise when deploying changes to a multi-tenancy database system, since if an online shared database container is changed, those changes can be visible to tenant applications. The changes can cause inconsistencies and/or application errors. If shared content referenced or depended on by tenant data is changed, all connected tenants should generally be changed as well to ensure consistency for the tenants. To avoid inconsistencies and errors, tenants can be upgraded, which can involve taking tenants offline. Upgrading of tenants can include deployment of objects that are at least partially stored in a tenant database and post-processing for tenant objects that relate to a shared object.
If a problem occurs with a particular tenant, an attempt can be made to correct the problem during a predetermined downtime window. If the problem cannot be corrected during the available downtime window, the tenant can be reverted to connect to an earlier version of a shared container and brought back online. However, the tenant needing a connection to the earlier version of the shared container can pose a challenge for those tenants who are already connected to a new version of a shared container, if only one shared database container is used. One deployment approach can be to revert all tenants back to a prior version upon an error happening in a deployment of a respective tenant, with a later re-attempt of the deployment for all tenants. Such an approach can cause undesirable downtime for tenants, however.
To solve the issues of undesirable tenant downtime, different types of approaches can be used when deploying changes, to upgrade tenants individually and to temporarily hide changes from tenants who have not yet been upgraded. In a first approach, if a deployment includes changes to a relatively small percentage of tables in a system, such as with an emergency patch, the changes can be made to both an existing production shared database container and existing production tenant database containers. In a second approach, if changes are to be made to a relatively larger number of tables, such as during a feature release, then an approach of exchanging a shared container can be used, so that a new shared database container includes the changed data when it is inserted into the system. The new shared database container can be inserted into the system in parallel with an existing shared database container. Tenant database containers can be changed individually to connect to the new shared database container. Both approaches are described in more detail below.
As mentioned, with an exchanged shared database container approach, an existing shared database container is replaced with a new version and content is adjusted in connected tenants. The replacement approach avoids upgrading the existing shared container in place, which can reduce overall deployment runtime. A new shared database container is deployed, tenants are linked to the new shared database container, and the old shared database container can be deleted. During the deployment, the new shared database container is deployed in parallel to the old shared database container, so that both can be simultaneously accessible by tenants.
Having both shared database containers simultaneously accessible allows the deployment of the new shared container during “uptime”, since tenants can still productively use the old shared database container. Then tenants can be upgraded separately (either individually or potentially multiple tenants in parallel, but each done independently). Individual tenant upgrades can allow each tenant to define an individual downtime window. A problem with one tenant upgrade does not need to prolong downtime of other tenants. Having both shared database containers simultaneously accessible also allows some tenants to temporarily remain on an old version of the software using the old shared database container while some tenants use the new version of the software with the new shared database container.
During an update of a particular tenant, views reading from the old shared database container are dropped and new views are created reading from the new shared database container. Subsequent actions are performed to deploy remaining content to the tenants. For example, if objects are stored partly in the shared database container and partly in the tenant database container, a complement of the objects being delivered with the shared database container can be deployed to the tenants. Additionally, follow-up activities can be performed in the tenant, as described in more detail below.
FIG. 20 illustrates a system 2000 that includes data for objects in both a shared database container 2002 and a tenant database container 2004. Objects used in business applications can be persisted in a set of database tables. Objects can be shipped by a vendor to a customer, and customers can also create custom objects (e.g. classes, configurations, user interfaces). The tables used for the persistency of an object can be all of the same table type (e.g., read-only, mixed, writable). Therefore, some objects may have data that is only in the shared database container 2002 or only in the tenant database container 2004. As another example, an object can store data in tables of different types, such as if several objects re-use a table to store data (e.g., for documentation or text elements). Accordingly, some objects may have data that is in both the shared database container 2002 and the tenant database container 2004. Thus, an object deployment can be split into two parts: a deployment to a shared database container and a deployment to tenant database container(s).
The shared database container 2002 includes a read-only table T1 2006 and a read-only table 2008 T2#1 that stores read-only records for a mixed table named T2. The tenant database container 2004 includes a writable table 2010 and a writable table 2012 that stores writable tenant records for the T2 mixed table.
A style key 2014 shows a dashed-line style 2016 used to mark entries in the shared database container 2002 and the tenant database container 2004 that correspond to a first object that includes both vendor and customer data. For example, a first entry 2018 and a second entry 2020 represent shared vendor data being stored for the first object in the read-only table 2006 and the read-only table 2008, respectively, in the shared database container 2002. A third entry 2024 represents tenant data being stored for the first object in the writable table 2010, in the tenant database container 2004. In this example, the first object does not store data in the writable table 2012.
The style key 2014 shows a dotted line style 2026 used to mark entries 2028 and 2030 in the tenant database container 2004. The entries 2028 and 2030 represent tenant data being stored for a second object in the writable table 2012 and the writable table 2010 respectively. The second object is a customer object that includes writable customer data and no shared read-only data.
FIG. 21A illustrates an example system 2100 for deploying changes to objects in a database system. A deployment tool 2102 can determine, from a deploy data file 2104, which objects have changes to be deployed, which tables are to be updated with changes to a given object, and whether each object has changes to be made to a shared database container 2106, a tenant database container 2108, or both the shared database container 2106 and the tenant database container 2108. For example, the deployment tool 2102 can determine, from information in the deploy file 2104, that an object "R" 2110 includes data in a TR1 table 2112 and a TR2 table 2114. The deployment tool 2102 can determine, from metadata in a sharing type table 2116 (which may exist in the shared database container 2106 or another location), that the TR1 table 2112 and the TR2 table 2114 are read-only tables. Accordingly, the deployment tool 2102 can determine that the object "R" is a completely-shared object (e.g., exists only in the shared database container 2106), as illustrated by note 2118.
As another example, the deployment tool 2102 can determine, from information in the deploy file 2104, that an object “M” 2120 includes data in the TR1 table 2112, a T2 table, and a T3 table 2122. The deployment tool 2102 can determine, from metadata in the sharing type table 2116, that the TR1 table 2112 is a read-only table and that the T3 table 2122 is a local table. The deployment tool 2102 can determine that the T2 table is a split table (and thus implemented as a read-only table 2123 in the shared database container 2106 and a writable table 2124 in the tenant database container 2108). The deployment tool 2102 can determine that content for the object “M” is split, between the shared database container 2106 and the tenant database container 2108, as illustrated by note 2125.
As yet another example, the deployment tool 2102 can determine, from information in the deploy file 2104, that an object "L" 2126 includes data in an A1 table 2128, an A2 table 2130, an A3 table 2132, and an A4 table 2134. The deployment tool 2102 can determine, from metadata in the sharing type table 2116, that the A1 table 2128, the A2 table 2130, the A3 table 2132, and the A4 table 2134 are each local tables. Accordingly, the deployment tool 2102 can determine that the object "L" is a completely-tenant object (e.g., exists only in the tenant database container 2108), as illustrated by note 2136.
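For illustration, the classification of the objects "R", "M", and "L" can be driven by a lookup of the sharing types of their tables; assuming a hypothetical layout of the sharing type table 2116 with one row per table, such a lookup could be sketched as:

  -- Hypothetical layout: a table name plus a sharing type of 'RO' (read-only),
  -- 'RW' (local/writable), or 'RO+WL' (mixed/split).
  SELECT TABNAME, SHARING_TYPE
    FROM SHARING_TYPE
   WHERE TABNAME IN ('TR1', 'TR2', 'T2', 'T3', 'A1', 'A2', 'A3', 'A4');

If all tables of an object are read-only, the object is completely shared; if all are local, the object is completely tenant-local; any other combination indicates that the object's content is split between the shared database container and the tenant database container.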
During deployment, the deployment tool 2102 can track deployment status and can know what objects have been deployed, whether partially or completely. For example, the deployment tool 2102 can update a deploy status table 2138 that indicates that, at a current point in time, the object "R" 2110 has been completely deployed, the object "M" 2120 has been partially deployed, and the object "L" has not yet been deployed.
When using the exchanged shared database approach, objects that exist only in the shared database container 2106 are updated when a new shared database container is installed. Accordingly, and as illustrated by note 2140, the deployment tool 2102 does not deploy content to the existing shared database container 2106, rather, shared database container content is available in the new shared database container (not shown in FIG. 21A). The deploy status table 2138 can be updated and populated when preparing the new shared database container, to indicate, for example, that the completely-shared object “R” is already deployed (e.g., already in the new shared database container), that the object “M” is partially-deployed (e.g., shared portions of the object “M” are already in the new shared database container at the start of the deployment, in the TR1 table 2112 and the T2 table 2123), and that the object “L” has not yet been deployed. The remaining part of the object “M”, and the object “L” will be deployed as part of a tenant deployment.
A deployment to a tenant can include deploying portions of an object that are stored in a local table or in a local part of a mixed table. For example, deployment for the object "M" to a tenant can include deployment of data to the writable table 2124 and/or to the local table 2122. Deployment for the object "L" to a tenant can include deployment to the local tables A1 2128, A2 2130, A3 2132, and A4 2134. Tenant deployment can also include dropping of views to the shared database container 2106 (e.g., views 2142, 2144, 2146, and 2148) and the updating of union views, such as a union view 2150.
FIG. 21B illustrates an example system 2180 for deploying changes to objects in a database system. The system 2180 is an illustration of the system 2100 when a deployment uses an approach of modifying, rather than exchanging, an existing shared database container (e.g., during deployment of an emergency patch). As indicated by note 2184, a deployment tool 2186 (which can be the same as the deployment tool 2102) can deploy changes to objects that are completely or partially stored in the shared database container 2106. For example, deployment to the shared database container 2106 can include modification, in place, of the read-only table 2114 and the read-only table 2112 when deploying the object “R” and modification, in place, of the read-only table 2114 and the read-only table 2122 when deploying the object “M”. The deployment status table 2138 can be updated as the deployment process proceeds. Deployment of patches is described in more detail below.
FIG. 22 illustrates an example system 2200 for upgrading a multi-tenancy database system 2202 using an exchanged shared database container approach. The multi-tenancy database system 2202 includes a first tenant database container 2204 and a second tenant database container 2206 that are each connected to a shared database container 2208, with each of the first tenant database container 2204, the second tenant database container 2206 and the shared database container 2208 at a particular version (e.g., version “1708”). A first application server 2210, also at the version “1708”, sends queries to the first tenant database container 2204, for data in the first tenant database container 2204 and/or in the shared database container 2208. Similarly, a second application server 2212, also at the version “1708”, sends queries to the second tenant database container 2206, for data in the second tenant database container 2206 and/or in the shared database container 2208.
When a new version of an application and/or database is to be deployed, a new shared database container that includes shared database container changes as compared to a current version can be deployed, as illustrated by a new shared database container 2220, at a new version (e.g., version “1711”), in a database system 2222. The new shared database container 2220 is included in the database system 2222 in parallel along with a current-version (e.g., version “1708”) shared database container 2224. A naming convention can be used to name the new shared database container 2220 and the current-version shared database container 2224, to ensure uniqueness of shared database container names. For example, shared database containers can be named using a combination of a product name and a version number.
Tenants can be linked, one at a time, to the new shared database container 2220. For example, a second application server 2226 and a second tenant database container 2228 have been upgraded to the new version (e.g., version "1711"), with the second tenant database container 2228 now linked to the new shared database container 2220. A first application server 2230 and a first tenant database container 2232 are still at the old version (e.g., version "1708"), and the first tenant database container 2232 is still connected to the current-version shared database container 2224. The first tenant database container 2232 can be identified as a next tenant database container to upgrade.
For example, a database system 2240 includes a first tenant database container 2242 and a first application server 2244 now at the new version (e.g., version "1711"), with the first tenant database container 2242 now connected to the new-version shared database container. The old shared database container (e.g., what was the current-version shared database container 2224) has been dropped and is not included in the database system 2240, since all tenants are now connected to the new shared database container.
FIG. 23 illustrates an example system 2300 for deploying a new service pack to a multi-tenancy database system. The system 2300 includes an existing shared database container 2302 at a version of “1231” and service pack two (SP2). An application server 2304 and a tenant database container 2306 for a first tenant are also at the version “1231” and SP2. The existing shared database container 2302, the tenant database container 2306, and respective included components, are illustrated in a solid line, to denote being at version “1231” and SP2. A view 2308 provides access to a TABR read-only table 2310 in the existing shared database container 2302. A second tenant served by an application server 2312 has been upgraded to a new service pack level (SP3), as described below.
A deployment tool 2314 can attach, to the system 2300, a new shared database container 2316 that has been configured to be at a next service pack (SP3). The new shared database container 2316 includes a new TABR read-only table 2318 that includes changes for the new service pack. The deployment tool 2314 can, when upgrading the second tenant, drop, from a tenant database container 2319, a view to the TABR read-only table 2310 in the existing shared database container 2302 and add a new view 2320 to the new TABR read-only table 2318 in the new shared database container 2316. The deployment tool 2314 can import changes to a writable table 2322, so that the writable table 2322 is at the new service pack level. The tenant database container 2319, the new shared database container 2316, and respective included components, are illustrated in a dashed line to denote being at SP3. The deployment tool 2314 can, at a later time, perform deployment operations similar to those done for the second tenant to upgrade the first tenant, so that both are at SP3. The existing shared database container 2302 can be dropped after all tenants have been upgraded.
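For illustration, the relinking of the second tenant to the new shared database container 2316 could be sketched as follows, where shared_sp2 and shared_sp3 are hypothetical names under which the existing and new shared database containers are visible to the tenant:

  -- Drop the view that points at the SP2 shared container (2302) and recreate it
  -- against the SP3 shared container (2316).
  DROP VIEW "TABR";
  CREATE VIEW "TABR" AS SELECT * FROM shared_sp3."TABR";

After the view has been recreated, the remaining tenant-local changes (e.g., for the writable table 2322) can be imported from a transport file, as described with respect to FIG. 24.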
FIG. 24 illustrates an example system 2400 for maintenance of a database system 2401. In preparation for a deployment, a service pack (SP) master 2402 can be used to create a delivery package. For example, the SP master 2402 may have been used to create a delivery package 2404 when deploying a SP1 service pack to the database system 2401. A SP1 shared database container 2406 and tenant database containers 2408, 2410, and 2412 are each at the SP1 level, for example. The SP1 shared database container 2406 and the tenant database containers 2408, 2410, and 2412 can be referred to as a cluster. The delivery package 2404 may have been created for a past deployment to the cluster. The delivery package 2404 includes a copy 2414 of the SP1 shared database container 2406 and a transport file 2416 that includes changes that had been imported to the tenant database containers 2408, 2410, and 2412 during the deployment of the SP1 service pack.
The SP master 2402 can create a new delivery package 2418 that includes a new SP2 shared database container 2420 and a transport file 2422 that includes changes for a new service pack (SP2). The new SP2 shared database container 2420 can be attached to the database system 2401, as illustrated by an attached SP2 shared database container 2424.
Objects, such as views, in the tenant database containers 2408, 2410, and 2412 can be detached from the SP1 shared database container 2406 and connected to the attached SP2 shared database container 2424. The transport file 2422 can be applied to the tenant database containers 2408, 2410, and 2412, to upgrade them to a SP2 level. After all tenants have been upgraded, the SP1 shared database container 2406 can be dropped.
FIG. 25 illustrates an example system 2500 for upgrading a multi-tenancy system 2502 to a new version. The multi-tenancy system 2502 is in a state of partial completion of upgrading from an old “1708” version to a new “1711” version. As shown in the system 2500, at a same given time, some tenants can use, in production, a prior (e.g., “start”) release shared database container, while other tenants use a new (e.g., “target”) release shared database container, while still other tenants are offline and being upgraded to the new release.
For example, the multi-tenancy system 2502 includes a version "1708" shared database container 2504. Tenant database containers 2506 and 2508 (e.g., "Tenant 01" and "Tenant 02", respectively) are also at version "1708" and are connected to the version "1708" shared database container 2504. Tenant database containers 2510 and 2512 (e.g., "Tenant 05" and "Tenant 06", respectively) have been converted to the version "1711" and are now connected to a version "1711" shared database container 2513 that has been added to the multi-tenancy system 2502 during the upgrade. Tenant database containers 2514 and 2516 (e.g., "Tenant 03" and "Tenant 04", respectively) are currently being upgraded.
An overview of an upgrade process for a given tenant is outlined in a flowchart 2520. At 2522, the given tenant is backed up at a beginning of a downtime period. For example, a backup 2524 of the tenant database container 2514 and a backup 2526 of the tenant database container 2516 have been created.
At 2528, a link to the new (e.g., version “1711”) shared database container 2513 is established. For example, new views can be established, as described in more detail below in FIGS. 74-79.
At 2530, a delta is deployed to the tenant. The delta can be included in a transport file, and can include changes to be applied to tables in the given tenant database container.
At 2532, a determination is made as to whether the deployment succeeded. If the deployment did not succeed, processing operations 2534 are performed. Processing operations 2534 include: restoring, at 2536, the backup (e.g., at version “1708”, such as the backup 2524 for the tenant database container 2514); establishing a link, at 2538, to the old (e.g., “version 1708”) shared database container 2504; and releasing, at 2540, the given tenant on the old version “1708” to the customer. Establishing the link, at 2538, can include restoring views to tables in the “version 1708” shared database container 2504. Deployment can be re-attempted at a later time. If the deployment succeeded, the tenant is released, at 2542, on the new version “1711” to the customer.
FIGS. 26 to 31 progressively illustrate, in further detail, various stages of an upgrade process for upgrading a database system to a new version, using an exchanged shared database container approach. The exchanged shared database container approach can also be used for deployment of a service pack or patch.
FIG. 26 illustrates an example system 2600 before deployment of a new database version using an exchanged shared container approach. The system 2600 includes a shared database container 2602 that includes a current version of a read-only table 2604 that is a shared portion of a mixed table named “TAB”. The shared database container 2602 also includes a read-only table 2606. The system 2600 includes a first tenant database container 2608 for a first tenant and a second tenant database container 2610 for a second tenant.
The first tenant database container 2608 includes a view 2612 to the read-only table 2604 (illustrated as an arrow 2614), a writable table 2616 that is a local portion of the mixed table, a union view 2618 providing unified access to the read-only table 2604 and the writable table 2616, a writable table 2620, and a view 2621 to the read-only table 2606 (illustrated as an arrow 2622). Similarly, the second tenant database container 2610 includes a view 2623 to the read-only table 2604 (illustrated as an arrow 2624), a writable table 2626 that is a local portion of the mixed table, a union view 2628 providing unified access to the read-only table 2604 and the writable table 2626, a writable table 2630, and a view 2631 to the read-only table 2606 (illustrated as an arrow 2632).
FIG. 27 is an illustration of a system 2700 that is upgraded in part by exchanging a shared database container. The system 2700 is a view of the system 2600 during a first set of deployment operations, for preparing a shared database container. In summary, a new shared database container 2704 can be deployed in parallel to an existing, in-production shared container (e.g., the shared database container 2602), without disrupting the operation of the existing shared database container 2602.
The first set of deployment operations, for preparing the shared database container 2704, are outlined in a flowchart 2705.
At 2706, a determination is made as to whether the deployment is allowed and whether other activity is running that would prevent a deployment. If the deployment is not allowed, or other activity is running that is not allowed during a deployment, the deployment ends.
If the deployment is allowed, the new (e.g., version 2) shared database container 2704 is copied and attached to the database, at 2707. The new shared database container 2704 is a container that is included in a delivery package and created at the vendor; it contains a new software version (e.g., a copy of the shared database container 2420, to be brought together with the tenant part delivered with the delta deployment package 2807). The new shared database container 2704 includes a read-only table 2708 that is a copy of a shared table included in the service pack master 2402.
At 2712, target connection information (e.g., URL, user name, password) is provided to tenants. For example, the target connection information, such as an address of the new shared database container 2704, can be made available to the first tenant database container 2608 and the second tenant database container 2610. Information about the new shared database container 2704 can be published to the tenants, so the tenants can read new shared database container content. Read-only access to objects in the shared container can be granted to tenants.
As another example, the target connection information can be provided to a deployment tool that will respectively upgrade the first tenant and the second tenant. As indicated by indicators 2714 and 2715, respectively, the first tenant database container 2608 and the second tenant database container 2610 can be designated as version two (“V2”) destinations (e.g., upgrade targets).
At 2718, information is provided from the new shared database container 2704, such as to the deployment tool, including a list of shared tables, information about component versions (e.g., service pack levels), and information about deployed transports and import state. The deployment process continues as described below for FIG. 28.
FIG. 28 is an illustration of a system 2800 that is upgraded in part by exchanging a shared database container. The system 2800 is a view of the system 2600 during a second set of deployment operations, for deploying to a first tenant. The second set of operations are outlined in a flowchart 2802.
At 2804, connectivity and new shared space information is obtained. For example, connectivity information to connect the first tenant database container 2608 to the new shared database container 2704 can be provided to the first tenant database container 2608 and/or to a deployment tool. For instance, an address of the new shared database container 2704 can be provided to the deployment tool.
At 2806, a new shared space version and matching service pack level is determined. For example, the deployment tool can ensure that a version of the new shared database container 2704 matches a version of a delta deployment package 2807. The delta deployment package 2807 is, for example, a file that was prepared before initiation of the deployment. Creating the delta deployment package 2807 can include identifying objects that are partially included in the new shared database container 2704 and computing the remaining deployment parts (i.e., the local content portions of those objects and the changes to those local content portions that are to be part of the deployment). Creating the delta deployment package 2807 can also include identifying objects that are completely stored in tenant containers and identifying changes to those objects that are to be part of the deployment.
At 2808, “drop/create” or “alter” statements for views reading from shared tables are computed. For example, drop statements for views to the read-only table 2606 and the read-only table 2604 can be prepared, such as statements dropping the view 2631 (illustrated as the arrow 2632), the view 2621 (illustrated as the arrow 2622), the view 2612 (illustrated as the arrow 2614), and the view 2623 (illustrated as the arrow 2624). Respective create view statements for creating new views in the first tenant database container 2608 and in the second tenant database container 2610 to the read-only table 2708 and the read-only table 2710 can be prepared.
In general, the new shared database container 2704 can include more or fewer tables than the shared database container 2602. Therefore, the set of views to be created depends on the contents of the new shared database container 2704. The new shared database container 2704 can include an administrative table (not shown) that includes a list of tables included in the new shared database container 2704. The administrative table can be read, so that statements can be prepared that will, when executed, drop views to all tables in the shared database container 2602 and create new views for all tables in the new shared database container 2704.
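One way to compute such statements is sketched below in Python. The administrative table contents are represented simply as lists of table names, and the schema name "SHARED_V2" is a hypothetical placeholder for the new shared database container; for mixed tables, the created view would in practice use the published name of the shared portion rather than the plain table name.

    def compute_view_statements(old_shared_tables, new_shared_tables,
                                new_shared_schema="SHARED_V2"):
        # Drop the views to every table of the old shared database container
        # and create views to every table of the new one, as read from the
        # administrative table of each container.
        statements = [f'DROP VIEW "{name}"' for name in old_shared_tables]
        for name in new_shared_tables:
            statements.append(
                f'CREATE VIEW "{name}" AS '
                f'SELECT * FROM "{new_shared_schema}"."{name}"')
        return statements

    # Example: the new shared database container shares one additional table.
    print(compute_view_statements(["TABR", "TAB"], ["TABR", "TAB", "TABN"]))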
At 2810, a target destination and table names are read, and statements are computed, for data to be transported to tenant database containers.
At 2812, structure adjustment(s) to local tables are computed. For example, the deployment can include changes to the writable table 2616 and/or the writable table 2620 in the first tenant database container 2608. As another example, the deployment can include changes to the writable table 2626 and/or the writable table 2630 in the second tenant database container 2610.
Statement(s) (e.g., alter statement(s)) to adjust the structure of these writable/local tables can be computed, for later execution, as described below. If the structure of the writable table 2616 is to be adjusted, a statement to re-create the union view 2618 can be prepared, to create a view that includes the updated structure of the writable table 2616. The deployment process continues as described below for FIG. 29.
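The following Python sketch illustrates how such alter statements and a re-created union view might be computed. The column definitions and the names TAB_W (local portion) and TAB_RO_VIEW (view onto the shared portion) are hypothetical examples, and the exact DDL syntax depends on the underlying database.

    def compute_structure_adjustments(table, current_cols, target_cols):
        # current_cols / target_cols: lists of (column name, SQL type) tuples.
        # Only additive changes are handled in this simplified sketch.
        existing = {name for name, _ in current_cols}
        missing = [(n, t) for n, t in target_cols if n not in existing]
        return [f'ALTER TABLE "{table}" ADD ("{n}" {t})' for n, t in missing]

    def recreate_union_view(view, shared_view, local_table, columns):
        # Unified access to the shared (read-only) portion and the local portion.
        cols = ", ".join(f'"{c}"' for c in columns)
        return (f'CREATE VIEW "{view}" AS '
                f'SELECT {cols} FROM "{shared_view}" UNION ALL '
                f'SELECT {cols} FROM "{local_table}"')

    print(compute_structure_adjustments(
        "TAB_W", [("K", "NVARCHAR(10)")],
        [("K", "NVARCHAR(10)"), ("F2", "NVARCHAR(40)")]))
    print(recreate_union_view("TAB", "TAB_RO_VIEW", "TAB_W", ["K", "F2"]))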
FIG. 29 is an illustration of a system 2900 that is upgraded in part by exchanging a shared database container. The system 2900 is a view of the system 2600 during a third set of deployment operations, for completing a deployment to a first tenant. The third set of operations are outlined in a flowchart 2902.
At 2904, previously-prepared statements are executed. For example, previously-prepared drop-view statements, to drop views to the shared database container 2602 (e.g., the views 2612 and 2621 illustrated as the arrows 2614 and 2622, respectively, on previous figures) can be executed, by a transport control component 2905. New views, to the read-only table 2708 and the read-only table 2710 in the new shared database container 2704, can then be created in the first tenant database container 2608 using previously-prepared create-view statements. For example, a view 2906 to the read-only table 2708 can be created (with the connection illustrated as an arrow 2908). As another example, a view 2910 to the read-only table 2710 can be created (with the connection illustrated as an arrow 2912).
The transport control component 2905 can also execute previously-prepared alter statements, to adjust structures of local tables, as illustrated by an updated writable table 2914 and an updated writable table 2916. If the structure of the writable table 2914 is new and/or the structure of the view 2910 is new (e.g., as compared to the read-only view 2612), the transport control component 2905 can execute a statement to create a new union view 2918 to replace the union view 2618.
At 2920, local content is deployed. For example, a transport program 2922 can copy data from the delta deployment package 2807 to the updated writable table 2916. As another example, the transport program 2922 can copy data from the delta deployment package 2807 to the updated writable table 2914. In general, the local content can include content that is the local portion of objects that are partially stored in the new shared database container 2704 and partially stored in the first tenant database container 2608. Local content can also include content for objects that are completely stored in the first tenant database container 2608 and not stored in the new shared database container 2704.
At 2926, a status update is written to local patch tables. For example, status information indicating that the first tenant has been upgraded to version two can be stored, such as in an administrative table in the new shared database container 2704 (not shown) or in another location.
At 2928, the first tenant is registered at a target shared space. For example, the first tenant database container 2608 can be registered, in an administrative table in the new shared database container 2704, as being connected to the new shared database container 2704.
At 2930, the first tenant is de-registered from the source shared space. For example, an entry can be deleted (or marked as inactive) in an administrative table in the shared database container 2602, with the deletion or the marking as inactive indicating that the first tenant database container 2608 is no longer connected to the shared database container 2602.
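As a simplified sketch of this bookkeeping, the statements below register the tenant in an administrative table of the new shared database container and de-register it from the old one. The table name TENANT_REGISTRY, its columns, and the schema names are hypothetical.

    def registration_statements(tenant_id, old_shared_schema, new_shared_schema):
        return [
            # 2928: register the tenant at the target shared space
            f'INSERT INTO "{new_shared_schema}"."TENANT_REGISTRY" '
            f"(TENANT_ID, STATUS) VALUES ('{tenant_id}', 'CONNECTED')",
            # 2930: de-register the tenant from the source shared space
            f'DELETE FROM "{old_shared_schema}"."TENANT_REGISTRY" '
            f"WHERE TENANT_ID = '{tenant_id}'",
        ]

    print(registration_statements("TENANT_01", "SHARED_V1", "SHARED_V2"))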
At 2932, version one destination information is deleted. The deployment process continues as described below for FIG. 30.
FIG. 30 is an illustration of a system 3000 that is upgraded in part by exchanging a shared database container. The system 3000 is a view of the system 2600 during a fourth set of deployment operations, for deploying to a second tenant. Deployment of the second tenant can include a same set of operations as performed for the first tenant, as described above for FIG. 28 and FIG. 29.
Deployment for the second tenant can include the dropping, in the second tenant database container 2610, of views to the shared database container 2602 (e.g., the views 2623 and 2631, illustrated as the arrows 2624 and 2632, respectively, on previous figures). Deployment for the second tenant can include the creating of new views, to the read-only table 2708 and the read-only table 2710, in the new shared database container 2704, as illustrated by a new view 3002 and arrow 3004, and a new view 3006 and arrow 3008.
Deployment for the second tenant can include the adjustment of and deployment of content to local tables, as illustrated by an updated writable table 3010 and an updated writable table 3011. An updated union view 3012 can be created to reflect updated structure(s) of the updated writable table 3010 and/or the new view 3002. Once all tenants have been upgraded, the shared database container 2602 can be dropped, as illustrated by an “X” 3014.
FIG. 31 is an illustration of a system 3100 that is upgraded in part by exchanging a shared database container. The system 3100 is a view of the system 2600 in a final state, after deployment to all tenants, including the first tenant database container 2608 and the second tenant database container 2610, has been completed. The shared database container 2602 has been dropped and is no longer included in the system 3100. The shared database container 2602 can be dropped, for example, after test(s) have been performed to ensure that all tenants are using the new shared database container 2704. Completing a deployment can also include performing other tests, such as to ensure that all parts of all objects to be changed in the new version have been deployed.
Other finalization tasks can include triggering after-deployment activities in each tenant database container for changed shared content, including performing post actions for objects. Post actions can include invalidating table buffers (e.g., that store previously read shared content) in an application server 3102 and/or an application server 3104 (which may be different servers or the same server) for tables that have been switched to read from the new shared database container 2704, invalidating previously-compiled objects, triggering re-compilation of objects to now read from the new shared database container 2704, re-generating tenant-specific objects that depend on shared content and tenant content, and calling other application-specific follow-up actions related to the deployment of changed content in a tenant. After-deployment actions can ensure that objects are consistent with deployed content.
Patching Content Across Shared and Tenant Database Containers
FIG. 32 illustrates a system 3200 for deploying changes to objects. As mentioned above, rather than exchange a shared database container 3202, for some deployments, such as those for a patch that include changes to fewer than a predetermined threshold number of tables, changes can be applied in place to both the shared database container 3202 and tenant database containers (e.g., a first tenant database container 3204 and a second tenant database container 3206). Deployment can be performed in two phases: 1) deployment to the shared database container 3202; and 2) deployment to the tenant database containers 3204 and 3206, which can be performed independently. Independent tenant deployments can enable sequential and de-coupled deployments.
A deployment tool 3208 can ensure that a patch is completely deployed both to the shared database container 3202 and to each tenant database container 3204 and 3206, including ensuring that any planned follow-up actions have been performed for all tenants. The deployment tool 3208 can identify a deployment file entry 3209 in a deployment package 3210 for a given object, and determine that the given object includes data stored in T1, T2, and T3 tables. The deployment tool 3208 can access metadata 3212 that indicates that the T1 table is a shared read-only table (and thus resides in the shared database container 3202, e.g., as a read-only table 3214), the T2 table is a split table (and thus partially resides in the shared database container 3202, e.g., as a read-only table 3216), and the T3 table is a tenant-local table (and thus resides in the respective tenant database containers, e.g., as a local table 3218 and a local table 3220).
The deployment tool 3208 can identify, based on the metadata 3212 and the deployment file entry 3209, the given object as at least partially included in the shared database container 3202. The deployment tool 3208 can deploy, for the given object, changes for the portions of the given object that reside in the shared database container 3202, as illustrated by an entry 3222 in the T1 read-only table 3214 and an entry 3224 in the T2 read-only table 3216. The entry 3222 can be populated with data from an entry 3226 in the deployment file entry 3209. Similarly, the entry 3224 can be populated with data from an entry 3228 in the deployment file entry 3209. The deployment tool 3208 can store a record, in a status table, that indicates that the given object is partially deployed.
The deployment tool 3208 can next perform the deployment to tenant phase, which can include a deployment to the first tenant database container 3204 and a deployment to the second tenant database container 3206. The deployments to the tenant database containers can operate independently, and may happen sequentially, or in parallel. The deployment tool 3208 can identify the given object associated with the entry 3209 as an object that has been partially deployed, based on the entry 3209 and the metadata 3212 indicating that the given object includes data in the T3 tenant-local table. The deployment tool 3208 can determine that a portion of the given object that is stored in an entry 3230 in the deployment file entry 3209 has not yet been deployed. The deployment tool 3208 can deploy the entry 3230, to the first tenant database container 3204 and the second tenant database container 3206, as illustrated by an entry 3232 and an entry 3234.
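A much-simplified Python sketch of this routing is given below. The sharing-type metadata is modeled as a plain dictionary, and deployment entries as (table name, record) pairs; neither reflects the actual format of the metadata 3212 or the deployment package 3210.

    # Hypothetical sharing-type metadata for the example tables T1, T2, T3.
    SHARING_TYPE = {"T1": "shared-read-only", "T2": "split", "T3": "local"}

    def split_deployment(entries):
        # Assign each deployment entry to the shared phase or the tenant phase,
        # based on the sharing type of its target table.
        shared_phase, tenant_phase = [], []
        for table, record in entries:
            sharing = SHARING_TYPE.get(table, "local")
            if sharing in ("shared-read-only", "split"):
                # Deployed once, into the shared database container. For split
                # tables, the key pattern decides which records are shared.
                shared_phase.append((table, record))
            if sharing == "local":
                # Deployed into every tenant database container individually.
                tenant_phase.append((table, record))
        return shared_phase, tenant_phase

    entries = [("T1", {"K": "A"}), ("T2", {"K": "B"}), ("T3", {"K": "C"})]
    print(split_deployment(entries))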
Other deployment tasks that can be performed by the deployment tool 3208 include identifying objects that have not been deployed to the shared database container (e.g., objects that reside only in local tenant tables), and deploying changes to those objects. Finalization tasks performed by the deployment tool 3208 can include invoking actions to operate on deployed content, which can include, for example, triggering buffer invalidation and buffer refresh, or compiling deployed code. Finalization tasks can also include ensuring that all parts of all objects to be included in the deployment have been deployed.
FIG. 33 illustrates a system 3300 for deploying a patch using a hidden preparation of a shared database container. As described above, tenant-independent deployments may be desired, so that tenants can each define their own downtime window and so that, if one tenant deployment has an issue, not all tenant deployments need to be reverted. Deploying a new shared database container in parallel to an existing shared database container is one approach. For smaller changes, preparing hidden versions of individual tables in the existing shared database container can be another approach. This hidden-deployment approach can reduce downtime by providing a tenant-individual fallback option. Hidden changes are initially invisible to tenants, which can still productively use the current-version tables in the shared database container until each tenant is individually deployed and switched over to use the new table versions.
The system 3300 includes sub-systems 3302, 3304, 3306, and 3308 which provide an overview of the progression of the deployment. Other figures below give further detail to each deployment stage. The sub-system 3302 includes a shared database container 3310, a first tenant database container 3312 for a first tenant, and a second tenant database container 3314 for a second tenant.
The shared database container 3310 includes a read-only table 3316 that is at a first version, with a name of “TABR #1”. Although only one table is illustrated in the shared database container 3310, the shared database container 3310 can include other tables. The first tenant database container 3312 and the second tenant database container 3314 respectively include a read-only view 3318 and a read-only view 3320, each providing read access to the read-only table 3316 for the respective tenant. The first tenant database container 3312 and the second tenant database container 3314 also respectively include a writable table 3322 and a writable table 3324.
In a first deployment stage, a patching system 3326 creates a clone/copy of the read-only table 3316, illustrated as a new read-only table 3328. The new read-only table 3328 has the same structure as the read-only table 3316.
In a second deployment stage, and as illustrated in the sub-system 3304, the patching system 3326 and/or a deployment tool can modify the new read-only table 3328 by importing changes to the new read-only table 3328 for a patch to be deployed to the sub-system 3302. The new read-only table 3328 is displayed in dashed lines to signify that the new read-only table 3328 is at a new version that includes the patch.
In a third deployment stage, and as illustrated in the sub-system 3306, the first tenant is switched to be compatible with, and connected to, the updated shared database container 3310. For example, the view 3318 is dropped and a new view 3330 is created to the new read-only table 3328. A structure of the writable table 3322 can be updated, as illustrated by an updated writable table 3332.
Similarly, and as illustrated in the sub-system 3308, the second tenant is switched to be compatible with, and connected to, the updated shared database container 3310. For example, the view 3320 is dropped and a new view 3334 is created to the new read-only table 3328. A structure of the writable table 3324 can be updated, as illustrated by an updated writable table 3336.
In a fourth deployment stage, the read-only table 3316 is dropped, as illustrated by an “X” 3338, since there are now no tenants connected to the read-only table 3316. FIGS. 34-39 below discuss a more involved example of deployment using hidden preparation of a shared database container, including the use of a mixed table, and more detailed discussions of each operation.
FIG. 34 illustrates an example system 3400 before deployment of a patch. The system 3400 includes a shared database container 3402 that includes a current version (e.g., version #1) of a read-only table 3403 that is a shared portion of a mixed table named “TAB”. The system 3400 includes a first tenant database container 3404 and a second tenant database container 3406. The first tenant database container 3404 includes a view 3408 to the read-only table 3403 (illustrated as an arrow 3409), a writable table 3410 that is a local portion of the mixed table, a union view 3412 providing unified access to the read-only table 3403 and the writable table 3410, and a writable table 3414. Similarly, the second tenant database container 3406 includes a view 3416 to the read-only table 3403 (illustrated as an arrow 3417), a writable table 3418 that is a local portion of the mixed table, a union view 3420 providing unified access to the read-only table 3403 and the writable table 3418, and a writable table 3422.
FIG. 35 illustrates a system 3500 for preparation of a shared database container during a deployment of a patch to a database system. The system 3500 is a view of the system 3400 after a first set of deployment operations have been completed. The first set of deployment operations are outlined in a flowchart 3502. At 3504, a patch system 3506 reads a deployment package 3508 to identify shared tables to which content is to be deployed. For example, the patch system 3506 can identify, based on data in the deployment package 3508, a mixed table named “TAB” 3509 for which a patch is to be deployed to the read-only portion of the mixed table in the shared database container 3402. As described above, a current version of the read-only portion of the “TAB” table is included in the shared database container 3402 as a read-only table 3403.
At 3510, the patch system 3506 clones the read-only table 3403 to create a read-only table 3512 that has the same structure as the read-only table 3403, and publishes a name of the read-only table 3512 to the deployment tool 3516 running at the shared deployment. The read-only table 3512 is named with a target name of “TAB #2”, and is shown with dashed lines to signify that the read-only table 3512 is a new version of the read-only table 3403. An administration table can be updated to publish the name of the read-only table 3512. The published name can be used in a later stage when tenants are deployed and connected to the read-only table 3512.
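The cloning and publishing step can be sketched as follows. The CREATE TABLE ... LIKE syntax, the PUBLISHED_TABLES administrative table, and the schema name are assumptions; the version suffix follows the <table-name>#<version> pattern described later, written here without a space.

    def clone_and_publish(base_name, current_version, shared_schema="SHARED"):
        source = f"{base_name}#{current_version}"
        target = f"{base_name}#{current_version + 1}"
        return [
            # 3510: create a hidden copy with the same structure and content
            f'CREATE TABLE "{shared_schema}"."{target}" '
            f'LIKE "{shared_schema}"."{source}" WITH DATA',
            # Publish the target name for use by later tenant deployments
            f'INSERT INTO "{shared_schema}"."PUBLISHED_TABLES" '
            f"(BASE_NAME, TARGET_NAME) VALUES ('{base_name}', '{target}')",
        ]

    print(clone_and_publish("TAB", 1))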
At 3514, a deployment tool 3516 deploys (e.g., imports) data from the deployment package 3508 to the read-only table 3512, to deploy the patch to the read-only table 3512. The read-only table 3512 is read-only with respect to tenant applications, but the deployment tool 3516 has write access to the read-only table 3512. The deployment tool 3516 can determine content that is to be deployed to the shared database container 3402 only (e.g., and not to tenant database containers).
At 3518, deployment status is stored (e.g., in an administrative table in the shared database container 3402 (not shown)). Deployment status can include an indication that the patch to the TAB table is partially deployed (e.g., changes to the read-only sharable portion of the TAB mixed table have been made in the shared database container 3402 but the writable portion of the TAB mixed table has not yet been updated). The administrative table can include information that indicates, for example, that changes to the writable table 3414 (e.g. named “TAB2”), and other tables, have not yet been deployed.
At 3520, the name of the read-only table 3512, with target name of “TAB #2”, is published to the patch system 3506 running at the tenant deployment, or otherwise made available, as the name of the new version of the read-only table 3403. The published name is used in later deployment operations, as described in more detail below. The read-only table 3512 remains hidden, and unused by tenant applications, until later operations have been completed.
FIG. 36 illustrates a system 3600 for deploying a patch to a tenant database container. The system 3600 is a view of the system 3400 during a second set of deployment operations, for deploying the patch to the first tenant database container 3404. The second set of deployment operations are outlined in a flowchart 3602. Before execution of the second set of operations, a downtime period can be initiated for the first tenant database container 3404.
At 3604, a determination is made that content from the deployment package 3508 has been prepared (e.g., deployed to as hidden) in the shared database container 3402.
At 3606, shared tables that have been prepared, and partially deployed, are identified, and a drop view statement is created. For example, the patch system 3506 can identify that the read-only table 3512 has been prepared as a new version of the read-only table 3403. A drop view statement can be prepared to drop a view to the read-only table 3403.
At 3608, a create view statement is computed, by reading, and including in the create view statement, a published target name of the read-only table 3512.
At 3610, the previously-computed drop view statement and create view statement are executed. The drop view statement drops a view in the first tenant database container 3404 to the read-only table 3403. Accordingly, there is now no arrow (e.g., arrow 3409 on prior figures) originating from the first tenant database container 3404 and ending at the read-only table 3403. The create view statement creates a new view 3612 to the read-only table 3512 (e.g., illustrated by an arrow 3613).
At 3614, the deployment tool 3516 deploys content to the first tenant database container 3404. For example, the deployment tool 3516 can deploy content from the deployment package 3508 to one or more writable tables included in the first tenant database container 3404, as illustrated by an updated writable table 3616. As another example, content from the deployment package 3508 can be deployed to a writable table that includes tenant-local content associated with the mixed table corresponding to the read-only table 3512, as illustrated by an updated writable table 3618. The deployment tool 3516 can determine content in the deployment package 3508 that has not been deployed to the shared database container 3402 and that is to be deployed to tenants.
At 3620, local table structure(s) and union view(s) are updated. For example, the union view 3412 of FIG. 34 can be updated to connect to the new view 3612 and the updated writable table 3618, as illustrated by an updated union view 3622. As another example, structure of the updated writable table 3616 and/or the updated writable table 3618 can be updated, according to data in the deployment package 3508.
After deployment for the first tenant is completed, downtime for the first tenant can be ended, with the first tenant database container 3404 successfully configured with deployed changes and updated connections to the read-only table 3512. The new view 3612, the arrow 3613, the updated writable table 3616, the updated writable table 3618, and the updated union view 3622 are illustrated in dashed lines to signify completion of the patch deployment for the first tenant database container 3404.
FIG. 37 illustrates a system 3700 for deploying a patch to a tenant database container. The system 3700 is a view of the system 3400 during a third set of deployment operations, for deploying the patch to the second tenant database container 3406. Before execution of the third set of operations, a downtime period can be initiated for the second tenant database container 3406. Deployment of the patch to the second tenant database container 3406 can include the same or similar operations as those performed for the first tenant database container 3404, as outlined in the flowchart 3602.
For example, a view in the second database container 3406 to the read-only table 3403 can be dropped (e.g., the arrow 3417 shown on prior figures is no longer included in FIG. 37). A new view 3702 can be created, to the read-only table 3512, as illustrated by an arrow 3704. Content can be deployed to writable tables, and writable table structures can be altered, as illustrated by an updated writable table 3706 and an updated writable table 3708. A union view can be updated to provide unified access to the new view 3702 and the updated writable table 3708, as illustrated by an updated union view 3710.
After deployment for the second tenant is completed, downtime for the second tenant can be ended, with the second tenant database container 3406 successfully configured with deployed changes and updated connections to the read-only table 3512. The new view 3702, the arrow 3704, the updated writable table 3706, the updated writable table 3708, and the updated union view 3710 are illustrated in dashed lines to signify completion of the patch deployment for the second tenant database container 3406.
FIG. 38 illustrates a system 3800 for performing finalization of a deployment. The system 3800 is a view of the system 3400 during a fourth set of deployment operations, for performing a finalization/clean up phase. The fourth set of operations are outlined in a flowchart 3802. At 3804, a determination is made as to whether the patch has been deployed to all registered tenants. At 3806, in response to determining that the patch has been deployed to all registered tenants, old shared table(s) that are no longer used are dropped. For example, the patch system 3506 can drop the read-only table 3403, since there are no longer any tenants connected to the read-only table 3403. At 3808, the name of the read-only table 3403 (e.g., “TAB #1”) is removed from a list of published shared tables.
FIG. 39 illustrates a system 3900 after deployment using a hidden preparation of a shared database container technique. The system 3900 is a view of the system 3400 after deployment to all tenants, including the first tenant database container 3404 and the second tenant database container 3406, has been completed. The shared database container 3402 includes the new version read-only table 3512 and no longer includes the prior version read-only table 3403. The first tenant database container 3404 and the second tenant database container 3406 include updated components, including connections to the new version read-only table 3512.
FIG. 40 is a flowchart of an example method 4000 for handling unsuccessful tenant deployments. It will be understood that method 4000 and related methods may be performed, for example, by any suitable system, environment, software, and hardware, or a combination of systems, environments, software, and hardware, as appropriate. For example, one or more of a client, a server, or other computing device can be used to execute method 4000 and related methods and obtain any data from the memory of a client, the server, or the other computing device. In some implementations, the method 4000 and related methods are executed by one or more components of the system 100 described above with respect to FIG. 1. For example, the method 4000 and related methods can be executed by the deployment tool 130 of FIG. 1.
At 4002, an unsuccessful deployment of a tenant is detected. For example, an error message may be received.
At 4004, the unsuccessful deployment is analyzed. For example, status information can be analyzed that indicates which portions of the deployment have successfully completed or have encountered errors.
At 4006, a determination is made as to whether a problem with the deployment can be solved immediately, or within a predetermined time window (e.g., one hour). The predetermined time window can be a maximum acceptable length of a downtime window for the tenant, for example.
At 4008, in response to determining that the problem can be resolved within the predetermined time window, the problem is resolved. For example, a new deployment package can be provided, and/or a system or process can be restarted.
At 4010, the deployment is restarted for the tenant. If a new deployment package has been provided, the new deployment package can be used in the deployment re-attempt.
At 4012, a determination is made as to whether the deployment re-attempt succeeded. If the deployment re-attempt did not succeed, the method 4000 can be re-executed (e.g., at 4002).
At 4014, in response to determining that the problem with the initial deployment cannot be resolved within the predetermined time window, the tenant is reverted to a state before the deployment.
At 4016, the tenant is provided to the customer at a release version of the tenant before the start of the deployment, so that the tenant can be online while the problem is being resolved.
At 4018, the problem is resolved while the tenant is online.
At 4020, the deployment is restarted for the tenant. Deployment success can be determined, and the method 4000 can be re-executed if the restart of the deployment did not succeed, as described above.
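The method can be sketched in Python as follows. Every helper callable is a hypothetical stand-in for the corresponding operation, and the one-hour window is only the example value mentioned above.

    def handle_failed_deployment(tenant, analyze, can_fix_within, fix_problem,
                                 revert_tenant, release_old_version,
                                 restart_deployment, window_seconds=3600):
        while True:
            analyze(tenant)                          # 4004: analyze status information
            if can_fix_within(tenant, window_seconds):
                fix_problem(tenant)                  # 4008: e.g., provide a new package
            else:
                revert_tenant(tenant)                # 4014: revert to pre-deployment state
                release_old_version(tenant)          # 4016: tenant online on the old release
                fix_problem(tenant)                  # 4018: resolve while the tenant is online
            if restart_deployment(tenant):           # 4010/4020: re-attempt the deployment
                return                               # 4012: re-attempt succeeded
            # otherwise the method is effectively re-executed (back to 4002)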
FIG. 41 illustrates a system 4100 for deploying multiple patches to a database system. Tenant-independent downtimes and deployments may result in different tenants connected to different versions at a given point in time, such as if deployments are re-attempted for one or more tenants or if given deployments are still ongoing. Tenants can have overlapping deployment timeframes, either due to planned individual upgrade windows or as a result of a problem and a revoke of a particular tenant deployment. An administrator may desire to deploy a patch to those tenants that are on a new version, even when some other tenants have not yet been upgraded to the new version. As another example, it may be desired to deploy a second patch and a first patch to a tenant who has not yet had the first patch deployed.
The system 4100 can support the deployment of multiple patches to tenants. For example, a deployment of a package “p1” to a cluster of a shared database container and N tenant database containers can be partially completed (e.g., M of the N tenants, M<N, do not have the p1 patch deployed). The system 4100 can support the deployment of a patch “p2”, even though the M tenants do not yet have the p1 patch. It may be desired to react, with a new patch, to a problem that is occurring in one or more tenants who already have the p1 patch, without needing to wait until all tenants have the p1 patch.
The system 4100 is an overview showing changes to the system 3400 after different sets of patches have been deployed to different tenant database containers. The shared database container 3402 includes the read-only table 3403 and the read-only table 3512 (e.g., a second version of the read-only table 3403). The first tenant associated with the first tenant database container 3404 has been upgraded to version two. The patch system 3506 has created a view 4102 to the version-two read-only table 3512, and the deployment tool 3516 has deployed content from a patch one deployment package 4104 to the first tenant database container 3404.
A problem may be detected in the second tenant database container 3406 before the patch one deployment package 4104 has been deployed to the second tenant database container 3406. A patch two deployment package 4106 has been created which includes changes to content, including to the TAB and TAB2 tables, to create a third software version to fix the detected problem. The patch system 3506 can clone the version-two read-only table 3512 to create a version-three read-only table 4108. The deployment tool 3516 can deploy content from the patch two deployment package 4106 to the version-three read-only table 4108 to deploy shared content included in the new patch.
The patch system 3506 can create a view 4110 to the version-three read-only table 4108. The deployment tool 3516 can deploy tenant content from the patch one deployment package 4104 and the patch two deployment package 4106 to complete the upgrade of the second tenant database container 3406 to the third software version. Later determinations can be made regarding whether the third software version has corrected the problem and whether to upgrade the first tenant database container 3404, at a later time, to the third software version. Further details of deploying multiple patches are described below with respect to FIGS. 42-48.
FIG. 42 illustrates a system 4200 for preparing a shared database container before deploying multiple patches to a database system. The system 4200 is a view of the system 3400 after a first set of deployment operations have been completed, for preparing for deploying a first patch to the first tenant. The first set of deployment operations are outlined in a flowchart 4202 and are similar to the deployment operations described above for the flowchart 3502.
At 4204, the patch system 3506 reads a deployment package 4206 to identify shared tables to which content is to be deployed. For example, the patch system 3506 can identify, based on data in the deployment package 4206, a mixed table named “TAB” 4208 for which a first patch is to be deployed to the read-only portion of the TAB mixed table in the shared database container 3402.
Although one table, (“TAB”) is used in this example, in general, the patch system 3506 can determine a set of tables in the shared container that will receive data from the deployment package 4206. For purposes of discussion of a general example below, this set of tables can be referred to as a set st_1. The patch system 3506 can determine a version number for each table in the set st_1, and can determine a maximum version number of those tables. The patch system 3506 can determine a target version number, v_target1=maximum version number in st_1+1.
At 4210, the patch system 3506 clones the read-only table 3403 to create a version-two read-only table 4212 that has the same structure as the read-only table 3403, and publishes a name of the version-two read-only table 4212. The version-two read-only table 4212 is named with a target name of “TAB #2”.
Continuing with the general example above, the patch system 3506 can, for each table in the set st_1, identify, in the shared database container 3402, a source table named <table-name>#<v_start>, where v_start is a highest version number of tables that have a same base name of <table-name> (for example, the shared database container 3402 may have tables named DOKTL #3, DOKTL #5, and DOKTL #11, so for a table_name of DOKTL, v_start is 11). The patch system 3506 can create a copy of each identified source table to make a respective target table, using a pattern of <table-name>#<v_target1>.
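In Python, the version bookkeeping of this general example might look as follows. The '#' suffix is written without a space, and the regular-expression parsing of table names is an implementation assumption.

    import re

    def next_target_version(existing_shared_tables, tables_in_package):
        # existing_shared_tables: names such as 'DOKTL#11' present in the
        # shared database container; tables_in_package: base names in set st_1.
        def v_start(base):
            pattern = re.compile(re.escape(base) + r"#(\d+)$")
            versions = [int(m.group(1)) for name in existing_shared_tables
                        if (m := pattern.match(name))]
            return max(versions, default=1)
        starts = {base: v_start(base) for base in tables_in_package}
        v_target = max(starts.values()) + 1
        clones = {f"{b}#{starts[b]}": f"{b}#{v_target}" for b in tables_in_package}
        return v_target, clones

    # Example from the text: DOKTL#3, DOKTL#5, and DOKTL#11 exist, so v_start is 11.
    print(next_target_version(["DOKTL#3", "DOKTL#5", "DOKTL#11", "TAB#1"],
                              ["DOKTL", "TAB"]))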
At 4214, the deployment tool 3516 deploys (e.g., imports) data from the deployment package 4206 to the version-two read-only table 4212, to deploy the first patch to the version-two read-only table 4212. The deployment tool 3516 can determine content that is to be deployed to the shared database container 3402 only (e.g., and not to tenant database containers). Continuing with the general example, the deployment tool 3516 can deploy content of the deployment package 4206 to each of the target tables <table-name>#<v_target1>, in the shared database container 3402.
At 4216, deployment status is stored (e.g., in an administrative table in the shared database container 3402 (not shown)). Deployment status can include an indication that the first patch to the TAB table is partially deployed (e.g., changes to the read-only sharable portion of the TAB mixed table have been made in the shared database container 3402, but the first patch has not yet been applied to the writable portion of the TAB mixed table).
At 4218, the name of the version-two read-only table 4212, with target name of “TAB #2”, is published, or otherwise made available, as the name of the new version of the read-only table 3403. A version number (e.g., version two) can also be published as a target (e.g., “go to”) version number, for later tenant deployments. For the general example, the number v_target1 can be passed to a central control tool as a goto-version for the deployment package 4206, for orchestration of future tenant deployments.
FIG. 43 illustrates a system 4300 for deploying multiple patches to a database system. The system 4300 is a view of the system 3400 after a second set of deployment operations, for deploying a first patch, have been completed during deployment of multiple patches to a database system. The second set of deployment operations are outlined in a flowchart 4302 and are similar to the operations described above for the flowchart 3602.
At 4304, a determination is made that content for the first patch from the deployment package 4206 has been prepared (e.g., deployed to as hidden) in the shared database container 3402. The patch system 3506 can retrieve a target version number v_target1 for use in deploying tenant content.
At 4306, shared tables that have been prepared, and partially deployed, are identified, and a drop view statement is created. For example, the patch system 3506 can identify that the version-two read-only table 4212 has been prepared as a new version of the read-only table 3403. A drop view statement can be prepared to drop a view to the read-only table 3403.
Continuing with the general example, the patch system 3506 can determine, in the deployment package 4206, a complement of what had been deployed from the deployment package 4206 to the shared database container. For example, the patch system 3506 can identify a set of all tables, st_1_all, that are to receive content from the deployment package 4206. The patch system 3506 can remove, from the set st_1_all, tables that have been deployed in the shared (e.g., the set st_1). The patch system 3506 can determine a remaining set, st_1_rest.
For determining drop view statements, the patch system 3506 can identify current views in the tenant database container 3404 that select from a shared table with a version smaller than v_target1. The patch system 3506 can prepare a drop statement for each of those identified current views.
At 4307, a create view statement is computed, by reading, and including in the create view statement, a published target name of the version-two read-only table 4212.
For the general example, the patch system 3506 can compute, for each of the current views that are to be dropped, a version of a table to be used in a new view, by determining a maximum number of the version of the table that is identical or smaller than v_target1. The patch system 3506 can prepare a create view statement using the determined version of the table to be used in the new view.
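A sketch of this selection and of the resulting statements, under the same naming assumptions as in the earlier sketches:

    def select_source_version(available_versions, v_target):
        # Highest shared-table version that is identical to or smaller than
        # the goto-version of the tenant deployment.
        return max(v for v in available_versions if v <= v_target)

    def tenant_view_statements(base_name, available_versions, v_target,
                               shared_schema="SHARED"):
        version = select_source_version(available_versions, v_target)
        return [
            f'DROP VIEW "{base_name}"',
            f'CREATE VIEW "{base_name}" AS SELECT * '
            f'FROM "{shared_schema}"."{base_name}#{version}"',
        ]

    # The shared container holds TAB#1 and TAB#2; a tenant deployed with
    # goto-version 2 is switched to read from TAB#2.
    print(tenant_view_statements("TAB", [1, 2], v_target=2))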
At 4308, the previously-computed drop view statement and create view statement are executed. The drop view statement drops a view in the first tenant database container 3404 to the read-only table 3403. Accordingly, there is now no arrow (e.g., arrow 3409 on prior figures) originating from the first tenant database container 3404 and ending at the read-only table 3403. The create view statement creates a new view 4310 to the version-two read-only table 4212 (e.g., illustrated by an arrow 4312).
At 4214, the deployment tool 3516 deploys content to the first tenant database container 3404. For example, the deployment tool 3516 can deploy content for the first patch from the deployment package 4206 to one or more writable tables included in the first tenant database container 3404, as illustrated by an updated writable table 4316. As another example, content from the deployment package 4206 for the first patch can be deployed to a writable table that includes tenant-local content associated with the mixed table corresponding to the version-two read-only table 4212, as illustrated by an updated writable table 4318. The deployment tool 3516 can determine content in the deployment package 4206 that has not been deployed to the shared database container 3402 and that is to be deployed to tenants. In the general example, the deployment tool can deploy content from the deployment package for the tables included in the remaining table set st_1_rest.
At 4220, local table structure(s) and union view(s) are updated. For example, the union view 3412 of FIG. 34 can be updated to connect to the new view 4310 and the updated writable table 4318, as illustrated by an updated union view 4322. As another example, structure of the updated writable table 4316 and/or the updated writable table 4318 can be updated, according to data in the deployment package 4206.
FIG. 44 illustrates a system 4400 for deploying multiple patches to a database system. The system 4400 is a view of the system 3400 after a third set of deployment operations, for preparing a shared database container for a second patch, have been completed during deployment of multiple patches to a database system. The third set of deployment operations are outlined in a flowchart 4402 and are similar to the operations described above for the flowchart 4202.
At 4404, the patch system 3506 reads a second patch deployment package 4406 to identify shared tables to which content is to be deployed. For the general example, the patch system 3506 can determine a set of tables in the shared container that will receive data from the deployment package 4406. This set of tables can be referred to as a set st_2. The patch system 3506 can determine a version number for each table in the set st_2, and can determine a maximum version number of those tables. The patch system 3506 can determine a target version number, v_target2=maximum version number in st_2+1.
At 4408, the patch system 3506 clones the version-two read-only table 4212 to create a version-three read-only table 4410 that has the same structure as the version-two read-only table 4212, and publishes a name of the version-three read-only table 4410. The version-three read-only table 4410 is named with a target name of “TAB #3”.
Continuing with the general example above, the patch system 3506 can, for each table in the set st_2, identify, in the shared database container 3402, a source table named <table-name>#<v_start>, where v_start is a highest version number of tables that have a same base name of <table-name>. The patch system 3506 can create a copy of each identified source table to make a respective target table, using a pattern of <table-name>#<v_target2>.
At 4412, the deployment tool 3516 deploys (e.g., imports) data from the second patch deployment package 4406 to the version-three read-only table 4410, to deploy the second patch to the version-three read-only table 4410. Continuing with the general example, the deployment tool 3516 can deploy content of the deployment package 4406 to each of the target tables <table-name>#<v_target2>, in the shared database container 3402.
At 4414, deployment status is stored, (e.g., in an administrative table in the shared database container 3402 (not shown)). Deployment status can include an indication that the second patch to the TAB table is partially deployed.
At 4416, the name of the version-three read-only table 4410, with target name of “TAB #3”, is published, or otherwise made available, as the name of the new version of the read-only table 3403. A version number (e.g., version three) can also be published as a target (e.g., “go to”) version number, for later tenant deployments. For the general example, the number v_target2 can be passed to a central control tool as a goto-version for the deployment package 4406, for orchestration of future tenant deployments of the second patch.
FIG. 45 illustrates a system 4500 for deploying multiple patches to a database system. The system 4500 is a view of the system 3400 after a fourth set of deployment operations, for deploying a first and second patch to the second tenant, have been completed during deployment of multiple patches to a database system. The fourth set of operations are similar to the operations described above in the flowchart 4302, but for deployment of both the first patch and the second patch to the second tenant database container 3406.
For example, a view from the second tenant database container 3406 to the read-only table 3403 (e.g., illustrated as the arrow 3417 on prior figures) has been dropped. A new view 4502 to the version-three read-only table 4410 (illustrated as an arrow 4503) has been created. Content has been deployed to an updated writable table 4504 and possibly to an updated writable table 4506, structure(s) of the updated writable table 4504 and/or the updated writable table 4506 have been updated, and the second tenant database container 3406 now includes an updated union view 4508.
For the general example, the patch system 3506 can retrieve a target version number v_target2 for use in deploying the deployment package 4206 and 4406 to the second tenant database container 3406. The patch system 3506 can determine a first complement of what had been deployed to the shared database container 3402 from the deployment package 4206, and a second complement of what had been deployed to the shared database container 3402 from the deployment package 4406, and deploy the first complement and the second complement to the second tenant database container 3406.
FIG. 46 illustrates a system 4600 for deploying multiple patches to a database system. The system 4600 is a view of the system 3400 after a fifth set of deployment operations, for deploying the second patch to the first tenant, have been completed during deployment of multiple patches to a database system. A determination can be made to deploy the second patch to the first tenant, for example, based on a determination that the second patch successfully resolves an earlier problem identified for the second tenant. The fifth set of operations are similar to the operations described above in the flowchart 4302, but for deployment of the second patch to the first tenant database container 3404, using the second patch deployment package 4406.
For example, a view from the first tenant database container 3404 to the version-two read-only table 4212 (e.g., illustrated as the arrow 4312 on prior figures) has been dropped. A new view 4602 to the version-three read-only table 4410 (illustrated as an arrow 4503) has been created. Content has been deployed to an updated writable table 4604 and possibly to an updated writable table 4606, structure(s) of the updated writable table 4604 and/or the updated writable table 4606 have been updated, and the first tenant database container 3404 now includes an updated union view 4608.
FIG. 47 illustrates a system 4700 for deploying multiple patches to a database system. The system 4700 is a view of the system 3400 after a sixth set of deployment operations, for finalizing a deployment, have been completed during deployment of multiple patches to a database system. The sixth set of deployment operations are outlined in a flowchart 4702.
At 4704, a determination is made as to whether all transports have been deployed to all registered tenants.
If all transports have been deployed to all registered tenants, at 4704, old shared tables that are no longer being used are dropped. For example, the patch system 3506 can drop the read-only table 3403 and the version-two read-only table 4212 since those tables are no longer connected to any tenants.
At 4706, old shared table names that were dropped (e.g., the read-only table 3403 and the version-two read-only table 4212) are removed from a list of published shared tables.
FIG. 48 illustrates a system 4800 after deployment of multiple patches to a database system has completed. The system 4800 is a view of the system 3400 after deployment of multiple patches to all tenants, including the first tenant database container 3404 and the second tenant database container 3406, has been completed. The shared database container 3402 no longer includes the read-only table 3403 and the version-two read-only table 4212, since all tenants are now connected to the version-three read-only table 4410.
Deploying Multiple Types of Changes
When a new version is deployed to a multi-tenancy database system, different types of changes can occur. For example, there can be one or more of the following types of changes: 1) change(s) in table structure; 2) change(s) in which tables are shared and which tables are not shared; or 3) change(s), for mixed tables, regarding which content values are shared and which content values are not shared. With the exchanged shared database container approach, the new shared database container includes any of these changes that are part of the changes for the new version. For example, the new shared database container includes tables that are already in the target structure and, if needed, an updated key pattern configuration, and the shared tables that are associated with mixed tables include content that adheres to the updated key pattern configuration.
A deployment tool can determine what changes are to be made in each tenant, to make each tenant compatible with the new shared database container. The deployment tool can use a combination of a structure change mechanism, a sharing type change mechanism, and a data split definition (key pattern) change mechanism, to re-configure tenants, including using these mechanisms in a prescribed order, depending on the types of changes needed for a particular upgrade, as described in more detail below.
Regarding changes in table structure definitions, for a new software version, table definitions can change due to requirements of the application. A deployment procedure can adjust table structures. As described above, in a multi-tenancy setup, a logical “single table” (e.g., from an application point of view) in a standard system can be replaced by a table and a view (e.g., for shared read-only tables) or two tables and a view (e.g., for mixed tables). A change in structure to the logical table may need to be carried through to a multiple-item construct (e.g., a table and a view, or two tables and a view) in the multi-tenancy system. When a shared database container is exchanged with a new version, the tables in the shared database container already have the new table structure. Tenants can be updated by adjusting structures of tables and views as part of tenant deployment. Adjustment can be necessary, since if table structures and/or view structures do not match, select statements may return wrong results or result in an error.
Regarding changes in sharing type, it can be desirable for a new version of the software to change table sharing. Having fewer tables shared than possible can increase total cost of ownership, so there may be a desire to identify additional tables to share over time. As described in more detail below, a change in sharing type can require moving data from a shared database container to tenant database container(s) and/or from tenant database container(s) to a shared database container. A change in sharing type can also result in the deletion of data from a tenant database container.
If an application expects a table to be of a certain table sharing type, having a different sharing type can lead to query errors upon data insert. For example, if the application wants to write a certain record, but the table is of sharing type read-only, the write statement will not be successful. In an upgrade, various different kinds of transitions between the sharing types read-only, split, and local can potentially be performed. As one example, an application can be configured to support persistency extensibility for key users in a multi-tenancy setup. A customer may, at a given point in time, desire to add custom fields to a table. The table to be changed may currently be a read-only or split table type. Extensions to tables (adding fields) may only be allowed for local table types. Accordingly, the table may need to change from a read-only or split table type to a local table type, in a next release.
Regarding a change in data split definition, two types of changes can occur. First, additional content may need to be shared. For example, an application (or an administrator or developer) can identify that certain content has never been modified by customers. A decision can be made to share these records so as to lower total cost of ownership and to speed up change deployments. Second, a determination may be made that certain data can no longer be shared. An application (or an administrator or developer) can determine that certain currently-shared entries need to be modifiable.
If a data split definition is changed, stored data may need to be adjusted (e.g., moved) to match the updated definition. The data split definition is a type of contract with an application, to let an application know which values of records can be written to and stored in tenant database containers. If a data split definition changes, data can be moved so that the data split definition consistently describes data stored in tenant database containers (and correspondingly, data stored in the shared database container, e.g., using the complement of the data split definition). Adjusting stored data to match updated data split definitions can avoid uniqueness constraint violations, data loss, and other issues.
FIG. 49 is a flowchart of an example method 4900 for applying different types of changes to a multi-tenancy database system. It will be understood that method 4900 and related methods may be performed, for example, by any suitable system, environment, software, and hardware, or a combination of systems, environments, software, and hardware, as appropriate. For example, one or more of a client, a server, or other computing device can be used to execute method 4900 and related methods and obtain any data from the memory of a client, the server, or the other computing device. In some implementations, the method 4900 and related methods are executed by one or more components of the system 100 described above with respect to FIG. 1. For example, the method 4900 and related methods can be executed by the change management system of FIG. 1.
At 4902, changes to structure definitions (S), sharing type definitions (T), and key patterns (K) are deployed, to a new shared database container, for a set of tables in a database system. The new shared database container includes tables already in a target structure and includes tables that are now to be shared as defined in the target version of the product (e.g., if a table is changed in sharing type, the new shared database container includes the shared part of the table, or the entire table if the table is now completely shared). Similarly, if a table is changed in split definition, a new version of the shared table in the new shared database container includes content consistent with the new split definition.
At 4904, a table in the set of tables is identified, for purposes of computing a set of actions to be executed for the table, for completing a tenant portion of the deployment.
At 4906, a determination is made as to whether a change to only one of a structure definition, a sharing type definition, or a key pattern is to be made for the identified table.
At 4908, if a change to only one of the structure definition, the sharing type definition, or the key pattern is to be made for the identified table, the one change is executed using a respective structure, sharing type, or key pattern change infrastructure. The sharing type change infrastructure is described below with respect to FIGS. 50-53. The key pattern change infrastructure is described below with respect to FIG. 54.
The structure change infrastructure, which can be part of or otherwise associated with a data dictionary, can include a mechanism for defining table and view structures. The structure change infrastructure can compute table create statements and table change operations, based on table structures and target definitions. The structure change infrastructure can compute view statements out of a table definition, e.g., a view that selects all fields of a table. The structure change infrastructure can also compute view statements for a view in one database container that selects data from another database container and another schema, with the name and schema definition of the other database container being taken as input parameters.
For a change in structure of a writable table, the structure change infrastructure can adjust the structure of the writable table in place, in the tenant database container. For a change in structure of a read-only table, the structure change infrastructure can drop, in the tenant database container, a view to the old table in the old shared container and create a view, in the tenant database container, to the new table in the new shared database container, with the new view having a new structure (as compared to the old, dropped view) that matches the structure of the new read-only table.
For a change in structure of a split table, the structure change infrastructure can: 1) drop, in the tenant database container, a view to the old read-only table portion of the split table in the old shared database container; 2) drop, in the tenant database container, the union view for the split table; 3) adjust the writable table portion of the split table in the tenant database container; and 4) create a new union view, in the tenant database container, with the union view having a new structure that is the union of the structure of a new read-only table portion of the split table in the shared database container and the adjusted writable table portion of the split table in the tenant database container.
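For illustration, the split-table case can be sketched in SQL roughly as follows, assuming a mixed table named TAB that gains a new field; the view name “/R/TAB”, the schema name SHARED_NEW for the new shared database container, the column names, and the ALTER TABLE syntax are assumptions made for the example rather than names or syntax mandated by the structure change infrastructure.

-- 1) drop the tenant view to the old read-only portion in the old shared container (assumed name "/R/TAB")
DROP VIEW "/R/TAB";
-- 2) drop the old union view
DROP VIEW "TAB";
-- 3) adjust the writable portion in place (ALTER TABLE syntax varies by database)
ALTER TABLE "/W/TAB" ADD ("NEW_FIELD" NVARCHAR(20));
-- 4) recreate the union view against the new read-only portion in the new shared container
CREATE VIEW "TAB" AS
  SELECT "KEY", "VAL", "NEW_FIELD" FROM "SHARED_NEW"."TAB"
  UNION ALL
  SELECT "KEY", "VAL", "NEW_FIELD" FROM "/W/TAB";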
At 4910, a determination is made as to whether a change to the structure definition and the sharing type definition is to be made to the identified table.
At 4912, if a change to the structure definition and the sharing type definition is to be made to the identified table, the change to the sharing type definition is executed using the sharing type change infrastructure including integration of the change to the structure definition by the sharing type change infrastructure.
At 4914, a determination is made as to whether a change to the structure definition and the key pattern is to be made to the identified table.
At 4916, if a change to the structure definition and the key pattern is to be made to the identified table, the structure definition is changed first using the structure change infrastructure.
At 4918, if a change to the structure definition and the key pattern is to be made to the identified table, the key pattern is changed using the key pattern change infrastructure after the structure definition has been changed by the structure change infrastructure.
At 4920, a determination is made as to whether there are more tables to process. If there are more tables to process, a next table is identified (e.g., at 4904, and processed). A combination of a change to both the sharing type and the key pattern will generally not happen at the same time for a given table, since a key pattern change would indicate that the sharing type of the table is split both before and after the table is modified.
FIG. 50 is a flowchart of an example method 5000 for changing a sharing type of one or more tables. It will be understood that method 5000 and related methods may be performed, for example, by any suitable system, environment, software, and hardware, or a combination of systems, environments, software, and hardware, as appropriate. For example, one or more of a client, a server, or other computing device can be used to execute method 5000 and related methods and obtain any data from the memory of a client, the server, or the other computing device. In some implementations, the method 5000 and related methods are executed by one or more components of the system 100 described above with respect to FIG. 1. For example, the method 5000 and related methods can be executed by the sharing type change infrastructure 140 of FIG. 1.
At 5002, a new shared database container is received with a new set of shared tables that has differences in sharing types for at least some of the new set of shared tables as compared to an old set of tables in an old shared container.
At 5004, a target definition of sharing types is received for the new set of tables. The target definition can include changes to sharing type for one or more tables. A desire to change a sharing type can occur, for example, if a determination is made that remote access of shared data by a tenant has unacceptable performance (e.g., a shared table may be used in a complex view). A desired change may be to make a currently-shared table a local table to improve performance. As another example and as described above, a decision can be made to share more tables than are currently being shared, or to allow for more extensions to tables, which can result in more tables being defined as local tables. A change in sharing type can require more, fewer, or new tables to be stored in the shared database container.
At 5006, a current sharing type is compared to a target sharing type for each table in a tenant container. Given the three sharing types of shared read-only, split, and local, six different types of sharing type changes can be identified, including: 1) from shared read-only to local (R→L); 2) from shared read-only to split (R→W); 3) from local to shared read-only (L→R); 4) from local to split (L→W); 5) from split to shared read-only (W→R); and 6) from split to local (W→L).
At 5008, table content and access logic is changed in the tenant container, for each table, to reflect the new sharing type of the respective table. Modifying table content and access logic can include: deleting content in the tenant and linking to content in the shared database container; copying content from the shared database container to the tenant database container and removing link(s) to the shared database container; splitting data by copying tenant data to a new table and creating a union view on tenant and shared data; and merging data by copying shared data to the tenant database container and removing a union view. Further, more-specific details of changing from one sharing type to another sharing type are described below with respect to FIGS. 51 to 53.
FIG. 51 is a table 5100 that illustrates a transition from a first table type to a second, different table type. For example, a table of type local 5102 (“L”) can be converted to a table of type shared read-only 5104 (“R”) or split 5106 (“W”, with split being another term for a mixed table). A table of type shared read-only 5108 can be converted to a table of type local 5110 or the type split 5106. A table of type split 5112 can be converted to a table of the type shared read-only 5104 or the type local 5110.
As indicated in a cell 5114, a conversion from the table type shared read-only 5108 to the table type split 5106 (e.g., R→W) can include processing operations of dropping a view to a shared table 5114 a, creating a “/W/TAB” tenant-local table 5114 b, and creating a union view 5114 c. For example, FIG. 52 illustrates a system 5200 which includes a first system 5202 that is at a first version and a second system 5204 that is at a second, later version. A tenant container 5206 included in the first system 5202 includes a read-only view 5208 on a shared table 5210 that is included in a shared container 5212, with the read-only view 5208 and the shared table 5210 being an implementation of the shared read-only table type 5108. A “:R” indicator in the “T1:R” label for the shared read-only table 5210 indicates that the shared read-only table 5210 is part of a shared read-only implementation.
As represented by the cell 5114, a conversion is performed to change an implementation of the shared read-only table type 5108 to an implementation of the split table type 5106 in the second system 5204. In the conversion from the first system 5202 to the second system 5204, the read-only view 5208 is dropped (e.g., processing operation 5114 a). For example, the read-only view 5208 is not included in a tenant container 5214 in the second system 5204. A writable table 5216 (e.g., “/W/T1”) is created in the tenant container 5214 (e.g., processing operation 5114 b). A union view 5218 is created in the tenant container 5214 for the writable table 5216 and a shared table 5220 in a shared container 5221 (e.g., processing operation 5114 c, with the shared table 5220 corresponding to the shared table 5210). The writable table 5216, the union view 5218, and the shared table 5220 are an implementation of the split table type 5106 in the second system 5204. A “:W” indicator in the “T1:W” label for the shared table 5220 and in the “/W/T1:W” label for the writable table 5216 respectively indicate that the shared table 5220 and the writable table 5216 are part of a split table implementation. If a table structure change is to be performed for the table as well as the sharing type change, the table structure change can be performed on the local table after the sharing type change has completed.
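A minimal SQL sketch of this R→W conversion for the table T1 of FIG. 52 might look as follows; the schema name SHARED_V2 for the new shared database container and the column list are assumptions made only for this example.

-- drop the read-only view to the old shared table (e.g., processing operation 5114 a)
DROP VIEW "T1";
-- create the tenant-local writable portion (e.g., processing operation 5114 b); columns are illustrative only
CREATE TABLE "/W/T1" ("KEY" NVARCHAR(10) PRIMARY KEY, "VAL" NVARCHAR(100));
-- create the union view over shared and tenant-local data (e.g., processing operation 5114 c)
CREATE VIEW "T1" AS
  SELECT "KEY", "VAL" FROM "SHARED_V2"."T1"
  UNION ALL
  SELECT "KEY", "VAL" FROM "/W/T1";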
FIG. 53 illustrates conversions between various table types. The conversions between table types include a conversion from the shared read-only type 5108 (“R”) to the split table type 5106 (“W”). For example, a prior-version system 5302 includes an implementation of a shared read-only type, as a read-only view 5304 in a tenant container 5306 and a shared table 5308 in a shared container 5310. A current-version system 5312 illustrates content of the prior-version system 5302 after a conversion from the shared read-only type 5108 (“R”) to the split table type 5106 (“W”). The read-only view 5304 has been dropped, a writable table 5314 has been created in a tenant container 5316 (the tenant container 5316 being a post-conversion illustration of the tenant container 5306), and a union view 5317 has been created in the tenant container 5316 to provide access to the writable table 5314 and a shared table 5318 in a shared container 5319 (with the shared table 5318 corresponding to the shared table 5308 and the shared container 5319 being a post-conversion illustration of the shared container 5310).
Referring again to FIG. 51, as indicated in a cell 5116, a conversion from the shared read-only table type 5108 to the local table type 5110 (e.g., R→L) can include processing operations of dropping a view 5116 a, creating a table 5116 b, and copying data from a shared table 5116 c. For example and as shown in FIG. 52, the tenant container 5206 includes a read-only view 5222 on a shared table 5224 that is included in the shared container 5212, with the read-only view 5222 and the shared table 5224 being an implementation of the shared read-only table type 5108 in the first system 5202. If a table structure change is to be performed for the table as well as the sharing type change, the table structure change can be performed on the local table after the sharing type change has completed.
As represented by the cell 5116, an implementation of the shared read-only table type 5108 is changed to be an implementation of the local table type 5110 in the second system 5204. In the conversion from the first system 5202 to the second system 5204, the read-only view 5222 is dropped (e.g., processing operation 5116 a). For example, the read-only view 5222 is not included in the tenant container 5214 in the second system 5204. A local table 5226 (e.g., “T2”) is created in the tenant container 5214 (e.g., processing operation 5116 b). Data is copied from the shared table 5224 to the created local table 5226. The local table 5226 is an implementation of the local table type 5110 in the second system 5204, as indicated by a “:L” in the “T2:L” label for the local table 5226. In some implementations, the shared table 5224 is dropped after data is copied to the local table 5226.
FIG. 53 includes another illustration of a conversion from the shared read-only table type 5108 (“R”) to the local table type 5110 (“L”). For example, a prior-version system 5320 includes an implementation of a shared read-only type, as a read-only view 5322 in a tenant container 5324 and a shared table 5326 in a shared container 5328. A current-version system 5330 illustrates content of the prior-version system 5320 after a conversion from the shared read-only type 5108 (“R”) to the local table type 5110 (“L”). The read-only view 5322 has been dropped, a local table 5331 has been created in a tenant container 5332 (the tenant container 5332 being a post-conversion illustration of the tenant container 5324), data has been copied from the shared table 5326 to the local table 5331 (e.g., as illustrated by an arrow 5333), and the shared table 5326 has been dropped after completion of the data copy operation (e.g., there is no shared table in a shared container 5334 that is a post-conversion illustration of the shared container 5328).
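The R→L conversion for the table T2 of FIG. 52 can be sketched similarly; the schema name SHARED_OLD for the old shared database container and the column list are assumptions made only for this example.

-- drop the read-only view (e.g., processing operation 5116 a)
DROP VIEW "T2";
-- create a tenant-local table (e.g., processing operation 5116 b); columns are illustrative only
CREATE TABLE "T2" ("KEY" NVARCHAR(10) PRIMARY KEY, "VAL" NVARCHAR(100));
-- copy the previously shared content into the tenant (e.g., processing operation 5116 c)
INSERT INTO "T2" SELECT "KEY", "VAL" FROM "SHARED_OLD"."T2";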
Referring again to FIG. 51, as indicated in a cell 5118, a conversion from the split table type 5112 to the shared read-only table type 5104 (e.g., W→R) can include processing operations of dropping a local table 5118 a, dropping a union view 5118 b, and creating a view to a shared table 5118 c. For example and as shown in FIG. 52, the tenant container 5206 includes a union view 5228 and a local table 5230 and the shared container 5212 includes a shared table 5232, with the union view 5228, the local table 5230, and the shared table 5232 being an implementation of the split table type 5112 in the first system 5202.
As represented by the cell 5118, an implementation of the split table type 5112 is changed to be an implementation of the shared read-only table type 5104 in the second system 5204. In the conversion from the first system 5202 to the second system 5204, the local table 5230 is dropped (e.g., processing operation 5118 a) and the union view 5228 is dropped (e.g., processing operation 5118 b). For example, the local table 5230 and the union view 5228 are not included in the tenant container 5214 in the second system 5204. In some implementations, if the local table 5230 includes content, data from the local table 5230 can be stored in a quarantine table for analysis and potential data retrieval after the deployment. A read-only view 5234 is created in the tenant container 5214 to a shared table 5236 included in the shared container 5221, with the shared table 5236 corresponding to the shared table 5232. The read-only view 5234 and the shared table 5236 are an implementation of the shared read-only table type 5104 in the second system 5204.
FIG. 53 includes another illustration of a conversion from the split table type 5112 (“W”) to the shared read-only table type 5104 (“R”). For example, a prior-version system 5336 includes an implementation of the split type, as a union view 5337 in a tenant container 5338 that provides access to a local table 5339 in the tenant container 5338 and a shared table 5340 in a shared container 5341. A current-version system 5342 illustrates content of the prior-version system 5336 after a conversion from the split table type 5112 (“W”) to the shared read-only table type 5104 (“R”). The local table 5339 and the union view 5337 have been dropped (e.g., the local table 5339 and the union view 5337 do not appear in a tenant container 5343 (the tenant container 5343 being a post-conversion illustration of the tenant container 5338). A read-only view 5344 has been created in the tenant container 5343 to provide access to a shared table 5345 in a shared container 5346 (with the shared table 5345 corresponding to the shared table 5340).
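A sketch of this W→R conversion in SQL, using a hypothetical split table named T3 with writable portion “/W/T3”, an assumed quarantine table “/QUAR/T3”, and an assumed schema SHARED_V2 for the new shared container:

-- optionally preserve tenant-local rows in a previously created quarantine table for later analysis
INSERT INTO "/QUAR/T3" SELECT * FROM "/W/T3";
-- drop the union view and the local portion (e.g., processing operations 5118 a and 5118 b)
DROP VIEW "T3";
DROP TABLE "/W/T3";
-- create the read-only view to the shared table (e.g., processing operation 5118 c)
CREATE VIEW "T3" AS SELECT * FROM "SHARED_V2"."T3";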
Referring again to FIG. 51, as indicated in a cell 5120, a conversion from the split table type 5112 to the local table type 5110 (e.g., W→L) can include processing operations of copying data from a shared table to a local table 5120 a and establishing one table (e.g., as a local table) 5120 b. For example and as shown in FIG. 52, the tenant container 5206 includes a union view 5238 and a writable table 5240 and the shared container 5212 includes a shared table 5242, with the union view 5238, the writable table 5240, and the shared table 5242 being an implementation of the split table type 5112 in the first system 5202. If a table structure change is to be performed for the table as well as the sharing type change, the table structure change can be performed on the local table after the sharing type change has completed.
As represented by the cell 5120, an implementation of the split table type 5112 can be changed to be an implementation of the local table type 5110 in the second system 5204. In the conversion from the first system 5202 to the second system 5204, data is copied from the shared table 5242 to the writable table 5240 (e.g., processing operation 5120 a). One table is then established as a local table in the tenant container 5214 (e.g., processing operation 5120 b). For example, the shared table 5242 and the union view 5238 can be dropped, so that the union view 5238 does not appear in the tenant container 5214 and the shared table 5242 does not appear in the shared container 5221 in the second system 5204. The writable table 5240 can be renamed, in the tenant container 5214, e.g., from an alternative name (e.g., “/W/T4”) to a “standard” name (e.g., “T4”), as shown for a writable table 5244. The writable table 5244 is an implementation of the local table type 5110 in the second system 5204.
FIG. 53 includes another illustration of a conversion from the split table type 5112 (“W”) to the local table type 5110 (“L”). For example, a prior-version system 5350 includes an implementation of the split type, as a union view 5351 in a tenant container 5352 that provides access to a local table 5353 in the tenant container 5352 and a shared table 5354 in a shared container 5355. A current-version system 5356 illustrates content of the prior-version system 5350 after a conversion from the split table type 5112 (“W”) to the local table type 5110 (“L”). The writable table 5353 has been renamed from “/W/T4” to “T4”, as illustrated by a local table 5357 in a tenant container 5358 (the tenant container 5358 being a post-conversion illustration of the tenant container 5352). Data has been copied from the shared table 5354 to the local table 5357, as illustrated by an arrow 5359. After the data has been copied, the shared table 5354 has been dropped. The union view 5351 has also been dropped. For example, the shared table 5354 does not appear in a shared container 5360 in the current-version system 5356 and the union view 5351 does not appear in the tenant container 5358.
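A sketch of the W→L conversion for the table T4 in SQL, assuming SHARED_OLD as the schema name of the old shared database container and a database that supports a RENAME TABLE statement (the rename syntax varies by database):

-- copy the shared rows into the tenant-local writable portion (e.g., processing operation 5120 a)
INSERT INTO "/W/T4" SELECT * FROM "SHARED_OLD"."T4";
-- drop the union view and establish the local table under its standard name (e.g., processing operation 5120 b)
DROP VIEW "T4";
RENAME TABLE "/W/T4" TO "T4";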
Referring again to FIG. 51, as indicated in a cell 5122, a conversion from the local table type 5102 to the shared read-only table type 5104 (e.g., L→R) can include processing operations of dropping a local table 5122 a and creating a view to a shared table 5122 b. For example and as shown in FIG. 52, the tenant container 5206 includes a local table 5246 that is an implementation of the local table type 5110 in the first system 5202.
As described in the cell 5122, the local table 5246 is dropped (e.g., processing operation 5122 a). For example, the local table 5246 is not included in the tenant container 5214 in the second system 5204. In some implementations, if the local table 5246 includes content, data from the local table 5246 can be stored in a quarantine table for analysis and potential data retrieval after the deployment. A read-only view 5248 is created to access a shared table 5250 in the shared container 5221. The shared table 5250 may already exist in the shared container 5221 (e.g., to service other tenants) or may be created in the shared container 5221. The read-only view 5248 and the shared table 5250 are an implementation of the shared read-only table type 5104 in the second system 5204.
FIG. 53 includes another illustration of a conversion from the local table type 5110 (“L”) to the shared read-only table type 5104 (“R”). For example, a prior-version system 5362 includes an implementation of the local type, as a local table 5364 in a tenant container 5365. A current-version system 5366 illustrates content of the prior-version system 5362 after a conversion from the local table type 5110 (“L”) to the shared read-only table type 5104 (“R”). The local table 5364 has been dropped (e.g., the local table 5364 does not appear in a tenant container 5367 in the current-version system 5366 (the tenant container 5367 being a post-conversion illustration of the tenant container 5365). A read-only view 5368 has been created in the tenant container 5367 to provide access to a shared table 5369 in a shared container 5370 included in the current-version system 5366. The shared table 5369 may have already existed in the shared container 5370 (e.g., to service other tenants) or have been created in the shared container 5370 as part of the conversion.
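A sketch of the L→R conversion in SQL, using a hypothetical local table named T5 and an assumed schema SHARED_V2 for the shared container; the quarantine copy is optional and the quarantine table name is an assumption:

-- optionally preserve local rows in a previously created quarantine table
INSERT INTO "/QUAR/T5" SELECT * FROM "T5";
-- drop the local table (e.g., processing operation 5122 a)
DROP TABLE "T5";
-- create the read-only view to the shared table (e.g., processing operation 5122 b)
CREATE VIEW "T5" AS SELECT * FROM "SHARED_V2"."T5";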
Referring again to FIG. 51, as indicated in a cell 5124, a conversion from the local table type 5102 to the split table type 5106 (e.g., L→W) can include processing operations of copying current data according to key patterns to a writable table 5124 a, dropping an old table 5124 b, and creating a union view 5124 c. For example and as shown in FIG. 52, the tenant container 5206 includes a local table 5252 that is an implementation of the local table type 5110 in the first system 5202.
As described in the cell 5124, data is copied from the local table 5252 to a writable table 5254 in the tenant container 5214 (e.g., processing operation 5124 a). For example, the table 5252 can be temporarily renamed (e.g., to “/OLD/T6”), the writable table 5254 can be created (e.g. with name “/W/T6”), and data can be copied from the local table 5252 to the writable table 5254 according to defined key patterns. After data has been copied, the local table 5252 can be dropped (e.g., processing operation 5124 b). A union view 5256 can be created for the writable table 5254 and a shared table 5258 in the shared container 5221 (e.g., processing operation 5124 c). The shared table 5258 may already exist in the shared container 5221 (e.g., to service other tenants) or may be created in the shared container 5221. The union view 5256, the shared table 5258, and the writable table 5254 are an implementation of the split table type 5106 in the second system 5204. If a table structure change is to be performed for the table as well as the sharing type change, the table structure change can be performed on the writable table 5254 before the union view 5256 is created.
FIG. 53 includes another illustration of a conversion from the local table type 5110 (“L”) to the split table type 5106 (“W”). For example, a prior-version system 5372 includes an implementation of the local type, as a local table 5374 in a tenant container 5376. A current-version system 5378 illustrates content of the prior-version system 5372 after a conversion from the local table type 5110 (“L”) to the split table type 5106 (“W”). Instead of copying data from the local table 5374 to a new writable table, as described above for the local table 5252 and the writable table 5254, the local table 5374 can be renamed (e.g., from “T6” to “/W/T6”), as illustrated by a writable table 5380 in a tenant container 5382 (the tenant container 5382 being a post-conversion illustration of the tenant container 5376). A shared table 5384 has been created in a shared container 5385 in the current-version system 5378. A union view 5386 has been created in the tenant container 5382, to provide access to the writable table 5380 and the shared table 5384.
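Following the rename variant of FIG. 53, the L→W conversion for the table T6 can be sketched as follows; the schema name SHARED_V2 for the shared container and the rename syntax are assumptions (the rename syntax varies by database):

-- reuse the local table as the writable portion by renaming it
RENAME TABLE "T6" TO "/W/T6";
-- create the union view over the shared and writable portions (e.g., processing operation 5124 c)
CREATE VIEW "T6" AS
  SELECT * FROM "SHARED_V2"."T6"
  UNION ALL
  SELECT * FROM "/W/T6";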
FIG. 54 illustrates a system 5400 for changing tenant keys (e.g., split definition) when exchanging a shared database container. The changing of tenant keys can be performed by a split definition change infrastructure. The split definition change infrastructure includes a mechanism to store split definitions per table in an active and an inactive state. The split definition change infrastructure can compute and execute DML (Data Manipulation Language) statements to copy data and delete data so that tables are in accordance with the split definition. As described above, a split definition (also referred to as a key pattern) can be defined using a WHERE clause, which defines which records can be stored in a local, writable table portion of a mixed table, in a tenant database container.
The system 5400 includes a version-one shared database container 5402 that includes a tenant keys table 5404 and a read-only table 5406 that is a read-only portion of a mixed table named “TAB”. The read-only table 5406 includes a record 5408 with a key that starts with “A” and a record 5410 with a key that starts with “Y”. The keys of the records 5408 and 5410 are in compliance with a WHERE clause 5411 included in the tenant keys table 5404. The WHERE clause 5411 defines keys that are allowed to be written for tenants, and a complement of the WHERE clause 5411 defines keys that are allowed to be stored in the read-only table 5406. The key values of “A*” and “Y*” for the records 5408 and 5410, respectively, match the complement of the WHERE clause 5411, i.e., “NOT (Key like 'B%' or Key like 'Z%')”. In other words, the keys for the records 5408 and 5410 do not start with either “B” or “Z”.
A version-one tenant database container 5412 for a first tenant includes a view 5413 to the tenant keys table 5404, a view 5414 to the read-only table 5406, a writable table 5416 that is a writable portion of the “TAB” mixed table, and a union view 5418 to the writable table 5416 and the read-only table 5406 (through the view 5414). The writable table 5416 includes a record 5420 with a key that starts with “B” (e.g., matching the WHERE clause 5411) and a record 5422 with a key that starts with “Z” (e.g., also matching the WHERE clause 5411).
During a deployment, developer(s) and/or administrator(s) may determine that the WHERE clause 5411 is now incorrect. For example, a determination may be made that records with keys that start with “Y” should no longer be shared (e.g., it may be desired that tenants are able to store local records with keys that start with “Y”). As another example, a determination may be made that records that start with “B” should now be shared (e.g., a determination may be made that tenant applications do not write local records that start with “B”).
A version-two shared database container 5424 has been prepared for deployment of a version two of the system 5400. The version-two shared database container 5424 includes an updated tenant keys table 5426 that includes an updated WHERE clause 5428, which indicates that tenants are allowed to write, to the mixed table named “TAB”, records that have keys that start with either “Y” or “Z”. An updated read-only table 5430 includes records to be shared for the mixed table named “TAB”. For example, the updated read-only table 5430 includes a record 5432 with a key starting with “A” (which may be a copy of the record 5408) and a record 5434 with a key starting with “B” (which may be a record that was previously provided to, and editable by, tenants, but is now to be read-only and shared). The records 5432 and 5434 have keys that match the complement of the updated WHERE clause 5428. The record 5434 may be the same as or different from the record 5420. For example, the first tenant may have modified the record 5420 after the record 5420 was first provided to the first tenant.
An upgrade process can be used to upgrade tenant database containers to version two of the system 5400. For example, a version-two tenant database container 5440 has been upgraded to version two and is now connected to the version-two shared database container 5424. The version-two tenant database container 5440 includes a view 5442 to the updated tenant keys table 5426, an updated writable table 5444, an updated view 5446 to the updated read-only table 5430, and an updated union view 5448. The updated writable table 5444 includes a record 5450 with a key starting with “Y” (e.g., compatible with the updated WHERE clause 5428) and a record 5452 with a key starting with “Z” (e.g., also compatible with the updated WHERE clause 5428).
For purposes of the discussion below, assume that the contents of the version-two tenant database container 5440 were the same as the contents of the version-one tenant database container 5412 before the version-two tenant database container 5440 was upgraded to version two; accordingly, the version-one tenant database container 5412 can serve, for purposes of discussion, as a pre-deployment view of the version-two tenant database container 5440.
A deployment tool can determine what to change in the version-one tenant database container 5412 during an upgrade of the version-one tenant database container 5412 to version two. The deployment tool can identify records in the read-only table 5406 that are to be moved from the read-only table 5406 to the writable table 5416 (e.g., records that used to be shared and that are no longer to be shared). The deployment tool can execute the following insert statement, to move records from the read-only table 5406 to the writable table 5416 (assuming the name of the shared database container 5402 is “shared_old” and that “<new_where_condition>” is the updated WHERE clause 5428): INSERT INTO /W/TAB (SELECT * FROM shared_old.TAB WHERE (<new_where_condition>)). The insert statement can result in the moving of the record 5410 to the writable table 5416 (e.g., as illustrated in the updated writable table 5444 by the record 5450), since the key “Y*” of the record 5410 matches the updated WHERE clause 5428.
The deployment tool can identify records to delete in the writable table 5416 (e.g., records that are no longer allowed to be stored locally as editable records by the first tenant). For example, the deployment tool can execute the following statement to delete records from the writable table 5416: DELETE FROM /W/TAB WHERE NOT (<new_where_condition>). The delete statement can result in deletion of the record 5420 from the writable table 5416, since the key “B*” of the record 5420 does not match the updated WHERE clause 5428. For example, a similar record may have been deleted from the updated writable table 5444 during the upgrade of the updated writable table 5444 (e.g., the updated writable table 5444 does not include any records that start with “B”). The record 5420 can be moved to a quarantine location upon being deleted.
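Putting the two statements together for the tenant of FIG. 54, and substituting the updated WHERE clause 5428 for <new_where_condition>, the deployment step can be sketched as follows; SHARED_OLD is the assumed schema name of the version-one shared database container (referred to as “shared_old” above), and the quoted identifiers and column name KEY are assumptions made for the example:

-- move records that are no longer shared from the old shared container into the tenant-local portion
INSERT INTO "/W/TAB"
  SELECT * FROM "SHARED_OLD"."TAB"
  WHERE ("KEY" LIKE 'Y%' OR "KEY" LIKE 'Z%');
-- delete records that are now shared from the tenant-local portion (optionally quarantining them first)
DELETE FROM "/W/TAB"
  WHERE NOT ("KEY" LIKE 'Y%' OR "KEY" LIKE 'Z%');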
Example Methods
FIG. 55 is a flowchart of an example method 5500 for redirecting a write query. It will be understood that method 5500 and related methods may be performed, for example, by any suitable system, environment, software, and hardware, or a combination of systems, environments, software, and hardware, as appropriate. For example, one or more of a client, a server, or other computing device can be used to execute method 5500 and related methods and obtain any data from the memory of a client, the server, or the other computing device. In some implementations, the method 5500 and related methods are executed by one or more components of the system 100 described above with respect to FIG. 1. For example, the method 5500 and related methods can be executed by the write redirecter 128 of FIG. 1.
At 5502, access is provided to at least one application to a database system. The at least one application can include one or more tenant applications. Access can be provided by a database interface, for example.
At 5504, a first query is received from the at least one application. The first query can be to retrieve, add, or edit data in the database system.
At 5506, a determination is made that the first query is associated with a union view that provides unified read-only access to a read-only table included in a shared database container and a writable table in a tenant database container, in the database system.
At 5508, a determination is made as to whether the first query is a read query. A read query retrieves but does not modify or add data to the database system.
At 5510, in response to determining that the first query is a read query, the first query is processed using the union view. Processing the first query using the union view can include retrieving data from one or both of the read-only table and the writable table.
At 5512, in response to determining that the first query is not a read query (e.g., the first query is a write query), the first query is modified to use the writable table, rather than the union view. The write query is thus redirected to use the writable table rather than the read-only union view.
At 5514, the first query is processed using the writable table. Processing the first query using the writable table can include modifying or adding data to the writable table.
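As a concrete illustration, a write statement issued against the union view can be rewritten to target the writable table before execution; the table name TAB, the writable table name “/W/TAB”, the column names, and the values are assumptions made only for this example.

-- statement as issued by the application, naming the union view
INSERT INTO "TAB" ("KEY", "VAL") VALUES ('Z100', 'tenant data');
-- statement as actually executed after redirection, naming the writable table
INSERT INTO "/W/TAB" ("KEY", "VAL") VALUES ('Z100', 'tenant data');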
FIG. 56 is a flowchart of an example method 5600 for key pattern management. It will be understood that method 5600 and related methods may be performed, for example, by any suitable system, environment, software, and hardware, or a combination of systems, environments, software, and hardware, as appropriate. For example, one or more of a client, a server, or other computing device can be used to execute method 5600 and related methods and obtain any data from the memory of a client, the server, or the other computing device. In some implementations, the method 5600 and related methods are executed by one or more components of the system 100 described above with respect to FIG. 1. For example, the method 5600 and related methods can be executed by the constraint enforcement system 126 of FIG. 1.
At 5602, access is provided to at least one application to a database system.
At 5604, at least one query for a logical database table is received from the at least one application. The logical database table is represented in the database system as a first physical database table that includes records of the logical database table that are allowed to be written by the at least one application and a second physical database table that includes records of the logical database table that are allowed to be read but not written by the at least one application.
At 5606, a determination is made that the at least one query is a write query. The write query is configured to modify or add data to the database system.
At 5608, a determination is made as to whether the at least one query complies with a key pattern configuration. The key pattern configuration describes keys of records that are included in or may be included in (e.g., added to) the first physical database table.
At 5610, in response to determining that the at least one query complies with the key pattern definition, the write query is redirected to the first physical database table. Redirecting can include modifying the write query to use the first physical database table rather than the logical database table.
At 5612, in response to determining that the at least one query does not comply with the key pattern configuration, the write query is rejected. Rejecting the write query can prevent records being added to the first physical database table that do not comply with the key pattern configuration.
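One way to picture the compliance check at 5608 is to evaluate the key pattern's WHERE clause against the key values carried by the write query. A minimal sketch, assuming the key pattern “KEY LIKE 'Y%' OR KEY LIKE 'Z%'” of FIG. 54 and a one-row helper table such as SAP HANA's DUMMY:

-- returns 1: the candidate key 'Z100' complies with the key pattern, so the write is redirected
SELECT COUNT(*) FROM DUMMY WHERE 'Z100' LIKE 'Y%' OR 'Z100' LIKE 'Z%';
-- returns 0: a candidate key such as 'B200' does not comply, so the corresponding write query is rejected
SELECT COUNT(*) FROM DUMMY WHERE 'B200' LIKE 'Y%' OR 'B200' LIKE 'Z%';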
FIG. 57 is a flowchart of an example method 5700 for transitioning between system sharing types. It will be understood that method 5700 and related methods may be performed, for example, by any suitable system, environment, software, and hardware, or a combination of systems, environments, software, and hardware, as appropriate. For example, one or more of a client, a server, or other computing device can be used to execute method 5700 and related methods and obtain any data from the memory of a client, the server, or the other computing device. In some implementations, the method 5700 and related methods are executed by one or more components of the system 100 described above with respect to FIG. 1. For example, the method 5700 and related methods can be executed by the system sharing type modifier 148 of FIG. 1.
At 5702, a request is received to convert a database system from a standard system setup to a shared system setup. The database system includes a tenant database container. The tenant database container includes, before conversion of the database system from the standard system setup to the shared system setup: a read-only table for storing read-only data that is read but not written by application(s); a first writable table for storing writable data that is read and written by the application(s); and a mixed table for storing read-only mixed data that is read but not written by the application(s) and writable mixed data that is read and written by the application(s). Although a single read-only table, a single writable table, and a single mixed table are described, the tenant database container can include any combination of tables of various types.
At 5704, a shared database container is created, for storing shared content used by multiple tenants.
At 5706, a first shared table is created in the shared database container, for storing the read-only data that is read but not written by applications.
At 5708, data is copied from the read-only table to the first shared table.
At 5710, the read-only table is dropped from the tenant database container.
At 5712, a read-only view is created in the tenant database container, for providing read access to the first shared table.
At 5714, a second shared table is created in the shared database container, for storing the read-only mixed data.
At 5716, the read-only mixed data is copied from the mixed table to the second shared table.
At 5718, the read-only mixed data is deleted from the mixed table.
At 5720, the mixed table is renamed to be a second writable table.
At 5722, a union view is created to provide unified access to the second shared table and the second writable table.
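The sequence at 5704 through 5722 can be sketched in SQL roughly as follows, assuming the statements for tenant objects run while connected to the tenant database container, SHARED as the assumed schema name of the new shared database container, TABR and TAB as hypothetical read-only and mixed tables, KEY LIKE 'Z%' as an assumed key pattern for tenant-writable mixed data, and illustrative column definitions; the rename syntax varies by database.

-- 5706 through 5712: share the read-only table TABR
CREATE TABLE "SHARED"."TABR" ("KEY" NVARCHAR(10) PRIMARY KEY, "VAL" NVARCHAR(100));
INSERT INTO "SHARED"."TABR" SELECT * FROM "TABR";
DROP TABLE "TABR";
CREATE VIEW "TABR" AS SELECT * FROM "SHARED"."TABR";
-- 5714 through 5718: move the read-only mixed data of TAB into the shared container
CREATE TABLE "SHARED"."TAB" ("KEY" NVARCHAR(10) PRIMARY KEY, "VAL" NVARCHAR(100));
INSERT INTO "SHARED"."TAB" SELECT * FROM "TAB" WHERE NOT ("KEY" LIKE 'Z%');
DELETE FROM "TAB" WHERE NOT ("KEY" LIKE 'Z%');
-- 5720 and 5722: rename the remainder as the writable portion and create the union view
RENAME TABLE "TAB" TO "/W/TAB";
CREATE VIEW "TAB" AS
  SELECT * FROM "SHARED"."TAB"
  UNION ALL
  SELECT * FROM "/W/TAB";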
FIG. 58 is a flowchart of an example method 5800 for exchanging a shared database container. It will be understood that method 5800 and related methods may be performed, for example, by any suitable system, environment, software, and hardware, or a combination of systems, environments, software, and hardware, as appropriate. For example, one or more of a client, a server, or other computing device can be used to execute method 5800 and related methods and obtain any data from the memory of a client, the server, or the other computing device. In some implementations, the method 5800 and related methods are executed by one or more components of the system 100 described above with respect to FIG. 1. For example, the method 5800 and related methods can be executed by the deployment tool 130 of FIG. 1.
At 5802, a request to deploy a new version of a database system is received.
At 5804, a deployment package is received that includes data for the new version of the database system.
At 5806, a next-version shared database container is installed in the database system in parallel to a current-version shared database container.
At 5808, the new version is deployed to each of multiple tenant database containers. Deploying the new version to each of the multiple tenant database containers includes individually linking, at 5810, each of the multiple tenant database containers to the next-version shared database container. The linking can include dropping at least one view in each respective tenant database container to shared content in the current-version shared database container and adding at least one new view in each respective tenant database container to the updated shared content in the next-version shared database container.
Deploying the new version to each of the multiple tenant database containers includes, at 5812, deploying, from the deployment package, changed local content to each tenant database container.
At 5814, the current-version shared database container is dropped, after deployment to each of the multiple tenant database containers has completed.
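The per-tenant relinking at 5810 amounts to swapping views from the current-version shared schema to the next-version shared schema; a minimal sketch for one shared table, with the shared schema names SHARED_V1 and SHARED_V2 and the table name T1 assumed only for this example:

-- drop the view that still points at the current-version shared database container (SHARED_V1)
DROP VIEW "T1";
-- recreate the view against the next-version shared database container
CREATE VIEW "T1" AS SELECT * FROM "SHARED_V2"."T1";
-- repeated for each shared table and executed separately for each tenant database container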
FIG. 59 is a flowchart of an example method 5900 for patching a shared database container. It will be understood that method 5900 and related methods may be performed, for example, by any suitable system, environment, software, and hardware, or a combination of systems, environments, software, and hardware, as appropriate. For example, one or more of a client, a server, or other computing device can be used to execute method 5900 and related methods and obtain any data from the memory of a client, the server, or the other computing device. In some implementations, the method 5900 and related methods are executed by one or more components of the system 100 described above with respect to FIG. 1. For example, the method 5900 and related methods can be executed by the patching system 146 of FIG. 1.
At 5902, a first deployment package is received for an upgrade of a database system to a second software version. The upgrade can include deployment to a shared database container and one or more tenant database containers.
At 5904, shared objects that are completely stored in the shared database container are identified, from information in the deployment package.
At 5906, first shared content for the shared objects in the deployment package is determined.
At 5908, partially-shared objects that have a shared portion in the shared database container and a tenant portion in the tenant database container are identified.
At 5910, second shared content for the partially-shared objects in the deployment package is determined.
At 5912, the determined first shared content and the determined second shared content is deployed to the shared database container as deployed shared content.
At 5914, first local content for the partially-shared objects in the deployment package is determined.
At 5916, the first local content is deployed to respective tenant database containers.
At 5918, local objects that do not store data in the shared database container are identified.
At 5920, second local content for the local objects in the deployment package is identified.
At 5922, the second local content is deployed to the respective tenant database containers.
FIG. 60 is a flowchart of an example method 6000 for deploying different types of changes to a database system. It will be understood that method 6000 and related methods may be performed, for example, by any suitable system, environment, software, and hardware, or a combination of systems, environments, software, and hardware, as appropriate. For example, one or more of a client, a server, or other computing device can be used to execute method 6000 and related methods and obtain any data from the memory of a client, the server, or the other computing device. In some implementations, the method 6000 and related methods are executed by one or more components of the system 100 described above with respect to FIG. 1. For example, the method 6000 and related methods can be executed by the change management system 134 of FIG. 1.
At 6002, a table structure and a table sharing type are determined for each table in a current-version shared database container.
At 6004, a table structure and a table sharing type are determined for each table in a next-version shared database container.
At 6006, the table structures of the tables in the current-version shared database container are compared to the table structures of the tables in the next-version shared database container to identify table structure differences.
At 6008, the table sharing types of the tables in the current-version shared database container are compared to the table sharing types of the tables in the next-version shared database container to identify table sharing type differences.
At 6010, a current key pattern configuration associated with the current-version shared database container is compared to an updated key pattern configuration associated with the next-version shared database container to identify key pattern configuration differences.
At 6012, each table in at least one tenant database container is upgraded to a next version based on the table structure differences, the table sharing type differences, and the key pattern configuration differences.
FIG. 61 is a flowchart of an example method 6100 for changing key pattern definitions. It will be understood that method 6100 and related methods may be performed, for example, by any suitable system, environment, software, and hardware, or a combination of systems, environments, software, and hardware, as appropriate. For example, one or more of a client, a server, or other computing device can be used to execute method 6100 and related methods and obtain any data from the memory of a client, the server, or the other computing device. In some implementations, the method 6100 and related methods are executed by one or more components of the system 100 described above with respect to FIG. 1. For example, the method 6100 and related methods can be executed by the split definition change infrastructure of FIG. 1.
At 6102, a new shared database container that includes a new key pattern configuration is received. The new shared database container is a new version of a current shared database container for storing data accessible to multiple tenants. The new key pattern configuration is a new version of a current key pattern configuration for a logical split table. The logical split table includes a read-only-portion table in the current shared database container and a writable portion in a tenant database container. The current key pattern configuration describes keys of records included in the writable-portion. The new shared database container includes an updated read-only-portion for the logical split table that includes records that match a complement of the new key pattern configuration.
At 6104, records that match the new key pattern configuration are identified in the read-only-portion of the logical split table in the current shared database container.
At 6106, the identified records are moved, from the read-only-portion of the logical split table in the current shared database container to the writable-portion of the logical split table included in the tenant database container.
At 6108, records that do not match the new key pattern configuration are deleted from the writable-portion of the logical split table in the tenant database container.
The preceding figures and accompanying description illustrate example processes and computer-implementable techniques. But system 100 (or its software or other components) contemplates using, implementing, or executing any suitable technique for performing these and other tasks. It will be understood that these processes are for illustration purposes only and that the described or similar techniques may be performed at any appropriate time, including concurrently, individually, or in combination. In addition, many of the operations in these processes may take place simultaneously, concurrently, and/or in different orders than as shown. Moreover, system 100 may use processes with additional operations, fewer operations, and/or different operations, so long as the methods remain appropriate.
In other words, although this disclosure has been described in terms of certain embodiments and generally associated methods, alterations and permutations of these embodiments and methods will be apparent to those skilled in the art. Accordingly, the above description of example embodiments does not define or constrain this disclosure. Other changes, substitutions, and alterations are also possible without departing from the spirit and scope of this disclosure.

Claims (20)

What is claimed is:
1. A system comprising:
one or more computers;
a non-transitory computer-readable medium coupled to the one or more computers, the computer readable medium having instructions stored thereon which, when executed by the one or more computers, cause the one or more computers to perform operations;
wherein the computer-readable medium comprises:
tables of a database system, including a first physical database table and a second physical database table that each store different records of a same logical database table, wherein the first physical database table includes records of the logical database table that are allowed to be written by at least one application that submits a query that includes a name of the logical database table and wherein the second physical database table includes records of the logical database table that are allowed to be read but not written by the at least one application; and
a key pattern configuration that describes valid values of keys of records that are allowed to be included in the first physical database table, wherein records of the logical database table included in the second physical database table do not have key values that match the key pattern configuration; and
wherein the operations comprise:
providing access to the at least one application to the database system;
receiving the at least one query for the logical database table from the at least one application;
determining whether the at least one query is a write query;
in response to determining that the at least one query is a write query, determining whether the at least one query complies with the key pattern configuration;
redirecting the write query to the first physical database table in response to determining that the at least one query is a write query that matches the valid values described by the key pattern configuration; and
rejecting the write query in response to determining that the at least one query is a write query that does not match the valid values described by the key pattern configuration.
2. The system of claim 1, wherein a complement of the key pattern configuration describes keys of records that can be included in the second physical database table.
3. The system of claim 1, wherein the database system comprises a tenant database container accessible by a first tenant and not accessible by a second tenant, and wherein the first physical database table is included in the tenant database container.
4. The system of claim 3, wherein the database system comprises a shared database container that is accessible by the first tenant and the second tenant and wherein the second physical database table is included in the shared database container.
5. The system of claim 4, wherein the key pattern configuration is included in the shared database container.
6. The system of claim 3, wherein the tenant database container includes a union view that provides unified access to the first physical database table and the second physical database table.
7. The system of claim 3, wherein the second physical database table includes data shared by the first tenant and the second tenant.
8. The system of claim 1, wherein the operations comprise:
receiving a deployment file that includes content to be deployed to the logical database table;
determining first entries of the deployment file that match the key pattern configuration;
adding the first entries to the first physical database table;
determining second entries of the deployment file that do not match the key pattern configuration; and
adding the second entries of the deployment file that do not match the key pattern configuration to the second physical table.
9. The system of claim 1, wherein the first physical database table includes data specific to the first tenant.
10. A method comprising:
providing access to at least one application to a database system, the at least one application configured to submit at least one query that includes a name of a logical database table;
receiving the at least one query for the logical database table from the at least one application;
determining whether the at least one query is a write query;
in response to determining that the at least one query is a write query, determining whether the at least one query complies with a key pattern configuration, wherein the key pattern configuration describes valid values of keys of records that are allowed to be included in a first physical database table, the first physical table including records of the logical database table that are allowed to be written by the at least one application, wherein records of the logical database table included in a second physical database table do not have key values that match the key pattern configuration and wherein the first physical database table and the second physical database table each store different records of the logical database table;
redirecting the write query to the first physical database table in response to determining that the at least one query is a write query that matches the valid values described by the key pattern configuration; and
rejecting the write query in response to determining that the at least one query is a write query that does not match the valid values described by the key pattern configuration.
11. The method of claim 10, wherein a complement of the key pattern configuration describes keys of records that can be included in the second physical database table.
12. The method of claim 10, wherein the database system comprises a tenant database container accessible by a first tenant and not accessible by a second tenant, and wherein the first physical database table is included in the tenant database container.
13. The method of claim 12, wherein the database system comprises a shared database container that is accessible by the first tenant and the second tenant and wherein the second physical database table is included in the shared database container.
14. The method of claim 13, wherein the key pattern configuration is included in the shared database container.
15. The method of claim 12, wherein the tenant database container includes a union view that provides unified access to the first physical database table and the second physical database table.
16. The method of claim 12, wherein the second physical database table includes data shared by the first tenant and the second tenant.
17. The method of claim 10, further comprising:
receiving a deployment file that includes content to be deployed to the logical database table;
determining first entries of the deployment file that match the key pattern configuration;
adding the first entries to the first physical database table;
determining second entries of the deployment file that do not match the key pattern configuration; and
adding the second entries of the deployment file that do not match the key pattern configuration to the second physical table.
18. The method of claim 10, wherein the first physical database table includes data specific to the first tenant.
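
Claims 8 and 17 describe splitting a deployment file by the key pattern configuration: entries whose keys match are added to the first (tenant) physical table, all other entries to the second (shared) physical table. Below is a minimal, self-contained Python sketch of that partitioning step under the same assumed regular-expression pattern; the function name, the entry format, and the example keys are hypothetical.

```python
import re
from typing import Dict, Iterable, List, Tuple

Entry = Tuple[str, Dict]  # (record key, record payload)


def split_deployment_entries(
    entries: Iterable[Entry], key_pattern: str
) -> Tuple[List[Entry], List[Entry]]:
    """Partition deployment entries: keys matching the key pattern configuration
    are destined for the tenant-local table, all others for the shared table."""
    allowed = re.compile(key_pattern)
    tenant_entries: List[Entry] = []
    shared_entries: List[Entry] = []
    for key, record in entries:
        if allowed.fullmatch(key):
            tenant_entries.append((key, record))
        else:
            shared_entries.append((key, record))
    return tenant_entries, shared_entries


deployment_file = [
    ("ZCUSTOM_ENTRY", {"value": 1}),  # matches the assumed pattern [ZY].*
    ("SAP_DEFAULT", {"value": 2}),    # does not match
]
to_tenant, to_shared = split_deployment_entries(deployment_file, r"[ZY].*")
# to_tenant -> insert into the tenant container's writable table
# to_shared -> insert into the shared container's read-only table
```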
19. The computer-readable media of claim 10, wherein a complement of the key pattern configuration describes keys of records that can be included in the second physical database table.
20. One or more non-transitory computer-readable media storing instructions which, when executed by at least one processor, cause the at least one processor to perform operations comprising:
providing, to at least one application, access to a database system, the at least one application configured to submit at least one query that includes a name of a logical database table;
receiving the at least one query for the logical database table from the at least one application;
determining whether the at least one query is a write query;
in response to determining that the at least one query is a write query, determining whether the at least one query complies with a key pattern configuration, wherein the key pattern configuration describes valid values of keys of records that are allowed to be included in a first physical database table, the first physical table including records of the logical database table that are allowed to be written by the at least one application, wherein records of the logical database table included in a second physical database table do not have key values that match the key pattern configuration and wherein the first physical database table and the second physical database table each store different records of the logical database table;
redirecting the write query to the first physical database table in response to determining that the at least one query is a write query that matches the valid values described by the key pattern configuration; and
rejecting the write query in response to determining that the at least one query is a write query that does not match the valid values described by the key pattern configuration.
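
Claims 4-6 and 13-15 describe a shared database container holding the read-only physical table, a tenant database container holding the writable physical table, and a union view in the tenant container that presents both under the logical table name. The sketch below shows, under assumed naming conventions, the kind of DDL a provisioning step might generate for such a view; the _R/_W suffixes and the schema names SHARED and TENANT_001 are illustrative and not taken from the patent. UNION ALL suffices because, per the claims, the two physical tables store disjoint sets of records.

```python
def union_view_ddl(logical_table: str, shared_schema: str, tenant_schema: str) -> str:
    """Assemble DDL for a union view that carries the logical table name and
    combines the shared (read-only) table with the tenant-local (writable) one.
    The _R/_W suffixes and the schema names are illustrative only."""
    return (
        f'CREATE VIEW "{tenant_schema}"."{logical_table}" AS\n'
        f'  SELECT * FROM "{shared_schema}"."{logical_table}_R"\n'
        f'  UNION ALL\n'
        f'  SELECT * FROM "{tenant_schema}"."{logical_table}_W"'
    )


print(union_view_ddl("TAB", shared_schema="SHARED", tenant_schema="TENANT_001"))
```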
US15/794,368 2017-10-26 2017-10-26 Key pattern management in multi-tenancy database systems Active 2038-05-17 US10740318B2 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US15/794,368 US10740318B2 (en) 2017-10-26 2017-10-26 Key pattern management in multi-tenancy database systems
EP17001948.3A EP3477503A1 (en) 2017-10-26 2017-11-29 Key pattern management in multi-tenancy database systems
CN201711270288.4A CN110019215B (en) 2017-10-26 2017-12-05 Key pattern management in a multi-tenancy database system
US16/860,532 US11561956B2 (en) 2017-10-26 2020-04-28 Key pattern management in multi-tenancy database systems

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US15/794,368 US10740318B2 (en) 2017-10-26 2017-10-26 Key pattern management in multi-tenancy database systems

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/860,532 Continuation US11561956B2 (en) 2017-10-26 2020-04-28 Key pattern management in multi-tenancy database systems

Publications (2)

Publication Number Publication Date
US20190129988A1 (en) 2019-05-02
US10740318B2 (en) 2020-08-11

Family ID=60515074

Family Applications (2)

Application Number Title Priority Date Filing Date
US15/794,368 Active 2038-05-17 US10740318B2 (en) 2017-10-26 2017-10-26 Key pattern management in multi-tenancy database systems
US16/860,532 Active 2038-08-28 US11561956B2 (en) 2017-10-26 2020-04-28 Key pattern management in multi-tenancy database systems

Family Applications After (1)

Application Number Title Priority Date Filing Date
US16/860,532 Active 2038-08-28 US11561956B2 (en) 2017-10-26 2020-04-28 Key pattern management in multi-tenancy database systems

Country Status (3)

Country Link
US (2) US10740318B2 (en)
EP (1) EP3477503A1 (en)
CN (1) CN110019215B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10915551B2 (en) 2018-06-04 2021-02-09 Sap Se Change management for shared objects in multi-tenancy systems
US11366658B1 (en) 2021-01-19 2022-06-21 Sap Se Seamless lifecycle stability for extensible software features
US11561956B2 (en) 2017-10-26 2023-01-24 Sap Se Key pattern management in multi-tenancy database systems
US11860841B2 (en) 2022-02-07 2024-01-02 Sap Se Online import using system-versioned tables

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10621167B2 (en) 2017-10-26 2020-04-14 Sap Se Data separation and write redirection in multi-tenancy database systems
US10713277B2 (en) 2017-10-26 2020-07-14 Sap Se Patching content across shared and tenant containers in multi-tenancy database systems
US10452646B2 (en) 2017-10-26 2019-10-22 Sap Se Deploying changes in a multi-tenancy database system
US10482080B2 (en) 2017-10-26 2019-11-19 Sap Se Exchanging shared containers and adapting tenants in multi-tenancy database systems
US10657276B2 (en) 2017-10-26 2020-05-19 Sap Se System sharing types in multi-tenancy database systems
US10733168B2 (en) 2017-10-26 2020-08-04 Sap Se Deploying changes to key patterns in multi-tenancy database systems
US10740315B2 (en) 2017-10-26 2020-08-11 Sap Se Transitioning between system sharing types in multi-tenancy database systems
US11762980B2 (en) 2018-03-14 2023-09-19 Microsoft Technology Licensing, Llc Autonomous secrets renewal and distribution
US10965457B2 (en) * 2018-03-14 2021-03-30 Microsoft Technology Licensing, Llc Autonomous cross-scope secrets management
US11061897B2 (en) 2018-05-07 2021-07-13 Sap Se Materializable database objects in multitenant environments
US10983762B2 (en) 2019-06-27 2021-04-20 Sap Se Application assessment system to achieve interface design consistency across micro services
US11249812B2 (en) 2019-07-25 2022-02-15 Sap Se Temporary compensation of outages
US11768878B2 (en) * 2019-09-20 2023-09-26 Fisher-Rosemount Systems, Inc. Search results display in a process control system
US11768877B2 (en) * 2019-09-20 2023-09-26 Fisher-Rosemount Systems, Inc. Smart search capabilities in a process control system
US11269717B2 (en) 2019-09-24 2022-03-08 Sap Se Issue-resolution automation
US11551141B2 (en) 2019-10-14 2023-01-10 Sap Se Data access control and workload management framework for development of machine learning (ML) models
US11416484B2 (en) * 2019-10-15 2022-08-16 Salesforce, Inc. Performance optimization of hybrid sharing model queries
US11379211B2 (en) 2019-12-05 2022-07-05 Sap Se Fencing execution of external tools during software changes
US11561836B2 (en) 2019-12-11 2023-01-24 Sap Se Optimizing distribution of heterogeneous software process workloads
US11354302B2 (en) 2020-06-16 2022-06-07 Sap Se Automatic creation and synchronization of graph database objects
US20230119834A1 (en) * 2021-10-19 2023-04-20 Sap Se Multi-tenancy using shared databases
US20230393845A1 (en) * 2022-06-07 2023-12-07 Sap Se Consolidation spaces providing access to multiple instances of application content

Citations (218)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050052150A1 (en) 2003-09-08 2005-03-10 Bender Paul T. Failsafe operation of active vehicle suspension
US20060248507A1 (en) 2005-04-29 2006-11-02 Sap Aktiengesellschaft Object generation in packages
US20060248545A1 (en) 2005-04-29 2006-11-02 Sap Aktiengesellschaft Calls and return calls using client interfaces
US7191160B2 (en) 2003-11-10 2007-03-13 Sap Ag Platform-independent data dictionary
US20070060609A1 (en) 2000-02-01 2007-03-15 Anderson Maibritt B Identification and Use of Growth Hormone Secretagogue Receptor Type 1A Antagonists
US20070156650A1 (en) 2005-12-30 2007-07-05 Becker Wolfgang A Systems and methods for accessing a shared space in a provider-tenant environment
US20070156849A1 (en) 2005-12-30 2007-07-05 Wolfgang Becker Systems and methods for delivering software upgrades in a provider-tenant environment
US20070162512A1 (en) 2006-01-10 2007-07-12 Microsoft Corporation Providing reporting database functionality using copy-on-write technology
US7302678B2 (en) 2003-09-10 2007-11-27 Sap Aktiengesellschaft Symmetric transformation processing system
US7325233B2 (en) 2001-11-07 2008-01-29 Sap Ag Process attachable virtual machines
US20080059489A1 (en) 2006-08-30 2008-03-06 International Business Machines Corporation Method for parallel query processing with non-dedicated, heterogeneous computers that is resilient to load bursts and node failures
US20080120129A1 (en) 2006-05-13 2008-05-22 Michael Seubert Consistent set of interfaces derived from a business object model
US7392236B2 (en) 2001-03-20 2008-06-24 Sap Ag Method, computer program product and computer system for a single database system to support multiple application systems
US20080162536A1 (en) 2006-12-29 2008-07-03 Becker Wolfgang A Systems and methods for extending shared data structures with tenant content in a provider-tenant environment
US20080162660A1 (en) 2006-12-29 2008-07-03 Becker Wolfgang A Systems and methods for accessing a shared space in a provider-tenant environment by using middleware
US20080162509A1 (en) * 2006-12-29 2008-07-03 Becker Wolfgang A Methods for updating a tenant space in a mega-tenancy environment
US7421437B2 (en) 2003-11-10 2008-09-02 Sap Ag System and method for a data dictionary cache in a distributed system
US7457828B2 (en) 2003-08-29 2008-11-25 Sap Ag System and method for synchronizing distributed buffers when committing data to a database
US7461097B2 (en) 2001-05-30 2008-12-02 Sap Aktiengesellschaft Method, system, and computer program for migrating content from source database to target database
US7480681B2 (en) 2004-12-06 2009-01-20 Sap Ag System and method for a transaction manager
US7490102B2 (en) 2004-06-07 2009-02-10 Sap Ag System and method for interacting with a persistence layer
US7519614B2 (en) 2006-08-31 2009-04-14 Sap Ag Data verification systems and methods using business objects
US7523142B2 (en) 2001-12-17 2009-04-21 Sap Ag Systems, methods and articles of manufacture for upgrading a database with a shadow system
US7565443B2 (en) 2002-12-13 2009-07-21 Sap Ag Common persistence layer
US7571164B2 (en) 2004-10-01 2009-08-04 Sap Ag System and method for deferred database connection configuration
US7631303B2 (en) 2004-06-07 2009-12-08 Sap Aktiengesellschaft System and method for a query language mapping architecture
US7647251B2 (en) 2006-04-03 2010-01-12 Sap Ag Process integration error and conflict handling
US7657575B2 (en) 2005-12-30 2010-02-02 Sap Ag Sequencing updates to business objects
US20100030995A1 (en) 2008-07-30 2010-02-04 International Business Machines Corporation Method and apparatus for applying database partitioning in a multi-tenancy scenario
US7669181B2 (en) 2005-04-29 2010-02-23 Sap (Ag) Client interfaces for packages
US20100070336A1 (en) 2008-09-18 2010-03-18 Sap Ag Providing Customer Relationship Management Application as Enterprise Services
US7693851B2 (en) 2005-12-30 2010-04-06 Sap Ag Systems and methods for implementing a shared space in a provider-tenant environment
US20100094882A1 (en) * 2008-10-09 2010-04-15 International Business Machines Corporation Automated data conversion and route tracking in distributed databases
US7702696B2 (en) 2007-04-16 2010-04-20 Sap Ag Emulation of empty database tables using database views
US7720992B2 (en) 2005-02-02 2010-05-18 Sap Aktiengesellschaft Tentative update and confirm or compensate
US7734648B2 (en) 2006-04-11 2010-06-08 Sap Ag Update manager for database system
US7739387B2 (en) 2007-03-08 2010-06-15 Sap Ag System and method for message packaging
US20100153341A1 (en) 2008-12-17 2010-06-17 Sap Ag Selectable data migration
US20100161648A1 (en) 2008-12-19 2010-06-24 Peter Eberlein Flexible multi-tenant support of metadata extension
US7774319B2 (en) 2004-08-11 2010-08-10 Sap Ag System and method for an optimistic database access
US7788319B2 (en) 2003-05-16 2010-08-31 Sap Ag Business process management for a message-based exchange infrastructure
US7797708B2 (en) 2006-06-26 2010-09-14 Sap Ag Simulating actions on mockup business objects
US20100299664A1 (en) 2009-05-21 2010-11-25 Salesforce.Com, Inc. System, method and computer program product for pushing an application update between tenants of a multi-tenant on-demand database service
US7844659B2 (en) 2006-04-03 2010-11-30 Sap Ag Process integration persistency
US7894602B2 (en) 2006-03-31 2011-02-22 Sap Ag System and method for generating pseudo-random numbers
US7934219B2 (en) 2005-12-29 2011-04-26 Sap Ag Process agents for process integration
US7962920B2 (en) 2006-12-22 2011-06-14 Sap Ag Providing a business logic framework
US7971209B2 (en) 2007-05-18 2011-06-28 Sap Ag Shortcut in reliable communication
US20110173219A1 (en) * 2008-10-09 2011-07-14 International Business Machines Corporation Dynamic context definitions in distributed databases
US8005779B2 (en) 2007-09-12 2011-08-23 Sap Ag System and method for designing a workflow
US8069184B2 (en) 2006-12-29 2011-11-29 Sap Ag Systems and methods to implement extensibility of tenant content in a provider-tenant environment
US20110295839A1 (en) 2010-05-27 2011-12-01 Salesforce.Com, Inc. Optimizing queries in a multi-tenant database system environment
US8108434B2 (en) 2008-08-26 2012-01-31 Sap Ag Dynamic node extensions and extension fields for business objects
US8108433B2 (en) 2008-08-26 2012-01-31 Sap Ag Dynamic extension fields for business objects
US20120036136A1 (en) * 2010-08-06 2012-02-09 At&T Intellectual Property I, L.P. Securing database content
US20120041988A1 (en) 2010-08-11 2012-02-16 Sap Ag Selectively Upgrading Clients in a Multi-Tenant Computing System
US8200634B2 (en) 2008-10-08 2012-06-12 Sap Ag Zero downtime maintenance using a mirror approach
US20120166620A1 (en) 2010-12-23 2012-06-28 Sap Ag System and method for integrated real time reporting and analytics across networked applications
US8214382B1 (en) * 2008-11-25 2012-07-03 Sprint Communications Company L.P. Database predicate constraints on structured query language statements
US20120173488A1 (en) 2010-12-29 2012-07-05 Lars Spielberg Tenant-separated data storage for lifecycle management in a multi-tenancy environment
US20120173581A1 (en) 2010-12-30 2012-07-05 Martin Hartig Strict Tenant Isolation in Multi-Tenant Enabled Systems
US20120174085A1 (en) 2010-12-30 2012-07-05 Volker Driesen Tenant Move Upgrade
US8225303B2 (en) 2007-11-30 2012-07-17 Sap Ag System and method for providing software upgrades
US8250135B2 (en) 2010-07-09 2012-08-21 Sap Ag Brokered cloud computing architecture
US20120254221A1 (en) 2011-03-29 2012-10-04 Salesforce.Com, Inc. Systems and methods for performing record actions in a multi-tenant database and application system
US8291038B2 (en) 2009-06-29 2012-10-16 Sap Ag Remote automation of manual tasks
US8301610B2 (en) 2010-07-21 2012-10-30 Sap Ag Optimizing search for insert-only databases and write-once data storage
US8315988B2 (en) 2006-08-31 2012-11-20 Sap Ag Systems and methods for verifying a data communication process
US20120330954A1 (en) 2011-06-27 2012-12-27 Swaminathan Sivasubramanian System And Method For Implementing A Scalable Data Storage Service
US20120331016A1 (en) 2011-06-23 2012-12-27 Salesforce.Com Inc. Methods and systems for caching data shared between organizations in a multi-tenant database system
US8356056B2 (en) 2008-08-26 2013-01-15 Sap Ag Functional extensions for business objects
US8356010B2 (en) 2010-08-11 2013-01-15 Sap Ag Online data migration
US8375130B2 (en) 2010-12-16 2013-02-12 Sap Ag Shared resource discovery, configuration, and consumption for networked solutions
US8392573B2 (en) 2010-07-30 2013-03-05 Sap Ag Transport of customer flexibility changes in a multi-tenant environment
US8407297B2 (en) 2007-10-22 2013-03-26 Sap Ag Systems and methods to receive information from a groupware client
US8413150B2 (en) 2009-07-31 2013-04-02 Sap Ag Systems and methods for data aware workflow change management
US20130086322A1 (en) 2011-09-30 2013-04-04 Oracle International Corporation Systems and methods for multitenancy data
US8429668B2 (en) 2007-12-07 2013-04-23 Sap Ag Workflow task re-evaluation
US8434060B2 (en) 2010-08-17 2013-04-30 Sap Ag Component load procedure for setting up systems
US20130132349A1 (en) 2010-06-14 2013-05-23 Uwe H.O. Hahn Tenant separation within a database instance
US8467817B2 (en) 2011-06-16 2013-06-18 Sap Ag Generic business notifications for mobile devices
US8473515B2 (en) 2010-05-10 2013-06-25 International Business Machines Corporation Multi-tenancy in database namespace
US8473942B2 (en) 2008-11-28 2013-06-25 Sap Ag Changable deployment conditions
US8479187B2 (en) 2008-12-02 2013-07-02 Sap Ag Adaptive switch installer
US8484167B2 (en) 2006-08-31 2013-07-09 Sap Ag Data verification systems and methods based on messaging data
US8489640B2 (en) 2010-07-19 2013-07-16 Sap Ag Field extensibility using generic boxed components
US8504980B1 (en) 2008-04-14 2013-08-06 Sap Ag Constraining data changes during transaction processing by a computer system
WO2013132377A1 (en) 2012-03-08 2013-09-12 International Business Machines Corporation Managing tenant-specific data sets in a multi-tenant environment
US8555249B2 (en) 2010-12-13 2013-10-08 Sap Ag Lifecycle stable user interface adaptations
US8560876B2 (en) 2010-07-06 2013-10-15 Sap Ag Clock acceleration of CPU core based on scanned result of task for parallel execution controlling key word
US20130275509A1 (en) 2012-04-11 2013-10-17 Salesforce.Com Inc. System and method for synchronizing data objects in a cloud based social networking environment
US8566784B2 (en) 2011-09-22 2013-10-22 Sap Ag Business process change controller
US20130282761A1 (en) 2012-04-18 2013-10-24 Salesforce.Com, Inc. System and method for entity shape abstraction in an on demand environment
US8572369B2 (en) 2009-12-11 2013-10-29 Sap Ag Security for collaboration services
US20130290249A1 (en) 2010-12-23 2013-10-31 Dwight Merriman Large distributed database clustering systems and methods
US20130325672A1 (en) 2012-05-31 2013-12-05 Sap Ag Mobile forecasting of sales using customer stock levels in a supplier business system
US8604973B2 (en) 2010-11-30 2013-12-10 Sap Ag Data access and management using GPS location data
US20130332424A1 (en) 2012-06-12 2013-12-12 Sap Ag Centralized read access logging
US8612927B2 (en) 2011-07-05 2013-12-17 Sap Ag Bulk access to metadata in a service-oriented business framework
US8612406B1 (en) 2012-05-22 2013-12-17 Sap Ag Sharing business data across networked applications
US8645483B2 (en) 2011-11-15 2014-02-04 Sap Ag Groupware-integrated business document management
US20140040294A1 (en) 2012-07-31 2014-02-06 International Business Machines Corporation Manipulation of multi-tenancy database
US20140047319A1 (en) 2012-08-13 2014-02-13 Sap Ag Context injection and extraction in xml documents based on common sparse templates
US8683436B2 (en) 2007-12-19 2014-03-25 Sap Ag Timer patterns for process models
US8694557B2 (en) 2010-07-02 2014-04-08 Sap Ag Extensibility of metaobjects
US20140101099A1 (en) 2012-10-04 2014-04-10 Sap Ag Replicated database structural change management
US20140108440A1 (en) 2012-10-12 2014-04-17 Sap Ag Configuration of Life Cycle Management for Configuration Files for an Application
US8719826B2 (en) 2007-12-21 2014-05-06 Sap Ag Work flow model processing with weak dependencies that allows runtime insertion of additional tasks
US8751573B2 (en) 2010-11-23 2014-06-10 Sap Ag Cloud-processing management with a landscape directory
US8751437B2 (en) 2012-11-01 2014-06-10 Sap Ag Single persistence implementation of business objects
US20140164963A1 (en) 2012-12-11 2014-06-12 Sap Ag User configurable subdivision of user interface elements and full-screen access to subdivided elements
US8762408B2 (en) 2012-03-07 2014-06-24 Sap Ag Optimizing software applications
US8762929B2 (en) 2010-12-16 2014-06-24 Sap Ag System and method for exclusion of inconsistent objects from lifecycle management processes
US8762731B2 (en) 2012-09-14 2014-06-24 Sap Ag Multi-system security integration
US8769704B2 (en) 2010-09-10 2014-07-01 Salesforce.Com, Inc. Method and system for managing and monitoring of a multi-tenant system
US8793230B2 (en) 2012-10-23 2014-07-29 Sap Ag Single-database multiple-tenant software system upgrade
US8805986B2 (en) 2011-10-31 2014-08-12 Sap Ag Application scope adjustment based on resource consumption
US8812554B1 (en) 2012-03-22 2014-08-19 Projectx, International Ltd. Method and system for storing shared data records in relational database
US8819075B2 (en) 2010-07-26 2014-08-26 Sap Ag Facilitation of extension field usage based on reference field usage
US8850432B2 (en) 2012-05-30 2014-09-30 Red Hat, Inc. Controlling utilization in a multi-tenant platform-as-a-service (PaaS) environment in a cloud computing system
US8856727B2 (en) 2012-07-16 2014-10-07 Sap Se Generation framework for mapping of object models in a development environment
US8863005B2 (en) 2009-12-21 2014-10-14 Sap Se Propagating business object extension fields from source to target
US8863097B2 (en) 2010-12-29 2014-10-14 Sap Ag Providing code list extensibility
US8868582B2 (en) 2010-08-23 2014-10-21 Sap Ag Repository infrastructure for on demand platforms
US20140325069A1 (en) 2013-04-29 2014-10-30 Sap Ag Cloud sharing system
US20140324917A1 (en) 2013-04-29 2014-10-30 Sap Ag Reclamation of empty pages in database tables
US8880486B2 (en) 2010-07-27 2014-11-04 Sap Ag Distributed database system utilizing an extended two-phase-commit process
US8886596B2 (en) 2010-10-11 2014-11-11 Sap Se Method for reorganizing or moving a database table
US8892667B2 (en) 2011-09-29 2014-11-18 Sap Se Systems and methods for sending and receiving communications
US8904402B2 (en) 2012-05-30 2014-12-02 Red Hat, Inc. Controlling capacity in a multi-tenant platform-as-a-service environment in a cloud computing system
US20140359594A1 (en) 2013-06-04 2014-12-04 Sap Ag Repository layer strategy adaptation for software solution hosting
US20140379677A1 (en) 2013-06-24 2014-12-25 Sap Ag Test sandbox in production systems during productive use
US8924384B2 (en) 2010-08-04 2014-12-30 Sap Ag Upgrading column-based databases
US20150006608A1 (en) 2013-06-26 2015-01-01 Sap Ag Networked solutions integration using a cloud business object broker
US8930413B2 (en) 2012-01-03 2015-01-06 International Business Machines Corporation Dynamic structure for a multi-tenant database
US8938645B2 (en) 2012-09-19 2015-01-20 Sap Se Invalidation of metadata buffers
US20150026131A1 (en) 2013-07-19 2015-01-22 Sap Ag Data availability during columnar table merges
US8949789B2 (en) 2012-08-13 2015-02-03 Sap Se Adaptable business objects
US20150046413A1 (en) 2013-08-06 2015-02-12 Sap Ag Delta store giving row-level versioning semantics to a non-row-level versioning underlying store
US8972934B2 (en) 2010-12-20 2015-03-03 Sap Ag Support for temporally asynchronous interface extensions
US8978035B2 (en) 2012-09-06 2015-03-10 Red Hat, Inc. Scaling of application resources in a multi-tenant platform-as-a-service environment in a cloud computing system
US8996466B2 (en) 2008-12-01 2015-03-31 Sap Se Extend crud to support lifecyle management and business continuity
US20150095283A1 (en) 2013-09-27 2015-04-02 Microsoft Corporation Master schema shared across multiple tenants with dynamic update
US20150100546A1 (en) 2013-10-07 2015-04-09 Sap Ag Selective Synchronization in a Hierarchical Folder Structure
US9009708B2 (en) 2010-03-31 2015-04-14 Sap Se Method and system to effectuate recovery for dynamic workflows
US9009105B2 (en) 2010-12-30 2015-04-14 Sap Se Application exits for consistent tenant lifecycle management procedures
US9015212B2 (en) 2012-10-16 2015-04-21 Rackspace Us, Inc. System and method for exposing cloud stored data to a content delivery network
US9020881B2 (en) 2008-12-19 2015-04-28 Sap Se Public solution model in an enterprise service architecture
US9021392B2 (en) 2010-07-26 2015-04-28 Sap Se Managing extension projects with repository based tagging
US20150121545A1 (en) 2013-10-24 2015-04-30 Salesforce.Com, Inc. Security descriptors for record access queries
US9026857B2 (en) 2012-10-19 2015-05-05 Sap Se Method and system for postponed error code checks
US9026502B2 (en) 2013-06-25 2015-05-05 Sap Se Feedback optimized checks for database migration
US9031910B2 (en) 2013-06-24 2015-05-12 Sap Se System and method for maintaining a cluster setup
US9032406B2 (en) 2010-07-01 2015-05-12 Sap Se Cooperative batch scheduling in multitenancy system based on estimated execution time and generating a load distribution chart
US9038021B2 (en) 2012-08-15 2015-05-19 Sap Ag Naming algorithm for extension fields in de-normalized views
US20150142730A1 (en) 2012-05-18 2015-05-21 Georgetown University Methods and systems for populating and searching a drug informatics database
US20150178332A1 (en) 2013-12-19 2015-06-25 Sap Ag Transformation of document flow to contributors network
US9069984B2 (en) 2011-12-21 2015-06-30 Sap Se On-demand authorization management
US9069832B2 (en) 2012-12-21 2015-06-30 Sap Ag Approach for modularized sychronization and memory management
US9077717B2 (en) 2012-11-30 2015-07-07 Sap Se Propagation and adoption of extensions across applications in networked solutions
US20150242520A1 (en) 2014-02-26 2015-08-27 International Business Machines Corporation Cross tenant data access
US9122669B2 (en) 2008-08-29 2015-09-01 Sap Se Flat schema integrated document oriented templates
US9137130B2 (en) 2011-09-22 2015-09-15 Sap Se Dynamic network load forecasting
US9176801B2 (en) 2013-09-06 2015-11-03 Sap Se Advanced data models containing declarative and programmatic constraints
US9183540B2 (en) 2012-07-03 2015-11-10 Sap Se Mobile device analytics engine
US9182979B2 (en) 2013-04-29 2015-11-10 Sap Se Social coding extensions
US9182994B2 (en) 2012-07-18 2015-11-10 Sap Se Layering of business object models via extension techniques
US9189520B2 (en) 2013-06-24 2015-11-17 Sap Se Methods and systems for one dimensional heterogeneous histograms
US9189226B2 (en) 2013-06-25 2015-11-17 Sap Se Software logistics protocols
US20150347410A1 (en) 2014-06-03 2015-12-03 Sap Ag Cached Views
US20150363167A1 (en) 2014-06-16 2015-12-17 International Business Machines Corporation Flash optimized columnar data layout and data access algorithms for big data query engines
US9223985B2 (en) 2013-10-09 2015-12-29 Sap Se Risk assessment of changing computer system within a landscape
US9229707B2 (en) 2008-12-18 2016-01-05 Sap Se Zero downtime mechanism for software upgrade of a distributed computer system
US9244697B2 (en) 2010-07-30 2016-01-26 Sap Se Stable anchors in user interface to support life cycle extensions
US9256840B2 (en) 2011-12-01 2016-02-09 Sap Se Establishing business networks using a shared platform
US9262763B2 (en) 2006-09-29 2016-02-16 Sap Se Providing attachment-based data input and output
US9274757B2 (en) 2013-12-19 2016-03-01 Sap Se Customer tailored release master plan generation for hybrid networked solutions
US9275120B2 (en) 2012-05-30 2016-03-01 Sap Se Easy query
WO2016049576A1 (en) 2014-09-25 2016-03-31 Oracle International Corporation System and method for use of a global runtime in a multitenant application server environment
US20160147529A1 (en) 2014-11-20 2016-05-26 Red Hat, Inc. Source Code Management for a Multi-Tenant Platform-as-a-Service (PaaS) System
US9354948B2 (en) 2013-09-06 2016-05-31 Sap Se Data models containing host language embedded constraints
US9361407B2 (en) 2013-09-06 2016-06-07 Sap Se SQL extended with transient fields for calculation expressions in enhanced data models
US9378233B2 (en) 2013-11-26 2016-06-28 Sap Se For all entries processing
US20160224594A1 (en) * 2015-02-03 2016-08-04 Simba Technologies Inc. Schema Definition Tool
US9417917B1 (en) 2012-12-14 2016-08-16 Amazon Technologies, Inc. Equitable resource allocation for storage object deletion
US20160246864A1 (en) 2015-02-23 2016-08-25 International Business Machines Corporation Relaxing transaction serializability with statement-based data replication
US9430523B2 (en) 2013-09-06 2016-08-30 Sap Se Entity-relationship model extensions using annotations
US9436515B2 (en) 2010-12-29 2016-09-06 Sap Se Tenant virtualization controller for exporting tenant without shifting location of tenant data in a multi-tenancy environment
US9442977B2 (en) 2013-09-06 2016-09-13 Sap Se Database language extended to accommodate entity-relationship models
US9471353B1 (en) 2014-03-21 2016-10-18 Amazon Technologies, Inc. Isolating tenants executing in multi-tenant software containers
US9507810B2 (en) 2013-12-10 2016-11-29 Sap Se Updating database schemas in a zero-downtime environment
US9513811B2 (en) 2014-11-25 2016-12-06 Sap Se Materializing data from an in-memory array to an on-disk page structure
US20160358109A1 (en) 2015-06-08 2016-12-08 Sap Se Test System Using Production Data Without Disturbing Production System
US20160371315A1 (en) 2010-12-29 2016-12-22 Sap Se In-Memory Database For Multi-Tenancy
US20170025441A1 (en) 2014-04-08 2017-01-26 Sharp Kabushiki Kaisha Display device
US9575819B2 (en) 2013-09-06 2017-02-21 Sap Se Local buffers for event handlers
US9590872B1 (en) 2013-03-14 2017-03-07 Ca, Inc. Automated cloud IT services delivery solution model
US9619261B2 (en) 2015-06-29 2017-04-11 Vmware, Inc. Method and system for anticipating demand for a computational resource by containers running above guest operating systems within a distributed, virtualized computer system
US9619552B2 (en) 2013-09-06 2017-04-11 Sap Se Core data services extensibility for entity-relationship models
US9639572B2 (en) 2013-09-06 2017-05-02 Sap Se SQL enhancements simplifying database querying
US9641529B2 (en) 2014-11-10 2017-05-02 Coastal Federal Credit Union Methods, systems and computer program products for an application execution container for managing secondary application protocols
US9724757B2 (en) 2013-09-06 2017-08-08 North American Refractories Company Refractory component for lining a metallurgical vessel
US9734230B2 (en) 2013-09-12 2017-08-15 Sap Se Cross system analytics for in memory data warehouse
US20170262638A1 (en) 2015-09-25 2017-09-14 Eliot Horowitz Distributed database systems and methods with encrypted storage engines
US20180096165A1 (en) 2016-09-30 2018-04-05 Salesforce.Com, Inc. Provisioning for multi-tenant non-relational platform objects
US20180150541A1 (en) 2016-11-28 2018-05-31 Sap Se Proxy Views for Extended Monitoring of Database Systems
US20180189370A1 (en) * 2017-01-05 2018-07-05 International Business Machines Corporation Accelerator based data integration
US20190042660A1 (en) 2017-08-01 2019-02-07 salesforce.com,inc. Mechanism for providing multiple views in a multi-tenant data structure
US10248336B1 (en) 2016-09-30 2019-04-02 Tintri By Ddn, Inc. Efficient deletion of shared snapshots
US20190129985A1 (en) 2017-10-26 2019-05-02 Sap Se Deploying changes to key patterns in multi-tenancy database systems
US20190129986A1 (en) 2017-10-26 2019-05-02 Sap Se Transitioning between system sharing types in multi-tenancy database systems
US20190129991A1 (en) 2017-10-26 2019-05-02 Sap Se Exchanging shared containers and adapting tenants in multi-tenancy database systems
US20190129990A1 (en) 2017-10-26 2019-05-02 Sap Se Deploying changes in a multi-tenancy database system
US20190129997A1 (en) 2017-10-26 2019-05-02 Sap Se Data separation and write redirection in multi-tenancy database systems
US20190130121A1 (en) 2017-10-26 2019-05-02 Sap Se System sharing types in multi-tenancy database systems
US20190130010A1 (en) 2017-10-26 2019-05-02 Sap Se Patching content across shared and tenant containers in multi-tenancy database systems
US10346434B1 (en) 2015-08-21 2019-07-09 Amazon Technologies, Inc. Partitioned data materialization in journal-based storage systems
US20190370377A1 (en) 2018-06-04 2019-12-05 Sap Se Change management for shared objects in multi-tenancy systems

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6950823B2 (en) * 2002-12-23 2005-09-27 International Business Machines Corporation Transparent edge-of-network data cache
US7089235B2 (en) * 2003-04-17 2006-08-08 International Business Machines Corporation Method for restricting queryable data in an abstract database
US8112459B2 (en) * 2004-12-17 2012-02-07 International Business Machines Corporation Creating a logical table from multiple differently formatted physical tables having different access methods
JP2008146603A (en) * 2006-12-13 2008-06-26 Canon Inc Document retrieving apparatus, document retrieving method, program, and storage medium
CN101158958B (en) * 2007-10-23 2010-06-09 浙江大学 Fusion enquire method based on MySQL storage engines
CN100594497C (en) * 2008-07-31 2010-03-17 中国科学院计算技术研究所 System for implementing network search caching and search method
CN101404013A (en) * 2008-11-13 2009-04-08 山东浪潮齐鲁软件产业股份有限公司 Storage and query method for large data volume table of database
CN102254029B (en) * 2011-07-29 2013-06-19 株洲南车时代电气股份有限公司 View-based data access system and method
US9043309B2 (en) * 2012-06-05 2015-05-26 Oracle International Corporation SQL transformation-based optimization techniques for enforcement of data access control
CN102999629B (en) * 2012-12-12 2016-01-13 济南大学 A kind of relational database is to the asynchronous converting system of non-mode database and method
CN104216893B (en) * 2013-05-31 2018-01-16 中国电信股份有限公司 Partition management method, server and the system of multi-tenant shared data table
GB2521197A (en) * 2013-12-13 2015-06-17 Ibm Incremental and collocated redistribution for expansion of an online shared nothing database
US9465840B2 (en) * 2014-03-14 2016-10-11 International Business Machines Corporation Dynamically indentifying and preventing skewed partitions in a shared-nothing database
CN106471489B (en) * 2014-06-30 2019-10-11 微软技术许可有限责任公司 Manage the data with flexible modes
US9996563B2 (en) * 2015-03-23 2018-06-12 International Business Machines Corporation Efficient full delete operations
US10740318B2 (en) 2017-10-26 2020-08-11 Sap Se Key pattern management in multi-tenancy database systems

Patent Citations (231)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070060609A1 (en) 2000-02-01 2007-03-15 Anderson Maibritt B Identification and Use of Growth Hormone Secretagogue Receptor Type 1A Antagonists
US7392236B2 (en) 2001-03-20 2008-06-24 Sap Ag Method, computer program product and computer system for a single database system to support multiple application systems
US7461097B2 (en) 2001-05-30 2008-12-02 Sap Aktiengesellschaft Method, system, and computer program for migrating content from source database to target database
US7325233B2 (en) 2001-11-07 2008-01-29 Sap Ag Process attachable virtual machines
US7523142B2 (en) 2001-12-17 2009-04-21 Sap Ag Systems, methods and articles of manufacture for upgrading a database with a shadow system
US7565443B2 (en) 2002-12-13 2009-07-21 Sap Ag Common persistence layer
US7788319B2 (en) 2003-05-16 2010-08-31 Sap Ag Business process management for a message-based exchange infrastructure
US7457828B2 (en) 2003-08-29 2008-11-25 Sap Ag System and method for synchronizing distributed buffers when committing data to a database
US20050052150A1 (en) 2003-09-08 2005-03-10 Bender Paul T. Failsafe operation of active vehicle suspension
US7650597B2 (en) 2003-09-10 2010-01-19 Sap Aktiengesellschaft Symmetric transformation processing system
US7302678B2 (en) 2003-09-10 2007-11-27 Sap Aktiengesellschaft Symmetric transformation processing system
US7421437B2 (en) 2003-11-10 2008-09-02 Sap Ag System and method for a data dictionary cache in a distributed system
US7191160B2 (en) 2003-11-10 2007-03-13 Sap Ag Platform-independent data dictionary
US7631303B2 (en) 2004-06-07 2009-12-08 Sap Aktiengesellschaft System and method for a query language mapping architecture
US7490102B2 (en) 2004-06-07 2009-02-10 Sap Ag System and method for interacting with a persistence layer
US7774319B2 (en) 2004-08-11 2010-08-10 Sap Ag System and method for an optimistic database access
US7571164B2 (en) 2004-10-01 2009-08-04 Sap Ag System and method for deferred database connection configuration
US7480681B2 (en) 2004-12-06 2009-01-20 Sap Ag System and method for a transaction manager
US7720992B2 (en) 2005-02-02 2010-05-18 Sap Aktiengesellschaft Tentative update and confirm or compensate
US20060248545A1 (en) 2005-04-29 2006-11-02 Sap Aktiengesellschaft Calls and return calls using client interfaces
US7587705B2 (en) 2005-04-29 2009-09-08 Sap (Ag) Calls and return calls using client interfaces
US7634771B2 (en) 2005-04-29 2009-12-15 Sap (Ag) Object generation in packages
US20060248507A1 (en) 2005-04-29 2006-11-02 Sap Aktiengesellschaft Object generation in packages
US7669181B2 (en) 2005-04-29 2010-02-23 Sap (Ag) Client interfaces for packages
US7934219B2 (en) 2005-12-29 2011-04-26 Sap Ag Process agents for process integration
US20070156650A1 (en) 2005-12-30 2007-07-05 Becker Wolfgang A Systems and methods for accessing a shared space in a provider-tenant environment
US20070156849A1 (en) 2005-12-30 2007-07-05 Wolfgang Becker Systems and methods for delivering software upgrades in a provider-tenant environment
US7657575B2 (en) 2005-12-30 2010-02-02 Sap Ag Sequencing updates to business objects
US7693851B2 (en) 2005-12-30 2010-04-06 Sap Ag Systems and methods for implementing a shared space in a provider-tenant environment
US20070162512A1 (en) 2006-01-10 2007-07-12 Microsoft Corporation Providing reporting database functionality using copy-on-write technology
US7894602B2 (en) 2006-03-31 2011-02-22 Sap Ag System and method for generating pseudo-random numbers
US7647251B2 (en) 2006-04-03 2010-01-12 Sap Ag Process integration error and conflict handling
US7844659B2 (en) 2006-04-03 2010-11-30 Sap Ag Process integration persistency
US8126919B2 (en) 2006-04-11 2012-02-28 Sap Ag Update manager for database system
US7734648B2 (en) 2006-04-11 2010-06-08 Sap Ag Update manager for database system
US20080120129A1 (en) 2006-05-13 2008-05-22 Michael Seubert Consistent set of interfaces derived from a business object model
US7797708B2 (en) 2006-06-26 2010-09-14 Sap Ag Simulating actions on mockup business objects
US20080059489A1 (en) 2006-08-30 2008-03-06 International Business Machines Corporation Method for parallel query processing with non-dedicated, heterogeneous computers that is resilient to load bursts and node failures
US8315988B2 (en) 2006-08-31 2012-11-20 Sap Ag Systems and methods for verifying a data communication process
US8484167B2 (en) 2006-08-31 2013-07-09 Sap Ag Data verification systems and methods based on messaging data
US7519614B2 (en) 2006-08-31 2009-04-14 Sap Ag Data verification systems and methods using business objects
US9262763B2 (en) 2006-09-29 2016-02-16 Sap Se Providing attachment-based data input and output
US7962920B2 (en) 2006-12-22 2011-06-14 Sap Ag Providing a business logic framework
US8069184B2 (en) 2006-12-29 2011-11-29 Sap Ag Systems and methods to implement extensibility of tenant content in a provider-tenant environment
US20080162660A1 (en) 2006-12-29 2008-07-03 Becker Wolfgang A Systems and methods for accessing a shared space in a provider-tenant environment by using middleware
US20080162536A1 (en) 2006-12-29 2008-07-03 Becker Wolfgang A Systems and methods for extending shared data structures with tenant content in a provider-tenant environment
US20080162509A1 (en) * 2006-12-29 2008-07-03 Becker Wolfgang A Methods for updating a tenant space in a mega-tenancy environment
US7739387B2 (en) 2007-03-08 2010-06-15 Sap Ag System and method for message packaging
US7702696B2 (en) 2007-04-16 2010-04-20 Sap Ag Emulation of empty database tables using database views
US7971209B2 (en) 2007-05-18 2011-06-28 Sap Ag Shortcut in reliable communication
US8005779B2 (en) 2007-09-12 2011-08-23 Sap Ag System and method for designing a workflow
US8407297B2 (en) 2007-10-22 2013-03-26 Sap Ag Systems and methods to receive information from a groupware client
US8225303B2 (en) 2007-11-30 2012-07-17 Sap Ag System and method for providing software upgrades
US8429668B2 (en) 2007-12-07 2013-04-23 Sap Ag Workflow task re-evaluation
US8683436B2 (en) 2007-12-19 2014-03-25 Sap Ag Timer patterns for process models
US8719826B2 (en) 2007-12-21 2014-05-06 Sap Ag Work flow model processing with weak dependencies that allows runtime insertion of additional tasks
US8504980B1 (en) 2008-04-14 2013-08-06 Sap Ag Constraining data changes during transaction processing by a computer system
US20100030995A1 (en) 2008-07-30 2010-02-04 International Business Machines Corporation Method and apparatus for applying database partitioning in a multi-tenancy scenario
US8108433B2 (en) 2008-08-26 2012-01-31 Sap Ag Dynamic extension fields for business objects
US8108434B2 (en) 2008-08-26 2012-01-31 Sap Ag Dynamic node extensions and extension fields for business objects
US8356056B2 (en) 2008-08-26 2013-01-15 Sap Ag Functional extensions for business objects
US9122669B2 (en) 2008-08-29 2015-09-01 Sap Se Flat schema integrated document oriented templates
US20100070336A1 (en) 2008-09-18 2010-03-18 Sap Ag Providing Customer Relationship Management Application as Enterprise Services
US8200634B2 (en) 2008-10-08 2012-06-12 Sap Ag Zero downtime maintenance using a mirror approach
US20110173219A1 (en) * 2008-10-09 2011-07-14 International Business Machines Corporation Dynamic context definitions in distributed databases
US20100094882A1 (en) * 2008-10-09 2010-04-15 International Business Machines Corporation Automated data conversion and route tracking in distributed databases
US8214382B1 (en) * 2008-11-25 2012-07-03 Sprint Communications Company L.P. Database predicate constraints on structured query language statements
US8473942B2 (en) 2008-11-28 2013-06-25 Sap Ag Changable deployment conditions
US8996466B2 (en) 2008-12-01 2015-03-31 Sap Se Extend crud to support lifecyle management and business continuity
US8479187B2 (en) 2008-12-02 2013-07-02 Sap Ag Adaptive switch installer
US20100153341A1 (en) 2008-12-17 2010-06-17 Sap Ag Selectable data migration
US9229707B2 (en) 2008-12-18 2016-01-05 Sap Se Zero downtime mechanism for software upgrade of a distributed computer system
US9020881B2 (en) 2008-12-19 2015-04-28 Sap Se Public solution model in an enterprise service architecture
US20100161648A1 (en) 2008-12-19 2010-06-24 Peter Eberlein Flexible multi-tenant support of metadata extension
US20100299664A1 (en) 2009-05-21 2010-11-25 Salesforce.Com, Inc. System, method and computer program product for pushing an application update between tenants of a multi-tenant on-demand database service
US8291038B2 (en) 2009-06-29 2012-10-16 Sap Ag Remote automation of manual tasks
US8413150B2 (en) 2009-07-31 2013-04-02 Sap Ag Systems and methods for data aware workflow change management
US8572369B2 (en) 2009-12-11 2013-10-29 Sap Ag Security for collaboration services
US8863005B2 (en) 2009-12-21 2014-10-14 Sap Se Propagating business object extension fields from source to target
US9009708B2 (en) 2010-03-31 2015-04-14 Sap Se Method and system to effectuate recovery for dynamic workflows
US8473515B2 (en) 2010-05-10 2013-06-25 International Business Machines Corporation Multi-tenancy in database namespace
US20110295839A1 (en) 2010-05-27 2011-12-01 Salesforce.Com, Inc. Optimizing queries in a multi-tenant database system environment
US20130132349A1 (en) 2010-06-14 2013-05-23 Uwe H.O. Hahn Tenant separation within a database instance
US9032406B2 (en) 2010-07-01 2015-05-12 Sap Se Cooperative batch scheduling in multitenancy system based on estimated execution time and generating a load distribution chart
US8694557B2 (en) 2010-07-02 2014-04-08 Sap Ag Extensibility of metaobjects
US8560876B2 (en) 2010-07-06 2013-10-15 Sap Ag Clock acceleration of CPU core based on scanned result of task for parallel execution controlling key word
US8402086B2 (en) 2010-07-09 2013-03-19 Sap Ag Brokered cloud computing architecture
US8250135B2 (en) 2010-07-09 2012-08-21 Sap Ag Brokered cloud computing architecture
US8489640B2 (en) 2010-07-19 2013-07-16 Sap Ag Field extensibility using generic boxed components
US8301610B2 (en) 2010-07-21 2012-10-30 Sap Ag Optimizing search for insert-only databases and write-once data storage
US8819075B2 (en) 2010-07-26 2014-08-26 Sap Ag Facilitation of extension field usage based on reference field usage
US9021392B2 (en) 2010-07-26 2015-04-28 Sap Se Managing extension projects with repository based tagging
US8880486B2 (en) 2010-07-27 2014-11-04 Sap Ag Distributed database system utilizing an extended two-phase-commit process
US8924565B2 (en) 2010-07-30 2014-12-30 Sap Se Transport of customer flexibility changes in a multi-tenant environment
US9244697B2 (en) 2010-07-30 2016-01-26 Sap Se Stable anchors in user interface to support life cycle extensions
US8392573B2 (en) 2010-07-30 2013-03-05 Sap Ag Transport of customer flexibility changes in a multi-tenant environment
US8924384B2 (en) 2010-08-04 2014-12-30 Sap Ag Upgrading column-based databases
US20120036136A1 (en) * 2010-08-06 2012-02-09 At&T Intellectual Property I, L.P. Securing database content
US20120041988A1 (en) 2010-08-11 2012-02-16 Sap Ag Selectively Upgrading Clients in a Multi-Tenant Computing System
US8380667B2 (en) 2010-08-11 2013-02-19 Sap Ag Selectively upgrading clients in a multi-tenant computing system
US8356010B2 (en) 2010-08-11 2013-01-15 Sap Ag Online data migration
US8434060B2 (en) 2010-08-17 2013-04-30 Sap Ag Component load procedure for setting up systems
US8868582B2 (en) 2010-08-23 2014-10-21 Sap Ag Repository infrastructure for on demand platforms
US8769704B2 (en) 2010-09-10 2014-07-01 Salesforce.Com, Inc. Method and system for managing and monitoring of a multi-tenant system
US8886596B2 (en) 2010-10-11 2014-11-11 Sap Se Method for reorganizing or moving a database table
US8751573B2 (en) 2010-11-23 2014-06-10 Sap Ag Cloud-processing management with a landscape directory
US8604973B2 (en) 2010-11-30 2013-12-10 Sap Ag Data access and management using GPS location data
US8555249B2 (en) 2010-12-13 2013-10-08 Sap Ag Lifecycle stable user interface adaptations
US8762929B2 (en) 2010-12-16 2014-06-24 Sap Ag System and method for exclusion of inconsistent objects from lifecycle management processes
US8375130B2 (en) 2010-12-16 2013-02-12 Sap Ag Shared resource discovery, configuration, and consumption for networked solutions
US8972934B2 (en) 2010-12-20 2015-03-03 Sap Ag Support for temporally asynchronous interface extensions
US20130290249A1 (en) 2010-12-23 2013-10-31 Dwight Merriman Large distributed database clustering systems and methods
US20120166620A1 (en) 2010-12-23 2012-06-28 Sap Ag System and method for integrated real time reporting and analytics across networked applications
US9436515B2 (en) 2010-12-29 2016-09-06 Sap Se Tenant virtualization controller for exporting tenant without shifting location of tenant data in a multi-tenancy environment
US20160371315A1 (en) 2010-12-29 2016-12-22 Sap Se In-Memory Database For Multi-Tenancy
US8863097B2 (en) 2010-12-29 2014-10-14 Sap Ag Providing code list extensibility
US20120173488A1 (en) 2010-12-29 2012-07-05 Lars Spielberg Tenant-separated data storage for lifecycle management in a multi-tenancy environment
US9009105B2 (en) 2010-12-30 2015-04-14 Sap Se Application exits for consistent tenant lifecycle management procedures
US8706772B2 (en) 2010-12-30 2014-04-22 Sap Ag Strict tenant isolation in multi-tenant enabled systems
US8875122B2 (en) 2010-12-30 2014-10-28 Sap Se Tenant move upgrade
US20120174085A1 (en) 2010-12-30 2012-07-05 Volker Driesen Tenant Move Upgrade
US20120173581A1 (en) 2010-12-30 2012-07-05 Martin Hartig Strict Tenant Isolation in Multi-Tenant Enabled Systems
US20120254221A1 (en) 2011-03-29 2012-10-04 Salesforce.Com, Inc. Systems and methods for performing record actions in a multi-tenant database and application system
US8467817B2 (en) 2011-06-16 2013-06-18 Sap Ag Generic business notifications for mobile devices
US20120331016A1 (en) 2011-06-23 2012-12-27 Salesforce.Com Inc. Methods and systems for caching data shared between organizations in a multi-tenant database system
US20120330954A1 (en) 2011-06-27 2012-12-27 Swaminathan Sivasubramanian System And Method For Implementing A Scalable Data Storage Service
US8612927B2 (en) 2011-07-05 2013-12-17 Sap Ag Bulk access to metadata in a service-oriented business framework
US9003356B2 (en) 2011-09-22 2015-04-07 Sap Se Business process change controller
US9137130B2 (en) 2011-09-22 2015-09-15 Sap Se Dynamic network load forecasting
US8566784B2 (en) 2011-09-22 2013-10-22 Sap Ag Business process change controller
US8892667B2 (en) 2011-09-29 2014-11-18 Sap Se Systems and methods for sending and receiving communications
US20130086322A1 (en) 2011-09-30 2013-04-04 Oracle International Corporation Systems and methods for multitenancy data
US8805986B2 (en) 2011-10-31 2014-08-12 Sap Ag Application scope adjustment based on resource consumption
US8645483B2 (en) 2011-11-15 2014-02-04 Sap Ag Groupware-integrated business document management
US9256840B2 (en) 2011-12-01 2016-02-09 Sap Se Establishing business networks using a shared platform
US9069984B2 (en) 2011-12-21 2015-06-30 Sap Se On-demand authorization management
US8930413B2 (en) 2012-01-03 2015-01-06 International Business Machines Corporation Dynamic structure for a multi-tenant database
US8762408B2 (en) 2012-03-07 2014-06-24 Sap Ag Optimizing software applications
WO2013132377A1 (en) 2012-03-08 2013-09-12 International Business Machines Corporation Managing tenant-specific data sets in a multi-tenant environment
US9251183B2 (en) 2012-03-08 2016-02-02 International Business Machines Corporation Managing tenant-specific data sets in a multi-tenant environment
US8812554B1 (en) 2012-03-22 2014-08-19 Projectx, International Ltd. Method and system for storing shared data records in relational database
US20130275509A1 (en) 2012-04-11 2013-10-17 Salesforce.Com Inc. System and method for synchronizing data objects in a cloud based social networking environment
US20130282761A1 (en) 2012-04-18 2013-10-24 Salesforce.Com, Inc. System and method for entity shape abstraction in an on demand environment
US20150142730A1 (en) 2012-05-18 2015-05-21 Georgetown University Methods and systems for populating and searching a drug informatics database
US8612406B1 (en) 2012-05-22 2013-12-17 Sap Ag Sharing business data across networked applications
US8904402B2 (en) 2012-05-30 2014-12-02 Red Hat, Inc. Controlling capacity in a multi-tenant platform-as-a-service environment in a cloud computing system
US8850432B2 (en) 2012-05-30 2014-09-30 Red Hat, Inc. Controlling utilization in a multi-tenant platform-as-a-service (PaaS) environment in a cloud computing system
US9275120B2 (en) 2012-05-30 2016-03-01 Sap Se Easy query
US20130325672A1 (en) 2012-05-31 2013-12-05 Sap Ag Mobile forecasting of sales using customer stock levels in a supplier business system
US20130332424A1 (en) 2012-06-12 2013-12-12 Sap Ag Centralized read access logging
US9183540B2 (en) 2012-07-03 2015-11-10 Sap Se Mobile device analytics engine
US8856727B2 (en) 2012-07-16 2014-10-07 Sap Se Generation framework for mapping of object models in a development environment
US9182994B2 (en) 2012-07-18 2015-11-10 Sap Se Layering of business object models via extension techniques
US20140040294A1 (en) 2012-07-31 2014-02-06 International Business Machines Corporation Manipulation of multi-tenancy database
US20140047319A1 (en) 2012-08-13 2014-02-13 Sap Ag Context injection and extraction in xml documents based on common sparse templates
US8949789B2 (en) 2012-08-13 2015-02-03 Sap Se Adaptable business objects
US9038021B2 (en) 2012-08-15 2015-05-19 Sap Ag Naming algorithm for extension fields in de-normalized views
US8978035B2 (en) 2012-09-06 2015-03-10 Red Hat, Inc. Scaling of application resources in a multi-tenant platform-as-a-service environment in a cloud computing system
US8762731B2 (en) 2012-09-14 2014-06-24 Sap Ag Multi-system security integration
US8938645B2 (en) 2012-09-19 2015-01-20 Sap Se Invalidation of metadata buffers
US20140101099A1 (en) 2012-10-04 2014-04-10 Sap Ag Replicated database structural change management
US20140108440A1 (en) 2012-10-12 2014-04-17 Sap Ag Configuration of Life Cycle Management for Configuration Files for an Application
US9015212B2 (en) 2012-10-16 2015-04-21 Rackspace Us, Inc. System and method for exposing cloud stored data to a content delivery network
US9026857B2 (en) 2012-10-19 2015-05-05 Sap Se Method and system for postponed error code checks
US8793230B2 (en) 2012-10-23 2014-07-29 Sap Ag Single-database multiple-tenant software system upgrade
US8751437B2 (en) 2012-11-01 2014-06-10 Sap Ag Single persistence implementation of business objects
US9077717B2 (en) 2012-11-30 2015-07-07 Sap Se Propagation and adoption of extensions across applications in networked solutions
US20140164963A1 (en) 2012-12-11 2014-06-12 Sap Ag User configurable subdivision of user interface elements and full-screen access to subdivided elements
US9417917B1 (en) 2012-12-14 2016-08-16 Amazon Technologies, Inc. Equitable resource allocation for storage object deletion
US9069832B2 (en) 2012-12-21 2015-06-30 Sap Ag Approach for modularized sychronization and memory management
US9590872B1 (en) 2013-03-14 2017-03-07 Ca, Inc. Automated cloud IT services delivery solution model
US20140324917A1 (en) 2013-04-29 2014-10-30 Sap Ag Reclamation of empty pages in database tables
US9182979B2 (en) 2013-04-29 2015-11-10 Sap Se Social coding extensions
US20140325069A1 (en) 2013-04-29 2014-10-30 Sap Ag Cloud sharing system
US20140359594A1 (en) 2013-06-04 2014-12-04 Sap Ag Repository layer strategy adaptation for software solution hosting
US9031910B2 (en) 2013-06-24 2015-05-12 Sap Se System and method for maintaining a cluster setup
US9189520B2 (en) 2013-06-24 2015-11-17 Sap Se Methods and systems for one dimensional heterogeneous histograms
US20140379677A1 (en) 2013-06-24 2014-12-25 Sap Ag Test sandbox in production systems during productive use
US9026502B2 (en) 2013-06-25 2015-05-05 Sap Se Feedback optimized checks for database migration
US9189226B2 (en) 2013-06-25 2015-11-17 Sap Se Software logistics protocols
US20150006608A1 (en) 2013-06-26 2015-01-01 Sap Ag Networked solutions integration using a cloud business object broker
US20150026131A1 (en) 2013-07-19 2015-01-22 Sap Ag Data availability during columnar table merges
US20150046413A1 (en) 2013-08-06 2015-02-12 Sap Ag Delta store giving row-level versioning semantics to a non-row-level versioning underlying store
US9442977B2 (en) 2013-09-06 2016-09-13 Sap Se Database language extended to accommodate entity-relationship models
US9430523B2 (en) 2013-09-06 2016-08-30 Sap Se Entity-relationship model extensions using annotations
US9724757B2 (en) 2013-09-06 2017-08-08 North American Refractories Company Refractory component for lining a metallurgical vessel
US9639572B2 (en) 2013-09-06 2017-05-02 Sap Se SQL enhancements simplifying database querying
US9619552B2 (en) 2013-09-06 2017-04-11 Sap Se Core data services extensibility for entity-relationship models
US9575819B2 (en) 2013-09-06 2017-02-21 Sap Se Local buffers for event handlers
US9354948B2 (en) 2013-09-06 2016-05-31 Sap Se Data models containing host language embedded constraints
US9361407B2 (en) 2013-09-06 2016-06-07 Sap Se SQL extended with transient fields for calculation expressions in enhanced data models
US9176801B2 (en) 2013-09-06 2015-11-03 Sap Se Advanced data models containing declarative and programmatic constraints
US9734230B2 (en) 2013-09-12 2017-08-15 Sap Se Cross system analytics for in memory data warehouse
US20150095283A1 (en) 2013-09-27 2015-04-02 Microsoft Corporation Master schema shared across multiple tenants with dynamic update
US20150100546A1 (en) 2013-10-07 2015-04-09 Sap Ag Selective Synchronization in a Hierarchical Folder Structure
US9223985B2 (en) 2013-10-09 2015-12-29 Sap Se Risk assessment of changing computer system within a landscape
US20150121545A1 (en) 2013-10-24 2015-04-30 Salesforce.Com, Inc. Security descriptors for record access queries
US9639567B2 (en) 2013-11-26 2017-05-02 Sap Se For all entries processing
US9378233B2 (en) 2013-11-26 2016-06-28 Sap Se For all entries processing
US9507810B2 (en) 2013-12-10 2016-11-29 Sap Se Updating database schemas in a zero-downtime environment
US20150178332A1 (en) 2013-12-19 2015-06-25 Sap Ag Transformation of document flow to contributors network
US9274757B2 (en) 2013-12-19 2016-03-01 Sap Se Customer tailored release master plan generation for hybrid networked solutions
US20150242520A1 (en) 2014-02-26 2015-08-27 International Business Machines Corporation Cross tenant data access
US9471353B1 (en) 2014-03-21 2016-10-18 Amazon Technologies, Inc. Isolating tenants executing in multi-tenant software containers
US20170025441A1 (en) 2014-04-08 2017-01-26 Sharp Kabushiki Kaisha Display device
US20150347410A1 (en) 2014-06-03 2015-12-03 Sap Ag Cached Views
US20150363167A1 (en) 2014-06-16 2015-12-17 International Business Machines Corporation Flash optimized columnar data layout and data access algorithms for big data query engines
WO2016049576A1 (en) 2014-09-25 2016-03-31 Oracle International Corporation System and method for use of a global runtime in a multitenant application server environment
US9641529B2 (en) 2014-11-10 2017-05-02 Coastal Federal Credit Union Methods, systems and computer program products for an application execution container for managing secondary application protocols
US20160147529A1 (en) 2014-11-20 2016-05-26 Red Hat, Inc. Source Code Management for a Multi-Tenant Platform-as-a-Service (PaaS) System
US9513811B2 (en) 2014-11-25 2016-12-06 Sap Se Materializing data from an in-memory array to an on-disk page structure
US20160224594A1 (en) * 2015-02-03 2016-08-04 Simba Technologies Inc. Schema Definition Tool
US20160246864A1 (en) 2015-02-23 2016-08-25 International Business Machines Corporation Relaxing transaction serializability with statement-based data replication
US20160358109A1 (en) 2015-06-08 2016-12-08 Sap Se Test System Using Production Data Without Disturbing Production System
US9619261B2 (en) 2015-06-29 2017-04-11 Vmware, Inc. Method and system for anticipating demand for a computational resource by containers running above guest operating systems within a distributed, virtualized computer system
US10346434B1 (en) 2015-08-21 2019-07-09 Amazon Technologies, Inc. Partitioned data materialization in journal-based storage systems
US20170262638A1 (en) 2015-09-25 2017-09-14 Eliot Horowitz Distributed database systems and methods with encrypted storage engines
US10248336B1 (en) 2016-09-30 2019-04-02 Tintri By Ddn, Inc. Efficient deletion of shared snapshots
US20180096165A1 (en) 2016-09-30 2018-04-05 Salesforce.Com, Inc. Provisioning for multi-tenant non-relational platform objects
US20180150541A1 (en) 2016-11-28 2018-05-31 Sap Se Proxy Views for Extended Monitoring of Database Systems
US20180189370A1 (en) * 2017-01-05 2018-07-05 International Business Machines Corporation Accelerator based data integration
US20190042660A1 (en) 2017-08-01 2019-02-07 salesforce.com,inc. Mechanism for providing multiple views in a multi-tenant data structure
US20190129985A1 (en) 2017-10-26 2019-05-02 Sap Se Deploying changes to key patterns in multi-tenancy database systems
US20190129986A1 (en) 2017-10-26 2019-05-02 Sap Se Transitioning between system sharing types in multi-tenancy database systems
US20190129991A1 (en) 2017-10-26 2019-05-02 Sap Se Exchanging shared containers and adapting tenants in multi-tenancy database systems
US20190129990A1 (en) 2017-10-26 2019-05-02 Sap Se Deploying changes in a multi-tenancy database system
US20190129997A1 (en) 2017-10-26 2019-05-02 Sap Se Data separation and write redirection in multi-tenancy database systems
US20190130121A1 (en) 2017-10-26 2019-05-02 Sap Se System sharing types in multi-tenancy database systems
US20190130010A1 (en) 2017-10-26 2019-05-02 Sap Se Patching content across shared and tenant containers in multi-tenancy database systems
US10482080B2 (en) 2017-10-26 2019-11-19 Sap Se Exchanging shared containers and adapting tenants in multi-tenancy database systems
US20190370377A1 (en) 2018-06-04 2019-12-05 Sap Se Change management for shared objects in multi-tenancy systems

Non-Patent Citations (23)

* Cited by examiner, † Cited by third party
Title
Adaptive Server et al., "Reference Manual: Commands," Jul. 31, 2012, XP055456066, retrieved from the Internet: URL: http://infocenter.sybase.com/help/topic/com.sybase.inforcenter.dc36272.1572/pdf/commands.pdf [retrieved on Mar. 2, 2018], 864 pages.
Communication and European Search Report regarding EPO application No. 17001872.5-1222, dated Jan. 8, 2018, 16 pages.
Communication and European Search Report regarding EPO application No. 17001902.0-1222, dated Jan. 8, 2018, 15 pages.
Communication and extended European Search Report regarding EPO application No. 17001916.0-1217, dated Mar. 22, 2018, 10 pages.
Communication and extended European Search Report regarding EPO application No. 17001917.8-1217, dated Mar. 15, 2018, 9 pages.
Communication and extended European Search Report regarding EPO application No. 17001922.8-1217, dated Mar. 6, 2018, 12 pages.
Communication and extended European Search Report regarding EPO application No. 17001948.3-1222, dated Feb. 9, 2018, 8 pages.
Communication and extended European Search Report regarding EPO application No. 17001969.9-1217, dated Mar. 1, 2018, 11 pages.
EP Extended European Search Report in European Appln. No. 17001049.0-1221, dated Jan. 11, 2018, 16 pages.
EP Extended European Search Report in European Appln. No. 18184931, dated Feb. 14, 2019, 13 pages.
Non-Final Office Action issued in U.S. Appl. No. 15/794,261, dated Nov. 14, 2019, 48 pages.
Non-Final Office Action issued in U.S. Appl. No. 15/794,335, dated May 24, 2019, 33 pages.
Non-Final Office Action issued in U.S. Appl. No. 15/794,362, dated May 24, 2019, 33 pages.
Non-Final Office Action issued in U.S. Appl. No. 15/794,381, dated Nov. 6, 2019, 46 pages.
Non-Final Office Action issued in U.S. Appl. No. 15/794,424, dated Dec. 17, 2019, 52 pages.
Non-Final Office Action issued in U.S. Appl. No. 15/794,501, dated Dec. 19, 2019, 49 pages.
Non-Final Office Action issued in U.S. Appl. No. 15/996,804, dated Mar. 6, 2020, 40 pages.
Stefan Aulbach, "Schema Flexibility and Data Sharing in Multi-Tenant Databases," Dec. 5, 2011, 146 pages, retrieved from the Internet: URL: https://mediatum.ub.tum.de/doc/1075044/document.pdf [retrieved on Dec. 21, 2017].
U.S. Appl. No. 14/960,983, filed Dec. 7, 2015, Eberlein, et al.
U.S. Appl. No. 15/083,918, filed Mar. 29, 2016, Eberlein, et al.
U.S. Appl. No. 15/285,715, filed Oct. 5, 2016, Specht et al.
U.S. Appl. No. 15/593,830, filed May 12, 2017, Eberlein, et al.
Zhi Hu Wang et al., "A Study and Performance Evaluation of the Multi-Tenant Data Tier Design Patterns for Service Oriented Computing," IEEE International Conference on e-Business Engineering (ICEBE '08), Oct. 22, 2008, pp. 94-101, XP055453782.

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11561956B2 (en) 2017-10-26 2023-01-24 Sap Se Key pattern management in multi-tenancy database systems
US10915551B2 (en) 2018-06-04 2021-02-09 Sap Se Change management for shared objects in multi-tenancy systems
US11366658B1 (en) 2021-01-19 2022-06-21 Sap Se Seamless lifecycle stability for extensible software features
US11860841B2 (en) 2022-02-07 2024-01-02 Sap Se Online import using system-versioned tables

Also Published As

Publication number Publication date
US20200257673A1 (en) 2020-08-13
US20190129988A1 (en) 2019-05-02
US11561956B2 (en) 2023-01-24
EP3477503A1 (en) 2019-05-01
CN110019215B (en) 2023-10-20
CN110019215A (en) 2019-07-16

Similar Documents

Publication Title
US11561956B2 (en) Key pattern management in multi-tenancy database systems
EP3477488B1 (en) Deploying changes to key patterns in multi-tenancy database systems
US10657276B2 (en) System sharing types in multi-tenancy database systems
US10621167B2 (en) Data separation and write redirection in multi-tenancy database systems
US10740315B2 (en) Transitioning between system sharing types in multi-tenancy database systems
US10482080B2 (en) Exchanging shared containers and adapting tenants in multi-tenancy database systems
US10713277B2 (en) Patching content across shared and tenant containers in multi-tenancy database systems
US10452646B2 (en) Deploying changes in a multi-tenancy database system
US10740093B2 (en) Advanced packaging techniques for improving work flows
US9836297B2 (en) Computer implemented method and system for automatically deploying and versioning scripts in a computing environment
US10338910B2 (en) Multi-tenant upgrading
US10620854B1 (en) Validating data for deployment
Piiroinen Containerization and Cloud Migration of Legacy Web Services
Paz Microsoft Azure Cosmos DB Revealed

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: SAP SE, GERMANY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:AUER, ULRICH;BIRN, IMMO-GERT;HAUCK, RALF-JUERGEN;AND OTHERS;SIGNING DATES FROM 20171026 TO 20171109;REEL/FRAME:044113/0112

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4