EP1831798A1 - Augmented database resource management - Google Patents

Augmented database resource management

Info

Publication number
EP1831798A1
Authority
EP
European Patent Office
Prior art keywords
requests
operation requests
sequence
resource
resources
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP04803407A
Other languages
German (de)
French (fr)
Inventor
Eric Kass
Dorothea Rink
Axel Stein
Bernhard Wörner
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SAP SE
Original Assignee
SAP SE
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SAP SE
Publication of EP1831798A1
Legal status: Withdrawn

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/23Updating
    • G06F16/2308Concurrency control
    • G06F16/2336Pessimistic concurrency control approaches, e.g. locking or multiple versions without time stamps
    • G06F16/2343Locking methods, e.g. distributed locking or locking implementation details

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Methods and apparatus, including computer program products, for managing multiple operations contending for the same resources. The techniques feature receiving a sequence of actual operation requests directed towards a set of resources from one or more computer program applications. The techniques further feature modeling each of the operation requests as an abstract resource request and generating a sequence of abstract resource requests corresponding to the sequence of operation requests. The techniques also feature managing the timing and order of presentation of the actual operation requests to the set of resources using the sequence of abstract resource requests.

Description

AUGMENTED DATABASE RESOURCE MANAGEMENT
BACKGROUND
The present invention relates to data processing by digital computer, and more particularly to the management of database resources. Database management requires the supervision and organization of parallel operations that contend for the same system resources, so that operations that require the use of the same system resources at the same time do not conflict with one another. Database management is typically the responsibility of a database management system (DBMS); each particular DBMS defines the manner in which it manages resources. Typically, database management involves suspending succeeding operations that are incompatible with preceding operations by maintaining locks on resources in use. An operation in process will attempt to acquire locks on the specific resources (e.g., tables, table rows, sets thereof, etc.) as the resources are required for the operation. If a needed resource is already locked by a concurrent operation, the current operation is deemed incompatible with the concurrent operation. The incompatible operation will be blocked until one of several alternatives occurs: e.g., the needed resource is available; the operation times out; or the operation is aborted by the DBMS when the DBMS determines that a deadlock situation has occurred.
In an attempt to allow all operations to be executed eventually, retry logic can be added to the application. The retry logic will attempt to repeat an operation that has previously failed due to a timeout situation, or due to a DBMS deadlock abort. However, retry logic is nondeterministic, in that an operation may take an indefinite amount of time to execute; this can prevent other operations that would otherwise be successful from proceeding. Also, retry logic and retried operations consume computational resources, which are wasted for each failed attempt.
Alternatively, intelligent application logic can be added to each application that is contending for the same resources. The application logic provides automated control over the order of operations. However, depending on the complexity of the application logic, the system may lack a single point of control, particularly when multiple different applications are contending for the same resources.
Further, database operations that require exclusive locks on database resources may have difficulty obtaining an exclusive lock when the database is heavily utilized by a continual flow of shared lock operations that are performed by the DBMS as the operation requests are made. The exclusive operations are thus neglected, waiting for "windows of opportunity": situations when no shared operation is active against the database.
SUMMARY OF THE INVENTION
The present invention provides methods and apparatus, including computer program products, that provide distributed resource management coordinated by a central management entity augmenting an existing resource manager.
In one general aspect, the techniques feature receiving a sequence of actual operation requests directed towards monitored resources from one or more computer program applications. The techniques also include modeling the operation requests as abstract resource requests and generating a sequence of abstract resource requests corresponding to the sequence of operation requests. The techniques further include determining a timing and order of presentation of the actual operation requests to the monitored resources using the sequence of abstract resource requests, and presenting the actual operation requests to the monitored resources according to the determined timing and order of presentation. The invention can be implemented to include one or more of the following advantageous features. The set of resources may be managed by a database management system, and the instructions to present the actual operation requests comprise instructions to present the actual operation requests to the database management system. The sequence of abstract resource requests may be established through the use of pre-defined rules, such that the pre-defined rules establish a specific priority order for each abstract resource request. The actual operation requests may include exclusive lock, shared lock, and free lock operations.
In another general aspect, the techniques feature receiving incoming requests from one or more processes. The techniques also feature identifying one or more operation requests related to a monitored resource. The techniques further feature categorizing the operation requests related to the monitored resource. The techniques finally feature ordering the operation requests for the monitored resource into a sequence based on the category of the operation requests.
The invention can be implemented to include the following advantageous features. The categories may include exclusive lock, shared lock, and free lock. The invention can be implemented to realize one or more of the following advantages.
The invention can reduce the amount of computational work and computing resources required for resource management purposes. The invention is application independent, and does not require the addition of application logic to individual applications. The invention can implement a rule-based resource management system. The invention can increase the throughput of operations to the managed resource. The invention can reduce contention to the managed resource. The invention can improve the usage of the managed resource. One implementation of the invention provides all of the above advantages.
Details of one or more implementations of the invention are set forth in the accompanying drawings and in the description below. Further features, aspects, and advantages of the invention will become apparent from the description, the drawings, and the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram of a database system with a DBMS. FIG. 2 is a block diagram of a database system with an Augmenting Database Resource Manager.
FIGS. 3 and 4 are state diagrams for a typical Augmenting Database Resource Manager.
FIG. 5 is a diagram of sample database operations scheduled by an Augmenting Database Resource Manager. Like reference numbers and designations in the various drawings indicate like elements.
DETAILED DESCRIPTION
As shown in FIG. 1, a database 150 typically serves multiple applications 110, 120, 130, which send various requests, at various times, to perform operations to create, access, or revise information. In order to manage these requests, and make sure that conflicting requests are handled in an orderly manner, a database management system (DBMS) 140 oversees the application requests. When the DBMS receives a request from an application, it determines whether the database resources required are available. If the resources are available, the DBMS locks some or all of each database resource required, depending on the request, so that other requests cannot interfere with the first request. When the first application has completed its request, the DBMS is responsible for unlocking the portions of the database that had been locked for the request. If another operation request is pending, the DBMS then repeats this procedure, until all pending requests have been completed, or have otherwise been terminated (e.g., through a timeout procedure).
Abstract modeling of resources, operations, and requesters
Establishing a model of Abstract Resource Requests is a requisite for building an augmenting resource manager of maintainable complexity, namely finite, and with the intention of being much less complex than the underlying DBMS. Three items are distinguished in the modeling of Abstract Resource Requests: (1) a finite set R of Abstract Resources, (2) a finite set O of Abstract Operations, and (3) a finite set T of Abstract Resource Requesters. In the model, Abstract Requesters (T) issue Abstract Operations (O) against Abstract Resources (R).
An item within the set T of Abstract Resource Requesters can, for example, model a certain database transaction. Alternatively, an item within the set T can also model a certain process, if it is known that transactions do not span multiple processes.
An Abstract Operation must assume at least the level of ownership of an Abstract Resource, which the associated real operation would assume of the associated real resource.
A deterministic relationship must exist between real and Abstract Operations. For example, real operations can map to Abstract Operations based on database table lock severity of the real operation. In such a manner, the set O could consist of the elements lock exclusive, lock shared, do not lock, and release lock.
A proper modeling of database resources R must also be defined. First, the physical resources managed by the DBMS for which the desired requested operations contend are identified. Then, a set of Abstract Resources is chosen that satisfies the following conditions: an Abstract Resource may identify one or more real resources, but two Abstract Resources may not identify the same real resource.
Two extremes of possible modeling are to have a single Abstract Resource represent all of the resources on the database, and to have an Abstract Resource represent each individual resource on the database. Given the former extreme, an ADRM could do no more than allow only one exclusive transaction to be active at any point in time. Given the latter extreme, the level of locking is as fine grained as the underlying DBMS permits, potentially allowing as many exclusive transactions to be active as the number of underlying resources. The resources which a DBMS oversees need not all be managed by an ADRM. It is sufficient to map to R only those resources for which resource management shall be augmented. In such an instance, the set of Abstract Resources does not span the entire database. For example, a single table name can be used as an Abstract Resource having a one-to-one relationship with the physical database table, although the DBMS might have a much finer grained resource resolution based on rows. The Abstract Resource representing the table is that which is reserved in the scope of the ADRM.
In an alternative example, a database view is considered one entity by the application. A query against a view may involve many underlying database resources. An Abstract Resource representing a single view implicitly represents the set of underlying database resources used in the view definition. Therefore, another Abstract Resource may neither represent a database resource involved in the view, nor represent another view involving any of the underlying resources of the first. In this instance, the solution to this issue is either to define Abstract Resources representing the underlying resources of a view, or potentially to decode from symbolic view and table names an orthogonal set of Abstract Resources.
The tuple comprised of an Abstract Resource, an Abstract Operation, and an Abstract Resource Requester defines an Abstract Resource Request.
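As a concrete illustration, the abstract model described in this section can be captured with a few plain types. The sketch below is a minimal reading of the text; the type and field names (AbstractOperation, AbstractResourceRequest, and so on) are assumptions made for this example, not names used in the patent.

```java
/** Minimal sketch of the abstract model: requesters (T) issue operations (O)
 *  against resources (R); the triple forms an Abstract Resource Request.
 *  Names are illustrative assumptions. */
public final class AbstractModel {

    /** O: the finite set of Abstract Operations, modeled by lock severity. */
    public enum AbstractOperation { LOCK_EXCLUSIVE, LOCK_SHARED, DO_NOT_LOCK, RELEASE_LOCK }

    /** R: an Abstract Resource; it may stand for one or more real resources
     *  (e.g., a table name standing for the table and all of its rows). */
    public record AbstractResource(String name) { }

    /** T: an Abstract Resource Requester, e.g., a transaction or a process. */
    public record Requester(String id) { }

    /** The tuple (R, O, T) defining an Abstract Resource Request. */
    public record AbstractResourceRequest(AbstractResource resource,
                                          AbstractOperation operation,
                                          Requester requester) { }

    private AbstractModel() { }
}
```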
Driver level statement monitoring, resource decoding, and resource request forwarding
Requests to the ADRM can either be made by an application directly, or by any software layer, such as a database driver, between the application and the database. The only restriction is that all relevant resource requests must be passed through to the ADRM before being forwarded to the database. The interface to the database or DBMS from an application is a database driver. The database driver obtains requests for requested database operations from an application and forwards the requests to the database or the DBMS. The database driver can be altered to communicate with the ADRM implementing rule based scheduling in such a way that neither the application nor the database engine is aware of the presence of an alternative operation request scheduler.
The database driver is enhanced to monitor and analyze database operations, as shown in FIG. 2. As an application 110, 120, 130 sends requests to the database manager through the driver associated with the application 115, 125, 135, the driver 115, 125, 135 is responsible for decoding the requests into Abstract Resource Requests that are understood by the ADRM 260 as described above. For each database request to a monitored resource, the driver calls the ADRM 260 and passes to the ADRM 260 the Abstract Resource Request. In one implementation, the driver does not regain control until the ADRM 260 delivers a response. Possible responses can include permission to send the actual request to the database 150 through the DBMS 140, or a mandate to cancel the request.
The driver 115, 125, 135 must be instructed how to decode application requests into Abstract Resource Requests. In one implementation, specific decoding rules can be passed to the driver 115, 125, 135 as configuration information in some form of script.
The Abstract Resources are reserved within the scope of the ADRM 260 until released explicitly by the driver 115, 125, 135. The driver 115, 125, 135 also monitors for operations which release resources in the database 150. The driver 115, 125, 135 is responsible for determining the associated set of Abstract Resources corresponding to the real underlying database resources.
For example, database requests directed towards a single table R := {N} can be monitored and categorized into exclusive lock (x), shared lock (s), and free lock (f) abstract requests O := {s, x, f}. In this case, the driver can be configured with the name of the table associated with N and the names of dependent views. In such an instance, the driver would first search incoming requests, e.g., structured query language (SQL) statements, for occurrences of the name of the table or the names of dependent views in order to decide whether the ADRM is to be involved. For each request where the ADRM is to be involved, the request is analyzed to categorize the type of operation, e.g., data definition language (DDL) statements map to 'x', commit/rollback operations map to 'f', and all other operations map to 's'. Finally, for each request, the table name (R), the type of request (O), and the process identifier of the requesting process (T) are sent to the ADRM (assuming transactions are bound to a single process).
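A driver-side decoding step of this kind could look like the following sketch for a single monitored table. The classification rules (DDL maps to 'x', commit/rollback to 'f', everything else to 's') follow the paragraph above; the class name, the regular expressions, and the simple substring matching are assumptions of this example, not part of the patent.

```java
import java.util.Locale;
import java.util.Optional;
import java.util.Set;
import java.util.regex.Pattern;

/** Sketch of driver-level decoding of SQL statements into abstract requests
 *  (R = table name, O = 's'/'x'/'f', T = process id) for one monitored table. */
public class RequestDecoder {

    /** The decoded Abstract Resource Request (R, O, T). */
    public record Decoded(String resource, String operation, long processId) { }

    private final String monitoredTable;        // the table associated with N
    private final Set<String> dependentViews;   // views defined over that table

    private static final Pattern DDL =
            Pattern.compile("^\\s*(ALTER|CREATE|DROP)\\b", Pattern.CASE_INSENSITIVE);
    private static final Pattern END_OF_TRANSACTION =
            Pattern.compile("^\\s*(COMMIT|ROLLBACK)\\b", Pattern.CASE_INSENSITIVE);

    public RequestDecoder(String monitoredTable, Set<String> dependentViews) {
        this.monitoredTable = monitoredTable.toUpperCase(Locale.ROOT);
        this.dependentViews = dependentViews;
    }

    /** Returns (R, O, T), or empty if the ADRM is not to be involved. */
    public Optional<Decoded> decode(String sql, long processId) {
        String upper = sql.toUpperCase(Locale.ROOT);
        boolean endOfTransaction = END_OF_TRANSACTION.matcher(sql).find();
        boolean touchesResource = upper.contains(monitoredTable)
                || dependentViews.stream()
                                 .anyMatch(v -> upper.contains(v.toUpperCase(Locale.ROOT)));
        if (!touchesResource && !endOfTransaction) {
            return Optional.empty();            // unmonitored statement: bypass the ADRM
        }
        String operation;
        if (DDL.matcher(sql).find()) {
            operation = "x";                    // DDL needs an exclusive lock
        } else if (endOfTransaction) {
            operation = "f";                    // commit/rollback frees the reservation
        } else {
            operation = "s";                    // all other operations are shared
        }
        return Optional.of(new Decoded(monitoredTable, operation, processId));
    }
}
```

In this sketch the driver would call decode for every statement and, for a non-empty result, pass the tuple to the ADRM and wait for its response before forwarding the real statement to the DBMS.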
Augmenting Database Resource Manager
The Augmenting Database Resource Manager (ADRM) may formally be modeled as a state machine whose transitions are based on the inputs R, O, and T as described above, and S, where S is the finite set of states the ADRM may take: δ: (R, S, O, T) → S. In the example presented above, a set of states may be given as shown in FIG. 3. Note that this first model is too simple to deal with deadlock scenarios. If there is a risk of a deadlock scenario, two additional states must be introduced, as shown in FIG. 4.
Rule based scheduling of real database resource requests based on a given state
Depending on its state, the ADRM interacts with its environment by determining the order in which real requests are forwarded to the database, and returning control information to its callers. Whether or not an operation is allowed to proceed is determined by the state of the ADRM, which is determined by the set of resources in use and the users of these resources. In one implementation the environment is a database driver. Database drivers send Abstract Resource Requests to the ADRM. In one implementation, drivers wait for a response before forwarding the real request to the database manager for execution or canceling the request before the request is ever passed to the database manager. In this case, a request for an Abstract Resource is in essence a "semaphore acquire operation."
One requirement of a functioning ADRM is that the period the ADRM considers a resource reserved must span that of the reservation on the underlying database. For this reason, an Abstract Resource should only be released when a natural database operation would have been issued that released the database manager's intrinsic resource locks. This is typically the end of a transaction, determined by commit or rollback. An ADRM can, for example, be implemented by means of a database request queue. The ADRM would maintain the active users T of database resources R, a list of waiters T waiting for the resources R, and how the resources R are or will be used (O). When a new Abstract Resource Request arrives, the ADRM examines the list of active users and queue of waiters and can either release the caller (and additionally identify the caller as an active user of the resource) or suspend the caller until all conditions are satisfied. When a resource is released, the ADRM examines the queue of waiters and releases those satisfying defined conditions.
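The request-queue idea sketched in this paragraph might be implemented per Abstract Resource roughly as follows. The rule encoded here (pending exclusive requests take precedence over newly arriving shared requests) anticipates the scheduling algorithm described below; the class and method names are assumptions of this example, and lock escalation within a transaction (the FIG. 4 case) is deliberately left out.

```java
import java.util.HashMap;
import java.util.Map;

/** Minimal per-resource request queue in the spirit of the ADRM described
 *  above.  One instance guards one Abstract Resource; acquire blocks the
 *  calling driver thread until the real request may be forwarded. */
public class AbstractResourceQueue {

    private enum Mode { SHARED, EXCLUSIVE }

    // Active users of the resource and how each one uses it (T -> O).
    private final Map<String, Mode> activeUsers = new HashMap<>();
    // Number of exclusive requests currently blocked at this queue.
    private int waitingExclusive = 0;

    /** Shared ("s") request: waits while an exclusive operation is active
     *  or any exclusive request is queued, so exclusives are not starved. */
    public synchronized void acquireShared(String requester) throws InterruptedException {
        while (activeUsers.containsValue(Mode.EXCLUSIVE) || waitingExclusive > 0) {
            wait();
        }
        activeUsers.put(requester, Mode.SHARED);
    }

    /** Exclusive ("x") request: waits until all active operations have drained. */
    public synchronized void acquireExclusive(String requester) throws InterruptedException {
        waitingExclusive++;
        try {
            while (!activeUsers.isEmpty()) {
                wait();
            }
        } finally {
            waitingExclusive--;
        }
        activeUsers.put(requester, Mode.EXCLUSIVE);
    }

    /** Free ("f") request: issued when the underlying transaction commits or
     *  rolls back; wakes the waiters so the queue is re-examined. */
    public synchronized void release(String requester) {
        activeUsers.remove(requester);
        notifyAll();
    }
}
```

A real ADRM would keep one such queue per Abstract Resource and would also track, per requester, which resources are held, so that a single free operation can be routed to every queue that requester has reserved.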
Algorithm for scheduling high priority exclusive lock operations amongst low priority shared lock operations
Database operations that require exclusive locks on database resources may have difficulty obtaining an exclusive lock when the database is heavily utilized by a continual flow of shared lock operations that are performed by the DBMS as they are issued. The exclusive operations are thus neglected, waiting for "windows of opportunity": situations when no shared operation is active against the database.
It is also possible to create a rule so that defined exclusive lock operations have precedence over shared lock operations. These rules can be seen in the sample state diagrams FIG. 3 and FIG. 4.
The algorithm shown in FIG. 3 and FIG. 4 applies to a single abstract database resource. For the sake of simplicity, the algorithms are pictured as "heuristic" state diagrams where several real states and state transitions have been collected into meta states/transitions. It can be shown that in this case each real state is always associated with exactly the same meta state. The mapping of real state transitions to meta state transitions is not invariant over time. For example, at a particular point in time, a free shared lock request may be one of many, or the last one, referenced as classes "Fw" and "Fwn" respectively.
In the state diagrams 300, 400 shown in FIGS. 3 and 4, each oval represents an internal ADRM state. The transitions between states are a result of a resource request or release. A hexagonal stop sign on a transition indicates that the requester is blocked: the real request is not forwarded to the database, and the ADRM enters the new state. The column "requester state" in the given transition tables indicates the reaction to the caller: the caller might get the permission to schedule the request to the database ("Active"), the caller might have to wait at the ADRM ("Blocked"), or the caller might have to cancel the request ("Cancelled"), e.g., due to a sequence error or detected deadlock.
Class "W" represents the situation where a shared type of request is received, e.g., a read or write request, and class "X" represents the situation where an exclusive request is received, e.g., an table structure alter request. Classes "Fx" and "Fw" represent one respective free operation (associated with an active request from the same requester), except for the last one of its type. As above, "active" means that the requester was permitted to issue the shared/exclusive request to the database; this holds until the ADRM receives a free request from the same requester; the request is then referred to as "completed." Further, classes "Fxn" and "Fwn" denote the last such free operation of its type.
The following states are shown in FIG. 3. S0 is the dormant state. There are no requests that require processing, and the ADRM is waiting to receive a request. State S1 represents the situation where one or more shared operations are active. S2 represents the situation where one or more shared operations are active, and one or more exclusive operations are waiting for active shared operations to complete. In state S5, one or more shared operations are active, one or more exclusive operations are waiting for active shared operations to complete, and one or more shared operations are waiting for all of the exclusive operations, all currently blocked, to complete. State S8 represents the situation where an exclusive operation is active, and one or more exclusive operations are waiting for the current exclusive operation to complete. In state S9, an exclusive operation is active, one or more exclusive operations are waiting for the current exclusive operation to complete, and one or more shared operations are waiting for all of the exclusive operations, all but one currently blocked, to complete. For simplification purposes, neither included in the picture nor in the transition table are extraordinary cases, where a free request is issued by a blocked requester, or where a free request is issued by a requester which is neither active nor blocked (it neither owns the resource nor is waiting for it). In the case of the former situation, both the free request and the associated blocked resource request are cancelled, while the ADRM retains its current state. In the case of the latter situation, the free request is cancelled, while the ADRM retains its current state.
The possible transitions of these states are as follows:
The escalated state diagram shown in FIG. 4 adds states S4 and S6 to an ADRM which supports exclusive requests by requesters already having shared operations currently being processed within the same transaction (requester). Typically, this situation is presented when the underlying database supports this functionality. The following are the additional states that are shown in FIG. 4. In state S4, one or more shared operations are active, one or more exclusive operations are waiting for active shared operations to complete, and one requester of an exclusive operation also has shared operations in progress. In state S6, one or more shared operations are active, one or more exclusive operations are waiting for active shared operations to complete, one requester of an exclusive operation also has shared operations in progress, and one or more shared operations are waiting for exclusive operations to complete. In order to reflect this, in FIG. 4, transition class "X" is sometimes split up into class "X1" and class "X0", where class "X1" represents the path taken if an exclusive request is received while there are shared operations active which have previously been requested by the same requester, and class "X0" represents the path taken if an exclusive request is received while there are no shared operations active that have been requested by the same requester.
The possible state transitions as shown in FIG. 4 are as follows:
The following walkthrough shows the above method applied to one resource; processes issuing multiple requests against multiple resources within a transaction perform the following evaluations concurrently for each resource being reserved or released. In state S0, the ADRM does not have any requests that require processing. When the ADRM is presented with defined shared lock operations, the ADRM will shift to state S1. However, if a defined exclusive lock operation is pending, the state machine transitions to either of states S5, S6, or S9, depending on the situation. If an exclusive lock is pending (all states except S0 and S1), processes with operations currently using the resource are allowed to complete (the transactions are either committed or rolled back) but any other process with a new request is suspended. The ADRM can then proceed to any state, depending on the situation, except to states S0 and S1. When all active shared lock operations are completed, the ADRM issues any exclusive lock operations in queued sequence to the database, and continues to either of states S8 or S9. In an alternative implementation, the ADRM issues any exclusive lock operations in an alternative rule sequence. Once all of the pending exclusive lock operations have executed, the ADRM proceeds to state S1 and issues all pending shared lock operations simultaneously, or to state S0 if no shared lock operations are pending.
If the DBMS allows a single process to acquire an exclusive lock after having owned a shared lock, the ADRM can be implemented to allow this as well, but only when the acquisition would not result in a deadlock situation. In such an instance, the ADRM would proceed to either of states S4 or S6. A deadlock results when more than one process requests an exclusive lock after having owned a shared lock; in this instance, the ADRM permits only one process to escalate the shared lock and fails escalation requests by subsequent processes with a deadlock return code. When multiple resources are being coordinated among multiple processes, deadlocks are possible. As the ADRM has an omnipresent view of process and resource locks, deadlock detection is straightforward. In any situation, deadlocks can be resolved with a timeout, or by the ADRM, which can prevent a requester from blocking if any other process is waiting on a resource owned by the requester. Deadlocks between locks maintained by the ADRM for resources the ADRM is managing and locks generated by the DBMS for other resources are resolved by timeouts. The ADRM and/or the DBMS must have a mechanism to time out if a resource is held too long.
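Because the ADRM sees every holder and every waiter of the resources it manages, deadlock detection can be reduced to finding a cycle in a wait-for graph. The sketch below illustrates that check; the graph representation and the class name are assumptions of this example, not part of the patent.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

/** Sketch of deadlock detection over the ADRM's global bookkeeping: a cycle
 *  in the wait-for graph among requesters is a deadlock. */
public class WaitForGraph {

    // waiter -> requesters it waits for (current holders of the resource it wants)
    private final Map<String, Set<String>> waitsFor = new HashMap<>();

    public void addEdge(String waiter, String holder) {
        waitsFor.computeIfAbsent(waiter, k -> new HashSet<>()).add(holder);
    }

    public void removeWaiter(String waiter) {
        waitsFor.remove(waiter);
    }

    /** True if the graph contains a cycle through `start`, i.e., suspending
     *  `start` on its current wait-for edges would deadlock. */
    public boolean wouldDeadlock(String start) {
        return reachesStart(start, start, new HashSet<>());
    }

    private boolean reachesStart(String start, String current, Set<String> visited) {
        if (!visited.add(current)) {
            return false;                      // already explored from this requester
        }
        for (String next : waitsFor.getOrDefault(current, Set.of())) {
            if (next.equals(start) || reachesStart(start, next, visited)) {
                return true;                   // the start requester is reachable again
            }
        }
        return false;
    }
}
```

Before suspending a requester, an ADRM of this kind could tentatively add the corresponding wait-for edges and reject the request with a deadlock return code if wouldDeadlock reports a cycle.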
FIG. 5 shows an example of shared lock query and insert operations scheduled with exclusive lock alter table operations by the ADRM. Four jobs 510, 520, 530, 540 issue database operations to a single table resource. The first job 510 issues two sequential Inserts 512, 513, and the fourth job 540 issues a Query 542 at approximately the same time. The second job 520 then issues an Alter 522. The rule of the ADRM is to suspend Alters if other operations are currently in progress. Therefore, the ADRM suspends the Alter 522. Similarly, when the third job 530 issues an Insert 532, the ADRM suspends the Insert 532 in an attempt to reduce the number of outstanding requests in the system.
Once the first job 510 has committed 514, the ADRM releases the second job 520 to perform the Alter 524. Any newly issued operations by other jobs will continue to be suspended, such as the Query 516 and the Insert 546. After the second job 520 has committed 526, there are no more exclusive lock operations suspended and the ADRM releases all of the other suspended operations, e.g., Query 518, Insert 534 and Insert 548.
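Replaying the FIG. 5 scenario against the ResourceScheduler sketch above reproduces the ordering shown in the figure; the reference numerals follow FIG. 5, the per-operation release calls are a simplification of the transaction-scoped commits 514 and 526, and the sketch assumes that Query 542 has completed by the time the first job commits.

```java
// Illustrative replay of the FIG. 5 scenario (assumes the ResourceScheduler sketch above).
public class Fig5Walkthrough {
    public static void main(String[] args) {
        ResourceScheduler table = new ResourceScheduler();

        // Operations issued while the table is free are presented immediately.
        table.requestShared(() -> System.out.println("job 510: Insert 512"));
        table.requestShared(() -> System.out.println("job 510: Insert 513"));
        table.requestShared(() -> System.out.println("job 540: Query 542"));

        // The Alter 522 is suspended behind the active shared operations,
        // and the later Insert 532 is suspended behind the pending Alter.
        table.requestExclusive(() -> System.out.println("job 520: Alter 524"));
        table.requestShared(() -> System.out.println("job 530: Insert 534"));

        // Job 510 commits (514) and Query 542 completes; with no shared operation
        // left active, the suspended Alter is issued.
        table.release(false);   // Insert 512
        table.release(false);   // Insert 513
        table.release(false);   // Query 542

        // Operations issued while the Alter holds the table remain suspended.
        table.requestShared(() -> System.out.println("Query 518"));
        table.requestShared(() -> System.out.println("Insert 548"));

        // Job 520 commits (526): no exclusive work remains, so all suspended
        // shared operations (Insert 534, Query 518, Insert 548) are released together.
        table.release(true);
    }
}
```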
The invention and all of the functional operations described in this specification can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structural means disclosed in this specification and structural equivalents thereof, or in combinations of them. The invention can be implemented as one or more computer program products, i.e., one or more computer programs tangibly embodied in an information carrier, e.g., in a machine-readable storage device or in a propagated signal, for execution by, or to control the operation of, data processing apparatus, e.g., a programmable processor, a computer, or multiple computers. A computer program (also known as a program, software, software application, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file. A program can be stored in a portion of a file that holds other programs or data, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network.
The processes and logic flows described in this specification, including the method steps of the invention, can be performed by one or more programmable processors executing one or more computer programs to perform functions of the invention by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus of the invention can be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for executing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. Information carriers suitable for embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.

The invention can be implemented in a computing system that includes a back-end component (e.g., a data server), a middleware component (e.g., an application server), or a front-end component (e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the invention), or any combination of such back-end, middleware, and front-end components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network ("LAN") and a wide area network ("WAN"), e.g., the Internet.
The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
The invention has been described in terms of particular embodiments, but other embodiments can be implemented and are within the scope of the following claims. For example, the operations of the invention can be performed in a different order and still achieve desirable results. In certain implementations, multitasking and parallel processing may be preferable. What is claimed is:

Claims

1. A system having one or more application programs, a database driver, and a database management system (DBMS), the one or more application programs making operation requests through the database driver directed to resources managed by the DBMS, the DBMS responding to the operation requests, the system being characterized by: means for receiving a sequence of actual operation requests directed to resources managed by the DBMS from one or more computer program applications; means for modeling each of the operation requests in the sequence of actual operation requests as an abstract resource request and generating a sequence of abstract resource requests corresponding to the sequence of actual operation requests; and means for managing the timing and order of presentation of the actual operation requests to the DBMS using the sequence of abstract resource requests.
2. The system of claim 1, wherein the sequence of abstract resource requests is established through the use of pre-defined rules, such that the pre-defined rules establish a specific priority order for each abstract resource request.
3. The system of claim 1 or 2, wherein the actual operation requests include exclusive lock, shared lock, and free lock operations.
4. A system having one or more application programs, a database driver, and a database management system (DBMS), the one or more application programs making operation requests through the database driver directed to resources managed by the DBMS, the DBMS responding to the operation requests, the system being characterized by: means for receiving incoming requests from one or more processes; means for identifying one or more operation requests related to a monitored resource; means for categorizing the operation requests related to the monitored resource; and means for ordering the operation requests into a sequence based on the category of the operation requests.
5. The system of claim 4, wherein the categories include exclusive lock, shared lock, and free lock.
6. A computer program product, tangibly embodied in an information carrier, comprising instructions operable to cause data processing apparatus to: receive a sequence of actual operation requests directed towards monitored resources from one or more computer program applications; model the operation requests as abstract resource requests and generate a sequence of abstract resource requests corresponding to the sequence of operation requests; and determine a timing and order of presentation of the actual operation requests to the monitored resources using the sequence of abstract resource requests, and present the actual operation requests to the monitored resources according to the determined timing and order of presentation.
7. The computer program product of claim 6, wherein the monitored resources are managed by a database management system and the instructions to present the actual operation requests comprise instructions to present the actual operation requests to the database management system.
8. The computer program product of claim 6 or 7, wherein the sequence of abstract resource requests is established through the use of pre-defined rules, such that the pre-defined rules establish a specific priority order for each abstract resource request.
9. The computer program product of any one of claims 6 to 8, wherein the actual operation requests include exclusive lock, shared lock, and free lock operations.
10. A computer program product, tangibly embodied in an information carrier, comprising instructions operable to cause data processing apparatus to: receive incoming requests from one or more processes; identify one or more operation requests related to a monitored resource; categorize the operation requests related to the monitored resource; and order the operation requests for the monitored resource into a sequence based on the category of the operation requests.
11. The computer program product of claim 10, wherein the categories include exclusive lock, shared lock, and free lock.
12. A computer-implemented method for managing database requests contending for the same resources, the method comprising: receiving a sequence of actual operation requests directed towards monitored resources from one or more computer program applications; modeling the operation requests as abstract resource requests and generating a sequence of abstract resource requests corresponding to the sequence of operation requests; and determining a timing and order of presentation of the actual operation requests to the monitored resources using the sequence of abstract resource requests, and presenting the actual operation requests to the monitored resources according to the determined timing and order of presentation.
13. The method of claim 12, wherein the monitored resources are resources of a database.
14. The method of claim 12 or 13, wherein the sequence of abstract resource requests is established through the use of pre-defined rules, such that the pre-defined rules establish a specific priority order for each abstract resource request.
15. The method of any one of claims 12 to 14, wherein the actual operation requests include exclusive lock, shared lock, and free lock operations.
16. A computer-implemented method for managing database requests contending for the same resources, the method comprising: receiving incoming requests from one or more processes; identifying one or more operation requests related to a monitored resource; categorizing the operation requests related to the monitored resource; and ordering the operation requests into a sequence based on the category of the operation requests.
17. The method of claim 16, wherein the categories include exclusive lock, shared lock, and free lock.

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2004/013639 WO2006058549A1 (en) 2004-12-01 2004-12-01 Augmented database resource management

Publications (1)

Publication Number Publication Date
EP1831798A1 true EP1831798A1 (en) 2007-09-12

Family

ID=34959554

Family Applications (1)

Application Number Title Priority Date Filing Date
EP04803407A Withdrawn EP1831798A1 (en) 2004-12-01 2004-12-01 Augmented database resource management

Country Status (2)

Country Link
EP (1) EP1831798A1 (en)
WO (1) WO2006058549A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8209696B2 (en) * 2006-02-13 2012-06-26 Teradata Us, Inc. Method and system for load balancing a distributed database

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5999942A (en) * 1993-02-11 1999-12-07 Appage Corporation Method and apparatus for enforcement of behavior of application processing systems without modifying application processing systems
US6058389A (en) * 1997-10-31 2000-05-02 Oracle Corporation Apparatus and method for message queuing in a database system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO2006058549A1 *

Also Published As

Publication number Publication date
WO2006058549A1 (en) 2006-06-08

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20070621

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LU MC NL PL PT RO SE SI SK TR

DAX Request for extension of the european patent (deleted)
17Q First examination report despatched

Effective date: 20091016

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: SAP SE

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20160701