WO2001059568A2 - Active cooperation deadlock detection system/method in a distributed database network - Google Patents
- Publication number
- WO2001059568A2 (PCT/SE2001/000265)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- client
- lock
- deadlock
- clients
- data object
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/52—Program synchronisation; Mutual exclusion, e.g. by means of semaphores
- G06F9/524—Deadlock detection or avoidance
Definitions
- This invention pertains to distributed database networks, and more particularly to a system and method for detecting deadlocks in such networks.
- a data object may be, for example, a single data value, a set of data values, any parameter(s) to be changed, or an executable set of code with its own parameter(s).
- data objects are stored as part of a database.
- more than one computer may require access to a certain data object, and may change or update the value of that data object.
- a distributed database network/system can be established in which a database(s) is made available to a plurality of computers in the network regardless of where the database tables are located.
- database table T1 may reside on a first computer while database table T2 resides on a second computer in the network; but both tables appear and function the same to all clients on these computers (i.e. location transparency).
- all accessible data objects need not be replicated and stored on each computer in a network.
- a replicated database system is a specific type of distributed database system, and is established where each computer in the network maintains its own version of the database. All other computers are advised of changes to a data object in a database so that consistency is maintained among objects in the replicated databases.
- Replicated databases have two primary advantages. A first advantage is fault tolerance. A second advantage is that local access to a replicated database is faster and less expensive than remote access to a database at another computer.
- Each computer can have one or more programs or processes for accessing its local version of a replicated database in order to perform respective tasks.
- a task is a stream of activity, and may be, for example, a database operation, file or data record request, register access or use, memory access request or use, etc.
- when a particular process (i.e. client) accesses a data object, an exclusive "lock" on the data object must be obtained by the accessing process in order to ensure exclusive access to the data object.
- no other process of the network may access it.
- a client can obtain a lock on a data object that is either located on the same computer or on another computer in the network.
- a lock management system recognizes the fact that an object is replicated, and simultaneously locks all instances of the database object as soon as one local instance of the object is locked.
- Many so-called concurrency control algorithms or strategies use various forms of locking to achieve such goals.
- a process may require access to multiple data objects in order to complete a particular task on which it is working.
- a set of processes is "deadlocked" when each process in the set is waiting for an event (e.g. release of a data object) that only another process in the set can cause.
- An illustrative example of deadlock is shown in Figure 1.
- Computer system 11 includes process x (client 1) while computer system 13 includes process y (client 2).
- Each process requires access to data objects A and B (referred to by reference numerals 15 and 17, respectively) to complete their respective transactions.
- client 1 has an exclusive lock on data object A while client 2 has an exclusive lock on data object B (exclusive locking is illustrated by solid lines).
- client 2 is waiting to access data object A
- client 1 is waiting to access data object B
- deadlock prevention has often been thought to be better than deadlock detection.
- transactions are typically restarted when it is determined that a requested operation, if allowed, might cause deadlock. Unfortunately, this may often result in unnecessary transaction restarts and is undesirable for at least this reason.
- Deadlock detection has been difficult to efficiently implement in distributed systems, since it is often based on analysis of wait-for-graphs (WFGs) which must contain all relevant dependencies to be useful. See "Readings in Database Systems", 2nd Edition, by Michael Stonebraker, including the paper "Concurrency Control in Distributed Database Systems" by P.A. Bernstein, et al.
- a WFG can be conceptualized as a directed acyclic graph (DAG) in which a node is inserted for each transaction (T). For example, if transaction Ti needs a lock which is held exclusively by transaction Tj, an edge Ti→Tj is created to illustrate that Ti waits for Tj.
- a deadlock condition exists if and only if the graph contains a cycle. For purposes of example only, assume that Tj is also waiting for Ti, in addition to Ti→Tj. The result is Ti→Tj→Ti, which is a deadlock. Deadlock cycles may also be indirect. For example, Ti→Tj→Tk→Ti (Ti waits for Tj, which waits for Tk, which waits for Ti) is another example of deadlock.
- DAG: directed acyclic graph
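The cycle test described above can be sketched as follows. This is an illustrative sketch only (not code from the patent): the WFG is a dict mapping each transaction to the set of transactions it waits for, and a depth-first walk finds a back edge.

```python
# Illustrative sketch: detecting a deadlock cycle in a wait-for-graph
# (WFG), where wfg[Ti] is the set of transactions that Ti waits for.

def has_cycle(wfg):
    """Return True if the directed wait-for graph contains a cycle."""
    WHITE, GRAY, BLACK = 0, 1, 2        # unvisited / on DFS stack / done
    color = {node: WHITE for node in wfg}

    def visit(node):
        color[node] = GRAY
        for succ in wfg.get(node, ()):
            if color.get(succ, WHITE) == GRAY:   # back edge -> cycle
                return True
            if color.get(succ, WHITE) == WHITE and visit(succ):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in list(wfg))

# Ti waits for Tj and Tj waits for Ti: the direct deadlock Ti->Tj->Ti
print(has_cycle({"Ti": {"Tj"}, "Tj": {"Ti"}}))               # True
# Ti->Tj->Tk with no edge back: no deadlock
print(has_cycle({"Ti": {"Tj"}, "Tj": {"Tk"}, "Tk": set()}))  # False
# The indirect cycle Ti->Tj->Tk->Ti is also detected
print(has_cycle({"Ti": {"Tj"}, "Tj": {"Tk"}, "Tk": {"Ti"}})) # True
```

Note that a WFG containing a cycle is, strictly speaking, no longer acyclic; the cycle test is precisely what distinguishes a deadlocked graph from a deadlock-free one.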
- Deadlock detection systems often assume the presence of a central lock manager, and/or central lock table, in which the presence of deadlock can be detected based on global WFG analysis.
- one site is designated as the deadlock detector for the system or network.
- Each scheduler or lock manager of other sites of the network periodically sends its local information (including all objects that its client is locked on and all that it is waiting for) to the one designated site.
- the deadlock detecting designated site merges the same into a global WFG to determine deadlock cycles.
- Client processes/threads at other non-central sites in the network are typically unaware of WFG updates.
- Such a centralized control renders deadlock detection expensive since the central arbitrator must handle a WFG which includes all ongoing transactions in the network or system.
- the resulting high cost creates the need for either periodic (i.e., non-real-time) analysis or more conservative approaches such as timestamp ordering.
- periodic analysis may degrade performance, increase detection cost, and/or introduce "phantom deadlocks" (i.e. incorrect recognition of deadlocks causing transactions to be restarted unnecessarily).
- a distributed deadlock detection system is described at slide 68 of 118, in "Transaction Management in Distributed Computing System" by A. Zaslavsky, accessible at www.ct.monash.edu.au.
- this system suffers from many of the above-listed problems. For example, analysis of WFGs becomes undesirably expensive if the graph includes all ongoing transactions on a given processor. In Zaslavsky, this is the case because a WFG at a node includes all transactions from all other processors on the network that are in any way related to any transaction currently in process at that node. Client processes throughout the network do not communicate with one another in Zaslavsky regarding locking, and are apparently unaware of WFG messages sent between nodes.
- a typical scenario may involve hundreds of ongoing transactions at a given node, where none or only a few are in any danger of deadlock.
- a central scanner of the WFG at each node becomes too heavy to be interrupt-driven and therefore is unable to practically detect deadlocks in real-time.
- the high expense results in periodic forwarding of WFGs and/or periodic WFG analysis, which are undesirable for the above-listed reasons.
- At least one client (process or thread for executing a transaction) in the network may transmit information to at least one other client so as to enable the other client to detect deadlock.
- Clients need not communicate with one another absent deadlock.
- Such active cooperation between client(s) enables each client in the network to have its own deadlock detection system.
- each client's deadlock detection system need only store and analyze information related to the transaction which that client is executing, thereby enabling deadlock to be efficiently detected in approximate real time with minimal communications cost.
- unnecessary transaction/task restarts as well as the need for a centralized deadlock detector may be reduced or even eliminated.
- Fig. 1 is a schematic view of first and second clients in a deadlock scenario.
- Fig. 2 is a schematic view of a network including two nodes whereat replicated data objects reside in accordance with an embodiment of this invention.
- Fig. 3 is a schematic view of a plurality of clients (processes) accessing a plurality of data objects in a distributed or replicated database network in accordance with an embodiment of this invention.
- Fig. 4 is a schematic diagram of three clients (processes) completing respective transactions utilizing three data objects in a manner such that deadlock does not occur.
- Fig. 5 is a schematic diagram illustrating a deadlock detection system/method in a distributed or replicated database network in accordance with an embodiment of this invention that enables deadlock to be detected in real time and resolved in an efficient manner.
- Figs. 6(a) through 6(h) are schematic diagrams illustrating certain basic steps taken in accordance with the deadlock detection and resolution of Fig. 5.
- Figs. 7(a) through 7(d) illustrate certain basic steps taken in the updating of client C1's WFG during the course of messages 5-1 through 5-16 of Fig. 5.
- Figs. 8(a) through 8(c) illustrate certain basic steps taken in the updating of client C2's WFG during the course of messages 5-1 through 5-16 of Fig. 5.
- Figs. 9(a) through 9(c) illustrate certain basic steps taken in the updating of client C3's WFG during the course of messages 5-1 through 5-16 of Fig. 5.
- Fig. 10 is a flowchart illustrating how a client may determine whether to send another client a message about a lock in accordance with a particular embodiment of this invention.
- Figure 11 is a flowchart illustrating steps taken by an object algorithm in accordance with an embodiment of this invention.
- Figure 12 is a flowchart illustrating steps taken by a client algorithm in accordance with an embodiment of this invention.
- Figure 13 is a flowchart illustrating steps taken by a client algorithm in accordance with an embodiment of this invention.
- Fig. 2 shows a replicated database network 20 comprising two illustrative nodes 30A and 30B. Each node has its own version of a replicated database 33 including at least data objects O1, O2, and O3. Specifically, node 30A includes hard disk 32A whereon its version of the replicated database, referenced as 33A, is stored. Similarly, node 30B includes hard disk 32B whereon its version of the replicated database, referenced as 33B, is stored. While Fig. 2 illustrates a replicated database network, it is noted that this invention is also applicable to other types of distributed database networks including those where accessible data objects are not stored on all computers in the network.
- Each node 30A and 30B includes a processor or CPU (40A and 40B respectively) which is connected by an internal bus (42A and 42B respectively) to numerous elements. Illustrated ones of the elements connected to internal bus 42 include a read only memory (ROM) (43A and 43B respectively); a random access memory (RAM) (44A and 44B respectively); a disk drive interface (45A and 45B respectively); and a network interface (46A and 46B respectively). Disk drive interfaces 45A and 45B are connected to respective disk drives 50A and 50B at each node.
- Network interfaces 46 connect to network link 60 over which the nodes 30A and 30B communicate with one another and with other similar nodes of the network.
- Hard disks 32A and 32B are one example of a node-status inviolable memory or storage medium. "Node-status inviolable" means that the contents of the memory remain unaffected when the node crashes or assumes a down status. Although the node-status inviolable memory is illustrated in one embodiment as being a hard magnetic disk, it should be understood that other types of memory, e.g., optical disk, magneto-optical disk, magnetic tape, etc., may be utilized for storage by the nodes of the network.
- Processors 40A and 40B execute sets of instructions in respective operating system(s), which in turn allow the processors to execute various application programs which are preferably stored on hard disks 32A and 32B.
- a set of instructions embodied in a computer program product and known as a lock manager application program (LOCK MANAGER) (73A and 73B) may also be provided at each node; alternatively, objects may take care of locking themselves, or a centralized lock manager may be provided for the entire network.
- Processors 40 of the respective nodes execute application programs 70A, 70B. In order to be executed, such programs must be loaded from the medium on which they are stored, e.g., hard disk 32A, 32B, into RAM 44A, 44B.
- Fig. 3 is a diagram relating to the network of Fig. 2 or any other type of distributed database network, including clients C1-C3 and database(s) 33 in which data objects O1, O2, and O3 are stored.
- Each database 33 may contain a complete set of data objects O1-O3, or alternatively certain objects (e.g. O1-O2) may be stored in a first database at one node of the network and other objects (e.g. O3) may be stored in a second database at another node of the network.
- "client" as used herein means a "process" or "thread" working on a task or transaction.
- a client may only work on one transaction at a time, with transactions being performed in a sequential manner (one transaction may not be started by a client until the previous one being executed by that client has been completed).
- a "thread" is similar to a "process" in this respect, with this term being used by, for example, the JAVA programming language.
- clients C1, C2 and C3 may all be at the same node (i.e., run by the same processor), or alternatively may be distributed among plural nodes (e.g. client C1 may be at a first node with a first processor, client C2 at a second node with a second processor, and client C3 at a third node with a third processor).
- each client C1, C2 and C3 has a transaction/task to perform involving at least two different database objects.
- client C1 requires exclusive access to data objects O1 and O2 in order to complete its transaction
- C2 requires exclusive access to objects O2 and O3 to complete its transaction
- C3 requires exclusive access to objects O2 and O3 in order to complete its transaction.
- Each data object may be either active (handling its own locks) or passive (locking is administered by a lock manager 73).
- a useful abstraction in the embodiments set forth below and in Figs. 4-9 is for clients to act as though they communicate with objects O1-O3 directly.
- an active cooperation deadlock detection system/method has clients C1, C2 and C3 sending or volunteering lock and waiting information to one another on a per transaction basis.
- a first client may inform a second client (which is waiting for the first client to release/surrender a lock on a data object) over network link(s) 60 about other lock operations in which the first client (or another client) is involved. Absent such a message, the second client would have no way of getting the complete picture of what other lock operations the first client (or another client) is involved in and/or what objects the first client (or another client) is waiting for.
- this enables each client to maintain a simplified deadlock detection system including a WFG relating just to its transaction (i.e. the localized WFG includes the transaction upon which the client is working as well as any transaction related thereto).
- the term "related" is used in a broad sense; for example, if object O1 has a queue AB, object O2 a queue BC, object O3 a queue CA, and object O4 a queue CB, then client A still analyzes and stores information of the transaction on object O2, although it is not executing that transaction (because client A is involved in transactions relating to B and/or C).
- the WFG at each client is updated by an object (or the client) with data received from other client(s) relating to its transaction, so that each client can detect deadlock relating to its transaction on a substantially real-time basis.
- no network-centralized deadlock detector is required (although one may be used in non-preferred embodiments of this invention), and unnecessary transaction restarts may be reduced or avoided. Thus, deadlock may be more efficiently detected and resolved.
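The localized per-client detector described above can be sketched as follows. This is a hypothetical sketch (the names Client, add_wait, and deadlocked are illustrative, not from the patent): each client keeps only the wait edges related to its own transaction, updated from object deny messages and from edges volunteered by other clients, and scans that small graph for a cycle through its own transaction.

```python
# Hypothetical sketch of a per-client, localized deadlock detector.

class Client:
    def __init__(self, txn):
        self.txn = txn        # the single transaction this client runs
        self.wfg = {}         # localized WFG: txn -> set of txns waited on

    def add_wait(self, waiter, holder):
        """Record 'waiter waits for holder', e.g. from a data object's
        deny message or from another client volunteering its status."""
        self.wfg.setdefault(waiter, set()).add(holder)

    def deadlocked(self):
        """Scan only the localized WFG for a cycle back to this
        client's own transaction (iterative depth-first walk)."""
        stack, seen = [self.txn], set()
        while stack:
            t = stack.pop()
            for nxt in self.wfg.get(t, ()):
                if nxt == self.txn:
                    return True          # cycle returns to our own txn
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return False

# Client C1 (running T1) after the Fig. 5 messages it receives:
c1 = Client("T1")
c1.add_wait("T1", "T2")   # O2 denies C1's request: T1 waits for T2
c1.add_wait("T2", "T3")   # volunteered by C2 (circled message 5-13)
c1.add_wait("T3", "T1")   # O1 reports C3 now waiting (message 5-16)
print(c1.deadlocked())    # True: T1 -> T2 -> T3 -> T1
```

Because each client stores only edges related to its own transaction, the scan stays small even when hundreds of unrelated transactions are in flight elsewhere in the network.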
- Figure 4 illustrates a sequence of messages involving clients C1, C2 and C3, and data objects O1, O2 and O3, in which the sequencing takes place in a manner that avoids deadlock (i.e. deadlock does not occur in this example).
- client C1 requires exclusive access to data objects O1 and O2 in order to complete its transaction
- client C2 requires exclusive access to objects O2 and O3 to complete its transaction
- client C3 requires exclusive access to data objects O1 and O3 in order to complete its transaction.
- Client C1 starts by requesting (message 4-1) an exclusive lock on data object O1.
- Data object O1 responds (directly or via a lock manager) to client C1 indicating that the lock on it has been approved (message 4-2).
- client C1 has its requested lock on data object O1. All other nodes of the network may or may not be informed of a lock on data object O1 at this time.
- Client C1 requests (message 4-3) an exclusive lock on data object O2.
- Data object O2 responds (message 4-4) indicating that the lock has been approved.
- Client C1 thus has its requested exclusive locks on data objects O1 and O2, so that only client C1 may access and/or vary data objects O1 and O2 during the locking period (no other client/process on the network may access or vary these objects so long as the exclusive locks remain in place).
- Client C1 proceeds and completes its transaction/task, after which it unlocks data objects O1 (message 4-5) and O2 (message 4-6), freeing up these data objects for access by other clients of the network.
- client C2 requests (message 4-7) an exclusive lock on data object O2.
- Data object O2 responds (message 4-8) indicating that the lock has been approved.
- Client C2 requests (message 4-9) an exclusive lock on data object O3.
- Data object O3 responds (message 4-10) to client C2 indicating that the lock on it has been approved.
- Client C2 thus has its requested exclusive locks on data objects O2 and O3, so that only client C2 may access and/or vary these two data objects during the locking periods.
- Client C2 proceeds and completes its transaction, after which it unlocks data objects O2 (message 4-11) and O3 (message 4-12), thereby freeing up these objects for access by other clients of the network.
- client C3 begins by requesting (message 4-13) an exclusive lock on data object O1.
- Data object O1 responds (message 4-14) indicating that the lock has been approved.
- Client C3 requests (message 4-15) an exclusive lock on data object O3.
- Data object O3 responds (message 4-16) to client C3 indicating that the lock on it has been approved.
- Client C3 thus has its requested exclusive locks on objects O1 and O3, so that only client C3 may access and/or vary these two data objects during the locking periods.
- Client C3 proceeds and completes its transaction, after which it unlocks objects O1 (message 4-17) and O3 (message 4-18), freeing up these objects for access by other clients of the network.
- deadlock occurs due to the illustrated interleaving of transactions.
- the deadlock is detected by at least one client (as opposed to a centralized deadlock detection system) and rectified in the Fig. 5 scenario using an additional eleven messages (for a total of twenty-nine - the eighteen of Fig. 4 plus the additional eleven) for all transactions to be completed.
- FIG. 5 illustrates sequencing between clients C1-C3 and data objects O1-O3.
- Figs. 6(a) through 6(h) schematically illustrate locking scenarios as they unfold throughout the sequencing of Fig. 5.
- client C1 requires exclusive access to data objects O1 and O2 in order to complete its transaction T1
- client C2 requires exclusive access to data objects O2 and O3 to complete its transaction T2
- client C3 requires exclusive access to data objects O1 and O3 in order to complete its transaction T3.
- T1-T3 are separate and distinct transactions.
- each localized WFG of a client at the initiation of a transaction includes only the transaction (T1, T2 or T3) to be performed by that process.
- the localized WFGs for different clients are typically different for the reasons discussed below, although in certain scenarios more than one WFG at different clients may end up being the same at different points in a message sequencing scenario.
- client C1 initially requests (message 5-1) an exclusive lock on data object O1.
- Data object O1 responds (directly or via a lock manager) to client C1 indicating that the lock on it has been approved (message 5-2).
- client C1 has its requested exclusive lock on data object O1, so that only client C1 may access and/or vary data object O1 during the locking period (no other client or process on the network may access or vary data object O1 so long as this lock remains on it).
- while client C1 has its lock on data object O1, the next event in the Fig. 5 sequence is client C2 requesting (message 5-3) an exclusive lock on data object O2.
- client C2 begins its task before client C1 has completed its transaction.
- Data object O2 responds (message 5-4) to client C2 indicating that the requested exclusive lock on it has been approved, as it was not otherwise locked at the time of the request.
- Client C3 then (before either C1 or C2 has completed its transaction) requests (message 5-5) an exclusive lock on data object O3.
- Data object O3 responds (message 5-6) to client C3 indicating that the requested exclusive lock on it has been approved.
- client C1 has an exclusive lock on data object O1
- client C2 an exclusive lock on data object O2
- client C3 an exclusive lock on data object O3.
- the respective localized WFGs of clients C1-C3 remain as in Figs. 7(a), 8(a), and 9(a), respectively.
- Client C1 then requests (message 5-7) an exclusive lock on data object O2. Data object O2 responds (message 5-8) to client C1 indicating that the lock request cannot be approved because data object O2 is already locked by client C2, thereby denying client C1's request and telling client C1 to wait pending release of the O2 lock. Client C1 updates its WFG accordingly as shown in Fig. 7(b) [i.e. T1→T2]. When denying the lock to client C1, data object O2 also sends a message (message 5-9) to client C2 (which has a lock on data object O2) indicating that client C1 is waiting for data object O2.
- client C2 now knows that client C1's transaction is related in some respect to client C2's transaction, as they both require access to data object O2.
- Client C2 updates its WFG accordingly as shown in Fig. 8(b) [i.e. T1→T2].
- when a data object (or its lock manager) is locked by a first client and receives a lock request from a second client, the object responds to both the first and second clients, informing each of them of which client has its lock and which is pending, thereby updating clients with regard to other clients executing related transactions.
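The object-side behaviour just described can be sketched as follows. This is a minimal sketch under assumed names (DataObject, request_lock, and the message tuples are illustrative, not the patent's own code): an active data object grants a free lock, and on a conflicting request it queues the requester and notifies both the requester and the current holder.

```python
# Minimal sketch (assumed names) of an active data object that handles
# its own locks and informs both parties on a lock conflict.

from collections import deque

class DataObject:
    def __init__(self, name, send):
        self.name = name
        self.send = send           # send(client, message) callback
        self.holder = None         # client currently holding the lock
        self.queue = deque()       # pending (waiting) clients, in order

    def request_lock(self, client):
        if self.holder is None:
            self.holder = client
            self.send(client, ("approved", self.name))
        else:
            self.queue.append(client)
            # deny: tell the requester to wait for the current holder...
            self.send(client, ("wait", self.name, self.holder))
            # ...and tell the holder who is now waiting for this object
            self.send(self.holder, ("waiting_on_you", self.name, client))

    def unlock(self, client):
        assert client == self.holder
        self.holder = self.queue.popleft() if self.queue else None
        if self.holder is not None:
            self.send(self.holder, ("approved", self.name))

log = []
o2 = DataObject("O2", lambda c, m: log.append((c, m)))
o2.request_lock("C2")   # like messages 5-3/5-4: granted
o2.request_lock("C1")   # like messages 5-7 to 5-9: denied, both sides told
print(log)
```

Both clients thus receive the edge information (here, "C1 waits for C2 on O2") that their localized WFGs need, without any central lock table.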
- client C2 requests (message 5-10) an exclusive lock on data object O3.
- Data object O3 responds (message 5-11) to client C2 indicating that the lock request cannot be approved because data object O3 is already locked by client C3, thereby denying client C2's request and telling client C2 to wait pending release of the O3 lock.
- client C2 is pending on data object O3 at this point.
- Client C2 updates its WFG accordingly as shown in Fig. 8(c), which illustrates client C1's transaction T1 waiting on client C2's transaction T2, which in turn is waiting on client C3's transaction T3 [i.e. T1→T2→T3].
- when denying the lock to client C2, data object O3 also sends a message (message 5-12) to client C3 (which already has a lock on data object O3) indicating that client C2 is waiting for data object O3.
- Client C3 updates its localized WFG accordingly as shown in Fig. 9(b) [i.e. T2→T3].
- client C3 not only has data object O3 exclusively locked; it also now knows that client C2's transaction T2 is waiting for data object O3. Since client C2 was informed by data object O2 that client C1 was waiting for O2 (i.e. client C1 is waiting for client C2 to release object O2), client C2 determines that its transaction T2 is related in some respect to those (T1 and T3) of clients C1 and C3, and that it and client C1 are in waiting patterns (a potential for deadlock exists). Since client C2 has now stored in its WFG certain information (i.e. T2→T3) that it determines may not be known to another waiting client C1, client C2 sends a message (circled message 5-13) to client C1 (e.g. via link 60 or otherwise) informing it that client C2 is waiting for data object O3 which is held by client C3. Client C1 updates its WFG table accordingly with this information, as shown in Fig. 7(c).
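The forwarding rule behind circled message 5-13 can be sketched as follows. This is an illustrative sketch (the function name downstream_waits is assumed, not from the patent): a client that is itself waiting, and learns that another client is waiting on it, volunteers its own downstream wait edges, since the waiting client otherwise has no way to learn them.

```python
# Sketch of what a waiting client volunteers to a client waiting on it:
# every wait edge reachable from its own transaction in its localized WFG.

def downstream_waits(my_txn, wfg):
    """Collect the (waiter, holder) edges reachable from my_txn."""
    out, frontier, seen = [], [my_txn], {my_txn}
    while frontier:
        t = frontier.pop()
        for held_by in wfg.get(t, ()):
            out.append((t, held_by))
            if held_by not in seen:       # guard against cycles
                seen.add(held_by)
                frontier.append(held_by)
    return out

# Client C2's localized WFG after message 5-11 (Fig. 8(c)): T1->T2->T3.
c2_wfg = {"T1": {"T2"}, "T2": {"T3"}}
# C1 (running T1) waits on C2, so C2 volunteers its downstream edges:
print(downstream_waits("T2", c2_wfg))   # [('T2', 'T3')]
```

The single edge T2→T3 is exactly the "missing piece" that lets client C1's localized WFG close the cycle once message 5-16 later arrives.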
- Client C3 then requests (message 5-14) an exclusive lock on data object O1.
- Data object O1 responds (message 5-15) to client C3 indicating that the lock request cannot be approved (i.e. data object O1 is already locked by client C1), thereby denying client C3's request and telling client C3 to wait pending release of the data object O1 lock.
- client C3 is pending on data object O1 at this point.
- Client C3 updates its WFG accordingly as shown in Fig. 9(c) (i.e. T2→T3→T1).
- data object O1 also sends a message (message 5-16) to client C1 (which already has a lock on data object O1) indicating that C3 is now waiting for data object O1.
- Client C1 updates its WFG accordingly as shown in Fig. 7(d).
- This message (message 5-16) received by client C1 from data object O1 is the last piece of the puzzle needed by client C1 for it to detect that it is involved in a deadlock.
- Client C1 updates and now has each of items (i), (ii), (iii), (iv), (v), and (vi) in its WFG table as shown in Fig. 7(d).
- Client C1's WFG now shows a complete or circular cycle (i.e. T1→T2→T3→T1).
- upon scanning its localized WFG (which includes only its own transaction T1 and transactions (T2, T3) related thereto), client C1 detects the circular pattern and thus deadlock. In other words, client C1 determines that each of clients C1-C3 is now waiting for an event (e.g. release of a data object) that only another one of the clients (or transactions) in the set can cause.
- the deadlock is illustrated in Fig. 6(a), where solid lines indicate exclusive locks by clients on data objects, and broken lines indicate a client waiting (or pending) for a data object.
- Figures 7-9 illustrate that while deadlock is detected after message 5-16, it is only detected by client C1 in this particular embodiment.
- the respective WFGs of clients C2-C3 are not yet updated in this embodiment with enough related information to enable those clients to detect the deadlock.
- Client C1 was able to do so due to the active cooperation among the clients (i.e. client C2 having sent message 5-13 to client C1).
- after client C1 detects the deadlock shown in Fig. 6(a), it initiates a solution for the deadlock by surrendering or releasing its lock on data object O1 as shown in Figs. 5 and 6(b). In doing this, client C1 sends an unlock message (message 5-17) to data object O1. Data object O1 responds (message 5-19) to client C1 indicating that the C1:O1 lock has been released, that client C1 is now pending on data object O1 (i.e. client C1 may obtain another lock on object O1 following client C3's lock on data object O1 being released), and that previously pending client C3 now holds a lock on data object O1. As shown in Fig. 5, data object O1 also sends a message (message 5-18) to client C3 indicating that the C1 lock has been released, thereby causing client C3 to have an exclusive hold or lock on data object O1.
- Clients C1 and C3 update their WFGs accordingly (not shown).
- Fig. 6(c) illustrates this scenario where client C3 holds an exclusive lock on data object O1, with client C1 pending on data object O1.
- since client C1 has now been informed of new information about client C3, and it also knows that the transaction of waiting client C2 is related to its transaction and that client C2 potentially does not know of the new information, client C1 sends a message (message 5-20) to client C2 informing client C2 that client C1 is now waiting for data object O1 which is held by C3 (this message turns out to be irrelevant, but is sent in accordance with the procedure of clients sharing information with other clients related to their transaction). Client C2 updates its WFG accordingly (not shown). Client C3 now has locks on each of data object O1 and data object O3, thereby allowing client C3 to complete its transaction while client C1 waits for data object O1.
- Still referring to Fig. 5, once client C3 has completed its transaction, it sends a message (message 5-21) to data object O1 unlocking the same.
- data object O1 unlocks from client C3 and sends a message (message 5-22) to client C1 indicating that client C1 now holds an exclusive lock on data object O1 (client C1 had been pending on data object O1 during the time client C3 had its lock on data object O1).
- Localized WFGs are updated accordingly (not shown).
- Fig. 6(d) illustrates the situation in which client C1 holds a lock on data object O1, client C1 is pending on data object O2, client C2 holds a lock on data object O2, client C2 is pending on data object O3, and client C3 still holds its lock on data object O3.
- Client C3 then sends a message (message 5-23) to data object 03 unlocking the same.
- data object O3 unlocks from client C3 and sends a message (message 5-24) to client C2 indicating that client C2 now holds an exclusive lock on data object O3 (client C2 had been pending on data object O3 during the time client C3 had its lock on data object O3).
- Localized WFGs are updated accordingly (not shown).
- Fig. 6(e) illustrates client C1 holding a lock on data object O1, client C1 pending on data object O2, client C2 holding locks on data object O2 and data object O3, and client C3 no longer holding locks on any of data objects O1-O3 because it has completed its transaction T3. Accordingly, client C2 now has locks on each of data object O2 and data object O3, thereby allowing it to complete its transaction while client C1 waits for data object O2. Once client C2 has completed its transaction, it sends a message (message 5-25) to data object O2 unlocking the same.
- data object O2 unlocks from client C2 and sends a message (message 5-26) to client C1 indicating that client C1 now holds an exclusive lock on data object O2 (client C1 had been pending on data object O2 during the time client C2 had its lock on data object O2).
- Fig. 6(f) illustrates client C1 holding locks on data object O1 and data object O2, client C2 still holding a lock on data object O3, and client C3 holding no locks on any of data objects O1-O3. Accordingly, client C1 now has locks on each of data object O1 and data object O2, thereby allowing it to complete its transaction. Client C2 then sends a message (message 5-27) to data object O3 unlocking the same.
- Fig. 6(g) illustrates this status where client C1 holds locks on data objects O1 and O2 and completes its transaction, and clients C2 and C3 no longer hold any locks on any of data objects O1-O3 because they have completed their respective transactions. Localized WFGs are updated accordingly (not shown).
- Fig. 5 illustrates this status where none of clients C1-C3 hold any locks on any of data objects O1-O3, as they have all completed their transactions after resolving the aforesaid deadlock.
- After any or all of data objects O1-O3 have been modified during the course of the aforesaid transactions, they are updated as to their values and/or other changes across the replicated databases as described above, so that each of the replicated databases 33 is the same in this regard.
- Communications transmitted between clients C1-C3 via link 60 enable clients (e.g., C1 in the Fig. 5 example) to detect the deadlock via localized WFG analysis.
- The first circled message (from client C2 to client C1) proved to be what would otherwise have been a missing piece of information needed by C1 to detect the deadlock, while the circled message volunteered from client C1 to client C2 had no effect.
- Without that message, client C1 would never have known that client C2 was waiting for data object O3, which was held by client C3, and thus would not have detected the deadlock.
- An advantage of allowing clients to perform deadlock detection is that, once deadlock is detected, graceful resolution is possible.
- One or more of the affected clients simply trade places on a wait list on the object(s) in question. For example, in Figs. 5-6, once client C3 is done with data object O1, the lock is again granted to client C1, which had surrendered its lock to client C3 earlier in order to resolve the deadlock. Thus, no transaction had to be restarted.
- By enabling individual clients to detect deadlock via their own localized WFGs, and by partitioning the WFG information of different clients on a per-transaction basis (i.e., a client's WFG may include only information about other clients whose transactions relate to the WFG client's transaction), the graphs may be of reduced complexity, thereby enabling them to be scanned in substantially real time so that deadlocks can be more efficiently detected and more easily resolved.
- Because locking processes or clients may synchronize with one another, they can exchange resources in an efficient manner so as to avoid and/or reduce transaction restarts.
- In certain embodiments, a client Cx determines when to send another client Cy information in the following manner. For each client Cy which is waiting for client Cx, client Cx knows to send each such client Cy all information regarding locks for which any client is waiting but in which client Cy is not involved. Client Cx determines whether this condition is met each time client Cx receives a message from an object indicating that client Cx is to wait.
- This is consistent with Figs. 5 and 10 (although the sequence of messages resulting from this Fig. 10 embodiment is slightly different than the sequence shown in Fig. 5).
- In Fig. 10, the first query 103 is whether a received message is from a data object. If so, client Cx then determines at 107 whether the received message (M) is telling client Cx to wait for a lock to be released. If not, then no message is sent by client Cx to any other client 105. If so, then client Cx determines whether any other client Cy is waiting for client Cx to release a lock (step 109). If so, then at 111 client Cx determines whether the received message M includes information relating to a particular lock which some client is waiting for but in which client Cy is not involved.
- If so, client Cx sends a message to client Cy informing it of all or a portion of the information in the received message (M) (step 115). However, if the received message (M) was determined in query 111 to relate to a lock which client Cy is waiting for or is otherwise involved in, then client Cx sends no message to client Cy (step 113).
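The query sequence above can be sketched as a small decision procedure. This is an illustrative reading of Fig. 10, not the patented implementation; the message fields (`from_object`, `wait`) and the function name are assumptions.

```python
# Sketch of the Fig. 10 forwarding decision (queries 103/107/109/111).
# All names and message fields are illustrative assumptions.

def clients_to_inform(msg, waiting_on_me, lock_parties):
    """Return the clients Cy that a client Cx should volunteer msg to.

    msg           -- dict with 'from_object' and 'wait' flags
    waiting_on_me -- clients Cy currently waiting for Cx to release a lock
    lock_parties  -- clients involved in the lock that msg reports on
    """
    # Queries 103/107: only object messages telling Cx to wait are relayed.
    if not msg.get("from_object") or not msg.get("wait"):
        return []                        # step 105: send nothing
    recipients = []
    for c_y in waiting_on_me:            # query 109
        if c_y not in lock_parties:      # query 111: Cy not involved in the lock
            recipients.append(c_y)       # step 115: forward the information
        # else step 113: Cy is already involved; send it nothing
    return recipients
```

In the Fig. 5 example this reproduces message 5-13: message 5-11 tells C2 to wait for O3; C1 waits on C2 and is not involved in O3's lock, so C1 is informed.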
- The first message in Fig. 5 where a client is told to wait is message 5-8 to client C1. Because client C1 does not hold any lock for which any other client is waiting, it does not send any volunteered message to another client.
- Next, client C2 is told to wait (query 107 satisfied) by message 5-11 received from object O3 (query 103 satisfied). Client C2 already had a lock on object O2 for which client C1 was waiting (query 109 satisfied), and client C2 was not aware of any relationship between client C1 and object O3 (query 111 satisfied). Accordingly, client C2 determines that it should send client C1 message 5-13 informing it that client C2 was waiting for a lock on object O3 that was held by client C3 (step 115).
- Message 5-15 causes client C3 to send a message (not shown) to client C2 (since client C2 is waiting for client C3, and client C3 is unaware of C2 being related to object O1) telling client C2 that client C1 holds object O1 and client C3 is pending on object O1.
- This message results in clients C1 and C2 each being capable of detecting the deadlock.
- If the clients are programmed in a manner such that C1 < C2 < C3 (C1 surrenders its locks to C2 and/or C3), client C2 does nothing, since it knows that client C1 must surrender first.
- Message 5-19 does not result in client C1 sending any message to any other client, because no client is waiting for client C1 on any lock.
- In other embodiments, a client C may determine to send another client Cy information about lock L when client C determines that (i) Cy > C, and (ii) Cy has a lock but is not involved in lock L.
- Other methods may also be used for enabling clients to determine when to send other clients such information according to other embodiments of this invention.
- In certain embodiments, all or a large portion of the messages sent from objects may include therein a sequence or version number.
- For example, the first message that a particular object O sends may have a sequence number of one therein, the second message a sequence number of two, the third a sequence number of three, and so forth.
- In such a manner, the potential for confusion is reduced (i.e., in a distributed database system, a client may receive messages from objects at much different points in time), as a receiver of message(s) from object(s) can place messages received from that and other objects in a sequence indicative of their time or order of origination.
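The ordering property described above can be sketched in a few lines. This is a minimal illustration, assuming dictionary-shaped messages with `object` and `seq` fields; the patent does not prescribe this representation.

```python
# Per-object sequence numbers let a receiver order messages by origination
# rather than by (possibly skewed) network arrival time, and discard
# messages it has already been superseded on.

def order_by_origination(messages):
    """Sort messages from many objects into per-object origination order."""
    return sorted(messages, key=lambda m: (m["object"], m["seq"]))

def is_stale(latest_seq, msg):
    """A message is outdated if a newer one from the same object was seen."""
    return msg["seq"] <= latest_seq.get(msg["object"], 0)
```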
- In certain embodiments, the details of the locking algorithm and the optional fact of a client performing lock negotiations may be hidden behind a functional interface. Specifically, clients may adhere to a two-phase locking protocol, acquiring all necessary locks before performing any work or releasing any locks.
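The two-phase discipline just mentioned can be sketched as a generic wrapper. The callables `try_lock`/`unlock` are caller-supplied placeholders, not part of the patent's interface.

```python
# Hedged sketch of strict two-phase locking: acquire every needed lock
# before doing any work or releasing anything (growing phase), then
# release them all after the transaction (shrinking phase).

def run_two_phase(transaction, needed, try_lock, unlock):
    """Run `transaction` once all objects in `needed` are locked."""
    held = []
    try:
        for obj in needed:          # growing phase: no work, no releases yet
            try_lock(obj)
            held.append(obj)
        return transaction()        # all locks held; safe to do the work
    finally:
        for obj in reversed(held):  # shrinking phase: release everything
            unlock(obj)
```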
- The algorithm for implementing certain aspects of this invention may include two entities: one for the object and one for the client. These entities may be seen as interfaces that arrange for client(s) to get access to an object, and ensure that the object is only accessed by clients which are allowed to do so.
- The algorithm implementing the object interface preferably provides read and write functionality, and also ensures that any such read or write is performed only by a client that has the right to do so (i.e., holds the lock).
- The algorithm implementing the client interface keeps track of the objects that need to be locked for a read and/or write operation.
- The client interface actively asks objects for read/write permission (a lock) and waits for the results supplied by the objects.
- This information is gathered, combined with subsequent information provided by the objects and competing clients, and action is taken upon the received information.
- From this, the client interface can either conclude that it has obtained read/write access to all required objects, or, in the absence of complete information, wait for more information and/or send information to competing clients.
- Set forth below is a more detailed description of the algorithms for both entities.
- The algorithm(s) may be stored, for example, in different memories at a plurality of different computers in the distributed network.
- The algorithm(s) may be stored in normal memory (e.g., hard drive, RAM, ROM, EPROM, etc.), secondary flash memory, primary flash memory, and/or in processor memory in different embodiments of this invention. It is noted that, since data communication is involved, a portion of the data needed by the algorithm(s) is typically sent over a network, so that at certain points in time it is on a wire or in other communication media.
- In certain embodiments, computers run in a distributed environment and are connected by a communications network having a slow speed compared to the computational speed of the individual computers in the network.
- Clients and objects have unique references and a total order exists among these references. The uniqueness of the reference is guaranteed.
- In practice, the set of possible references is finite. No strong assumption is therefore made about the relationship between creating a client or object and the assigned reference(s).
- In certain embodiments, a client created later in time has a larger reference than earlier-created clients, although this need not be the case in all embodiments.
- Another assumption may be the existence of a sequence of numbers long enough to attach version numbers to messages, such that within the lifetime of an object, version numbers do not run out.
- The algorithm implementing the interface to the object stores requests from clients for access to certain locks.
- The requests are stored in a waiting queue in order of arrival (e.g., in any of the memories listed above, with ordering in the queue being maintained, e.g., by links).
- The first element of a queue is considered to have been granted access to the object.
- When a new lock request arrives, the request is put at the end of the respective queue and all clients in the queue are notified about the new situation or scenario of the queue (i.e., notified of the new order of and/or client requests in the queue).
- Clients are permitted to cancel their requests which have previously been made, in which case they are removed from the queue. Again, all clients remaining in the queue are notified.
- In certain embodiments, a client which holds a lock on a particular object may request to surrender its lock on that object.
- Upon surrendering, the client is no longer the first in the queue.
- In one embodiment, when surrendering a lock, the surrendering client is thereafter placed last in the queue and all clients are again notified about the new order in the queue. Placing a surrendering client at the end of the queue is deemed to be a safe way of implementing a surrender action.
- Alternatively, a surrendering client may independently determine which location in the queue (e.g., at the end, in the middle, or simply switching places with the request immediately following it) would be the safest place to be located following its surrender to minimize the chance of future deadlock.
- The request of the surrendering client may then be inserted into that particular location in the queue following the client's surrendering of its lock on the object.
- Each notification to clients of an updated status of a queue is supplied with a version number.
- The version number is increased after every change in a particular queue.
- Notifications may also carry the identity of the particular client which induced the notification to be sent (e.g., the client which requested access, the client which canceled, the client which unlocked, or the client which surrendered).
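The queue discipline described above (arrival-order queue whose head holds the lock, with version-stamped notifications naming the inducing client) can be sketched as one class. This is an illustrative reading, assuming a plain callback for the notification mechanism, which the patent leaves open; the class and method names are assumptions.

```python
# Object-side lock queue sketch: requests queue in arrival order, the head
# holds the lock, every change bumps a version number, and each notification
# carries that version plus the identity of the client that induced it.

class LockQueue:
    def __init__(self, notify):
        self.queue = []        # head of the list holds the lock
        self.version = 0
        self.notify = notify   # notify(version, inducing_client, queue_copy)

    def _changed(self, inducing_client):
        self.version += 1
        self.notify(self.version, inducing_client, list(self.queue))

    def request(self, client):
        if client not in self.queue:       # ignore duplicate/outdated requests
            self.queue.append(client)
            self._changed(client)

    def cancel(self, client):              # a waiting client withdraws
        if client in self.queue:
            self.queue.remove(client)
            self._changed(client)

    def unlock(self, client):              # release after a completed transaction
        if self.queue and self.queue[0] == client:
            self.queue.pop(0)
            self._changed(client)

    def surrender(self, client):           # holder moves to the rear of the queue
        if self.queue and self.queue[0] == client:
            self.queue.append(self.queue.pop(0))
            self._changed(client)

    def holder(self):
        return self.queue[0] if self.queue else None
```

Surrender here implements the simple "safe" policy of requeueing at the end; the alternative placements discussed above would change only the `surrender` method.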
- The object algorithm also may implement read and write operations on an object. However, read and write are only permitted whenever the requesting client holds a lock for the object at issue (i.e., is first in the queue).
- In certain embodiments, the tail of a waiting queue for an object may be sorted with respect to the total order of the references.
- For example, the waiting order in a queue may be dictated by the references of the clients therein, e.g., so that the oldest client is located at the front and the youngest at the rear.
- However, this may be unsafe in certain environments, such as one where only a finite set of references is provided. This potential problem may be overcome where the set of references is large enough and time-out primitives are used to deal with starvation.
- This particular method is not always preferred, but may be more efficient in cases where negotiation concerning objects is relatively fast as compared to the time it takes to run out of references. However, clients which do not get served in time should be caused to restart their requests, which may entail substantial overhead for certain individual clients, whereas other clients may be served much faster. In other embodiments, the ordering in a queue is simply based upon when the various requests arrived.
- The interface for the client is activated whenever a client needs some object(s) on which to perform an operation (e.g., read or write). Access to the object(s) is requested by the client interface. After all objects required by a client to perform its transaction have been locked, the client is notified that the operation may take place. After performing the operation and completing the transaction, the client interface is addressed by the client, and the interface releases (unlocks) the locks on all of the objects which it had acquired for the client.
- A lock can be requested, surrendered or canceled. Once a lock has been granted to a client, that client may read/write on the object. Following completion of its transaction, a client typically unlocks all of the locks utilized for the transaction, and the client is removed from the corresponding lock request queues.
- The client interface may be constantly updated when the queue of an object that it has requested changes. This information is stored by the client interface and, as such, the client interface easily detects when all objects are assigned to it (when all objects are assigned to it, the client can begin/complete its transaction).
- The queue information that objects provide to clients has respective version numbers and the identity of a client attached thereto. Clients keep track of version numbers and, whenever information is received with a version number smaller than a version number of earlier received information, this information may be ignored because the version numbers indicate that it is old or outdated. This guards against the potential of a client detecting deadlock based upon outdated information. However, in other embodiments, older version numbers may still be considered in WFG graphs, so as to guard against the possibility of deadlock not being detected as a result of delays in the network.
- When a client sends a surrender request to a particular object, it preferably ignores all information coming from the surrendered object until information received from that object carries the identity of the surrendering client.
- In other words, the client algorithm may utilize the client identity attached to a return message from an object as acknowledgement of that client's surrender request. This provides for safe operation in a case where a client surrenders and outdated information arrives stating, e.g., that the client just received access.
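The two client-side filtering rules above — discard notifications with lower version numbers, and ignore a surrendered object until a notification names the surrendering client — can be sketched together. The class, its fields, and the notification shape are illustrative assumptions.

```python
# Client-side notification filter sketch: suppresses outdated versions and
# pre-acknowledgement messages from an object that was just surrendered.

class NotificationFilter:
    def __init__(self, me):
        self.me = me
        self.latest = {}          # object -> highest version seen so far
        self.await_ack = set()    # objects with an unacknowledged surrender

    def sent_surrender(self, obj):
        self.await_ack.add(obj)

    def accept(self, obj, version, inducing_client):
        """Return True if the notification should be processed."""
        if obj in self.await_ack:
            if inducing_client != self.me:
                return False             # still waiting for our surrender ack
            self.await_ack.discard(obj)  # our own identity: this is the ack
        if version <= self.latest.get(obj, 0):
            return False                 # outdated information; ignore it
        self.latest[obj] = version
        return True
```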
- From the stored information, the client interface may compute the so-called wait-for-graph (WFG).
- This graph expresses which client(s) waits for which other client(s) to get access to an object, as described above and illustrated in the drawings. If a client's graph contains a cycle, then a deadlock situation exists.
- The client interface for each client checks for cycles in the appropriate graph(s) and, if such a cycle exists, it computes which of the clients in this cycle holds the lock for an object which has the largest reference value/number (since there exists a total order of the references, this is uniquely defined). If the client interface does not represent this largest reference itself, it waits for more object information to come.
- If the client interface represents this largest reference, it sends a surrender message to the object it holds that caused the deadlock situation.
- In this embodiment, the client with the largest reference is selected from among the clients which hold a lock in the deadlock scenario.
- In other embodiments, the client with the smallest reference may be selected to surrender a lock in order to resolve the deadlock.
- Other suitable techniques may also be used.
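The detection-and-selection step above might be sketched as follows, assuming the WFG is stored as a simple "waits-for" mapping; the function names and the `holds_lock`/`ref` callables are assumptions, not the patent's interface.

```python
# Sketch: find a cycle in a localized wait-for graph, then pick the
# lock-holding client with the largest reference on that cycle as the
# one that surrenders (a smallest-reference policy would use min()).

def find_cycle(wfg, start):
    """wfg maps client -> client it waits for; returns a cycle or None."""
    seen, node = [], start
    while node in wfg:
        if node in seen:
            return seen[seen.index(node):]   # the cyclic tail of the walk
        seen.append(node)
        node = wfg[node]
    return None                              # walk ended: no cycle from start

def choose_victim(cycle, holds_lock, ref):
    """Largest-reference lock holder on the cycle surrenders its lock."""
    holders = [c for c in cycle if holds_lock(c)]
    return max(holders, key=ref) if holders else None
```

Under the largest-reference policy, with a cycle C1→C2→C3→C1 where all three hold locks and references are ordered C1 < C2 < C3, C3 would be the one to surrender; other embodiments, as noted above, pick the smallest instead.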
- For this purpose, a client or the client algorithm stores the relationship between each object and the clients waiting for that particular object. It is typically clear which object is waited for in a deadlock scenario, since the client that surrenders may very well hold other objects which are not involved in the deadlock scenario.
- Alternatively, a client may choose not to store all of this information, but instead surrender all objects that the client holds in order to resolve a deadlock. This latter approach is safe but inefficient.
- In certain instances, a client interface may compute that surrendering to the end of the queue is not the most efficient method of surrendering, but this may require extra information from the application for which the algorithm is used. For example, if it is known that every client is at most claiming three objects at a time, this information could very well be used to compute more optimal surrendering positions.
- a client interface When a client interface does not encounter a cycle in the graph, it reports the graph information to other clients which may be interested in the information (e.g., clients involved in transaction relating to the transaction of the notifying/reporting client), as illustrated at step 357 in Figure 13. Note that all clients only have part of the information of the "global" wait-for-graph. By spreading the information to other clients, one ensures at least one client obtains the "global" wait-for-graph relating to a deadlock (if one occurs) as its local graph. Here, one has several possible ways in which to spread the information around.
- a first, and simplistic, way of spreading the information is for a client to send its entire wait-for-graph to all clients that occur in its own graph. This may be performed in certain embodiments of this invention. However, in other embodiments, client only sends a portion of its wait-for-graph to another client if that client does not have that information and is not getting it from another client. The latter can be achieved by, for example, clients only sending information to clients having smaller reference numbers, or alternatively, only sending information to clients having larger reference numbers.
- Another strategy for spreading information which reduces the number of messages sent and therefore is efficient in nature when computation is cheaper than sending, is for a client to only send its wait- for-graph information (entire wait-for-graph or portions thereof) to other client(s) which the sending client is waiting for and to include only information regarding clients that wait for the sending client.
- a client could send all clients which wait for it information regarding all clients which it is waiting for.
- any of the above-listed systems/methods for determining when and how much information to send other clients may be used in different embodiments of this invention. Moreover, any of these information spreading strategies may be improved by computing which information that a client wishes to send to another client is or should already be stored by the another client. In such a manner, a reduced amount of superfluous information is sent.
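The message-economical strategy described above — send only to clients one is waiting for, and include only the edges in which some client waits for the sender — might be sketched as follows. The edge-set representation of the local WFG and the function name are assumptions.

```python
# Sketch of one spreading strategy: recipients are the clients `me` waits
# for; the payload is the edges in which some client waits for `me`.

def info_to_spread(me, wfg):
    """wfg is a set of (waiter, holder) edges known locally to `me`.

    Returns {recipient: payload_edges}.
    """
    recipients = {holder for (waiter, holder) in wfg if waiter == me}
    payload = sorted((w, h) for (w, h) in wfg if h == me)
    return {r: payload for r in recipients}
```

In the Fig. 5 scenario, C1's knowledge that C3 waits for C1 would thus be sent to C2, the client C1 waits for — exactly the "missing piece" a downstream client needs to close the global wait-for cycle.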
- Wait-for-graphs may be implemented using standard data structures and cycle-computing algorithms. Such graphs are known in the art, and several well-known algorithms exist for them; any such algorithm will suffice. However, specific to the client interface algorithm herein, a node in a wait-for-graph may include the client identity, the object that the client is locking, and whether the client has obtained the lock to this object. All clients need not be represented in a graph; only those that hold a lock. However, the relationship which expresses a client waiting for another client may take all known clients into account. Whenever a cycle is detected, from among the nodes on this cycle, the client with, for example, the largest reference number surrenders the object it locks.
- FIG. 11 is a flow chart illustrating certain steps taken by the object algorithm in accordance with an embodiment of this invention.
- The flowchart starts 201 with an incoming communication from a requesting client. If the client desires to read 203 data from the object, then it is determined at 205 whether or not the requesting client has a lock on that object. If so, reading of a value is permitted 207 and a reply is sent to the client. If not locked, then reading is not permitted 209 and a corresponding reply is sent to the client indicating the same. Thereafter, the state of the algorithm goes back to start 211. If the client desires to write 213 data to the object, then it is determined at 215 whether or not the requesting client has a lock on that object.
- Depending upon that determination, writing is or is not permitted and a corresponding reply is sent to the client. Thereafter, the state of the algorithm goes back to start state 211.
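The read/write guard in the flowchart above can be sketched as a single dispatch helper. The queue/list and one-slot store representations, the reply tuples, and the function name are all illustrative assumptions.

```python
# Sketch of the Fig. 11 access guard: an object honours a read (203) or
# write (213) only when the requesting client currently holds the lock,
# i.e. is first in the object's request queue (checks 205/215).

def handle_access(queue, client, op, store, value=None):
    """Reply to a read/write request against one object."""
    if not queue or queue[0] != client:
        return ("denied", None)           # no lock, no access (e.g. 209)
    if op == "read":
        return ("ok", store["value"])     # step 207: reading permitted
    store["value"] = value                # holder only: writing permitted
    return ("ok", value)
```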
- If the received communication is a request for a lock on that object 225, it is determined whether it is a fresh or new request 227. If not (e.g., an old or outdated request), then the algorithm proceeds back to start. If so, then the version number is increased 229 and the lock request is put in the lock request queue for that object 231. Thereafter, the object notifies 233 related clients of the updated status of the lock queue and lock status, including the new version number in such notification as discussed above.
- After notification 233, the system returns to the start state 211, 201.
- If the received communication is a surrender request 235, where a client is attempting to surrender a lock on the object, it is first determined 237 whether that client does indeed have a lock on that object. If not, the algorithm proceeds to start. If so, then the version number is increased 239 and the client is removed from the front of the lock queue 241 and placed back into the lock queue at a different location (e.g., at the end of the lock request line).
- Thereafter, the object notifies 243 related clients of the updated status of the lock queue and lock status of that object, including the new version number in such notification.
- After notification 243, the system returns to the start state.
- In certain embodiments, the object at issue may notify all clients currently in that queue of the updated status, while in other embodiments the determination as to which clients to notify may be made in a different manner.
- FIG. 12 illustrates steps taken by the client aspect of the algorithm according to an embodiment of this invention.
- Starting 301 occurs when the client at issue needs an object(s) on which to perform an operation (i.e., to complete a transaction or task).
- For each needed object, a lock request is sent out 305 to that object and that object is added to the stored set of objects that are needed by that client 307.
- The client then awaits locks on the requested objects 309.
- Upon receipt of lock information, a determination 311 is made as to whether the client has all of its requested or needed locks. If not, the algorithm proceeds again to start. If so, then the client reports 313 that it has all locks necessary to complete its transaction.
- When lock information is received, the client determines 317 whether it is a report that all locks have been obtained. If so, the algorithm proceeds to start and the client can complete its transaction. If not, then it is determined 319 whether the received lock information is outdated. If so, the algorithm proceeds to start and the information is ignored. If not, then it is determined 321 whether the client is still waiting for an acknowledgement from that object (e.g., as discussed above, when a client sends a surrender request, it may ignore all communications from that object until receiving acknowledgement of the object's receipt of the same). If so, the algorithm proceeds to start. If not, then the received lock information is added to a stored list of received lock information 323 (e.g., added to the locked list and/or local WFG).
- FIG. 13 illustrates steps taken by the client aspect of the algorithm in handling the lock information 329, 341 in accordance with an embodiment of this invention.
- Upon receiving locking information, the locking information in the local WFG is analyzed 343. It is then determined 345 whether a deadlock has been detected (e.g., whether a cycle is detected in the WFG). If not, a computation is performed 355 to identify all related (i.e., indirect) clients, and locking information is sent to those clients 357.
- The various methods for determining which clients to send information to, and how much information to send, are described above.
- If deadlock is detected at 345, then all information is removed on the object to be surrendered 347.
- The client then determines 349 whether it should be the client that sends out the surrender request on that object. If not, the algorithm proceeds back to start, as another client will be doing so in order to resolve the deadlock. If so, then the client sends a surrender request to a target object 351 and waits for acknowledgement 353. In such a manner, the deadlock is broken.
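The Fig. 13 decision flow might be orchestrated as below. The collaborator callables stand in for the steps described above and are hypothetical; step 347 (dropping state for the surrendered object) is folded into the `surrender` callable here for brevity.

```python
# Sketch of the Fig. 13 lock-information handler: analyse the local WFG
# (343/345); with no cycle, spread information to related clients
# (355/357); with a cycle, surrender only if this client is the chosen
# victim (349/351), otherwise wait for another client to resolve it.

def handle_lock_info(me, find_cycle, pick_victim, surrender, spread):
    cycle = find_cycle()        # steps 343/345: scan the local WFG
    if cycle is None:
        spread()                # steps 355/357: inform related clients
        return "spread"
    if pick_victim(cycle) != me:
        return "wait"           # step 349 "no": another client surrenders
    surrender()                 # step 351 (then await acknowledgement, 353)
    return "surrendered"
```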