EP1058882A1 - Leasing for failure detection - Google Patents

Leasing for failure detection

Info

Publication number
EP1058882A1
Authority
EP
European Patent Office
Prior art keywords
server
lease
client
resource
failure
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP99908214A
Other languages
German (de)
English (en)
French (fr)
Inventor
James H. Waldo
Ann M. Wollrath
Robert Scheifler
Kenneth C. R. C. Arnold
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Microsystems Inc
Original Assignee
Sun Microsystems Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US09/044,916 (US6016500A)
Application filed by Sun Microsystems Inc
Publication of EP1058882A1
Status: Withdrawn

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/0223User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F12/023Free address space management
    • G06F12/0253Garbage collection, i.e. reclamation of unreferenced memory
    • G06F12/0261Garbage collection, i.e. reclamation of unreferenced memory using reference counting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/465Distributed object oriented systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/547Remote procedure calls [RPC]; Web services
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/28Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/42Loop networks
    • H04L12/427Loop networks with decentralised control
    • H04L12/433Loop networks with decentralised control with asynchronous transmission, e.g. token ring, register insertion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/46Indexing scheme relating to G06F9/46
    • G06F2209/462Lookup

Definitions

  • This invention generally relates to data processing systems and, more particularly, to leasing for failure detection and recovery in data processing systems.
  • resource management involves allocating resources (e.g., memory) in response to requests as well as deallocating resources at appropriate times, for example, when the requesters no longer require the resources.
  • the resources contain data referenced by computational entities (e.g., applications, programs, applets, etc.) executing in the computers.
  • each resource has a unique "handle" by which the resource can be referenced.
  • the handle may be implemented in various ways, such as an address, an array index, a unique value, a pointer, etc.
  • Resource management is relatively simple for a single computer because the events indicating when resources can be reclaimed, such as when applications no longer refer to them or after a power failure, are easy to determine. Resource management for distributed systems connecting multiple computers is more difficult because applications in several different computers may be using the same resource.
  • Disconnects in distributed systems can lead to the improper and premature reclamation of resources or to the failure to reclaim resources.
  • multiple applications operating on different computers in a distributed system may refer to resources located on other machines. If connections between the computers on which resources are located and the applications referring to those resources are interrupted, then the computers may reclaim the resources prematurely. Alternatively, the computers may maintain the resources in perpetuity, despite the extended period of time that applications failed to access the resources.
  • distributed garbage collection is a facility, provided by a language or runtime system for distributed systems, that automatically manages resources used by an application or group of applications running on different computers in a network.
  • garbage collection uses the notion that resources can be freed for future use when they are no longer referenced by any part of an application.
  • Distributed garbage collection extends this notion to the realm of distributed computing, reclaiming resources when no application on any computer refers to them.
  • Distributed garbage collection must maintain integrity between allocated resources and the references to those resources. In other words, the system must not be permitted to deallocate or free a resource when an application running on any computer in the network continues to refer to that resource.
  • This reference-to-resource binding, referred to as "referential integrity," does not guarantee that the reference will always grant access to the resource to which it refers. For example, network failures can make such access impossible.
  • the integrity guarantees that if the reference can be used to gain access to any resource, it will be the same resource to which the reference was first given.
  • Referential integrity failures and memory leaks often result from disconnections between applications referencing the resources and the garbage collection system managing the allocation and deallocation of those resources. For example, a disconnection in a network connection between an application referring to a resource and a garbage collection system managing that resource may prevent the garbage collection system from determining whether and when to reclaim the resource. Alternatively, the garbage collection system might mistakenly determine that, since an application has not accessed a resource within a predetermined time, it may collect that resource.
  • a number of techniques have been used to improve the distributed garbage collection mechanism by attempting to ensure that such mechanisms maintain referential integrity without memory leaks.
  • One conventional approach uses a form of reference counting, in which a count is maintained of the number of applications referring to each resource.
  • the garbage collection system may reclaim the resource.
  • Such a reference counting scheme only works, however, if the resource is created with a corresponding reference counter.
  • the garbage collection system in this case increments the resource's reference count as additional applications refer to the resource, and decrements the count when an application no longer refers to the resource.
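  • For illustration only, such a reference-counting table might be sketched in Java as follows; the patent does not prescribe an implementation, and all names here are hypothetical:

        import java.util.HashMap;
        import java.util.Map;

        // Illustrative sketch: a count per resource handle, reclaiming at zero.
        class ReferenceCountingCollector {
            private final Map<String, Integer> counts = new HashMap<>();

            // An additional application now refers to the resource.
            synchronized void addReference(String handle) {
                counts.merge(handle, 1, Integer::sum);
            }

            // An application no longer refers to the resource.
            synchronized void dropReference(String handle) {
                Integer count = counts.get(handle);
                if (count == null) return;          // unknown handle: nothing to do
                if (count <= 1) {
                    counts.remove(handle);
                    reclaim(handle);                // count reached zero: reclaim
                } else {
                    counts.put(handle, count - 1);
                }
            }

            private void reclaim(String handle) {
                // free the resource for future use
            }
        }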
  • some conventional reference counting schemes include "keep-alive" messages, which are also referred to as "ping back" messages.
  • applications in the network send messages to the garbage collection system overseeing resources, indicating that the applications can still communicate. These messages prevent the garbage collection system from dropping references to resources. Failure to receive such a "keep-alive" message indicates that the garbage collection system can decrement the reference count for a resource and, thus, when the count reaches zero, the garbage collection system may reclaim the resource. This, however, can still result in the premature reclamation of resources when reference counts reach zero because "keep-alive" messages were lost to network failures, which violates the referential integrity requirement.
  • Another proposed method for resolving referential integrity problems in garbage collection systems is to maintain not only a reference count but also an identifier corresponding to each computational entity referring to a resource. See A. Birrell, et al., "Distributed Garbage Collection for Network Objects," No. 116, Digital Systems Research Center, December 15, 1993. This method suffers from the same problems as the reference counting schemes. Further, this method requires the addition of unique identifiers for each computational entity referring to each resource, adding overhead that would unnecessarily increase communication within distributed systems and add storage requirements (i.e., the list of identifiers corresponding to applications referring to each resource).
  • referential integrity is guaranteed without costly memory leaks by leasing resources for a period of time during which the parties in a distributed system, for example, an application holding a reference to a resource and the garbage collection system managing that resource, agree that the resource and a reference to that resource will be guaranteed. At the end of the lease period, the guarantee that the reference to the resource will continue lapses, allowing the garbage collection system to reclaim the resource. Because the application holding the reference to the resource and the garbage collection system managing the resource agree to a finite guaranteed lease period, both can know when the lease and, therefore, the guarantee, expires. This guarantees referential integrity for the duration of a reference lease and avoids the concern of failing to free the resource because of network errors.
  • the leasing technique is used for failure detection and recovery.
  • a client requests a lease from a server, and after the lease is granted, the client performs various processing with respect to a resource managed by the server.
  • When the lease is about to expire, the client renews the lease. If for any reason this renewal fails, it is because either the server experienced an error or the communication mechanism transferring data between the client and the server experienced an error. In either case, the client has detected an error.
  • if the lease expires without being renewed or cancelled, the server knows that either the client or the communication mechanism experienced an error. In this case, the server has detected an error.
  • the alternative embodiment also provides for failure recovery.
  • the client provides the server with a failure recovery routine
  • the server provides the client with a failure recovery routine.
  • both the client and the server invoke the failure recovery routine of the other to perform failure recovery for each other.
  • both the client and the server then go to a prenegotiated state. That is, the client and the server, through a negotiation beforehand, have decided upon a state that they will go to upon experiencing an error, such as rolling back all changes made to the resource.
  • both the client and the server know the state of the system after a failure and can continue processing accordingly.
  • FIG. 1 is a flow diagram of the steps performed by the application call processor according to an implementation of the present invention
  • FIG. 2 is a flow diagram of the steps performed by the server call processor to process dirty calls according to the implementation of the present invention
  • FIG. 3 is a flow diagram of the steps performed by the server call processor to process clean calls according to the implementation of the present invention
  • FIG. 4 is a flow diagram of the steps performed by the server call processor to initiate a garbage collection process according to the implementation of the present invention
  • FIG. 5 is a diagram of a preferred flow of calls within a distributed processing system
  • FIG. 6 is a block diagram of the components of the implementation of a method invocation service according to the present invention.
  • FIG. 7 is a diagram of a distributed processing system that can be used in an implementation of the present invention.
  • FIG. 8 is a diagram of the individual software components in the platforms of the distributed processing system according to the implementation of the present invention.
  • FIG. 9 is a diagram of a data processing system suitable for use with an alternative embodiment of the present invention.
  • FIG. 10 depicts a flowchart of the steps performed by a client when requesting a lease from a server consistent with an alternative embodiment of the present invention.
  • FIG. 11 depicts a flowchart of the steps performed by a server when a client requests a lease consistent with an alternative embodiment of the present invention.
  • the present invention may be implemented by computers organized in a conventional distributed processing system architecture.
  • the architecture for and procedures to implement this invention are not conventional, because they provide a distributed garbage collection scheme that ensures referential integrity and eliminates memory leaks.
  • a method invocation (MI) component located in each of the computers in the distributed processing system implements the distributed garbage collection scheme of this invention.
  • the MI component may consist of a number of software modules preferably written in the Java™ programming language.
  • When an application in the distributed processing system obtains a reference to a distributed resource (by a name lookup, as a return value from some other call, or by another method) and seeks to access the resource, the application makes a call to the resource or to an MI component managing the resource. That MI component, called a managing MI component, keeps track of the number of outstanding references to the resource. When the number of references to a resource is zero, the managing MI component can reclaim the resource.
  • the count of the number of references to a resource is generally called the "reference count" and the call that increments the reference count may be referred to as a "dirty call."
  • a dirty call can include a requested time interval, called a lease period, for the reference to the resource.
  • Upon receipt of the dirty call, the managing MI component sends a return call indicating the period for which the lease was granted.
  • the managing MI component thus tracks the lease period for those references as well as the number of outstanding references. Consequently, when the reference count for a resource goes to zero or when the lease period for the resource expires, the managing MI component can reclaim the resource.
  • An application call processor in an MI component performs the steps of the application call procedure 100 illustrated in FIG. 1.
  • the server call processor in the managing MI component performs the steps of the procedures 200, 300, and 400 illustrated in FIGs. 2-4, respectively.
  • the managing MI component's garbage collector performs conventional procedures to reclaim resources previously bound to references in accordance with instructions from the server call processor. Accordingly, the conventional procedures of the garbage collector will not be explained.
  • FIG. 1 is a flow diagram of the procedure 100 that the application call processor of the MI component uses to handle application requests for references to resources managed by the same or another MI component located in the distributed processing system.
  • the application call processor sends a dirty call, including the resource's reference and a requested lease period, to the managing MI component for the resource (step 110).
  • the dirty call may be directed to the resource itself or to the managing MI component.
  • the application call processor then waits for and receives a return call from the managing MI component (step 120).
  • the return call includes a granted lease period during which the managing MI component guarantees that the reference of the dirty call will be bound to its resource. In other words, the managing MI component agrees not to collect the resource corresponding to the reference of a dirty call for the grant period. If the managing MI component does not provide a grant period, or rejects the request for a lease, then the application call processor will have to send another dirty call until it receives a grant period.
  • the application call processor monitors the application's use of the reference and, either when the application explicitly informs the application call processor that the reference is no longer required or when the application call processor makes this determination on its own (step 130), the application call processor sends a clean call to the managing MI component (step 140). In a manner similar to the method used for dirty calls, the clean call may be directed to the referenced resource and the managing MI component will process the clean call. Subsequently, the application call processor eliminates the reference from a list of references being used by the application (step 150).
  • If the application is not yet done with the reference (step 130) but the application call processor determines that the grant period for the reference is about to expire (step 160), then the application call processor repeats steps 110 and 120 to ensure that the reference to the resource is maintained by the managing MI component on behalf of the application. A sketch of this client-side flow follows.
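  • As a sketch only (the patent defines the flow of FIG. 1, not an API), the application call processor's loop might look like this in Java; the names, the renewal margin, and the polling style are all assumptions:

        import java.util.function.BooleanSupplier;

        // Hypothetical interface to the managing MI component.
        interface ManagingMI {
            // Returns the granted lease period in milliseconds, or 0 if no grant was given.
            long dirtyCall(String reference, long requestedLeaseMillis);
            void cleanCall(String reference);   // may not be refused, only acknowledged
        }

        class ApplicationCallProcessor {
            private static final long RENEWAL_MARGIN_MILLIS = 1_000;   // assumed margin
            private final ManagingMI managing;

            ApplicationCallProcessor(ManagingMI managing) { this.managing = managing; }

            void useResource(String reference, long requestedLeaseMillis,
                             BooleanSupplier stillNeeded) throws InterruptedException {
                long granted = 0;
                while (granted == 0) {                               // steps 110-120: retry until granted
                    granted = managing.dirtyCall(reference, requestedLeaseMillis);
                }
                long expiresAt = System.currentTimeMillis() + granted;

                while (stillNeeded.getAsBoolean()) {                 // step 130
                    if (System.currentTimeMillis() >= expiresAt - RENEWAL_MARGIN_MILLIS) {
                        long renewed = managing.dirtyCall(reference, requestedLeaseMillis); // step 160 -> 110/120
                        if (renewed > 0) expiresAt = System.currentTimeMillis() + renewed;
                    }
                    Thread.sleep(50);                                // application work happens here
                }
                managing.cleanCall(reference);                       // step 140: drop the reference
            }
        }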
  • the MI component's server call processor performs three main procedures: (1) handling dirty calls; (2) handling incoming clean calls; and (3) initiating a garbage collection cycle to reclaim resources at the appropriate time.
  • FIG. 2 is a flow diagram of the procedure 200 that the MI component's server call processor uses to handle requests to reference resources, i.e., dirty calls, that the MI software component manages. These requests come from application call processors of MI components in the distributed processing system, including the application call processor of the same MI component as the server call processor handling requests.
  • the server call processor receives a dirty call (step 210).
  • the server call processor determines an acceptable grant period (step 220).
  • the grant period may be the same as the requested lease period or some other time period.
  • the server call processor determines the appropriate grant period based on a number of conditions, including the amount of the resource required and the number of other grant periods previously granted for the same resource.
  • If the server call processor determines that a resource has not yet been allocated for the reference of a dirty call (step 230), the server call processor allocates the required resource (step 240).
  • the server call processor then increments a reference count corresponding to the reference of a dirty call (step 250), sets the acceptable grant period for the reference-to-resource binding (step 260), and sends a return call to an application call processor with the grant period (step 270). In this way, the server call processor controls incoming dirty calls regarding references to resources under its control.
  • Applications can extend leases by sending dirty calls with an extension request before current leases expire. As shown in procedure 200, a request to extend a lease is treated just like an initial request for a lease. An extension simply means that the resource will not be reclaimed for some additional interval of time, unless the reference count goes to zero. A server-side sketch of this dirty-call handling follows.
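  • In this illustrative sketch of the dirty-call handling of FIG. 2, the grant-period policy (capping requests at an arbitrary maximum) is an assumption, as are all names:

        import java.util.HashMap;
        import java.util.Map;

        class ServerCallProcessor {
            static final class Binding {          // one entry per reference-to-resource binding
                int referenceCount;
                long leaseExpiresAt;
            }
            private final Map<String, Binding> bindings = new HashMap<>();

            // Steps 210-270 of FIG. 2: grant (or extend) a lease and bump the count.
            synchronized long handleDirtyCall(String reference, long requestedMillis) {
                long grantedMillis = Math.min(requestedMillis, 60_000);  // step 220: assumed policy
                Binding b = bindings.computeIfAbsent(reference, r -> {
                    allocateResource(r);                                 // steps 230-240
                    return new Binding();
                });
                b.referenceCount++;                                      // step 250
                b.leaseExpiresAt = Math.max(b.leaseExpiresAt,
                        System.currentTimeMillis() + grantedMillis);     // step 260
                return grantedMillis;                                    // step 270: return call
            }

            private void allocateResource(String reference) {
                // allocate the backing resource
            }
        }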
  • the MI component's server call processor also handles incoming clean calls from application call processors. When an application in the distributed processing system no longer requires a reference to a resource, it informs the MI component managing the resource for that reference so that the resource may be reclaimed for reuse.
  • Fig. 3 is a flow diagram of the procedure 300 with the steps that the MI component's server call processor uses to handle clean calls.
  • When the server call processor receives a clean call with a reference to a resource that the MI component manages (step 310), the server call processor decrements a corresponding reference count (step 320). The clean call may be sent to the resource, with the server call processor monitoring the resource and executing the procedure 300 to process the call. Subsequently, the server call processor sends a return call to the MI component that sent the clean call to acknowledge receipt (step 330). In accordance with this implementation of the present invention, a clean call to drop a reference may not be refused, but it must be acknowledged.
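  • Continuing the hypothetical ServerCallProcessor sketch above, clean-call handling (FIG. 3) decrements the count and always acknowledges:

        // Steps 310-330 of FIG. 3: a clean call may not be refused, only acknowledged.
        synchronized boolean handleCleanCall(String reference) {
            Binding b = bindings.get(reference);
            if (b != null && b.referenceCount > 0) {
                b.referenceCount--;                 // step 320
            }
            return true;                            // step 330: acknowledgment return call
        }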
  • the server call processor also initiates a garbage collection cycle to reclaim resources for which it determines that either no more references are being made to the resource or that the agreed lease period for the resource has expired.
  • the procedure 400 shown in FIG. 4 includes a flow diagram of the steps that the server call processor uses to initiate a garbage collection cycle.
  • the server call processor monitors reference counts and granted lease periods and determines whether a reference count is zero for a resource managed by the MI component, or the grant period for a reference has expired (step 410). When either condition exists, the server call processor initiates garbage collection (step 420) of that resource. Otherwise, the server call processor continues monitoring the reference counts and granted lease periods.
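  • Completing the same sketch, the trigger of FIG. 4 can be expressed as a periodic check over the bindings (the scheduling of the check itself is left out, and the names remain illustrative):

        // Steps 410-420 of FIG. 4: reclaim when the count is zero or the lease expired.
        synchronized void collectEligibleResources() {
            long now = System.currentTimeMillis();
            bindings.entrySet().removeIf(entry -> {
                Binding b = entry.getValue();
                boolean eligible = b.referenceCount == 0 || b.leaseExpiresAt <= now;
                if (eligible) {
                    reclaim(entry.getKey());        // step 420: hand off to the garbage collector
                }
                return eligible;
            });
        }

        private void reclaim(String reference) {
            // free the backing resource
        }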
  • FIG. 5 is a diagram illustrating the flow of calls among MI components within the distributed processing system.
  • Managing MI component 525 manages the resources 530 by monitoring the references to those resources 530 (see garbage collect 505). Because managing MI component 525 manages the resources, the server call processor of managing MI component 525 performs the operations of this call flow description.
  • FIG. 5 also shows that applications 510 and 540 have corresponding MI components 515 and 545, respectively.
  • Each of the applications 510 and 540 obtains a reference to one of the resources 530 and seeks to obtain access to one of the resources 530 such that a reference is bound to the corresponding resource.
  • applications 510 and 540 invoke their corresponding MI components 515 and 545 to send dirty calls 551 and 571, respectively, to managing MI component 525.
  • Because MI components 515 and 545 handle application requests for access to resources 530 managed by another MI component, such as managing MI component 525, the application call processors of MI components 515 and 545 perform the operations of this call flow description.
  • In response, managing MI component 525 sends return calls 552 and 572 to MI components 515 and 545, respectively.
  • The return calls include granted lease periods for the references of dirty calls 551 and 571.
  • FIG. 5 also shows MI components 515 and 545 sending clean calls 561 and 581, respectively, to managing MI component 525.
  • Clean calls 561 and 581 inform managing MI component 525 that applications 510 and 540, respectively, no longer require access to the resource specified in the clean calls 561 and 581.
  • Managing MI component 525 responds to clean calls 561 and 581 with return calls 562 and 582, respectively.
  • Return calls 562 and 582 differ from return calls 552 and 572 in that return calls 562 and 582 are simply acknowledgments from MI component 525 of the received clean calls 561 and 581.
  • Both applications 510 and 540 may request access to the same resource.
  • application 510 may request access to "RESOURCE(1)" while application 540 was previously granted access to that resource.
  • MI component 525 handles this situation by making the resource available to both applications 510 and 540 for agreed lease periods. Thus, MI component 525 will not initiate a garbage collection cycle to reclaim "RESOURCE(1)" until either applications 510 and 540 have both dropped their references to that resource or the latest agreed period has expired, whichever event occurs first.
  • the present invention also permits an application to access a resource after it sent a clean call to the managing MI component dropping the reference to the resource. This occurs because the resource is still referenced by another application or the reference's lease has not yet expired so the managing MI component 525 has not yet reclaimed the resource. The resource, however, will be reclaimed after a finite period, either when no more applications have leases or when the last lease expires.
  • FIG. 6 is a block diagram of the modules of an MI component 600 according to an implementation of the present invention.
  • MI component 600 can include a reference component 605 for each reference monitored, application call processor 640, server call processor 650, and garbage collector 660.
  • Reference component 605 preferably constitutes a table or comparable structure with reference data portions 610, reference count 620, and grant period register 630.
  • MI component 600 uses the reference count 620 and grant period 630 for each reference specified in a corresponding reference data portion 610 to determine when to initiate garbage collector 660 to reclaim the corresponding resource.
  • Application call processor 640 is the software module that performs the steps of procedure 100 in FIG. 1.
  • Server call processor 650 is the software module that performs the steps of procedures 200, 300, and 400 in FIGs. 2-4.
  • Garbage collector 660 is the software module that reclaims resources in response to instructions from the server call processor 650, as explained above.
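  • In code, the per-reference bookkeeping of FIG. 6 might be represented by a structure such as the following (illustrative only; the patent describes the components, not a data layout):

        // Hypothetical rendering of reference component 605.
        class ReferenceComponent {
            Object referenceData;   // reference data portion 610: identifies the reference
            int referenceCount;     // reference count 620: outstanding dirty calls
            long grantPeriodEnd;    // derived from grant period register 630: when the lease lapses
        }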
  • FIG. 7 illustrates a distributed processing system 50 which can be used to implement the present invention.
  • distributed processing system 50 contains three independent and heterogeneous platforms 700, 800, and 900 connected in a network configuration represented by the network cloud 55.
  • the composition and protocol of the network configuration represented in FIG. 7 by the cloud 55 is not important as long as it allows for communication of the information between platforms 700, 800 and 900.
  • the use of just three platforms is merely for illustration and does not limit the present invention to the use of a particular number of platforms.
  • the specific network architecture is not crucial to this invention. For example, another network architecture that could be used in accordance with this invention would employ one platform as a network controller to which all the other platforms would be connected.
  • platforms 700, 800 and 900 each include a processor 710, 810, and 910 respectively, and a memory, 750, 850, and 950, respectively. Included within each processor 710, 810, and 910, are applications 720, 820, and 920, respectively, operating systems 740, 840, and 940, respectively, and MI components 730, 830, and 930, respectively.
  • Applications 720, 820, and 920 can be programs that are either previously written and modified to work with the present invention, or that are specially written to take advantage of the services offered by the present invention. Applications 720, 820, and 920 invoke operations to be performed in accordance with this invention.
  • MI components 730, 830, and 930 correspond to the MI component 600 discussed above with reference to FIG. 6.
  • Operating systems 740, 840, and 940 are standard operating systems tied to the corresponding processors 710, 810, and 910, respectively.
  • the platforms 700, 800, and 900 can be heterogenous.
  • platform 700 has an UltraSparc® microprocessor manufactured by Sun Microsystems Corp. as processor 710 and uses a Solaris® operating system 740.
  • Platform 800 has a MIPS microprocessor manufactured by Silicon Graphics Corp. as processor 810 and uses a Unix operating system 840.
  • platform 900 has a Pentium microprocessor manufactured by Intel Corp. as processor 910 and uses a Microsoft Windows 95 operating system 940.
  • the present invention is not so limited and could accommodate homogenous platforms as well.
  • Sun, Sun Microsystems, Solaris, Java, and the Sun Logo are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States and other countries. UltraSparc and all other SPARC trademarks are used under license and are trademarks of SPARC International, Inc. in the United States and other countries. Products bearing SPARC trademarks are based upon an architecture developed by Sun Microsystems, Inc.
  • Memories 750, 850, and 950 serve several functions, such as general storage for the associated platform. Another function is to store applications 720, 820, and 920, MI components 730, 830, and 930, and operating systems 740, 840, and 940 before execution by the respective processor 710, 810, and 910. In addition, portions of memories 750, 850, and 950 may constitute shared memory available to all of the platforms 700, 800, and 900 in network 50.
  • the present invention may be implemented using a client/server model.
  • the client generates requests, such as the dirty calls and clean calls, and the server responds to requests.
  • Each of the platforms in FIG. 7 preferably includes both client components and server components.
  • FIG. 8, which is a block diagram of a client platform 1000 and a server platform 1100, applies to any two of the platforms 700, 800, and 900 in FIG. 7.
  • Platforms 1000 and 1100 contain memories 1050 and 1150, respectively, and processors 1010 and 1110, respectively.
  • the elements in the platforms 1000 and 1100 function in the same manner as similar elements described above with reference to FIG. 7.
  • processor 1010 executes a client application 1020
  • processor 1110 executes a server application 1120.
  • Processors 1010 and 1110 also execute operating systems 1040 and 1140, respectively, and MI components 1030 and 1130, respectively.
  • MI components 1030 and 1130 each include a server call processor 1031 and 1131, respectively, an application call processor 1032 and 1132, respectively, and a garbage collector 1033 and 1133, respectively.
  • Each of the MI components 1030 and 1130 also contains reference components, including reference data portions 1034 and 1134, respectively, reference counts 1035 and 1135, respectively, and grant period registers 1036 and 1136, respectively, for each reference that the respective MI component 1030 or 1130 monitors.
  • Application call processors 1032 and 1132 represent the client service and communicate with server call processors 1031 and 1131, respectively, which represent the server service. Because platforms 1000 and 1100 contain a server call processor, an application call processor, a garbage collector, and reference components, either platform can act as a client or a server.
  • platform 1000 is designated the client platform and platform 1100 is designated as the server platform.
  • client application 1020 obtains references to distributed resources and uses MI component 1030 to send dirty calls to the resources managed by MI component 1130 of server platform 1100.
  • server platform 1100 may be executing a server application 1120.
  • Server application 1120 may also use MI component 1130 to send dirty calls, which may be handled by MI component 1130 when the resources of those dirty calls are managed by MI component 1130.
  • server application 1120 may use MI component 1130 to send dirty calls to resources managed by MI component 1030.
  • server call processor 1031, garbage collector 1033, and reference count 1035 for MI component 1030 of client platform 1000 are not active and are therefore presented in FIG. 8 as shaded.
  • application call processor 1132 of MI component 1130 of the server platform 1100 is shaded because it is also dormant.
  • application call processor 1032 sends a dirty call, which server call processor 1131 receives.
  • the dirty call includes a requested lease period.
  • Server call processor 1131 increments the reference count 1135 for the reference in the dirty call and determines a grant period.
  • server call processor 1131 sends a return call to application call processor 1032 with the grant period.
  • Application call processor 1032 uses the grant period to update recorded grant period 1036, and to determine when the resource corresponding to the reference of its dirty call may be reclaimed.
  • Server call processor 1131 also monitors the reference counts and grant periods corresponding to references for resources that it manages. When one of its reference counts 1135 is zero, or when the grant period 1136 for a reference has expired, whichever event occurs first, server call processor 1131 may initiate the garbage collector 1133 to reclaim the resource corresponding to the reference that has a reference count of zero or an expired grant period.
  • the leased-reference scheme does not require that the clocks on the platforms 1000 and 1100 involved in the protocol be synchronized.
  • the scheme merely requires that they have comparable periods of increase. Leases do not expire at a particular time, but rather expire after a specific time interval. As long as there is approximate agreement on the interval, platforms 1000 and 1100 will have approximate agreement on the granted lease period. Further, since the timing for the lease is, in computer terms, fairly long, minor differences in clock rate will have little or no effect.
  • the transmission time of the dirty call can affect the protocol. If MI component 1030 holds a lease to a reference and waits until just before the lease expires to request a renewal, the lease may expire before MI component 1130 receives the request. If so, MI component 1130 may reclaim the resource before receiving the renewal request. Thus, when sending dirty calls, the sender should add a time factor to the requested lease period in consideration of transmission time to the platform handling the resource of a dirty call, so that renewal dirty calls may be made before the lease period for the resource expires.
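  • A sketch of this sender-side timing rule follows; padding by twice the estimated round-trip time is an assumed policy, not something the patent specifies:

        // Hypothetical helper: send the renewal dirty call early enough that it
        // arrives before the lease lapses, allowing for transmission time.
        final class RenewalTimer {
            static long renewalDeadline(long leaseGrantedAtMillis,
                                        long grantedPeriodMillis,
                                        long estimatedRoundTripMillis) {
                long safetyMargin = 2 * estimatedRoundTripMillis;   // assumed padding
                return leaseGrantedAtMillis + grantedPeriodMillis - safetyMargin;
            }
        }

    A renewal dirty call would then be sent once the current time reaches this deadline.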
  • a distributed garbage collection scheme ensures referential integrity and eliminates memory leaks by providing granted lease periods corresponding to references to resources in the distributed processing system such that when the granted lease periods expire, so do the references to the resources.
  • the resources may then be collected. Resources may also be collected when they are no longer referenced by any process in the distributed processing system, as determined by the reference counters assigned to the references for those resources.
  • the leasing technique described above relates to garbage collection.
  • an alternative embodiment of the present invention can be used with leasing to detect failures and to perform error recovery.
  • Under a heartbeat scheme, a client sends messages to a server at periodic intervals, indicating that the client is alive. If at one of the intervals the server does not receive a message, the server knows that a failure has occurred in either the client or the communication mechanism (e.g., the network) that transfers data between the client and the server.
  • Under a timeout scheme, a predetermined amount of time is set, and if the server has not received any communication from the client within that time period, the server knows that either the client or the communication mechanism has experienced a failure.
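  • For illustration, a minimal server-side detector of this conventional kind might look like this (all names hypothetical):

        // Hypothetical timeout/heartbeat detector: if no client message arrives
        // within the window, assume the client or the network has failed.
        class TimeoutDetector {
            private final long timeoutMillis;
            private volatile long lastHeardFrom = System.currentTimeMillis();

            TimeoutDetector(long timeoutMillis) { this.timeoutMillis = timeoutMillis; }

            void onClientMessage() {                 // heartbeat or other message received
                lastHeardFrom = System.currentTimeMillis();
            }

            boolean clientPresumedFailed() {
                return System.currentTimeMillis() - lastHeardFrom > timeoutMillis;
            }
        }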
  • With either technique, however, both the client and the server can be left not knowing the state of the system after the failure.
  • For example, when the client is a program and the server is a file system manager, the client may request that a write operation be performed on a particular file managed by the server.
  • Although conventional failure detection systems will detect such a failure when one occurs, the client does not know whether the failure occurred before or after the write operation was performed on the file. In this situation, the client cannot determine the state of the system.
  • An alternative embodiment of the present invention solves this problem by using the leasing technique for failure detection and recovery.
  • the client requests a lease from the server and performs various processing with respect to the resource managed by the server during the granted lease period.
  • When the lease is about to expire, the client renews it. If for any reason the renewal fails, it is because either the server or the communication mechanism experienced a failure. In either case, the client has detected a failure.
  • On the server side, if the lease expires without the client either renewing the lease or performing an explicit cancellation, the server knows that either the client or the communication mechanism experienced a failure, and thus the server has detected a failure.
  • Upon detecting a failure, the client and the server perform a recovery by proceeding to a prenegotiated state. That is, the client and the server prenegotiate a state that they will go to upon experiencing or detecting a failure. For instance, in the above file system example, the client and server may prenegotiate to perform a roll back if a failure is detected.
  • a "roll back" refers to putting the client, server, and any related entities, such as the file, in the state they were in before the failure occu ⁇ ed.
  • the server restores the file to its state just before the write operation was performed, and the client knows, after the failure is detected, that the write operation has not been performed, so the client can continue its processing accordingly.
  • the client and server may roll back even further.
  • the client and server may prenegotiate that whenever an error occurs during file manipulation, the roll back brings the client and server back to the state they were in before the client had the lease (e.g., before the file was created).
  • the roll back may instead go back to a predetermined checkpoint in the manipulation of the file.
  • This prenegotiation between the client and server to determine the after-failure system state can be performed in a number of ways, including a handshake, reading a predesignated file, or simply instructing the client and server at development time to always go to a given after-failure system state.
  • the client may provide the server with a failure recovery routine, and likewise, the server may provide the client with a failure recovery routine.
  • both the client and the server invoke each other's failure recovery routine to perform failure recovery.
  • If the server has experienced a failure, once the client detects the failure, the client invokes the server's recovery routine, which performs recovery on the server. For example, the recovery routine may restart the server and send a message to the system administrator.
  • Likewise, if the client has experienced a failure, the server invokes the client's recovery routine, thus performing failure recovery on the client.
  • Because the client and server recover each other, system management is performed on a distributed basis. That is, instead of a centralized manager performing system management, as in some conventional systems, the alternative embodiment uses the leasing technique for failure detection and recovery to distribute the system management processing, so that clients can perform recovery on a server and the server can perform recovery on its clients.
  • the alternative embodiment can be used in any client-server relationship, including operation in a distributed system where the client and server are located on separate machines communicating via a network or where the client and server are on the same machine.
  • a distributed system suitable for use by the alternative embodiment is the exemplary distributed system described in copending U.S. Patent Application No. , entitled "Dynamic Lookup
  • Storage devices have many storage locations containing various logical groupings of data that may be used by more than one program. These logical groupings may take the form of files, databases, or documents.
  • the leasing of storage locations allows access (e.g., read and write access) to the storage locations for a pre-negotiated amount of time. It is immaterial to the leasing of storage locations what kind of data is contained in the storage locations or whether the storage locations contain any data at all. Also, the leasing of storage locations can be applied on different levels of storage, such as database fields, files, blocks of storage, or actual storage locations.
  • When using a lease for a group of storage locations containing the data for a file, a program ("the client") requests a lease from the file system manager ("the server") to access the group of storage locations for a period of time ("the lease period"). Depending on availability, priority, and other factors, the server either denies the request or grants a lease period. The lease period granted may be either the entire lease period requested or some portion of it. Once a client receives a lease, the client may access the group of storage locations for the duration of the lease period.
  • the client may request an exact lease period.
  • the server only grants the lease if the lease period would be the entire lease period requested, as opposed to a portion of it.
  • Each storage location may have an associated limiting parameter, such as an access parameter or a privilege parameter.
  • the access parameter determines the type of access the server supports for that storage location. For example, a storage location may be defined as read-only access. In this case, the server only allows read access for a subsequently granted lease for that particular storage location. Conversely, an attempt by the client to write to that storage location would not be permitted by the server.
  • Other potential storage location access parameters may include write access, allocation access, re-allocation access, and sub-block access (i.e., for large blocks of storage).
  • the associated privilege parameter specifies the privilege level the client must have before a lease will be granted.
  • the server may use the privilege parameter to prioritize competing lease requests. In other words, when the server has multiple outstanding lease requests for the same storage location, it may prioritize the requests based on the privilege level of the clients making the request.
  • the alternative embodiment also supports concurrent access to a group of storage locations by granting multiple, concurrent leases to the same storage locations. For example, if a particular group of storage locations' access parameter specifies "read" access, the server can grant multiple concurrent leases to those storage locations without breaching the integrity of the storage locations. Concurrent leases could also be applied, for example, to large files: the server could merely grant leases to smaller sub-blocks of the file without compromising the integrity of the larger file.
  • the server returns to the client an object, including methods for determining the duration of the lease, for renewing the lease, for canceling the lease, and for performing failure recovery.
  • the object is an instance of a class that may be extended in many ways to offer more functionality, but the basic class is defined in the Java programming language as follows (the renew and recover signatures shown here are reconstructions based on the method descriptions below):

        interface Lease {
            obj FileHandle;
            public long getDuration();
            public void cancel() throws UnknownLeaseException, RemoteException;
            public void renew(long renewDuration)
                throws LeaseDeniedException, UnknownLeaseException, RemoteException;
            public void recover() throws RemoteException;
        }
  • This class contains a number of methods, including the getDuration method, the cancel method, the renew method, and the recover method.
  • the "getDuration" method provides the client with the length of the granted lease period. This period represents the most recent lease granted by the server. It is the client's responsibility, however, to determine the amount of time remaining on the lease.
  • the "renew" method permits the client to renew the lease, asking for more time, without having to re-initiate the original lease request.
  • Situations where the client may desire to renew the lease include when the original lease proves to be insufficient (i.e., the client requires additional use of the storage location), or when only a partial lease (i.e., less than the requested lease) was granted.
  • the client may use the renew method to request an additional lease period, or the client may continually invoke the renew method multiple times until many additional lease periods are granted.
  • the renew method has no return value. If the renewal is granted, the new lease period will be reflected in the lease object on which the call was made. If the server is unable or unwilling to renew the lease, the reason is set forth in the lease object on which the call was made.
  • the client invokes the "cancel" method when the client wishes to cancel the lease.
  • invocation of the cancel method allows the server to reclaim the storage locations so that other programs may access them. Accordingly, the cancel method ensures that the server can optimize the use of the storage locations in the distributed system. It should be noted that if the lease expires without an explicit cancellation by the client, the server assumes an error occurred.
  • the "recover" method is provided by the server so that the client can perform failure recovery on the server.
  • error recovery may include restarting the server.
  • FIG. 9 depicts a data processing system 9000 suitable for use by an alternative embodiment of the present invention.
  • the data processing system 9000 includes a computer system 9001 connected to the Internet 9002.
  • the computer system 9001 includes a memory 9003, a secondary storage device 9004, a central processing unit (CPU) 9006, an input device 9008, and a video display 9010.
  • the memory 9003 further includes an operating system 9012 and a program 9014, the client.
  • the operating system 9012 contains a file system manager 9016, the server, that manages files 9018 on the secondary storage device 9004.
  • the secondary storage device 9004 also includes a Java™ space 9019.
  • the client 9014 requests access to one or more of the files 9018 by requesting a lease from the server 9016.
  • the server 9016 may either choose to grant or deny the lease as further described below.
  • the Java space 9019 is an object repository used by programs within the data processing system 9000 to store objects. Programs use the Java space 9019 to store objects persistently as well as to make them accessible to other devices on the network. Java spaces are described in greater detail in co-pending U.S. Patent Application No. 08/971,529, entitled "Database System Employing Polymorphic Entry and Entry Matching," assigned to a common assignee, filed on November 17, 1997, which is incorporated herein by reference. One skilled in the art will appreciate that data processing system 9000 may contain additional or different components.
  • Figure 10 depicts a flowchart of the steps performed by the client when requesting a lease from the server.
  • the first step performed by the client is to send a request for a lease to the server (step 10002).
  • This request is a function call with a number of parameters, including (1) the requested storage locations the client wishes to lease, (2) the desired lease period, (3) an exact lease indicator, (4) the type of access the client desires, (5) the client's privilege, and (6) an object containing a recover method.
  • This method contains code for performing error recovery for the client.
  • the requested storage locations parameter is an indication of the storage locations to be leased.
  • the desired lease period contains an amount of time for which the client wants to utilize the storage locations.
  • the exact lease indicator contains an indication of whether an exact lease is being requested or whether a lease of less than the requested amount will suffice.
  • the type of access requested indicates the kind of storage location access the client desires. The types of access include read access, write access, allocation access, re-allocation access, and sub-block access (i.e., for large blocks of storage).
  • the privilege field indicates the privilege level of the user or the client. To form a valid request, the client request must contain both the requested storage locations and the desired lease period; a sketch of such a request follows.
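  • Gathered into a single signature, the request might look like the following sketch, which returns the Lease object defined earlier; the patent names the parameters but not a concrete API, so every type and identifier here is a placeholder:

        // Placeholder types standing in for concepts the patent describes.
        enum AccessType { READ, WRITE, ALLOCATION, RE_ALLOCATION, SUB_BLOCK }
        interface RecoverHandler { void recover(); }        // carries the client's recover method
        final class StorageLocations { /* identifies the locations to lease */ }
        class LeaseDeniedException extends Exception { }

        interface FileServer {
            // Only the storage locations (1) and the desired period (2) are mandatory.
            Lease requestLease(
                StorageLocations locations,   // (1) storage locations to lease
                long desiredPeriodMillis,     // (2) desired lease period
                boolean exactLease,           // (3) whether only the full period is acceptable
                AccessType access,            // (4) type of access desired
                int privilegeLevel,           // (5) the client's privilege
                RecoverHandler clientRecover  // (6) object containing a recover method
            ) throws LeaseDeniedException;
        }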
  • the first scenario in which a client requests a lease occurs when a file is created.
  • a "create" command is used to create the file and also generates a lease request to the server for access to the file.
  • the second scenario occurs when a client desires access to either existing storage locations or a file already having an existing lease (i.e., in the case of concurrent leases).
  • After sending the request, the client receives a lease object from the server (step 10004).
  • the lease object contains various information, as described above, including the file handle, the getDuration method, the renew method, the cancel method, and the recover method.
  • After receiving the lease object, the client utilizes the file (step 10005). Next, the client determines if it has completed its use of the file (step 10006). If so, the client invokes the cancel method on the lease object to explicitly cancel the lease (step 10007). By invoking this method, the lease is canceled by the server without the server believing that a failure occurred.
  • Otherwise, the client determines if the lease is about to expire (step 10008). The client performs this step by invoking the getDuration method and determining whether the remaining time is within a predetermined threshold. If the lease is not about to expire, processing continues to step 10005. However, if the lease is about to expire, the client sends a renew request to the server (step 10009). In this step, the client invokes the renew method on the lease object. After invoking the renew method, the client determines if the renew request was successful (step 10010). In this step, the client determines if the renew request was successful by whether the renew method returned successfully. If so, processing continues to step 10005.
  • If the renew request was not successful, the client invokes the recover method on the lease object (step 10012). Because the renew request did not complete successfully, the client knows that a failure occurred and thus needs to perform error recovery by invoking the recover method. The recover method then performs recovery on the server.
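  • Put together, steps 10002 through 10012 suggest a client loop like the following sketch, written against the Lease listing above; the renewal threshold, the renewal duration, and the broad exception handling are assumptions:

        import java.util.function.BooleanSupplier;

        class LeaseClient {
            // Hypothetical client-side loop for FIG. 10.
            void useLeasedFile(Lease lease, long renewDurationMillis,
                               BooleanSupplier doneWithFile) throws Exception {
                final long threshold = 1_000;           // assumed "about to expire" margin
                long expiresAt = System.currentTimeMillis() + lease.getDuration();
                while (!doneWithFile.getAsBoolean()) {  // steps 10005-10006: use the file
                    if (expiresAt - System.currentTimeMillis() < threshold) {   // step 10008
                        try {
                            lease.renew(renewDurationMillis);                   // step 10009
                            expiresAt = System.currentTimeMillis() + lease.getDuration();
                        } catch (Exception renewFailure) {                      // step 10010 failed
                            lease.recover();            // step 10012: recover the server
                            return;                     // a failure was detected; stop here
                        }
                    }
                }
                lease.cancel();                         // step 10007: explicit cancellation
            }
        }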
  • FIG. 11 depicts a flowchart of the steps performed by the server in accordance with an alternative embodiment of the present invention.
  • the first step performed by the server is to access the Java space 9019 (step 11002).
  • the server maintains a Java space in which it stores all objects received during a lease request. These objects are stored in the Java space so that if the server detects a failure, it may access the Java space and perform recovery by invoking the recover methods on the objects. Furthermore, the objects are stored persistently so if the server experiences a failure and crashes, when the server is restarted, it may invoke the recover method on each object in the Java space, reflecting all of the outstanding leases at the time of the server failure.
  • the server accesses the Java space containing all objects, if any, received from clients as part of lease requests. If there are any objects in the Java space, a failure must have occurred during the processing of the server.
  • the server invokes the recover method on each of the objects in the Java space (step 11004).
  • the server performs this recovery by invoking the recover method for each client that had a lease.
  • These recover methods may, for example, restart the clients and return them to a prenegotiated state, like the state they were in before requesting the lease.
  • the server deletes all of the objects from the Java space (step 11006). After a recovery has been performed, the objects are no longer needed.
  • After deleting the objects, the server receives a lease request from one of the clients (step 11008). After receiving the lease request, the server stores the object received in this request into the Java space (step 11010). By storing the object in the Java space, which stores objects persistently, if a failure occurs, the server can access the Java space and invoke the recover method on the object to perform error recovery for the client.
  • After storing the object in the Java space, the server grants the lease request by returning an object with the methods described above, including a recover method for the server (step 11012).
  • the server determines whether it has received a renew request from the client (step 11014). If the renew request has been received, the server renews the lease (step 11017). If, however, a renew has not been received, the server determines if a cancel request has been received by the client invoking the cancel method (step 11015). If the client invoked the cancel method, the server cancels the lease by deleting the object stored in step 11010 from the Java space, and if this is the last outstanding lease on the file, the server deletes the file (step 11016).
  • the server determines if the lease has expired (step 11018). If the lease has not expired, processing continues to step 11014. However, if the lease has expired, the server knows that a failure has occurred and therefore invokes the recover method on the object in the Java space for the client with the lease that terminated (step 11020). After invoking the recover method, the server deletes this object because it is no longer needed (step 11022).
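  • The server-side bookkeeping of FIG. 11 might be sketched as follows; the in-memory map stands in for the persistent Java space, and all names are illustrative:

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        class LeaseServer {
            interface ClientRecovery { void recover(); }   // client-supplied recover method
            static final class Record { ClientRecovery recovery; long expiresAt; }

            // Stands in for Java space 9019; the real design stores these persistently.
            private final Map<String, Record> space = new ConcurrentHashMap<>();

            // Steps 11002-11006: after a restart, recover every outstanding lease holder.
            void recoverAfterRestart() {
                space.values().forEach(r -> r.recovery.recover());   // step 11004
                space.clear();                                       // step 11006
            }

            // Steps 11008-11012: store the client's recover object, then grant the lease.
            void grant(String leaseId, ClientRecovery recovery, long periodMillis) {
                Record r = new Record();
                r.recovery = recovery;
                r.expiresAt = System.currentTimeMillis() + periodMillis;
                space.put(leaseId, r);                               // step 11010
            }

            void renew(String leaseId, long periodMillis) {          // steps 11014, 11017
                Record r = space.get(leaseId);
                if (r != null) r.expiresAt = System.currentTimeMillis() + periodMillis;
            }

            void cancel(String leaseId) {                            // steps 11015-11016
                space.remove(leaseId);
            }

            // Steps 11018-11022: expiry without an explicit cancel means a failure occurred.
            void sweepExpired() {
                long now = System.currentTimeMillis();
                space.entrySet().removeIf(entry -> {
                    if (entry.getValue().expiresAt <= now) {
                        entry.getValue().recovery.recover();         // step 11020
                        return true;                                 // step 11022: delete the object
                    }
                    return false;
                });
            }
        }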

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Multi Processors (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
EP99908214A 1998-02-26 1999-02-17 Leasing for failure detection Withdrawn EP1058882A1 (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US7604898P 1998-02-26 1998-02-26
US76048P 1998-02-26
US09/044,916 US6016500A (en) 1996-10-11 1998-03-20 Leasing for failure detection
PCT/US1999/003398 WO1999044128A1 (en) 1998-02-26 1999-02-17 Leasing for failure detection
US44916 2002-01-15

Publications (1)

Publication Number Publication Date
EP1058882A1 true EP1058882A1 (en) 2000-12-13

Family

ID=26722148

Family Applications (1)

Application Number Title Priority Date Filing Date
EP99908214A Withdrawn EP1058882A1 (en) 1998-02-26 1999-02-17 Leasing for failure detection

Country Status (5)

Country Link
EP (1) EP1058882A1 (ja)
JP (1) JP2002505468A (ja)
CN (1) CN1298515A (ja)
AU (1) AU2770499A (ja)
WO (1) WO1999044128A1 (ja)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100428806C (zh) * 2003-12-26 2008-10-22 Huawei Technologies Co., Ltd. Alarm system and method thereof
CN100466557C (zh) * 2004-11-10 2009-03-04 Huawei Technologies Co., Ltd. Method for monitoring node faults in a communication network
CN117033092A (zh) * 2023-10-10 2023-11-10 Beijing Dadao Yunxing Technology Co., Ltd. Singleton service failover method and system, electronic device, and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4939638A (en) * 1988-02-23 1990-07-03 Stellar Computer Inc. Time sliced vector processing
US4979105A (en) * 1988-07-19 1990-12-18 International Business Machines Method and apparatus for automatic recovery from excessive spin loops in an N-way multiprocessing system
US5353343A (en) * 1992-04-30 1994-10-04 Rockwell International Corporation Telephonic switching system with a user controlled data memory access system and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO9944128A1 *

Also Published As

Publication number Publication date
AU2770499A (en) 1999-09-15
JP2002505468A (ja) 2002-02-19
WO1999044128A1 (en) 1999-09-02
CN1298515A (zh) 2001-06-06

Similar Documents

Publication Publication Date Title
US6016500A (en) Leasing for failure detection
US6237009B1 (en) Lease renewal service
US6421704B1 (en) Method, apparatus, and product for leasing of group membership in a distributed system
US6247026B1 (en) Method, apparatus, and product for leasing of delegation certificates in a distributed system
US6728737B2 (en) Method and system for leasing storage
EP1058882A1 (en) Leasing for failure detection
EP1057105B1 (en) Method and system for leasing storage
EP1057106B1 (en) Method, apparatus, and product for leasing of group membership in a distributed system
KR20010041295A (ko) Leasing for failure detection

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20000908

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): DE FR GB IE NL SE

17Q First examination report despatched

Effective date: 20040212

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20080402