GB2339932A - Local resource registration in a distributed transaction processing coordinating server - Google Patents


Info

Publication number
GB2339932A
Authority
GB
United Kingdom
Prior art keywords
coordinator
server
resource
log
transaction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB9815445A
Other versions
GB9815445D0 (en)
Inventor
Amanda Elizabeth Chessell
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Priority to GB9815445A
Publication of GB9815445D0
Publication of GB2339932A

Classifications

    • G06F9/466 Transaction processing (G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06F ELECTRIC DIGITAL DATA PROCESSING > G06F9/46 Multiprogramming arrangements)
    • G06F11/1471 Saving, restoring, recovering or retrying involving logging of persistent data for recovery (G06F11/14 Error detection or correction of the data by redundancy in operation)
    • G06F11/1474 Saving, restoring, recovering or retrying in transactions

Description

AN APPARATUS, METHOD AND COMPUTER PROGRAM PRODUCT FOR
CLIENT/SERVER COMPUTING WITH ENHANCED REGISTRATION PROCESS FOR A RESOURCE
Field of the Invention
The invention relates to the field of client/server (also known as "distributed") computing, where one computing device ("the client") requests another computing device ("the server") to perform part of the client's work. The client and server can also be both located on the same physical computing device.
Background of the Invention
Client/server computing has become more and more important over the past few years in the information technology world. This type of distributed computing allows one machine to delegate some of its work to another machine that might be, for example, better suited to perform that work. For example, the server could be a high-powered computer running a database program managing the storage of a vast amount of data, while the client is simply a desktop personal computer (PC) which requests information from the database to use in one of its local programs.
The benefits of client/server computing have been even further enhanced by the use of a well-known computer programming technology called object-oriented programming (OOP), which allows the client and server to be located on different (heterogeneous) "platforms". A platform is a combination of the specific hardware/software/operating system/communication protocol which a machine uses to do its work. OOP allows the client application program and server application program to operate on their own platforms without worrying how the client application's work requests will be communicated and accepted by the server application. Likewise, the server application does not have to worry about how the OOP system will receive, translate and send the server application's processing results back to the requesting client application.
Details of how OOP techniques have been integrated with heterogeneous client/server systems are explained in US Patent No. 5,440,744 and European Patent Published Application No. EP 0 677,943 A2. These latter two publications are hereby incorporated by reference. However, an example of the basic architecture will be given below for contextual understanding of the invention's environment.
As shown in Fig. 1, the client computer 10 (which could, for example, be a personal computer having the IBM OS/2 operating system installed thereon) has an application program 40 running on its operating system ("IBM" and "OS/2" are trademarks of the International Business Machines Corporation). The application program 40 will periodically require work to be performed on the server computer 20 and/or data to be returned from the server 20 for subsequent use by the application program 40. The server computer 20 can be, for example, a high-powered mainframe computer running on IBM's MVS operating system ("MVS" is also a trademark of the IBM Corp.). For the purposes of the present invention it is irrelevant whether the requests for communications services to be carried out by the server are instigated by user interaction with the first application program 40, or whether the application program 40 operates independently of user interaction and makes the requests automatically during the running of the program.
When the client computer 10 wishes to make a request for the server computer 20's services, the first application program 40 informs the first logic means 50 of the service required. It may for example do this by sending the first logic means the name of a remote procedure along with a list of input and output parameters. The first logic means 50 then handles the task of establishing the necessary communications with the second computer 20 with reference to definitions of the available communications services stored in the storage device 60. All the possible services are defined as a cohesive framework of object classes 70, these classes being derived from a single object class. Defining the services in this way gives rise to a great number of advantages in terms of performance and reusability.
To establish the necessary communication with the server 20, the first logic means 50 determines which object class in the framework needs to be used, and then creates an instance of that object at the server, a message being sent to that object so as to cause that object to invoke one of its methods. This gives rise to the establishment of the connection with the server computer 20 via the connection means 80, and the subsequent sending of a request to the second logic means 90.
The second logic means 90 then passes the request on to the second application program 100 (hereafter called the service application) running on the server computer 20 so that the service application 100 can perform the specific task required by that request, such as running a data retrieval procedure. Once this task has been completed the service application may need to send results back to the first computer 10. The server application 100 interacts with the second logic means 90 during the performance of the requested tasks and when results are to be sent back to the first computer 10. The second logic means 90 establishes instances of objects, and invokes appropriate methods of those objects, as and when required by the server application 100, the object instances being created from the cohesive framework of object classes stored in the storage device 110.
Using the above technique, the client application program 40 is not exposed to the communications architecture. Further the service application 100 is invoked through the standard mechanism for its environment; it does not know that it is being invoked remotely.
The Object Management Group (OMG) is an international consortium of organizations involved in various aspects of client/server computing on heterogeneous platforms with distributed objects as is shown in Fig. 1.
The OMG has set forth published standards by which client computers (e.g. 10) communicate (in OOP form) with server machines (e.g. 20). As part of these standards, an Object Request Broker architecture (CORBA, the Common Object Request Broker Architecture) has been defined, which provides the object-oriented bridge between the client and the server machines. The ORB decouples the client and server applications from the object-oriented implementation details, performing at least part of the work of the first and second logic means 50 and 90 as well as the connection means 80.
As part of the CORBA software structure, the OMG has set forth standards related to "transactions" and these standards are known as the OTS or Object Transaction Service. See, e.g., CORBA Object Transaction Service Specification 1.0, OMG Document 94.8.4. Computer implemented transaction processing systems are used for critical business tasks in a number of industries. A transaction defines a single unit of work that must either be fully completed or fully purged without action. For example, in the case of a bank automated teller machine from which a customer seeks to withdraw money, the actions of issuing the money, reducing the balance of money on hand in the machine and reducing the customer's bank balance must all occur or none of them must occur. Failure of one of the subordinate actions would lead to inconsistency between the records and the actual occurrences.
Distributed transaction processing involves a transaction that affects resources at more than one physical or logical location. In the above example, a transaction affects resources managed at the local automated teller device as well as bank balances managed by a bank's main computer. Such transactions involve one particular client computer (e.g, 10) communicating with one particular server computer (e.g., 20) over a series of client requests which are processed by the server. The OMG's OTS is responsible for co-ordinating these distributed transactions.
Usually, an application running on a client process begins a transaction which may involve calling a plurality of different servers, each of which will initiate a server process to make changes to its local database according to the instructions contained in the transaction. The transaction finishes by either committing the transaction (and thus all servers finalize the changes to their local databases) or aborting the transaction (and thus all servers "rollback" or ignore the changes to their local databases). To communicate with the servers during the transaction (e.g., instructing them to either commit or abort their part in the transaction) one of the processes involved must maintain state data for the transaction. This usually involves the process setting up a series of transaction state objects, one of which is a Coordinator object which coordinates the transaction with respect to the various server processes.
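The role of the transaction state data just described can be reduced to a minimal C++ model. All names here (Participant, Coordinator, registerParticipant) are illustrative assumptions, not the OTS interfaces: the coordinator simply records which participants joined the transaction and drives every one of them to the same outcome at the end.

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// A participant stands in for one server process touched by the transaction.
struct Participant {
    std::string name;
    bool committed = false;
    bool rolledBack = false;
    void commit()   { committed = true; }
    void rollback() { rolledBack = true; }
};

// The coordinator maintains the state data for the transaction: the list of
// registered participants, which it calls consistently at transaction end.
class Coordinator {
    std::vector<Participant*> participants_;
public:
    void registerParticipant(Participant* p) { participants_.push_back(p); }
    std::size_t participantCount() const { return participants_.size(); }
    // Ending the transaction gives every registered participant the same
    // outcome: all commit, or all roll back.
    void end(bool commit) {
        for (Participant* p : participants_) {
            if (commit) p->commit(); else p->rollback();
        }
    }
};
```

The design choice this models is the one the text describes: state lives in one process, and consistency follows from the coordinator holding the only list of who must be told the outcome.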
The basic software architecture involved in providing an implementation of the OTS is shown in Fig. 2. A client process 21 which wants to begin a transaction (e.g., to withdraw money from a bank account) locates a process which is capable of creating and holding the transaction objects that will maintain the state of the transaction. As the modern tendency is to create clients that are "thin" (and thus have only the minimum functionality), the client process 21 will usually not be able to maintain the transaction objects locally and must look for a server process for this purpose.
The OTS (or another service, such as the CORBA Lifecycle service) selects server A process 22 on which to create the transaction state objects 221 (which include the Coordinator object, Control object and Terminator object). Upon locating the server A process 22, client process 21 sends (arrow with encircled number 1) a message to server A process 22 to instruct server A process 22 to create the transaction state objects 221. The Control object (known in the OTS as CosTransactions::Control) provides access to the other two transaction state objects. The Terminator object (known in the OTS as CosTransactions::Terminator) is used to end the transaction. The Coordinator object (known in the OTS as CosTransactions::Coordinator) maintains a list, in local storage 222, of resource objects (known in the OTS as CosTransactions::Resource) that have made updates to their respective data during the transaction. This list is required so that the Coordinator object can consistently call the resource objects at the end of the transaction to command them to commit their transactional changes (make their local data changes final) or to rollback such changes (bring the local data back to the state it was in before the transaction started). A rollback would be necessary, for example, where the transaction could not finish because one of the resources was not working properly.
Server A process 22 then creates the transaction state objects 221 and sends a reply (arrow with encircled number 2) containing the transaction context to client 21. Client 21 then sends, for example, a debit bank account command (arrow with encircled number 3) to server B process 23 (the process containing the resource, for example, bank account, object 231 which the client process 21 wishes to withdraw money from). This latter command carries with it the transaction context supplied to the client 21 by the server A process 22. In this way, the resource object 231 in process 23 can register itself (arrow with encircled number 4) with the transaction objects 221 in process 22 so that the resource object 231 can be commanded (arrow with encircled number 5) to commit or rollback by the transaction state objects 221 at the end of the transaction.
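The registration flow of Fig. 2 (arrows 3 and 4) can be sketched roughly as follows. The types and the debitAccount helper are hypothetical, standing in for the CORBA machinery: the point is that the transaction context travels with each client request, so the receiving server can register its resource back with the coordinator named in that context.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Stand-in for the Coordinator object in server A process 22; it keeps the
// list of registered resources in local storage.
struct Coordinator {
    std::vector<std::string> registered;
    void registerResource(const std::string& addr) { registered.push_back(addr); }
};

// The transaction context supplied to the client by server A; the client
// forwards it with every transactional request.
struct TransactionContext {
    Coordinator* coordinator;
};

// Server B handling a transactional request (e.g., "debit bank account"):
// it performs the provisional data change, then registers the touched
// resource so the resource will later be told to commit or roll back.
void debitAccount(const TransactionContext& ctx, const std::string& resourceAddr) {
    // ... provisional update to the bank account data would happen here ...
    ctx.coordinator->registerResource(resourceAddr);
}
```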
In the above operation, when the transaction state objects 221 are created, they must log information about themselves and the transaction they represent in local storage 222, so that the transaction will be recoverable in case of a server failure which temporarily prevents the server A process 22 from continuing with the transaction.
As part of the transaction, the client process 21 then makes similar calls to server C process 24 (to access the resource object 241) and server D process 25 (to access the resource object 251). Server B process 23, in carrying out its part of the transaction, may need to call another server process, such as server E process 26, to access the resource objects 261, 262 and 263 located in process 26.
Since the number of server processes and resources involved in Fig. 2 is becoming large, the need for careful synchronization of all of the database changes involved becomes readily apparent. The usual way to achieve this synchronization is to carry out a two-phase commit process when the client 21 issues a command to end the transaction. The transaction objects 221 first command (phase 1) each of their directly registered resources (231, 241 and 251 in the Fig. 2 example) to prepare to commit their database changes. Phase 1 is also known as the prepare stage of a transaction, as the resources are being prepared for the finalization of their data changes, which will take place in phase 2.
Each of these resources then responds to the transaction objects 221 to indicate that it has prepared to commit its changes, and the resources will not allow any more changes to be made to the databases. This response is, in effect, a vote, signifying that this particular resource is voting that the transaction should be committed. After issuing their votes, the resources are then said to be sitting "in doubt" (also known as in a "prepared" state) waiting for the transaction objects 221 to give a synchronized final command (phase 2) to commit all database changes made during the transaction. This latter final command is only given if all resources have voted that the transaction should be committed.
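The two-phase commit just described can be sketched in a few lines; Vote, Resource and twoPhaseCommit are illustrative names, not the OTS API. Phase 1 gathers a vote from each registered resource, and phase 2 issues a single synchronized outcome that is "commit" only when every vote was a commit vote; a rollback vote from any one resource rolls back the whole transaction.

```cpp
#include <cassert>
#include <vector>

enum class Vote { Commit, Rollback };

struct Resource {
    Vote vote;                 // what this resource answers to "prepare"
    bool committed = false;
    bool rolledBack = false;
    Vote prepare() const { return vote; }   // phase 1: resource is now "in doubt"
    void commit()   { committed = true; }   // phase 2, all voted commit
    void rollback() { rolledBack = true; }  // phase 2, at least one rollback vote
};

// Returns true if the transaction committed, false if it rolled back.
bool twoPhaseCommit(std::vector<Resource>& resources) {
    bool allCommit = true;
    for (Resource& r : resources)           // phase 1: prepare, collect votes
        if (r.prepare() == Vote::Rollback) allCommit = false;
    for (Resource& r : resources) {         // phase 2: synchronized final command
        if (allCommit) r.commit(); else r.rollback();
    }
    return allCommit;
}
```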
Server B process 23, which has called another server process 26, would carry out its own two-phase commit protocol with respect to the resource objects 261, 262, and 263, as part of its participation in the main two phase commit protocol discussed above. That is, server B process 23 would send a prepare command to its directly registered resources 261, 262 and 263, and receive a vote from each of them, before server B process 23 sends a consolidated reply to server A process 22.
Rather than voting that a transaction be committed, a resource can also vote that a transaction should be rolled back. A rollback vote would be issued by a resource if that resource had a problem while making its data changes during the transaction (e.g., some type of write error had occurred while a resource was making a local data change). The receipt of a rollback vote from at least one resource will cause the transaction objects 221 to rollback the entire transaction. This is in keeping with the fact that a transaction is an all or nothing prospect: either all resource changes in a transaction are committed or none are.
Oftentimes, the resource objects are located in the same server process as the transaction objects. For example, such resource objects are often used to integrate updates to subordinate resource managers (SRMs) such as XA databases and SNA LU6.2 systems into a CORBA OTS transaction which is being carried out on a server process which contains both the transaction objects and the resource objects. As shown in Fig. 3, server A process 22 has a coordinator object 31 (which is one of the transaction state objects 221 of Fig. 2) and a local resource object 32.
Associated with the coordinator object 31 is a coordinator log object 33 and log section objects 34 and 35, all of which are used by the coordinator object 31 to log (or record) information concerning the transaction into temporary memory 331 (e.g., semiconductor memory). At a later time, the coordinator 31 will instruct the coordinator log 33 to "force write" the contents of the temporary memory into permanent memory (e.g., hard disk drive 222), so that such data can be recovered in the event that the server running the server A process 22 should fail (e.g., due to a thunderstorm) and then later be restarted (e.g., after the thunderstorm is over).
When a transaction is begun on server A process 22, the coordinator 31 sends a "create log section" command to coordinator log 33 (arrow with encircled numeral 1) which results in the coordinator log 33 sending a "create" command to log section 34 (arrow with encircled numeral 2) and a reference to the log section 34 being returned to the coordinator 31 (dotted line with encircled numeral 3). Then, the coordinator 31 sends an "add data" command (arrow with encircled numeral 4) to log section 34 instructing the log section to add state data concerning the transaction (e.g., recording the current stage the transaction is in) into the coordinator log 33 (arrow with encircled numeral 5). At this point, the state of the transaction is recorded in temporary (e.g., semiconductor) memory 331.
Then, as part of the transaction, a client that started the transaction calls a local resource object 32 (local to, meaning in the same process as, the coordinator object 31) to, for example, command the object 32 to deduct money from a bank account. The resource object 32 represents a subordinate resource manager, as discussed above. The local resource object 32 would then make a registration call to the coordinator 31 (arrow with encircled numeral 6) as discussed above, passing the resource 32's resource address to the coordinator 31. Coordinator 31 then sends another "create log section" command to the coordinator log 33 (arrow with encircled numeral 7) which results in the coordinator log 33 sending a "create" command to log section 35 (arrow with encircled numeral 8) and a reference to the log section 35 being returned to the coordinator 31 (dotted line with encircled numeral 9). Then, the coordinator 31 sends an "add data" command (arrow with encircled numeral 10) to log section 35 instructing the log section to add data concerning the resource address of the resource object 32 into the coordinator log 33 (arrow with encircled numeral 11).
Coordinator 31 then sends a "force write" command (arrow with encircled numeral 12) to log section 35 instructing log section 35 to inform coordinator log 33 (arrow with encircled numeral 13) to "force write" (arrow with encircled numeral 14) the contents of its temporary memory 331 into the permanent memory 222. By storing the address of this resource (eventually into permanent memory 222), this enables the transaction service to guarantee to call the resource 32 after the server running server A process 22 has failed (e.g., "crashed") and is restarted. Failure to do so could cause resources to wait indefinitely to be called, since the coordinator could have lost the resource's address information during the server crash. After this force write at arrow 14, the coordinator 31 sends a create command to recovery coordinator object 36 (arrow with encircled numeral 15), whose purpose is to hold the outcome of the transaction (commit or rollback) so that such information can be used to recreate a resource after a server crash.
Recovery coordinator 36 then passes (dotted line with encircled numeral 16) a reference to itself over to the resource 32, thus informing the resource 32 that the registration process is complete.
Thus, for such transaction servers (IBM's Component Broker Connector product, first announced in May 1997, is an example; "IBM" and "Component Broker Connector" are trademarks of IBM Corp.) the object reference (name/address) of each resource is force written to the log as the resource registers to ensure that the resource is called even if the server where the coordinator resides fails and is restarted. Arrows 6 to 16 discussed above would be repeated for each resource object used by the transaction. An alternative design is for the resource object 32 to write the recovery coordinator 36's address to permanent storage when the address is returned from the registration call. However, this also requires a force write for each resource object that is registering itself in a transaction.
In addition, the coordinator 31 force writes a message to the log whenever a resource object (e.g., 32) is called for the last time in the transaction. This prevents the coordinator (residing in a server that fails and restarts part way through a commit) from recalling a resource that has already deleted itself, which could lead to a segmentation violation error during recovery. In summary, according to the known state of the art, at least two force writes to permanent memory are required for each local resource that is involved in the transaction.
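The cost the inventor identifies can be made concrete with a small counting model. This is an illustrative sketch of the prior-art behaviour, not code from the patent: each local resource costs one force write when it registers (to persist its object reference) and one more when it is called for the last time, so the total grows as two force writes per resource per transaction.

```cpp
#include <cassert>
#include <cstddef>

// Minimal cost model of the prior-art coordinator log: every force write
// is an expensive synchronous flush to permanent storage.
struct CoordinatorLog {
    std::size_t forceWrites = 0;
    void forceWrite() { ++forceWrites; }
};

// Prior-art scheme: two force writes per registered local resource.
std::size_t priorArtForceWrites(std::size_t resourceCount) {
    CoordinatorLog log;
    for (std::size_t i = 0; i < resourceCount; ++i) {
        log.forceWrite();  // at registration: persist the resource's reference
        log.forceWrite();  // when the resource is called for the last time
    }
    return log.forceWrites;
}
```

Since each force write takes a finite amount of time, transaction throughput in this model degrades linearly with the number of participating local resources.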
The inventor has noted many inefficiencies with this state of the art transaction server.
Firstly, the requirement of at least two force writes per resource per transaction can prove very expensive, especially in terms of speed, since it takes a finite amount of time to carry out each force write. A common benchmark for measuring the performance of such a server is the speed at which transactions can be carried out, and thus a low speed server will not fare well in the market as compared to servers which are capable of processing transactions faster.
Secondly, each of the resource objects (e.g, 32) must be a managed object with a persistent reference, because the persistent reference is necessary to enable the resource object to be located by the coordinator.
Thus, each such managed resource object requires a cluster of other objects with which to interact in a management relationship, resulting in making such objects expensive to use.
Thirdly, the server described above requires the creation of a recovery coordinator 36, which is another managed object with a persistent reference, thus further adding to the expense involved.
Fourthly, so far we have only considered forced writes which are initiated by the coordinator 31, but each resource (e.g. 32) may also have to initiate a force write to memory 222 in order to store its persistent reference to the recovery coordinator 36 or information about the data updates that the resource represents, thus further increasing the total number of forced writes.
Summary of the Invention
According to a first aspect, the present invention provides a server for use in a client/server computing system which coordinates the processing of distributed transactions in the client/server computing system, the server comprising: a coordinator; a local resource; and a coordinator log; wherein the local resource is adapted to write data to the coordinator log.
Preferably, the local resource writes to the coordinator log data which the local resource needs to rebuild itself under direction of the coordinator in the event of a failure of the server and a subsequent restart of the server and the coordinator writes transaction status data to the coordinator log. Further preferably, the resource data and the transaction status data are force written together from the coordinator log to permanent storage.
Preferably, the local resource is used to update a subordinate resource manager. For example, the subordinate resource manager is an XA database or an SNA LU6.2 system.
Preferably, the coordinator is a coordinator object and the local resource is a local resource object.
Preferably, the coordinator log is embodied in semiconductor memory.
According to a second aspect, the invention provides a computer program product stored on a computer readable storage medium for, when run on a computer, coordinating the processing of distributed transactions in a client/server computing system, the computer program product having the program elements listed above with respect to the first aspect of the present invention.
According to a third aspect the present invention provides a method carried out in a server for use in a client/server computing system which coordinates the processing of distributed transactions in the client/server computing system, the method comprising steps of: a coordinator writing transaction state data to a coordinator log; a local resource writing data to the coordinator log; and the coordinator force writing the contents of the coordinator log to non-volatile storage.
By allowing the resource to write data to the coordinator's log, the present invention achieves a substantial reduction in the number of force writes to permanent memory. Specifically, there is no need for a force write to occur every time a resource registers, as was necessary in the prior art.
The resource no longer needs to be a managed object with a persistent object reference, since the resource's object reference no longer needs to be stored in the coordinator's log. This allows resource objects to be simple C++ objects, for example, as opposed to the more complex and expensive managed objects.
Further, there is no need for a recovery coordinator to be created since the lifecycle of the resource is now tied into the lifecycle of the transaction (by the resource storing in the coordinator log the data which the resource needs to rebuild itself). Thus, it is no longer necessary to check (via a recovery coordinator) whether the transaction is still active when the resource is recovered as the resource will only be recovered if there is still work to do on the transaction.
Because the present invention eliminates the need for a recovery coordinator, there is of course no need for each resource to force write the recovery coordinator's persistent reference to permanent storage, thus presenting a further savings in the total number of necessary force writes.
Brief Description of the Drawings
The invention will be better understood by the below description of preferred embodiments thereof to be read while referring to the following figures, in which:

Figure 1 is a block diagram of a well-known heterogeneous client/server architecture using object technology, in the context of which preferred embodiments of the present invention can be applied;

Figure 2 is a block diagram showing an implementation of an object-based transaction server which serves as background to the present invention;

Figure 3 is a block diagram showing a prior art implementation of a transaction server; and

Figure 4 is a block diagram showing a transaction server implementation according to a preferred embodiment of the present invention.
Detailed Description of the Preferred Embodiments
In Fig. 4, blocks which correspond to those in Fig. 3 will be given the same reference numerals as they had in Fig. 3 for purposes of clarity. Server A process 22, which is running on a server machine, has been called by a client process running on a client machine (both machines not shown) to begin and carry out a transaction on behalf of the client. Of course, the client and server processes can be running on the same machine as well.
The beginning steps (shown by lines with encircled arrows 1-7) are the same as those discussed above with respect to Fig. 3 and thus will not be repeated. The exception to this is that when resource 32 registers with coordinator 31 (line 6), it does not provide a persistent reference to itself along with the registration call, because resource object 32 is not a managed object with a persistent reference (as it was in the prior art of Fig. 3). Instead, resource object 32 is a simple C++ object.
Once the log section 35 is created, coordinator 31 sends a "set log section" command to resource 32 (arrow with encircled numeral 8) which informs the resource 32 of the log section object 35. Resource 32 then sends an "add data" command (arrow with encircled numeral 9) to log section 35 which in turn passes on this data to the coordinator log 33 (arrow with encircled numeral 10). At this point, the data is stored in temporary storage 331.
The client application may then go on to access further data that results in additional resources being registered with the coordinator 31 and writing to their own log sections (in the same way as just described for resource 32). Once the client application's work for the transaction is complete, the client application requests that the transaction commits (by sending a commit command to the coordinator 31). Coordinator 31 then sends a "force write" command to log section 35 (arrow with encircled numeral 11) which in turn passes on the command to the coordinator log 33 (arrow with encircled numeral 12). This results in the coordinator log 33 carrying out a force write of the data held in temporary memory 331 into permanent memory 222 (arrow with encircled numeral 13).
Thus, according to the preferred embodiment of the present invention, the registration process for a resource is greatly enhanced such that rather than returning a CosTransactions::RecoveryCoordinator object (e.g., recovery coordinator 36 in Fig. 3) to the resource, the resource is given access to a portion of the transaction log into which the resource can log (i.e., write) the data the resource needs to rebuild itself under the direction of the coordinator during server restart (e.g., if the resource is controlling updates to an XA database, the data the resource needs to rebuild itself could include the XA open string required to access the database).
Thus, the coordinator 31 does not issue a force write command for every resource that registers in the transaction. Instead, the resources are given access to the coordinator log 33 and a single force write for all resources that have registered during the transaction is issued by the coordinator as the transaction is ending. This greatly reduces the number of total force writes, as compared to the prior art.
There is thus no need for the coordinator to force write the resource's persistent object reference. Consequently, there is no need for the resource to even have a persistent object reference, thus making it possible for the resource to be a simple C++ object rather than a more complicated managed object with a persistent reference.
In the prior art, only the coordinator 31 was allowed to make "add data" calls to the log sections. With the preferred embodiment of the present invention, the local resource 32 also makes such "add data" calls (i.e., write commands) to the log sections. Specifically, the coordinator gives the resource a reference to the log section and then the resource directly calls the log section in order to add (i.e., write) data into the log.
Further, in the preferred embodiment, the transaction service no longer needs to keep the data held in the log file exactly in step with the persistent object service (POS), which manages the recreation of objects with persistent references, because the resource is using the same file store as the coordinator. Thus, the transaction status data which the coordinator writes to the log 33 and the resource data which the resource writes to the log 33 are forced to disk 222 together with a single command, ensuring that such data is always kept in step.
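The in-step property just described can be shown with a minimal sketch (names are again assumptions): because the coordinator's transaction status data and the resource's recovery data accumulate in the same buffer, a single force write command makes both durable together, or neither:

```cpp
#include <cassert>
#include <string>

// Illustrative only: one shared buffer, one force write, so transaction
// status records and resource recovery records reach disk 222 together.
struct SharedLog {
    std::string buffered;   // temporary storage
    std::string on_disk;    // permanent storage
    void add(const std::string& rec) { buffered += rec + "\n"; }
    void force() { on_disk += buffered; buffered.clear(); }  // the single command
};
```

If the server fails before `force()` runs, neither record survives; after it runs, both do. The two kinds of data can never be observed out of step, which is why the recovery coordinator of the prior art becomes unnecessary.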
Accordingly, there is no longer a need to include the recovery coordinator 36, which was required in the prior art to account for the fact that the two types of data may not always be synchronized with each other.
It should be clearly noted that the preferred embodiment described above is only one embodiment of the claimed invention and is the embodiment which is preferred as it achieves all of the advantages discussed above. However, other embodiments are clearly contemplated which achieve less than all of the advantages. For example, one alternative embodiment includes a local resource 32 which is a managed object having a persistent object reference but still shares use of the coordinator log with the coordinator.

Claims (18)

1. A server for use in a client/server computing system which coordinates the processing of distributed transactions in the client/server computing system, the server comprising:
a coordinator; a local resource; and a coordinator log; wherein the local resource is adapted to write data to the coordinator log.
2. The server of claim 1 wherein the local resource writes to the coordinator log data which the local resource needs to rebuild itself under direction of the coordinator in the event of a failure of the server and a subsequent restart of the server.
3. The server of claim 1 wherein the coordinator writes transaction status data to the coordinator log.
4. The server of claim 1:
wherein the local resource writes to the coordinator log resource data which the local resource needs to rebuild itself under direction of the coordinator in the event of a failure of the server and a subsequent restart of the server, and wherein the coordinator writes transaction status data to the coordinator log.
5. The server of claim 4 wherein the resource data and the transaction status data are force written together from the coordinator log to permanent storage.
6. The server of claim 1 wherein the local resource is used to update a subordinate resource manager.
7. The server of claim 6 wherein the subordinate resource manager is an XA database.
8. The server of claim 6 wherein the subordinate resource manager is an SNA LU6.2 system.
9. The server of claim 1 wherein the coordinator is a coordinator object and the local resource is a local resource object.
10. The server of claim 1 wherein the coordinator log is embodied in semiconductor memory.
11. A computer program product stored on a computer readable storage medium for, when run on a computer, coordinating the processing of distributed transactions in a client/server computing system, the computer program product comprising:
a coordinator; a local resource; and a coordinator log; wherein the local resource is adapted to write data to the coordinator log.
12. A method carried out in a server for use in a client/server computing system which coordinates the processing of distributed transactions in the client/server computing system, the method comprising steps of:
a coordinator writing transaction state data to a coordinator log; a local resource writing data to the coordinator log; and the coordinator force writing the contents of the coordinator log to non-volatile storage.
13. The method of claim 12 wherein the step of a local resource writing data to the coordinator log takes place as part of the resource's registration with a transaction service.
14. The method of claim 12 wherein the local resource is used to update a subordinate resource manager.
15. The method of claim 14 wherein the subordinate resource manager is an XA database.
16. The method of claim 14 wherein the subordinate resource manager is an SNA LU6.2 system.
17. The method of claim 12 wherein the coordinator is a coordinator object and the local resource is a local resource object.
18. The method of claim 12 wherein the coordinator log is embodied in semiconductor memory.
GB9815445A 1998-07-17 1998-07-17 Local resource registration in a distributed transaction processing coordinating server Withdrawn GB2339932A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB9815445A GB2339932A (en) 1998-07-17 1998-07-17 Local resource registration in a distributed transaction processing coordinating server


Publications (2)

Publication Number Publication Date
GB9815445D0 GB9815445D0 (en) 1998-09-16
GB2339932A true GB2339932A (en) 2000-02-09

Family

ID=10835612

Family Applications (1)

Application Number Title Priority Date Filing Date
GB9815445A Withdrawn GB2339932A (en) 1998-07-17 1998-07-17 Local resource registration in a distributed transaction processing coordinating server

Country Status (1)

Country Link
GB (1) GB2339932A (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0758114A1 (en) * 1995-02-28 1997-02-12 Ntt Data Communications Systems Corporation Cooperative distributed system, and journal and recovery processings therein


Also Published As

Publication number Publication date
GB9815445D0 (en) 1998-09-16

Similar Documents

Publication Publication Date Title
US5923833A (en) Restart and recovery of OMG-compliant transaction systems
US20030005172A1 (en) Apparatus, method and computer program product for client/server computing with improved correspondence between transaction identifiers when supporting subordinate resource manager(s)
US6799188B2 (en) Transaction processing system providing improved methodology for two-phase commit decision
US6877111B2 (en) Method and apparatus for managing replicated and migration capable session state for a Java platform
US8073962B2 (en) Queued transaction processing
US6912569B1 (en) Method and apparatus for migration of managed application state for a Java based application
US6038589A (en) Apparatus, method and computer program product for client/server computing with a transaction representation located on each transactionally involved server
US6345316B1 (en) Apparatus, method and computer program product for client/server computing with the ability to select which servers are capable of creating transaction state data
US6374283B1 (en) Apparatus, method & computer program product for client/server computing with client selectable location of transaction objects
US7284018B1 (en) Logless transaction coordination
US6631395B1 (en) Apparatus, method and computer program product for client/server computing with programmable action by transaction coordinator during prepared state
JPH1125061A (en) Computer program processing method and storage medium
US6301606B1 (en) Apparatus, method and computer program product for client/server computing with intelligent location of transaction objects
US6237018B1 (en) Apparatus method and computer program product for client/server computing with improved transactional interface
GB2335516A (en) Failure recovery in distributed transaction avoids heuristic damage
US6324589B1 (en) Apparatus, method and computer program product for client/server computing with reduced cross-process calls
GB2339932A (en) Local resource registration in a distributed transaction processing coordinating server
US6829632B1 (en) Apparatus, method and computer program product for client/server computing with timing of creation of coordinator transaction state object being based on triggering events
GB2339621A (en) Client/server computing system provides extension to basic transaction service
GB2330431A (en) Client/server computing with failure detection

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)