WO2008040621A1 - Data processing system and method of handling requests - Google Patents

Data processing system and method of handling requests

Info

Publication number
WO2008040621A1
WO2008040621A1 (PCT/EP2007/059651)
Authority
WO
WIPO (PCT)
Prior art keywords
request
processing
service
message
service request
Prior art date
Application number
PCT/EP2007/059651
Other languages
French (fr)
Inventor
Stephen James Todd
Original Assignee
International Business Machines Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corporation filed Critical International Business Machines Corporation
Priority to US12/443,830 (US9767135B2)
Priority to CN2007800336788A (CN101512527B)
Priority to EP07803467A (EP2082338A1)
Priority to JP2009530828A (JP5241722B2)
Publication of WO2008040621A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/23 Updating
    • G06F16/2308 Concurrency control
    • G06F16/2336 Pessimistic concurrency control approaches, e.g. locking or multiple versions without time stamps
    • G06F16/2343 Locking methods, e.g. distributed locking or locking implementation details
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16 Error detection or correction of the data by redundancy in hardware
    • G06F11/20 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/23 Updating

Definitions

  • the present invention has applications in distributed data processing environments such as database systems and messaging systems.
  • the invention has particular application in data processing systems that update a high availability data store in response to requests from multiple requestors.
  • HADB high availability database
  • IBM mainframe computers that act together as a single system image.
  • Clusters of processors that combine data sharing with parallel processing to achieve high performance and high availability are sometimes referred to as a parallel systems complex or 'parallel sysplex'.
  • a typical HADB implemented in a parallel sysplex can handle multiple parallel requests for data retrieval and data updates from a large number of distributed requestors with high performance and reliability.
  • the HADB system can include a robust, high availability message processing system that combines message queues with business logic and routing functions to manage data access. This can provide assured once-only message delivery, and such systems can handle failures efficiently with reduced delays.
  • the transaction management, redundancy management and recovery features that are typically implemented within such a high availability system incur significant processing overheads during normal request processing. Any such processing has potential business costs, because high availability data processing systems are more expensive than less reliable systems.
  • An example of this additional processing is a requirement for two phase commit processing within the HADB system or, more particularly, two phase commit processing between resources within the HADB system and resources outside the system.
  • implementing message queues within the HADB system typically requires logging of the message data within the HADB.
  • An alternative solution is to employ a cluster of parallel message dispatchers that are separate from the HADB system, such as in a conventional application server cluster in which each server does not implement comprehensive high availability features.
  • Parallel processing can improve throughput and reduce the impact of failures compared with a single message dispatcher, and separating the message dispatcher functions from the HADB system can reduce processing overheads.
  • the message dispatchers run on servers without high availability features, a failure which affects one server will delay the processing of the messages that have been sent to that server. This can be problematic despite the possibility of other messages being successfully processed by other message dispatchers in the meantime.
  • the messages sent to a failed message dispatcher (referred to herein as 'orphan messages' or 'orphan requests') are typically delayed until that message dispatcher comes back on-line.
  • Some known clustered messaging systems implement a number of features for fast recovery following a node failure, to reduce delays in the processing of orphan messages, but such approaches have not as yet fully solved the problem of delayed processing of orphan messages.
  • a first aspect of the present invention provides a method for managing service requests, for use in a data processing environment that includes a data store and at least one service requestor, the method comprising the steps of:
  • the first request processing component processing its replica of the claimed service request, including accessing data within the data store.
  • the step of claiming responsibility comprises entering an identifier for the service request in a repository in the data store, and the method further comprises preventing any of the plurality of request processing components from entering a duplicate identifier for the service request in the repository.
  • the step of preventing processing of the service request comprises preventing execution of request processing logic.
  • the step of preventing processing of the service request comprises preventing updating of data within the data store (i.e. some processing of requests may be possible, including executing some business logic and possibly reading data from the data store, but writing of data updates is prevented).
  • the data processing environment comprises a plurality of distributed service requestors and the data store comprises a database running on a high availability data processing system.
  • the request processing components comprise business logic for processing a received request, to determine what data access operations are required within the data store, and request dispatcher functions for handling asynchronous delivery of a replica of the request from the service requestor to the data store.
  • a second aspect of the invention provides a data processing system comprising:
  • each of the plurality of request processing components is located within a communication path between at least one service requestor and the data store;
  • a replicator for replicating a service requestor's service request to at least two of the plurality of request processing components; and
  • a claims manager comprising: functions for preventing any request processing component that has not successfully claimed responsibility for the service request from processing the service request; and functions for claiming responsibility for the service request on behalf of the first request processing component, thereby to enable the first processing component to process a replica of the service request.
  • the functions for claiming responsibility may comprise entering an identifier of the service request in a repository within the data store, and the claims manager may further comprise functions for preventing any of the plurality of request processing components from entering a duplicate service request identifier within the repository.
  • One embodiment of the present invention mitigates one or more problems of existing high availability server environments, by providing multiple request processing components (request dispatchers and associated business logic) and replicating requests to two or more request processing components, and then managing the processing of requests within a high availability system to ensure that only one replica of a particular request can update a high availability data store.
  • request messages are enqueued and business logic processing of messages is performed outside of the high availability system, whereas consequential HADB updates and definitive removal of request messages from the queues are handled under transactional scope by the high availability system.
  • Particular embodiments of the invention provide efficient mechanisms for managing once-only updates within a HADB system.
  • the invention differs from conventional parallel processing, in which any message is assigned to only one of a set of components arranged in parallel, because in the present invention a request or message is replicated across two or more of the set of parallel request processing components.
  • Another aspect of the invention provides a message queuing system which uses replication to achieve reliability for the operation of inserting a message onto a queue and relatively high availability for enqueued messages, but communicates with a single high availability database (HADB) system and uses transaction processing and locking mechanisms for assured once only updating of the HADB.
  • the operation of definitively removing a message from the queue following processing of the message (and any required updating of the HADB) is controlled by data held within the HADB system.
  • a message queuing system is differentiated from such traditional systems by using replication at one end of the queue (which enables reliability to be achieved despite use of inexpensive components) while making use of the capabilities of a single high availability system at the other end of the queue to ensure data integrity and availability at that end of the queue. This has considerable advantages when the application programs that may be inserting messages are already operating in an environment which relies on inexpensive and relatively unreliable components, whereas the messages are intended to update resources in a high availability system.
  • once-only processing of each request's HADB updates is achieved by saving, within a repository of the HADB, request identifiers for requests that are processed in the HADB; and by checking the repository of request identifiers before any replicated request can update the HADB.
  • Entries in the repository of request identifiers can be locked (using conventional HADB locking capabilities) while a request is being processed, with a commit of this processing being managed in the high availability system. Subsequent recognition of the request identifier ensures deletion of all replicas of that request. Locking entries in the repository of request identifiers, and avoiding duplicate entries, ensures that multiple replicas of the same request cannot update the HADB multiple times.
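The once-only claiming described above can be sketched in Java (the example implementation language used later in this document). This is an illustrative simplification, not the patented implementation: a concurrent map's atomic putIfAbsent stands in for the HADB's unique-key insert into the claims table plus its row lock, and the names ClaimsTable, claim and release are invented for this sketch.

```java
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch only: putIfAbsent is atomic, so exactly one
// dispatcher's claim for a given request identifier can succeed.
public class ClaimsTable {
    // requestId -> identity of the dispatcher that successfully claimed it
    private final ConcurrentHashMap<String, String> claims = new ConcurrentHashMap<>();

    // Returns true only for the first dispatcher to claim this request id;
    // every later replica's claim attempt returns false.
    public boolean claim(String requestId, String dispatcherId) {
        return claims.putIfAbsent(requestId, dispatcherId) == null;
    }

    // Models transaction rollback: the claim entry is removed so that a
    // surviving replica of the same request may claim and be processed.
    public void release(String requestId) {
        claims.remove(requestId);
    }
}
```

In the real system the insert, the lock and the business-data update would all occur inside one HADB transaction; the map here only illustrates the once-only property itself.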
  • the repository of request identifiers is a database table in which a row of the table corresponds to a request.
  • This table is referred to hereafter as the 'claims table'.
  • a first request processing component inserts a request identifier into the claims table and obtains a lock on the claims table row, and obtains another lock on the part of the HADB that is holding the relevant data items.
  • the locks are implemented by conventional HADB mechanisms and are maintained until the processing of the particular request is complete, when any HADB updates required by this request are committed and any replicas of the request are deleted.
  • One embodiment of the invention implements once-only processing of messages without wasting HADB transactions, by allowing a single HADB transaction to encompass one or more failed attempts to insert a request identifier within the claims table, as well as encompassing the eventual successful insertion of a request identifier and the corresponding request processing by the message dispatcher and business logic (including an HADB update in the case of 'write' requests).
  • Another embodiment of the invention provides a computer program comprising a set of instructions for controlling a data processing apparatus to perform a method as described above.
  • the computer program may be made available as a computer program product comprising program code recorded on a recording medium, or may be available for transfer via a data transfer medium such as for download via the Internet.
  • Figure 1 is a schematic representation of a data processing system in which a single messaging system processes request messages and sends update instructions to a database, such as is known in the art;
  • Figure 2 is a schematic representation of a distributed data processing system according to an embodiment of the invention;
  • Figure 3 provides a schematic representation of a distributed data processing system according to another embodiment of the invention;
  • Figure 4 is a schematic representation of a request dispatcher updating a claims table, according to an embodiment of the invention;
  • Figure 5 is a schematic representation of a request dispatcher attempting to update a claims table and being rejected;
  • Figure 6 is a schematic flow diagram representation of a method according to an embodiment of the invention;
  • Figure 7 shows a sequence of steps of a method according to an embodiment of the invention, in which a request message is successfully processed;
  • Figure 8 shows a sequence of steps of a method in which a duplicate message is not permitted to update a database; and
  • Figure 9 represents relative processing overheads for various alternative approaches to handling service requests.
  • the present invention has particular advantages for high availability data processing environments, where it is necessary to be able to access a data store and to apply updates to the data store with minimal delays and minimal impact from failures.
  • a first known solution employs complex hardware that includes many redundant components, and specialized additional components to arrange failover between the redundant components. These redundant components may include processors, memory, controllers, hard drives and communication buses. Such complex hardware is complemented by appropriate computer software.
  • An example of such a high availability system is an IBM zSeries data processing system running IBM's DB2 database system software and IBM's z/OS operating system. IBM, zSeries, z/OS and DB2 are trademarks of International Business Machines Corporation.
  • a second known solution provides many simple systems, each prone to failure but relatively inexpensive, with a relatively simple software mechanism to share work between them and to avoid failed systems.
  • FIG 1 is a schematic representation of a distributed data processing system in which a high availability data store (HADB) 10 is associated with a messaging system comprising a queue 80, a message dispatcher 20 used to dispatch data access operations that result from execution of business logic 30.
  • the business logic 30 has responsibility for determining what update is required to the HADB and then cooperates with the HADB to manage the update, and to coordinate the update with the removal of the message from the queue.
  • the data store 10 is shown in Figure 1 distributed across three or more servers 12, 14, 16 that each have their own data processing and data storage capability.
  • a plurality of requestors 40,50,60 submit requests for data access via the message dispatcher 20.
  • the messaging system 20 and its business logic 30 may be integrated within the high availability server that holds the HADB.
  • the HADB may be a DB2 database from IBM Corporation and the server may be an IBM zSeries computer system running the z/OS operating system, which includes redundant processors and storage and failover recovery features to provide maximum reliability and availability.
  • the message dispatcher and business logic nevertheless impose an undesirable processing overhead on the high availability server.
  • assurance of once-only updates to the HADB typically involves two-phase commit processing to protect against failures between an HADB update and definitive deletion of the corresponding request message.
  • the insert of messages by requestors 40,50,60 into the queue may require coordination between the message store and other resources used by the requestors. This coordination imposes significant overhead for a highly available message store. The present invention mitigates this problem.
  • message dispatcher and business logic functions may be implemented within a standard application server running on hardware that does not have high availability features.
  • application servers are well known in the art, and products such as WebSphere Application Server from IBM Corporation can be run on relatively inexpensive computer systems that do not provide comprehensive support for high availability.
  • WebSphere is a trademark of International Business Machines Corporation.
  • a plurality of distributed service requestors can input service requests to a request processing system.
  • the request processing system includes a plurality of inexpensive request processing components communicating with a high availability data store.
  • the service requests are replicated to at least two of the plurality of request processing components. Any request processing component that has not successfully claimed responsibility for the service request is prevented from processing the service request, to avoid duplication of updates to the data store.
  • a first request processing component claims responsibility for the service request, and then processes its replica of the claimed service request - including accessing data within the data store. All of the plurality of request processing components are prevented from entering a duplicate claim to responsibility for the service request, and this assured once-only claiming of responsibility for a service request prevents duplication of updates to the data store.
  • Figure 2 shows a distributed data processing system according to an embodiment of the present invention.
  • a number of distributed parallel requestors 40, 50, 60 are requesting a service, and the business logic of the service requires data access from a HADB.
  • the requests are replicated across two or more of a plurality of related queues 80,82,84,86 that are associated with a plurality of message dispatchers 20,22,24,26 arranged in parallel.
  • Whereas the requestors of Figure 1 place messages in the input queue of a single message dispatcher 20, in Figure 2 the messages are enqueued for distribution via a replicator 70.
  • In one implementation, the number k of replicas is 2; in another implementation, k is 3.
  • the replicator 70 may be a receiving component to which all requests are sent by the requestors 40,50,60.
  • the replicator function may be implemented at the requestor's system or at an intermediate proxy server elsewhere in the network.
  • the distribution of requests by the replicator 70 is a k-way fan out distribution that places each message in at least two message queues (80,84); but see the description below relating to exceptions.
  • In this embodiment, each message dispatcher is associated with a single queue - for example, message dispatcher 20 is associated with queue 80.
  • Alternative embodiments permit a more complex association between queues and message dispatchers.
  • the selection of the particular k queues by the replicator may be as simple as a round-robin allocation from within the set N, but other selections using workload balancing techniques may be implemented.
  • the number k (as well as the choice of which particular k queues) is determined on a per-message basis, selecting a higher value of k for relatively high value and urgent messages and a lower value of k for low value messages.
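The k-way fan-out with round-robin queue selection described above can be sketched as follows. This is a minimal illustration under stated assumptions: the class name Replicator and the index-based queue selection are invented here, and workload-balancing or per-message selection of k (also described above) is omitted for brevity.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a k-way fan-out replicator: each message's replicas are
// placed on k queues chosen by round-robin from the set of N queues.
public class Replicator {
    private final int totalQueues;   // the set N of available queues
    private int next = 0;            // round-robin cursor

    public Replicator(int totalQueues) {
        this.totalQueues = totalQueues;
    }

    // Selects k queue indices for one message's replicas (distinct
    // as long as k does not exceed the number of queues).
    public synchronized List<Integer> selectQueues(int k) {
        List<Integer> chosen = new ArrayList<>();
        for (int i = 0; i < k; i++) {
            chosen.add(next);
            next = (next + 1) % totalQueues;
        }
        return chosen;
    }
}
```

For example, with four queues and k = 2, the first message's replicas go to queues 0 and 1, and the second message's replicas to queues 2 and 3.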
  • each requestor makes use of the services of a respective replicator 70,71,72, which may be local to and dedicated to the individual requestor 40,50,60 or may be located at some intermediate location within a distributed network.
  • the number k of queues to send replica requests to, and the selection of a particular set of k queues may be determined dynamically or may be predefined as implied by Figure 3.
  • the message dispatchers (20,22,24,26) each implement request dispatcher functions and business logic (30,32,34,36) that can apply updates to the HADB 10.
  • the HADB is viewed by the message dispatchers as a single system and any HADB-internal segmentation for efficiency or availability is invisible to the requestors and message dispatchers.
  • the message dispatcher is responsible for retrieving a message from its input queue and invoking the business logic, and for requesting insertion of an entry in a claims table, but routing within the HADB is not the responsibility of the message dispatchers.
  • the 'claims table' is a repository of service request identifiers, identifying those service requests for which a request processing component has claimed responsibility. Solutions in which a HADB structure is invisible to external requestors, and HADB-internal features are responsible for identifying a required HADB segment, are well known in the art.
  • the business logic is typically application-specific, for example processing an order for goods or services and generating a specific update instruction to apply to the HADB to reflect that order. Because of the replication of messages to more than one dispatcher, the message dispatchers (20,22,24,26) and their input queues (80,82,84,86) are not required to implement high availability features, as explained below.
  • the HADB 10 includes, in addition to the HADB business data 90, a claims table 100.
  • the claims table 100 is a database table that has a single key column for containing a request identifier (such as a message identifier, or another request identifier). Only one of the message dispatchers is allowed to insert an entry into the claims table for any particular request identifier.
  • a claims manager 110 is associated with the claims table 100, and is responsible for checking the claims table before the table can be updated.
  • the functions of the replicator 70, message dispatcher 20, business logic 30 and claims manager 110 are implemented in software, although their logical operations could be implemented in hardware such as using electronic circuitry.
  • the claims table may also contain additional information such as the time of a successful claim to responsibility for a service request, and the identity of the claimant. This is not essential to the operation of the invention, but may provide valuable diagnostic and statistical information about system behaviour.
  • the message dispatcher functions and business logic are implemented within Message-Driven Beans, which are Enterprise JavaBeans (EJBs) that consume Java Message Service (JMS) messages, allowing J2EE applications to process JMS messages asynchronously.
  • the application servers running the message dispatchers are J2EE application servers that receive JMS messages from many distributed requestors (J2EE application clients or other JMS applications or systems).
  • A single Message-Driven Bean can process messages from multiple clients.
  • the logic of the claim manager 110 can be implemented as a function of the EJB container that calls methods of the Message-Driven Beans. When a message arrives, the container first calls the claim manager logic 110 to ensure that the message is not a replica of a message that has already been processed.
  • the container then calls a method of the Message-Driven Bean, which identifies the particular JMS message type and handles the message according to the application-specific business logic (as in standard J2EE solutions).
  • This application-specific business logic processes the information within the message to generate a specific database update instruction which is then sent to the HADB.
  • Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc.
  • the dispatching of messages is implemented by a conventional messaging program performing a message GET operation.
  • the claims manager 110 becomes part of the messaging system, and the claims table 100 is an HADB extension to the message store, but the rest of the message store may be implemented using a different technology.
  • Any of a potentially large number of requestors 40,50,60 may request a service by sending a request message via a replicator 70. Each message is replicated and enqueued at a subset k of the available queues 80,84. Any of the associated k message dispatchers (20,24) may perform the following operations for a replica message. The operations are described with reference to Figures 4, 5, 6, 7 and 8. On receipt of a request message by one of the message dispatchers, an HADB transaction is started 210, and then the current message dispatcher 24 retrieves 220 a first message from its input queue 84. These steps are shown in Figures 6, 7 and 8. Updates to the business data and claims table of the HADB (described below) take place within the same HADB transaction, to ensure that both the claims table and business data are successfully updated or, if an update is unsuccessful, both are backed out.
  • the message dispatcher 24 attempts 230 to insert a claim 130 to this message in the claims table 100 of the HADB 10, in particular attempting to insert the unique request identifier as a new row in the claims table.
  • a scan of the request identifiers is performed 240 (a logical scan corresponding to scanning all of the rows of the claims table, but performed against an index of the claims table to avoid the need for a full physical scan of the table). This scan identifies any matching request identifier.
  • the attempted claim does not succeed at this time if another message dispatcher is determined 250 to have already entered this request identifier in the claims table; but the claim succeeds if there is no matching entry in the claims table, as shown in Figures 4, 6 and 7.
  • the HADB locks 270 this row of the claims table to prevent changes by other message dispatchers 20,22,26.
  • a copy of the retrieved message remains in the message dispatcher's input queue 84, but the message status is implicitly changed by the action of updating the claims table, from a "received" status (i.e. the message is available on the queue and ready for retrieval) to a "retrieved but in doubt" status (i.e. the message has been retrieved from the queue for processing, but processing of the message is not yet committed).
  • the first message dispatcher 24 (from a set of k dispatchers) that attempts to insert a new request identifier into the claims table 100 is successful.
  • the claims table is extended by inserting 260 a new row 130 that contains the request identifier, and this first dispatcher obtains 270 a lock on that row of the claims table using standard HADB locking mechanisms.
  • the HADB implements locks on the claims table in response to requests from the claims manager 110, maintaining a record of which message dispatcher 24 is associated with a locked row.
  • the row locks prevent modification by other message dispatchers 20,22,26 while the first message dispatcher's message is being processed (i.e. until commit or abort of an HADB transaction and message transaction - as described in more detail below).
  • a new entry to the claims table is only permitted by the claims manager 110 if the table currently contains no entry for the particular request identifier, so duplicate table entries are not possible. If the claims manager 110 finds 240,250 an entry in the claims table with a matching request identifier, as shown in Figures 5, 6 and 8, the claim entry obtained by the first message dispatcher 24 prevents changes being made on behalf of any other request dispatcher.
  • If the matching claims table row 130 is found 290 to be locked on behalf of the first message dispatcher 24, a different message dispatcher's request to insert a duplicate entry in the claims table is held in memory to await 300 unlocking of the row of the claims table that is holding the matching entry.
  • the first message dispatcher's claims table entry 130 may be deleted or committed before being unlocked, depending on the outcome of the corresponding HADB transaction and message transaction, so it is not possible at this stage to predict whether a subsequent attempt to update the claims table will succeed or fail. This is why duplicate claims requests may be held in memory to enable a retry. While a request to update the claims table is held in memory, the copy of the message on the input queue of the message dispatcher 20 now has implicit status "retrieved but in doubt".
  • the HADB has not commenced processing this replica request's data update, but another replica of the same request is being processed and so the claim request for the current replica is held in memory to await the outcome of that processing.
  • all replicas of a message for which an HADB update is attempted now have the same implicit status "retrieved but in doubt", while they remain uncommitted on their respective message dispatcher's input queue 80,82,84,86.
  • Once the first message dispatcher's claim has been committed, any subsequent attempts to insert a duplicate entry in the claims table must fail.
  • a failure report is returned 310 to the message dispatcher that is attempting to insert the duplicate entry in the claims table, and the message dispatcher deletes 320 the corresponding message from its input queue.
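The claim life cycle just described - first claim succeeds and locks the row; a duplicate attempt against a locked row is held to await the outcome; a duplicate attempt against a committed claim is rejected so the replica is deleted - can be sketched as a small state machine. All names here (ClaimLifecycle, Result, tryClaim) are invented for illustration; the real mechanism uses HADB row locks and transactions.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the claim life cycle: a claim row is LOCKED while its message
// is processed, then either COMMITTED (duplicates rejected outright) or
// rolled back (the row disappears, so a replica may claim and retry).
public class ClaimLifecycle {
    public enum Result { CLAIMED, WAIT, REJECTED }
    private enum State { LOCKED, COMMITTED }

    private final Map<String, State> rows = new HashMap<>();

    public synchronized Result tryClaim(String requestId) {
        State s = rows.get(requestId);
        if (s == null) {                       // no matching entry: claim succeeds
            rows.put(requestId, State.LOCKED);
            return Result.CLAIMED;
        }
        if (s == State.LOCKED) {
            return Result.WAIT;                // outcome in doubt: hold claim, await unlock
        }
        return Result.REJECTED;                // committed: delete this replica from its queue
    }

    public synchronized void commit(String requestId)   { rows.put(requestId, State.COMMITTED); }
    public synchronized void rollback(String requestId) { rows.remove(requestId); }
}
```

A WAIT result corresponds to the held-in-memory claim request above; after rollback of the first claim, a held replica's retry would succeed, while after commit it would be rejected and its message deleted.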
  • Row-specific locks are obtained on the claims table, rather than a less granular lock on the entire table or the page that contains a relevant claims table entry, because the replication of requests makes it likely that there will be frequent conflicting attempts to update the claims table.
  • a more granular row lock will reduce the number of requests that must be held up compared with a less granular page lock, avoiding conflicts between different messages that would occur if page locks were implemented.
  • the handling of claims requests requires little processing within the high availability system and, in particular, can involve less of a performance cost than if all of the message dispatcher functions and business logic were implemented within the high availability system.
  • One embodiment of the invention also reduces conflicts between multiple replicas of the same message, by introducing a slight 'stagger' in the timing of replicas generated by the replicator 70.
  • In many cases, the first released replica will be completely processed before an attempt is made to schedule the second replica.
  • Even with this stagger, the second replica will still be processed in a timely manner. There will be cases where there are simultaneous attempts to process both replicas, but the number of such cases will be reduced.
  • the prevention of conflicting updates may involve the HADB temporarily storing a new request to update the claims table (i.e. a claim request may be held in a queue in memory) in response to identifying 240,250 a matching claims table entry.
  • the claim request awaits 300 the outcome of the service request that is currently being processed (or a timeout 330 if this occurs earlier), and only then does the HADB decide whether to accept or reject the new claim request.
  • a claims request that is held in memory will remain until an 'unlocked' signal is received from the claims manager or until a timeout, whichever occurs first.
  • a timeout may be followed by a repeated attempt to update the claims table on behalf of the timed-out message dispatcher, for example after a predefined delay period.
  • An 'unlocked' signal prompts a repeated scan 240 of the claims table 100 to identify 250 any matching request identifier for the claim request.
  • a lock is obtained 270 on the relevant row of the claims table and a lock is obtained on the relevant part of the HADB (for example, locking a database row or page 140 on a specified server) - as shown schematically in Figure 4 in which an HADB manager 120 obtains a page lock. Both locks are implemented using the standard mechanisms available within the HADB.
  • Successful updating of the claims table 100 is a prerequisite to the database manager 120 updating the business data 90 within the HADB, and only one message dispatcher 24 can update the claims table for each request. Updating of the business data 90 can only occur when there are no conflicting locks within the HADB.
  • the business logic 34 within the message dispatcher 24 is executed to process 350 the request and to determine any required updates to the HADB. The determined updates are then applied 360 to the HADB business data 90.
  • the business logic may process a message that specifies an order for goods or services, updating a database of orders.
  • Another example request may be a request for retrieval of status information relating to an existing order, perhaps checking progress on the preparation or delivery of a product.
  • Many other business applications may be implemented using the present invention.
  • replicas of the message may remain for a period on input queues 80,82,86 of other ones of the k message dispatchers 20,22,26. Each remaining replica of the message now has an implicit status of committed, and merely awaits deletion from its respective queue.
  • any new attempts to insert a claim in the claims table for another replica of the same request will be rejected.
  • the comparison between a request message's unique request identifier and the set of request identifiers in the claims table will identify a match, and the matching request now has a "committed" status that is reflected by the unlocked claims table entry. That status is sufficient to confirm that the message should now be deleted from the input queue 80 of the respective message dispatcher 20 that is unsuccessfully attempting to add a duplicate entry into the claims table.
  • a committed HADB update is effective as a definitive instruction to commit the message retrieval (i.e. delete the message from the input queues of all message dispatchers) but the actual deletion for a particular input queue may be deferred until the corresponding message dispatcher attempts to enter a claim in the claims table.
  • the successful processing and deletion of the winning replica does not involve an expensive two phase commit operation between the HADB and the message store. If the message dispatcher should fail between the commit operation 380 and the delete message operation 320 for the winning replica message, the claims mechanism will prevent re-execution of the winning replica message in exactly the same way as it prevents execution of losing replicas. It will be appreciated that the mechanism to prevent re-execution of a single-instance message is already known in the art (sometimes referred to as "one-and-a-half phase" commit).
  • the replicator 70 may choose not to take advantage of the invention for certain lower-value messages. It may be that these messages are tagged by the replicator as non-replicated, and handled in a special manner. In other embodiments, low-value and non-urgent request messages are handled using the claim mechanism described above to achieve the benefit of one-and-a-half phase commit. In an alternative embodiment, identification of a locked matching row of the claims table may trigger a reply to the respective message dispatcher with an explicit rejection of a claims request (instead of the above-described saving of claims requests to memory). This may occur where the database uses a 'presumed commit' locking strategy.
  • This explicit rejection would leave the replica message on the respective message dispatcher's queue until a subsequent retry of the attempt to update the claims table, and this is likely to involve more processing than in embodiments in which requests to update a locked row of the claims table are temporarily saved to memory.
  • an attempt to update the claims table may be rejected while the processing of a first replica of the request message is still in progress (i.e. the processing is not yet complete and could still fail). Therefore, the message dispatcher holding a request message corresponding to a rejected claims request is able to subsequently retry updating the claims table, unless the message dispatcher has received an implicit instruction to delete the request (i.e. a claims table update failure report in response to a determination that a matching claims table entry is unlocked, since this implies that the message processing was successfully completed and committed). Such a subsequent retry may be performed following a predefined delay period, or the frequency of retries may depend on current system activity.
  • the HADB transaction must be aborted 400,410. In this situation, the claims table entry is deleted and the claims table row is unlocked 410.
  • the message dispatcher may take various actions, including [a] leaving the replica message on the input queue for later reattempted processing, [b] moving the replica message to a dead letter queue, or [c] deleting the replica message and sending a failure notice to the original requestor. Alternatively, it may be the message dispatcher itself that fails, in which case the system will implicitly abort the HADB transaction (including removing the in-doubt claims table entry) and the replica message will remain on the input queue for subsequent retry. These examples of failure processing are known in the art.
  • a particular optimization is implemented in one embodiment of the invention to reduce HADB processing overheads. Where a new HADB transaction was started, but a claims table update was unsuccessful because a replica of a particular request had already been processed, the new HADB transaction is not terminated. Instead, a next request message is retrieved from the particular message dispatcher's input queue within the first HADB transaction. This is shown in Figure 8.
  • the message dispatcher attempts to add a claims table entry (entering the request identifier in the table) for this next request message, and either succeeds or fails for one of the previously-described reasons.
  • the transaction processing mechanisms that are involved in the achievement of once-only updating of the HADB are optimised to reduce the processing overhead (compared with solutions that are required to start and end a transaction for each attempt to add an entry to the claims table).
  • Table 1 in Figure 9 represents some of the processing overheads (numbers of transactions and which resources are involved) for a number of alternative approaches to handling requests for services that require access to data within a HADB.
  • the message insert transaction includes logging the data of the message, and may be coordinated with another high availability system resource. Both of these transactions involve significant overheads for high availability system resources. This is shown as solution (A) on the left hand side of Table 1.
  • a solution (B) that implements the queues and business logic outside of the high availability system avoids any queue management activity within the high availability system.
  • An embodiment of the present invention is represented as solution (C) on the right hand side of Table 1. This is likely to involve many more operations overall, and so is counter-intuitive, but most of these operations do not impact the high availability system. Only one transaction is necessary within the high availability system, and this transaction is coordinated with other data held in the same HADB and using the same claims manager 110 and HADB manager 120. Also, because the potentially large volume of data on the queue is not saved on the high availability system, logging on the high availability system is avoided.
  • the message dispatchers and associated business logic do not have an awareness of the organisation of data within the HADB.
  • either the message dispatchers or the associated business logic are implemented with an awareness of HADB segmentation and structure, and include routing/selection logic to ensure the most efficient path to the appropriate HADB segment.
  • routing and selection logic is well known in the art.
  • the claim manager can segment the claims table for greater affinity between application data segments and claims table segments.
  • messages are handled by each message dispatcher in batches for more efficient message handling.
  • This batching increases the likelihood of two or more message processing components attempting to enter conflicting claims in the claims table, but the handling of conflicting claim attempts is a relatively low processing cost.
  • the impact of conflicting claim attempts on the HADB is low.
  • the embodiments described above include a mechanism whereby duplicates of a service request message are deleted, during normal processing, in response to failure of an attempt to insert an entry in the claims table.
  • This mechanism may be augmented by further mechanisms. For example, a successful message dispatcher may notify other message dispatchers of its success, and the other message dispatchers may then delete their replicas of the message. Additionally, mechanisms may be provided for cleaning up the claims table, which must be accompanied by a clean-up of associated replicas of a successfully processed message from input queues of the message dispatchers.
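The overall flow described in the preceding bullets (claim, business data update, commit, then non-transactional deletion of the replica message) can be sketched as follows. This is an illustrative sketch only: the table layout and function names are assumptions for demonstration, SQLite stands in for the HADB, and the locking of in-doubt claims table entries is omitted for brevity.

```python
import sqlite3

# In-memory stand-in for the HADB: one table for claims, one for business data.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE claims (request_id TEXT PRIMARY KEY)")
db.execute("CREATE TABLE business_data (request_id TEXT, value TEXT)")

def process(queue, request_id, payload):
    # Attempt to claim responsibility by inserting the request identifier.
    try:
        db.execute("INSERT INTO claims VALUES (?)", (request_id,))
    except sqlite3.IntegrityError:
        # A committed claim already exists: delete the replica, never re-execute.
        queue.remove((request_id, payload))
        return "deleted duplicate"
    db.execute("INSERT INTO business_data VALUES (?, ?)", (request_id, payload))
    db.commit()                          # claim and business update commit together
    queue.remove((request_id, payload))  # the non-transactional 'half phase'
    return "processed"

q1 = [("req-9", "order")]
q2 = [("req-9", "order")]  # replica held by a second message dispatcher
assert process(q1, "req-9", "order") == "processed"
assert process(q2, "req-9", "order") == "deleted duplicate"
assert q1 == [] and q2 == []
```

If a dispatcher failed between the commit and the queue deletion, the re-delivered message would follow the "deleted duplicate" path, which is the failure behaviour the bullets above describe.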


Abstract

Methods, apparatus and computer programs are provided for managing service requests. The invention mitigates problems within a data processing environment that includes a data store running on a highly available data processing system. A plurality of distributed service requestors input service requests, and the service requests are replicated to at least two of a plurality of request processing components that are located within a communication path between the requestors and the data store. The method includes: preventing any request processing component that has not successfully claimed the service request from processing the service request; a first request processing component claiming responsibility for the service request; and the first request processing component processing its replica of the claimed service request, including accessing data within the data store. The method also includes preventing any of the plurality of request processing components from entering a duplicate claim to responsibility for the service request. In one embodiment, the request processing components comprise business logic for processing a received request, to determine what data access operations are required within the data store, and request dispatcher functions for handling asynchronous delivery of a replica of the request from the service requestor to the data store.

Description

DATA PROCESSING SYSTEM AND METHOD OF HANDLING REQUESTS
FIELD OF INVENTION
The present invention has applications in distributed data processing environments such as database systems and messaging systems. The invention has particular application in data processing systems that update a high availability data store in response to requests from multiple requestors.
BACKGROUND
Increasingly, business applications such as ordering systems that are implemented using computer hardware and computer software are required to have high availability and reliability. Many businesses demand that their data processing systems are operational 24 hours of every day and never lose data, and the best information technology companies have responded to those demands (in some cases achieving availability of data processing systems above 99.99%). Businesses typically also want high performance (high throughput without loss of reliability), which requires scalable solutions as data processing requirements increase, and they do not want high costs.
Highly available data processing systems have been developed using a combination of redundancy (of storage, processors and network connections) and recovery features (backup and failover) to avoid any single point of failure. One such solution includes a high availability database (HADB) that is distributed across a tightly integrated cluster of servers using redundant storage arrangements, such as a cluster of highly reliable IBM mainframe computers that act together as a single system image. Clusters of processors that combine data sharing with parallel processing to achieve high performance and high availability are sometimes referred to as a parallel systems complex or 'parallel sysplex'. A typical HADB implemented in a parallel sysplex can handle multiple parallel requests for data retrieval and data updates from a large number of distributed requestors with high performance and reliability. The HADB system can include a robust, high availability message processing system that combines message queues with business logic and routing functions to manage data access. This can provide assured once-only message delivery, and such systems can handle failures efficiently with reduced delays. However, the transaction management, redundancy management and recovery features that are typically implemented within such a high availability system incur significant processing overheads during normal request processing. Any such processing has potential business costs - because high availability data processing systems are more expensive than less reliable systems. An example of this additional processing is a requirement for two phase commit processing within the HADB system or, more particularly, two phase commit processing between resources within the HADB system and resources outside the system. Also, implementing message queues within the HADB system typically requires logging of the message data within the HADB.
An alternative solution is to employ a cluster of parallel message dispatchers that are separate from the HADB system, such as in a conventional application server cluster in which each server does not implement comprehensive high availability features. Parallel processing can improve throughput and reduce the impact of failures compared with a single message dispatcher, and separating the message dispatcher functions from the HADB system can reduce processing overheads. However, if the message dispatchers run on servers without high availability features, a failure which affects one server will delay the processing of the messages that have been sent to that server. This can be problematic despite the possibility of other messages being successfully processed by other message dispatchers in the meantime. The messages sent to a failed message dispatcher (referred to herein as 'orphan messages' or 'orphan requests') are typically delayed until that message dispatcher comes back on-line.
Some known clustered messaging systems implement a number of features for fast recovery following a node failure, to reduce delays in the processing of orphan messages, but such approaches have not as yet fully solved the problem of delayed processing of orphan messages.

SUMMARY
A first aspect of the present invention provides a method for managing service requests, for use in a data processing environment that includes a data store and at least one service requestor, the method comprising the steps of:
replicating a requestor's service request to at least two of a plurality of request processing components, the plurality of request processing components each being located within a communication path between the requestor and the data store;
preventing processing of the service request by any request processing component that has not successfully claimed responsibility for the service request;
a first request processing component claiming responsibility for the service request; and
the first request processing component processing its replica of the claimed service request, including accessing data within the data store.
In one embodiment, the step of claiming responsibility comprises entering an identifier for the service request in a repository in the data store, and the method further comprises preventing any of the plurality of request processing components from entering a duplicate identifier for the service request in the repository.
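The claim-and-duplicate-prevention step can be sketched in a few lines. The following is an illustrative sketch only (the table name, column names, and the use of SQLite as a stand-in for the data store are assumptions for demonstration, not part of the described system): a unique-key constraint on the request identifier causes any second, duplicate claim to fail.

```python
import sqlite3

# Hypothetical in-memory stand-in for the repository of request identifiers:
# making request_id the primary key rejects any duplicate claim automatically.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE claims (request_id TEXT PRIMARY KEY, state TEXT)")

def try_claim(request_id):
    """Return True if this component wins the claim, False on a duplicate."""
    try:
        db.execute("INSERT INTO claims VALUES (?, 'in_doubt')", (request_id,))
        return True
    except sqlite3.IntegrityError:
        return False  # another replica of the request already holds the claim

assert try_claim("req-001") is True   # first replica claims responsibility
assert try_claim("req-001") is False  # duplicate claim is rejected
```

Only the component whose insert succeeds goes on to process its replica of the request; every other component receives a failure and discards its replica.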
In one embodiment, the step of preventing processing of the service request comprises preventing execution of request processing logic. In an alternative embodiment, the step of preventing processing of the service request comprises preventing updating of data within the data store (i.e. some processing of requests may be possible, including executing some business logic and possibly reading data from the data store, but writing of data updates is prevented).

In one embodiment of the invention, the data processing environment comprises a plurality of distributed service requestors and the data store comprises a database running on a high availability data processing system.
In one embodiment, the request processing components comprise business logic for processing a received request, to determine what data access operations are required within the data store, and request dispatcher functions for handling asynchronous delivery of a replica of the request from the service requestor to the data store.
A second aspect of the invention provides a data processing system comprising:
a data store;
a plurality of request processing components, wherein each of the plurality of request processing components is located within a communication path between at least one service requestor and the data store; and
a replicator for replicating a service requestor's service request to at least two of the plurality of request processing components;
a claims manager comprising: functions for preventing any request processing component that has not successfully claimed responsibility for the service request from processing the service request; and functions for claiming responsibility for the service request on behalf of the first request processing component, thereby to enable the first processing component to process a replica of the service request.
The functions for claiming responsibility may comprise entering an identifier of the service request in a repository within the data store, and the claims manager may further comprise functions for preventing any of the plurality of request processing components from entering a duplicate service request identifier within the repository.

One embodiment of the present invention mitigates one or more problems of existing high availability server environments, by providing multiple request processing components (request dispatchers and associated business logic) and replicating requests to two or more request processing components, and then managing the processing of requests within a high availability system to ensure that only one replica of a particular request can update a high availability data store.
By providing a number of request processing components that work in parallel with each other, and replicating each request across the parallel request processing components, it is possible to reduce the problem of delayed orphan requests. If one replica request is significantly delayed, another replica request should succeed. Implementing these request processing components outside a high availability system allows this mitigation of the problem of orphan requests to be implemented without imposing a high processing overhead on the high availability system, whereas managing once-only data updates within the high availability system maintains data integrity. In one embodiment of the invention, request messages are enqueued and business logic processing of messages is performed outside of the high availability system, whereas consequential HADB updates and definitive removal of request messages from the queues are handled under transactional scope by the high availability system. Particular embodiments of the invention provide efficient mechanisms for managing once-only updates within a HADB system.
The invention differs from conventional parallel processing, in which any message is assigned to only one of a set of components arranged in parallel, because in the present invention a request or message is replicated across two or more of the set of parallel request processing components.
Another aspect of the invention provides a message queuing system which uses replication to achieve reliability for the operation of inserting a message onto a queue and relatively high availability for enqueued messages, but communicates with a single high availability database (HADB) system and uses transaction processing and locking mechanisms for assured once only updating of the HADB. The operation of definitively removing a message from the queue following processing of the message (and any required updating of the HADB) is controlled by data held within the HADB system.
In a traditional queuing system, the action of inserting a message onto a queue (e.g. in response to a PUT message command), the storage of the message on the queue, and the action of retrieving the message (e.g. in response to a GET message command) are all implemented within the same system and hence use the same reliability mechanisms. A message queuing system according to the present invention is differentiated from such traditional systems by using replication at one end of the queue (which enables reliability to be achieved despite use of inexpensive components) while making use of the capabilities of a single high availability system at the other end of the queue to ensure data integrity and availability at that end of the queue. This has considerable advantages when the application programs that may be inserting messages are already operating in an environment which relies on inexpensive and relatively unreliable components, whereas the messages are intended to update resources in a high availability system.
In one embodiment, once-only processing of each request's HADB updates is achieved by saving, within a repository of the HADB, request identifiers for requests that are processed in the HADB; and by checking the repository of request identifiers before any replicated request can update the HADB. Entries in the repository of request identifiers can be locked (using conventional HADB locking capabilities) while a request is being processed, with a commit of this processing being managed in the high availability system. Subsequent recognition of the request identifier ensures deletion of all replicas of that request. Locking entries in the repository of request identifiers, and avoiding duplicate entries, ensures that multiple replicas of the same request cannot update the HADB multiple times.
In one embodiment of the invention, the repository of request identifiers is a database table in which a row of the table corresponds to a request. This table is referred to hereafter as the 'claims table'. A first request processing component inserts a request identifier into the claims table and obtains a lock on the claims table row, and obtains another lock on the part of the HADB that is holding the relevant data items. The locks are implemented by conventional HADB mechanisms and are maintained until the processing of the particular request is complete, when any HADB updates required by this request are committed and any replicas of the request are deleted.
One embodiment of the invention implements once-only processing of messages without wasting HADB transactions, by allowing a single HADB transaction to encompass one or more failed attempts to insert a request identifier within the claims table, as well as encompassing the eventual successful insertion of a request identifier and the corresponding request processing by the message dispatcher and business logic (including an HADB update in the case of 'write' requests).
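As a hedged illustration of this optimization (all names are assumptions for demonstration, with SQLite standing in for the HADB): a failed insert aborts only the offending statement, so the dispatcher can move on to the next queued message without ending the open transaction, and a single commit covers the failed attempt, the successful claim, and the business update.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE claims (request_id TEXT PRIMARY KEY)")
db.execute("CREATE TABLE orders (request_id TEXT, payload TEXT)")
db.execute("INSERT INTO claims VALUES ('req-1')")  # req-1 already processed elsewhere
db.commit()

def dispatch(queue):
    # One transaction spans any failed claim attempts and the eventual success.
    for request_id, payload in queue:
        try:
            db.execute("INSERT INTO claims VALUES (?)", (request_id,))
        except sqlite3.IntegrityError:
            continue  # duplicate claim rejected; the same transaction stays open
        db.execute("INSERT INTO orders VALUES (?, ?)", (request_id, payload))
        db.commit()   # single commit covers the failed attempt and the success
        return request_id
    return None

assert dispatch([("req-1", "duplicate"), ("req-2", "order 42 widgets")]) == "req-2"
```

This avoids paying transaction start/end costs for every rejected duplicate, which matters when requests are replicated and duplicate claims are therefore routine.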
Another embodiment of the invention provides a computer program comprising a set of instructions for controlling a data processing apparatus to perform a method as described above. The computer program may be made available as a computer program product comprising program code recorded on a recording medium, or may be available for transfer via a data transfer medium such as for download via the Internet.
BRIEF DESCRIPTION OF DRAWINGS
Embodiments of the invention are described below in more detail, by way of example, with reference to the accompanying drawings in which:
Figure 1 is a schematic representation of a data processing system in which a single messaging system processes request messages and sends update instructions to a database, such as is known in the art;
Figure 2 is a schematic representation of a distributed data processing system according to an embodiment of the invention;
Figure 3 provides a schematic representation of a distributed data processing system according to another embodiment of the invention;

Figure 4 is a schematic representation of a request dispatcher updating a claims table, according to an embodiment of the invention;
Figure 5 is a schematic representation of a request dispatcher attempting to update a claims table and being rejected;
Figure 6 is a schematic flow diagram representation of a method according to an embodiment of the invention;
Figure 7 shows a sequence of steps of a method according to an embodiment of the invention, in which a request message is successfully processed;
Figure 8 shows a sequence of steps of a method in which a duplicate message is not permitted to update a database; and
Figure 9 represents relative processing overheads for various alternative approaches to handling service requests.
DESCRIPTION OF EMBODIMENTS
The present invention has particular advantages for high availability data processing environments, where it is necessary to be able to access a data store and to apply updates to the data store with minimal delays and minimal impact from failures.
As is known in the art, high availability may be achieved in a variety of ways. A first known solution employs complex hardware that includes many redundant components, and specialized additional components to arrange failover between the redundant components. These redundant components may include processors, memory, controllers, hard drives and communication buses. Such complex hardware is complemented by appropriate computer software. An example of such a high availability system is an IBM zSeries data processing system running IBM's DB2 database system software and IBM's z/OS operating system. IBM, zSeries, z/OS and DB2 are trademarks of International Business Machines Corporation.

A second known solution provides many simple systems, each prone to failure but relatively inexpensive, with a relatively simple software mechanism to share work between them and to avoid failed systems. Each instance of the software running on such systems is also typically less resilient than in a high availability system, relying on the multiplicity to ensure that there are always sufficient systems operating at any one time. This second solution is generally used for high volume processing and stateless logic, but is more difficult to use for stateful information, because of the problems of deciding which is the definitive instance of data when there are multiple copies.
Referring again to the first known solution as described above, Figure 1 is a schematic representation of a distributed data processing system in which a high availability data store (HADB) 10 is associated with a messaging system comprising a queue 80 and a message dispatcher 20 that is used to dispatch data access operations resulting from execution of business logic 30. The business logic 30 has responsibility for determining what update is required to the HADB and then cooperates with the HADB to manage the update, and to coordinate the update with the removal of the message from the queue. By way of example, the data store 10 is shown in Figure 1 distributed across three or more servers 12, 14, 16 that each have their own data processing and data storage capability. A plurality of requestors 40,50,60 submit requests for data access via the message dispatcher 20.
To achieve high availability for the system as a whole, the messaging system 20 and its business logic 30 may be integrated within the high availability server that holds the HADB. As noted above, the HADB may be a DB2 database from IBM Corporation and the server may be an IBM zSeries computer system running the z/OS operating system, which includes redundant processors and storage and failover recovery features to provide maximum reliability and availability. Although this reliability and availability are desirable, and are critical requirements for some applications, the message dispatcher and business logic nevertheless impose an undesirable processing overhead on the high availability server. For example, assurance of once-only updates to the HADB typically involves two-phase commit processing to protect against failures between an HADB update and definitive deletion of the corresponding request message. In particular, the insert of messages by requestors 40,50,60 into the queue may require coordination between the message store and other resources used by the requestors. This coordination imposes significant overhead for a highly available message store. The present invention mitigates this problem.
Alternatively, message dispatcher and business logic functions may be implemented within a standard application server running on hardware that does not have high availability features. Such application servers are well known in the art, and products such as WebSphere Application Server from IBM Corporation can be run on relatively inexpensive computer systems that do not provide comprehensive support for high availability. WebSphere is a trademark of International Business Machines Corporation.
It is also known to mix the different processing styles described above, running some data processing systems on low cost duplicated hardware but accessing a robust high availability database (HADB) for long term data storage. The approach of separating the business logic and message dispatcher functions from the HADB system has a lower impact on the high availability server that is running the HADB, at least during normal forward processing. However, running the business logic and message dispatcher on a standard application server and low cost hardware increases the risk of delay for 'orphan requests'. That is, a request that is sent to a failed application server is delayed until the application server is back on-line.
The present invention mitigates this problem. In a first embodiment, as described in detail below, a plurality of distributed service requestors can input service requests to a request processing system. The request processing system includes a plurality of inexpensive request processing components communicating with a high availability data store. The service requests are replicated to at least two of the plurality of request processing components. Any request processing component that has not successfully claimed responsibility for the service request is prevented from processing the service request, to avoid duplication of updates to the data store. A first request processing component claims responsibility for the service request, and then processes its replica of the claimed service request - including accessing data within the data store. All of the plurality of request processing components are prevented from entering a duplicate claim to responsibility for the service request, and this assured once-only claiming of responsibility for a service request prevents duplication of updates to the data store.
Figure 2 shows a distributed data processing system according to an embodiment of the present invention. As in Figure 1, a number of distributed parallel requestors 40, 50, 60 are requesting a service, and the business logic of the service requires data access from a HADB. However, in the solution of Figure 2 the requests are replicated across two or more of a plurality of related queues 80,82,84,86 that are associated with a plurality of message dispatchers 20,22,24,26 arranged in parallel. Where the requestors of Figure 1 place messages in the input queue of a single message dispatcher 20, in Figure 2 the messages are enqueued for distribution via a replicator 70. The replicator 70 fulfils the limited role of replicating messages to a chosen subset, k, of the set of N parallel message dispatchers, where 1 ≤ k ≤ N. In a particular implementation, k is 2; in another implementation, k is 3.
As shown in Figure 2, the replicator 70 may be a receiving component to which all requests are sent by the requestors 40,50,60. In another embodiment (described later with reference to Figure 3), the replicator function may be implemented at the requestor's system or at an intermediate proxy server elsewhere in the network.
According to the embodiment of Figure 2, the distribution of requests by the replicator 70 is a k-way fan-out distribution that places each message in at least two message queues (80,84); but see the description below relating to exceptions. In the embodiment as illustrated, each message dispatcher is associated with a single queue - for example, message dispatcher 20 is associated with queue 80. Alternative embodiments permit a more complex association between queues and message dispatchers. The selection of the particular k queues by the replicator may be as simple as a round-robin allocation from within the set N, but other selections using workload balancing techniques may be implemented.
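The k-way fan-out with round-robin queue selection described above can be sketched as follows. This is an illustrative sketch only; the Replicator class, its method names and the use of simple Python lists as stand-ins for the message queues are assumptions for the purpose of illustration, not part of the described embodiment.

```python
class Replicator:
    """Illustrative k-way fan-out: each message is copied to k of the N queues,
    chosen by simple round-robin (queues are modelled as Python lists)."""

    def __init__(self, queues, k=2):
        if not 1 <= k <= len(queues):
            raise ValueError("require 1 <= k <= N")
        self.queues = queues
        self.k = k
        self._next = 0

    def replicate(self, message):
        # Enqueue a replica of the message on the next k queues in
        # round-robin order.
        for _ in range(self.k):
            self.queues[self._next % len(self.queues)].append(message)
            self._next += 1
```

A workload-balancing replicator would differ only in how the k queues are chosen; the fan-out itself is unchanged.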
In one embodiment of the invention, the number k (as well as the choice of which particular k queues) is determined on a per-message basis, selecting a higher value of k for relatively high value and urgent messages and a lower value of k for low value messages. A particular embodiment provides for exceptions to the replication of messages, allowing selection of a single message dispatcher (k=1) for identifiably low value or non-urgent messages. Thus these messages will be processed as in the prior art, and will not benefit from the invention, but will not prevent the application of the invention to other messages.
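The per-message choice of k may, purely by way of illustration, be expressed as a small policy function. The value thresholds and rules below are hypothetical; the embodiment requires only that higher value or urgent messages receive a larger k, and that identifiably low value, non-urgent messages may receive k=1.

```python
def choose_replication_factor(value, urgent, n_dispatchers):
    """Hypothetical per-message policy: urgent or high-value messages get a
    larger k; identifiably low-value messages get k=1 (no replication)."""
    if urgent or value >= 1000:          # high value or urgent: up to k = 3
        return min(3, n_dispatchers)
    if value < 10:                       # low value, not urgent: prior-art path
        return 1
    return min(2, n_dispatchers)         # default: replicate to two dispatchers
```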
Referring briefly to Figure 3, this differs from Figure 2 by the different implementation of the distributed replication function. Each requestor makes use of the services of a respective replicator 70,71,72, which may be local to and dedicated to the individual requestor 40,50,60 or may be located at some intermediate location within a distributed network. For each replicator, the number k of queues to send replica requests to, and the selection of a particular set of k queues, may be determined dynamically or may be predefined as implied by Figure 3.
In the embodiments of both Figures 2 and 3, the message dispatchers (20,22,24,26) each implement request dispatcher functions and business logic (30,32,34,36) that can apply updates to the HADB 10. In a first embodiment, the HADB is viewed by the message dispatchers as a single system and any HADB-internal segmentation for efficiency or availability is invisible to the requestors and message dispatchers. In such a system, the message dispatcher is responsible for retrieving a message from its input queue and invoking the business logic, and for requesting insertion of an entry in a claims table, but routing within the HADB is not the responsibility of the message dispatchers. As noted above and described in more detail below, the 'claims table' is a repository of service request identifiers, identifying those service requests for which a request processing component has claimed responsibility. Solutions in which a HADB structure is invisible to external requestors, and HADB-internal features are responsible for identifying a required HADB segment, are well known in the art. The business logic is typically application-specific, for example processing an order for goods or services and generating a specific update instruction to apply to the HADB to reflect that order. Because of the replication of messages to more than one dispatcher, the message dispatchers (20,22,24,26) and their input queues (80,82,84,86) are not required to implement high availability features, as explained below.
The HADB 10 includes, in addition to the HADB business data 90, a claims table 100. The claims table 100 is a database table that has a single key column for containing a request identifier (such as a message identifier, or another request identifier). Only one of the message dispatchers is allowed to insert an entry into the claims table for any particular request identifier. A claims manager 110 is associated with the claims table 100, and is responsible for checking the claims table before the table can be updated. In the current embodiment of the invention, the functions of the replicator 70, message dispatcher 20, business logic 30 and claims manager 110 are implemented in software, although their logical operations could be implemented in hardware, for example using electronic circuitry.
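The insert-once behaviour of the claims table can be illustrated with a minimal in-memory stand-in. The ClaimsManager class below is an assumption for illustration only; in the described embodiment the claims table is a keyed database table protected by the HADB's own locking, not a Python dictionary.

```python
import threading

class ClaimsManager:
    """In-memory stand-in for the claims table: at most one successful claim
    is permitted per request identifier."""

    def __init__(self):
        self._claims = {}                # request id -> claiming dispatcher
        self._lock = threading.Lock()

    def try_claim(self, request_id, dispatcher_id):
        with self._lock:
            if request_id in self._claims:
                return False             # duplicate claim is rejected
            self._claims[request_id] = dispatcher_id
            return True
```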
The claims table may also contain additional information such as the time of a successful claim to responsibility for a service request, and the identity of the claimant. This is not essential to the operation of the invention, but may provide valuable diagnostic and statistical information about system behaviour.
In practice, the operation of the claims manager is likely to be partitioned between claims manager specific code executing in the distributed processor of the message dispatcher, and generic database locking code operating within the HADB. This is shown in more detail later.
In one embodiment, the message dispatcher functions and business logic are implemented within Message-Driven Beans, which are Enterprise JavaBeans (EJBs) implementing the Java Messaging Service (JMS), allowing J2EE applications to asynchronously process JMS messages. In this embodiment, the application servers running the message dispatchers are J2EE application servers that receive JMS messages from many distributed requestors (J2EE application clients or other JMS applications or systems). A single Message-Driven Bean can process messages from multiple clients. The logic of the claim manager 110 can be implemented as a function of the EJB container that calls methods of the Message-Driven Beans. When a message arrives, the container first calls the claim manager logic 110 to ensure that the message is not a replica of a message that has already been processed. The container then calls a method of the Message-Driven Bean, which identifies the particular JMS message type and handles the message according to the application-specific business logic (as in standard J2EE solutions). This application-specific business logic processes the information within the message to generate a specific database update instruction which is then sent to the HADB. Java and Java-based trademarks are trademarks of Sun Microsystems, Inc.
In an alternative embodiment, the dispatching of messages is implemented by a conventional messaging program performing a message GET operation. In this case, the claims manager 110 becomes part of the messaging system, and the claims table 100 is an HADB extension to the message store, but the rest of the message store may be implemented using a different technology.
Any of a potentially large number of requestors 40,50,60 may request a service by sending a request message via a replicator 70. Each message is replicated and enqueued at a subset k of the available queues 80,84. Any of the associated k message dispatchers (20,24) may perform the following operations for a replica message. The operations are described with reference to Figures 4, 5, 6, 7 and 8. On receipt of a request message by one of the message dispatchers, an HADB transaction is started 210, and then the current message dispatcher 24 retrieves 220 a first message from its input queue 84. These steps are shown in Figures 6, 7 and 8. Updates to the business data and claims table of the HADB (described below) take place within the same HADB transaction, to ensure that both the claims table and business data are successfully updated or, if an update is unsuccessful, both are backed out.
The message dispatcher 24 attempts 230 to insert a claim 130 to this message in the claims table 100 of the HADB 10, in particular attempting to insert the unique request identifier as a new row in the claims table. Before a claim is inserted, a scan of the request identifiers is performed 240 (a logical scan corresponding to scanning all of the rows of the claims table, but performed against an index of the claims table to avoid the need for a full physical scan of the table). This scan identifies any matching request identifier. As shown in Figures 5, 6 and 8, the attempted claim does not succeed at this time if another message dispatcher is determined 250 to have already entered this request identifier in the claims table; but the claim succeeds if there is no matching entry in the claims table, as shown in Figures 4, 6 and 7. On successful entry 260 of a claim in the claims table 100, the HADB locks 270 this row of the claims table to prevent changes by other message dispatchers 20,22,26. A copy of the retrieved message remains in the message dispatcher's input queue 84, but the message status is implicitly changed by the action of updating the claims table, from a "received" status (i.e. the message is available on the queue and ready for retrieval) to a "retrieved but in doubt" status (i.e. the message has been retrieved from the queue for processing, but processing of the message is not yet committed).
Referring to Figures 4, 6 and 7, the first message dispatcher 24 (from a set of k dispatchers) that attempts to insert a new request identifier into the claims table 100 is successful. The claims table is extended by inserting 260 a new row 130 that contains the request identifier, and this first dispatcher obtains 270 a lock on that row of the claims table using standard HADB locking mechanisms. Specifically, the HADB implements locks on the claims table in response to requests from the claims manager 110, maintaining a record of which message dispatcher 24 is associated with a locked row. The row locks prevent modification by other message dispatchers 20,22,26 while the first message dispatcher's message is being processed (i.e. until commit or abort of an HADB transaction and message transaction - as described in more detail below).
A new entry to the claims table is only permitted by the claims manager 110 if the table currently contains no entry for the particular request identifier, so duplicate table entries are not possible. If the claims manager 110 finds 240,250 an entry in the claims table with a matching request identifier, as shown in Figures 5, 6 and 8, the claim entry obtained by the first message dispatcher 24 prevents changes being made on behalf of any other request dispatcher.
If the matching claims table row 130 is found 290 to be locked on behalf of the first message dispatcher 24, a different message dispatcher's request to insert a duplicate entry in the claims table is held in memory to await 300 unlocking of the row of the claims table that is holding the matching entry. The first message dispatcher's claims table entry 130 may be deleted or committed before being unlocked, depending on the outcome of the corresponding HADB transaction and message transaction, so it is not possible at this stage to predict whether a subsequent attempt to update the claims table will succeed or fail. This is why duplicate claims requests may be held in memory to enable a retry. While a request to update the claims table is held in memory, the copy of the message on the input queue of the message dispatcher 20 now has implicit status "retrieved but in doubt". The HADB has not commenced processing this replica request's data update, but another replica of the same request is being processed and so the claim request for the current replica is held in memory to await the outcome of that processing. Thus, all replicas of a message for which an HADB update is attempted now have the same implicit status "retrieved but in doubt", while they remain uncommitted on their respective message dispatcher's input queue 80,82,84,86. There is no need for the message dispatchers 20,22,24,26 to write any explicit status information because the implicit status is easily determined from the claims table 100; if an attempt is made to retrieve and process a replica message the status is easily determined, and the status is unimportant until such retrieval and update are attempted.
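The "retrieved but in doubt" handling described above, in which a duplicate claim is held in memory until the owning transaction commits or aborts, or until a timeout, can be sketched with a condition variable. The ClaimRow class, its method names and its state names are illustrative assumptions; the described embodiment relies on the HADB's row locking rather than application-level condition variables.

```python
import threading

class ClaimRow:
    """Illustrative in-doubt claims-table row: competing claimants block until
    the owning transaction commits or aborts, or until a timeout expires."""

    def __init__(self, owner):
        self.owner = owner
        self.state = "in_doubt"          # later "committed" or "aborted"
        self._cond = threading.Condition()

    def resolve(self, outcome):
        # Commit or abort: record the outcome and signal waiting claimants.
        with self._cond:
            self.state = outcome
            self._cond.notify_all()      # the 'unlocked' signal

    def await_outcome(self, timeout=None):
        # A held duplicate claim waits here; on timeout the state is still
        # "in_doubt" and the caller may retry later.
        with self._cond:
            self._cond.wait_for(lambda: self.state != "in_doubt", timeout)
            return self.state
```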
Alternatively, if the matching claims table row has already been committed and unlocked, any subsequent attempts to insert a duplicate entry in the claims table must fail. In this case a failure report is returned 310 to the message dispatcher that is attempting to insert the duplicate entry in the claims table, and the message dispatcher deletes 320 the corresponding message from its input queue.
Row-specific locks are obtained on the claims table, rather than a less granular lock on the entire table or the page that contains a relevant claims table entry, because the replication of requests makes it likely that there will be frequent conflicting attempts to update the claims table. A more granular row lock will reduce the number of requests that must be held up compared with a less granular page lock, avoiding conflicts between different messages that would occur if page locks were implemented. Despite the likelihood of frequent conflicts at the claims table level, the handling of claims requests requires little processing within the high availability system and, in particular, can involve less of a performance cost than if all of the message dispatcher functions and business logic were implemented within the high availability system. One embodiment of the invention also reduces conflicts between multiple replicas of the same message, by introducing a slight 'stagger' in the timing of replicas generated by the replicator 70. Typically, where all message dispatchers are operating at the same rate, the first released replica will be completely processed before an attempt is made to schedule the second replica. Where the first dispatcher is temporarily inoperable, the second replica will still be processed in a timely manner. There will be cases where there are simultaneous attempts to process both replicas, but the number of such cases will be reduced.
As noted above, the prevention of conflicting updates may involve the HADB temporarily storing a new request to update the claims table (i.e. a claim request may be held in a queue in memory) in response to identifying 240,250 a matching claims table entry. The claims request awaits 300 the outcome of a service request that is currently being processed (or a timeout 330 if this occurs earlier) and only then decides whether to accept or reject the new claim request. A claims request that is held in memory will remain until an 'unlocked' signal is received from the claims manager or until a timeout, whichever occurs first. A timeout may be followed by a repeated attempt to update the claims table on behalf of the timed-out message dispatcher, for example after a predefined delay period. An 'unlocked' signal prompts a repeated scan 240 of the claims table 100 to identify 250 any matching request identifier for the claim request.
When a claims table update succeeds 260, a lock is obtained 270 on the relevant row of the claims table and a lock is obtained on the relevant part of the HADB (for example, locking a database row or page 140 on a specified server) - as shown schematically in Figure 4 in which an HADB manager 120 obtains a page lock. Both locks are implemented using the standard mechanisms available within the HADB. Successful updating of the claims table 100 is a pre-requisite to the database manager 120 updating the business data 90 within the HADB and only one message dispatcher 24 can update the claims table for each request. Updating of the business data 90 can only occur when there are no conflicting locks within the HADB. This ensures once-only updating of the HADB business data for each replicated request, despite the k-way replication of update requests and despite each of the queue data, the message dispatcher and business logic being implemented outside the high availability system of the HADB. As noted above, the identification of the relevant part of the HADB may be performed within the HADB itself or as a function of the message dispatcher - in either case using conventional data access mechanisms. Write access to the locked part 140 of the HADB is now reserved for the business logic 34 associated with the current message dispatcher 24 (and in some implementations the lock reserves exclusive read access as well). The business logic 34 within the message dispatcher 24 is executed to process 350 the request and to determine any required updates to the HADB. The determined updates are then applied 360 to the HADB business data 90.
For example, the business logic may process a message that specifies an order for goods or services, updating a database of orders. Another example request may be a request for retrieval of status information relating to an existing order, perhaps checking progress on the preparation or delivery of a product. Many other business applications may be implemented using the present invention.
If the processing 350,360 of a request is successful 370, all changes to the HADB (including the claims table entry and business data updates) are committed 380. Commit of the HADB transaction is then confirmed to the message dispatcher system, which then deletes 320 the processed message.
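The success path just described (claim, business-logic update, commit, and deletion of later replicas) can be condensed into a short sketch. The function name and the dictionary stand-ins for the claims table and business data are hypothetical; in the embodiment both updates occur within a single HADB transaction.

```python
def process_replica(claims, business_data, message, dispatcher_id):
    """Once-only processing sketch: the business update is applied only if the
    claim succeeds; a replica of an already-claimed request is discarded.
    'claims' and 'business_data' are dictionary stand-ins for HADB tables."""
    request_id = message["id"]
    if request_id in claims:
        return "deleted_duplicate"       # claim rejected; delete the replica
    claims[request_id] = dispatcher_id   # insert claim (and lock the row)
    business_data[request_id] = message["payload"]   # business-logic update
    return "committed"                   # claim and data commit together
```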
Despite commit of the message transaction by one successful message dispatcher, replicas of the message may remain for a period on input queues 80,82,86 of other ones of the k message dispatchers 20,22,26. Each remaining replica of the message now has an implicit status of committed, and merely awaits deletion from its respective queue.
When the processing of a request has been committed 390, following successful processing 350,360 of the request, any new attempts to insert a claim in the claims table for another replica of the same request will be rejected. In this case, the comparison between a request message's unique request identifier and the set of request identifiers in the claims table will identify a match, and the matching request now has a "committed" status that is reflected by the unlocked claims table entry. That status is sufficient to confirm that the message should now be deleted from the input queue 80 of the respective message dispatcher 20 that is unsuccessfully attempting to add a duplicate entry into the claims table.
Other replicas of the message should also be deleted, although a definitive delete instruction for a committed request message does not require instantaneous deletion of all replicas. A committed HADB update is effective as a definitive instruction to commit the message retrieval (i.e. delete the message from the input queues of all message dispatchers) but the actual deletion for a particular input queue may be deferred until the corresponding message dispatcher attempts to enter a claim in the claims table.
The above-described saving of duplicate claim requests to memory while a first message processing is in progress (not yet committed), and rejecting duplicate claims requests and deleting the corresponding replica messages if the first message processing is successful, entails relatively little processing overhead and in particular minimizes expensive processing in the high availability system and HADB.
The successful processing and deletion of the winning replica does not involve expensive two phase commit operation between the HADB and the message store. If the message dispatcher should fail between the commit operation 380 and the delete message operation 320 for the winning replica message, the claims mechanism will prevent re-execution of the winning replica message in exactly the same way as it prevents execution for losing replicas. It will be appreciated that the mechanism to prevent re-execution of a single instance message is already known in the art (sometimes referred to as "one-and-a-half phase" commit).
As has been observed above, even in a system where this invention is used, the replicator 70 may choose not to take advantage of the invention for certain lower-value messages. It may be that these messages are tagged by the replicator as non-replicated, and handled in a special manner. In other embodiments, low value and non-urgent request messages are handled using the claim mechanism described above to achieve the benefit of one-and-a-half phase commit. In an alternative embodiment, identification of a locked matching row of the claims table may trigger a reply to the respective message dispatcher with an explicit rejection of a claims request (instead of the above-described saving of claims requests to memory). This may occur where the database uses a 'presumed commit' locking strategy. This explicit rejection would leave the replica message on the respective message dispatcher's queue until a subsequent retry of the attempt to update the claims table, and this is likely to involve more processing than in embodiments in which requests to update a locked row of the claims table are temporarily saved to memory.
In either of the above-described embodiments, an attempt to update the claims table may be rejected while the processing of a first replica of the request message is still in progress (i.e. the processing is not yet complete and could still fail). Therefore, the message dispatcher holding a request message corresponding to a rejected claims request is able to subsequently retry updating the claims table, unless the message dispatcher has received an implicit instruction to delete the request (i.e. a claims table update failure report in response to a determination that a matching claims table entry is unlocked, since this implies that the message processing was successfully completed and committed). Such a subsequent retry may be performed following a predefined delay period, or the frequency of retries may depend on current system activity.
If a failure is experienced during the message processing, the HADB transaction must be aborted 400,410. In this situation, the claims table entry is deleted and the claims table row is unlocked 410. Depending on the nature of the failure, the message dispatcher may take various actions, including [a] leaving the replica message on the input queue for later reattempted processing, [b] moving the replica message to a dead letter queue, or [c] deleting the replica message and sending a failure notice to the original requestor. Alternatively, it may be the message dispatcher itself that fails, in which case the system will implicitly abort the HADB transaction (including removing the in-doubt claims table entry) and the replica message will remain on the input queue for subsequent retry. These examples of failure processing are known in the art. A particular optimization is implemented in one embodiment of the invention to reduce HADB processing overheads. Where a new HADB transaction was started, but a claims table update was unsuccessful because a replica of a particular request had already been processed, the new HADB transaction is not terminated. Instead, a next request message is retrieved from the particular message dispatcher's input queue within the first HADB transaction. This is shown in Figure 8. The message dispatcher then attempts to add a claims table entry (entering the request identifier in the table) for this next request message, and either succeeds or fails for one of the previously-described reasons. In this way, the transaction processing mechanisms that are involved in the achievement of once-only updating of the HADB are optimised to reduce the processing overhead (compared with solutions that are required to start and end a transaction for each attempt to add an entry to the claims table).
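The transaction-reuse optimization may be sketched as follows, counting how many HADB transactions are started while a dispatcher drains its input queue. All names are illustrative assumptions; the point of the sketch is that a rejected claim leaves the already-open transaction available for the next message.

```python
def drain_queue(queue, claims, business_data, dispatcher_id):
    """Sketch of the optimization: an HADB transaction opened for a claim that
    fails is kept open and reused for the next message, instead of being
    ended and restarted for every claim attempt."""
    transactions_started = 0
    tx_open = False
    for message in queue:
        if not tx_open:
            transactions_started += 1    # begin a new HADB transaction
            tx_open = True
        if message["id"] in claims:
            continue                     # claim failed: reuse open transaction
        claims[message["id"]] = dispatcher_id
        business_data[message["id"]] = message["payload"]
        tx_open = False                  # commit ends the transaction
    return transactions_started
```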
Table 1 in Figure 9 represents some of the processing overheads (numbers of transactions and which resources are involved) for a number of alternative approaches to handling requests for services that require access to data within a HADB. A solution that is fully implemented within a high availability system, in which the queue, business logic and HADB are all implemented in the same system, involves two transactions within the high availability system. The message insert transaction includes logging the data of the message, and may be coordinated with another high availability system resource. Both of these transactions involve significant overheads for high availability system resources. This is shown as solution (A) on the left hand side of Table 1.
A solution (B) that implements the queues and business logic outside of the high availability system avoids any queue management activity within the high availability system.
However, a typical approach would not provide the desired assurance of once-only message processing, for example because of the delayed orphan message problem mentioned above. Also, the message processing transaction is likely to involve coordination between HADB resources and the external queuing system, which is more expensive than an internally-coordinated transaction. This solution (B) is represented in the middle columns of Table 1. An embodiment of the present invention is represented as solution (C) on the right hand side of Table 1. This is likely to involve many more operations overall, and so is counter-intuitive, but most of these operations do not impact the high availability system. Only one transaction is necessary within the high availability system, and this transaction is coordinated with other data held in the same HADB and using the same claims manager 110 and HADB manager 120. Also, as the potentially large volume of data on the queue is not saved on the high availability system, this saves logging on the high availability system.
In the embodiments described in detail above, the message dispatchers and associated business logic do not have an awareness of the organisation of data within the HADB. In an alternative embodiment, either the message dispatchers or the associated business logic are implemented with an awareness of HADB segmentation and structure, and include routing/selection logic to ensure the most efficient path to the appropriate HADB segment. Such routing and selection logic is well known in the art. In such an embodiment, the claim manager can segment the claims table for greater affinity between application data segments and claims table segments.
In another example embodiment, messages are handled by each message dispatcher in batches for more efficient message handling. This batching increases the likelihood of two or more message processing components attempting to enter conflicting claims in the claims table, but the handling of conflicting claim attempts is a relatively low processing cost. In particular, the impact of conflicting claim attempts on the HADB is low. The operations performed by the message dispatcher according to this embodiment are represented below:
Loop on message batches
    Start message transaction
    [Loop on messages]
        Start HADB transaction
        Loop until claim succeeds
            Get message
            Attempt to insert claim in HADB claims table
            If claim attempt fails, loop to attempt claim of next message
        If claim succeeds, continue
        Perform business logic, updating HADB
        Commit HADB
    [loop to next message]
    Commit message[s]
Loop to next message batch
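A runnable rendering of the batch loop above is given below, with in-memory dictionaries standing in for the queues, transactions and claims table. All names are hypothetical and the transactions are implicit; the sketch shows only the claim-then-update control flow.

```python
def process_batches(batches, claims, business_data, dispatcher_id):
    """Runnable rendering of the batch loop: dictionaries stand in for the
    claims table and business data; the message and HADB transactions that
    bracket the loops are implicit."""
    committed = []
    for batch in batches:                # loop on message batches
        for message in batch:            # message transaction covers the batch
            if message["id"] in claims:  # claim attempt fails:
                continue                 #   loop to claim the next message
            claims[message["id"]] = dispatcher_id          # claim succeeds
            business_data[message["id"]] = message["payload"]  # business logic
            committed.append(message["id"])                # commit HADB
        # commit message(s), then loop to the next batch
    return committed
```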
The embodiments described above include a mechanism whereby duplicates of a service request message are deleted, during normal processing, in response to failure of an attempt to insert an entry in the claims table. This mechanism may be augmented by further mechanisms. For example, a successful message dispatcher may notify other message dispatchers of its success, and the other message dispatchers may then delete their replicas of the message. Additionally, mechanisms may be provided for cleaning up the claims table, which must be accompanied by a clean-up of associated replicas of a successfully processed message from input queues of the message dispatchers.
Many messages do not require once-and-once-only processing. For example, messages requesting a simple read-only database query can be duplicated without sacrificing data integrity. The present invention mitigates problems associated with the cost of ensuring once-and-once-only execution of messages, but it may still be preferable to avoid the need for even this optimized once-and-once-only processing. As is known in the art, methods for avoiding unnecessary once-only processing include: explicit tagging of the required quality of service in the message, and the use of assumptions such as the assumption that non-persistent messages need not incur the overhead of once-and-once-only processing. Some systems include explicit support, such as IBM Corporation's WebSphere MQ products in which one of the message get options is 'syncpoint if persistent'.
Various embodiments of the invention have been described above to provide a detailed illustration of how the invention may be implemented in different embodiments and to highlight some of the advantages of particular embodiments of the invention. The invention is not, however, limited to the particular exemplary embodiments described above and various modifications will occur to persons skilled in the art that are within the scope of the invention as set out in the attached claims.

Claims

1. A method for managing service requests, for use in a data processing environment that includes a data store and at least one service requestor, the method comprising the steps:
replicating a requestor's service request to at least two of a plurality of request processing components, the plurality of request processing components each being located within a communication path between the requestor and the data store;
preventing any request processing component that has not successfully claimed responsibility for the service request from processing the service request;
a first request processing component claiming responsibility for the service request; and
the first request processing component processing its replica of the claimed service request, including accessing data within the data store.
2. A method according to claim 1, wherein the step of claiming responsibility comprises a step of entering an identifier for the service request in a repository in the data store, and wherein the method further comprises preventing any of the plurality of request processing components from entering a duplicate identifier for the service request in the repository.
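The duplicate-prevention step of claim 2 can be sketched with a database unique constraint: the first component to enter the request identifier in the repository wins the claim, and any later attempt to enter the same identifier fails. This is an illustrative sketch only; the table and column names are invented, and SQLite stands in for whatever data store an implementation would use.

```python
# Sketch (not the patented implementation): claiming responsibility by
# inserting the request identifier into a repository table whose primary-key
# constraint rejects duplicate claims, so exactly one component wins.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE claims (request_id TEXT PRIMARY KEY)")

def try_claim(request_id):
    # True if this component claimed the request; False if another component
    # already entered the identifier in the repository.
    try:
        conn.execute("INSERT INTO claims (request_id) VALUES (?)", (request_id,))
        return True
    except sqlite3.IntegrityError:
        return False

first = try_claim("req-42")   # first replica claims successfully
second = try_claim("req-42")  # duplicate identifier is rejected
```

Delegating uniqueness to the data store means no separate coordination protocol is needed among the request processing components.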
3. A method according to claim 1 or claim 2, wherein the data processing environment comprises a plurality of distributed service requestors and the data store comprises a database running on a high availability data processing system.
4. A method according to any one of claims 1 to 3, wherein the request processing components comprise business logic for requesting updates to data within the data store in response to a service request, and wherein the step of preventing any request processing component from processing the service request comprises a step of preventing execution of the respective request processing component's business logic.
5. A method according to any one of claims 1 to 3, wherein complete processing of the service request requires updating of data within the data store, and wherein the step of preventing processing of the service request comprises preventing a completed processing by preventing updating of data within the data store.
6. A method according to claim 5, wherein the step of preventing processing of the service request comprises preventing a completed processing by preventing both read and write access to data within the data store.
7. A method according to claim 2, wherein the request processing components are messaging systems, and wherein the replicated service requests are messages enqueued on input queues of at least two of the messaging systems, and wherein the step of entering an identifier is performed on behalf of a first messaging system in response to a successful retrieval of a message from an input queue of the first messaging system.
8. A method according to claim 2, wherein the request processing components are messaging systems, and wherein the replicated service requests are messages enqueued on input queues of at least two of the messaging systems, and wherein the step of preventing any of the plurality of request processing components from entering a duplicate identifier comprises preventing a message retrieve operation from retrieving any messages for previously claimed service requests.
9. A method according to claim 2, wherein the step of entering an identifier includes the step of locking said identifier until the step of processing the replica service request is complete, and wherein the step of preventing any request processing component entering a duplicate identifier in the repository includes examining said lock and deferring or rejecting an attempt to enter an identifier until the lock is released.
10. A method according to claim 9, wherein identification of an unlocked identifier within the repository is recognised as a confirmation that the identified service request has been successfully processed, and any request processing component attempting to enter a duplicate identifier in the repository responds to the unlocked identifier by deleting the respective request processing component's replica of the service request.
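The lock-state logic of claims 9 and 10 can be sketched as follows: a locked identifier causes a competing component to defer, while an unlocked identifier already present in the repository is read as confirmation that the request completed, so the competing component deletes its own replica. The in-memory repository and all names below are illustrative assumptions, not the patented implementation.

```python
# Hedged sketch of claims 9-10: interpret the lock state of a repository
# entry to decide whether to claim, defer, or discard a duplicate replica.
repository = {}  # request_id -> "locked" | "unlocked" (illustrative)

def attempt_claim(request_id, replicas):
    state = repository.get(request_id)
    if state is None:
        repository[request_id] = "locked"   # new claim, locked until done
        return "claimed"
    if state == "locked":
        return "deferred"                   # another component is processing
    # Unlocked entry: request already processed successfully, so this
    # component deletes its replica of the service request.
    replicas.discard(request_id)
    return "already-done"

replicas = {"req-1"}
r1 = attempt_claim("req-1", replicas)   # first attempt claims and locks
r2 = attempt_claim("req-1", replicas)   # second attempt defers while locked
repository["req-1"] = "unlocked"        # processing completes, lock released
r3 = attempt_claim("req-1", replicas)   # duplicate replica is discarded
```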
11. A method according to any preceding claim, wherein the repository is a database table in which each table row includes a single service request identifier corresponding to a claim to responsibility for the identified service request.
12. A method according to claim 11, wherein the data store is a database that includes said database table, and wherein an effect of successful processing of the replica of the claimed service request is an update to the data store, and wherein the successful processing includes a single phase commit of the steps of processing the request and updating the data store.
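Claim 12's single-phase commit can be illustrated by placing the claim-table insert and the business-data update in one database transaction, so a single commit makes both durable and a duplicate claim rolls back the whole unit. The schema, request identifiers, and use of SQLite below are hypothetical, for illustration only.

```python
# Sketch of claim 12: the claim insert and the data-store update live in the
# same database and commit together in a single phase (no 2PC coordinator).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE claims (request_id TEXT PRIMARY KEY)")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100)")
conn.commit()

def process_request(request_id, delta):
    try:
        with conn:  # one transaction: claim + update commit as a unit
            conn.execute("INSERT INTO claims VALUES (?)", (request_id,))
            conn.execute(
                "UPDATE accounts SET balance = balance + ? WHERE name = 'alice'",
                (delta,),
            )
        return True
    except sqlite3.IntegrityError:
        return False  # duplicate claim: the whole transaction rolls back

ok = process_request("req-7", 25)    # first replica succeeds
dup = process_request("req-7", 25)   # duplicate replica changes nothing
balance = conn.execute("SELECT balance FROM accounts").fetchone()[0]
```

Because the duplicate insert aborts the transaction before commit, a replayed replica can never apply the update a second time.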
13. A method according to any one of the preceding claims, wherein the replicating of a requestor's service request comprises time-separating the replicas of the service request.
14. A data processing system comprising:
a data store;
a plurality of request processing components, wherein each of the plurality of request processing components is located within a communication path between at least one service requestor and the data store;
a replicator for replicating a service requestor's service request to at least two of the plurality of request processing components; and
a claims manager comprising:
functions for preventing any request processing component that has not successfully claimed responsibility for the service request from processing the request; and
functions for claiming responsibility for the service request on behalf of the first request processing component.
15. A data processing apparatus according to claim 14, wherein the functions for claiming responsibility comprise functions for inserting an identifier of the service request within a repository of the data store, and wherein the claims manager further comprises functions for preventing any of the plurality of request processing components from entering a duplicate service request identifier within the repository.
16. A set of computer program components, each comprising a set of instructions that are implemented in computer program code for controlling operations within a data processing environment that includes a data store and at least one service requestor, the operations combining to provide a method for managing service requests that comprises the steps:
replicating a requestor's service request to at least two of a plurality of request processing components, the plurality of request processing components each being located within a communication path between the requestor and the data store;
preventing any request processing component that has not successfully claimed responsibility for the service request from processing the request;
a first request processing component claiming responsibility for the service request; and
the first request processing component processing its replica of the claimed service request, including accessing data within the data store.
17. A method for managing service requests, for use in a data processing environment that includes a data store, the method comprising the steps of:
enqueuing a plurality of replicas of a service request on respective input queues of a plurality of message processing components, outside of a data store transaction;
within a data store transaction, a first message processing component retrieving a first replica of the service request and processing the first replica of the service request, including accessing data within the data store;
while preventing any of said plurality of message processing components other than the first message processing component from processing other replicas of the same service request unless the first message processing component experiences a failure.
18. A method according to claim 17, further comprising performing a single phase commit of the data store transaction in response to the first message processing component successfully processing the service request, and deleting replicas of the service request in response to an identification of the committed data store transaction.
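The flow of claims 17 and 18 can be sketched end to end: replicas of one request are enqueued on several components' input queues; the first component to retrieve a replica claims and processes it; and once the work commits, the remaining replicas are deleted. The queue names, the claim set, and the single-process simulation below are illustrative assumptions only.

```python
# Illustrative sketch of claims 17-18: enqueue replicas outside the data
# store transaction, let the first retriever claim and process one replica,
# then delete the surviving replicas after the commit.
from collections import deque

queues = {"mq1": deque(["req-9"]), "mq2": deque(["req-9"])}  # replicas enqueued
claimed, committed = set(), set()

def run_component(name):
    if not queues[name]:
        return "empty"
    request_id = queues[name][0]
    if request_id in claimed:        # another component already holds the claim
        return "blocked"
    claimed.add(request_id)          # claim within the data store transaction
    queues[name].popleft()           # retrieve and process this replica
    committed.add(request_id)        # single-phase commit of the work
    # Claim 18: after the commit, outstanding replicas of the request
    # are deleted from every other component's input queue.
    for q in queues.values():
        while request_id in q:
            q.remove(request_id)
    return "processed"

r1 = run_component("mq1")  # first component claims, processes, cleans up
r2 = run_component("mq2")  # its replica was already deleted
```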
PCT/EP2007/059651 2006-10-05 2007-09-13 Data processing system and method of handling requests WO2008040621A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US12/443,830 US9767135B2 (en) 2006-10-05 2007-09-13 Data processing system and method of handling requests
CN2007800336788A CN101512527B (en) 2006-10-05 2007-09-13 Data processing system and method of handling requests
EP07803467A EP2082338A1 (en) 2006-10-05 2007-09-13 Data processing system and method of handling requests
JP2009530828A JP5241722B2 (en) 2006-10-05 2007-09-13 Data processing system and method for request processing

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GBGB0619644.8A GB0619644D0 (en) 2006-10-05 2006-10-05 Data processing system and method of handling requests
GB0619644.8 2006-10-05

Publications (1)

Publication Number Publication Date
WO2008040621A1 true WO2008040621A1 (en) 2008-04-10

Family

ID=37453991

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2007/059651 WO2008040621A1 (en) 2006-10-05 2007-09-13 Data processing system and method of handling requests

Country Status (7)

Country Link
US (1) US9767135B2 (en)
EP (1) EP2082338A1 (en)
JP (1) JP5241722B2 (en)
KR (1) KR20090055608A (en)
CN (1) CN101512527B (en)
GB (1) GB0619644D0 (en)
WO (1) WO2008040621A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009259009A (en) * 2008-04-17 2009-11-05 Internatl Business Mach Corp <Ibm> Device and method for controlling execution of transaction

Families Citing this family (14)

Publication number Priority date Publication date Assignee Title
US8793691B2 (en) * 2010-04-15 2014-07-29 Salesforce.Com, Inc. Managing and forwarding tasks to handler for processing using a message queue
CN103503388B (en) * 2011-09-01 2016-08-03 华为技术有限公司 A kind of distributed queue's message read method and equipment, system
JP5874399B2 (en) * 2012-01-05 2016-03-02 株式会社リコー Processing equipment
CN103310334B (en) * 2012-03-16 2016-12-28 阿里巴巴集团控股有限公司 A kind of method and device of Business Processing
CN104145260B (en) * 2012-03-26 2016-08-10 华为技术有限公司 Method for processing business, performance element and the system of a kind of distributed job system
CN104423982B (en) * 2013-08-27 2018-03-06 阿里巴巴集团控股有限公司 The processing method and processing equipment of request
CN103455604A (en) * 2013-09-03 2013-12-18 北京京东尚科信息技术有限公司 Method and device for preventing repeated data processing
US10795910B2 (en) 2013-12-31 2020-10-06 Sybase, Inc. Robust communication system for guaranteed message sequencing with the detection of duplicate senders
GB2533086A (en) * 2014-12-08 2016-06-15 Ibm Controlling a multi-database system
CN107301179A (en) * 2016-04-14 2017-10-27 北京京东尚科信息技术有限公司 The method and apparatus of data base read-write separation
WO2017220721A1 (en) * 2016-06-22 2017-12-28 Siemens Convergence Creators Gmbh Method for automatically and dynamically assigning the responsibility for tasks to the available computing components in a highly distributed data-processing system
CN108921532A (en) * 2018-06-28 2018-11-30 中国建设银行股份有限公司 transaction request processing method, device and server
CN111866191B (en) * 2020-09-24 2020-12-22 深圳市易博天下科技有限公司 Message event distribution method, distribution platform, system and server
CN113220730B (en) * 2021-05-28 2024-03-26 中国农业银行股份有限公司 Service data processing system

Citations (1)

Publication number Priority date Publication date Assignee Title
WO2002041149A2 (en) * 2000-10-27 2002-05-23 Eternal Systems, Inc. Fault tolerance for computer programs that operate over a communication network

Family Cites Families (20)

Publication number Priority date Publication date Assignee Title
US5335325A (en) * 1987-12-22 1994-08-02 Kendall Square Research Corporation High-speed packet switching apparatus and method
US5555388A (en) * 1992-08-20 1996-09-10 Borland International, Inc. Multi-user system and methods providing improved file management by reading
DE59508633D1 (en) * 1994-04-08 2000-09-21 Ferag Ag Method and arrangement for packaging printed products
US5613139A (en) * 1994-05-11 1997-03-18 International Business Machines Corporation Hardware implemented locking mechanism for handling both single and plural lock requests in a lock message
US5748468A (en) * 1995-05-04 1998-05-05 Microsoft Corporation Prioritized co-processor resource manager and method
US6247056B1 (en) * 1997-02-03 2001-06-12 Oracle Corporation Method and apparatus for handling client request with a distributed web application server
JP3036487B2 (en) * 1997-10-14 2000-04-24 日本電気株式会社 Earth sensor
US6058389A (en) * 1997-10-31 2000-05-02 Oracle Corporation Apparatus and method for message queuing in a database system
US6594651B2 (en) * 1999-12-22 2003-07-15 Ncr Corporation Method and apparatus for parallel execution of SQL-from within user defined functions
US7136857B2 (en) * 2000-09-01 2006-11-14 Op40, Inc. Server system and method for distributing and scheduling modules to be executed on different tiers of a network
US7206964B2 (en) * 2002-08-30 2007-04-17 Availigent, Inc. Consistent asynchronous checkpointing of multithreaded application programs based on semi-active or passive replication
US7305582B1 (en) * 2002-08-30 2007-12-04 Availigent, Inc. Consistent asynchronous checkpointing of multithreaded application programs based on active replication
US7243351B2 (en) 2002-12-17 2007-07-10 International Business Machines Corporation System and method for task scheduling based upon the classification value and probability
US7334014B2 (en) * 2003-01-03 2008-02-19 Availigent, Inc. Consistent time service for fault-tolerant distributed systems
US20040225546A1 (en) * 2003-05-09 2004-11-11 Roland Oberdorfer Method and apparatus for monitoring business process flows within an integrated system
US7523130B1 (en) * 2004-01-28 2009-04-21 Mike Meadway Storing and retrieving objects on a computer network in a distributed database
US7376890B2 (en) * 2004-05-27 2008-05-20 International Business Machines Corporation Method and system for checking rotate, shift and sign extension functions using a modulo function
US7797342B2 (en) * 2004-09-03 2010-09-14 Sybase, Inc. Database system providing encrypted column support for applications
JP4319971B2 (en) * 2004-11-22 2009-08-26 株式会社日立製作所 Session information management system, session information management method and program thereof
US7792342B2 (en) * 2006-02-16 2010-09-07 Siemens Medical Solutions Usa, Inc. System and method for detecting and tracking a guidewire in a fluoroscopic image sequence


Non-Patent Citations (5)

Title
CHEN-LIANG FANG ET AL: "A redundant nested invocation suppression mechanism for active replication fault-tolerant web service", 2004 IEEE International Conference on e-Technology, e-Commerce and e-Service (EEE '04), Taipei, Taiwan, 28-31 March 2004, IEEE, Piscataway, NJ, USA, pages 9-16, XP010697565, ISBN: 0-7695-2073-1 *
CHEREQUE M ET AL: "Active replication in Delta-4", Fault-Tolerant Parallel and Distributed Systems, 1992 IEEE Workshop, Amherst, MA, USA, 6-7 July 1992, IEEE Comput. Soc, Los Alamitos, CA, USA, pages 28-37, XP010133167, ISBN: 0-8186-2870-7 *
FELBER P ET AL: "Experiences, strategies, and challenges in building fault-tolerant CORBA systems", IEEE Transactions on Computers, vol. 53, no. 5, May 2004, pages 497-511, XP011109093, ISSN: 0018-9340 *
MOSER L E ET AL: "The Eternal system: an architecture for enterprise applications", Proceedings of the Third International Enterprise Distributed Object Computing Conference (EDOC '99), Mannheim, Germany, 27-30 September 1999, IEEE, pages 214-222, XP010351695, ISBN: 0-7803-5784-1 *
NARASIMHAN P ET AL: "Enforcing determinism for the consistent replication of multithreaded CORBA applications", Proceedings of the 18th Symposium on Reliable Distributed Systems (SRDS '99), Lausanne, Switzerland, 19-22 October 1999, IEEE Comput. Soc, pages 263-273, XP000895536, ISBN: 0-7695-0291-1 *


Also Published As

Publication number Publication date
CN101512527A (en) 2009-08-19
GB0619644D0 (en) 2006-11-15
JP5241722B2 (en) 2013-07-17
US20090300018A1 (en) 2009-12-03
JP2010506277A (en) 2010-02-25
CN101512527B (en) 2013-05-15
US9767135B2 (en) 2017-09-19
KR20090055608A (en) 2009-06-02
EP2082338A1 (en) 2009-07-29

Similar Documents

Publication Publication Date Title
US9767135B2 (en) Data processing system and method of handling requests
US7716181B2 (en) Methods, apparatus and computer programs for data replication comprising a batch of descriptions of data changes
US10942823B2 (en) Transaction processing system, recovery subsystem and method for operating a recovery subsystem
US6988099B2 (en) Systems and methods for maintaining transactional persistence
US8271448B2 (en) Method for strategizing protocol presumptions in two phase commit coordinator
US5465328A (en) Fault-tolerant transaction-oriented data processing
US5095421A (en) Transaction processing facility within an operating system environment
US8140483B2 (en) Transaction log management
US8073962B2 (en) Queued transaction processing
US6442552B1 (en) Method and apparatus for implementing three tier client asynchronous transparency
US20040215998A1 (en) Recovery from failures within data processing systems
US7580979B2 (en) Message ordering in a messaging system
US20130110781A1 (en) Server replication and transaction commitment
US7203863B2 (en) Distributed transaction state management through application server clustering
US9104471B2 (en) Transaction log management
US8336053B2 (en) Transaction management
US12099416B1 (en) Apparatus for resolving automatic transaction facility (ATF) failures
Son Replicated data management in distributed database systems
US7228455B2 (en) Transaction branch management to ensure maximum branch completion in the face of failure
EP0817019B1 (en) Method of stratified transaction processing

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase
Ref document number: 200780033678.8
Country of ref document: CN

121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 07803467
Country of ref document: EP
Kind code of ref document: A1

WWE Wipo information: entry into national phase
Ref document number: KR
Ref document number: 1020097006387
Country of ref document: KR

ENP Entry into the national phase
Ref document number: 2009530828
Country of ref document: JP
Kind code of ref document: A

NENP Non-entry into the national phase
Ref country code: DE

WWE Wipo information: entry into national phase
Ref document number: 2007803467
Country of ref document: EP

WWE Wipo information: entry into national phase
Ref document number: 12443830
Country of ref document: US